Imagine a computing platform that can read a healthcare organization’s emails, Word documents, PDFs, EHRs and text files at marvelous speed, distilling what it learns into a knowledge base, all without any help from you, the healthcare professional. Once the platform has compiled this information, you can ask it questions and develop a working theory based on the facts, associations and entities extracted from your enterprise data.
This is interesting on many levels, but what would such power mean for healthcare providers in the brave new world of data science?
Do not panic—at least not yet!
The approach that organizations take in the race to big data and beyond will certainly affect the level of panic. In my efforts to answer healthcare’s most challenging questions, I use machine-learning algorithms to help healthcare providers and health plans across the United States generate better outcomes clinically, financially and operationally. However, there are ethical issues to consider as we leverage machines within healthcare. In my judgment, creating pure, undirected “artificial intelligence” is not as desirable as creating “beneficial intelligence” designed to support the work of healthcare professionals.
Recently, the Department of Veterans Affairs, or VA, made a strategic investment in cognitive computing by forming a joint venture with the Department of Energy to develop healthcare data analytics, machine learning and artificial intelligence, or AI. In broad terms, the VA is using cognitive computing to improve patient outcomes and support population health management. What is meant by cognitive computing and how will it improve health outcomes?
Generally speaking, cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the workings of the human brain. The goal of cognitive computing is to create automated information technology, or IT, systems that are capable of solving problems without requiring human assistance.
It is the last part of that statement that may make executives, clinicians and coders anxious—and for good reason. After all, no one likes the idea of being replaced by a machine. Many healthcare professionals are certainly right to be guarded when vendors claim that data has great potential to create a more personal, automated, value-focused and productive healthcare system, while also potentially reducing the head count.
My interest in cognitive computing lies in how we leverage machine learning for support rather than singularity. I believe the goal should be creating a collaborative partnership between clinicians and technology.
See for yourself – data science in action
The VA has amassed a tremendous amount of patient data since its founding in 1930, so it is easy to see why it is interested in exploring how machine-based intellectual capabilities can be employed to take advantage of that information. The question remains, however: how do physicians fit into the cognitive computing infrastructure that is being put in place?
While the VA is just starting to dig into the hundreds of millions of data points at its disposal, there are already many examples of machine-learning applications that incorporate physicians’ insight and actions to support clinical decision-making rather than replace it.
At Intermedix, for example, we have created virtual grand rounds to accelerate the identification of sepsis within minutes of patient admission, registration or arrival. To do this, the AI in our Condition Awareness solution is trained on years of physicians’ diagnostic history. The AI learning process can be thought of as a virtual version of grand rounds, the medical teaching practice in which a patient’s condition is presented to a group of physicians, specialists and residents who share experiences and offer opinions on treatment. In the same way that physicians benefit from grand rounds, Intermedix’s AI learns from the site-specific diagnoses of physicians and applies what it learns across more than 2,000 related variables to quickly classify patients according to their risk for sepsis.
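To make the idea concrete, here is a minimal sketch of the general technique described above: fitting a simple risk classifier to historical, physician-labeled cases and then scoring a new patient. Everything here is hypothetical for illustration — the feature names, data and model are not Intermedix’s actual system, which uses thousands of variables and far more sophisticated methods.

```python
# Illustrative sketch only: a tiny logistic-regression risk classifier
# trained on hypothetical physician-labeled cases, then used to score a
# new patient. A real system would use thousands of variables and a
# vetted ML library rather than hand-rolled gradient descent.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(cases, labels, lr=0.1, epochs=2000):
    """Fit weights so the model mimics past physician diagnoses."""
    w = [0.0] * len(cases[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(cases, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss for this case
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Return a 0-to-1 sepsis-risk score for a new patient vector."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical binary features: [elevated heart rate, fever, high WBC]
history = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
diagnosed_sepsis = [1, 1, 0, 0, 1, 0]  # past physician diagnoses

w, b = train(history, diagnosed_sepsis)
print(round(risk(w, b, [1, 1, 1]), 2))  # high-risk presentation
print(round(risk(w, b, [0, 0, 0]), 2))  # low-risk presentation
```

The key point the sketch captures is the division of labor: the model only produces a risk score from patterns in past diagnoses; deciding what to do with that score remains with the clinician.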
Our solution is just one example of how AI can provide decision support. It supplies physicians with information based on what the AI has learned in its analysis, but does not dictate how clinicians should respond. The clinical team partners with the technology, but remains in control of what actions to take based on the information the tool provides.
You can learn more about how data science is helping physicians address sepsis in our latest webinar, available on-demand.