AI Policy Congress – Part 3: Healthcare

2019-02-07 - 8 minute read

Written by Willie Boag

Improving Healthcare with AI Technology

During her research presentation on bringing AI to the healthcare system, Professor Regina Barzilay noted many opportunities to improve the critical medical decisions caregivers make daily. For the past three years, Dr. Barzilay has been applying Computer Vision and Natural Language Processing techniques to identify dense tissue and other risk factors for breast cancer. She found that with the right data and training, ML models can be faster and more consistent than doctors. In addition, these models need not be constrained by categorizations developed around human cognitive limitations. They could also serve as more robust and fair predictors than standard clinical models, which are often developed and validated on non-representative subpopulations, leading to disparate effectiveness across racial and ethnic groups.

Regina Barzilay discusses AI in healthcare. Photo by Leilani Gilpin.

Evaluating AI in the Healthcare Sector

After her presentation, Barzilay participated in a panel discussion with Tom Price, Former U.S. Secretary of Health and Human Services (HHS), and Jason Furman, Professor at the Harvard Kennedy School, that was moderated by MIT Sloan Professor Simon Johnson.

Despite the benefits of AI, the healthcare industry has been reluctant to adopt it. The panel discussed reasons for this: concern for patient safety, a highly regulated environment, and the need to address complex issues that sit at the intersection of technology and policy. For example, the medical community is currently debating at what age women should start having mammograms for early breast cancer detection. Although nearly everyone agrees that it is useful to detect cancers at an early stage, opponents of early-age mammography argue that false positives cause a great deal of unnecessary anxiety and cost. Limitations in current technology add another dimension to this policy issue, but advances in technology may also help: if AI could drive down the false positive rate, it could give policymakers better options to choose from.
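To make that trade-off concrete, here is a minimal sketch with entirely hypothetical screening numbers (they are illustrative assumptions, not figures cited by the panel) showing how the false positive rate translates into unnecessary callbacks per screening round:

```python
# Hypothetical illustration of how specificity affects unnecessary callbacks.
# All numbers are illustrative assumptions, not data from the panel or any study.

women_screened = 1000          # one screening round
cancer_prevalence = 0.005      # assume 5 cancers per 1000 women screened

healthy_women = women_screened * (1 - cancer_prevalence)

for specificity in (0.90, 0.97):  # a current reader vs. a hypothetically better one
    false_positives = healthy_women * (1 - specificity)
    print(f"specificity {specificity:.0%}: "
          f"~{false_positives:.0f} healthy women recalled unnecessarily")
```

Under these made-up assumptions, raising specificity from 90% to 97% cuts unnecessary recalls from roughly 100 to roughly 30 per 1,000 women screened, which is the kind of shift that could change the policy calculus around early screening.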

Many medical professionals are also concerned that AI systems are not explainable enough to be trustworthy. The panel considered trade-offs related to the cost and requirements of explainability. Are we limiting the accuracy of AI systems and holding them to higher standards than necessary? How much explainability do we expect from medical professionals or other medical technology? To prove efficacy, perhaps a system modeled on the pharmaceutical industry's randomized controlled trials, testing outcomes (e.g., whether the patient survives) rather than process (e.g., whether the recommendation is explainable), would result in better, albeit less explainable, care. Although some researchers have tried taking health AI models to clinical trial, most projects do not reach that step.

Accessing Data May Be More Challenging in the U.S. than Internationally

By far, the most impassioned part of the panel discussion was about data sharing. Better AI tools have the potential to save tens of thousands of lives per year. So, what — according to the panel — is holding us back? Access to data to train and develop these systems. Data privacy is a longstanding policy priority in medical and health contexts, but it is not clear that this confidentiality can be maintained and protected in AI systems. Laws like HIPAA regulate health data privacy and strongly punish breaches. HIPAA compliance is often cited, perhaps out of an abundance of caution, as a barrier to research-based data sharing.

The “How AI is Changing Healthcare” panel listens to a question. Photo by Leilani Gilpin.

During the Q&A portion, many questions aimed to contextualize AI for U.S. healthcare against the backdrop of other nations. What lessons can the U.S. learn from how the European Union's General Data Protection Regulation (GDPR) allows the use of Personally Identifiable Information (PII) in specific areas, such as research? Is the U.S. at a disadvantage compared to other nations with more centralized healthcare systems — such as the United Kingdom — which can more easily share data with researchers? Furman believes that, “we need more AI. We need more data to have more AI,” and that there should be a conversation about the costs and benefits of how U.S. privacy laws impede innovation.

“How many people do you want to die? How many diseases do we not want to address?” said Furman. The panel agreed that using de-identification tools could reduce the privacy risks associated with sharing data. Some members of the audience questioned whether de-identification could ever be perfect, or even adequately reliable in the sensitive context of health data. The panel acknowledged that no de-identification is 100% accurate, but wondered if there could be a threshold at which we would be willing to make a trade-off.
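For readers unfamiliar with what de-identification typically involves, below is a toy sketch in the spirit of HIPAA's Safe Harbor approach: removing direct identifiers and coarsening quasi-identifiers. The field names and rules are illustrative assumptions, not any panelist's tool, and real de-identification must address far more identifier types and re-identification risks than this:

```python
# Toy de-identification sketch: drop direct identifiers and coarsen quasi-identifiers.
# Field names and rules are illustrative only; real tools (and HIPAA Safe Harbor)
# cover many more identifier types and re-identification risks.

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "medical_record_number"}

def deidentify(record: dict) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "zip_code" in clean:                   # keep only the first 3 digits
        clean["zip_code"] = str(clean["zip_code"])[:3] + "**"
    if "age" in clean and clean["age"] > 89:  # pool very old ages into one bucket
        clean["age"] = "90+"
    return clean

patient = {"name": "Jane Doe", "ssn": "000-00-0000", "age": 93,
           "zip_code": "02139", "diagnosis": "breast cancer"}
print(deidentify(patient))  # {'age': '90+', 'zip_code': '021**', 'diagnosis': 'breast cancer'}
```

The audience's concern is precisely that sketches like this are never the whole story: even with direct identifiers removed, combinations of remaining fields can sometimes re-identify individuals.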

As the benefits of AI continue to grow, the pressure to have this cost-benefit discussion will likely follow. In fact, last month HHS released a Request for Information (RFI) about a related issue: whether HIPAA presents undue regulatory burdens for coordinated, value-based care. There are still a few weeks left for public input, and there is no consensus yet: some patient advocates are concerned that the proposed revisions will not achieve the stated benefits. However, if AI proves up to the task of improving existing care, then we may see more conversations like this in the future.

Learn More About the MIT AI Policy Congress

This post is part of a series on the first MIT AI Policy Congress, edited by Grace Abuhamad. Read the rest of the series on the IPRI Blog.
