By MICHAEL MILLENSON
Imagine a government program where private contractors boost their bottom line by secretly mining participants’ personal information, such as credit reports, shopping habits and even website logins.
It’s called Medicare.
This is open enrollment season, when 64 million elderly and disabled Americans choose between traditional fee-for-service Medicare and private Medicare Advantage (MA) health plans. MA membership is soaring; within a few years it’s expected to encompass the majority of beneficiaries. That popularity is due in no small part to the extra benefits plans can provide to promote good health, ranging from gym membership and eyeglasses to meal delivery and transportation assistance.
There is, however, an unspoken price for these enhancements that’s being paid not in dollars but in privacy. To better target outreach, some plans are routinely accessing sophisticated analytics that draw upon what’s euphemistically labeled “consumer data.” One vendor boasts of having up to 5,000 “certified variables for every adult in America,” including “clinical, social, economic, behavioral and environmental data.”
Yet while companies like Facebook and Google have faced intense scrutiny, health care firms have remained largely under the radar. The ethical issue is obvious. Since none of this sensitive personal information is covered by the privacy and disclosure rules protecting actual medical data, it is being deliberately used without disclosure to, or explicit consent by, consumers. That’s simply wrong.
But a more fundamental concern involves the analyses themselves.
The claims of predictive accuracy have never been subjected to public third-party scrutiny examining possible bias or even basic effectiveness. Since more than half of all Black and Hispanic Medicare beneficiaries already choose MA plans, that’s a flashing warning sign.
The human and financial stakes – the government pays MA plans some $350 billion annually – are high. The failures of transparency urgently need to be addressed.
A recent Federal Trade Commission (FTC) forum explored what’s been termed “surveillance capitalism.” FTC chair Lina Khan noted that Americans often “have limited insight into what information is being collected about them and how it’s being used, sold or stored.”
That’s particularly true here. Giant data brokers, privately funded startups and others are using artificial intelligence (AI) techniques to uncover both patient risk factors and the best ways to influence behavior. For instance, an affiliate of billionaire Richard Branson’s Virgin Group said its analytics showed that Philadelphia Eagles fans would be likelier to join a disease management program if they were contacted by text rather than email.
The for-profit mining of consumer data for health purposes is a somewhat paradoxical outgrowth of public health research, which has long stressed the need to address so-called “social determinants of health” (SDOH). SDOH refers to the environment in which people are born, live, learn, work and play. Many health care organizations now use questionnaires to try to discover who has SDOH issues that might make them more vulnerable to later developing expensive medical problems.
But questionnaires are often completed partially, inaccurately or not at all. The data mining mavens believe they’ve found a better and more scalable solution. Because MA plans are paid a flat rate per member, effective SDOH interventions can yield both better health and a healthy return on investment. Moreover, the health systems and physician groups that actually provide care are increasingly signing contracts that reward wellness, both for Medicare patients and others. When you add in the renewed national attention to health equity, the result is an SDOH industry worth $18.5 billion as of July 2021, according to one estimate.
While it’s difficult to identify which organizations use the data and how, specifics sometimes slip out.
At a 2019 Department of Health and Human Services seminar, a physician executive at a New York City health system explained how his group applies AI to information gathered from the electronic health record mixed with commercial data.
“For instance, if people don’t live near a bus stop or subway station and haven’t purchased an oil change or wiper blades, we can reach out to ask questions [about mobility],” said the system’s head of population health. That conversation required discretion, he added, since revealing why someone was contacted “would be creepy.”
A Humana slide from that same seminar showed that its Grandkids-on-Demand program, which provides companionship and assistance to lonely seniors, was in part enabled by “consumer information from an external vendor.”
Meanwhile, United Healthcare’s Optum group has said it uses consumer data to “close gaps in care and reduce medical costs.” Separately, an Optum algorithm was identified in 2018 as being unintentionally biased against Black patients.
Humana and United enroll nearly half of all MA members, and in many U.S. counties control at least three-quarters of MA enrollees, according to the Kaiser Family Foundation.
An overwhelming 81 percent of Americans believe they have little or no control over the data companies collect on them, according to a Pew Research Center poll. So what should be done about this secret health care surveillance?
Government regulators could move to mandate transparency, but there’s a simpler path. United’s market-leading MA share has been powered by its long affiliation with AARP. As a senior advocacy group, AARP should immediately demand that United, and all MA plans, disclose their consumer data use. Perhaps that would prod insurers and providers to treat those in their care as genuine partners, not objects.
The Centers for Medicare & Medicaid Services should similarly ask MA plans publicly to disclose. That call for “voluntarism” could be echoed by the members of Congress who introduced bipartisan legislation to strengthen data privacy and security.
But beyond disclosure, the government should demand that researchers be allowed to examine the assertion that the data miners are providing predictive accuracy without bias. This is crucial, and it can be done while protecting intellectual property rights. As one researcher put it, “We have to make sure this pays off both for the health care system and the patient.”
That’s exactly the right standard. I believe “big data” could provide a genuine leap forward in finding and helping individuals whose health is at risk. But good intentions are not good enough to protect consumers. Health care decisions that rely on secret information, secretly used, are a risk vulnerable Americans should not have to take.
Michael Millenson is a consultant specializing in quality of care, patient empowerment and web-based health. He is President of Health Quality Advisors, and an adjunct associate professor of medicine at Northwestern University’s Feinberg School of Medicine.