How many of us have read and know – really know – what we are agreeing to when we sign user agreements to set up accounts with Google, Fitbit, Apple, and the like? Who is making health-related decisions using my Internet search history and subscription information? Did I grant them permission to use my data? What standards are organizations following to determine whether they should use my data?

These questions were at the front of our minds at this year’s Health Datapalooza, as virtually every speaker expressed excitement about new technologies that leverage health data. In each session, we heard presenters and attendees alike discussing what new data were available and how various parties could use them, but not always how these data should be used.

In the presentation “AI, Blockchain, Machine Learning (ML), IoT…from Buzzwords to Reality in HealthCare,” Jules Polonetsky alluded to issues that might arise from integrating machine learning (ML) into healthcare. Mr. Polonetsky shared an anecdote about a client that developed an ML model to distinguish pictures of huskies from pictures of wolves. The model initially performed with a high degree of accuracy, but over time its errors steadily increased: because huskies often appeared in snow in the training images, the model had learned to wrongly designate every animal pictured in snow as a husky. Similar algorithmic flaws that lead to attribution errors are particularly disconcerting in healthcare, where life-or-death decisions might be at stake. For example, if an ML model determines that any patient enrolled in the Dunkin' Donuts rewards program is ineligible for inclusion in a diabetes management intervention, it may exclude some individuals erroneously (perhaps a patient buys a dozen donuts every week for a staff meeting at work but never eats a single donut).
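The failure in Mr. Polonetsky's anecdote is what machine learning researchers call a spurious correlation: the model keys on a feature that happens to travel with the label rather than on the label's true cause. The short Python sketch below is a toy illustration of that mechanism, not the presenters' actual system; the features ("snowy background," a weak "morphology" score) and all of the numbers are made up for demonstration purposes. It shows a model scoring well while the snow-to-husky correlation holds, then collapsing toward chance once it breaks.

```python
# Toy sketch of spurious-correlation failure (hypothetical data, not the
# system described in the talk). Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training data: huskies (label 1) appear in snow 95% of the time, wolves
# (label 0) only 5% of the time. A second, genuinely animal-related
# feature is only weakly informative, so the background dominates.
y_train = rng.integers(0, 2, n)
snow = rng.random(n) < np.where(y_train == 1, 0.95, 0.05)
morphology = y_train + rng.normal(0, 2.0, n)  # weak true signal
X_train = np.column_stack([snow.astype(float), morphology])

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))  # looks great

# Deployment data: the correlation breaks -- wolves now appear in snow as
# often as huskies. Accuracy collapses toward chance because the model
# learned the background, not the animal.
y_test = rng.integers(0, 2, n)
snow_test = rng.random(n) < 0.5  # snow is no longer predictive
morph_test = y_test + rng.normal(0, 2.0, n)
X_test = np.column_stack([snow_test.astype(float), morph_test])
print("deployment accuracy:", model.score(X_test, y_test))  # much lower
```

In healthcare terms, "snow" could be any proxy variable, such as a rewards-program membership, that stands in for the behavior a model is actually meant to measure.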

Similarly, the panel “Real World Evidence for Patients and Consumers” discussed the importance of context in data. For example, “dirty data,” whose source or provenance is unclear, may come with skeletons that make entities using large data sets reluctant to disclose where their data come from or why they are using them. This lack of transparency may be particularly disconcerting when patients find out providers and other entities are using their social determinants of health data, as one audience member pointed out during the presentation “Collecting and Using Social Determinants of Health Data to Improve Patient Care.”

To respond to these types of challenges, we are working as a research team (along with Eldesia Granger) from The MITRE Corporation, a not-for-profit organization that connects people and data to reinvent the health experience for a safer world. In partnership with subject matter experts from the University of Maryland, we are developing an ethical and policy framework to guide the use of consumer-generated data (CGD) in healthcare.

CGD (e.g., wearables data, social media use, Internet searches, buying behaviors, and memberships) has been shown to improve forecasting of health outcomes, risks, and healthcare utilization. However, a lack of ethical standards for CGD use in healthcare may harm patient privacy and autonomy, disrupt the patient-provider relationship, and cause other negative impacts.

As Datapalooza highlighted, many patients are unaware that healthcare organizations are using their data, including CGD. Moreover, federal privacy laws, such as HIPAA, do not cover CGD use in healthcare. Therefore, establishing ethical standards to guide CGD use in healthcare may help prevent harm to individuals and populations.

MITRE’s team of clinicians, lawyers, ethicists, policy analysts, and health communication specialists is working to assess: (1) whether CGD use in healthcare is ethical; (2) what ethical considerations and constraints should inform its use; and (3) how health providers, systems, and payers might best determine when to use CGD in an ethical manner.

To learn more about our research project, connect with our research team at https://health.mitre.org/contact-us/.

An ethical framework would hold providers, healthcare systems, and payers more accountable for what data they use and how they use them; help them use CGD in a more critical and objective manner to safeguard patients from negative impacts; and make us all feel a little safer knowing we have better control over the data our devices collect during our next workout.

The authors' affiliation with The MITRE Corporation is provided for identification purposes only, and is not intended to convey or imply MITRE's concurrence with, or support for, the positions, opinions or viewpoints expressed by the authors.

The opinions expressed in this blog post are the authors' own and do not necessarily reflect the view of AcademyHealth.


Author

Susan Mbawuike

Health Communication Scientist - MITRE

Susan Mbawuike is a health communication scientist in MITRE’s Health Systems and Strategy Department.

Author

Jessica Skopac

Health Policy Analyst - MITRE

Jessica Skopac, PhD, JD, MA, is a Health Policy Analyst with 12 years of experience in healthcare.
