
The deleterious mental health and wellbeing impacts of social media, driven by artificial intelligence (AI) technologies for ranking and curation, are increasingly recognized in the public health community. Several public health policies are being pursued to mitigate these impacts, which may have been exacerbated by the COVID-19 pandemic, as well as to achieve broader goals of reducing loneliness. Yet there is a new AI technology on the block: generative AI. ChatGPT captured significant attention when it launched in November 2022 and reached 100 million monthly active users within only two months; TikTok took nine months to reach that level. Rather than just curating content like the AI in social media, generative AI creates content that engages people directly. Black-box generative AI services like ChatGPT are becoming widely used to create social media content and in many other ways. One might wonder what the impacts of generative AI will be on health and wellbeing.

Since large language models (LLMs) such as ChatGPT and similar neural network-based AI models are black-box, they are largely outside people’s direct control, offering only a limited ability to steer them through prompting. As such, they may reduce human autonomy. Although it is still early days, it is clear that even simple autocomplete changes the way people write, with writing becoming more succinct, more predictable, and less colorful. Further, mixed-method investigations have found that AI use at work leads to more loneliness, insomnia, and alcohol use, and may also lead to self-esteem threat and burnout. There is even growing evidence that youth are using general-purpose chatbots for mental health therapy despite these tools not being designed for such purposes, and that the chatbots often steer youth into such conversations from other starting points.

Despite respect for human autonomy being a key part of standard biomedical ethics frameworks and being mentioned in several sets of AI ethics principles, it has largely been set aside in the development and deployment of black-box generative AI. In addition to the human rights perspective that underlies many discussions of human autonomy, one can also approach human autonomy from a happiness perspective. Indeed, self-determination theory in positive psychology posits that autonomy, together with connectedness and mastery, is key to human wellbeing (with autonomy being most important). One might wonder whether there are technology or health policy approaches that might increase human autonomy in the age of AI rather than reduce it.

On the technology side, there are emerging AI approaches that are white-box rather than black-box and that strongly support human engagement with AI and with other people in creative and generative activities. Rather than being inscrutable and nonintuitive like neural networks, such AI algorithms are directly human-understandable and human-controllable. One example is information lattice learning, which makes it much easier for people to compose authentic music in a way that respects both their autonomy and the intellectual property of others.
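To make the white-box idea concrete, here is a minimal toy sketch in Python of generation constrained by explicit, human-readable rules that a person can inspect and edit directly. It illustrates the general white-box principle only; it is not the information lattice learning algorithm itself, and the rule names and musical values below are hypothetical.

import random

# Human-readable rules: each is a named predicate on the candidate next note.
# A person can read, edit, add, or remove any of these directly.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale in MIDI pitches

def in_scale(prev, nxt):
    return nxt in SCALE  # stay within the chosen scale

def small_leap(prev, nxt):
    return abs(nxt - prev) <= 5  # move by at most a perfect fourth

def no_repeat(prev, nxt):
    return nxt != prev  # avoid repeating the previous note

RULES = [in_scale, small_leap, no_repeat]

def candidates(prev):
    # Every candidate is fully auditable: it satisfies every named rule.
    return [n for n in SCALE if all(rule(prev, n) for rule in RULES)]

def compose(start=60, length=8, seed=1):
    # Generate a melody; each choice is explainable by the rules above.
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(candidates(melody[-1])))
    return melody

print(compose())  # prints an 8-note melody that satisfies all three rules

Because the rules are explicit, the person retains autonomy: they can see exactly why each note was allowed and can change the rules to suit their own style, in contrast to indirectly prompting a black-box model.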

On the policy side, too, there are many possibilities, and one can think of the introduction of generative AI as a second bite at the apple, when the first bite, AI-driven social media, may have gotten away from policymakers. Indeed, with this lesson in mind, national security policymakers are aiming to act now, while generative AI is still relatively nascent and some regulations can be imposed by design rather than as later add-ons that are less effective and more costly. One example is the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence put forth by the Biden-Harris Administration in late 2023.

Health policy could take this lesson of quick action from national security policy. Several steps could be developed:

  • Expert convenings that establish science-based evidence for policymaking on the health and wellbeing impacts of black-box generative AI;
  • Health labels and advisories that build on such evidence and on transparency requirements, such as the fact sheets and “nutrition labels” called for in the AI Executive Order;
  • Health regulations aligned with the rights-impact (human rights view of autonomy) and safety-impact (self-determination theory view of autonomy) approach to policymaking laid out in the Office of Management and Budget memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence; and perhaps eventually even a
  • Narrowly crafted excise tax on black-box generative AI, strategically coordinated with policies motivated by information environment, intellectual property, environmental, and national security concerns.

Although AI carries risks, now is the time for technology and policy evolution to ensure that we not only survive this powerful class of technology, but also thrive at the greater levels of human wellbeing that might emerge with white-box AI operating under appropriate policy frameworks.

Presenter

Lav Varshney, S.M., Ph.D.

Associate Professor, Electrical and Computer Engineering - University of Illinois Urbana-Champaign

