In recent months, a flurry of policy doctrines has been staked out by government, technology, economic, and R&D leaders who aim to shape the future societal and economic impact of artificial intelligence (AI). As is the case for most disruptive technologies, the pace of discovery, development, and application of AI has outpaced the evolution of governing and guiding policies. For AI, the societal and economic drivers are now taking center stage and creating the momentum for action.

Recent domestic and global actions set a course for advancement of AI

Beginning in February, the White House issued an executive order that set the high-level framework for U.S. government-wide coordination and implementation by the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence. Two of the first steps were a request for information on AI standards and a public meeting in May coordinated by the National Institute of Standards and Technology (NIST). This input will be used to inform a federal-wide plan for technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies. More agency-level public engagement in other domains is anticipated in the months to come.

On the international front, the Organisation for Economic Co-operation and Development (OECD) Council of Ministers approved recommendations to promote innovation and trust in AI through responsible stewardship that ensures respect for human rights and democratic values. Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, this new doctrine sets reference concepts that are implementable and sufficiently flexible to stand the test of time in this rapidly evolving field. The recommendations frame five values-based principles for the responsible stewardship of trustworthy AI on the following themes:

  • promoting inclusive, sustainable growth and well-being;
  • human-centered values and fairness;
  • transparency and explainability;
  • robustness, security and safety; and
  • accountability.

In addition to and consistent with these values-based principles, the OECD also provides five recommendations to policy-makers:

  1. investing in AI research and development;
  2. fostering a digital ecosystem for AI;
  3. shaping and enabling policy environments for AI;
  4. building human capacity and preparing for labor market transformation; and
  5. international co-operation for trustworthy AI.

Coinciding with the OECD actions, U.S. government endorsement and actions were announced in Paris by Michael Kratsios, Deputy Assistant to the President for Technology Policy and U.S. Chief Technology Officer, during a keynote speech at the OECD Forum and Ministerial Council meeting in May. Following these actions, the G20 Trade Ministers and Deputy Digital Economy Ministers met in Japan on June 9, endorsed the OECD framework, and issued a supporting statement.

This week, I asked Mr. Kratsios to reflect upon the impact of these developments and he remarked that “for the first time in history, America and likeminded democracies of the world will commit to common AI principles reflecting our shared values and priorities. These principles send a strong message – that the OECD countries stand together in unleashing AI innovation, understanding that it is an essential tool to drive economic growth, empower workers, and lift up quality of life for all.”

Digital health & health policy experts weigh in on AI policy directives

Tim Kelsey, Chief Executive of the Australian Digital Health Agency, shared his views on implementation strategies. “The OECD principles for AI very much resonate with the core values of digital health implementation itself – putting data and technology best to work for patients and citizens. Digital technologies can empower people to take more control of their health and wellbeing - when they wish to, support social inclusion and equity of access to health care, as well as the quality and financial sustainability of improved health outcomes.” Anticipating that digital health initiatives will be a gateway for AI integration in health care, Mr. Kelsey added, “digital health is a priority for governments around the world – the work of the Global Digital Health Partnership, for example, underscores the importance of international collaboration in the design of digital services to support global health and wellbeing – these are the foundations of successful AI innovation.”

Similar to data policy movements led by the European Union and United Kingdom last year, these AI policies build upon aspects of the General Data Protection Regulation (GDPR), which was implemented locally but has had global impact on the health care industry. Here in the U.S., health care innovation and policy frameworks for programs aimed at efficiently and responsibly guiding AI are accelerating in federal R&D grants as well as in internal government operations designed to enhance overall government performance.

Ed Simcox, Chief Technology Officer and Acting Chief Information Officer at the U.S. Department of Health and Human Services, said: “In my office we are always looking for ways to leverage technology and innovation to improve health care. One initiative at the forefront of innovation is ReImagine HHS. The project is using AI to analyze large datasets in a variety of ways. ReImagine is using AI to streamline acquisitions, identify obsolete regulations, and identify fraudulent Medicaid payments.”


Key takeaways for researchers & policymakers

First, there is a need for mechanisms to promote sharing of methods and resources to enhance collaboration and accelerate progress in applying AI, with particular emphasis on international collaborations. Stakeholder and public input to these processes is crucial to avoid barriers and distrust. Second, we should anticipate greater calls for privacy and security measures, and for public engagement to strengthen confidence that scientific advances are being made in the interest of the public good. Undoubtedly, health care data will be the epicenter for these discussions. Third, there will be increased calls for cross-disciplinary training, such as leveraging STEM education programs, to help bolster the workforce needed to capture the potential value of AI. Finally, I encourage everyone to play an active role in the public forums emerging on specific domain applications of AI, and to take a lead in educating patients, collaborators, and institutional officials about the responsibilities that research and health care practices will demand.

There are widespread indications that machine learning, AI, the Internet of Things, biometrics, and advanced data technologies will become mainstream in the continuum from research to health care delivery. Despite a plethora of hyperbole and rhetoric, these technologies are now at the future’s doorstep, being deployed for advanced diagnostics, predictive analytics, and as tools to guide patients in navigating care experiences. My sense is that informed and open dialogue in the AI policy channels is key. Health care professional communities should use these high-level policy frameworks as an opportunity to engage and to be prepared for the consequences these technologies will bring.

Gregory Downing, D.O., Ph.D.
Committee Member
Founder - Innovation Horizons, LLC
