Thank you, Chairwoman Stevens, for holding today’s hearing on this important issue.

And thank you to our distinguished panel of witnesses for joining us here today. Artificial intelligence is fundamentally changing the way we solve some of our society’s biggest challenges.

From healthcare to transportation, commerce to cybersecurity, A.I. technologies are revolutionizing almost every aspect of daily life. But with every new and emerging technology come new and evolving challenges and risks. Over the years, the Science Committee has held several hearings on A.I., discussing challenges ranging from ethics to workforce needs.

I hope we can use today’s hearing as an opportunity to further these important discussions and to shed light on the importance of enabling safe and trustworthy A.I. To do that, we first have to define what makes A.I. safe and trustworthy. I believe our witnesses can help us with that today.

But in general, I think we can agree that safe and trustworthy A.I. will meet certain criteria, including accuracy, privacy, and reliability. Additionally, it is important that trustworthy A.I. systems rely on robust data while also protecting the safety and security of user data.

Other important attributes of trustworthy A.I. include transparency, fairness, accountability, and the mitigation of harmful biases. These attributes are particularly important to keep in mind as these technologies are deployed for use in our daily lives.

It is also critical that the data used by A.I. technologies be accurate, because input data is the foundation of any A.I. system. That must be our general goal: transparent and fair A.I. built on accurate data and strong privacy protections.

We can ensure that by having standards and evaluation methods in place for these technologies. The integration of trustworthy A.I. into key industries has the potential to be a significant competitive advantage for U.S. industry. A.I. and other industries of the future, like the quantum sciences, can revolutionize how businesses and economies operate by improving efficiency, expanding services, and integrating operations. The key to these benefits, of course, is the trustworthiness of A.I.

Here in Congress, Members of the Science Committee introduced the bipartisan National Artificial Intelligence Initiative Act of 2020, which was signed into law as part of the FY21 National Defense Authorization Act. This legislation created a broad national strategy to accelerate investments in responsible A.I. research, development, and standards, as well as education for the A.I. workforce. It also facilitated new public-private partnerships to ensure the U.S. leads the world in the development and use of responsible A.I. systems.

Related to today’s hearing, this initiative required the National Institute of Standards and Technology (NIST) to create a framework for managing risks associated with A.I. systems and best practices for sharing data to advance trustworthy A.I. systems. As a leader in A.I. research, measurement, evaluation, and standards, NIST has been developing its voluntary A.I. Risk Management Framework since last July. The framework has been developed through a consensus-driven, open, transparent, and collaborative process, with multiple workshops for industry to provide input.

I look forward to hearing from Ms. Tabassi about the progress NIST is making in implementing this directive and finalizing this important guidance. I believe the A.I. Risk Management Framework will be a critical tool for industry to better mitigate the risks associated with A.I. technologies and to promote the incorporation of trustworthiness into every stage of A.I. technologies, from design to evaluation.

I am also looking forward to hearing from the U.S. Chamber of Commerce to learn more about its work through the Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, and how it is working to help build consumer confidence in A.I. technologies.

I want to thank our witnesses again for their participation.

Madam Chair, I yield back.