Thank you, Chairman Obernolte and Chairman Collins, for holding this hearing today. Artificial intelligence is a transformational technology, and it deserves thoughtful, informative discussions about any potential government action in this area.

As we discussed during our last hearing on AI, rushing to regulate this technology could have detrimental effects on our ability to innovate and maintain American leadership. We want to allow this technology to grow and advance, while also ensuring that we develop it in a safe and trustworthy manner that upholds American values of fairness and transparency.

That requires a measured approach, one which I’m proud to say our Committee has long embraced. From the time we passed the National AI Initiative Act, we’ve focused on the strategic development of AI and risk management standards.

A crucial part of that legislation was directing the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that could serve as a tool for the trustworthy design, development, use, and evaluation of AI.

I’m very pleased with the work NIST put into this product, which was developed with years of input from stakeholders, government, industry, and academia.

This represents the kind of approach we need to ensure AI remains safe, trustworthy, and fair.

Other international entities are moving forward with regulations as we speak. The European Union is taking a particularly prescriptive approach to managing the risks of AI.

While we cannot afford to fall too far behind other nations, I urge caution to anyone who would move forward too quickly with strict regulations here in the U.S.

We need to find a balance between giving the technology the freedom to grow and develop, and building a framework that helps us manage the potential risks of AI.

That’s why today’s hearing is so important. We can’t rush to regulate before we know what tools and resources we need in place to successfully approach this governance challenge.

AI is grabbing big headlines right now and I understand the impulse to make rushed decisions about how to manage it. But that leads us down unproductive paths. Our friends in the Senate, for instance, are choosing to make an end run around the normal committee process and hold confidential meetings with big tech companies instead. The White House, instead of steadily building on the smart risk management framework put out by NIST, is issuing reports, bills of rights, and executive orders somewhat haphazardly.

I’m not arguing that we shouldn’t regulate the development of AI at all, just that we should approach any new requirements methodically and openly so we can ensure the best possible outcome for Americans and American businesses.

I believe that if we approach this correctly, we will get this right and ensure the safe, trustworthy, and fair development of AI.

Thank you to our witnesses for joining us today, and I look forward to a productive discussion.