
Nick White

Data Strategy Director, Kin + Carta

It’s Impossible to Regulate AI in a Vacuum. It’s Inextricably Linked to People and Their Existing Laws.


Is it possible to regulate AI and put limits on this seemingly boundless technology? Senate Majority Leader Chuck Schumer has announced nine “AI Insight Forums” this fall as a first step toward understanding such an overwhelming and revolutionary technology. The initiative aims to educate members of Congress on topics ranging from copyright to national security and the impact of AI on democracy. The objective is clear: to create regulations that prevent undesirable outcomes and promote beneficial ones. This is a complex task that requires regulators to be well-versed in the technology themselves, rather than relying solely on industry experts who might have vested interests.

The existing legal framework provides a starting point, but legislators and industry leaders must also identify and address the most pressing needs, particularly those that could cause harm. For example, one of Schumer’s major concerns is the potential for deepfakes to undermine democracy.

Ultimately, it’s crucial to remember that this technology is not an isolated entity; it’s deeply intertwined with people, organizations, and governments. The challenge of regulating AI isn’t just about understanding the technology, but also about comprehending its societal implications and potential risks.

Nick White, Data Strategy Director at Kin + Carta, helps bridge the gap between technology and policy, offering his perspective on how to navigate this uncharted territory.

Nick’s Thoughts:

“As regulators think about how they are going to regulate AI and create a sustainable framework, it has to start with them understanding the technology and how it relates to people and process and things that are very important.

So first, they need to have an understanding. They cannot rely on industry experts that could, you know, gain something from the regulations that get made. From there, what are the objectives? Be very clear on these are the outcomes we want. These are the outcomes we don’t want. And make sure that those are guiding stars. Another thing to think about is this stuff is, it is a technology, but it involves people, organizations, government, involves everybody. So look at the laws that exist. How do these relate to the existing laws? And really work from there.

And then, of course, where do you start? Like, what are the most pressing needs? Start with things that are going to cause harm to people. That is the most important. From there, understanding the risks and where there is low risk and high risk, you will ultimately start creating regulations and start creating laws that actually have a positive impact and contain this and enhance people’s lives like AI should.”
