President Biden’s AI Executive Order is an Imperfect but Needed First Step for Future AI Development


President Biden’s AI Executive Order puts the U.S.’s first stake in the ground for serious artificial intelligence regulation, but as the old saying goes, “You’ve got to start somewhere.”

President Biden’s Executive Order on AI represents a pivotal moment in the intersection of technology and governance, aiming to position the United States at the forefront of developing safe, secure, and trustworthy artificial intelligence (AI). This directive not only sets new standards for AI safety and security but also underscores the importance of protecting American privacy, equity, and civil rights. It addresses the need for responsible AI use across various sectors, including healthcare and education, while supporting workers in navigating AI-driven market changes. Notably, the order also highlights the role of small businesses in AI development, a point that resonates with the insights of Igor Jablokov, CEO & Founder at Pryon.

Jablokov’s perspective, as an experienced AI practitioner whose technology underpins Amazon Alexa, provides a crucial understanding of the practical implications of these new standards. Jablokov has been at the forefront of AI research and development for years; he previously founded AI industry pioneer Yap, whose inventions were later built upon to develop Alexa, Echo, and Fire TV.

His views on the balance between innovation and regulation and the inclusion of diverse players in the AI field offer a nuanced look at the challenges and opportunities presented by President Biden’s AI Executive Order. As AI continues to reshape the world, Jablokov’s thoughts provide a window into the future of AI development and its impact on society.

Igor’s Thoughts

“I think a lot of people were pleasantly surprised with the output because it essentially gave a stopwatch to all of these different agencies trying to figure something out in terms of what regulation looks like under their purviews over the course of the next year or so. It talks about adopting AI. It talks about educating AI. It talks about hiring personnel that understand it. It talks about understanding some of the underlying technologies that are part of it.

AI is full of carnival barkers and drama llamas. And for us old-guard practitioners, I mean, we’re rolling our eyes almost every day when they talk about AGI and things of that sort; they’re missing, you know, key areas where it could be used in service to us as an augmented intelligence. And the reason why people are wary and feeling like they need to regulate it a bit faster than they’ve been able to regulate things like social media is because we saw, you know, how disaster-prone social media is with election interference and stuff with the vaccines, all sorts of other forms of disinformation and things of that sort, and they’re trying to get ahead of it.

So, I commend them on the attempt. It was a solid attempt. It’s pretty good scaffolding for all the things that need to be built on top of it. And I do think there’s going to be continuity in terms of it being a bipartisan effort because everybody’s concerned in terms of what this, you know, means to us at work, what it means to us being governed, what it means to us being educated and informed. And so, there’s going to be a lot of attention paid to it.”

The Executive Order emphasizes the protection of privacy and civil rights in the context of AI development. Given your background in creating AI solutions, what are your thoughts on the practical challenges and potential solutions for ensuring AI respects and enhances these fundamental rights?

“What we’ve experienced in the last year is lions pretending they’re vegetarians. Everybody talks a big game about how they’re working on ethical AI, and yet they’ve built platforms that hallucinate and literally, you know, publish falsehoods that can’t tell you what source data they used. In some cases, they’ve ingested toxic content or copyrighted content, you know, from elsewhere. These are all taboos.

Again, practitioners knew about and were against these things. So, as a practical matter, it was more these large-scale research institutes competing with one another. When you think of the OpenAIs versus the DeepMinds and things of that sort, they threw caution to the wind. And yes, they made some discoveries because they did a bunch of things that everybody knew were, you know, a gray area in terms of alignment problems and things of that sort, and not, you know, taking in certain types of content.

Now they’re asking for forgiveness because they certainly didn’t have the permission to do such things as well. And so, I think that’s part of, you know, what everybody’s witnessing. Now, we have to be careful with regulation because these very same entities are going to try to bind us into a certain glide slope, which is this: enough regulation to keep out competition, but not enough that they don’t have freedom of action. And so, this is something that we’ve seen in banking and other sectors as well, where large entities actually start welcoming, you know, regulation. This is why they’ve been on Capitol Hill, you know, rather often, because they feel like it’ll be of a style that will essentially secure their businesses and kind of show them, hey, look, we’re following the rules of engagement, but at the same time, they’re creating a moat, you know, for new entrants to not come in and disrupt, you know, markets that they want to secure for themselves. And so that’s something that we have to be wary of. And that’s why the administration even put in language on the participation of small businesses in this new technical world as well.

As a technologist, I get the chaotic, you know, iterations, but these things are also being released at scale and used to, you know, monkey with elections and things of that sort. So, there can’t be chaotic iteration when it comes to preserving our way of life, our health, and what’s happening on a global scale. You know, if you’re going to do it in a microcosm with academic partners, national labs, and things of that sort, I get it. You know, go ahead, throw caution to the wind, and try to push the boundaries of science. It’s a totally different thing to push it on a, you know, on a global scale and make it available for API-level access and say, oops, sorry, you know, some bad cyber actors used this to breach firewalls at your medical records site and things of that sort. That’s wildly inappropriate. I mean, that’s absolutely, you know, nonsense. And they know they can guardrail these things, but again, all of these folks try to get to scale.

Why is it that social media platforms don’t push out and cancel accounts for a mess of folks who are tormenting us or sowing discord and disinformation? Because they don’t want to do anything to reduce their audience counts, so they can, you know, charge advertisers as much as possible based on their reach. Same thing here. If these folks are getting paid on API-level access to this platform, they’re going to do the Pontius Pilate thing and wash their hands and say, oops, you know, you’ve got to regulate the application, not the base foundation model, and things of that sort as well. That’s not so for Pryon’s platform. We get to choose, you know, who accesses our work. We’re not going to be washing our hands of what the applications built on our platform do.”

The Executive Order calls for increased federal support for AI research and development. What areas of AI would benefit most from this support, and how might this influence the global competitiveness of the U.S. in AI?

“A couple of obvious areas for public-private partnerships and good use of funds would be everything relating to energy efficiency. So that would be one way. That way, these things don’t turn on us and essentially start using a bigger piece of the energy grid than they otherwise should as more applications become enhanced with cognitive abilities. That area is capital-intensive in terms of net-new semiconductors and is the sort of thing that would benefit from a capital infusion.

The other side is all the governance and security stuff, right? So, think of what we have now in DevSecOps, MLSecOps, and things of that sort. This is where there typically tends to be a lot of interesting IP that comes out of the national labs that, you know, requires some coordination and acceleration to get commercialized as well. So that would be another place that I would look.

So, all manner of security technologies related to this to harden the ecosystem, and then everything having to do with energy efficiency. For the other stuff, the commercial markets can find their own way because it’s in their best interest to drive accuracy. It’s in their best interest to drive scale and things of that sort.

But right now, the thing that even the largest providers aren’t sharing is that they’re running out of AI accelerators. No matter how fast they’re racking and stacking, you know, all manner of accelerators, whether it’s GPUs or their own home-built assets, it’s not enough. And they’re not charging people the actual cost because they’re trying to get everybody hooked on their products and things of that sort. You’re not paying market prices for access to these things yet. Now, either the prices will go up, or we have to find, you know, more energy-efficient alternatives so that you can keep paying the prices that you’re seeing today.”

Article by James Kent
