As Organizations Ask If They “Should” Launch AI Projects, A Risk Management System Becomes Essential

 

Four major US AI firms, Anthropic, Google, Microsoft, and ChatGPT creator OpenAI, have launched the Frontier Model Forum to address the risks of advanced AI and establish industry standards. The collaborative initiative emphasizes safety, security, and trust, with the companies committing to risk management tools such as watermarking systems that differentiate AI-generated content from human-produced content.

Companies know AI is not merely about integrating the latest algorithms or using vast data sets. At its core, it’s about making critical decisions that affect stakeholders, users, and the general public. That’s why, as AI weaves its way deeper into our daily operations, the real question companies should grapple with isn’t “Can I implement this AI?” but rather “Should I?”

As such, before diving headfirst into AI development, organizations need to assess the ethical dimensions, from bias and accuracy to transparency and data privacy. That’s where a risk management system comes into play. Mark Beccue, Research Director of AI at The Futurum Group, weighs in on how a risk management system can help a business set ethical standards for its AI practices.

Mark’s Thoughts:

“An organization shouldn’t be asking, ‘Can I do this AI?’ It really should be asking, rather, ‘Should I do this AI?’ And you look at that through the lens of: Does it make sense for our organization? What risks are we going to take?

Hi. I’m Mark Beccue. I’m the Research Director of AI at the Futurum Group.

Thinking about this question, it is interesting that some of the leaders in the hyperscalers space have come forward to volunteer some best practices. And I think the first thing we have to think about when you talk about recommendations is that any organization that’s looking at AI or working on AI right now should build their own AI risk management structure for their organization. And what I mean by that is you’re going to look for governance tools, AI governance tools. It starts with risk management, risk assessments. But it really should include an oversight team and really life cycle management.

Really, soup to nuts, how you do AI comes down to understanding the risks for your company. And, you know, that’s aside from what’s happening with standards. There are lots of risks involved, and you need to understand them.

When you look at AI risk management, another best practice is to think about the core areas of focus, which really revolve around AI ethics. Those are accuracy, which has to do with bias; transparency of AI; security, which also includes data privacy; and then fairness. Those are the core pieces of an AI risk management structure.

And there’s really another piece that you have to think about when you’re looking at AI going forward and how you’re going to look at standards, what best practices are. And it really comes down to a very simple question. An organization shouldn’t be asking, ‘Can I do this AI?’ It really should be asking, rather, ‘Should I do this AI?’ And you look at that through the lens of: Does it make sense for our organization? What risks are we going to take?

So, if you take that approach, the standards are going to come. It’s super early. There’s not a lot out there just yet that’s very specific to AI. They will come. The laws and the standards will come.

There are some protections within GDPR for organizations to think about here. That’s data privacy, which will cover a lot of AI things. And then there are some security issues, not so much in standards, but in what organizations do to vet the technology or applications they use, thinking about that from their own perspective of what they face with security.

The last thing I’ll leave is that there’s a really good resource for any organization that’s looking into this step: it’s called aiethicist.org. It has lots of free frameworks and resources from multiple organizations that you can look at to start thinking about how you’re going to set up your organization.”
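The structure Mark describes, risk assessments feeding an oversight team across the full life cycle, can start out very lightweight. Below is a minimal sketch in Python of what an internal AI risk register covering his four focus areas might look like; the class names, the 1-to-5 scoring scale, and the review threshold are illustrative assumptions, not part of any published framework.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskArea(Enum):
    """The four core focus areas Mark names for AI risk management."""
    ACCURACY_AND_BIAS = "accuracy_and_bias"
    TRANSPARENCY = "transparency"
    SECURITY_AND_PRIVACY = "security_and_privacy"
    FAIRNESS = "fairness"


@dataclass
class RiskAssessment:
    """One assessed risk for a proposed AI use case (fields are illustrative)."""
    area: RiskArea
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain), an assumed scale
    impact: int      # 1 (negligible) to 5 (severe), an assumed scale
    mitigation: str = ""

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


@dataclass
class AIUseCase:
    """A 'Should we do this AI?' record for the oversight team to review."""
    name: str
    owner: str
    reviewed_on: date
    assessments: list = field(default_factory=list)

    def requires_oversight_review(self, threshold: int = 12) -> bool:
        """Flag the use case if any single risk scores at or above the threshold.

        The threshold of 12 is an assumption; each organization sets its own.
        """
        return any(a.score >= threshold for a in self.assessments)


# Example: a proposed customer-support chatbot.
chatbot = AIUseCase(
    name="Customer support chatbot",
    owner="CX team",
    reviewed_on=date.today(),
    assessments=[
        RiskAssessment(RiskArea.ACCURACY_AND_BIAS,
                       "Model may give wrong answers about refund policy",
                       likelihood=3, impact=4,
                       mitigation="Human review of escalated conversations"),
        RiskAssessment(RiskArea.SECURITY_AND_PRIVACY,
                       "Customer PII could reach a third-party model API",
                       likelihood=2, impact=5,
                       mitigation="Redact PII before any external call"),
    ],
)

if chatbot.requires_oversight_review():
    print(f"'{chatbot.name}' goes to the oversight team before any build starts.")
```

The tooling matters less than the habit: every proposed use case gets the same written assessment and the same “should we” gate before development starts, which is exactly the lens Mark recommends.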
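On the accuracy, bias, and fairness side, even a quick disparity check over a model’s decisions can surface problems long before a formal audit. The sketch below compares selection rates across groups; the 0.8 cutoff mirrors the “four-fifths” rule of thumb from US hiring guidance and is used here only as an illustrative default, not as a standard that applies to AI systems.

```python
from collections import defaultdict


def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {group: approved[group] / totals[group] for group in totals}


def has_disparity(rates, min_ratio=0.8):
    """Flag if any group's rate falls below min_ratio of the highest rate."""
    highest = max(rates.values())
    return any(rate / highest < min_ratio for rate in rates.values())


# Toy data: loan-approval decisions labeled by applicant group.
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = selection_rates(decisions)
print(rates)                 # {'group_a': 0.8, 'group_b': 0.55}
print(has_disparity(rates))  # True, so the use case goes back to the oversight team
```

A check like this is a first-pass signal, not a fairness guarantee, but it gives an oversight team something concrete to review.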

Article written by Cara Schildmeyer.

