As Organizations Ask If They “Should” Launch AI Projects, A Risk Management System Becomes Essential

Four major US AI firms, Anthropic, Google, Microsoft, and ChatGPT’s creator OpenAI, have launched the Frontier Model Forum to address the risks of advanced AI and establish industry standards. The collaborative initiative emphasizes safety, security, and trust, with the companies committing to risk management tools such as watermarking systems that distinguish AI-generated content from human-produced content.

Companies know AI is not merely about integrating the latest algorithms or using vast data sets. At its core, it’s about making critical decisions that affect stakeholders, users, and the general public. That’s why, as AI weaves its way deeper into daily operations, the real question companies should grapple with isn’t “Can I implement this AI?” but rather “Should I?”

As such, before diving headfirst into AI development, organizations need to assess the ethical dimensions, from bias and accuracy to transparency and data privacy. That’s where a risk management system comes into play. Mark Beccue, Research Director of AI at The Futurum Group, weighs in with further analysis on how a risk management system can help a business set ethical standards for its AI practices.

Mark’s Thoughts:

“An organization shouldn’t be asking, ‘Can I do this AI?’ It really should be asking, rather, ‘Should I do this AI?’ And you look at that through the lens of: Does it make sense for our organization? What risks are we going to take?

Hi. I’m Mark Beccue. I’m the Research Director of AI at the Futurum Group.

Thinking about this question, it’s interesting that some of the leaders in the hyperscaler space have come forward to volunteer some best practices. And I think the first thing we have to think about when we talk about recommendations is that any organization that’s looking at AI or working on AI right now should build its own AI risk management structure. And what I mean by that is you’re going to look for AI governance tools. It starts with risk management and risk assessments, but it really should include an oversight team and lifecycle management.

How you do AI, really soup to nuts, comes down to working on understanding the risks for your company. And that’s aside from what’s happening with standards and the like. Because there are lots of risks involved, and you need to understand them.

When you look at AI risk management, another best practice is to think about the core areas of focus, which really revolve around AI ethics. Those are accuracy, which has to do with bias; transparency of AI; security, which also includes data privacy; and then fairness. So those are the core pieces of an AI risk management structure.

And there’s another piece you have to think about when you’re looking at AI going forward, how you’re going to look at standards, and what best practices are. It really comes down to a very simple question. An organization shouldn’t be asking, ‘Can I do this AI?’ It really should be asking, rather, ‘Should I do this AI?’ And you look at that through the lens of: Does it make sense for our organization? What risks are we going to take?

So, if you take that approach, the standards are going to come. It’s super early; there’s not a lot out there yet that’s very specific to AI. But they will come. The laws and the standards will come.

There are some protections within GDPR for organizations to think about in the meantime. That’s data privacy, which will cover a lot of AI concerns. And then there are maybe some security issues. Not so much in standards, but in what organizations do to vet the technology or applications they use, thinking about that from their own perspective of what they face with security.

The last thing I’ll leave you with is a really good resource for any organization that’s looking into this step: it’s called aiethicist.org. It has lots of frameworks and lots of resources from multiple organizations, all free, that you can look at to start thinking about how you’re going to set up your organization.”
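To make Beccue’s “Should I do this AI?” test a bit more concrete, here is one minimal sketch of a risk register built around the four focus areas he names. Everything in it, from the field names to the 1-to-5 scoring scale and the should_we_do_this threshold, is an illustrative assumption rather than a published standard or anything Beccue prescribes.

```python
from dataclasses import dataclass
from enum import Enum


class FocusArea(Enum):
    """The core focus areas Beccue names for AI risk management."""
    ACCURACY_AND_BIAS = "accuracy and bias"
    TRANSPARENCY = "transparency"
    SECURITY_AND_PRIVACY = "security and data privacy"
    FAIRNESS = "fairness"


@dataclass
class RiskAssessment:
    """One entry in a hypothetical AI risk register.

    The field names and the 1-5 scoring scale are illustrative
    assumptions, not part of any published standard.
    """
    use_case: str          # the AI project under review
    focus_area: FocusArea  # which ethical dimension it touches
    likelihood: int        # 1 (rare) through 5 (almost certain)
    impact: int            # 1 (negligible) through 5 (severe)
    owner: str             # who on the oversight team is accountable
    mitigation: str = ""   # planned control, if any

    @property
    def score(self) -> int:
        """Simple likelihood-times-impact score used to rank risks."""
        return self.likelihood * self.impact


def should_we_do_this(register: list[RiskAssessment], threshold: int = 15) -> bool:
    """Answer the 'Should I do this AI?' question for one use case:
    proceed only if every high-scoring risk has a planned mitigation."""
    return all(entry.score <= threshold or entry.mitigation for entry in register)


# Example: a customer chatbot reviewed against two focus areas.
register = [
    RiskAssessment("customer chatbot", FocusArea.ACCURACY_AND_BIAS,
                   likelihood=4, impact=4, owner="AI oversight team",
                   mitigation="human review of flagged responses"),
    RiskAssessment("customer chatbot", FocusArea.SECURITY_AND_PRIVACY,
                   likelihood=2, impact=5, owner="security lead"),
]
print(should_we_do_this(register))  # True: the 16-point risk is mitigated
```

The point of the design is simply that the “should” question gets answered per use case, risk by risk, with a named owner and a mitigation on record before a project proceeds.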

Article written by Cara Schildmeyer.
