As Organizations Ask If They “Should” Launch AI Projects, A Risk Management System Becomes Essential

 

Four major US AI firms, Anthropic, Google, Microsoft, and ChatGPT’s creator OpenAI, have launched the Frontier Model Forum to address advanced AI risks and establish industry standards. The collaborative initiative emphasizes safety, security, and trust, with the companies committing to risk management tools such as watermarking systems that differentiate AI-generated content from human-produced content.

Companies know AI is not merely about integrating the latest algorithms or using vast data sets. At its core, it’s about making critical decisions that affect stakeholders, users, and the general public. That’s why, as AI weaves its way deeper into daily operations, the real question companies should grapple with isn’t “Can I implement this AI?” but rather “Should I?”

As such, before diving headfirst into AI development, organizations need to assess the ethical dimensions, from bias and accuracy to transparency and data privacy. That’s where a risk management system comes into play. Expert Mark Beccue, Research Director of AI at The Futurum Group, weighs in with further analysis on how a risk management system can help a business set ethical standards for its AI practices.

Mark’s Thoughts:

“An organization shouldn’t be asking, ‘Can I do this AI?’ It really should be asking, rather, ‘Should I do this AI?’ And you look at that through the lens of: Does it make sense for our organization? What risks are we going to take?

Hi. I’m Mark Beccue. I’m the Research Director of AI at the Futurum Group.

Thinking about this question, it is interesting that some of the leaders in the hyperscaler space have come forward to volunteer some best practices. And I think the first thing we have to think about when we talk about recommendations is that any organization that’s looking at AI or working on AI right now should build its own AI risk management structure. And what I mean by that is you’re going to look for governance tools, AI governance tools. It starts with risk management and risk assessments, but it really should include an oversight team and life cycle management.

It’s really soup to nuts: how you do AI really comes down to working on understanding the risks for your company. And, you know, that’s aside from things that are happening with standards. Because there are lots of risks involved, and you need to understand them.

When you look at AI risk management, another best practice is to think about the core areas of focus, which really revolve around AI ethics. Those are accuracy, which has to do with bias; transparency of AI; security, which also includes data privacy; and then fairness. So those are the core pieces that are in an AI risk management structure.

And there’s really another piece that you have to think about when you’re looking at AI going forward, how you’re going to look at standards, and what best practices are. And it really comes down to a very simple question. An organization shouldn’t be asking, ‘Can I do this AI?’ It really should be asking, rather, ‘Should I do this AI?’ And you look at that through the lens of: Does it make sense for our organization? What risks are we going to take?

So, if you take that approach, the standards are going to come. It’s super early. There’s not a lot out there right now that’s very specific to AI. They will come. The laws and the standards will come.

There are some protections for organizations to kind of think about this a little more within GDPR. So that’s data privacy, which will cover a lot of AI things. And then there are maybe some security issues. Not so much in standards, but in what organizations do to vet the technology or applications they use, thinking about that from their own perspective of what they face with security.

The last thing I’ll leave is that there’s a really good resource for any organization that’s looking into this, and it’s called aiethicist.org. It has lots of frameworks and lots of free resources from multiple organizations that you can look at to start to think about how you’re going to set up your organization.”

Article written by Cara Schildmeyer.
