Weighing the Risks of AI Tools, From Demographic Bias to Privacy Violations


With Microsoft announcing a multibillion-dollar investment in ChatGPT, Google launching Bard, and China’s search engine giant Baidu, Inc. entering the race with Ernie, the AI party has officially begun. More companies are integrating ChatGPT into their daily operations as the tool proves flexible across a variety of use cases, and adoption is hot: ChatGPT crossed 100 million users over a month ago. However, even with all this use case validation and excitement around generative AI’s possibilities, experts are increasingly warning about its risks.

Recently, the Federal Trade Commission (FTC) warned companies against making baseless claims about their AI-enabled products and failing to see the risks those products pose. The warning comes nearly two years after the FTC raised concerns about the “troubling outcomes” produced by some AI tools, pointing to a healthcare algorithm found to exhibit racial bias. A few years earlier, Amazon found that its own recruiting tool discriminated against women.

In the past, the FTC has fined Facebook billions of dollars for violating users’ privacy through its facial recognition software. Even White Castle, a hamburger chain, could face a fine worth billions of dollars for the automated collection and sharing of the biometric data of its employees without prior consent.

Scott Sereboff, general manager for North America at Deeping Source, a spatial analytics company whose software lets businesses collect physical and virtual data without infringing on the privacy of individual customers or employees, gives his perspective on the risks of AI and why he has been campaigning to highlight its ethical uses.

Scott’s Thoughts:

When it comes to artificial intelligence, machine learning, and the things that go with them, perhaps the key topic industry thought leaders should be focused on is ethics and morality: where we’re going with this and how we’re going to use it. The benefits of AI and machine learning are probably too numerous to count, but are we letting the technology grow past our ability to guide and shape it into something that is at least more difficult to use in a negative way?

The conversations all these industry experts should be having are around that: demystifying AI and machine learning, and helping people understand that at the end of every one of these algorithmic chains is a human who has programmed it, categorized the data, or had a hand in shaping a database or the AI itself. The part of this process that becomes scary is that if we can write the ultimate AI-for-good program in any subset, such as facial recognition or voice recognition, then a bad actor can just as easily write the fascist version or the apartheid version. IBM assisted the South African government in the creation of a database that was used for the suppression of black South Africans.

Now, with every part of the AI development process, there is something important to consider in light of how it is or is not regulated. In the United States, we don’t really have GDPR-style legislation at the federal level. Illinois has incredibly tough legislation, however, and so does California. Another discussion we should really be having is about potential legal ramifications. Are you and your company protected against the legal trouble you can find yourself in if you reveal personal information, or take personal information from a person without his or her permission? It is a legal gray area, but are we paying enough attention to that topic as well? Let’s say I’m running a multistate corporation that has front-end retail or shopping, or a hospital network, and I’ve got some sort of AI that is collecting data through video cameras, audio interfaces, or gait analysis. The question I must ask is: am I paying attention to whether or not this can be used to track everything back to me?

I don’t know if everyone realizes the potential danger, especially given what just happened in Illinois with the White Castle case, although it is almost certainly not going to wind up as bad as it looks right now. The industry’s response has been one of surprise, and yet, why are they surprised?

We have spent decades watching social media become an incredibly divisive mechanism of societal upheaval. If we’re not careful and we don’t keep an eye on what we’re doing with artificial intelligence, it will do exactly the same thing, and history will repeat itself. It’s really easy to say, ‘I’m not prejudiced. The database made me do it.’ So I would suggest that across all three questions you’ve asked, the key topics, important conversations, and important questions are all questions of morality and ethics.

Article written by Aarushi Maheswhari
