Weighing the Risks of AI Tools, From Demographic Bias to Privacy Violations

With Microsoft announcing a multibillion-dollar investment in ChatGPT maker OpenAI, Google launching Bard, and China's search engine giant Baidu, Inc. entering the race with Ernie, the AI party has officially begun. More companies are integrating ChatGPT into their daily operations as the tool proves flexible across a wide variety of use cases, and adoption is hot: ChatGPT crossed 100 million users over a month ago. Yet even with all this use-case validation and excitement around generative AI's possibilities, experts are increasingly warning about the risks of AI.

Recently, the Federal Trade Commission (FTC) warned companies against making baseless claims about their AI-enabled products and failing to account for the risks those products pose. The warning came nearly two years after the FTC raised concerns about the "troubling outcomes" produced by some AI tools, pointing to a healthcare algorithm that was found to exhibit racial bias. And a few years ago, Amazon found that its recruiting tool discriminated against women.

In the past, the FTC has fined Facebook billions of dollars for violating users' privacy through its facial recognition software. Even White Castle, a hamburger chain, could face a fine worth billions of dollars for automatically collecting and sharing its employees' biometric data without prior consent.

Scott Sereboff, general manager for North America at Deeping Source, a spatial analytics company whose software lets businesses collect physical and virtual data without infringing on the privacy of individual customers or employees, shares his perspective on the risks of AI and why he has been campaigning to highlight its ethical uses.

Scott’s Thoughts:

When it comes to artificial intelligence, machine learning, and everything that goes with them, perhaps the key topic industry thought leaders should be focused on is ethics and morality: where we're going with this and how we're going to use it. The benefits of AI and machine learning are probably too numerous to count, but are we letting the technology grow past our ability to guide and shape it into something that is at least more difficult to use in a negative way? The conversations all these industry experts should be having are around that, demystifying AI and machine learning, and helping people understand that at the end of every one of these algorithmic chains is a human who programmed it, categorized the data, or had a hand in shaping the database or the AI itself. The scary part of this process is that if we can write the ultimate AI-for-good program in any subset (facial recognition, voice recognition), a bad actor can just as easily write the fascist version or the apartheid version. IBM assisted the South African government in creating a database that was used to suppress black South Africans.

Now, with every part of the AI development process, there is something important to consider in light of how it is or is not regulated. In the United States, we don't really have GDPR-style legislation. Illinois, however, has incredibly tough legislation, and so does California. Another discussion we should really be having is about potential legal ramifications. Are you and your company protected against the legal trouble you can find yourself in if you reveal personal information, or take personal information from a person without his or her permission? It is a legal gray area, but are we paying enough attention to that topic as well? Let's say I'm running a multistate corporation with front-end retail or shopping, or a hospital network, and I've got some sort of AI collecting data through video cameras, audio interfaces, or gait analysis. The question I must ask is: am I paying attention to whether or not this can all be traced back to me?

I don't know if everyone realizes the potential danger, especially in light of what just happened in Illinois with the White Castle case, though it is almost certainly not going to wind up as bad as it looks right now. The industry's response has been one of surprise, and yet, why are they surprised?

We have spent decades watching social media become an incredibly divisive mechanism of societal upheaval. If we're not careful and we don't keep an eye on what we're doing with artificial intelligence, it will do exactly the same thing, and history will repeat itself. It's really easy to say, 'I'm not prejudiced. The database made me do it.' So I would suggest that across all three of the questions you've asked, the key topics, the important conversations, and the important questions are all questions of morality and ethics.

Article written by Aarushi Maheswhari
