Weighing the Risks of AI Tools, From Demographic Bias to Privacy Violations


With Microsoft announcing a multibillion-dollar investment in OpenAI, the maker of ChatGPT, Google launching Bard, and China's search engine giant Baidu, Inc. entering the race with Ernie, the AI party has officially begun. More companies are integrating ChatGPT into their daily operations as the tool proves flexible across a variety of use cases, and adoption is hot; ChatGPT surpassed 100 million users more than a month ago. Yet even with all this validation and excitement around generative AI's possibilities, experts are increasingly warning against the risks of AI.

Recently, the Federal Trade Commission (FTC) warned companies against making baseless claims about their AI-enabled products and failing to see the risks those products pose. The warning comes nearly two years after the FTC raised concerns about the "troubling outcomes" produced by some AI tools, pointing to a healthcare algorithm that was found to show racial bias. A few years earlier, Amazon found that its recruiting tool discriminated against women.

In the past, the FTC has fined Facebook billions of dollars for violating users' privacy through its facial recognition software. Even White Castle, a hamburger chain, could face a fine worth billions of dollars for automatically collecting and sharing its employees' biometric data without prior consent.

Scott Sereboff, general manager for North America at Deeping Source, a spatial analytics company whose software lets businesses collect physical and virtual data without infringing on the privacy of individual customers or employees, gives his perspective on the risks of AI and explains why he has been on a campaign to highlight its ethical uses.

Scott’s Thoughts:

When it comes to artificial intelligence, machine learning, and the things that go with them, perhaps the key topic industry thought leaders should be focused on is ethics and morality: where we're going with this and how we're going to use it. The benefits of AI and machine learning are probably too numerous to count, but are we letting the technology grow past our ability to guide and shape it into something that is at least more difficult to use in a negative way? The conversations all these industry experts should be having are around that question. That means demystifying AI and machine learning, and helping people understand that at the end of every one of these algorithmic chains is a human who has either programmed it, categorized the data, or had a hand in shaping a database or the AI itself.

The part of this process that becomes scary is that if we can write the ultimate AI-for-good program in any subset (facial recognition, voice recognition), so too can a bad person write the fascist version or the apartheid version. IBM assisted the South African government in creating a database that was used to suppress black South Africans.

Now, with every part of the AI development process, there's something important to consider in light of how it is or is not regulated. In the United States, we don't really have GDPR-style legislation at the federal level. Illinois has incredibly tough legislation, however, and so does California. Another discussion we should really be having is about potential legal ramifications. Are you and your company protected against the legal trouble you can find yourself in if you reveal personal information, or take personal information from a person without his or her permission? It is a legal gray area, but are we paying enough attention to that topic as well? Let's say I'm running a multistate corporation that has front-end retail or shopping, or a hospital network, and I've got some sort of AI that is collecting data through video cameras, audio interfaces, or gait analysis. The question I must ask is: am I paying attention to whether or not this can be used to track everything back to me?

I don't know if everyone realizes the potential danger, especially with what just happened in Illinois with the White Castle case, although it's almost certainly not going to wind up as bad as it looks right now. The industry's response has been one of surprise, and yet, why are they surprised?

We have spent decades watching social media become an incredibly divisive mechanism of societal upheaval. If we're not careful and we don't keep an eye on what we're doing with artificial intelligence, it will do exactly the same thing, and history will repeat itself. It's really easy to say, 'I'm not prejudiced. The database made me do it.' So I would suggest that across all three of the questions you've asked, the key topics, important conversations, and important questions are all questions of morality and ethics.

Article written by Aarushi Maheswhari
