Weighing the Risks of AI Tools, From Demographic Bias to Privacy Violations

 

With Microsoft announcing a multibillion-dollar investment in OpenAI, the maker of ChatGPT, Google launching Bard, and China’s search-engine giant Baidu, Inc. entering the race with Ernie, the AI party has officially begun. More companies are integrating ChatGPT into their daily operations as the tool proves flexible across a variety of use cases, and adoption is hot: ChatGPT crossed 100 million users more than a month ago. However, even with all this use-case validation and excitement around generative AI’s possibilities, experts are increasingly warning against the risks of AI.

Recently, the Federal Trade Commission (FTC) warned companies against making baseless claims about their AI-enabled products and failing to see the risks those products pose. The warning comes nearly two years after the FTC raised concerns about the “troubling outcomes” produced by some AI tools, pointing to a healthcare algorithm that was found to exhibit racial bias. And a few years ago, Amazon found that its own recruiting tool discriminated against women.
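To make the idea of an algorithm “showing bias” concrete: regulators and auditors often start with something as simple as comparing selection rates across demographic groups. The minimal sketch below (in Python, with entirely hypothetical data and group labels) illustrates that kind of check using the so-called four-fifths rule; it is a generic illustration, not the methodology used in either of the cases above.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the fraction of positive outcomes per demographic group.

    records: iterable of (group, selected) pairs, where selected is True/False.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    A common rule of thumb (the "four-fifths rule") treats ratios
    below 0.8 as a signal of potential adverse impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (demographic group, was the candidate advanced?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
print(rates)                          # roughly {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, worth investigating
```

A real audit would go well beyond a single ratio, but even this crude check makes the point that bias is measurable rather than mysterious.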

In the past, the FTC has fined Facebook billions of dollars for violating users’ privacy, including through its handling of facial recognition. Even White Castle, a hamburger chain, could face damages worth billions of dollars for automatically collecting and sharing its employees’ biometric data without prior consent.

Scott Sereboff, general manager of North American operations for Deeping Source, a spatial analytics company whose software lets businesses collect physical and virtual data without infringing on individual customers’ or employees’ privacy, gives his perspective on the risks of AI and why he has been campaigning to highlight its ethical uses.
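Privacy-preserving analytics of the kind Deeping Source markets generally comes down to stripping identifying detail from the raw feed before anything else touches it. The minimal sketch below, which assumes OpenCV and its bundled Haar-cascade face detector, blurs detected faces in a camera frame so downstream analytics never handle identifiable imagery; it illustrates the general approach, not Deeping Source’s actual pipeline.

```python
import cv2

# Haar-cascade face detector that ships with the opencv-python package.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymize_frame(frame):
    """Return a copy of the frame with every detected face heavily blurred.

    Downstream analytics (counting, dwell time, heat maps) can then run on
    the anonymized frame without ever storing identifiable imagery.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out

# Hypothetical usage: anonymize a single image before it is stored or analyzed.
# frame = cv2.imread("storefront_camera.jpg")
# cv2.imwrite("storefront_camera_anon.jpg", anonymize_frame(frame))
```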

Scott’s Thoughts:

When it comes to artificial intelligence, machine learning, and the things that go with them, perhaps the key topic on which industry thought leaders should be focused is ethics and morality: where we’re going with this and how we’re going to use it. The benefits of AI and machine learning are probably too numerous to count, but are we letting the technology grow past our ability to guide and shape it into something that is at least more difficult to use in a negative way? The conversations all these industry experts should be having are around exactly that: demystifying AI and machine learning, and helping people understand that at the end of every one of these algorithmic chains is a human who has programmed it, categorized the data, or had a hand in shaping the database or the AI itself. The scary part of this process is that if we can write the ultimate AI-for-good program in any subset (facial recognition, voice recognition), then a bad actor can just as easily write the fascist version or the apartheid version. IBM assisted the South African government in creating a database that was used to suppress black South Africans.

Now, with every part of the AI development process, there’s something important to consider in light of how it is or is not regulated. In the United States, we don’t really have GDPR-style legislation; Illinois has incredibly tough legislation, however, and so does California. Another discussion we should really be having is about potential legal ramifications. Are you and your company protected against the legal trouble you can find yourself in if you reveal personal information, or take personal information from a person without his or her permission? It is a legal gray area, but are we paying enough attention to that topic as well? Let’s say I’m running a multistate corporation with front-end retail or shopping or a hospital network, and I’ve got some sort of AI that is collecting data through video cameras, audio interfaces, or gait analysis. The question I must ask is: am I paying attention to whether or not this can be used to track everything back to me?

I don’t know if everyone realizes the potential danger, especially with what just happened in Illinois with the White Castle case, although it’s certainly (I should say, almost certainly) not going to wind up as bad as it looks right now. The industry’s response to it has been one of surprise, and yet, why are they surprised?

We have spent decades watching social media become an incredibly divisive mechanism of societal upheaval. If we’re not careful and don’t keep an eye on what we’re doing with artificial intelligence, it will do exactly the same thing, and history will repeat itself. It’s really easy to say, ‘I’m not prejudiced. The database made me do it.’ So I would suggest that across all three of the questions you’ve asked, the key topics, important conversations, and important questions all come down to morality and ethics.
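One practical answer to Sereboff’s question about whether collected data can be traced back to a person is to design the pipeline so that nothing individually identifying is ever written down in the first place. The sketch below (hypothetical class and method names, not tied to any particular product or statute) retains only per-hour, per-zone counts from a sensor feed, so there is no per-person record to leak, subpoena, or trace.

```python
from collections import Counter
from datetime import datetime, timezone

class AggregateOnlyLogger:
    """Keep per-hour visitor counts; never store identifiers or raw frames."""

    def __init__(self):
        self._counts = Counter()

    def record_detection(self, zone: str) -> None:
        """Log that *someone* was seen in a zone; who they were is never kept."""
        hour = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:00Z")
        self._counts[(hour, zone)] += 1

    def report(self):
        """Return aggregate foot-traffic counts, the only thing retained."""
        return dict(self._counts)

# Hypothetical usage inside a camera-analytics loop:
log = AggregateOnlyLogger()
log.record_detection("entrance")
log.record_detection("entrance")
log.record_detection("pharmacy")
print(log.report())  # e.g. {('2025-05-21T14:00Z', 'entrance'): 2, ('2025-05-21T14:00Z', 'pharmacy'): 1}
```

Aggregation alone is not a complete privacy guarantee (small counts can still re-identify people), but it shows how the traceability question can be answered in the architecture rather than the terms of service.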

Article written by Aarushi Maheswhari
