In the Race to Build Smarter AI, Technology Leaders Shouldn’t Forget That Innovation Needs Oversight
When a résumé is filtered out, a loan is denied, or a piece of content never reaches its audience, artificial intelligence may be the unseen hand behind the outcome. As these systems spread across the tools and institutions that shape daily life, the assumptions and priorities of their designers are carried forward into decisions made at scale. The real challenge isn’t the sophistication of the technology itself, but whether accountability, care, and human judgment are being designed in alongside speed and efficiency.
What happens when the race to build faster, smarter AI outruns our ability to use it responsibly?
In this episode of Professional Quotient, we meet with Jasen Zubcevik, President of The American Council for Ethical AI and Chairman of The American AI Association, to explore what it really means to build and deploy ethical AI in a world that’s moving at breakneck speed.
Jasen shares how his studies at MIT and Oxford opened his eyes to both the technical power of AI and the human risks that arise when systems are built without guardrails. From privacy and data security to bias, accessibility, and AI "hallucinations," we talk through the biggest red flags leaders should be watching for right now.
We also dig into the future of work: which jobs are most vulnerable, why AI will replace some roles but not people who know how to use it well, and how each of us can start building “AI fluency” as part of our professional equity.
Along the way, Jasen reflects on the relationships, resilience, and relentless curiosity that have shaped his own PQ—from launching a marketing firm in a crowded field to now convening leaders across government, tech, and nonprofits around responsible AI.
What you’ll learn…
- Ethical AI requires human oversight, not just technical capability. The discussion emphasizes that AI systems must remain under meaningful human control, especially as they are increasingly used in high-stakes areas like security, governance, hiring, and data analysis. Speed and efficiency alone are insufficient without accountability, transparency, and safeguards.
- The biggest risks today include privacy, misinformation, and accessibility. The episode highlights red flags such as users unknowingly uploading sensitive data to cloud-based AI systems, AI hallucinations generating false or misleading information, and the risk of building systems that exclude people with disabilities or serve only limited groups.
- AI will reshape work, but fluency, not fear, determines who benefits. While many roles will be replaced or transformed, individuals who understand how to use AI at a high level will become more valuable, not less. Continuous learning and adaptability are framed as essential components of long-term professional resilience.
Jasen Zubcevik is the President of The American Council for Ethical AI and Chairman of The American AI Association, U.S.-based nonprofits working to promote responsible and ethical AI across sectors. He has studied AI from both technical and human-centered perspectives, completing executive education at MIT and advanced studies at Oxford, where his focus shifted toward AI governance, ethics, and societal impact. Zubcevik has worked across government, technology, nonprofit, and association sectors, and now convenes leaders from organizations such as Microsoft, Amazon, Verizon, and U.S. federal agencies to address the ethical challenges emerging as AI adoption accelerates.
If you’ve been wondering how to engage with AI without losing the human element, this conversation is for you.