ChatGPT Is Proving Its Utility at Work. Should Educators Encourage Using ChatGPT in the Classroom?

If ChatGPT were human, it would have become a corporate executive and a doctor by now. The AI tool has already passed medical and MBA exams, prompting education professionals to rethink their approach to test design. At the same time, it’s demonstrating how capable generative AI is at navigating academic material and working through complex curricula. As it proves its utility, though, it’s also gathering a crowd of detractors who argue that ChatGPT raises ethical concerns and has no place in a student’s toolbelt. Should students be policed for using ChatGPT in the classroom and on homework? Or is discouraging the tool a disservice to students who should be developing AI skills?

Soon after ChatGPT went viral, teachers reported a rise in cases of AI-assisted cheating. A philosophy professor, for instance, caught 14 students cheating with its help. In response, New York City’s education department blocked access to the tool across its network. Nonprofits in the educational sphere, such as CommonLit.org and Quill.org, also launched a free tool aimed at helping teachers distinguish AI-generated text from human writing. It seems there’s real momentum behind a crackdown on students’ use of ChatGPT in the classroom.

Some educators and experts disagree with this approach. ChatGPT, it turns out, managed only a C+ on a law exam, so it’s hardly a test-taking panacea for students. And even though it fared better on the MBA exam, it struggled with in-depth, complex questions. While students are using it to help with homework, even professors concerned about the tool’s ethics in education acknowledge that it’s actually rather hard to cheat with ChatGPT, because it produces “uninspiring, milquetoast, and often wrong essays…that almost say nothing and they have no author’s voice or personality.” Others believe AI should be integrated into education to improve teachers’ work lives, using ChatGPT to customize lesson plans and generate quizzes.

Michael Horn, co-founder and distinguished fellow at the Clayton Christensen Institute for Disruptive Innovation, author, and host of the Future of Education podcast, weighed in with his analysis of the role of ChatGPT in the classroom.

Michael’s Thoughts:

“[OpenAI] certainly turned a lot of heads in the world of education, when it released a tool that effectively allows students to write their own essays. And so you’re seeing all sorts of organizations, like Quill.org and CommonLit.org, and more, introducing tools to help detect essays that are written by artificial intelligence.

In my opinion, this is a race to nowhere. I just don’t think it’s the right approach to be thinking about this. Instead of starting from a plagiarism- and cheating-first propensity around students, I think we ought to do what Sean Michael Morris of Course Hero urged us on Future U to do, where he told me and Jeff Selingo more broadly, not just about AI, that the focus ought to be on the learning process of students and how they collaborate on the work itself, as opposed to trying to catch them or something like that.

What Quill.org and CommonLit.org are doing is, they’re saying, ‘Don’t ban these AI tools that can help students write essays, learn how to use them responsibly.’ And so, even though I’m not wild about tools that catch plagiarism, I get their purpose. And I’m really glad that they’re shifting the conversation to ‘how do we use this to uplevel the quality of work that students are doing?’ And even more important, uplevel the learning that’s actually happening. That’s where I’d love to see the shift: From the grades to the actual learning and objectives that students take away from it.”

Article written by Aarushi Maheshwari.
