ChatGPT is Proving its Utility at Work. Should Educators Encourage Using ChatGPT in the Classroom?

If ChatGPT were human, it would have become a corporate executive and doctor by now. The AI tool has already passed medical and MBA exams, prompting education professionals to rethink how they design tests. At the same time, it's demonstrating how capable generative AI is at navigating academic material and working through complex coursework. As it proves its utility, though, it's also gathering a crowd of detractors who say ChatGPT is an ethical concern and has no place in students' tool belts. Do students need to be policed for using ChatGPT in the classroom and on homework? Or is disincentivizing the tool a disservice to students who should be developing AI skills?

Soon after ChatGPT went viral, teachers reported a rise in AI-assisted cheating. A philosophy professor, for instance, caught 14 students cheating with its help. In response, New York City's education department blocked access to the tool across its network. Educational nonprofits like CommonLit.org and Quill.org also launched a free tool aimed at helping teachers distinguish AI-generated text from human writing. It seems there's momentum behind a crackdown on students' use of ChatGPT in the classroom.

Some educators and experts disagree with this approach. ChatGPT, it turns out, managed only a C+ on a law exam, so it's not a test-taking panacea for students. And even though it fared better on the MBA exam, it struggled with in-depth, complex questions. While students are using it to help with homework, even professors concerned about the tool's ethics in education acknowledge that it's actually quite hard to cheat with ChatGPT, since it produces "uninspiring, milquetoast, and often wrong essays…that almost say nothing and they have no author's voice or personality." Others believe AI should be integrated into education to improve teachers' work lives, using ChatGPT to customize lesson plans and generate quizzes.

Michael Horn, co-founder and distinguished fellow at the Clayton Christensen Institute for Disruptive Innovation, author, and host of the Future of Education podcast, weighed in with his analysis of ChatGPT's role in the classroom.

Michael’s Thoughts:

“[OpenAI] certainly turned a lot of heads in the world of education, when it released a tool that effectively allows students to write their own essays. And so you’re seeing all sorts of organizations, like Quill.org and CommonLit.org, and more, introducing tools to help detect essays that are written by artificial intelligence.

In my opinion, this is a race to nowhere. I just don't think it's the right approach to be thinking about this. Instead of coming at this from a plagiarism and sort of cheating-first propensity around students, I think what we ought to do is what Sean Michael Morris of Course Hero urged us to do on Future U, where he told me and Jeff Selingo, not just about AI but more broadly, that the focus ought to be on the learning process of students and how they collaborate on the work itself, as opposed to trying to catch them or something like that.

What Quill.org and CommonLit.org are doing is, they’re saying, ‘Don’t ban these AI tools that can help students write essays, learn how to use them responsibly.’ And so, even though I’m not wild about tools that catch plagiarism, I get their purpose. And I’m really glad that they’re shifting the conversation to ‘how do we use this to uplevel the quality of work that students are doing?’ And even more important, uplevel the learning that’s actually happening. That’s where I’d love to see the shift: From the grades to the actual learning and objectives that students take away from it.”

Article written by Aarushi Maheshwari.
