ChatGPT Is Proving Its Utility at Work. Should Educators Encourage Using ChatGPT in the Classroom?

If ChatGPT were human, it would have become a corporate executive and a doctor by now. The AI tool has already passed medical and MBA exams, prompting education professionals to rethink how they design tests. At the same time, it is demonstrating how capable generative AI is at navigating academic material and working through complex coursework. As its utility grows, though, it is also drawing a crowd of detractors who say ChatGPT is an ethical concern and has no place in students’ tool belts. Do students need to be policed for using ChatGPT in the classroom and on homework? Or is discouraging the tool’s use a disservice to students who should be developing AI skills?

Soon after ChatGPT went viral, teachers reported a rise in AI-assisted cheating. A philosophy professor, for instance, caught 14 students cheating with its help. In response, New York City’s education department blocked access to the tool across its network. Education nonprofits such as CommonLit.org and Quill.org also launched a free tool to help teachers identify which text is AI-generated and which isn’t. There appears to be real momentum behind a crackdown on students’ use of ChatGPT in the classroom.

Some educators and experts disagree with this approach. ChatGPT, it turns out, managed only a C+ on a law exam, so it is no test-taking panacea for students. And even though it fared better on the MBA exam, it struggled with in-depth, complex questions. While students are using it to help with homework, even professors concerned about the tool’s ethics in education acknowledge that it is actually fairly hard to cheat with ChatGPT, because it produces “uninspiring, milquetoast, and often wrong essays…that almost say nothing and they have no author’s voice or personality.” Others believe AI should be integrated into education to improve teachers’ work lives, using ChatGPT to customize lesson plans and generate quizzes.

Michael Horn, co-founder and distinguished fellow at the Clayton Christensen Institute for Disruptive Innovation, author, and host of the Future of Education podcast, weighed in with his analysis of ChatGPT’s role in the classroom.

Michael’s Thoughts:

“[OpenAI] certainly turned a lot of heads in the world of education, when it released a tool that effectively allows students to write their own essays. And so you’re seeing all sorts of organizations, like Quill.org and CommonLit.org, and more, introducing tools to help detect essays that are written by artificial intelligence.

In my opinion, this is a race to nowhere. I just don’t think it’s the right approach to be thinking about this. Instead of moving from a plagiarism and sort of cheating-first propensity around students, I think what we ought to do is what Sean Michael Morris urged us on Future U to do, from Course Hero, where he told me and Jeff Selingo more broadly, not just about AI, but that the focus ought to be on the learning process of students and how they collaborate on the work itself, as opposed to trying to catch them or something like that.

What Quill.org and CommonLit.org are doing is, they’re saying, ‘Don’t ban these AI tools that can help students write essays, learn how to use them responsibly.’ And so, even though I’m not wild about tools that catch plagiarism, I get their purpose. And I’m really glad that they’re shifting the conversation to ‘how do we use this to uplevel the quality of work that students are doing?’ And even more important, uplevel the learning that’s actually happening. That’s where I’d love to see the shift: From the grades to the actual learning and objectives that students take away from it.”

Article written by Aarushi Maheshwari.

