Can Scientists Use ChatGPT for Scientific Research?
In December 2022, the University of Colorado School of Medicine’s Casey Greene and the Perelman School of Medicine’s Milton Pividori ran an experiment using ChatGPT to improve three research papers. The experiment tested whether OpenAI’s language models were robust enough to make writing and revising academic manuscripts more efficient. Surprising much of the academic community, the experiment succeeded: their research found that OpenAI’s models “can capture the concepts in the scholarly text and produce high-quality revisions that improve clarity.” More readable and accessible academic manuscripts sound like a win for everyone involved, but how else can AI-powered chatbots serve the academic community? Greene and Pividori’s research leaves many wondering whether scientists working in rigorous, highly technical fields can leverage ChatGPT for scientific research from start to finish.
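For readers curious what such a revision step could look like in practice, here is a minimal sketch in Python. It assumes the openai package and an API key are available; the model name, prompt, and draft paragraph are illustrative placeholders, not Greene and Pividori’s actual pipeline.

```python
# Minimal sketch of an automated manuscript-revision step.
# Assumes the "openai" Python package (v1+) and an OPENAI_API_KEY
# environment variable. The model, prompt, and paragraph below are
# illustrative, not the exact setup used in the published experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_paragraph = (
    "Hypothetical draft paragraph from a manuscript that could be "
    "stated more clearly and concisely."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model; any capable chat model would work
    messages=[
        {
            "role": "system",
            "content": (
                "You are an editor. Revise the text for clarity and "
                "concision while preserving its scientific meaning."
            ),
        },
        {"role": "user", "content": draft_paragraph},
    ],
)

# The suggested revision, which a human author would still review.
print(response.choices[0].message.content)
```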
ChatGPT launched in November 2022 and gained over one million users within its first week; OpenAI’s tool is clearly popular, gaining mainstream appeal and validating new use cases every week. It can perform complex tasks that save time and energy, producing everything from poetry and prose to computer code and even topical research ideas. However, it is not a perfect tool by any stretch. Prominent voices in the tech community, including Steve Wozniak, warn that ChatGPT’s text generation can be riddled with errors, and erroneous revisions obviously have no place in work that must meet academic standards.
Scientists use various research methods to gather information, develop their hypotheses, and supplement academic texts: digging for sources, building scientific models to test their research, running surveys to collect new data, and more. With ChatGPT Plus arriving hot on its heels, should scientists start to use ChatGPT for scientific research? How confident can they be in its efficacy? Justin Bean, author of What Could Go Right and an experienced sustainability and smart city strategist who works with Fortune 500s and cutting-edge start-ups, weighs in with his take. Bean is highly familiar with the research process; he spent many years as a clean tech and green energy consultant, conducting layered, rigorous research across financial data, venture capital investments, trend reports, and more.
Justin’s Thoughts:
“So, if Siri gave everyone a digital assistant, ChatGPT gives everyone a digital intern. This means it can do way more work for us, including in science. A lot of the work in science and data analysis is about gathering, blending, cleaning, and compiling data, and about making interpretations that lead to some kind of insight, which scientists can then take and do science with.
Now, because of ChatGPT, a lot of that early-stage work can be done by an AI, and that will free up scientists to spend a lot more time doing the science. But not just that: it’s going to free up a lot of people to do science who aren’t conventional scientists and who don’t have PhDs, because it will do a lot of that initial work and explain it in everyday layman’s terms, so that you and I can do the science.
And I think that’s going to democratize a lot of invention and a lot of science, as innovators from around the world bring their different perspectives to science and leverage the information they can get through ChatGPT. Now, as this develops, all these AIs are going to become more and more specialized.
So you may have a specialized artificial intelligence for cancer science and detection, and another one for weather science or climate science. Many different ones will proliferate. In addition, we’re going to have autonomous economic agents that take that information and go out and acquire materials, goods, or contracts for us, making a lot of that process easier and all of us more effective.”