ChatGPT Brings the Promise of AI to the Forefront of Business Operations
What site has nearly five million visits daily, more than 30 million users, a $10 billion deal with Microsoft, and only launched in November 2022? Welcome to the phenomenon known as ChatGPT, OpenAI’s little chatbot that could.
AI has waited patiently for its scalable moment, and ChatGPT may have kicked those doors wide open with its broadly accessible platform, which lets companies build previously untapped solutions. ChatGPT has already helped detect and correct faulty code, and it is helping businesses stretch their resources by taking on mundane tasks in industries such as hospitality. And while large-scale companies will benefit from ChatGPT, smaller businesses may reap the most significant rewards.
Any major technological breakthrough is bound to have its admirers, detractors, adopters, and skeptics. But what else does it have? A list of followers wanting in on the success. Chinese tech giant Baidu is looking to follow ChatGPT’s blueprint with a similar chatbot in March. And Google is making a play for market share with more than a dozen AI tools in the works. All of this activity…and that’s before the release of GPT-4 later in 2023.
Keeping up with the growth of ChatGPT can be a full-time job these days. What does all this mean for business operations? Which organizations will benefit most from ChatGPT and other daily emerging AI solutions, and how can they begin to integrate them into their day-to-day? Scott Castle, Chief Strategy Officer at Sisense, provided some context. Sisense provides its clients with AI-driven analytics infused into their workflows, products, and customer experiences; the company is already deploying ChatGPT both internally and externally.
“At Sisense, we’ve built adapters to make it easy for our internal users to get data augmented by ChatGPT and use it immediately in analysis without help from the data team or the BI team.
I’m seeing data engineers and ops pros building augmented data sets and figuring out how to get the data they have analyzed by these large language models. They’re augmenting data with facts, industry classifications, and revenue figures: information you can collect from the internet or from other documentation, but that can be expensive to collect or to apply.
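The augmentation adapters Castle describes might look, in very rough sketch form, like the following. This is a hypothetical illustration, not Sisense's implementation: `classify_industry` and its lookup table are stand-ins for what would, in production, be a call to a large language model, so the flow runs offline.

```python
# Hypothetical sketch: an "adapter" that enriches records with an
# industry classification fetched on demand, instead of a field the
# team had to collect and store ahead of time.

def classify_industry(company_name: str) -> str:
    """Stand-in for an LLM call that returns an industry label."""
    known = {"Acme Hotels": "Hospitality", "ByteWorks": "Software"}
    return known.get(company_name, "Unknown")

def augment(records: list[dict], classify=classify_industry) -> list[dict]:
    """Return copies of the records with an 'industry' field attached."""
    return [{**r, "industry": classify(r["company"])} for r in records]

rows = [{"company": "Acme Hotels", "revenue": 12.5},
        {"company": "ByteWorks", "revenue": 40.0}]
print(augment(rows))
```

Because the classifier is injected as a parameter, the same adapter could be pointed at a real model endpoint without touching the augmentation logic.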
I’m also seeing applications of these large language models for sentiment analysis and prediction, looking at large bodies of text, and saying, what does this mean?
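A sentiment pass over a body of text might be wired up roughly as below. This is a hedged sketch: `score` here is a trivial keyword stand-in for what would really be a model call built from a prompt like "Classify the sentiment of: …".

```python
# Hypothetical sketch of LLM-style sentiment scoring over a batch of
# text. The keyword heuristic stands in for a real model call so the
# example runs offline.

def score(text: str) -> str:
    """Stand-in sentiment model: in production, an LLM prompt."""
    negative = {"slow", "broken", "refund"}
    return "negative" if any(w in text.lower() for w in negative) else "positive"

reviews = ["Great support, fast answers", "App is slow and broken"]
print([(r, score(r)) for r in reviews])
```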
Models like ChatGPT will free both business users and engineers from collecting, cleaning, storing, and paying for all this data before they need it. Not all data, but facts and predictions. Why would you store those pre-need when you can summon them on demand?
I predict that the work around these large language models will fall into three pieces: adapters that connect to these models, wrappers that make the models more relevant for specific user tasks, and validators to determine whether these models’ outputs are actually right.
I suspect that validation is going to be key in this space. Are my automated insights accurate? Can I get that validation done as automatically as I can get the insights? Where am I getting the validations done, and do they align with my understanding of reality?
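The three pieces Castle predicts, adapters, wrappers, and validators, can be sketched as a tiny pipeline. Everything here is hypothetical: the model is stubbed with a canned answer, and a real adapter would call an LLM API instead.

```python
# Hypothetical sketch of the adapter / wrapper / validator split.

def adapter(prompt: str) -> str:
    """Adapter: send a prompt to the model, return raw text.
    Stubbed here with a canned numeric answer."""
    return "42"

def wrapper(question: str, model=adapter) -> str:
    """Wrapper: frame the user's task as a model prompt."""
    return model(f"Answer with a single number: {question}")

def validator(output: str) -> bool:
    """Validator: check the output is actually in the expected form."""
    return output.strip().isdigit()

answer = wrapper("How many units shipped last week?")
assert validator(answer)  # reject the insight if validation fails
print(answer)
```

The point of the split is that validation runs as automatically as the insight itself, which is exactly the question Castle raises.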
We’ll see a lot of competition between large language models to be the most accurate, the most up-to-date, and maybe even domain-specific, the most reliable within one industry or domain. There’s going to be an arms race: filtering generated content out of sites, out of email, out of social posts, out of page rank, and then trying to find ways to fit it back into the conversation in useful ways.
We’re already seeing this has a tremendous impact on sales and marketing budgets. We’re all rethinking spend in the context of the ability to generate an infinite amount of incremental variations of the same content. The real business opportunity here around large language models will be getting the technology adapted to users and their use cases.
It’s going to be all about building integrations, building adapters, and seamlessly fitting these kinds of capabilities into existing workflows. The rise of ChatGPT is a gold rush, and you want to be in the picks and shovels business.
These large language models can analyze data. They can do it for you, tell you how to do it, or generate the code for it. This will allow many more users to apply basic, and even not-so-basic, analysis before they have to go to specialists or get help from data analysts and data scientists.
It will also change the expectations for what an employee has done with the data before they come to you. Have you analyzed this? How have you analyzed this? Have you gotten more help?
It’s going to mean that data teams focus on more valuable work, because a lot of the basic questions will have been answered before the user even gets to their office.
Embedding analytics into applications and products is a hot topic right now, but analytics can be difficult for end users to navigate. These large language models can actually do a lot of the analysis for the user, so figuring out how to adapt these models to end users is a real challenge, and a real opportunity, for application developers.
Just like we’ve seen NLG and NLQ make it easier for end users to analyze information within applications, large language models will make that even simpler. In fact, users can get a lot of the analysis done just by asking.
So, integrating these large language models into embedded analytics is critical because it lowers the entry barrier for analytics and makes it more valuable to an ISV’s entire user base.
Embedding the capabilities of large language models into applications and workflows is going to be a critical opportunity. ISVs and OEMs who want to make their products easier to use, and the data and insights inside them more accessible, will want to integrate these models into their products, and they’ll need a way to do that. It will not be as simple as just talking to ChatGPT; it will have to be integrated into workflows.”
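What "integrated into workflows" could mean in practice is an in-app question-answering step rather than a detour to a separate chat interface. The sketch below is purely illustrative: `ask_llm` and its canned answer are hypothetical stand-ins for a real model call.

```python
# Hypothetical sketch of embedding an LLM-backed "ask a question"
# step inside an application's own workflow.

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real integration would hit an API."""
    # Canned answer simulating a model that summed the context data.
    return "Total revenue: 52.5"

def embedded_insight(question: str, table: list[dict]) -> str:
    """Fold the user's question and the app's data into one prompt
    and return the model's answer without leaving the application."""
    context = "\n".join(str(row) for row in table)
    return ask_llm(f"Data:\n{context}\n\nQuestion: {question}")

table = [{"region": "East", "revenue": 12.5},
         {"region": "West", "revenue": 40.0}]
print(embedded_insight("What is total revenue?", table))
```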