Generative AI Writing is Here and Integrating into Daily Life. Still, Society Gets the Final Say on How the Technology is Used

 

ChatGPT and other robust generative AI writing solutions already impact every business. One industry pushed back and won a temporary pause on the technology’s use. As more industries face similar dilemmas, the technology keeps improving. So, who gets the final say in how best to integrate AI solutions? Society will have to play decision-maker.

The Writers Guild of America (WGA) recently emerged from a prolonged strike and challenging contract negotiations with a richer, more comprehensive deal than anticipated. A significant aspect of the new contract is a mandate on minimum staffing levels for TV series, scaled to the number of episodes, with an exception for solo-writer shows. Another notable inclusion is a restriction on using generative AI in the creative process, ensuring that AI-generated material doesn’t undermine a writer’s credit or separated rights.

The entanglement of artificial intelligence (AI) in daily life is more than a technological marvel; it has become a matter of social discourse over how society wants to integrate the technology into its workforce, creative pursuits, and commercial spaces.

While the new WGA contract firmly sides against AI in writing, does that mean that any benefits brought by generative AI solutions are a non-starter in the film and television industry? And are other industries lining up to regulate or create standards for AI?

Brenda Leong, a Partner with Luminos.Law, emphasizes that the debate isn’t merely about the capabilities of AI but about how legal, policy, and regulatory frameworks align with societal values.

Brenda’s Thoughts

“So, I think where I would start, at one of the most important aspects of your question and the whole contextualization of this, is that this is not really a question about the technology and the capabilities of the technology. This is very much a social values question. How do we want to use this technology in relation to our workforce, our productivity, our creative outlets, our commercial services like streaming TV shows, and things like that? And the aspects of that are about legal decisions. They are about policy, regulatory, and legal decisions based on what we want to protect.

And we do that in many ways in society. We have health and safety standards for food that we have to buy. We have safety standards for vehicles that we drive that are not at all related to the actual functioning of those industries. They are standards and levels that we set in order to protect the people and the values of both the businesses and the consumers that use those services. And so AI is reaching that point. And if anything, this is a really good example of why it’s not just about a new gee-whiz developing technology now. It’s about this technology in our lives in various ways.

And now we need to start doing things like this kind of line drawing. If we have values about our workforce that say we want to protect people’s livelihoods and protect the human creativity of a writer, then we are going to make the choice to limit the use of AI in the writing of Hollywood scripts. I don’t know anything about Hollywood writing. I don’t have anything to do with the entertainment industry or movies or TV shows or anything like that. But I can project that AI systems, first of all, right now, are probably not really good enough to do very much of that.

But they can be used to jumpstart things. They can be used to generate different ways to approach something that might speed along brainstorming sessions, speed along different aspects of that. Maybe those are some of the things that will be allowed. As you said, we don’t know what the details of this are yet, but then those ideas can only be carried forward by a human writer. I don’t know. That’s just speculation. But those might be the kinds of limits that confine its use to where it can probably contribute the most value, which is to sometimes jumpstart those areas. But then the actual fleshing out and complete development of those ideas, and putting them into practice in a script, would be done by a human.

So, I think it’s great that it seems like there’s been some kind of accommodation reached. I hope that the details, once we know them, turn out to be all that the hype is making it sound like. This is the writers’ union resolution; I will point out that the Screen Actors Guild, I don’t think, has resolved yet. They’re still watching this as they resolve theirs. And there are corollary issues there. For example, generative AI systems, the kind of systems that can create written content like scripts, can also create video content. One of the things I’ve heard people talk about in this context is the role of extras in films.

If you’re, you know, a man in the blue shirt who’s in the background of two or three different scenes, you know, that’s how people make their living for a little while until they get that first big break or that’s how they build up their resume or get to meet other actors or network. And that would be the kind of thing that would be very easy to fill in with an AI in small batches throughout recorded productions where they would save the cost of paying an actor even very low rates to be an extra on a set.”

What Industries are Leading Generative AI Regulation?

“I don’t think there’s anything yet approaching a consensus on what standard or ethical use cases of AI would be. I think we’re still very much in the initial churn and turmoil of how that’s looking. I also just want to double down again on a distinction I made a minute ago, which is that this kind of AI is considered generative AI. That is, it creates new content of its own; the output of the AI system is new and theoretically unique content. This is a fairly new capability, still very much developing, and it is impressive and capable. It also has some very specific risks and limitations. It stands in contrast to what we would call traditional AI, which has been around, well, it’s been around for decades, but what we mostly mean when we talk about it is the machine learning aspects that have been around for 10 to 15 years. Those were the shiny new toys until about eight months ago, but they are already also incorporated everywhere across our ecosystems.

And from the AI community, the legal community, the regulatory perspective, and the privacy world, a lot of the focus is still on those kinds of systems because they are the ones doing pattern recognition and predictive analytics about individual people’s behavior, being used in loan approvals, fraud detection, healthcare analysis, financial services, rent control, and all those kinds of things that are truly part of individual people’s day-to-day lives. That is where a lot of the regulatory emphasis is still targeted. And then this new idea of generative AI, oh my goodness, what are we going to do with that, and what new issues is it bringing up, is kind of layered on top of that.

One of the most common applications for generative AI systems, as you probably know and as most people probably think of it, is chatbots: customer service interactions, platforms, and engagements, and even a level up from customer service, with more substantive advice and guidance to consumers. And that crosses industries. So we see that being implemented in healthcare, finance, and education, in many different targeted ways, as organizations find how to apply it and figure out how far they can go using just the generative AI and still be protected before they really need either human review, or need to bump to a human entirely, or whatever that might look like. For example, financial services may want to rely on this for a lot of discussions and questions that consumers might have. But at the same time, the last thing you want from any perspective is a generative AI or large language model (LLM) system giving you financial advice, even though it might, at least a good part of the time, provide very good, reasonable, or usable advice. So that’s a place where there’s a lot of caution.

Now, that’s a very highly regulated industry, so there will be a lot of caution, safeguards, and double checks as they start to implement it. In other places, like a more retail-based environment or something that’s just not quite as regulated generally, there’s going to be room to innovate a little faster, for better and for worse. You know, they’ll find things out faster, but they might also have some more spectacular crashes along the way as they try to figure out how to implement this. And I would say healthcare is another space that replicates that, where people are trying to facilitate communication, information flows, general educational outreach, and resources in ways that are really beneficial and useful. You know, we could go off on a whole tangent about the healthcare system in the US, but one of the clearly defining issues is access to experts, and the lack of it. So anything that can push out information that’s reliable and useful will be a good thing. But then again, how much detailed information targeted to me, particular to my medical history, can you give before the risk of misinformation starts outweighing the potential benefit of giving me more than I’d have access to without making an appointment, showing up in an office, and talking to a doctor whose time is very limited? So those are, I think, some of the examples I would think of.”
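Leong’s point about how far a chatbot can go before it needs human review lends itself to a concrete illustration. The sketch below is not drawn from anything discussed in the interview; it is a minimal, hypothetical human-in-the-loop guardrail written in Python, where the restricted-topic list, the generate_reply() stub, and the escalate_to_human() helper are all invented stand-ins for whatever policy and compliance process a real, regulated deployment would use.

# Illustrative sketch only: a minimal human-in-the-loop guardrail for a
# customer-facing chatbot in a regulated domain such as financial services.
# The topic list and both helper functions are hypothetical placeholders,
# not part of any real product or API mentioned in this article.

from dataclasses import dataclass

# Topics the bot is not allowed to answer on its own.
RESTRICTED_TOPICS = ("invest", "loan", "portfolio", "retirement", "tax")

@dataclass
class BotResponse:
    text: str
    escalated: bool

def generate_reply(question: str) -> str:
    """Stand-in for a call to a generative model."""
    return f"Here is some general information about: {question}"

def escalate_to_human(question: str) -> str:
    """Stand-in for handing the conversation to a human agent."""
    return "That question needs a licensed advisor. Connecting you to a person."

def answer(question: str) -> BotResponse:
    """Answer routine questions; escalate anything that looks like regulated advice."""
    lowered = question.lower()
    if any(topic in lowered for topic in RESTRICTED_TOPICS):
        return BotResponse(escalate_to_human(question), escalated=True)
    return BotResponse(generate_reply(question), escalated=False)

if __name__ == "__main__":
    print(answer("What are your branch hours?").text)           # handled by the bot
    print(answer("Should I move my retirement savings?").text)  # escalated

In practice, the decision about what counts as a restricted topic would come from policy and compliance review rather than a keyword list; the point is only that the escalation boundary Leong describes is an explicit design choice layered on top of the model, not a property of the model itself.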

Article by James Kent
