
AI is not ready for primetime

Analysis by Samantha Murphy Kelly, CNN

(CNN) — AI tools like ChatGPT have gone mainstream, and companies behind the technologies are pouring billions of dollars into the bet that they will change the way we live and work.

But alongside that promise comes a constant stream of concerning headlines, some highlighting AI’s potential to churn out biased or inaccurate responses to our questions and commands. Generative AI tools, including ChatGPT, have drawn allegations of copyright violations. Some, disturbingly, have been used to generate non-consensual intimate imagery.

Most recently, the concept of “deepfakes” hit the spotlight when pornographic, AI-generated images of Taylor Swift spread across social media, underscoring the damaging potential posed by mainstream artificial intelligence technology.

President Joe Biden urged Congress during his 2024 State of the Union address to pass legislation to regulate artificial intelligence, including banning “AI voice impersonation and more.” He said lawmakers need to “harness the promise of AI and protect us from its peril,” warning of the technology’s risks to Americans if left unchecked.

His statement followed a recent fake robocall campaign that mimicked his voice and targeted thousands of New Hampshire primary voters in what authorities have described as an AI-enabled election meddling attempt. Even as disinformation experts warn of AI’s threats to polls and public discourse, few expect Congress to pass legislation reining in the AI industry during a divisive election year.

That’s not stopping Big Tech companies and AI firms, which continue to hook consumers and businesses on new features and capabilities.

Most recently, ChatGPT creator OpenAI introduced a new AI model called Sora, which it claims can create “realistic” and “imaginative” videos of up to 60 seconds from quick text prompts. Microsoft has added its AI assistant, Copilot, which runs on the technology that underpins ChatGPT, to its suite of products, including Word, PowerPoint, Teams and Outlook, software that many businesses use worldwide. And Google introduced Gemini, an AI chatbot that has begun to replace the Google Assistant feature on some Android devices.

Concerned experts

Artificial intelligence researchers, professors and legal experts are concerned about AI’s mass adoption before regulators have the ability or willingness to rein it in. Hundreds of these experts signed a letter this week asking AI companies to make policy changes and to allow independent evaluations of their systems for the sake of safety and accountability.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable, with the threat of legal action, cease-and-desist letters, or other methods to impose chilling effects on research,” the letter said.

It added that some generative AI companies have suspended researcher accounts and changed their terms of service to deter some types of evaluation, noting that “disempowering independent researchers is not in AI companies’ own interests.”

The letter came less than a year after some of the biggest names in tech, including Elon Musk, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” (The pause did not happen.)

“The most concerning thing I see around AI is the continued gap between promise and practice,” Suresh Venkatasubramanian, a computer scientist and professor at Brown University, told CNN. “Companies continue to promise to deliver the moon when it comes to AI and still provide moldy green cheese.”

Venkatasubramanian, who was appointed to the White House Office of Science and Technology Policy in 2021 to help advise on AI policy, is among the experts who signed the latest letter.

“Access to major generative AI systems in widespread use is controlled by a few companies,” said Venkatasubramanian, noting that these systems easily make errors and can produce damaging content. “Without the ability to evaluate AI independently and with the fear of legal threats, it will continue to be very difficult for researchers to do the important job of evaluating the safety, security, and trustworthiness of generative AI systems so that policy makers are well informed to take action.”

He said he looks to policymakers and the work being done by the White House’s Office of Management and Budget, which oversees President Biden’s vision across the Executive Branch, to clarify and set “rules of the road.”

Arvind Narayanan, a computer science professor at Princeton who also signed the letter, agreed, telling CNN he’s particularly concerned about the pace at which AI is accelerating, a speed far faster than our ability to adapt to it.

“Tech companies have gotten rich off of a business model where they reap profits from the benefits of new technologies, while the costs of those technologies are borne by society,” said Narayanan, acknowledging that this was the case long before generative AI.

“Guardrails for specific harms are needed but they won’t fix the underlying business model,” he added.

He believes bolder reforms may be necessary too, such as taxing AI companies to fund social safety nets.

For now, users of generative AI must understand the limitations and challenges of products that are still quite far from where they need to be.

Asking AI about AI

When CNN asked ChatGPT whether it and other generative AI tools are ready for mass adoption, it responded “Yes,” but added a caveat: “Ongoing efforts to address ethical, societal, and regulatory challenges are critical for responsible and beneficial mass adoption.”

Google’s Gemini AI tool – previously named Bard – answered similarly but with a bit more caution: “Generative AI is having a moment, but there’s mixed signals about mass adoption.”

“Despite wide use, studies haven’t shown a major productivity boost yet,” Gemini wrote. “Employees may need more training to fully utilize generative AI.”

Gemini also nodded to ethical issues: “Bias in training data can lead to biased AI outputs,” it wrote. “[And] there are concerns about responsible use and accountability.”

CNN’s Brian Fung contributed to this report

