Welcome to The Week in Generative AI, a weekly column for marketers from Quad Insights that quickly sums up need-to-know developments surrounding the rapidly evolving technology.
FTC investigating OpenAI
The Federal Trade Commission is investigating OpenAI, the maker of ChatGPT.
The FTC claims that OpenAI has violated consumer protection laws by putting personal reputations and data at risk. This development “represents the most potent regulatory threat to date to OpenAI’s business in the United States, as the company goes on a global charm offensive to shape the future of artificial intelligence policy,” Cat Zakrzewski writes in The Washington Post.
Though the agency just sent OpenAI a demand for documents this week, the investigation was prompted by a March complaint filed by the Center for AI and Digital Policy (CAIDP), an AI policy think tank. The CAIDP complaint alleges that OpenAI’s GPT-4 text generation tool is biased, deceptive and a risk to public safety. It also argues that OpenAI has not done enough to mitigate the risks of GPT-4. As announced on its website, the CAIDP “has escalated its case against OpenAI… by filing a supplement to the original complaint.”
Meanwhile, with Congress back in session, Nicole Greenberg at MarketWatch writes that “Senate Majority Leader Chuck Schumer released a ‘Dear Colleague’ letter revealing the Democratic Party’s policy goals for the upcoming session, and AI was a central focus.”
Speed-reading Anthropic chatbot debuts
Claude 2 from Anthropic is making headlines for its ability to summarize novels in a few sentences. The latest chatbot to hit the market comes from siblings Daniela and Dario Amodei, both former OpenAI researchers. (Startup Anthropic recently raised $450 million from a host of deep-pocketed investors, including Google and Salesforce Ventures, as previously reported by Krystal Hu and Jaiveer Shekhawat of Reuters.)
Beyond summarizing large tracts of text (up to 75,000 words or so), the chatbot can perform other tasks, including translation, coding and solving math problems.
In The Guardian, Dan Milmo writes that the company describes its “safety method as ‘Constitutional AI,’ referring to the use of a set of principles to make judgments about the text it is producing.” But he also notes that “the chatbot appears to be prone to ‘hallucinations’ or factual errors, such as mistakenly claiming that AS Roma won the 2023 Europa Conference League, instead of West Ham United.”
Emma Roth from The Verge kicked the tires on Claude 2 and notes that “unlike Bard and Bing… Claude 2 still isn’t connected to the internet and is trained on data up to December 2022. While that means it can’t surface up-to-the-minute information on current events (it doesn’t even know what Threads is!), its dataset is still more recent than the one that the free version of ChatGPT uses.”
Dr. Chatbot will see you now
Healthcare is getting a boost from artificial intelligence, as Google is testing medical chatbots with the Mayo Clinic, according to Miles Kruppa and Nidhi Subbaraman of The Wall Street Journal. The pair write that “Google is betting that its medical chatbot technology, which is called Med-PaLM 2, will be better at holding conversations on healthcare issues than more general-purpose algorithms because it has been fed questions and answers from medical licensing exams.”
Leana S. Wen writes in The Washington Post’s Opinion section that “the AI revolution in health care is already here” and that the Mayo Clinic “has created more than 160 AI algorithms in cardiology, neurology, radiology and other specialties. Forty of those have already been deployed in patient care.” The concern with generative AI, she adds, is that while “the quality of predictive AI can be measured, generative AI models produce different answers to the same question each time, making validation more challenging.”
Thanks for your attention as we follow the generative AI beat. Check back next week for more.
Previously: “The Week in Generative AI: July 7, 2023 edition”