Welcome to The Week in Generative AI, a weekly column for marketers from Quad Insights that quickly sums up emerging developments surrounding tools such as ChatGPT and Bard, while also offering the latest details on how generative AI tools are being incorporated into advertising products and workflows.
Apple’s WWDC: A focus on machine learning, not AI
The biggest story of the week in generative AI barely mentioned AI at all. When Apple unveiled its new Vision Pro headset at WWDC 2023, the company used the term “machine learning” to describe the technology that powers the device. That stands in contrast to “artificial intelligence,” the term Apple has used in the past to describe comparable products.
Why is Apple leaning into machine learning instead of artificial intelligence? Machine learning is the more specific term, referring to a type of artificial intelligence that learns from data, while artificial intelligence is a broader term encompassing any technology that simulates human intelligence. For the general-interest buyer experiencing this whiz-bang technology as a user, it’s a “tomato-tomahto” distinction, though uptake at a $3,500 luxury price point remains to be seen.
Apple’s focus on machine learning is a sign of the company’s confidence in the technology, even as the hype around generative AI creates billion-dollar valuations for companies in the space. As James Vincent writes in The Verge, the question is, “How long can the company sit on the sidelines? And will a push into VR distract it from reaping comparatively attainable rewards in AI? We’ll have to wait until the next WWDC.”
Meanwhile, a big-picture message for marketers: Start looking to create immersive digital experiences for consumers as we enter the age of “spatial computing.”
Can you save a month a year using AI?
A recent survey by Salesforce and YouGov of over 1,000 full-time marketers in the United States, the U.K. and Australia found that those marketers estimate generative AI could save them about five hours of work weekly. About 75% of respondents view the technology favorably, 51% are already using or experimenting with generative AI at work, and 22% plan to integrate it soon. The top use cases are content creation and writing marketing copy (76%), inspiring creative thinking (71%), analyzing market data (63%) and generating image assets (62%).
VentureBeat’s Shubham Sharma writes that “even as a majority of marketers see generative AI as transformative to their role, many have also raised concerns about the quality and accuracy of generative AI outputs and the lack of skills needed to get the most out of these tools.” According to the survey data, the main issues are a lack of human-specific creativity and contextual knowledge (73%) and a concern that its results can be biased (66%). To address these issues, marketers called for human oversight (66%), the use of trusted customer data for models (63%) and sufficient training (54%) to properly leverage AI in their workflows — but working out those details will fall to managers and creatives.
AI joins the enterprise, from campus to government
ChatGPT is now available to government users through Microsoft Azure, as Anusaya Lahiri reports in Yahoo Finance. Coming on the heels of OpenAI CEO Sam Altman’s recent high-profile testimony on Capitol Hill, this seems like quite the embrace for the feds as they attempt to regulate this technology. As Cecilia Kang writes in The New York Times, Altman has been on an agenda-setting travel bender and “has run toward the spotlight, seeking the attention of lawmakers in a way that has thawed icy attitudes toward Silicon Valley companies…. And instead of protesting regulations, he has invited lawmakers to impose sweeping rules to hold the technology to account.”
In the wake of international press about the feared end-of-days due to AI, the federal government, the largest American employer, will now feed content into ChatGPT, while major universities such as Vanderbilt create research institutes that will “train Vanderbilt students, faculty and staff to leverage the best of this cutting-edge technology.”
And yet! The story of the rogue AI-powered drone that killed its operator in a U.S. Air Force simulation swept across social media this week, playing into the doom-and-gloom fears sowed by Altman himself. In Wired, Will Knight says the story “sounds like just the sort of thing AI experts have begun warning that increasingly clever and maverick algorithms might do. The tale quickly went viral, of course, with several prominent news sites picking it up, and Twitter was soon abuzz with concerned hot takes.”
As Knight continues: “There’s just one catch—the experiment never happened,” according to the Air Force, which denies the simulation ever took place. “This was a hypothetical thought experiment, not a simulation,” an Air Force spokesperson told Wired.
Just keep in mind that while the U.S. government seeks to regulate this emerging space, it is also increasingly using AI as part of daily operations.
Thanks for checking in as we cover the trials and travails in the generative AI beat. We’ll see you next Friday.
Previously: “The Week in Generative AI: June 2, 2023 edition”