Welcome back to The Prompt.
Today, a group of 13 current and former employees at OpenAI and Google DeepMind published an open letter warning that leading AI companies are not doing enough to make their AI systems safe, accusing them of prioritizing profits and growth instead. The letter highlights companies' lack of transparency about the risks and limitations of their AI applications, risks it says could "potentially result in human extinction." It also called for stronger whistleblower protections, including processes for raising concerns to company decision makers and the removal of restrictive non-disparagement agreements.
This letter is just one more battle line being drawn in the AI industry, where bitter arguments over the right balance of safety and innovation are pitting billionaire versus billionaire, as we’ll discuss later in this edition.
Now, let’s get into the headlines.
ETHICS + LAW
The Ansel Adams estate publicly chastised Adobe for selling AI-generated stock images in the style of the landscape photographer best known for his black-and-white images of the American West. “You are officially on our last nerve with this behavior,” the estate posted on Threads, adding that it has been trying to reach Adobe since August 2023 to take down the images. In response, Adobe said it removed the images, which violated its AI content policies.
Meanwhile, an AI-generated image bearing the slogan "All eyes on Rafah" went viral last week, racking up over 50 million shares on Instagram and Facebook. The image was created to draw attention to Israel's airstrike on an encampment for the displaced in Gaza, but it also spurred an onslaught of AI-generated spinoff images showing distressed Palestinian women and children, according to 404 Media. The original image was created by a Malaysian woman, but the version that went viral was modified by another Malaysian artist, who removed a watermark signaling that the image was made with AI.
TALENT SHUFFLE
OpenAI is reviving its previously abandoned in-house robotics team, multiple sources told Forbes. The company is hiring research engineers for a team that will train and integrate AI models into robots manufactured by “external partners” like Figure AI. Before it released ChatGPT, OpenAI had considered building a “general purpose robot,” but dropped the effort in late 2020 due to a lack of training data.
CHIP WARS
On Sunday, Nvidia announced its next AI chip, dubbed Rubin, which will be available in 2026. Though CEO Jensen Huang didn’t disclose many details about the chip, he said Nvidia plans to release chips at a “one-year rhythm” as the demand for AI applications heats up. A day later, rival firm AMD unveiled new AI processors it said have better performance and more memory than previous versions. AMD also plans to launch new AI chips at an “annual cadence,” CEO Lisa Su said.
AI DEAL OF THE WEEK
ChatGPT isn’t just text now — it can see, hear and speak to humans. But processing audio and video data in real time requires a faster, more efficient type of network infrastructure provided by three-year-old startup LiveKit, which today announced it has raised about $22 million in Series A funding at a $110 million valuation. Joining the round are angel investors from around the AI industry, including Google Chief Scientist Jeff Dean, tech investor Elad Gil and founders of prominent AI startups like Perplexity CEO Aravind Srinivas, Pika CEO Demi Guo and ElevenLabs CEO Mati Staniszewski.
DEEP DIVE
Artificial intelligence has heralded a technological revolution on par with that of the mainframe or the PC, one that could put a cheap virtual doctor on every smartphone and a free tutor in front of every child. It can serve as a great equalizer, a deflationary cheat code that can help save lives and reduce poverty.
But in Silicon Valley, and increasingly in Washington, D.C., an ideological debate is now raging over AI’s future. A small but powerful group of billionaire investors from the internet era is locked in a war of words and influence to determine whether AI’s future will be one of concentrated safety or unfettered advancement.
For tech legends Vinod Khosla and Reid Hoffman, who both placed in the top 10 of the Forbes Midas List of top venture capitalists that launched today, optimism about AI’s potential is tempered by concern about its potentially terrible cost. Khosla, the former Sun Microsystems CEO and founder of Khosla Ventures, warns of a dystopian AI arms race with China and the risk of the U.S. sharing cutting-edge AI models with its rival. Hoffman, cofounder of LinkedIn and an investor at Greylock, imagines the proliferation of an AI-powered bioweapon that could harm 100 million people.
To Hoffman, Khosla and a powerful set of tech leaders, there’s a clear way to mitigate regrettable, unintended consequences: Control AI’s development and regulate how it’s used. Giants Google and Microsoft are on board, along with ChatGPT maker OpenAI, in which both Khosla and Hoffman were early investors. Their view that guardrails are necessary to reach AI’s utopian potential also has the ear of President Biden, to whom both VCs are donors.
But an increasingly vocal faction is doing all it can to thwart Khosla, Hoffman and their “Big Tech” allies. It is led by fellow Midas Lister Marc Andreessen, cofounder of Netscape and the venture capital firm a16z, and his colleague Martin Casado, backed by an amorphous group of deregulation advocates and open-source absolutists that includes investor Bill Gurley, Meta chief AI scientist Yann LeCun and, sometimes, Tesla CEO and X owner Elon Musk. They consider such talk of disasters and state-level risk a shameless play by AI’s early power holders to keep that power.
“There is no safety issue. The existential risks do not exist with the current technology,” LeCun says.
But everyone feels a sense of urgency, whether it’s to shape the conversation with legislators or to ensure that less powerful stakeholders don’t get left behind. Fei-Fei Li, a pioneer of the field and the co-director of Stanford’s Institute for Human-Centered AI, says she feels “true worry” about what regulatory restrictions mean for academia and the public sector.
Read the full cover story here.
AI INDEX
The Washington Post prompted three popular text-to-image AI tools, Midjourney, DALL-E 3 and Stable Diffusion, to generate illustrations of a “beautiful woman.”
9 in 10
Images generated by Midjourney showed a light-skinned woman.
100%
Depictions generated by all three AI tools showed a thin body type.
Fewer than 10%
Images created by the three AI tools showed a woman with single-fold eyelids.
YOUR WEEKLY DEMO
On Friday, text-to-voice AI startup ElevenLabs launched a new AI model that can generate sound effects. Users can prompt the tool to create sounds of “horses galloping” or “coins jingling,” and the model will spin up several sound samples to choose from within seconds. The tool is trained on data sourced from Shutterstock’s audio library, the company said.
The AI-generated soundscapes also include “emotive” sounds such as “a woman filled with fear screaming loudly” and “people cheering in a stadium.” I tried out the tool and created the sound of a woman sobbing. The model produced four versions, ranging from a loud wail to a soft whimper, all of which sounded hyperrealistic and nearly indistinguishable from real crying — to my horror.
QUIZ
This investor played a significant role in returning Sam Altman to power after his brief ouster from OpenAI in November 2023.
- Marc Andreessen
- Vinod Khosla
- Reid Hoffman
- Alfred Lin
Check if you got it right here.
MODEL BEHAVIOR
People are using Midjourney’s AI image generator to create iPhone-style realistic photos of different foods, but with a twist: raisins sprinkled on everything. A Reddit user posted images of raisin-topped dishes including hot dogs, sushi and donuts, all generated with the help of bad food photos sourced from Yelp.
Read the full article here.