The most successful branding campaign of all time is “artificial intelligence,” or AI. Coined in 1955, the brand has experienced ups and downs, over-the-top hype and near-extinction. Recently, it has re-emerged as the widely popular brand name for a successful approach to endowing computers with seemingly human-like intelligence, an approach that has nothing to do with the method promoted by the field’s founders. What have remained constant over the last seven decades, however, are the emotional reactions to the brand—anxiety and adulation, fear and fantasy, horror and (human) hallucinations.
John McCarthy, a 27-year-old assistant professor of mathematics at Dartmouth College, coined the term in a 1955 proposal for “a summer research project on artificial intelligence.” His definition of AI, repeated with only slight variations by AI researchers and observers to this day, was “making a machine behave in ways that would be called intelligent if a human were so behaving.”
Just before submitting the proposal, McCarthy co-edited with Claude Shannon a book titled Automata Studies. McCarthy’s bio on the Turing Award website (he received the award in 1971 for his contributions to artificial intelligence) states that given he “had little use for puffery, it is ironic that his most widely recognized contribution turned out to be in the field of marketing, specifically in choosing a brand name for the field. Having noticed that the title of the Automata Studies book didn’t stir up much excitement… he introduced the name artificial intelligence… and saw that it was embraced both by people working in the field and the general public.”
Branding is a business strategy that gives a target audience a reason to choose the organization’s products “over the competition’s, by clarifying what this particular brand is and is not,” says The Branding Journal. In the 1950s and 1960s, the competition in AI’s case was for research funding and academic resources, mostly those provided by the U.S. government.
In “Conjoined Twins: Artificial Intelligence and the Invention of Computer Science,” historian Thomas Haigh describes the benefits of the branding campaign: “[Artificial Intelligence] began as a brand used by researchers at a small set of elite institutions to tie their work to lofty goals, win research support, and bolster their position within the emerging field of computer science.”
Tracing the simultaneous evolution of the field of AI research and the new academic discipline of computer science, with its many subfields, Haigh observes that “AI loomed large as one of the most prestigious of these subfields,” helping attract talent and funds to computer science departments with prominent AI researchers. The “powerful legacy” of the successful AI brand, writes Haigh, is that “MIT, Stanford, and Carnegie Mellon are still ranked as the top three academic programs in the U.S., not just for AI but for computer science itself.”
Beyond academia and computer science, McCarthy’s branding success was emulated by the companies making the computers that AI researchers tried to turn into intelligent machines. Two wildly successful branding campaigns were created by engineers turned entrepreneurs aiming to attract talent, customers, and funding (in their case, from venture capitalists and public markets, not the government). Unlike McCarthy’s campaign, theirs became associated with them personally when others attached their last names to the brands: “Moore’s Law” and “Metcalfe’s Law.”
The “Moore’s Law” campaign turned the ingenuity of semiconductor engineers into a law of nature, convincing all who wanted to be convinced (the required target audience for any branding campaign) of the inexorable upward trajectory of computer technology, leading the denizens of Silicon Valley to greet any small or large (or even negligible and soon-to-be-forgotten) new feature and functionality of computers with the exclamation “this changes everything.” The “Metcalfe’s Law” campaign promoted “network effects,” convincing all who wanted to be convinced that the only thing that matters in the technology business is (audience) growth, leading the denizens of Silicon Valley to insist on doing everything “at scale.”
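Both slogans rest on simple arithmetic: Moore’s Law, as popularly stated, holds that transistor counts double roughly every two years, while Metcalfe’s Law values a network by its possible pairwise connections, n(n-1)/2, which grows roughly as n². The sketch below illustrates the numbers behind both pitches; the starting figures are illustrative, not historical data.

```python
# A minimal sketch of the arithmetic behind the two slogans.
# The initial values are illustrative, not historical data.

def moores_law(transistors: int, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return transistors * 2 ** (years / doubling_period)

def metcalfes_law(users: int) -> int:
    """Possible pairwise connections among `users`: n(n-1)/2, i.e. ~n^2."""
    return users * (users - 1) // 2

# Ten doublings in twenty years: a 1,024-fold increase.
print(moores_law(2_000, 20))    # 2048000.0

# Doubling the audience roughly quadruples the connection count --
# the "network effects" argument for growth at all costs.
print(metcalfes_law(1_000))     # 499500
print(metcalfes_law(2_000))     # 1999000
```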
Brands are usually created by marketing professionals, who stress all the positive aspects of the product or company they promote. In contrast, the three brands discussed here (to avoid unnecessary confusion, I will refrain from calling them the “3M brands”) generate both positive and negative reactions, in the manner of “technology is a double-edged sword.” Artificial intelligence, however, stands out even in the crowded field of technology-related brands as the one that elicits the most extreme reactions, both positive and negative.
That explains its enduring power over the last seven decades. But it is only a partial explanation, because dreams (hallucinations?) about intelligent machines that could eliminate humanity’s problems, or humanity itself, had been prevalent in books, movies, and the popular imagination for at least two centuries before the invention of the “artificial intelligence” brand. Just one example: in the early 1830s, when Charles Babbage displayed in his London house a scale model of his Difference Engine, one of the first mechanical computers, his contemporaries called it “the thinking machine.”
Why are we so eager to create a machine in our image?
I have always suspected that the answer lies in modern man’s conviction that humans have replaced God, conquering more and more of nature, including human nature, and that science (and its derivative, technology) is what makes humans so powerful, rational, and omnipotent, helping them overcome their very human limitations.
But only recently did I discover one of the best expressions of this modern religion, the line with which Stewart Brand opened the first edition of the Whole Earth Catalog: “We are as gods and we might as well get good at it.” He continued on the first page: “The insights of Buckminster Fuller are what initiated this catalog: I see God in the instruments and the mechanisms that work reliably, more reliably than the limited sensory departments of the human mechanism.”
Humans are like machines; ergo, humans can be replicated in machines, and humans can create these human-like machines. This is the modern religion or, more accurately, an important dimension of the prevailing belief system of the Western world.
The “we are as gods” dogma is at the root of tech bubbles and backlashes, and behind the frequent rise (driven by unrealistic expectations, fantasies, and delusions) and fall (driven by the letdown of unrealized expectations, fantasies, and delusions) of “artificial intelligence.” This dogma is also responsible for the oscillation, sometimes within the same people, between exuberance and doomerism, both irrational, regarding the future of AI and humanity.
Already in the 1950s, IBM did not want to fan popular fears that man was losing out to machines, “so the company did not talk about artificial intelligence publicly,” wrote Arthur Samuel years later, recalling his 17-year IBM career, during which he developed a checkers-playing computer program and coined the term “machine learning.”
IBM’s salesmen were not supposed to scare customers with speculation about future computer accomplishments. So IBM, among other activities aimed at dispelling the notion that computers were smarter than humans, sponsored the movie Desk Set, featuring a “methods engineer” (Spencer Tracy) who installs the fictional and ominous-looking “electronic brain” EMERAC, and a corporate librarian (Katharine Hepburn) who tells her anxious colleagues in the research department: “They can’t build a machine to do our job—there are too many cross-references in this place.” By the end of the movie, she wins both a match with the computer and the engineer’s heart.
That the ominous aspects of the AI brand, alongside its exuberant ones, have continued to shape corporate decisions to this day was demonstrated by IBM when, in 2014, it attempted to capitalize on its Jeopardy-winning Watson by establishing a dedicated business unit to develop and sell applications based on the question-answering system. While Watson was widely discussed, just like IBM’s chess-winning Deep Blue in 1997, as “artificial intelligence” (e.g., “technologists have long regarded this sort of artificial intelligence as a holy grail,” explained The New York Times), IBM elected to invent a new brand name: “cognitive computing.”
The negative connotations of AI again drove IBM to opt out of the AI branding campaign and instead emphasize the “partnership with humans” and “human augmentation” aspects of “cognitive computing.” This turned out to be a rare PR blunder for IBM: the 2010s were not like the 1950s in terms of the public’s embrace of computer technology, warts and all. Within a few years, “artificial intelligence,” or “AI,” eliminated all competing brands, including IBM’s “cognitive computing.”
The competing brands, soon to be eclipsed by AI, included “deep learning.” This was the brand name given in 2007, in a brilliant marketing move, by Geoffrey Hinton (today popularly known as “the Godfather of AI”) to the method for developing AI he had been pursuing since receiving his PhD (in artificial intelligence) in 1978. Based on the statistical machine learning approach called “artificial neural networks,” the method has been known as “connectionism” since 1958, when its first prototype was described by The New Yorker as “the first serious rival to the human brain ever devised,” and The New York Times reported that its sponsor, the U.S. Navy, expected it “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
Connectionism did not deliver on these lofty expectations, and Symbolic AI, the rival approach to developing artificial intelligence, dominated the field for the next forty years or so. Its success, mostly in terms of securing research grants (and, in the case of “expert systems,” also attracting VC investments), came despite the failure of its proponents to deliver on their own grandiose promises.
The greatest promoter of the AI branding campaign (in its Symbolic AI version), Marvin Minsky, received the Turing Award in 1969. The following year, Minsky was quoted in Life Magazine, saying with “certitude” that “In from three to eight years we will have a machine with the general intelligence of an average human being.” Anticipating today’s talk of “superintelligence,” Minsky also predicted that “once the computers got control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”
When their papers were refused at the 2007 NIPS conference, established in the 1980s for researchers investigating biological and artificial neural networks, a small band of connectionists organized an offshoot conference, transporting participants to a different location to discuss their approach, one that proponents of the then-dominant machine learning and AI methods still considered archaic and alchemistic.
At that offshoot meeting, Geoffrey Hinton rebranded their approach as “deep learning,” a term that had been introduced earlier to the machine learning and artificial neural network research communities but that now designated specifically the type of algorithms and machine learning process that Hinton and his students had pursued for years while working on the AI periphery.
Five years later, the periphery became the AI mainstream when the deep learning method was combined with GPUs to produce an efficient “intelligent machine,” succeeding first in correctly identifying images. Additional triumphs followed, especially in natural language processing, again giving rise to promises of machines more intelligent than humans in the very near future, and to the specter of these machines, at best, keeping humans as pets.
The speed with which “deep learning” has evolved to automate additional cognitive tasks has been surpassed by the speed with which this brand has been replaced by the seventy-year-old “artificial intelligence” brand. Since about 2016, the public discussion of its new exploits—and the inflated expectations, the fears, the calls for regulation—has centered on “AI,” not “deep learning,” “machine learning,” or any other brand name.
“Artificial intelligence” conveys better than competing brand names the conviction that modern humans have solved the mysteries of creation and are so smart they can design machines that are even smarter than they are. The powerful “we are as gods” sentiment has ensured that “artificial intelligence” has remained the most successful branding campaign of all time.