In two prior blog posts, we talked about the first two sections of ‘The Mind’s Mirror’ by Daniela Rus (the director of our MIT CSAIL lab) and Gregory Mone. The third section of the book is largely about how to deal with the impact of AI, and how to practice good stewardship of the tools that we have.
Early on, Rus calls for robust frameworks for ethical design and employment, but also for solid processes for monitoring, certification and regulation of AI, to make sure that the technology, in her words, “benefits all humanity, and our planet.”
Importantly, she lays out some of the newest innovations in AI that might make their way into open source communities – responses to real-time events, customized video, and other aspects of deepfakery that could confuse us.
“(These are) further blurring the line between reality and fiction,” she writes, estimating that an AI entity can make 40,000 toxic agents in six hours. (Think about people spamming orders, emails and other things into the cybersphere, and what effects that would have: on commerce, on digital trust…)
One thing Rus says we’re going to need is a sort of ‘language defense’ – a new way of understanding whether a message is credible and legitimate, or not.
Where is that going to come from?
“I hope that more people will deepen their understanding of AI in a way that is relevant to their life and work,” Rus writes. “Our world leaders and lawmakers would be well served by having a broad understanding of how AI works if they are going to oversee the economic, societal and political impacts of the technology, along with concerns around bias and data security.”
In addition, she points out, models also hallucinate and suffer from overfitting. All of this will create its own challenges for humans to deal with. To do this well, she posits, we’ll need a deeply collaborative approach.
“Public policy regarding AI should be shaped by a broad and inclusive conversation that goes beyond the perspective of the companies developing the large foundational models,” she writes. “We need to include academic researchers, ethicists, representatives from various industries, policy makers, community advocates, economists, sociologists, and experts on diversity, education, and many other experts and stakeholders.”
This point starts to address the deeper underlying problem: that the free market doesn’t provide incentives to get these additional people involved in these big decisions. If we leave AI up to the free market, we could fail.
In addressing other technical challenges, as well as societal and economic ones, Rus talks about data quality and the need to address ‘critical corner cases,’ the rare, black-swan outcomes that classical systems built on supervised learning models handle poorly.
She notes that the rise of self-supervised and unsupervised learning systems helps to solve some of these issues. But there are also bias and copyright issues to contend with.
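To make that distinction concrete, here is a tiny, hypothetical sketch (not from the book, and with made-up data) of why labels are the bottleneck for corner cases: a supervised model can only learn from whatever examples someone has labeled, while a self-supervised objective manufactures its own targets from every raw, unlabeled sequence.

```python
# Illustrative sketch: supervised vs. self-supervised training signal.
import random

# Tiny "dataset": sequences of sensor readings; labels exist for only a few.
raw_sequences = [[random.random() for _ in range(10)] for _ in range(1000)]
labeled = raw_sequences[:50]                      # scarce labels -> corner cases under-represented
labels = [int(sum(seq) > 5) for seq in labeled]   # stand-in human annotation

def supervised_pairs():
    # Supervised learning: (input, human-provided label) pairs only.
    return list(zip(labeled, labels))

def self_supervised_pairs():
    # Self-supervised learning: predict the held-out last reading from the rest,
    # so every raw sequence becomes a training example, no labels required.
    return [(seq[:-1], seq[-1]) for seq in raw_sequences]

print(len(supervised_pairs()), "supervised examples vs",
      len(self_supervised_pairs()), "self-supervised examples")
```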
To these points, Rus describes VISTA, a tool developed at MIT that generates synthetic data and expands what the AI engines can do. She uses the example of a self-driving program avoiding trees by learning to understand what trees are and where they may be located.
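The book doesn’t walk through VISTA’s interface, so the following is only an illustrative sketch of the general idea: padding real driving logs with synthetic corner cases (trees in unusual positions) so the training set covers situations the real data rarely contains. The data layout and numbers here are invented.

```python
# Illustrative sketch: augmenting real logs with synthetic corner cases.
import random

def real_samples(n=200):
    # Real logs: tree positions cluster near the roadside (x around 3-4 meters).
    return [{"tree_x": random.gauss(3.5, 0.3), "steer_away": True} for _ in range(n)]

def synthetic_samples(n=200):
    # Synthetic generator: place trees anywhere in a wide range, including
    # rare spots the real logs almost never cover.
    return [{"tree_x": random.uniform(0.0, 8.0), "steer_away": True} for _ in range(n)]

training_set = real_samples() + synthetic_samples()
coverage = sorted(s["tree_x"] for s in training_set)
print(f"{len(training_set)} samples covering tree_x from "
      f"{coverage[0]:.1f}m to {coverage[-1]:.1f}m")
```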
In addition, Rus talks about the complexity of big monolithic models, and issues like cost and carbon footprint. She estimates that training a model of significant size produces emissions equal to the lifetime emissions of several American cars, and requires 700,000 liters of water for cooling. She also talks about how a new technology called liquid networks can ameliorate some of these costs. We’ve talked about this technology in depth on the blog. (Disclaimer: I am affiliated with the work of the liquid AI group.)
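For readers who want a feel for what a liquid network actually computes, here is a conceptual toy sketch of a single liquid time-constant (LTC) cell, integrated with a simple Euler step in the spirit of the published formulation. The sizes, weights, input signal and step count are arbitrary assumptions, and this is not the authors’ implementation.

```python
# Conceptual sketch of one liquid time-constant (LTC) cell, Euler-integrated.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8

W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrent weights
tau = np.ones(n_hidden)   # base time constants
A = np.ones(n_hidden)     # bias/target state in the LTC formulation

def ltc_step(x, u, dt=0.1):
    # f() couples hidden state and input; its output also modulates the
    # effective time constant, which is what makes the dynamics "liquid".
    f = np.tanh(W_rec @ x + W_in @ u)
    dxdt = -x / tau + f * (A - x)
    return x + dt * dxdt

x = np.zeros(n_hidden)
for t in range(20):                       # drive the cell with a toy sine input
    u = np.sin(t * 0.3) * np.ones(n_in)
    x = ltc_step(x, u)
print("hidden state after 20 steps:", np.round(x, 3))
```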
Other technical problems with AI systems involve security and reliability. Rus talks about adding network layers to deal with “nefarious inputs,” and cites the use of a technology called BarrierNet.
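BarrierNet itself is built on differentiable control barrier functions solved as a final network layer; as a rough, simplified stand-in for that idea of a last safety layer filtering bad upstream outputs, here is a toy speed-capping filter. The scenario, thresholds and numbers are invented for illustration and are not how BarrierNet is implemented.

```python
# Simplified stand-in for a final safety layer that filters unsafe actions.
def safety_layer(proposed_speed, distance_to_obstacle, d_min=2.0, dt=0.1):
    # Maximum speed that still keeps distance >= d_min after one time step,
    # playing the role a barrier condition plays in the real thing.
    max_safe_speed = max(0.0, (distance_to_obstacle - d_min) / dt)
    return min(proposed_speed, max_safe_speed)

# A nefarious (or simply bad) upstream output asks for 50 m/s at 2.4 m range;
# the safety layer clips it to a value that respects the barrier.
print(safety_layer(proposed_speed=50.0, distance_to_obstacle=2.4))  # -> 4.0
```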
This process, she writes, is “akin to a high-stakes poker game,” and the lack of academic input and public-sector engagement is troubling.
Later, Rus cites Amara’s law, which holds that the impact of a technology tends to be overestimated in the short term and underestimated in the long term.
“The long-term impact of automation on job loss is extremely difficult to predict, but we do know that AI does not automate jobs,” she writes, calling for fair deployment. “AI and machine learning automate tasks.”
Rus also describes the trade-offs involved in this type of collaboration:
“While AI tools plant certain trees, we will need smart, educated people to think about the forest,” she writes. “These experts might need to adjust those plantings, too, since the tools make mistakes.”
Explaining how automation workloads differ, she also talks about the three types of automation cost (a rough sketch of how they combine follows the list):
Fixed costs
Performance-dependent costs
Scale-dependent costs
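As a back-of-the-envelope illustration of how those three categories add up – the formula and all of the numbers below are assumptions for the sake of the example, not figures from the book – a simple total-cost estimate might look like this:

```python
# Illustrative automation cost model: fixed + performance-dependent + scale-dependent.
def automation_cost(fixed, cost_per_error, expected_errors, cost_per_unit, volume):
    performance_dependent = cost_per_error * expected_errors  # grows with mistakes
    scale_dependent = cost_per_unit * volume                  # grows with usage
    return fixed + performance_dependent + scale_dependent

# Example: $50k to set up, $200 per mistake at ~100 expected mistakes a year,
# and $0.02 of compute per task across 1M tasks a year.
print(automation_cost(50_000, 200, 100, 0.02, 1_000_000))  # -> 90000.0
```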
As for the risks of embracing these new technologies, in a section called ‘What Now?’ Rus talks about different opinions on superintelligent AI.
“We cannot say with absolute confidence whether or not the so-called fast takeoff in which superintelligent AI emerges unexpectedly is a real threat, or whether the risk will arise through a subtle or slower handing off of control to AI agents across industries,” she writes. “As more AI tools are used within more companies, we may stumble our way into smaller disasters, instead of succumbing to some kind of large scale AI takeover. … The dangers of artificial intelligence in the long-term are of a different nature. The potential advent of an AI that surpasses human intelligence, across-the-board, raises concerns about ensuring such a system aligns with human values.”
This section of the book asks a number of questions, including what tools and guardrails we can use, and how we can hold companies accountable.
Also:
How can we strengthen the evaluation of models for different risks?
How should we optimize red teams for AI solutions?
Do we need an international regulatory body for AI?
Should we open source more models?
How do we restore trust in information?
What will happen if AI systems begin to self-improve?
How can we encourage broad innovation?
In the last section of the book, Rus ties everything back into human thought and its uniqueness.
“The perils and promise of AI are both very real indeed,” she writes. “Yes, AI systems can paint pictures by following meticulously designed human programs. They can offer books, leveraging language models, enriched by millennia of human thought. But can they encapsulate the raw emotion of Van Gogh’s brushstrokes, the profound depth of Sophoclean drama, or the philosophical inquiries of Socrates? Unlikely – they can merely produce facsimiles: they can’t generate something powerful, emotional or innovative, because machines operate on logic, not the unique mix of passion, knowledge and experience that spark and shape humanity’s great works.”
The book’s ending expresses these human values: as humans, we feel; we are versatile; we are creative; we are aware.
“The list could stretch on,” Rus writes. “As humans, we are far superior to AI systems in so many ways … Yet it is through the mirror that we can see the potential for transcending our limitations, by amplifying our own mental powers with the fantastic and unusual capabilities of the systems. In our journey with AI, as we mold, refine and teach these models, we are not merely advancing technology: we are understanding the contours of our own intellect, expanding the frontiers of knowledge, and engaging in a deeper dialogue with ourselves about what it means to be human in this vast and unexplored cosmos.”
Throughout the book, we see this interplay of humans and AI delineated in striking detail, as the authors ask the questions that we need to answer as we move into the AI age.