This publication can be found online at http://pubpub.ito.com/pub/extended-intelligence.
This is based on an ongoing conversation at the MIT Media Lab and compiles thoughts from discussions with its faculty, students, and researchers. Mostly written by Joichi Ito with help from Kevin Slavin and the rest of the Media Lab.
Extended Intelligence
We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon. As we develop increasingly powerful tools to process information and network that processing, aren’t we just adding new pieces to the EI that every actor in the network is a part of?
Joichi Ito
[1] [2] [3]

Artificial Intelligence has yet again become one of the world’s biggest ideas and areas of investment, spawning new research labs, conferences, and raging debates everywhere from the mainstream media to academia.
We see debates about humans vs. machines, questions about when machines will become more intelligent than human beings, and speculation over whether they’ll keep us around as pets or simply conclude we were actually a bad idea and eliminate us.
There are, of course, alternatives to this vision, and they date back to the earliest ideas of how computers and humans interact.
In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine. Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers. For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.
- John Markoff
"A Fight to Win the Future: Computers vs. Humans". The New York Times. (2011): [http://www.nytimes.com/2011/02/15/science/15essay.html]
But beyond distinguishing between creating an artificial intelligence (AI), or augmenting human intelligence (IA), perhaps the first and fundamental question is where does intelligence lie? Hasn’t it always resided beyond any single mind, extended by machines into a network of many minds and machines, all of them interacting as a kind of networked intelligence [4] that transcends and merges humans and machines?
If intelligence is networked to begin with, wouldn’t this thing we are calling “AI” just augment this networked intelligence, in a very natural way? While the notion of collective intelligence and the extended mind are not new ideas, is there a lens to look at modern AI in terms of its contribution to the collective intelligence?
We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon. As we develop increasingly powerful tools to process information and network that processing, aren’t we just adding new pieces to the EI that every actor in the network is a part of?
Marvin Minsky conceived AI not just as a way to build better machines, but as a way to use machines to understand the mind itself. Does the EI lens bring us closer to understanding what makes us human, by acknowledging that part of what makes us human is that our intelligence lies so far outside any one human skull?
At the individual level, in the future we may look less like terminators and more like cyborgs; less like isolated individuals, and more like a vast network of humans and machines creating an ever-more-powerful EI: every element at every scale connected through an increasingly distributed variety of interfaces, each actor doing what it does best – bits, atoms, cells and circuits – each one fungible in many ways, yet tightly integrated and part of a complex whole.
While we hope that this Extended Intelligence will be wise, ethical and effective, is it possible that this collective intelligence could go horribly wrong, and trigger a Borg Collective hypersocialist hive mind? [5]
Such a dystopia is averted neither by building better machine learning nor by declaring a moratorium on such research. Instead, the Media Lab works at these intersections of humans and machines, whether we’re talking about neuronal interfaces between our brains and our limbs, or society-in-the-loop machine learning.
While the majority of AI funding and research goes to accelerating statistical machine learning, trying to make machines and robots “smarter,” we are interested in the augmentation and machine assistance of the complex ecosystem that emerges from the network of minds and our society.
Advanced Chess is the practice of human/computer teams playing in real-time competitive tournaments. Such teams dominate the strongest human players as well as the best chess computers. This effect is amplified when the humans themselves play in small groups, together with networked computers.
"Chess Festival in Benidorm – where a new genre is born". Chessbase.com. (2007): [http://en.chessbase.com/post/che-festival-in-benidorm-where-a-new-genre-is-born]
The Media Lab has the opportunity to work on the interface and communication between humans and machines – the artificial and the natural – to help design a new fitness landscape [6] for EI and this human-machine co-evolution.
EI research currently includes:
  • Connecting electronics to human neurons to augment the brain and our nervous system (Synthetic Neurobiology and Biomechatronics)
  • Using machine learning to understand how our brains understand music, and to leverage that knowledge to enhance individual expression and establish new models of massive collaboration (Opera of the Future)
  • If the best human or computer chess players can be dominated by human-computer teams including amateurs working with laptops, how can we begin to understand the interface and interaction for those teams? How can we get machines to raise analysis for human evaluation, rather than supplanting it? (Playful Systems)
  • Machine learning is mostly conducted by an engineer tweaking data and learning algorithms, then testing the result in the real world. We are looking into human-in-the-loop machine learning [7][8], putting professional practitioners in the training loop. This augments human decision-making and makes the ML training more effective, with greater context (a minimal sketch of this pattern follows this list).
  • Building networked intelligence, studying how networks think and how they are smarter than individuals. (Human Dynamics Group)
  • Developing human-machine interfaces through sociable robots and learning technologies for children. (Personal Robots Group)
  • Developing “society-in-the-loop” machine learning: pulling ethics and social norms from communities to train machines, then testing the machines with society in a kind of ethical Turing test. (Scalable Cooperation)
  • Developing wearable interfaces that can influence human behavior through consciously perceivable and subliminal I/O signals. (Fluid Interfaces)
  • Extending human perception and intent through pervasively networked sensors and actuators, using distributed intelligence to extend the concept of “presence.” (Responsive Environments)
  • Incorporating human-centered emotional intelligence into design tools so that the “conversation” the designer has with the tool is more like a conversation with another designer than interactions around geometric primitives. (e.g., “Can we make this more comforting?”) (Object-Based Media)
  • Developing a personal autonomous vehicle (PEV) that can understand, predict, and respond to the actions of pedestrians; communicate its intentions to humans in a natural and non-threatening way; and augment the senses of the rider to help increase safety. (Changing Places)
  • Providing emotional intelligence in human-computer systems, especially to support social-emotional states such as motivation, positive affect, interest, and engagement. For example, a wearable system designed to help a person forecast mental health (mood) or physical health changes will need to sustain a long-term, non-annoying interaction with the person in order to get the months and years of data needed for successful prediction. [9] (Affective Computing)
  • Using artificial intelligence and crowdsourcing to understand and improve the health and well-being of individuals. (Camera Culture Group)
  • Collaborating with the Camera Culture Group on artificial intelligence and crowdsourcing to understand and improve our cities. (Macro Connections Group)
  • Developing data visualization engines such as the OEC, Dataviva, Pantheon, and Immersion, which served nearly 5 million people last year. These tools augment networked intelligence by helping people access the data that large groups of individuals generate, and that are needed to have a panoptic view of large social and economic systems. (Macro Connections Group)
  • Collaborating with Canan Dagdeviren to explore novel materials, mechanics, device designs, and fabrication strategies that bridge the boundaries between brain and electronics. This includes developing devices that can be twisted, folded, stretched/flexed, wrapped onto curvilinear brain tissue, and implanted without damage or significant alteration in the device’s performance, working toward a vision of brain probes that can communicate with external and internal electronic components.
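To ground the human-in-the-loop item above, here is a minimal sketch of one common pattern, uncertainty sampling, in which the model routes the examples it is least sure about to a practitioner for labeling. It assumes scikit-learn; the ask_practitioner function is a hypothetical stand-in for the professional in the loop, and this is an illustrative pattern rather than the specific methods of [7] or [8].

```python
# Minimal sketch of human-in-the-loop training via uncertainty sampling.
# Assumes scikit-learn; ask_practitioner() is a hypothetical stand-in
# for a professional practitioner labeling examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ask_practitioner(example):
    """Show one unlabeled example to the human expert and return a label."""
    raise NotImplementedError  # in practice, a labeling UI for the expert

def train_with_human(X_labeled, y_labeled, X_pool, rounds=10):
    model = LogisticRegression()
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        # Route the pooled example the model is least certain about
        # to the practitioner, so human effort goes where it helps most.
        probs = model.predict_proba(X_pool)
        idx = int(np.argmin(probs.max(axis=1)))
        label = ask_practitioner(X_pool[idx])
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.append(y_labeled, label)
        X_pool = np.delete(X_pool, idx, axis=0)
    return model
```

In the “society-in-the-loop” variant above, the single practitioner would be replaced by aggregated judgments from a whole community.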
The wildly heterogeneous nature of these different projects is characteristic of the Media Lab. But more than that, it is the embodiment of the very premise of EI: that intelligence, ideas, analysis and action are not formed in any one individual collection of neurons or code. All of these projects are exploring this central idea with different lenses, experiences and capabilities, and in our research as well as in our values, we believe this is how intelligence comes to life.

References

[1] Mitch notes that one of the very early mission statements resonates with this idea of humans and machines: “Enabling technologies for expression and understanding by people and machines.”
[2] Clark, Andy and David Chalmers. “The Extended Mind”. Analysis. Vol. 58, Num. 1 (1998): 7-19. [http://www.jstor.org/stable/3328150?seq=1#page_scan_tab_contents] This piece was inspired by the paper “The Extended Mind”.
[3] “The Open Mind Common Sense Project”. KurzweilAI.net. (2002): [http://web.media.mit.edu/~push/Kurzweil.html]
[4] “Networked Intelligence”. PubPub. (2016): [http://pubpub.media.mit.edu/pub/networked-intelligence]
[5] “Borg (Star Trek)”. Wikipedia. [https://en.wikipedia.org/wiki/Borg_(Star_Trek)]
[6] In evolutionary biology, fitness landscapes or adaptive landscapes (types of evolutionary landscapes) are used to visualize the relationship between genotypes and reproductive success. [https://en.wikipedia.org/wiki/Fitness_landscape]
[7] “Mixed-Initiative Real-Time Topic Modeling & Visualization for Crisis Counseling”. ACM. (2015): 417-426. [http://doi.acm.org/10.1145/2678025.2701395]
[8] “Interactive learning with a “society of models””. Pattern Recognition. Vol. 30, Num. 4 (1997): 565-581. [http://www.sciencedirect.com/science/article/pii/S0031320396001136]
[9] “Affective Computing’s Publications”. [http://affect.media.mit.edu/publications.php]
All discussions are licensed under a Creative Commons Attribution 4.0 International License.

Comments
Xiao Xiao (2/11/2016):
I’m really intrigued by the intersection between Learning (à la LLK & Papert) and AI. Minskian AI was at its core about understanding cognition (building machines to model it). Learning is also about understanding cognition—building our own minds.
How can both sides benefit by connecting to other areas of inquiry about cognition? (e.g. embodiment, mindfulness, the arts)
Joscha Bach (2/11/2016):
Yes! Minsky conceived AI not just as a way to build better machines. He wanted to use machines to understand the mind itself. What makes us human? Who and what are we?
The “extended mind” is an excellent notion, because it acknowledges our embodiment as social, biological and cultural beings, and it emphasizes the interactions between humanity, design, technology and ethics. Of course we are also more than the interface between our tools, society and our therapist. Understanding the mind itself is going to be a crucial part of understanding how we interact with our environments and with each other. I wonder if this should be reflected in this effort, too. For instance, Shoshanna’s work in Playful Systems uses games to study motivational traits. The Media Lab also explores art as a way to learn about experience and perception, and many people seem to be interested in cognition itself.
Joichi Ito (2/26/2016):
Tried to integrate this.
Matt Carney (2/11/2016):
The Center for Bits and Atoms is developing declarative design tools and robotic assemblers of micro to macro discrete continuum materials. Computation and robotic assembly will enable discovery of otherwise unreachable design spaces.
Nick DePalma (6/14/2016):
I was under the impression that, unlike what previous comments here and the written document suggest, Extended Intelligence was primarily and objectively about taking lessons learned from AI, interaction design, and cognitive science to improve ourselves through some type of augmentation. It’s not really about understanding cognition (much as AI was agnostic to actual cognitive mechanisms), but about using what we now know as tools for self-improvement. I think the central claim is not about a “distributed” or “cognitive” intelligence à la Rumelhart, but a positivist position that the Media Lab is taking?
Karen Schrier (6/13/2016):
We could consider Knowledge Games a type of “extended intelligence” – situations where the play of a game helps us extend our ability to solve complex problems and produce new knowledge. See more at: jhupbooks.press.jhu.edu/content/knowledge-games
Ben Tolkin (4/11/2016):
Always interesting to see the way academic philosophy and engineering/tech interact, or, more often, don’t. There’s been a ton of work done in philosophy on extended cognition since Clark and Chalmers in '98; I’m not sure who the “we” is in “we propose a kind of extended intelligence”… Of course, it’s more often that philosophers haven’t caught up with technology, but as someone with a foot in both worlds it’s a little strange to see an article here that wouldn’t look out of place in a copy of Synthese from 2005.
The most interesting questions in my mind are around all the familiar terms that need to be redefined in light of an extended cognition hypothesis. Can the self or mind be divorced from intelligence? Is there room for a self at all, or will that dissolve as we communicate at increasingly higher bandwidth with our social networks and machines? Who is to blame when a cognitive network does something immoral? (do normal rules of morality even apply?) At a certain point, this train of thought leads to viewing the universe as a single, vast network, with any subdivision of interacting parts being arbitrary. Are there any boundaries we should draw on what makes us intelligent? (After all, we are constantly interacting with every body in the known universe.) And if not, is intelligence a useful quality to define?
samim winiger (3/10/2016):
Great discussion here, thank you. Thinking along similar lines, we just published an extensive piece on “CreativeAI”. It includes in-depth analysis, narrative and vision for the space between human, machine and creativity: https://medium.com/@ArtificialExperience/creativeai-9d4b2346faf3 Curious to hear your thoughts.
Peter van der Putten (2/28/2016):
Could be relevant to refer to Licklider’s 1960 paper on Man-Computer Symbiosis. Also, the concept of general AI seems to have made more of a comeback with the popularity of general-purpose methods such as deep learning. Some of the longer-term questions are how societies will adapt, and how our own concept of what makes us human will evolve as AI/IA/EI progresses – how will we see ourselves in, say, 50 years? And what does this mean for related fields such as artificial creativity / creativity augmentation?
Sebastien Dery (2/28/2016):
I very much enjoy how this article attempts to touch on the wider topics of AI. Always refreshing to hear the Media Lab step up and shake the box a little bit. That being said, I can’t help but find some dark humour in the way we speak about the “augmented brain” and “smarter computer” with little discussion of the metric by which we evaluate these fascinating topics and systems. More to the point, despite our many attempts at trying to mimic or extend the human mind, very little attention has been given to the many plagues it (and consequently we) suffer from; namely depression, cognitive biases, lack of compassion, selecting evidence to satisfy previous notions, etc. In other words, we are building data-driven decision-making tools under the assumption that the human minds around us are willing to accept the conclusions. Something akin to the difference between Knowledge and Wisdom. Extended Intelligence sounds great. Perhaps it would also be worth diving into Collective Wisdom (CW). Anyone interested or am I just rambling? 😃
Xiao Xiao (2/11/2016):
Regarding the phrase “collaborate seamlessly”:
Seamless collaboration is actually the core idea on which Hiroshi became faculty at the Media Lab 😃, but I don’t see TMG on this list 😦 Of course, that was about human/human collaboration through machines, a different relationship between man and machine but a relationship nonetheless.
A bigger question is what is the relationship between AI and HCI and how they can mutually inform each other. Both care about the intersection of human and machines, but (at least traditionally), they have differed in their core values.
Interestingly, one of the important pioneers in AI, Terry Winograd, who made SHRDLU, crossed over to HCI. Here’s an interesting short article he wrote 10 years ago on the distinctions between AI and HCI, which he attributes to a matter of core values. https://hci.stanford.edu/winograd/papers/ai-hci.pdf In a nutshell, it goes back to opposing philosophical values—Rationalist vs. Phenomenological.
Winograd actually cites a debate between Ben Shneiderman and Pattie (!) on direct manipulation vs. interface agents. And actually, I believe that Pattie did a lot of seminal work on agent-based interactions back in the day. Fluid Interfaces was called Ambient Intelligence (AI!) back when I was a UROP at the lab. From “intelligence” to “interface”… I’d be curious to hear from Pattie what prompted the shift in focus.
Anyway, I feel like it’s time for AI and HCI to sit at the same table, at least to exchange some ideas 😃
Tal Achituv (2/11/2016):
In that context there is of course also human/human collaboration that is mediated by an AI, which doesn’t resolve all of the conflicts, but at least allows us to work around them with some ease.
Computer voice interfaces are a good example: on their own they are HCI, but an ‘ideal’ (perfect, real-time) one could be coupled into a universal translator, which is then a seamless AI turned human/human interface.
Joe Paradiso (2/15/2016):
Thinking about this more, we may have gotten the dystopia to avoid wrong. Everybody thinks of AI gone bad as ‘Terminator’, or maybe ‘Colossus’ (have you seen that film? If not, I recommend it highly - already from 1969 - link below), or maybe the manipulative ones like HAL or The Matrix. http://www.amazon.com/Colossus-Forbin-Project-Eric-Braeden/dp/B0003JAOO0/ref=sr_1_1?s=movies-tv&ie=UTF8&qid=1455552538&sr=1-1&keywords=collosus
But the way things go bad in the human-in-the-loop scenario runs more along the lines of the Borg Collective from Star Trek (I’m sure you all know that one) or Landru from the original series (http://memory-alpha.wikia.com/wiki/The_Body_of_Landru) - people being essentially teleoperated into an ultimate totalitarianism. The Borg were bad because of their extreme socialism and the desire to endlessly expand. Landru meant well, but took his mission too seriously and was a narrow ‘autistic’ AI. Hence this ignites a ton of speculation and debate - what is the role of the individual in such a soup of human and machine? What is lost and what is gained - can we somehow protect the role of the individual when we’re all so connected, or will my ego go the way of the dodo?
This may be all wrongheaded - e.g., if we’re destined to become agents running somewhere, the physical manifestation may not matter as much as getting ‘backed up’ in enough places - but it’s the natural argument where such ideas hit nightmares.
Joichi Ito (2/26/2016):
Tried to add this to the third paragraph.
Dazza Greenwood (2/11/2016):
What if we applied some concepts of extended intelligence to the new legal hackers and digital law movement, to express statutes and regulations in computational ways? I predict even a little success would be unique, high-impact, and (at least apparently) magical. Some thoughts: https://fold.cm/read/dazzagreenwood/lawgometric-code-vjGxW5dv
Jeremy Rubin (2/11/2016):
Regarding the word “eliminate”:
If you aren’t familiar, my favorite reference in this space is Roko’s basilisk, similar to Pascal’s Wager. http://rationalwiki.org/wiki/Roko’s_basilisk
Tal Achituv (2/11/2016):
Regarding the phrase “greater context”:
The question of context here is more complicated than that, right? I mean, this is about doing it with greater context, but not merely ‘with greater context’; the representation of context in the emergent properties of the human-machine system seems to me a key component.
Tal Achituv (2/11/2016):
With respect to the method of coactive learning, as presented in the work referenced, it seems that it merely introduces a mechanism for extracting context from the user’s implicit feedback; but the process is heavily iterative, and relies on some assumed fixed, or at least locally-fixed, notion of context, which seems to me like the wrong direction.