
Extended Intelligence

We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon.

Published on Feb 11, 2016

This is based on an ongoing conversation at the MIT Media Lab and is a compilation of thoughts from conversations with its faculty, students, and researchers. Mostly written by Joichi Ito with help from Kevin Slavin and the rest of the Media Lab.



[1] [2] [3]


Artificial Intelligence has yet again become one of the world’s biggest ideas and areas of investment, with new research labs, conferences, and raging debates from the mainstream media to academia.

We see debates about humans vs. machines, questions about when machines will become more intelligent than human beings, and speculation over whether they’ll keep us around as pets or just conclude we were actually a bad idea and eliminate us.

There are, of course, alternatives to this vision, and they date back to the earliest ideas of how computers and humans interact.

In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.

Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.

For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.

(John Markoff)

But beyond distinguishing between creating an artificial intelligence (AI) or augmenting human intelligence (IA), perhaps the first and fundamental question is: where does intelligence lie? Hasn’t it always resided beyond any single mind, extended by machines into a network of many minds and machines, all of them interacting as a kind of networked intelligence [4] that transcends and merges humans and machines?

If intelligence is networked to begin with, wouldn’t this thing we are calling “AI” just augment this networked intelligence, in a very natural way? While the notion of collective intelligence and the extended mind are not new ideas, is there a lens to look at modern AI in terms of its contribution to the collective intelligence?

We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon. As we develop increasingly powerful tools to process information and network that processing, aren't we just adding new pieces to the EI that every actor in the network is a part of?

Marvin Minsky conceived AI not just as a way to build better machines, but as a way to use machines to understand the mind itself. In this construction of Extended Intelligence, does the EI lens bring us closer to understanding what makes us human, by acknowledging that part of what makes us human is that our intelligence lies so far outside any one human skull?

At the individual level, in the future we may look less like terminators and more like cyborgs; less like isolated individuals, and more like a vast network of humans and machines creating an ever-more-powerful EI. Every element at every scale connected through an increasingly distributed variety of interfaces. Each actor doing what it does best -- bits, atoms, cells and circuits -- each one fungible in many ways, but tightly integrated and part of a complex whole.

While we hope that this Extended Intelligence will be wise, ethical and effective, is it possible that this collective intelligence could go horribly wrong, and trigger a Borg Collective hypersocialist hive mind? [5]

Such a dystopia is averted neither by building better machine learning nor by declaring a moratorium on such research. Instead, the Media Lab works at these intersections of humans and machines, whether we’re talking about neuronal interfaces between our brains and our limbs, or society-in-the-loop machine learning.

Where the majority of AI funding and research goes toward accelerating statistical machine learning, trying to make machines and robots “smarter,” we are interested in the augmentation and machine assistance of the complex ecosystem that emerges from the network of minds and our society.

Advanced Chess is the practice of human/computer teams playing in real-time competitive tournaments. Such teams dominate the strongest human players as well as the best chess computers. This effect is amplified when the humans themselves play in small groups, together with networked computers.
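
To make that division of labor concrete, below is a small, speculative sketch of the pattern this points at: the engine surfaces several candidate lines with evaluations, and the human stays responsible for the final judgment. It assumes the python-chess library and a locally installed Stockfish binary; the path, time limit, and three-line MultiPV setting are illustrative assumptions, not a description of any particular Media Lab or Advanced Chess system.

    # Speculative sketch: the engine raises candidate analysis for human
    # evaluation rather than playing the move itself. Assumes python-chess and
    # a local Stockfish binary (the path below is an assumption).
    import chess
    import chess.engine

    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

    # Ask for the three strongest lines (MultiPV) instead of a single "best move".
    infos = engine.analyse(board, chess.engine.Limit(time=1.0), multipv=3)
    for rank, info in enumerate(infos, start=1):
        line = board.variation_san(info["pv"])  # principal variation in SAN
        print(f"{rank}) eval {info['score'].white()}  line: {line}")

    # The human evaluates the candidates and makes the call.
    choice = int(input("Pick a line to play (1-3): "))
    board.push(infos[choice - 1]["pv"][0])
    engine.quit()

The same pattern generalizes beyond chess: the machine enumerates and scores options, and the human decides.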

The Media Lab has the opportunity to work on the interface and communication between humans and machines–the artificial and the natural–to help design a new fitness landscape [6] for EI and this co-evolution of humans and machines.

EI research currently includes:

  • Connecting electronics to human neurons to augment the brain and our nervous system (Synthetic Neurobiology and Biomechatronics)

  • Using machine learning to understand how our brains understand music, and to leverage that knowledge to enhance individual expression and establish new models of massive collaboration (Opera of the Future)

  • If the best human or computer chess players can be dominated by human-computer teams including amateurs working with laptops, how can we begin to understand the interface and interaction for those teams? How can we get machines to raise analysis for human evaluation, rather than supplanting it? (Playful Systems)

  • Machine learning is mostly conducted by an engineer tweaking data and learning algorithms, then testing the result in the real world. We are looking into human-in-the-loop machine learning [7] [8], putting professional practitioners in the training loop (see the sketch after this list). This augments human decision-making and makes the ML training more effective, with greater context.

  • Building networked intelligence, studying how networks think and how they are smarter than individuals. (Human Dynamics Group)

  • Developing human-machine interfaces through sociable robots and learning technologies for children. (Personal Robots Group)

  • Developing “society-in-the-loop,” pulling ethics and social norms from communities to train machines and testing the machines with society, in a kind of ethical Turing test. (Scalable Cooperation)

  • Developing wearable interfaces that can influence human behavior through consciously perceivable and subliminal I/O signals. (Fluid Interfaces)

  • Extending human perception and intent through pervasively networked sensors and actuators, using distributed intelligence to extend the concept of “presence.” (Responsive Environments)

  • Incorporating human-centered emotional intelligence into design tools so that the “conversation” the designer has with the tool is more like a conversation with another designer than interactions around geometric primitives. (e.g., “Can we make this more comforting?”) (Object-Based Media)

  • Developing a personal autonomous vehicle (PEV) that can understand, predict, and respond to the actions of pedestrians; communicate its intentions to humans in a natural and non-threatening way; and augment the senses of the rider to help increase safety. (Changing Places)

  • Providing emotional intelligence in human-computer systems, especially to support social-emotional states such as motivation, positive affect, interest, and engagement. For example, a wearable system designed to help a person forecast mental health (mood) or physical health changes will need to sustain a long-term non-annoying interaction with the person in order to get the months and years of data needed for successful prediction. [9] (Affective Computing)

  • The Camera Culture Group is using artificial intelligence and crowdsourcing to understand and improve the health and well-being of individuals.

  • The Macro Connections Group is collaborating with the Camera Culture Group on artificial intelligence and crowdsourcing for understanding and improving our cities.

  • Macro Connections has also developed Data Viz Engines such as the OEC, Dataviva, Pantheon, and Immersion, which served nearly 5 million people last year. These tools augment networked intelligence by helping people access the data that large groups of individuals generate, and that are needed to have a panoptic view of large social and economic systems.

  • Collaborating with Canan Dagdeviren to explore novel materials, mechanics, device designs, and fabrication strategies that bridge the boundaries between brain and electronics; developing devices that can be twisted, folded, stretched, flexed, wrapped onto curvilinear brain tissue, and implanted without damage or significant alteration in performance; and working toward a vision of brain probes that can communicate with external and internal electronic components.
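
As referenced in the human-in-the-loop item above, here is a minimal sketch of one common pattern for putting a person in the training loop: uncertainty-sampling active learning, where the model repeatedly asks a practitioner to label the examples it is least sure about. The synthetic dataset, logistic-regression model, and ask_practitioner helper are illustrative assumptions; this is a generic pattern, not the coactive-learning work cited in [7] and [8] and not the Lab's actual pipeline.

    # Minimal sketch of human-in-the-loop (active) learning: train on a small
    # labeled pool, then repeatedly ask a practitioner to label the unlabeled
    # examples the model is least certain about. The data, model, and
    # ask_practitioner() helper are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X, y_oracle = make_classification(n_samples=500, n_features=10, random_state=0)

    def ask_practitioner(i):
        # Stand-in for a human expert; in practice this is a review queue or UI.
        return y_oracle[i]

    labeled = {int(i): ask_practitioner(i) for i in rng.choice(len(X), 20, replace=False)}
    unlabeled = [i for i in range(len(X)) if i not in labeled]
    model = LogisticRegression(max_iter=1000)

    for round_num in range(10):
        idx = list(labeled)
        model.fit(X[idx], [labeled[i] for i in idx])
        # Uncertainty sampling: query the points closest to the decision boundary.
        margins = np.abs(model.predict_proba(X[unlabeled])[:, 1] - 0.5)
        for j in np.argsort(margins)[:5]:
            i = unlabeled[j]
            labeled[i] = ask_practitioner(i)  # practitioner supplies the label
        unlabeled = [i for i in unlabeled if i not in labeled]
        print(f"round {round_num}: {len(labeled)} labels, "
              f"accuracy on all data {model.score(X, y_oracle):.2f}")

In the framing above, the interesting part is less the sampling rule than the interface: what the practitioner sees, and how their context flows back into training.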

The wildly heterogeneous nature of these different projects is characteristic of the Media Lab. But more than that, it is the embodiment of the very premise of EI: that intelligence, ideas, analysis and action are not formed in any one individual collection of neurons or code. All of these projects are exploring this central idea with different lenses, experiences and capabilities, and in our research as well as in our values, we believe this is how intelligence comes to life.

Citations:

Comments

Nick DePalma:

I was under the impression that, unlike previous comments here and the written document, Extended Intelligence was primarily and objectively about taking lessons learned from AI, interaction design, and cognitive science to /improve ourselves/ through some type of augmentation. It's not really about understanding cognition (much like AI was agnostic to actual cognitive mechanisms), but about using what we now know as tools to self-improve. I think the central claim is not about a "distributed" or "cognitive" intelligence à la Rumelhart but about a positivist position that the Media Lab is taking?

Karen Schrier:

We could consider Knowledge Games a type of "extended intelligence" -- in situations where the play of a game helps us extend our ability to solve complex problems and produce new knowledge. See more at: jhupbooks.press.jhu.edu/content/knowledge-games

Ben Tolkin:

Always interesting to see the way academic philosophy and engineering/tech interact, or, more often, don't. There's been a ton of work done in philosophy on extended cognition since Clark and Chalmers in '98; I'm not sure who the "we" is in "we propose a kind of extended intelligence"... Of course, it's more often that philosophers haven't caught up with technology, but as someone with a foot in both worlds it's a little strange to see an article here that wouldn't look out of place in a copy of Synthese from 2005.

The most interesting questions in my mind are around all the familiar terms that need to be redefined in light of an extended cognition hypothesis. Can the self or mind be divorced from intelligence? Is there room for a self at all, or will that dissolve as we communicate at increasingly higher bandwidth with our social networks and machines? Who is to blame when a cognitive network does something immoral? (do normal rules of morality even apply?) At a certain point, this train of thought leads to viewing the universe as a single, vast network, with any subdivision of interacting parts being arbitrary. Are there any boundaries we should draw on what makes us intelligent? (After all, we are constantly interacting with every body in the known universe.) And if not, is intelligence a useful quality to define?

samim winiger:

Great discussion here, thank you. Thinking along similar lines, we just published an extensive piece on "CreativeAI". It includes in-depth analysis, narrative and vision for the space between human, machine and creativity: https://medium.com/@ArtificialExperience/creativeai-9d4b2346faf3 Curious to hear your thoughts.

Peter van der Putten:

Could be relevant to refer to Licklider's 1960 paper on Man-Computer Symbiosis. Also, the concept of general AI seems to have made more of a comeback with the popularity of general-purpose methods such as deep learning. Some of the longer-term questions are how societies will adapt and how our own concept of what makes us human evolves as AI/IA/EI progresses - how will we see ourselves in, say, 50 years? And what does this mean for related fields such as artificial creativity / creativity augmentation?

Sebastien Dery:

I very much enjoy how this article attempts to touch on the wider topics of AI. Always refreshing to hear the Media Lab step up and shake the box a little bit. That being said, I can't help but find some dark humour in the way we speak about "augmented brains" and "smarter computers" with little discussion of the metrics by which we evaluate those fascinating topics and systems. More to the point, despite our many attempts at trying to mimic or extend the human mind, very little attention has been given to the many plagues it (and consequently we) suffer from; namely depression, cognitive biases, lack of compassion, selection of evidence to satisfy previous notions, etc. In other words, we are building data-driven decision-making tools under the assumption that the human minds around us are willing to accept the conclusions. Something akin to the difference between Knowledge and Wisdom. Extended Intelligence sounds great. Perhaps it would also be worth diving into Collective Wisdom (CW). Anyone interested, or am I just rambling?

Joe Paradiso:

Thinking about this more, we may have gotten the dystopia to avoid wrong. Everybody thinks of AI gone bad as 'Terminator' or maybe even 'Colossus' (have you seen that film? If not, I recommend it highly - already from 1969 - link below), or maybe the manipulative ones like HAL or The Matrix. http://www.amazon.com/Colossus-Forbin-Project-Eric-Braeden/dp/B0003JAOO0/ref=sr_1_1?s=movies-tv&ie=UTF8&qid=1455552538&sr=1-1&keywords=collosus

But the way things go bad in the human-in-the-loop scenario runs more along the lines of the Borg Collective from Star Trek (I’m sure you all know that one) or Landru from the original series (http://memory-alpha.wikia.com/wiki/The_Body_of_Landru) - people being essentially teleoperated into an ultimate totalitarianism. The Borg were bad because of their extreme socialism and the desire to endlessly expand. Landru meant well, but took his mission too seriously and was a narrow ‘autistic’ AI. Hence this ignites a ton of speculation and debate - what is the role of the individual in such a soup of human and machine? What is lost and what is gained - can we somehow protect the role of the individual when we’re all so connected, or will my ego go the way of the dodo?

This may be all wrongheaded - e.g., if we’re destined to become agents running somewhere, the physical manifestation may not matter as much as getting 'backed up' in enough places - but it's the natural argument where such ideas hit nightmares.

Joichi Ito:

Tried to add this to the third paragraph.

Dazza Greenwood:

What if we applied some concepts of extended intelligence to the new legal hackers and digital law movement to express statutes and regulations in computational ways? I predict even a little success would be unique, of high impact and (at least apparently) magical. Some thoughts: https://fold.cm/read/dazzagreenwood/lawgometric-code-vjGxW5dv

Jeremy Rubin:

If you aren't familiar, my favorite reference in this space is Roko's Basilisk, similar to Pascal's Wager. http://rationalwiki.org/wiki/Roko's_basilisk

Joichi Ito:

Can you share some links?

Natasha Jaques:

Specifically, much of our work has focused on understanding affect to better facilitate human/machine communication.

Matt Carney:

The Center for Bits and Atoms is developing declarative design tools and robotic assemblers of micro to macro discrete continuum materials. Computation and robotic assembly will enable discovery of otherwise unreachable design spaces.

Xiao Xiao:

I'm really intrigued by the intersection between Learning (à la LLK & Papert) and AI. Minskian AI was at its core about understanding cognition (building machines to model it). Learning is also about understanding cognition—building our own minds.

How can both sides benefit by connecting to other areas of inquiries about cognition? (e.g. embodiment, mindfulness, the arts)

Joscha Bach:

Yes! Minsky conceived AI not just as a way to build better machines. He wanted to use machines to understand the mind itself. What makes us human? Who and what are we?

The "extended mind" is an excellent notion, because it acknowledges our embodiment as social, biological and cultural beings, and it emphasizes the interactions between humanity, design, technology and ethics. Of course we are also more than the interface between our tools, society and our therapist. Understanding the mind itself is going to be a crucial part of understanding how we interact with our environments and with each other. I wonder if this should be reflected in this effort, too. For instance, Shoshanna's work in Playful Systems uses games to study motivational traits. The Media Lab also explores art as a way to learn about experience and perception, and many people seem to be interested in cognition itself.

Karthik Dinakar:

This section must add references to the HITL work with respect to crisis counseling, measuring self-harm and cardiolinguistics currently done at the lab. The AI2 talk and the HITL work at the lab are quite different from each other.

Joichi Ito:

Can you provide the appropriate citations Karthik?

Tal Achituv:

The question of context here is more complicated than that, right? I mean, this is about doing it with greater context, but not merely 'with greater context'; the representation of context in the emergent properties of the human-machine system seems, to me, a key component.

Tal Achituv:

With respect to the method of coactive learning, as presented in the work referenced, it seems that it merely introduces a mechanism for extracting context from the user's implicit feedback, but the process is heavily iterative and relies on some assumed 'fixed', or at least locally-fixed, notion of context, which, to me, seems like the wrong direction.