Extended Intelligence

We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon.

Authors

Joichi Ito, Director, MIT Media Lab

Discussions

Discussion on Jun 14, 2016
Nick DePalma
I was under the impression that, unlike previous comments here and the written document, Extended Intelligence was primarily and objectively about taking lessons learned from AI, interaction design, and cognitive science to /improve ourselves/ through some type of augmentation. It's not really about understanding cognition (much like AI was agnostic to actual cognitive mechanisms), but about using what we now know as tools to self-improve. I think the central claim is not about a "distributed" or "cognitive" intelligence à la Rumelhart but about a positivist position that the Media Lab is taking?
Discussion on Jun 13, 2016
Karen Schrier
We could consider Knowledge Games a type of "extended intelligence" -- in situations where the play of a game helps us extend our ability to solve complex problems and produce new knowledge. See more at: jhupbooks.press.jhu.edu/content/knowledge-games
Discussion on Apr 11, 2016
Ben Tolkin
Always interesting to see the way academic philosophy and engineering/tech interact, or, more often, don't. There's been a ton of work done in philosophy on extended cognition since Clark and Chalmers in '98; I'm not sure who the "we" is in "we propose a kind of extended intelligence"... Of course, it's more often that philosophers haven't caught up with technology, but as someone with a foot in both worlds it's a little strange to see an article here that wouldn't look out of place in a copy of Synthese from 2005. The most interesting questions in my mind are around all the familiar terms that need to be redefined in light of an extended cognition hypothesis. Can the self or mind be divorced from intelligence? Is there room for a self at all, or will that dissolve as we communicate at increasingly higher bandwidth with our social networks and machines? Who is to blame when a cognitive network does something immoral? (do normal rules of morality even apply?) At a certain point, this train of thought leads to viewing the universe as a single, vast network, with any subdivision of interacting parts being arbitrary. Are there any boundaries we should draw on what makes us intelligent? (After all, we are constantly interacting with every body in the known universe.) And if not, is intelligence a useful quality to define?
Discussion on Mar 10, 2016
samim winiger
Great discussion here, thank you. Thinking along similar lines, we just published an extensive piece on "CreativeAI". It includes in-depth analysis, narrative, and a vision for the space between human, machine, and creativity: https://medium.com/@ArtificialExperience/creativeai-9d4b2346faf3 Curious to hear your thoughts.
Discussion on Feb 28, 2016
Peter van der Putten
It could be relevant to refer to Licklider's 1960 paper on Man-Computer Symbiosis. Also, the concept of general AI seems to have made a comeback with the popularity of general-purpose methods such as deep learning. Some of the longer-term questions are how societies will adapt and how our concept of what makes us human will evolve as AI/IA/EI progresses: how will we see ourselves in, say, 50 years? And what does this mean for related fields such as artificial creativity / creativity augmentation?
Discussion on Feb 28, 2016
Sebastien Dery
I very much enjoy how this article attempts to touch on the wider topics of AI. Always refreshing to see the Media Lab step up and shake the box a little bit. That being said, I can't help but find some dark humour in the way we speak about the "augmented brain" and the "smarter computer" with little discussion of the metrics by which we evaluate those fascinating topics and systems. More to the point, despite our many attempts to mimic or extend the human mind, very little attention has been given to the many plagues it (and consequently we) suffer from: namely depression, cognitive biases, lack of compassion, selection of evidence to satisfy preconceived notions, etc. In other words, we are building data-driven decision-making tools under the assumption that the human minds around us are willing to accept the conclusions. Something akin to the difference between Knowledge and Wisdom. Extended Intelligence sounds great. Perhaps it would also be worth diving into Collective Wisdom (CW). Anyone interested, or am I just rambling?
Discussion on Feb 25, 2016
Cesar Hidalgo and Joichi Ito
The work involves a collaboration between the groups (technically between me and Nikhil Naik). The work has not been passed on to another group. I assume that collaborations between groups are OK.
OK. Tried to reflect that.
Discussion on Feb 25, 2016
Cesar Hidalgo
-> "just naturally augment this networked intelligence"
Discussion on Feb 25, 2016
Cesar Hidalgo
typo: "yet again becoming" or "again"
Discussion on Feb 16, 2016
Cesar Hidalgo
Do we have room for the Data Viz Engines we build at Macro, which millions of people use every year to make decisions? Between the OEC, Dataviva, Pantheon, and Immersion, we served nearly 5 million people last year alone. These data viz engines are being used to make economic decisions by entrepreneurs looking for commercial destinations (we get tons of emails from people like that at the OEC). DataViva has also been used by development banks in Brazil to prioritize their development loans (so actual monetary decisions were aided by the visualizations). I would say these tools augment networked intelligence by helping people access the data that large groups of individuals generate, and that are needed to have a panoptic view of large social and economic systems.
Discussion on Feb 16, 2016
Cesar Hidalgo
I recently wrote and published an entire book on the computational capacity of societies and economies, and on how that computational capacity is expressed (and therefore can be measured) by looking at the outputs that an economy produces. The book is Why Information Grows.
Discussion on Feb 16, 2016
Cesar Hidalgo
I feel a bit sidelined here, because the research on urban perception is research that I ideated, started, and for which I am the PI. But the fact that I am working with a student from Camera Culture (Nikhil, with whom I spend vast amounts of time, and who is great) makes this a Camera Culture project.
Discussion on Feb 16, 2016
Cesar Hidalgo
It sounds to me like this is phrased as a "new" thing, when in reality our ability to create increasingly larger, and smarter, groups is an extension of a process that has been quite continuous and ongoing for tens of thousands of years. Some would say networked intelligence started with the cognitive revolution and the invention of human language (see E.O. Wilson, The Social Conquest of Earth, or Yuval Harari, Sapiens).
Discussion on Feb 16, 2016
Cesar Hidalgo and Joichi Ito
This type of questioning makes the author sound naive when it is about a topic where there is an extensive literature. It sounds like the author is discovering what others already know. A more assertive way of communicating this is not to rhetorically ask whether intelligence resides in networks, but to cite four or five examples of different traditions that have already made that claim. If you want to keep the examples close to the tradition of the lab, then Minsky's Society of Mind and my Why Information Grows are two examples of books that focus on collective intelligence. The essay I shared with the faculty mailing list also has examples, like Hayek's "The Use of Knowledge in Society," which are well known. Also, I wrote a chapter on something similar for a book by John Brockman a few years ago: http://edge.org/response-detail/26176 Finally, there are also all of the cumulative culture ideas of Boyd, Richerson, and Henrich, which talk directly about this distributed intelligence.
Tried to add this notion in a single line. You should write a linked Pub about the history of collective intelligence so I can link to it.
Discussion on Feb 16, 2016
Cesar Hidalgo
I would unpack paragraph one into at least two sentences. The first one is the motivation: an increase in the number of debates about AI. The second, what these debates are about.
Discussion on Feb 15, 2016
Rosalind Picard
Affective computing is researching how to provide emotional intelligence in human-computer systems, especially to support social-emotional states such as motivation, positive affect, interest, and engagement. Understanding and responding intelligently to human emotion is vital for fostering long-term successful interactive learning, whether a system is trying to help a human learn and sustain his/her motivation, or whether the human is trying to help the computer learn without getting annoyed by the computer (e.g., by its incessant need for the human to explain things). For example, a wearable system designed to help a person forecast mental health (mood) or physical health changes will need to sustain a long-term non-annoying interaction with the person in order to get the months and years of data needed for successful prediction.
Discussion on Feb 15, 2016
Joe Paradiso and Joichi Ito
Thinking about this more, we may have gotten the dystopia to avoid wrong. Everybody thinks of AI gone bad as being 'Terminator' or maybe even 'Colossus' (have you seen that film? If not, I recommend it highly - already from 1969 - link below) or maybe the manipulative ones like HAL or The Matrix. http://www.amazon.com/Colossus-Forbin-Project-Eric-Braeden/dp/B0003JAOO0/ref=sr_1_1?s=movies-tv&ie=UTF8&qid=1455552538&sr=1-1&keywords=collosus But the way things go bad in the human-in-the-loop scenario runs more along the lines of the Borg Collective from Star Trek (I'm sure you all know that one) or Landru from the original series (http://memory-alpha.wikia.com/wiki/The_Body_of_Landru) - people being essentially teleoperated into an ultimate totalitarianism. The Borg were bad because of their extreme socialism and the desire to endlessly expand. Landru meant well, but took his mission too seriously and was a narrow 'autistic' AI. Hence this ignites a ton of speculation and debate - what is the role of the individual in such a soup of human and machine? What is lost and what is gained - can we somehow protect the role of the individual when we're all so connected, or will my ego go the way of the dodo? This may be all wrongheaded - e.g., if we're destined to become agents running somewhere, the physical manifestation may not matter as much as getting 'backed up' in enough places - but it's the natural argument where such ideas hit nightmares.
Tried to add this to the third paragraph.
Discussion on Feb 11, 2016
Dazza Greenwood
What if we applied some concepts of extended intelligence to the new legal hackers and digital law movement to express statutes and regulations in computational ways. I predict even a little success would be unique, of high impact and (at least apparently) magical. Some thoughts: https://fold.cm/read/dazzagreenwood/lawgometric-code-vjGxW5dv
Discussion on Feb 11, 2016
Jeremy Rubin
If you aren't familiar with it, my favorite reference in this space is Roko's Basilisk, similar to Pascal's Wager. http://rationalwiki.org/wiki/Roko's_basilisk
Discussion on Feb 11, 2016
Natasha Jaques and Joichi Ito
The majority of Affective Computing's work relates to AI and machine learning, but I don't see it on this list.
Can you share some links?
Specifically, much of our work has focused on understanding affect to better facilitate human/machine communication