Always interesting to see the way academic philosophy and engineering/tech interact, or, more often, don’t. There’s been a ton of work in philosophy on extended cognition since Clark and Chalmers in '98, so I’m not sure who the “we” is in “we propose a kind of extended intelligence.” Usually it’s the philosophers who haven’t caught up with the technology, but as someone with a foot in both worlds, it’s a little strange to see an article here that wouldn’t look out of place in a copy of Synthese from 2005.
The most interesting questions, in my mind, are around all the familiar terms that need to be redefined in light of an extended cognition hypothesis. Can the self or mind be divorced from intelligence? Is there room for a self at all, or will it dissolve as we communicate at ever higher bandwidth with our social networks and machines? Who is to blame when a cognitive network does something immoral? (Do normal rules of morality even apply?) At a certain point, this train of thought leads to viewing the universe as a single, vast network, with any subdivision into interacting parts being arbitrary. Are there any boundaries we should draw around what makes us intelligent? (After all, we are constantly interacting with every body in the known universe.) And if not, is intelligence a useful quality to define at all?