Saturday, December 08, 2007

The Global Nincompoop Awakens

On a recent business trip to New York, I found myself sitting for a couple hours in a Starbucks in the midst of the campus of New York University (which is not a walled campus, but rather a collection of buildings strewn semi-haphazardly across a few blocks of Greenwich Village).

While sitting there typing into my laptop, I couldn't help being distracted by the conversations of the students around me. I attended NYU in the mid-80's (doing a bit of graduate study there on the way to my PhD), and I was curious to see how the zeitgeist of the student body had changed.

Admittedly, this was a highly nonrepresentative sample, as I was observing only students who chose to hang out in Starbucks. (Most likely all the math and CS grad students were doing as I'd done during my time at NYU, and hanging out in the Courant Institute building, which was a lot quieter than any café ...). And, the population of Starbucks seemed about 65% female, for whatever reason.

The first thing that struck me was the omnipresence of technology. The students around me were constantly texting each other -- there was a lot of texting going on between people sitting in different parts of the Starbucks, between people waiting in line and people sitting down, and so on.

And, there was a lot of talk about Facebook. Pretty much anytime someone unfamiliar (to any of the conversation participants) was mentioned in conversation, the question was asked: "Are they on Facebook?" Of course, plenty of the students had laptops there and could write on each other's Facebook walls while texting each other and slipping in the occasional voice phone call or email as well.

All in all I found the density and rapidity of information interchange extremely impressive. The whole social community of the Starbucks started to look like a multi-bodied meta-mind, with information zipping back and forth everywhere by various media. All the individuals comprising parts of the mind were obviously extremely well-attuned to the various component media and able to multiprocess very effectively, e.g. writing on someone's Facebook wall, then texting someone else while carrying on an F2F conversation, all while holding a book in their lap and allegedly sort-of studying.

Exciting! The only problem was: the content of what was being communicated was so amazingly trivial and petty that it started to make me feel physically ill.

Pretty much all the electronic back-and-forth was about which guys were cute and might be interested in going to which party with which girls; or, how pathetic it was that a certain group of girls had "outgrown" a certain other group via being accepted into a certain sorority and developing a fuller and more mature appreciation for the compulsive consumption of alcohol ... and so forth.

Which led me to the following thought: Wow! With all our incredible communications technologies, we are creating a global brain! But 99.99% of this global brain's thoughts are going to be completely trite and idiotic.

Are we, perhaps, creating a global moron or at least a global nincompoop?

If taken seriously, this notion becomes a bit frightening.

Let's suppose that, at some point, the global communication network itself achieves some kind of spontaneous, self-organizing sentience.

(Yeah, this is a science-fictional hypothesis, and I don't think it's extremely likely to happen, but it's interesting to think about.)

Won't the contents of its mind somehow reflect the contents of the information being passed around the global communications network?

Say: porn, spam e-mails, endless chit-chat about whose buns are cuter, and so forth?

Won't the emergent global mind of the Internet thus inevitably be a shallow-minded, perverted and ridiculous dipshit?

Is this what we really want for the largest, most powerful mind on the planet?

What happens when this Global Moron asserts its powers over us? Will we all find our thoughts and behaviors subtly or forcibly directed by the Internet Overmind?? -- whose psyche is primarily directed by the contents of the Internet traffic from which it evolved ... which is primarily constituted of ... well... yecchh...


(OK .. fine ... this post is a joke... OR IS IT???)

Monday, October 29, 2007

On Becoming a Neuron

I was amused and delighted to read the following rather transhumanistic article in the New York Times recently.

http://www.nytimes.com/2007/10/26/opinion/26brooks.html?_r=1&oref=slogin

The writer, who does not appear to be a futurist or transhumanist or Singularitarian or anything like that, is observing the extent to which he has lost his autonomy and outsourced a variety of his cognitive functions to various devices with which he interacts. And he feels he has become stronger rather than weaker because of this -- and not any less of an individual.

This ties in deeply with the theme of the Global Brain

http://pespmc1.vub.ac.be/SUPORGLI.html

which is a concept dear to my heart ... I wrote about it extensively in my 2001 book "Creating Internet Intelligence" and (together with Francis Heylighen) co-organized the 2001 Global Brain 0 workshop in Brussels.

I have had thoughts similar to those in the above New York Times article many times recently... I can feel myself subjectively becoming far more a part of the Global Brain than I was even 5 years ago, let alone 10...

As a prosaic example: Via making extensive use of task lists as described in the "Getting Things Done" methodology

http://en.wikipedia.org/wiki/Getting_Things_Done

I've externalized much of my medium-term memory about my work-life.

And via using Google Calendar extensively I have externalized my long-term memory... I use the calendar not only to record events but also to record information about what I should think about in the future (e.g. "Dec. 10 -- you should have time to start thinking about systems theory in connection to developmental psychology again...")

And, so much of my scientific work these days consists of reading little snippets of things that my colleagues on the Novamente project (or other intellectual collaborators) wrote, and then responding to them.... It's not that common these days that I undertake a large project myself, because I can always think of someone to collaborate with, and then the project becomes in significant part a matter of online back-and-forth....

And the process of doing computer science research is so different now than it was a decade or two ago, due to the ready availability and easy findability of so many research ideas, algorithms, code snippets etc. produced by other people.

Does this mean that I'm no longer an individual? It's certainly different than if I were sitting on a mountain for 10 years with my eagle and my lion like Nietzsche's Zarathustra.

And yet I don't feel like I've lost my distinctiveness and become somehow homogenized -- the way I interface with the synergetic network of machines and people is unique in complexly patterned ways, and constitutes my individuality.

Just as a neuron in the brain does not manifest its individuality any less than a neuron floating by itself in solution. In fact, the neuron in the brain may manifest its individuality more greatly, due to having a richer, more complex variety of stimuli to which it may respond individually.

None of these observations are at all surprising from a Global Brain theory perspective. But, they're significant as real-time, subjectively-perceived and objectively-observed inklings of the accelerating emergence of a more and more powerful and coordinated Global Brain, of which we are parts.

And I think this ties in with Ray Kurzweil's point that by the time we have human-level AGI, it may not be "us versus them", it may be a case where it's impossible to draw the line between us and them...

-- Ben

P.S.

As a post-script, I think it's interesting to tie this Global Brain meme in with the possibility of a "controlled ascent" approach to the Singularity and the advent of the transhuman condition.

Looking forward to the stage at which we've created human-level AGI's -- if these AGI's become smarter and smarter at an intentionally controlled rate (say a factor of 1.2 per year, just to throw a number out there), and if humans are intimately interlinked with these AGI's in a Global Brain-like fashion (as does seem to be occurring, at an accelerating rate), then we have a quite interesting scenario.
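
(A quick arithmetic aside, taking my made-up 1.2 figure at face value -- it's an illustrative assumption, not a forecast. A constant multiplicative growth rate means capability scales as 1.2^n after n years, so in LaTeX notation:

    t_{\text{double}} = \frac{\ln 2}{\ln 1.2} \approx 3.8 \text{ years}, \qquad 1.2^{10} \approx 6.2

That is, a doubling roughly every four years and about a sixfold increase per decade -- fast, but plausibly slow enough for humans interlinked with the AGI's to keep adapting.)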

Of course I realize that guaranteeing this sort of controlled ascent is a hard problem. And I realize there are ethical issues involved in making sure a controlled ascent like this respects the rights of individuals who choose not to ascend at all. And I realize that those who want to ascend faster may get irritated at the slow pace. All these points need addressing in great detail by an informed and intelligent and relevantly educated community, but they aren't my point right now -- my point in this postscript is the synergetic interrelation of the Global Brain meme with the controlled-ascent meme.

The synergy here is that as the global brain gets smarter and smarter, and we get more and more richly integrated into it, and the AGI's that will increasingly drive the development of the global brain get smarter and smarter -- there is a possibility that we will become more and more richly integrated with a greater whole, while at the same time having greater capability to exercise our uniqueness and individuality.

O Brave New Meta-mind, etc. etc. ;-)

Friday, June 15, 2007

The Pigeons of Paraguay (Further Dreams of a Ridiculous Man)

In the spirit of my prior dream-description Colors, I have written down another dream ... one I had last night ... it's in the PDF file linked to from

Copy Girl and the Pigeons of Paraguay


I'm not sure why I felt inspired to, but as soon as I woke up from the dream I had the urge to type it in (along with some prefatory and interspersed rambling!). It really wasn't a terribly important dream for me ... but it was interesting as an example of a dream containing a highly realistic psychedelic drug trip inside it. There is also a clear reference to the "Colors" dream within this one, which is not surprising -- my dreams all tend to link into each other, as if they form their own connected universe, separate from and parallel to this one.

I have always enjoyed writing "dreamlike" fiction, such as my freaky semi-anti-novel Echoes of the Great Farewell ... but lately I've become interested in going straight to the source, and naturalistically recording dreams themselves ... real dreams being distinctly and clearly different from dreamlike fiction. Real dreams have more ordinariness about them, more embarrassing boringness and cliché-ness; and also more herky-jerky discoordination.... They are not as aesthetic, which of course gives them their own special aesthetic value (on a meta-aesthetic level, blah blah blah...). Their plain-ness and lack of pretension gives them, in some ways, a deeper feel of truth than their more poetized fictional cousins....

The dream I present here has no particular scientific or philosophical value, it's just a dream that amused me. It reminded me toward the end a bit of Dostoevsky's Dream of a Ridiculous Man -- not in any details, but because of (how to put it???) the weird combination of irony and sincerity with which the psychic theme of sympathy and the oneness of humankind is addressed. Yeah yeah yeah. Paraguayan pigeons!! A billion blue blistering barnacles in a thundering typhoon!!!

I'll give you some mathematics in my next blog entry ;-)

-- Ben

Saturday, June 02, 2007

Is Google Secretly Creating an AGI? (Reasons Why I Doubt It)

From time to time someone suggests to me that Google "must be" developing a powerful Artificial General Intelligence in-house. I recently had the opportunity to visit Google and chat with some of their research staff, including Peter Norvig, their Director of Research. So I thought I'd share my perspective on Google+AGI based on the knowledge currently at my disposal.

First let me say that I definitely see where the Google+AGI speculation comes from. It's not just that they've hired a bunch of AI PhD's and have a lot of money and computers. It's that their business leaders have taken to waxing eloquent about the glorious future of artificial intelligence. For instance, on the blog

http://memepunks.blogspot.com/2006/05/google-ai-twinkle-in-larry-pages-eye.html


we find some quotes from Google co-founder Larry Page:

"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything ... some people could call that artificial intelligence.

...

a lot of our systems already use learning techniques


...

The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly ...
You could ask 'what should I ask Larry?' and it would tell you."

Page, in the same talk quoted there, noted that technology has a tendency to change faster than expected, and that an AI could be a reality in just a few years.

Exciting rhetoric indeed!

Anyway, earlier this week I gave a talk at Google, to a group of in-house researchers and engineers, on the topic of artificial general intelligence. I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered; but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion.... But some of the younger staff were pretty interested anyway; and there were some fun follow-up conversations.

Peter Norvig, an all-around great researcher and writer and great guy, gave the intro to my talk. I had chatted with Peter a bit earlier; and had mentioned to him that some folks I knew in the AGI community suspected Google to have a top-secret AGI project.

So anyway, Peter gave the following intro to my talk [I am paraphrasing here, not quoting exactly ... but I've tried to stay true to what he said, as accurately as possible given the constraints of my all-too-human memory]:

"There has been some talk about whether Google has a top-secret project aimed at building a thinking machine. Well, I'll tell you what happened. Larry Page came to me and said 'Peter, I've been hearing a lot about this Strong AI stuff. Shouldn't we be doing something in that direction?' So I said, okay. I went back to my desk and logged into our project management software. I had to write some scripts to modify it because it didn't go far enough into the future. But I modified it so that I could put, 'Human-level intelligence' on the row of the planning spreadsheet corresponding to the year 2030. And, that wasn't up there an hour before someone else added another item to the spreadsheet, time-stamped 90 days after that: 'Human-level intelligence: Macintosh port' "

Well ... soooo ... apparently Norvig, at least in a semi-serious tongue-in-cheek moment, thinks we're about 23 years from being able to create a thinking machine....

He may be right of course -- or he may even be over-optimistic, who knows -- but a cynical side of me can't help thinking: "Hey, Ben! Peter Norvig is even older than you are! Maybe placing the end goal 23 years off is just a way of saying 'Somebody else's problem!'."

Norvig says he views himself as building useful tools that will accelerate the work of future AGI researchers, along with everyone else....

Of course, I do appreciate Google's useful tools! Google's various tools have been quite a relief as compared to the incompetently-architected, user-unfriendly software released by some other major software firms.

And, while from a societal perspective I wish Google would put their $$ and hardware behind AGI, from the perspective of my small AGI business Novamente LLC, their current attitude is surely preferable...

[I could discourse a while about Google's ethics slogan "Don't Be Evil" as a philosophy of Friendly AI ... but I'll resist the urge...]

When I shared the above story with one of my AGI researcher friends (who shall here remain anonymous), he agreed with my sentiments, and shared the following story with me:

"In [month deleted] I had an interview in Google's new [location deleted] office
... and they were much more interested in my programming skill than in my research. Of course, we didn't find a match.

Even if Google wants to do AGI, given their current technical culture,
they won't get it right, at least at the beginning. As far as AGI is
concerned, Google has more than enough money and engineers, but less
than enough thinkers. They will produce some cute toolbox with smart
algorithms supported by a huge amount of raw data, which will be
interesting, but far from AGI."

Summing up ... as the above anecdotes suggest, my overall impression was that Google is not making any serious effort at AGI. If they are, then either

  • they have trained dozens of their scientific staff to be really good actors, or
  • it is a super-top-secret effort within Google Irkutsk or wherever, that the Google Mountain View research staff don't know about

Of course, neither of these is an impossibility -- "we don't know what we don't know," etc. But honestly, I rate both of those options as pretty unlikely.

Could they launch an AGI effort? Most surely: they could, at any point. The cost to them of doing so would be trivially small, relative to the overall resources at their disposal. Maybe this blog post will egg them into doing so! (yeah, right...)

But I think the point my above-quoted friend made, after his Google interview, was quite astute. Google's technical culture is coding-focused, and their approach to AI is data-focused (textual data, and data regarding clicks on ads, and geospatial data coming into Google Earth, etc.). To get hired at Google you have to be a great coder -- just being a great AGI theorist wouldn't be enough, for example. I don't think AGI is mainly a coding problem, nor mainly a data-based problem ... nor do I think it's a problem that can effectively be solved via a "great coding + lots of data" mentality. I think AGI is a deep conceptual problem that has more to do with understanding cognition than with churning out great code and effectively utilizing masses of data. Of course, lots of great software engineering will be required to create an AGI (and we're happy to have a few super-engineers within Novamente LLC, for example), and lots of data too (e.g. in the Novamente case we plan to start our systems out with perceptual and social data from virtual worlds like Second Life; and then later on feed them knowledge from Wikipedia and other textual sources). But if the focus of an "AGI" team is on coding and data, rather than on grokking the essence of cognition, AGI is not going to be the result.

So, IMO, for Google to create an AGI would require them not only to bypass the relative AGI skepticism represented by the Peter Norvig story above -- but also to operate an AGI project based on a significantly different culture than the one that has worked for Google so far, in their development of (in some cases, really outstandingly useful) narrow-AI applications.

All in all my impression after getting to know Google's in-house research program a little better, is about the same as it was beforehand. However, I did make an explicit effort to look for evidence disconfirming my prior hypotheses -- and I didn't really find any. If anyone has evidence that the impressions I've given here are mistaken, I'd certainly be happy to hear it.

OK, well, it's time to wind up this blog post and get back to my own effort to create AGI -- with far less money and computers than Google, but -- at least -- a focus on (and, I believe, a clear understanding of) the essence of the problem....

Sure, it would be nice to have the resources of a Google or M$ or IBM backing up Novamente! But, the thing is, you don't become a big company like those by focusing on grokking the essence of cognition -- you become a big company like those by focusing on practical stuff that makes money quickly, like code and data and user interfaces ... and if AI plays a role in this, it's problem-specific narrow-AI, such as Google has done so well with.

As Larry Page recognizes, AGI will certainly have massive business value, due to its incredible potential for delivering useful services to people in a huge number of contexts. But the culture and mentality needed to create AGI seems to be different from the one needed to rapidly create a large and massively profitable company. My prediction is that if Google ever does get an AGI, they will buy it rather than build it.

Friday, May 25, 2007

Pure Silliness


Ode to the Perplexingness of the Multiverse


A clever chap, just twenty-nine
Found out how to go backwards in time
He went forty years back
Killed his mom with a whack
Then said "How can it be that still I'm?"

On the Dangers of Incautious Research and Development

A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"

A couple clever followups to the above poem were posted by others on the Singularity email list...

On the Dangers of Emulating Biological Drives in Artificial Intelligences
(by Moshe Looks)

A scientist once shook his head
and exclaimed "My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!"

By Derek Zahn:

The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.

And, less interestingly...

On the Benefits of Clarity in Verbal Presentation

There was a prize pig from Penn Station
Who refused to eschew obfuscation
The swine with whom he traveled
Were bedazed by his babble
So they baconed him, out of frustration

Sunday, May 20, 2007

Flogging Poor Searle Again

Someone emailed me recently about Searle's Chinese Room argument,

http://en.wikipedia.org/wiki/Chinese_room

a workhorse theme in the philosophy of AI that normally bores me to tears.

But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly, so I thought I'd repeat it here.

I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.

The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.

As an example of this sort of theoretical research, check out:

http://www.hutter1.net/

which describes, among other things, an AI system called AIXI that uses an infinite amount of computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI, such as AIXItl, that use only an insanely large rather than infinite amount of computational resources.
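
For the curious, AIXI's decision rule can be stated compactly. The following is my own from-memory transcription of Hutter's definition into LaTeX (see the link above for the authoritative version), so treat the details as approximate; here U is a universal Turing machine, the a's are actions, the o's and r's are observations and rewards, m is the horizon, and \ell(q) is the length of program q:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The final sum is a Solomonoff-style prior over all programs consistent with the interaction history -- which is exactly why AIXI is uncomputable: evaluating it means searching over all possible programs.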

My feeling is that one should think about, not just

Intelligence = complexity of goals that a system can achieve

but also

Efficient intelligence = Sum over goals a system can achieve of: (complexity of the goal)/(amount of space and time resources required to achieve the goal)
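
In loose LaTeX notation (the symbols here are just my shorthand for the prose definitions above, nothing standardized): if G(S) is the set of goals system S can achieve, C(g) the complexity of goal g, and R(g) the space and time resources S needs to achieve g, then

    \text{Intelligence}(S) = \sum_{g \in G(S)} C(g), \qquad \text{EfficientIntelligence}(S) = \sum_{g \in G(S)} \frac{C(g)}{R(g)}

so a system gets efficient-intelligence credit for a goal only in proportion to how cheaply it can achieve it.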

According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world is in achieving efficient intelligence not just raw intelligence.

Also, according to these definitions, the Bekenstein bound places a limit on the maximal efficient intelligence of any system in the physical universe.
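
For reference, the Bekenstein bound states that a physical system of radius R and total energy E can hold at most

    I \le \frac{2 \pi R E}{\hbar c \ln 2} \text{ bits}

so any physically realizable system has only finitely many distinguishable states, which bounds the complexity of goals achievable per unit of resources -- and hence, on the definitions above, bounds its efficient intelligence.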

Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....

A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook.

And even if Searle's scenario somehow comes out physically plausible (e.g. maybe Bekenstein is wrong due to currently unknown physics), it certainly involves systems totally unlike any that we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.

The structures that we associate with intelligence (will, focused awareness, etc.) in a human context, all come out of the need to do intelligent processing within modest space and time requirements.

So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: it isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.

This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.

(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)

In real life, these seem often to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.

However, Searle's scenario is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality (understanding Chinese) that is NOT associated with any intensity-of-consciousness.

But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.

I.e., it is the finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically-impossible case requiring a dramatically large amount of resources doesn't mean too much...

What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.

That is really the moral of the Chinese room.

Tuesday, May 15, 2007

Technological versus Subjective Acceleration

This post is motivated by an ongoing argument with Phil Goetz, a local friend who believes that all this talk about "accelerating change" and approaching the Singularity is bullshit -- in part because he doesn't see things advancing all that amazingly exponentially rapidly around him.

There is plenty of room for debate about the statistics of accelerating change: clearly some things are advancing way faster than others. Computer chips and brain scanners are advancing more rapidly than forks or refrigerators. In this regard, I think, the key question is whether Singularity-enabling technologies are advancing exponentially (and I think enough of them are to make a critical difference). But that's not the point I want to get at here.

The point I want to make here is: I think it is important to distinguish technological acceleration from subjective acceleration.

This breaks down into a couple sub-points.

First: Already by this point in history, I suggest, advancement in technology has far outpaced the ability of the human brain to figure out new ways to make meaningful use of that technology.

Second: The human brain and body themselves pose limitations regarding how thoroughly we can make use of new technologies, in terms of transforming our subjective experience.

Because of these two points, a very high rate of technological acceleration may not lead to a comparably high rate of subjective acceleration. Which is, I think, the situation we are seeing at present.

Regarding the first point: Note that long ago in history, when new technology was created, it lasted quite a while before being obsoleted, so that each new technology was exploited pretty damn thoroughly before its successor came along.

These days, though, we've just BARELY begun figuring out how to creatively exploit X, when something way better than X comes along.

The example of music may serve to illustrate both of these points.

The invention of the electronic synthesizer/sampler keyboard was a hell of a breakthrough. However, the music we humans actually make has not changed nearly as much as the underlying technology has. By and large we use all this advanced technology to make stuff that sounds harmonically, rhythmically and melodically not that profoundly different from pre-synthesizer music. Certainly, the degree of musical change has not kept up with the degree of technological change: Madonna is not as different from James Brown as a synthesizer keyboard is from an electric guitar.

Why is that?

Well, humans take a while to adapt. People are still learning how to make optimal use of synthesizer/sampling keyboards for making interesting music ... but while people are still relatively early on that learning curve, technology has advanced yet further, and computer music software gives us amazing new possibilities ... that we've barely begun to exploit...

Furthermore, our musical tastes are limited by our physiology. I could make fabulously complex music using a sequencer, with 1000's of intersecting melody lines carefully calculated, but no human would be able to understand it (I tried ;-). Maybe superhuman minds will be able to use modern music tech to create music far subtler and more interesting than any human music, for their own consumption.

And, even when acoustic and cognitive physiology isn't relevant, the rate of growth and change in a person's music appreciation is limited by their personality psychology.

To take another example, let's look at bioinformatics. No doubt technology for measuring biological systems has advanced exponentially, as has technology for analyzing biological data using AI (my part of that story).

But, AI-based methods are very slow to pervade the biology community due to cultural and educational issues ... most biologists can barely deal with stats, let alone AI tech....

And, the most advanced measurement machinery is often not used in the most interesting possible ways. For instance, microarray devices allow biologists to take a whole-genome approach to studying biological systems, but, most biologists use them in a very limited manner, guided by an "archaic" single-gene-focused mentality. So much of the power of the technology is wasted. This situation is improving -- but it's improving at a slower pace than the technology itself.

Human adoption of the affordances of technology has become the main bottleneck, not the technology itself.

So there is a dislocation between the rate of technological acceleration and the rate of subjective acceleration. Both are fast but the former is faster.

Regarding word processing and Internet technology: our capability to record and disseminate knowledge has increased TREMENDOUSLY ... and, our capability to create knowledge worth recording and disseminating has increased a lot too, but not as much...

I think this will continue to be the case until the legacy human cognitive architecture itself is replaced with something cleverer such as an AI or a neuromodified human brain.

At that point, we'll have more flexible and adaptive minds, making better use of all the technologies we've invented plus the new ones they will invent, and embarking on a greater, deeper and richer variety of subjective experiences as well.

Viva la Singularity!

Thursday, February 01, 2007

Colors: A Recurring Dream

I took a couple hours and wrote down a recurring dream I've had for years, which is a sort of metaphor for transhumanism and the quest to create AGI...

http://goertzel.org/Colors.pdf