Sunday, December 20, 2009

China Ascendant? (a comment on Robin Hanson's comment on...)

This blog post is an edited version of a comment I made on Robin Hanson's recent China Ascendant post on the Overcoming Bias blog. So, read Robin's post before reading this one!

Also, it is best read as a sort of post-script to my recent article on the Chinese Singularity in H+ Magazine. So maybe you should read that article first too ;-)

Ultimately, it may not be so important whether the US or China or India or Europe leads the advance of science and technology during the next decades.

Certainly, if you're a Singularitarian like me, it matters less -- the Singularity is about the fate of mind and humanity, not the fate of nations ... and if/when it comes, it will quickly take us beyond the point where national boundaries are a big deal.

But in current practical terms, the "where" question is an interesting one.... Especially if a lot of the relevant developments are going to happen outside the Western world, this is worth knowing, because it's going to affect a lot of decisions people have to make.

So, to get to the topic of Robin Hanson's blog post: China ascendant ???

My answer to that question is always: Maybe.

In his post, Robin makes the statement:

If China continues to outgrow the West, it will likely be because they do a few things very right, as did the West before.

The point I want to make here is a simple one: One of the things China is doing much better than the US, these days, is thinking medium-term and long-term rather than just short-term.

Perhaps long-range planning will be one of the "few things" China does "very right," to use Robin's language.

China is planning decades ahead in their technology and science development, in their energy and financial policies, and in many other areas as well.

Whereas in the US, we seem to be mired in a "next quarter" or "next election" mentality.

However, the matter isn't as simple as it seems...

It's interesting to observe that the American system sometimes does great mid-range planning accidentally (or, to use a more charitable word: implicitly)...

For instance, the dot-com boom seems kinda stupid in hindsight (trust me; I played my own small part in the stupidity!) ... but on closer inspection, a lot of the "wasted" venture $$ that went into the dot-com boom funded

  • the build-out of Internet infrastructure of various kinds
  • the prototyping of technologies that were later refined and became successful.

Those VCs would not have funded infrastructure buildout or technology prototyping explicitly, but they funded them accidentally, er, implicitly.

So in this case, the US system planned things 10 years in advance implicitly, without any one person explicitly trying to do so.

We can't explain the dot-com boom example with simplistic "market economics" arguments -- because on average, the investment of time and $$ in the dot-com boom wasn't worth it for the participants (and it probably wasn't rational for them to expect it would be). Most of their work and $$ ultimately went to benefit people other than those who invested in the boom. But we can say that, in this case, the whole complex mess of the US economic system did implicitly perform some effective long-range planning.

Yet, this kind of implicit long-term planning has its limits, and seems to be failing in key areas like my own research area of AGI. The US is shortchanging AGI research badly compared to Europe as well as Asia, because our economic system is biased toward shortsightedness.

There are strong arguments that long-range state-driven planning and funding have benefited developing countries -- Singapore, South Korea and Brazil being some prime examples. In these cases, state planning supported the development of infrastructure that probably would not have emerged in a less state-centric arrangement like the one we have in the US.

So, one interesting question is whether explicit or implicit long-range planning is going to be more effective in the next decades as technology and science continue to accelerate (or, to put the question more honestly but more confusingly: what COMBINATIONS of explicit and implicit long-range planning are going to work better)?

My gut feel is that the "mainly implicit" approach isn't going to do it. I think that if the US government doesn't take a strong hand in pushing for (and funding) adventurous, advanced technology and science development, then China will pull ahead of us within the next decades. I don't trust the US "market oligarchy" system to implicitly carry out the needed long-range planning.

The reason I have this feeling is that, in one advanced, accelerating technology area after another, I see a contradiction between the best path to short-term financial profit and the best path to medium-term scientific progress. For instance,

  • In AI, the quest for short-term profits biases toward narrow AI, yet the best medium-term research path is to focus on AGI
  • In nanotech, the best medium-term research path is Drexler's path which works toward molecular assemblers, but the best path to short-term profits is to focus on weak nanotechnology like most of the venture-funded "nano" firms are doing now
  • In life extension, the best short-term path is to focus on remedies for aging-related diseases, but the best medium-term path is either to understand the fundamental mechanisms of aging, or to work on radical cures to aging-related damage as Aubrey de Grey has suggested
  • In robotics, the path to short-term profit is industrial robots or Roombas, but the path to profound medium-term progress is more flexibly capable autonomous (humanoid or wheeled) mobile robots with capable hands, sensitive skin, etc. (and note how all the good robots are made in Japan, Korea or Europe these days, with government funding)

In area after area of critical technology and science, the short-term profit focus is almost certainly going to mislead us. What is needed is the ability to take the path NOW that is going to yield the best results 1-3 decades from now. I am very uncertain whether such an ability exists in the US, and it seems more clear to me that it exists in China.

The Chinese government is trying to figure out how to combine the explicit planning of their centralized agencies with the implicit planning of the modern market ecosystem. They definitely don't have it figured out yet. But my feel is that, even if they make a lot of stupid mistakes as they feel their way into the future, their greater propensity for thinking in terms of DECADES rather than years or quarters is going to be a huge advantage for them....

China has a lot of disadvantages compared to the US, including

  • a less rich recent science and engineering tradition
  • an immature ecosystem for academic/business collaboration
  • a culture that sometimes discourages effective brainstorming and teamwork
  • a less international scientific community
  • an unfortunate habit of blocking parts of the Internet (which doesn't prevent Chinese researchers from getting the world's scientific knowledge, but does prevent them from participating fully in the emerging Global Brain as represented by Web 2.0 technologies like Twitter, Facebook and so forth)

However, it may be that all these disadvantages are outweighed by the one big advantage of being better at long-range planning.

As Robin points out, dramatic success is often a matter of getting just a few things VERY RIGHT.

Tuesday, December 15, 2009

Dialoguing with the US Military on the Ethics of Battlebots

Today (as a consequence of my role in the IEET), I gave a brief invited talk at the National Defense University, in Washington DC, about the ethics of autonomous robot missiles and war vehicles and "battlebots" (my word, not theirs ;-) in general....

Part of me wanted to bring a guitar and serenade the crowd (perhaps 50% uniformed officers) with "Give Peace a Chance" by John Lennon and "Masters of War" by Bob Dylan ... but with the wisdom of my 43 years I resisted the urge ;-p

Anyway, the world seems very different than it did in the early 1970s, when I accompanied my parents on numerous anti-Vietnam-war marches. I remain generally anti-violence and anti-war, but my main political focus now is on encouraging a smooth path toward a positive Singularity. To the extent that military force may be helpful toward achieving this end, it has to be considered as a potentially positive thing....

My talk didn't cover any new ground (to me); after some basic transhumanist rhetoric I discussed my notion of different varieties of ethics as corresponding to different types of memory (declarative ethics, sensorimotor ethics, procedural ethics, episodic ethics, etc.), and the need for ethical synergy among different ethics types, in parallel with cognitive synergy among different memory/cognition types. For the low-down on this see a previous blog post on the topic.

But some of the other talks and lunchroom discussions were interesting to me, as the community of military officers is rather different from the circles I usually mix in...

One of the talks before mine was a prerecorded talk (robo-talk?) on whether it's OK to make robots that decide when/if to kill people, with the basic theme of "It's complicated, but yeah, sometimes it's OK."

(A conclusion I don't particularly disagree with: to my mind, if it's OK for people to kill people in extreme circumstances, it's also OK for people to build robots to kill people in extreme circumstances. The matter is complicated, because human life and society are complicated.)

(As the hero of the great film Kung Pow said, "Killing is bad. Killing is wrong. Killing is badong!" ... but, even Einstein had to recant his radical pacifism in the face of the extraordinary harshness of human reality. Harshness that I hope soon will massively decrease as technology drastically reduces material scarcity and gives us control over our own motivational and emotional systems.)

Another talk argued that "AIs making lethal decisions" should be outlawed by international military convention, much as chemical and biological weapons and eye-blinding lasers are now outlawed.... One of the arguments for this sort of ban was that, without it, one would see an AI-based military arms race.

As I pointed out in my talk, it seems that such a ban would be essentially unenforceable.

For one thing, missiles and tanks and so forth are going to be controlled by automatic systems of one sort or another, and where the "line in the sand" is drawn between lethal decisions and other decisions is not going to be terribly clear. If one bans a robot from making a lethal decision, but allows it to make a decision to go into a situation where making a lethal decision is the only rational choice, then what is one really accomplishing?

For another thing, even if one could figure out where to draw the "line in the sand," how would it possibly be enforced? Adversary nations are not going to open up their robot control hardware and software to each other, to allow checking of what kinds of decisions robots are making on their own without a "human in the loop." It's not an easy thing to check, unlike use of nukes or chemical or biological weapons.

I contended that just as machines will eventually be smarter than humans, if they're built correctly they'll eventually be more ethical than humans -- even according to human ethical standards. But this will require machines that approach ethics from the same multiple perspectives that humans do: not just based on rules and rational evaluation, but based on empathy, on the wisdom of anecdotal history, and so forth.

There was some understandable concern in the crowd that, if the US held back from developing intelligent battlebots, other players might pull ahead in that domain, with potentially dangerous consequences.... With this in mind, there was interest in my report on the enthusiasm, creativity and ample funding of the Chinese AI community these days. I didn't sense much military fear of China itself (China and the US are rather closely economically tied, making military conflict between them unlikely), but there seemed some fear of China distributing their advanced AI technology to other parties that might be hostile.

I had an interesting chat with a fighter pilot, who said that there are hundreds of "rules of engagement" to memorize before a flight, and that they change frequently based on political shifts. Since no one can really remember all those rules in real time, there's a lot of intuition involved in making the right choices in practice.

This reminded me of a prior experience making a simulation for a military agency ... the simulated soldiers were supposed to follow numerous rules of military doctrine. But we found that when they did, they didn't act much like real soldiers -- because the real soldiers would deviate from doctrine in contextually appropriate ways.

The pilot drew the conclusion that AIs couldn't make the right judgments, because doing so depends on combining and interpreting (he didn't say bending, but I bet it happens too) the rules based on context. But I'm not so sure. For one thing, an AI could remember hundreds of rules and rapidly apply them in a particular situation -- that is, it could do a better job of declarative-memory-based battle ethics than any human. In this context, humans compensate for their poor declarative-memory-based ethics [and in some cases transcend declarative-memory-based ethics altogether] with superior episodic-memory-based ethics (contextually appropriate judgments based on their life experiences and associated intuitions). But, potentially, an AI could combine this kind of experiential judgment with superior declarative ethical capability, thus achieving better overall ethical functionality....
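
Purely to make the shape of that argument concrete, here's a hypothetical toy sketch in Python -- the rules, feature names and cases are all invented for illustration, and no real system works this way. The idea is a declarative pass that checks every rule exhaustively (something an AI can do and a stressed human can't), followed by an episodic pass that weighs remembered cases by similarity to the current situation:

    # A hypothetical toy, not any real system: combining exhaustive
    # declarative rule-checking with episodic, case-based judgment.
    RULES = [
        lambda s: s["civilians_nearby"] < 0.5,   # don't engage near civilians
        lambda s: s["hostile_fire"] > 0.5,       # engage only under hostile fire
    ]

    # "Episodic memory": past situations, and how well engaging turned out.
    PAST_CASES = [
        ({"civilians_nearby": 0.9, "hostile_fire": 0.8}, -1.0),
        ({"civilians_nearby": 0.0, "hostile_fire": 0.9}, +1.0),
    ]

    def similarity(a, b):
        keys = set(a) | set(b)
        return 1.0 - sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys) / len(keys)

    def judge(situation):
        # Declarative pass: an AI can check every rule, every time.
        if not all(rule(situation) for rule in RULES):
            return False
        # Episodic pass: weight remembered outcomes by similarity.
        score = sum(similarity(situation, case) * outcome
                    for case, outcome in PAST_CASES)
        return score > 0

    print(judge({"civilians_nearby": 0.1, "hostile_fire": 0.9}))  # True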

One thing that was clear is that the US military is taking the diverse issues associated with battle AI very seriously ... and soliciting a variety of opinions from people all across the political spectrum ... even including out-there transhumanists like me. This sort of openness to different perspectives is certainly a good sign.

Still, I don't have a great gut feeling about superintelligent battlebots. There are scenarios where they help bring about a peaceful Singularity and promote overall human good ... but there are a lot of other scenarios as well.

My strong hope is that we can create peaceful, benevolent, superhumanly intelligent AGI before smart battlebots become widespread.

My colleagues and I -- among others -- are working on it ;-)

Wednesday, December 02, 2009

100 neural net cycles to produce consciousness?

This interesting article presents data indicating that it takes around half a second for an unconscious visual percept to become conscious (in the human brain)...

This matches well with Libet's result that there is a half-second lag between unconsciously initiating an action and consciously knowing you're initiating an action...

(Of course, what is meant by "consciousness" here is "consciousness of the reflective, language-friendly portion of the human mind" -- but I don't want to digress onto the philosophy of consciousness just now; that's not the point of this post ... I've done that in N prior blog posts ;-)

My Chinese collaborator ChenShuo pointed out that, combined with information about the timing of neural firing, this lets us estimate how much neural processing is needed to produce conscious perception.

As I recall, the firing of a single neuron's action potential takes around 5 milliseconds ... It takes maybe another 10-20 milliseconds after that for the neuron to be able to fire again (that's the "refractory period") .... Those numbers are not exact but I'm pretty sure they're the right order of magnitude...

So, the very rough estimate is 100 cycles in the neural net before consciousness, it would seem ;)
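
To make the arithmetic explicit, here's the back-of-envelope calculation in a few lines of Python, using the rough numbers above (order-of-magnitude guesses, remember). Counting spike duration alone gives the ~100 figure; folding in the refractory period gives a few dozen full cycles -- the same ballpark:

    # Back-of-envelope: how many neural firing cycles fit into the ~500 ms
    # it takes an unconscious percept to become conscious?
    lag_ms = 500                # rough percept-to-consciousness lag
    spike_ms = 5                # rough action potential duration (as recalled above)
    refractory_ms = (10, 20)    # rough refractory period range

    print(lag_ms / spike_ms)    # ~100 firings, counting spike time alone
    for r_ms in refractory_ms:  # ~20-35 full fire-and-recover cycles
        print(lag_ms / (spike_ms + r_ms))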

This fits with the view of consciousness in terms of strange attractors ... 100 cycles is often enough time for a recurrent net to converge into an attractor basin ...
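
As a toy illustration (a minimal sketch, not a brain model -- the network size and noise level here are arbitrary), here's a classic Hopfield-style recurrent net in Python; starting from a noisy input, it typically settles into a stored attractor pattern within a handful of update sweeps, comfortably inside a 100-cycle budget:

    import numpy as np

    rng = np.random.default_rng(0)

    # Store a few random +/-1 patterns via the Hebbian outer-product rule.
    n, k = 64, 3
    patterns = rng.choice([-1, 1], size=(k, n))
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Start from a noisy copy of pattern 0; update neurons asynchronously
    # and count how many full sweeps it takes to reach a fixed point.
    state = patterns[0] * rng.choice([1, -1], size=n, p=[0.85, 0.15])
    for cycle in range(1, 101):
        prev = state.copy()
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):
            break

    print(cycle, "sweeps to settle;",
          "recovered stored pattern:", np.array_equal(state, patterns[0]))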

But of course the dynamics during those ~100 cycles are the more interesting story, and they're still obscure....

Is it really an attractor we have here, or "just" a nicely patterned transient? A terminal attractor a la Mikhail Zak's work, perhaps? Etc.

Enquiring minds want to know! (TM)