Saturday, August 30, 2008

On the Preservation of Goals in Self-Modifying AI Systems

I wrote down some speculative musings on the preservation of goals in self-modifying AI systems, a couple weeks back; you can find them here:

http://www.goertzel.org/papers/PreservationOfGoals.pdf

The basic issue is: what can you do to mitigate the problem of "goal drift", wherein an AGI system starts out with a certain top-level goal governing its behavior, but then gradually modifies its own code in various ways, and ultimately -- through inadvertent consequences of the code revisions -- winds up drifting into having different goals than it started with. I certainly didn't answer the question, but I came up with some new ways of thinking about the problem, and formalizing it, that I think might be interesting....

While the language of math is used in the paper, don't be fooled into thinking I've proved anything there ... the paper just contains speculative ideas without any real proof, just as surely as if they were formulated in words without any equations. I just find that math is sometimes the clearest way to say what I'm thinking, even if I haven't come close to proving the correctness of what I'm thinking yet...

An abstract of the speculative paper is:


Toward an Understanding of the Preservation of Goals
in Self-Modifying Cognitive Systems


Ben Goertzel



A new approach to thinking about the problem of “preservation of AI goal systems under repeated self-modification” (or, more compactly, “goal drift”) is presented, based on representing self-referential goals using hypersets and multi-objective optimization, and understanding self-modification of goals in terms of repeated iteration of mappings. The potential applicability of results from the theory of iterated random functions is discussed. Some heuristic conclusions are proposed regarding what kinds of concrete real-world objectives may best lend themselves to preservation under repeated self-modification. While the analysis presented is semi-rigorous at best, and highly preliminary, it does intuitively suggest that important humanly-desirable AI goals might plausibly be preserved under repeated self-modification. The practical severity of the problem of goal drift remains unresolved, but a set of conceptual and mathematical tools is proposed which may be useful for more thoroughly addressing the problem.
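(A concrete toy illustration, for the programmers in the audience: the following sketch is mine, not from the paper, and assumes a goal can be crudely encoded as a numerical vector. Each self-modification is modeled as a randomly chosen affine map applied to that vector -- exactly the setting in which the theory of iterated random functions says that, if the maps are contracting on average, the long-run distribution of goals stabilizes even though any individual trajectory wanders...)

```python
# Toy simulation of goal drift under repeated self-modification.
# NOT from the paper -- just an illustration of the "iterated random
# functions" framing, with goals crudely encoded as vectors in R^n.

import numpy as np

rng = np.random.default_rng(0)

def random_affine_map(dim, contraction=0.9):
    """Sample a random affine self-modification f(g) = A g + b."""
    A = contraction * rng.uniform(-1, 1, (dim, dim)) / np.sqrt(dim)
    b = rng.normal(0, 0.05, dim)   # small random shift of the goal
    return lambda g: A @ g + b

def simulate_goal_drift(g0, n_maps=10, n_steps=1000):
    """Repeatedly apply randomly chosen self-modifications to the goal."""
    dim = len(g0)
    maps = [random_affine_map(dim) for _ in range(n_maps)]
    g = np.array(g0, dtype=float)
    for _ in range(n_steps):
        g = maps[rng.integers(n_maps)](g)   # one self-modification step
    return g

g0 = np.ones(4)   # the initial top-level goal, encoded as a vector
g_final = simulate_goal_drift(g0)
print("drift distance from initial goal:", np.linalg.norm(g_final - g0))
```

The hand-wavy moral: if the self-modification maps are "contracting on average" toward the original goal region, drift stays statistically bounded; if not, the goal can wander arbitrarily far. The paper's heuristic conclusions are about which kinds of real-world objectives make the former situation plausible.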

Wednesday, August 27, 2008

Playing Around with the Logic of Play

On the AGI email list recently, someone asked about the possible importance of creating AGI systems capable of playing.

Like many other qualities of mind, I believe that the interest in and capability for playing is something that should emerge from an AGI system rather than being explicitly programmed-in.

It may be that some bias toward play could be productively encoded in an AGI system ... I'm still not sure of this.

But anyway, in the email list discussion I formulated what seemed to be a simple and clear characterization of the "play" concept in terms of uncertain logical inference ... which I'll recount here (cleaned up a bit for blog-ification).

And then at the end of the blog post I'll give some further ideas which have the benefit of making play seem a bit more radical in nature ... and, well, more playful ...

Fun ideas to play with, at any rate 8-D

My suggestion is that play emerges (... as a consequence of other general cognitive processes...) in any sufficiently generally-intelligent system that is faced with goals that are very difficult for it.

If an intelligent system has a goal G which is time-consuming or difficult to achieve ... it may then synthesize another goal G1 which is easier to achieve.

We then have the uncertain syllogism


Achieving G implies reward

G1 is similar to G

|-


Achieving G1 implies reward


(which in my Probabilistic Logic Networks framework would be most naturally modeled as an "intensional implication".)

As links between goal-achievement and reward are to some extent modified by uncertain inference (or an analogous process, implemented e.g. in neural nets), we thus have the emergence of "play" ... in cases where G1 is much easier to achieve than G ...
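(For concreteness, here's a back-of-the-envelope rendering of that syllogism in code -- emphatically not the actual PLN truth-value formulas, just the crudest possible stand-in combination rule, where the derived implication inherits strength in proportion to similarity...)

```python
# Crude stand-in for the uncertain play syllogism above.  The numbers and
# the combination rule are illustrative only -- NOT the real PLN formulas.

def derived_reward_strength(p_reward_given_G, similarity_G1_G):
    """Strength of 'Achieving G1 implies reward', inferred by analogy."""
    return p_reward_given_G * similarity_G1_G

p_reward_given_G = 0.9   # achieving G (say, winning a real battle) is rewarded
sim_G1_G = 0.3           # intensional similarity of chess-victory to battle-victory

print(derived_reward_strength(p_reward_given_G, sim_G1_G))   # 0.27
# A modest reward-link forms around G1 -- and since G1 is far easier to
# achieve than G, the system ends up spending its time on G1: it plays.
```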

Of course, if working toward G1 is actually good practice for working toward G, this may give the intelligent system (if it's smart and mature enough to strategize), or evolution, impetus to create additional bias toward the pursuit of G1.

In this view, play is a quite general structural phenomenon ... and the play that human kids do with blocks and sticks and so forth is a special case, oriented toward ultimate goals G involving physical manipulation.

And the knack in gaining anything from play (for the goals that originally inspired the play) is in appropriate similarity-assessment ... i.e. in measuring similarity between G and G1 in such a way that achieving G1 actually teaches things useful for achieving G.

But of course, play often has indirect benefits and assists with goals other than the ones that originally inspired it ... and, due to its often stochastic, exploratory nature it can also have an effect of goal drift ... of causing the mind's top-level goals to change over time ... (hold that thought in mind, I'll return to it a little later in this blog post...)

The key to the above syllogism seems to be similarity-assessment. Examples of the kind of similarity I'm thinking of:

  • The analogy between chess or go and military strategy
  • The analogy between "roughhousing" and actual fighting

In logical terms, these are intensional rather than extensional similarities.

So for any goal-achieving system that has long-term goals which it can't currently effectively work directly toward, play may be an effective strategy...

In this view, we don't really need to design an AI system with play in mind. Rather, if it can explicitly or implicitly carry out the above inference, concept-creation and subgoaling processes, play should emerge from its interaction with the world...

Note that in this view play has nothing intrinsically to do with having a body. An AGI concerned solely with mathematical theorem proving would also be able to play...

Another interesting thing to keep in mind when discussing play is subgoal alienation.

When G1 arises as a subgoal of G, it may nevertheless happen that G1 survives as a goal even if G disappears; or that G1 remains important even if G loses importance. One may wish to design AGI systems to minimize this phenomenon, but it certainly occurs strongly in humans.

Play, in some cases, may be an example of this. We may retain the desire to play games that originated as practice for G, even though we have no interest in G anymore.

And, subgoal alienation may occur on the evolutionary as well as the individual level: an organism may retain interest in kinds of play that resemble its evolutionary predecessors' serious goals, but not its own!

Bob may have a strong desire to play with his puppy ... a desire whose roots were surely encoded in his genome due to the evolutionary value in having organisms like to play with their own offspring and those of their kin ... yet, Bob may have no desire to have kids himself ... and may in fact be sterile, dislike children, and never do anything useful-to-himself that is remotely similar to his puppy-playing obsession.... In this case, Bob's "purely playful" desire to play with his puppy is a result of subgoal alienation on the evolutionary level. On the other hand, it may also help fulfill other goals of his, such as relaxation and the need for physical exercise.

This may seem a boring, cold, clinical diagnosis of something as unserious and silly as playing. For sure, when I'm playing (with my kids ... or my puppy! ... or myself ... er ... wait a minute, that doesn't work in modern English idiom ;-p) I'm not thinking about subgoal alienation and inference and all that.

But, when I'm engaged in the act of sexual intercourse, I'm not usually thinking about reproduction either ... and of course we have another major case of evolution-level and individual-level subgoal alienation right there....

In fact, writing blog entries like this one is largely a very dry sort of playing! ... which helps, I think, to keep my mind in practice for more serious and difficult sorts of mental exercise ... yet even if it has this origin and purpose in a larger sense, in the moment the activity seems to be its own justification!

Still, I have to come back to the tendency of play to give rise to goal drift ... this is an interesting twist that apparently relates to the wildness and spontaneity that exists in much playing. Yes, most particular forms of play do seem to arise via the syllogism I've given above. Yet, because it involves activities that originate as simulacra of goals that go BEYOND what the mind can currently do, play also seems to have an innate capability to drive the mind BEYOND its accustomed limits ... in a way that often transcends the goal G that the play-goal G1 was designed to emulate....

This brings up the topic of meta-goals: goals that have to do explicitly with goal-system maintenance and evolution. It seems that playing is in fact a meta-goal, quite separately from the fact of each instance of playing generally involving an imitation of some other specific real-life goal. Playing is a meta-goal that should be valued by organisms that value growth and spontaneity ... including growth of their goal systems in unpredictable, adaptive ways....

w0000000t!!!!

Friday, August 22, 2008

Machine Consciousness (report from the Nokia Workshop)

I just got finished with the two-day Workshop on Machine Consciousness that Pentti Haikonen organized at Nokia Research, in Helsinki.

I probably wouldn't have come to Finland just for this gathering, but it happened that I was really curious to meet the people at RealXTend, the Finnish open-source-virtual-worlds team Novamente has been collaborating with (with an aim toward putting our virtual pets in RealXTend) ... so the workshop plus RealXTend was enough to get me on a plane to Helsinki (with a side trip to Oulu where RealXTend is located).

This blog post quasi-randomly summarizes a few of my many reactions to the workshop....

Many of the talks were interesting, but as occurs at many conferences, the chats in the coffee and meal breaks were really the most rewarding part for me...

I had not met Haikonen personally before, though I'd read his books; and I also met a lot of other interesting people, both Finnish and international....

I had particularly worthwhile chats with a guy named Harri Valpola, a Finnish computational neuroscience researcher who is also co-founder of an AI company initially focused on innovative neural-net approaches to industrial robotics.

Harri Valpola is the first person I've talked to who seems to have independently conceived a variant of my theory of how brains may represent and generate abstract knowledge (such as is represented in predicate logic using variables and quantifiers). In brief, my theory is that the brain can re-code a neural subnetwork N so that the connection-structure of N serves as input to some other subnetwork M. This lets the brain construct "higher order functions" as used in combinatory logic or Haskell, which provide a mathematically equivalent alternative to traditional predicate-logic formulations. Harri's ideas did not seem exactly identical to this, but he did have the basic idea that neural nets can generate abstraction via having subnets take as input aspects of the connection structure of other nets.
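(A tiny toy sketch of the shared idea, for concreteness -- my illustration here, not Harri's model nor a serious version of my own theory. Reduce a "subnetwork" to a bare weight matrix; then the trick is simply that one network's connection structure can be flattened and fed to another network as an ordinary input vector, making the second network a higher-order function...)

```python
# Toy illustration: subnetwork M takes subnetwork N's connection
# structure as input, making M a "function of functions" -- the neural
# analogue of higher-order functions in combinatory logic or Haskell.

import numpy as np

rng = np.random.default_rng(1)

def make_subnet(n_in, n_out):
    """A 'subnetwork' here is nothing but its weight matrix."""
    return rng.normal(0, 1, (n_out, n_in))

def run_subnet(W, x):
    return np.tanh(W @ x)

N = make_subnet(3, 3)    # an ordinary first-order subnetwork
M = make_subnet(9, 4)    # M's input dimension equals the size of N's
                         # connection structure (3 x 3 = 9 weights)

x = np.array([1.0, -0.5, 0.2])
y1 = run_subnet(N, x)             # N processes ordinary sensory data
y2 = run_subnet(M, N.flatten())   # M processes N *itself*: a higher-order map

print(y1, y2)
```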

Once again I was struck by the way different people, from totally different approaches, may arrive at parallel ideas. I arrived at these particular ideas via combinatory logic and then sought a neuroscience analogue to combinatory logic's higher-order functions, whereas Harri arrived at them via a more straightforward neuroscience route. So our approaches have different flavors and suggest different research directions ... but ultimately they may well converge on the same core idea.

I don't have time to write summaries of the various talks I saw or conversations I had, so I'll just convey a few general impressions of the state of "machine consciousness" research that I got while at the conference.

First of all, I'm grateful to Pentti Haikonen for organizing the workshop -- and I'm pleased to see that the notion of working on building conscious, intelligent machines, in the near term, has become so mainstream. Haikonen is a researcher at a major industry research lab, and he's explicitly saying that if the ideas in his recent book Conscious Robots are implemented, the result will be a conscious intelligent robot. Nokia does not seem to have placed a tremendous amount of resources behind this conscious-robot research program at present, but at least they are taking it seriously, rather than adopting the skeptical attitude one might expect from talking to the average member of the AAAI. (My own view is that Haikonen's architecture lacks many ingredients needed to achieve human-level AGI, but could quite possibly produce a conscious animal-level intelligence, which would certainly be a very fascinating thing....)

The speakers were a mix of people working on building AI systems aimed at artificial consciousness, and philosophers investigating the nature of consciousness in a theoretical way. A few individuals with neuroscience background were present, and there was a lot of talk about brains, but the vast majority of speakers and participants were from the computer science, engineering or philosophy worlds, not brain science. The participants were a mix of international speakers, local Finns with an interest in the topic (largely from local universities), and Nokia Research staff (some working in AI-related areas, some with other professional foci but a general interest in machine consciousness).

Regarding the philosophy of consciousness, I didn't feel any really new ground was broken at the workshop, though many of the discussants were insightful. As a generalization, there was a divide between participants who felt that essentially any machine with a functioning perception-action-control loop was conscious, versus those who felt that a higher level of self-reflection was necessary.

My own presentation from the workshop is here ... most of it is cut and pasted from prior presentations on AGI, but the first 10 slides or so are new and discuss the philosophy of consciousness specifically (covering content previously given in my book The Hidden Pattern and various blog posts). I talked for half an hour and spent the first half on philosophy of consciousness, and the second half on AGI stuff.

I was the only vocal panpsychist at the workshop ... i.e. the only one maintaining that everything is conscious, and that it makes more sense to think of the physical world as a special case of consciousness (Peirce's "matter is merely mind hide-bound with habits") than to think of consciousness as a special case of the physical world. However, one Finnish philosopher in the audience came up to me during a coffee break and told me he thought my perspective made sense, and that he was happy to see some diversity of perspective at the workshop (i.e. to see a panpsychist there alongside all the hard-core empiricists of various stripes).

My view on consciousness is that raw consciousness, Peircean First, is an aspect of everything ... so that in a sense, rocks and numbers are conscious, not just mice and people. However, different types of entities may have qualitatively different kinds of consciousness. For instance, systems that are capable of modeling themselves and intelligently governing their behavior based on their self-models, may have what I call "reflective consciousness." This is what I have tried to model with hypersets, as discussed in my presentation and in a prior blog post.
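(For the curious: a hyperset is a non-well-founded set, i.e. one that may contain itself as a member. The following fragment is a loose illustration of the flavor, not the formal hyperset model from my presentation -- it just shows the structural signature at issue, a self-model that includes itself among the things it models...)

```python
# Loose illustration of the hyperset flavor of reflective consciousness:
# a self-model that, non-well-foundedly, contains itself as a member.

class SelfModel:
    """A model-of-the-world that, hyperset-style, includes itself."""
    def __init__(self):
        self.models = [self]      # Omega = {Omega, ...}: non-well-founded

    def observes(self, thing):
        self.models.append(thing)

    def is_reflective(self):
        return self in self.models

m = SelfModel()
m.observes("the external world")
print(m.is_reflective())   # True -- the self-model models itself
```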

Another contentious question was whether simple AI systems can display consciousness, or whether there's a minimal level of complexity required for it. My view is that reflective consciousness probably does require a fairly high level of complexity -- and, furthermore, I think it's something that pretty much has to emerge from an AI system through its adaptive learning and world-interaction, rather than being explicitly programmed-in. My guess is that an AI system is going to need a large dynamic knowledge-store and a heck of a lot of experience to be able to usefully infer and deploy a self-model ... whereas, many of the participants in the workshop seemed to think that reflective consciousness could be created in very simple systems, so long as they had the right AI architecture (e.g. a perception-action-control loop).

Since my view is that
  • consciousness is an aspect of everything
  • enabling the emergence of reflective consciousness is an important part of achieving advanced AGI
my view of machine consciousness as a field is that
  • the study of consciousness in general is part of philosophy, or general philosophical psychology
  • the study of reflective consciousness is an important part of cognitive science, which AGI designers certainly need to pay attention to
One thing we don't know, for example, is which properties of human reflective consciousness emanate from general properties of reflective consciousness itself, and which ones are just particular to the human brain. This sort of fine-grained question didn't get that much time at the workshop, and I sorta wish it had -- but, maybe next year!

As an example, the "7 +/- 2" property of human short-term memory seems to have a very big impact on the qualitative nature of human reflective consciousness ... and I've always wondered to what extent it represents a fundamental property of STM versus just being a limitation of the brain. It's worth noticing that other mammals have basically the same STM capacity as humans do.

(I once speculated that the size of STM is tied to the octonion algebra (an algebra that I discussed in another, also speculative cog-sci context here), but I'm not really so sure about that ... I imagine that even if there are fundamental restrictions on rapid information processing posed by algebraic facts related to octonions, AI's will have tricky ways of getting around these, so that these fundamental restrictions would be manifested in AI's in quite different ways than via limited STM capacity.)

However, it's hard to ever get to fine-grained points like that in broad public discussions of consciousness ... even among very bright, well-intentioned expert researchers ... because discussion of consciousness seems to bring up even more contentious, endless, difficult arguments among researchers than the discussion of general intelligence ... in fact consciousness is a rare topic that is even harder to discuss than the Singularity!! This makes consciousness workshops and conferences fun, but also means that they tend to get dominated by disagreements-on-the-basics, rather than in-depth penetration of particular issues.

It's kind of hard for folks who hold different fundamental views on consciousness -- and, in many cases, also very different views on what constitute viable approaches to AGI -- to get into deep, particular, detailed discussions of the relationship between consciousness and particular AI systems!

In June 2009 there will be a consciousness conference in Hong Kong. This should be interesting on the philosophy side -- if I go there, I bet I won't be the only panpsychist ... given the long history of panpsychism in various forms in Oriental philosophy. I had to laugh when one speaker at the workshop got up and stated that, in studying consciousness, he not only didn't have any answers, he didn't know what were the interesting questions. I was tempted to raise my hand and suggest he take a look at Dharmakirti and Dignaga, the medieval Buddhist logicians. Buddhism, among other Oriental traditions of inquiry, has a lot of very refined theory regarding different states of consciousness ... and, while these traditions have probably influenced some modern consciousness studies researchers in various ways (for example my friend Allan Combs, who has sought to bridge dynamical systems theory and Eastern thought), they don't seem to have pervaded the machine-consciousness community very far. (My own work being an exception ... as the theory of mind on which my AI work is based was heavily influenced by Eastern cognitive philosophy, as recounted in The Hidden Pattern.)

I am quite eager to see AI systems like my own Novamente Cognition Engine and OpenCogPrime (and Haikonen's neural net architecture, and others!!) get to the point where we can study the precise dynamics by which reflective consciousness emerges from them. Where we can ask the AI system what it feels or thinks, and see which parts of its mind are active in relation to the items it identifies as part of its reflective consciousness. This, along with advances in brain imaging and allied advances in brain theory, will give us a heck of a lot more insight....

Wednesday, August 13, 2008

Me on "The Future and You" podcast

I did an interview recently on "The Future and You" podcast ... interested parties may, er, feast their ears at

http://www.thefutureandyou.libsyn.com/index.php?post_id=368080

Topics include AGI, uploading, the nature and ultimate immateriality of the self, continuity of consciousness, and all manner of other expectable rambling blablabla ;-)

Saturday, August 09, 2008

In Memoriam Leo Zwell

History is a damn dim candle
over a damn dark abyss
-- W.S. Holt


My grandfather Leo Zwell died last week at age 92, so I thought I'd write a blog post (inadequately) commemorating his existence and lamenting his passing.

Leo and me, 1967

What a really exceptional person he was, and how glad I am to have known him.

He was a crystallographer by profession, and a really outstanding grandfather, but most of all I'll remember him as two things (in no particular order):

  • an inquisitive, careful, always-processing, generally-interested mind
  • a caring, loving human who always wanted to help, and to see that others were doing well
His attitude toward humanity was a subtle one, alluded to by a phrase I remember from a poem I wrote for him and my grandmother when I was 18 or so: "Cynically and with innocence." Of course, this is a rather typically Eastern-European-Jew sort of almost-but-not-quite-self-contradictory attitude, but he manifested it in a uniquely warm and thoughtful way.

He saw humans as hopelessly flawed, screwed-up animal creatures, dealt a mixed hand by evolution (a frequent saying of his was, "We're really just animals. Considering that we're really just animals like all the other animals, we're really not so bad ... we've actually come a long way.") -- yet he was dedicated to getting the most out of the flawed human-animal mind by understanding the world around him and encouraging others to do the same ... and to helping others nudge their flawed human existences in the direction of satisfaction and growth and cognizance rather than suffering and foolishness....

He thought people sometimes expected too much of humanity, given that after all we're just animals with mildly hyperdeveloped craniums. But then, he was also prone to push people to think more and more, to consider perspectives and avenues beyond what their culture or personality or habitual mind-set presented them with. It wasn't exactly "hope for the best, expect the worst" -- more like "expect the mixed-up and confused, but keep pushing for the better and better."

In conversations and in his own thinking on everyday or scientific topics, he was always willing to approach the world simply, in the manner of a child, just looking at reality without preconceived notions ... yet was also extremely interested in accumulating knowledge, and critical of people who govern their thinking and living in an ignorant way.... He was always great with children (at least until his very last years, when he occasionally became impatient), and one of the things he liked about kids was their natural inclination to be scientists and observers: he was always concerned to carefully and openly observe the world around him (and within him) and see what was actually there.

On a personal level, he was extremely important to me in two specific ways (beyond just being a loving and helpful family member), connected to the two qualities I mentioned above...

Firstly: He was the one who got me into hard science, in the first place. Both of my parents were into social change and social science and such, and the general milieu of my early youth was left-wing hippie nonconformists, not hard scientists concerned with understanding physical or mathematical reality. Whenever we visited Leo (and we moved near him when I was 6, so I saw him often after that) he lectured me on physics, chemistry, biology, mathematics and what-not ... and he had an endless reserve of stories about experiments he'd done, famous and non-famous scientists he'd worked with, and so forth.

Leo taught me some practical things about science and math, such as tricks for doing mental arithmetic ... and simple trigonometry, Newton's method, the basics of X-ray diffraction and so forth -- but that wasn't really the main point ... I could learn that stuff from books ... the main point was his enthusiasm for science and his immersion in the culture of science ... in the culture of thinking, learning, and communicating with a goal of incrementally understanding more and more about the world.... The cognitive/adaptive/communicational attitude was something he applied to every aspect of existence, including everyday human life, but he saw the scientific sphere as the place where thought and in-depth communication could really flourish.

The other major gift he gave me: as a child, he was the only example I saw, up-close, of a man who had gotten really deeply into taking care of his kids. My father Ted Goertzel was a very good father and I learned an awful lot from engaging with him in wide-ranging intellectual conversations throughout my childhood ... we also traveled a lot together and played sports together and did many other rewarding things ... but, in our house when I was a child it was my mom who did the vast bulk of childcare. Leo, on the other hand, had himself taken on a large amount of the responsibility for taking care of my mother and her brother -- and I could see this, even as a child myself, in the nature of his relationship with his adult children. I thought this was pretty interesting, and I could see that both he and his children had gained a huge amount from this (at the time, quite unusual) pattern.

Seeing the relationship between Leo and my mom was a lot of what imprinted me with the idea that caring for my kids was something I was supposed to do myself (another thing pushing me in this direction, was probably my mother's own radical-feminist emphasis on male-female equality) ... an idea that continues to absorb a significant portion of my life, as I have ~50% shared-with-my-ex-wife custody of my two still-under-18 kids (my oldest, Zar, is 18 and now a junior in university: damn that makes me feel old!!).

It is impossible for me to estimate the amount of personal reward I've gotten from following Leo down the child-caring path ... or the amount to which my thinking about human and AI cognition has been influenced by carefully observing and partaking of the mental and emotional development of my 3 kids....

These days it is not that shocking for a father to take an intensive hands-on role, but for Leo's generation it was anomalous, and -- while there were some specific situational reasons that pushed him in this direction, such as some temporary health problems on his wife's part -- I'm sure much of the reason he took on this role as intensively as he did was just his overall passion for being helpful. His kids needed and valued his help, so from his view, it would have been unnatural not to provide as much help as was sensibly possible. He could have left it to his wife, but as a rule, he never really was one to leave things to others (another meme I've adopted from him: I too like to Get Things Done, and have a strong tendency toward getting them done myself rather than relying on others...).

I only knew Leo for the last 41/92 of his life, and I'm not going to try to convey nearly all that I knew of him, but I hope the observations I'll make here will transmit some meaningful (if tiny) fraction of the essence of the person.

This blog post will be sort of disorganized (it already is, and I'm just getting started!) ... I'm going to jump around in time a bit ... but that's the way memory works ... and anyway, as Leo and I discussed a few times, modern physics tells us the directionality of time is projected by the perceiving mind...

The end was distressing but, at least, not marked by great suffering. Leo's body faded gradually during the last few years, with an especially fast decline in the last 6 months after a surgery ... his death itself was about as humane as deaths get, with his daughter (my mother, who did an incomprehensibly great job of caring for him during his last years, all while holding down a very demanding more-than-full-time job running a social service program) at his side. He spent much of his last few hours listening to family members bid him farewell over the phone, and then he curled up in a fetal position and reluctantly let his worn-out body shut down.

One of the most interesting things about the last few years was what an astute observer he was about his own mental and physical deterioration. As each aspect of his functioning declined (arithmetic ability, memory for faces, memory for names, sense of direction), he would note this and analyze the particular nature of the deterioration. And he worried to the end about imposing a burden on his family, to which we would always reply that he had helped us a huge amount in the earlier part of his life, so we were more than happy to return the favor.

Indeed, helping others was -- as I already hinted above -- one of the biggest themes of Leo's life. The receptionist at the old folks' home where he lived for his last few years (Martin's Run) likes to tell a story about the day that Leo felt she looked bored and hungry sitting at her desk by the entrance, so he went to the kitchen and brought her some soup. Walking was slow for him at that time, but, he thought it was imperative that the receptionist be treated well. I don't remember any of the other residents at Martin's Run paying nearly so much attention to the well-being of the hired staff. But this meme went way back in Leo's life, to his roots growing up in Brooklyn during the Great Depression, and then his political involvement in socialist organizations a little later on.

(His interest in socialism was never an abstract or theoretical thing; while, as a scientist, he appreciated very much the value of good theories with explanatory power, he never took much stock in social theories, considering them largely mumbo-jumbo. His interest in socialism was always very practical: he saw the world as full of people who needed help, and he thought that society should be organized in such a way that they got the help they needed. After the truth about the Soviet Union became clearer in the 1970's and 1980's he backed away from the more Marxist variants of socialism, but remained strongly attached to the caring-oriented values at the heart of the democratic socialist tradition.)

The (woefully inadequate) obituary in the Philadelphia Inquirer, after his death, read as follows:

Leo Zwell, 92, a retired scientist with the Joint Committee of X-Ray Powder Diffraction Standards in Swarthmore, died of heart failure on Wednesday at Martin's Run, a retirement community in Media where he had lived since 2002.

Mr. Zwell was a physicist with the Jet Propulsion Laboratory at the California Institute of Technology, U.S. Steel, and the U.S. Bureau of Standards before joining the Swarthmore firm in 1972.

He was a 1934 graduate of Brooklyn College. "He graduated at the age of 19," said his son, Michael, chief executive officer of his own human resources firm in Chicago.

"He was working full time" while in college "to put himself through school and contribute to his family's income," his son said. In the depths of the Depression, there was no money for advanced studies.

The Joint Committee, Michael Zwell said, is a publisher of research about X-raying of materials "to identify what their atomic structure was."

Besides his son, he is survived by his daughter, Carol Goertzel; sisters Gladys Berman and Priscilla Endler; four grandchildren; and a step-grandchild.

Mr. Zwell's wife of 54 years, Etta, died in 1994.

A memorial service will be held at 12:30 p.m. today at Martin's Run, near Route 320 and Paxon Hollow Road outside Media.

A spokeswoman for the Humanity Gifts Registry said Mr. Zwell had donated his body to science.


(Small digressive note: Perhaps the most peculiar thing about this obituary is its failure to mention that my mother Carol Goertzel runs a very successful social service program in the Philadelphia area. Apparently some obituary-writers don't do much homework.)


me and my mom Carol (Leo's daughter),
kickin' it old school back in the good old days ...
1968 or so, Eugene, Oregon,
in the midst of all the hippy madness...
but I was more concerned with my box
than science or revolt or the Vietnam war...

Leo graduated high school at 14 (so he wound up entering college at 15, just as I did ... though he was a younger 15) and went to Brooklyn College, where he majored in chemistry. Unlike me (whew!) he worked full-time while in college, helping out in his father's parking garage (which he found rewarding though at times exhausting ... and one of his favorite topics of discourse was the incredible ethical and intellectual characteristics of his father, Pop Charlie ... Charlie Zwagilski before immigrating to the US from Eastern Europe and getting his name mutated).

Pop Charlie (holding my sister Rebecca)
and Grandma Sarah (holding me)



Pop Charlie before I knew him, back in the day

Not too long after graduating college Leo joined the WWII war effort, stationed in the lab rather than the battlefield, working on various projects related to creating better metallic compounds for use in missiles and other military devices. Like many others of that time, he found his innate tendency to pacifism called into question by the Nazi threat. All or nearly all of his relatives who had not emigrated to the US were wiped out in one or another anti-Jewish purge in Eastern Europe.

After the war ended, he spent some time as a researcher at the Jet Propulsion Laboratory in California, but eventually had to leave there due to McCarthyism: he was actively involved in efforts to politically organize scientists and engineers, and was hence falsely accused of un-Americanism. This episode had a lasting impact on his psyche, imprinting him with a cynicism about human nature and society that never quite left him after that. But, he went on to a very successful research career at US Steel in Pittsburgh, a major industrial research lab of its time ... and then after "retiring" from US Steel, spent 20 years working at the Joint Committee of X-Ray Powder Diffraction Standards in Swarthmore PA, where he pored over the crystallography research literature and figured out which of the many results published there seemed solid enough to enter into JCPDS's extensive databases.

(I briefly had a job helping him with some of his JCPDS work during the summer of 1982, after my first year of college, but my duties consisted largely of photocopying and I confess I rapidly quit the job even though the small amount of pay was useful to me ... I did not, and still don't, have the patience to enjoy that sort of work....)

Leo during the JCPDS period


Had he been born a little later (a complicated counterfactual, but let's roll with it for the moment...) he would surely have gotten a PhD; but in his day you could do serious science without one ... and he surely did a lot of it. One trait of his that I did not inherit was his surpassing modesty: though he was brighter and more knowledgeable than many of the more famous and well-recognized scientists in the labs he worked in, he genuinely did not envy their greater glory and external recognition. Rather, he was contented to work in the background, analyzing peoples' data for them, solving the tough problems that no one else could solve, and pushing science ahead by helping others with their work as best he knew how.

Another excellent trait of his that I did not inherit was his incredible carefulness. He would review a body of data again and again, with never-failing concentration and precision, looking for subtle patterns, or subtle clues that some sort of error or irregularity might have occurred. There were various commonly-used formulas in the crystallographic literature that he considered inaccurate because the data from which they were derived had various issues; and I have little doubt he was correct. While I am not cognitively suited to be as careful as he was, watching his approach to understanding data was extremely educational for me, and throughout my career I have sought to work with people who are careful in the same way that he was, knowing that this is a virtue I lack.

He also taught me the value of teams in science. My inclination is to be a lone wolf and seek to range far and wide from others, trying to solve the hardest problems by idiosyncratic means -- but he showed me that in science, an awful lot of things can only be achieved via different people with different strengths working together. What I'm doing now in my own career exemplifies this: in my thinking I remain a pretty-far-out-there lone wolf, but I'm happy to be working with a diverse, well-integrated R&D team that probably shares many characteristics with the teams he worked in at US Steel.

One theme that he often returned to in his ruminations and (extensive, sometimes rather excessive!) storytelling was that of Generalists versus Specialists. He considered himself more of a generalist, as he was always seeking to synthesize knowledge from different areas, and look for overall patterns of organization. On the other hand he also had deep specialized knowledge of particular areas such as X-ray diffraction data analysis. Only by integrating generalist and specialist traits, he felt, was it possible to really make profound scientific progress. He saw too many scientists as being generalists or specialists only, and felt that for this reason a lot less progress was made than could otherwise be the case.

In part, it was probably his very humbleness that allowed him to be so helpful to so many other scientists, during the course of his career. As they knew he didn't care to compete with them, they were comfortable sharing their doubts, questions, ideas and hard problems with him. He was always interested in arguing intellectual points -- with anyone, be they a famous scientist or a two year old child -- but rarely was there any rancor involved ... it was very much passionate, abstract argumentation in the Greek tradition. Ideas meant a lot to him, yet it was rare for him to hold someone's bad ideas against them as a human being. He had friends with whom he profoundly disagreed.

I recall, when my sister Rebecca (now a school principal) was 6, Leo was lecturing her on the need to avoid fashionable clothing and such, because making fashion statements was foolish and involved emphasizing the wrong things. (He liked to say how he'd worn the same clothes all his life, and watched them go in and out of style.) Rebecca argued back that being determinedly unfashionable was itself a kind of fashion statement, so should also be avoided according to his principles. He laughed and agreed with her, quite willing to take lightly his own studied unfashionableness, and complimented her on her thoughtfulness. (Of course, Rebecca ultimately grew up not to be the kind of person who reads fashion magazines....)

One of Leo's and my biggest, long-running arguments regarded the future of humanity.

He believed human nature to be fundamentally flawed, and figured that all attempts to reform society and improve human nature were doomed to fail, based on the fundamentally screwed-up essence of human nature. He saw the scientific process as having a greater perfection than human nature, but still being deeply flawed in various ways due to the underlying flaws in the humans doing the science (which led to, for example, wrong formulas being perpetuated because the individuals advocating them were famous, or because people were simply too lazy to study the underlying data correctly).

When I argued to him that science could in fact be used to improve human nature, by modifying the brain or by uploading people into computers, he basically laughed off the idea. Not that he felt it was impossible; rather, he felt he didn't have the conceptual background to really think about it thoroughly. I really wish I had discussed these topics with him when he was 55 or 60 rather than 75-92, but it just didn't occur to me, mostly because our conversations were more dominated by his own (interesting) interests. The most evocative thing he ever said on this topic was something like: "Fine, Ben, but by the time you modify the human brain to remove all the ethical problems and the foolishness, what you'll have won't be human anymore."

In other words, he saw screwed-up-ness as essential to the nature of humanity ... and he understood himself as inextricably part of this human web of screwed-up-ness ... but nevertheless he felt compelled to devote himself to gaining more and more understanding and helping all the other screwed-up humans as much as possible.

Still, I always had the feeling that if we argued long enough, I might have been able to bring him around to a point of view closer to my own radical futurism. Another of his excellent qualities was his ability to change his ideas and attitudes -- no matter how deep-seated -- based on evidence and reflection. This was evident in his scientific work, and also in his personal life: for instance he was raised with relatively sexist and homophobic attitudes (by modern standards), but gradually revised these over time as he observed they really did not explain what he saw around him, nor accord with his desire for general human happiness.

I also remember arguing with him that it might, one day, be possible to resurrect the dead in a scientific way. My argument was: If quantum theory is correct then all the information about everything that has ever happened is encoded in the perturbations of particles in the universe now, so that in principle dead people could be reconstituted from this information, if a being were smart and powerful enough to collect it and do the appropriate nanoengineering. On the other hand, who knows if quantum theory is correct ... and there are, er, some engineering difficulties in this plan. Leo certainly found this train of thought amusing -- but he was not of the emotional cast to draw any hope from it. As far as he was concerned, once he was dead, he was gone, and that was the end of it ... remote possibilities regarding far-future weirdball engineering feats didn't really enter his emotional world. He did not want to die, but he accepted it, and had no patience for superstitions or wishful thinking.

Another comment he frequently made was that he was "never bored." He claimed not to really understand the concept of boredom. "I always have my own mind," he said. "How can I be bored? There's always so much to think about and to wonder about."

Indeed when your attitudes are "Question Authority!" and furthermore "Question Everything!" (two attitudes that came down to me from both sides of my family with pretty overwhelming force), boredom is hard to come by, because there are always so many things to be questioning....

Even at the end, in fact, when his powers of thought were a fraction of their previous level, he STILL was never-bored, and was always thinking and trying to understand things. At the very end, when his memory was gone, he was continually asking questions about the objects in the hospital room: What is that? What is that for? Who put that there?

In his last few hours, right before he lost the energy to talk, he was counting ... he counted from 1 to 36, slowly and carefully, as if to be sure all the numbers were still there, as if by attaching himself to the Platonic realm of numbers he was connecting with a reality more substantial than his fading body and memory ... as a non-religious person, he had no delusions of heavens or hells, but beyond our own personal worries, concerns and attachments, there is always the more permanent and perfect world of the Numbers ...

He had a great love of measurement as a way of understanding the world, and when we emptied out his drawers, boxes and closets after his death, we found a remarkable number of rulers, yardsticks, protractors, compasses, calculators and slide rules. (At a certain point, in Swarthmore, he adopted the habit of asking each visitor to his apartment, as soon as they walked in the door, to name their height ... and then proceeding to measure them, taking his daily dose of kindly schadenfreude from observing how nearly all males tend to overestimate their own heights.) Perhaps his most prized personal possession was his watch: when in the hospital the nurses took his watch from him, he felt completely at a loss until it was returned. I was reminded of the historical theory that the main reason Western civilization advanced so much further than others (such as the Chinese) was the invention, in the middle ages, of quantification, of precise empirical measurement. His career was based on measuring things and recognizing patterns in these measurements, and he was concerned with this till the end.

Well ... having written all that, it still seems pathetically inadequate, and there is so much more to say. Most of all I have left off the very, very long list of people whom Leo and his wife Etta helped in various ways during their lives -- going beyond family and colleagues, comprising a remarkable assemblage of individuals whom they encountered in one random way or another and tried their best to help on their ways through life.

I loved all 4 of my grandparents (the others died some time ago) ... and my other grandfather, Victor Goertzel, was also an accomplished scientist (a psychologist) with whom I had considerable intellectual interaction (for instance, when I was in my early 20's, Victor and my grandmother Mildred and my father and I co-authored a biography of Linus Pauling together) ... but I have to say that Leo is the only grandparent whom I really internalized, to the point where I sometimes feel like I have a miniature Leo Zwell homunculus living in some obscure corner of my brain, pointing out to me when someone needs help, and pointing out to me when some point on a data chart is likely to be an outlier, and urging me to doubt all my beliefs and ideas, especially the ones that are most important to me.

One of my favorite phrases was taught to me by my friend Bruce Klein, founder of the Immortality Institute and my collaborator in Novamente LLC: "To abolish the plague of involuntary death."

Indeed: few goals are as important. So sad that it did not happen in time for Grandpa Leo.

He had a good life and a very useful one (not always happy, there were bouts of depression and the usual real-life troubles, but overall a richly rewarding human existence) ... but even at 92 years, I can't help thinking that this excellent person's life was far, far too short.

P.S. The photos included in this blog post are among the many we took from Leo's apartment after his death.




ISO a non-religious foundation for the process of "taking responsibility"

This is a re-post (with light edits) of a post I made last week, that got disappeared due to some IT difficulties regarding poor communication between blogger.com and my Web host.

These were some late-night thoughts about the conceptual logic of morality and responsibility, written to the tune of Charlie Parker's glorious "Au Privave" (hot on the heels of "Step into the Realm" by the Roots, one of the few hip-hop bands I like at all...).

I guess what I am inching toward here is some sort of cognitive theory of moral responsibility ... but I'm really inching there, one teeny little piece at a time.... (Well, some topics lend themselves to speed better than others....)

I'll start with will, and then move on to my main topic of taking personal "moral responsibility" for one's actions.

Don't worry, I haven't turned into a preacher yet (though I haven't shaved for a while and am sporting a fairly spiffy Jesus-like beard -- I've been planning to shave for a few days and just haven't found the time); my basic orientation on these topics is one of systems theory....

So, for starters: Anyone with any sense knows by now that the intuitive feeling of "free will" we have is illusory. Our unconscious decides for us, before any conclusion is derived by the process of conscious ratiocination that feels like it's making a decision.

So why bother with the decision process at all? Why not just go with the flow of the non-ratiocinative unconscious? Because we know that the intensely-conscious decision process helps dynamically restructure our long-term memory in a way that will help our unconscious make better decisions in the future.

Next: Anyone with any sense knows that the notion of "moral responsibility" is, to a large extent, a hanger-on from obsolete religious belief systems.

And, the notion of "taking personal responsibility" for one's actions has -- in most particular instances -- questionable empirical grounding. After all, anything any one of us does, is to a large extent caused by our social and physical context -- as Saddam famously said in the South Park movie: "It's not my fault that I'm so evil ... it's society ... society...." Of course, it really IS society ... that cute little pseudo-Saddam wasn't lying ... and in any particular case, none of us really has the information to tease out the internal from the external causes underlying any of our actions ... but yet, this is a poor perspective to take, in spite of the element of truth underlying it.

(Causality, in the end, is not really a scientific concept anyway: it's a tool that minds use to understand the world. A causes B, from the perspective of mind M, if

  • A precedes B
  • The probability of events in class B is differentially higher, given the prior and correlated occurrence of events in class A.
  • M can fairly confidently analogize that, if it were to carry out some action similar to A, then some event similar to B might be likely to follow

But that's a topic for another blog post, another day ... and is covered to some extent in the last chapter of my co-authored book Probabilistic Logic Networks, which Springer is supposed to publish this month...)
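(Though, to pin down the first two bullets of that definition, here's a minimal sketch of how a mind M might score causal evidence from a log of events -- an illustrative operationalization of mine, not a formula from the book; the third, analogical condition is left to M...)

```python
# Minimal operationalization of conditions 1 and 2 above: A precedes B,
# and B's probability is differentially higher following A.

def causal_evidence(events, a, b, window=3):
    """events: labels in temporal order.  Returns (P(b shortly after a),
    base rate of b); evidence for 'a causes b' if the first exceeds
    the second."""
    followed = total_a = b_count = 0
    for i, e in enumerate(events):
        if e == b:
            b_count += 1
        if e == a:
            total_a += 1
            if b in events[i + 1 : i + 1 + window]:
                followed += 1
    p_b_after_a = followed / total_a if total_a else 0.0
    return p_b_after_a, b_count / len(events)

log = ["rain", "wet", "sun", "rain", "wet", "sun", "dry", "rain", "wet"]
print(causal_evidence(log, "rain", "wet"))   # (1.0, 0.33...): a clear lift
```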

Sooo .... Why bother with the process of "taking moral responsibility" at all? Because we know that doing so helps us structure our long-term memory in a way that will help our unconscious take better actions in the future.

When we do something we wish we hadn't done, the act of assigning "responsibility" to ourselves causes us to insert a "correction signal" into our unconscious, which then modifies the structure of our internal declarative/procedural knowledge base in a way that makes it less likely we'll do similar regret-worthy things in the future. This is the case even though we (i.e. the deliberative, ratiocinative, "decision process" aspect of ourselves) don't know that much (rationally or intuitively) about how the unconscious works, and can't really untangle the various causal threads weaving through our minds and our worlds and leading to our actions.
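(As a toy rendering of this "correction signal" notion -- a metaphorical sketch of mine, assuming nothing about how the unconscious is actually implemented: the deliberative self can't inspect the action-selection machinery, but it can feed back a scalar regret signal that nudges the machinery's dispositions, making similar actions less likely next time...)

```python
# Metaphorical sketch of responsibility-taking as a correction signal.
# The dispositions and numbers are hypothetical illustrations.

def take_responsibility(dispositions, action, regret, learning_rate=0.3):
    """Feed a regret signal back to the (opaque) action-selection
    machinery, weakening the disposition toward the regretted action."""
    dispositions[action] -= learning_rate * regret
    return dispositions

# The deliberative self never sees how these dispositions arise;
# it only gets to nudge them after the fact.
dispositions = {"snap_at_friend": 0.7, "count_to_ten": 0.4}
take_responsibility(dispositions, "snap_at_friend", regret=1.0)
print(dispositions)   # snapping is now less attractive to the unconscious
```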

The "ordinary waking state of consciousness" that most people occupy most of the time, involves a coupling of ratiocinative-decision-making with the free-will illusion, and a coupling of moral-responsibility-taking with some semi-religious notion of moral-agency. But it's possible to get into a state of mind where you carry out ratiocinative-decision-making and moral-responsibility-taking without any significant illusions attached to them ... simply because these cognitive dynamics tend to lead to effective overall system functioning.

Now, when I say "it's possible to get into a state of mind where X holds", am I saying that "by exercising one's free will, one can cause oneself to get into a state of mind where X holds" ?

Not really. What I'm saying is that "Sometimes the self-organizing dynamics of a mind coupled with an environment will result in that mind getting into a state of mind where X holds."

And what's the point of me telling you this? Well, some states of mind want to spread from one mind to another....

The basic idea is: If one does not internally take responsibility for one's own actions, one will never send those necessary correction-signals to one's own unconscious. Then one will just keep on doing those regrettable things.

Removing the obsolete, flawed quasi-religious concepts of blame, shame and so forth from one's inner mental landscape is an important step toward becoming a rational and self-aware, fully-realized person; but, once they are removed, they need to be replaced with something else ... they need to be replaced with a recognition of the mind as a holistic, complex dynamical system; and with a recognition of the role of the deliberative, ratiocinative aspect of mind as modulating the complex nonlinear dynamics of the unconscious.

None of us can control ourselves, none of us is fully aware of the dynamics by which we operate (in part because of basic information-theoretic limitations on the extent to which any finite system can understand itself; in part because of information-theoretically unnecessary limitations posed by the human brain architecture, which did not evolve in situations where acute self-awareness and mental self-control were key aspects of the fitness function). But "we" (the deliberative, ratiocinative "phenomenal self" portions of our minds) can modulate the dynamics of the other portions of our minds, via doing things like rational-decision-processes and responsibility-taking....

Our Lovable Species

I thought you might enjoy this email that I (and some others) received this morning. It is fairly typical of its genre, but particularly amusingly worded. I guess it doesn't need much comment; it kinda speaks for itself.

What scares me a little is that this sort of attitude is probably far more common than the transhumanist/Singularitarian/futurist attitude of most of the people I regularly communicate with ... and would probably be even more common if more people were aware of the kind of work that futurist-minded scientists and engineers are doing!

Clearly we need some more Hollywood flicks in which (preferably really hot-looking) AGI's save the world....


From: ********
Date: Sat, Aug 9, 2008 at 3:23 AM
Subject: I hope you realize.....

To: ****

I hope you people realize the immense stupidity, not intelligence, of what youre undertaking.

Anyone with any common sense would not allow this kind of research to reach its natural conclusion: the destruction of our humanity, such as what the transhumanists want to do.

Implants which are used to communicate across networks, or AI's so powerful that they literally take over normal human responsiblities. AI's would conclude we are incapable of caring for ourselves without help if they got advanced enough...


Take Kevin Warwiks work at the University of Reading. Absolutely monstrous. Everything he does, and everything you people do creates major issues in ethics and bioethics.

Trying to make people better with machines...We already are better then any cyborg you can make.


Anyone with any kind of common sense would reject these sort of things. Keep computers in a sense below where we are at.


Anybody who even thought about this, would say hey isnt there a problem with sticking machines in people, first it will be the disabled at first, sure, then normal people because somehow its cool and then alot of other horrible things...


Our technology is outpacing our wisdom, our humanity, our ability to comprehend the ethical boundaries beyond this, when you start getting into the term super intelligence.


The super intelligent thing would be to shutdown implants that react directly with the human mind, exceptions being people that medically benefit, such as alzheimers, the crippled, etc, but they should never be in perfectly healthy human beings.

If God had intended that machines be a part of us, he would have made us like that. But who knows if you guys even believe in Him.

No matter, just be aware of the dangerous waters you swim in. Our technology is so far ahead of our spirituality, that it represents a singularity of its own. A black hole that will suck us into oblivion unless you take a step back and realize the dangers of all this heady research.


So stop a bit, smell the roses, then reflect on this whole madness, Heres to You Mr Warwick of Reading, God help us all.