Friday, July 22, 2005

P.S. on objective/subjective reality and consciousness (and future virtual Elvises)

Well, I started writing a followup to my previous blog entry on subjective/objective reality, dealing with issues relating to consciousness and qualia, but it got way too big for a reasonable blog entry, and so I've posted it as an HTML document:

http://www.goertzel.org/new_essays/QualiaNotes.htm

But it's still rough and informal and speculative in the manner of a blog entry, rather than being a really polished essay.

Of course, I have plenty more to say on the topic than what I wrote down there, but -- well -- the usual dilemma ... too many thoughts, too little time to write them all down... I need to prioritize. Entertaining, speculative philosophy only gets a certain fraction of my time these days!

BTW, I wrote about 1/3 of those notes while watching "Jailhouse Rock" with the kids, but I don't know if Elvis's undulating pelvis had any effect on the style or contents of the essay or not. (Wow -- the Elvis phenomenon really makes piquant the whole transhumanist dilemma of "Is humanity really worth preserving past the Singularity or not?"!! ... A decent helping of art, beauty and humor exists there in Elvis-land, sure -- but along with such a whopping dose of pure and unrefined asininity --- whoa.... )

How many of you readers out there agree that the first superhuman AI should be programmed to speak to humans through a simulation of Elvis's face??? ;-D

Tuesday, July 19, 2005

Objective versus subjective reality: Which is primary?

This post is a purely intellectual one -- playing at the border between "blog entry" and "brief philosophical essay"..... It transmits a small portion of the philosophical train of thought I undertook while wandering with Izabela at White Sands National Monument a few weeks ago. Much of that train of thought involved issues such as free will and the emergence of notions of self, will and reality in the infant's mind (the epigenesis of conceptual structures and cognitive dynamics in the infant and toddler mind is much on my mind these days, because in the Novamente AI project we're working on putting together a demonstration of Novamente progressing through the earlier of Jean Piaget's stages of child cognitive development). But what I'll discuss here today is a bit different from that: the relation between objective and subjective reality.

One of my motivations for venturing into this topic is: I've realized that it's wisest to clearly discuss the issue of reality before entering into issues of consciousness and will. Very often, when I try to discuss my theory of consciousness with people, the discussion falls apart because the people I'm talking to want to assume that objective reality is primary, or else that subjective experiential reality is primary. Whereas, to me, a prerequisite for intelligently discussing consciousness is the recognition that neither of these two perspectives on being is primary -- each has its own validity, and each gives rise to the other in a certain sense.

OK, so ... without further ado... : There are two different ways to look at the world, and I have some sympathy for both of them.

One way is to take the objective world described by science and society as primary, and to look at the subjective worlds of individuals as approximations to objective reality, produced by individual physical systems embedded within physical reality.

Another way is to view the subjective, experiential world of the individual (mine, or yours) as primary, and look at "objective reality" as a cognitive crutch that the experiencing mind creates in order to make use of its own experience.

I think both of these views are valid and interesting ones -- they each serve valuable purposes. They don't contradict each other, because the universe supports "circular containment": it's fine to say "objective reality contains subjective reality, and subjective reality contains objective reality." The theory of non-well-founded sets shows that this kind of circularity is perfectly consistent in terms of logic and mathematics. (Barwise and Etchemendy's book "The Liar" gives a very nice exposition of this kind of set theory for the semi-technical reader. I also said a lot about this kind of mathematics in my 1994 book Chaotic Logic, see a messy rough draft version of the relevant chapter here ... (alas, I long ago lost the files containing the final versions of my books!!))

But it's also interesting to ask if either of the two types of world is properly viewed as primary. I'll present here an argument that it may make sense to view either subjective or objective reality as primary, depending on the level of detail with which one is trying to understand things.

My basic line of argument is as follows. Suppose we have two entities A and B, either of which can be derived from the other -- but it's a lot easier to derive B from A than to derive A from B. Then, using the principle of Occam's Razor, we may say that the derivation of B from A is preferable, is more fundamental. (For those not in the know, Occam's Razor -- the maxim of preferring the simplest explanation, from among the pool of reasonably correct ones -- is not just a pretty little heuristic, but is very close to the core of intelligent thought. For two very different, recent explorations of this theme, see Marcus Hutter's mathematical theory of general intelligence; and Eric Baum's book What is Thought (much of which I radically disagree with, but his discussion of the role of Occam's Razor in cognition is quite good, even though he for some reason doesn't cite Ray Solomonoff who conceived the Occam-cognition connection back in the 1960's)).

I will argue here that it's much easier to derive the existence of objective reality from the assumption of subjective reality, than vice versa. In this sense, I believe, it's sensible to say that the grounding of objective reality in subjective reality is primary, rather than vice versa.

On the other hand, it seems that it's probably easier to derive the details of subjective reality from the details of objective reality than vice versa. In this sense, when operating at a high level of precision, it may be sensible to say that the grounding of subjective reality in objective reality is primary, rather than vice versa.

Suppose one begins by assuming "subjective reality" exists -- the experienced world of oneself, the sensations and thoughts and images and so forth that appear in one's mind and one's perceived world. How can we derive from this subjective reality any notion of "objective reality"?

Philip K. Dick defined objective reality as "that which doesn't go away even when you stop believing in it." This is a nice definition but I don't think it quite gets to the bottom of the matter.

Consider the example of a mirage in the desert -- a lake of water that appears in the distance, but when you walk to its apparent location, all you find is sand. This is a good example of how "objective reality" arises within subjective reality.

There is a rule, learned through experience, that large bodies of water rarely just suddenly disappear. But then, putting together the perceived image of a large body of water, the fact that large bodies of water rarely disappear, and the fact that when this particular large body of water was approached it was no longer there -- something's gotta give.

There are at least two hypotheses one can make to explain away this contradiction:


1. one could decide that deserts are populated by a particular type of lake that disappears when you come near it, or

2. one could decide that what one sees from a distance need not agree with what one sees and otherwise senses from close up.

The latter conclusion turns out to be a much more useful one, because it explains a lot of phenomena besides mirage lakes.

Occam's Razor pushes toward the second conclusion, because it gives a simple explanation of many different things, whereas explanations of form 1 are a lot less elegant, since according to this explanatory style, each phenomenon where different sorts of perception disagree with each other requires positing a whole new class of peculiarly-behaving entity.
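
For those who find toy code clearer than toy prose, here's a little Python sketch of the Occam comparison -- purely illustrative, with made-up description-length numbers, and nothing to do with how Novamente actually scores hypotheses:

    # Style 1 posits a brand-new class of peculiarly-behaving entity for each
    # phenomenon where distant and close-up perception disagree; style 2 posits
    # one general rule covering all of them. (Illustrative costs, in "bits".)

    phenomena = ["mirage lake", "heat shimmer on a road", "bent stick in water",
                 "tiny-looking distant mountain"]

    COST_PER_NEW_ENTITY_CLASS = 50   # describing a whole new kind of object
    COST_OF_GENERAL_RULE = 60        # "distant views needn't match close-up views"
    COST_PER_APPLICATION = 2         # applying an existing rule to a new case

    def cost_style_1(phenomena):
        # a new disappearing-lake-style entity class for every anomaly
        return len(phenomena) * COST_PER_NEW_ENTITY_CLASS

    def cost_style_2(phenomena):
        # one general rule, plus a small cost to apply it each time
        return COST_OF_GENERAL_RULE + len(phenomena) * COST_PER_APPLICATION

    print(cost_style_1(phenomena))   # 200
    print(cost_style_2(phenomena))   # 68

The particular numbers don't matter; what matters is that style 1 pays a big price for every new anomaly, while style 2 pays its price once -- so the gap only widens as more such phenomena are observed.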

Note that nothing in the mirage lake or other similar experiences causes one to doubt the veracity of one's experiences.

Each experience is valid unto itself. However, the mind generalizes from experiences, and takes particular sensations and cognitions to be elements of more general categories. For instance, it takes a particular arrangement of colors to be a momentary image of a "lake", and it takes the momentary image of a lake to be a snapshot of a persistent object called a "lake." These generalizations/categorizations are largely learned via experience, because they're statistically valid and useful for achieving subjectively important goals.

From this kind of experience, one learns that, when having a subjective experience, it's intelligent to ask "But the general categories I'm building based on this particular experience -- what will my future subjective experiences say about these categories, if I'm experiencing the same categories (e.g. the lake) through different senses, or from different positions, etc.?" And as soon as one starts asking questions like that -- there's "objective reality."

That's really all one needs in order to derive objective reality from subjective reality. One doesn't need to invoke a society of minds comparing their subjective worlds, nor any kind of rigorous scientific world-view. One merely needs to posit generalization beyond individual experiences to patterns representing categories of experience, and an Occam's Razor heuristic.
In the mind of the human infant, this kind of reasoning is undertaken pretty early on -- within the first six months of life.

It leads to what developmental psychologists call "object permanence" -- the recognition that, when a hand passes behind a piece of furniture and then reappears on the other side, it still existed during the interim period when it was behind the furniture. "Existed" here means, roughly, "The most compact and accurate model of my experiences implies that if I were in a different position, I would be able to see or otherwise detect the hand while it was behind the chair, even though in actual fact I can't see or detect it there from my current position." This is analogous to what it means to believe the mirage-lake doesn't exist: "The most compact and accurate model of my experiences implies that if I were standing right where that lake appears to be, I wouldn't be wet!"

Notice from these examples how counterfactuality is critical to the emergence of objective from subjective reality. If the mind just sticks to exactly what it experiences, it will never evolve the notion of objective reality. Instead, the mind needs to be able to think "What would I experience if...." This kind of basic counterfactuality leads fairly quickly to the notion of objective reality.
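
Again, for the code-minded, here's a minimal Python sketch of object permanence as counterfactual prediction -- just an illustration of the idea, assuming a trivially compact model ("the hand keeps moving at its last observed velocity"), and in no way a description of how Novamente or an infant brain actually does it:

    # The compact model: a tracked object keeps moving at its last observed
    # velocity, whether or not it is currently visible from where I stand.

    class TrackedObject:
        def __init__(self, position, velocity):
            self.position = position     # last observed position (1-D for simplicity)
            self.velocity = velocity     # last observed velocity

        def predicted_position(self, elapsed_time):
            return self.position + self.velocity * elapsed_time

    def would_detect(obj, elapsed_time, occluder, sees_behind_occluder):
        # Counterfactual query: would I detect the object at this moment,
        # from a (possibly hypothetical) vantage point?
        pos = obj.predicted_position(elapsed_time)
        lo, hi = occluder
        hidden = lo <= pos <= hi and not sees_behind_occluder
        return not hidden

    hand = TrackedObject(position=0.0, velocity=1.0)
    furniture = (2.0, 4.0)    # the occluding furniture spans this interval

    # At t=3 the hand is behind the furniture: undetectable from my actual
    # position, but the model says it's still there, and that I *would* see
    # it from a better vantage point -- which is all "it still exists" means here.
    print(would_detect(hand, 3.0, furniture, sees_behind_occluder=False))  # False
    print(would_detect(hand, 3.0, furniture, sees_behind_occluder=True))   # True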

On the other hand, what does one need in order to derive subjective reality from objective reality? This is a lot trickier!

Given objective reality as described by modern science, one can build up a theory of particles, atoms, molecules, chemical compounds, cells, organs (like brains) and organisms -- and then one can talk about how brains embodied in bodies embedded in societies give rise to individual subjective realities. But this is a much longer and more complicated story than the emergence of objective reality from subjective reality.

Occam's-razor-wise, then, "objective reality emerges from subjective reality" is a much simpler story than the reverse.

But of course, this analysis only scratches the surface. The simple, developmental-psychology approach I've described above doesn't explain the details of objective reality -- it doesn't explain why there are the particular elementary particles and force constants there are, for example. It just explains why objective reality should exist at all.

And this point gives rise to an interesting asymmetry. While it's easier to explain the existence of objective reality based on subjective reality than vice versa, it seems like it's probably easier to explain the details of subjective reality based on objective reality than vice versa. Of course, this is largely speculative, since right now we don't know how to do either -- we can't explain particle physics based on subjectivist developmental psychology, but nor can we explain the nature of conscious experience based on brain function. However, my intuition is that the latter is an easier task, and will be achieved sooner.

So we then arrive at the conclusion that:


  • At a coarse level of precision, "subjectivity spawns objectivity" is a simpler story than vice versa
  • At a higher level of precision, "objectivity spawns subjectivity" is a simpler story than vice versa

So, which direction of creation is more fundamental depends on how much detail one is looking for!

This is not really such a deep point -- but it's a point that seems to elude most philosophers, who seem to be stuck either in an "objective reality is primary" or a "subjective reality is primary" world-view. It seems to me that recognizing the mutual generation of these two sorts of reality is a prerequisite for seriously discussing a whole host of issues, including consciousness and free will. In my prior writings on consciousness and will I have taken this kind of mutual-generationist approach to subjectivity/objectivity for granted, but I haven't laid it out explicitly enough.

All these issues will be dealt with in my philosophy-of-mind book "The Hidden Pattern", which I expect to complete mid-fall. I wish I had more time to work on it: this sort of thinking is really a lot of fun. And I think it's also scientifically valuable -- because, for example, I think one of the main reasons the field of AI has made so little progress is that the leading schools of thought in academic and industrial AI all fall prey to fairly basic errors in the philosophy of mind (such as misunderstanding the relation between objective and subjective reality). The correct philosophy of mind is fairly simple, in my view -- but the errors people have made have been quite complicated in some cases! But that's a topic for future blog entries, books, conversations, primal screams, whatever....

More later ... it's 2AM and a warm bed beckons ... with a warm wife in it ;-> ... (hmm -- why this sudden emphasis on warmth? I think someone must have jacked the air conditioning up way too high!!)

Monday, July 18, 2005

The massive suckage of writing academic research papers / the ontology of time / White Sands

I was a professor for 8 years, so I'm no stranger to the weird ways of academia. But I've been pretty much away from that universe for a while, pursuing commercial software development and independent research. Recently I've re-initiated contact with the world of academic research, because it's become clear that getting some current academic publications on my AI and bioinformatics work will be valuable to my scientific and business pursuits. Egads!! The old frustrations are coming back -- badly enough to spill over into a blog entry....

This is a pretty boring blog entry, I'm afraid: just a long rant about how annoying academic research can be. But I got irritated enough to write this stuff down, so I guess I may as well post it....

I've been working on an academic paper together with my former Webmind colleague Pei Wang, on the topic of "why inference theories should represent truth values using two numbers rather than one." For instance, the inference component of my Novamente AI system represents the truth values of statements using a probability and a "weight of evidence" (which measures, roughly, the number of observations on which the probability is based). Pei's NARS reasoning system uses two-component truth values with a slightly different interpretation.
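
For readers who haven't run into two-component truth values before, here's a rough Python sketch of the flavor of thing I mean. The confidence mapping follows the usual NARS-style form c = n/(n+k); the revision rule shown is just simple evidence-pooling for illustration, not the exact formulas from the paper or from Novamente:

    K = 1.0   # evidential horizon constant

    class TruthValue:
        def __init__(self, strength, count):
            self.strength = strength   # probability-like estimate of the statement
            self.count = count         # weight of evidence: roughly, number of observations

        @property
        def confidence(self):
            # how seriously to take the estimate: 0 with no evidence, -> 1 with lots
            return self.count / (self.count + K)

    def revise(a, b):
        # merge two independent bodies of evidence about the same statement
        total = a.count + b.count
        strength = (a.strength * a.count + b.strength * b.count) / total
        return TruthValue(strength, total)

    # 3 observations suggesting "ravens are black" is always true, versus 30
    # observations suggesting it's only usually true: the better-evidenced
    # estimate dominates, and the merged estimate is held more confidently.
    tv = revise(TruthValue(1.0, 3), TruthValue(0.9, 30))
    print(round(tv.strength, 3), round(tv.confidence, 3))   # 0.909 0.971

The point of the second number is visible right there: a one-number representation would have to throw away the difference between a guess based on 3 observations and one based on 30.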

Now, this is a perfectly decent paper we've written (it was just today submitted for publication), but what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important -- blah, blah, blah, blah, blah.... A point that I would more naturally get across in five pages of clear and simple text winds up being a thirty-page paper!

I'm writing some books describing the Novamente AI system -- one of them, 600 pages of text, was just submitted to a publisher. The other two, about 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time, but they are only semi-technical books, and they don't follow all the rules of academic writing -- for instance, the whole 600-page book has a reference list no longer than those I've seen on many 50-page academic papers, because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write about 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least a full year of work, probably two full years of work, to write publishable academic papers on the material in these books!

For another example, this week I've been reading a book called "The Ontology of Time" by L. Nathan Oaklander. It's a pretty interesting book, in terms of the contents, but the mode of discourse is that of academic philosophy, which is very frustrating to me. It's a far cry from Nietzsche or Schopenhauer style prose -- academic philosophy takes "pedantic" to new heights.... The book makes some good points: it discusses the debate between philosophers promoting the "A-theory of time" (which holds that time passes) and the "B-theory of time" (which holds that there are only discrete moments, and that the passage of time is an illusion). Oaklander advocates the B-theory of time, and spends a lot of space defending the B-theory against arguments by A-theorists that are based on linguistic usage: A-theorists point out that we use a lot of language that implies time passes -- in fact, this assumption is embedded in the tense system of most human languages. Oaklander argues that, although it's convenient for communicative purposes to make the false assumption that time passes, nevertheless if one is willing to spend a lot of time and effort, one can reduce any statement about time passing to a large set of statements about individual events at individual moments.

Now, clearly, Oaklander is right on this point, and in fact my Novamente AI design implicitly assumes the B-theory of time, by storing temporal information in terms of discrete moments and relations of simultaneity and precedence between them, and grounding linguistic statements about time in terms of relationships between events occurring at particular moments (which may be concrete moments or moments represented by quantified mathematical variables).
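
In code, the flavor of a B-theory-style representation is something like the following Python sketch -- illustrative only, and not Novamente's actual data structures: there is no "flowing now," just tenselessly-existing events at discrete moments, with precedence and simultaneity as ordinary relations between them.

    class Event:
        def __init__(self, name, moment):
            self.name = name
            self.moment = moment     # a discrete time index, not a moving "now"

    def precedes(a, b):
        return a.moment < b.moment

    def simultaneous(a, b):
        return a.moment == b.moment

    # "The kettle boiled, then the phone rang" reduces to a precedence relation
    # between two events, with no notion of time passing anywhere in sight.
    boil = Event("kettle boils", moment=7)
    ring = Event("phone rings", moment=9)
    print(precedes(boil, ring))       # True
    print(simultaneous(boil, ring))   # False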

There are also deep connections between the B-theory and Buddhist metaphysics, which holds that time is an illusion and only moments exist, woven together into apparent continua by the illusion-generating faculty of the mind. And of course there are connections with quantum physics: Julian Barbour in "The End of Time" has argued ably that in modern physics there is no room for the notion of time passing. All moments simply exist, possessing a reality that in a sense is truly timeless -- but we see only certain moments, and we feel time moving in a certain direction, because of the way we are physically and psychologically constructed.

But Oaklander doesn't get to the connections with Buddhism and quantum theory, because he spends all his time pedantically arguing for fairly simple conceptual points in amazing amounts of detail. The papers in the book go back 20 years, and recount ongoing petty arguments between him and his fellow B-theorists on the one hand, and the A-theorists on the other. Like I said, it's not that no progress has been made -- I think Oaklander's views on time are basically right. What irritates me is the painfully slow rate of progress these very smart philosophers have made. I attribute their slow rate of progress not to any cognitive deficits on their part, but to the culture and methodology of modern academia.

Obviously, Nietzsche would be an outcast in modern academia -- casting his books in the form of journal papers would really be a heck of a task!

And what if the scientists involved in the Manhattan Project had been forced to write up their incremental progress every step of the way, and fight with journal referees and comb the literature for references? There's no way they would have made the massively rapid progress they did....

And the problem is not restricted to philosophy, of course -- "hard" science has its own issues. In computer science most research results are published at least twice: once in a conference proceedings and once in a journal article. What a waste of the researcher's time, to write the same shit up twice ... but if you don't do it, your status will suffer and you'll lose your research grants, because others will have more publications than you!

Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy "toy problems" that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard "toy problem." As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple "toy" test problems that are popular with journal referees....

Research universities are supposed to be our society's way of devoting resources to advancing knowledge. But they are locked into a methodology that makes knowledge advance awfully damn slowly....

And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don't bother to ram it through the publication mill ... or, generate new knowledge at the rate that's acceptable in academia, and spend half your time wording things politically and looking up references and doing comparative analyses rather than doing truly productive creative research. Obviously, the former approach is a lot more fun -- but it shuts you out from getting government research grants. The only way to get government research money is to move really slowly -- or else to start out with a lot of money so you can hire people to do all the paper-writing and testing-on-toy-problems for you....

Arrrgh! Anyway, I'm compromising, and wasting some of my time writing a small fragment of my research up for academic journal publication, just to be sure that Novamente AI is "taken seriously" (or as seriously as a grand AGI project can possibly be taken by the conservative-minded world we live in).... What a pain.

If society valued AGI as much as it valued nuclear weapons during World War II, we'd probably have superhuman AI already. I'm serious. Instead, those of us concerned with creating AGI have to waste our time carrying out meaningless acts like writing academic papers describing information already adequately described in semi-formal documents, just to be taken seriously enough to ask for research money and have a nonzero chance of getting it. Arrggh!

OK, I promise, the next blog entry won't be as boring as this, and won't be a complaint, either. I've actually been enjoying myself a lot lately -- Izabela and I had a great vacation to New Mexico, where we did a lot of hiking, including the very steep and very beautiful Chimney Canyon route down Mount Sandia, which I'd always wanted to do when I lived in New Mexico, but never gotten around to. Also, we camped out on the dunes in White Sands National Monument, which is perhaps the most beautiful physical location I know of. I can't think of anywhere more hallucinogenic -- psychedelic drugs would definitely enhance the experience, but even without them, the landscape is surprisingly trippy, giving the sensation of being in a completely different universe from the regular one, and blurring the distinction between inside and out....

Most of the time wandering around in White Sands was spent in conversation about the subtleties of the interrelationship between free will and consciousness -- interesting and perhaps valuable ideas that I haven't found time to write down yet, because all my writing-time these last couple weeks has been spent putting already-well-understood ideas into the form of academic papers ;-ppp White Sands is exactly the right place to mull over the structure of your mind, since the landscape itself projects you involuntarily into a kind of semi-meditative state....

Hmmm... maybe I'll write down those ideas about free will and consciousness in the next blog entry. It's tempting to write that stuff now -- but it's 1:25 AM, I think I'll go to sleep instead. Tomorrow, alas, is another day... (I tried to make all the days run into each other by taking Modafinil to eliminate my need for sleep -- but it just wound up upsetting my stomach too much, so I've had to go back to sleeping again: bummer!!)