Tuesday, May 21, 2013

RIP Ray Manzarek


What a bummer to read that Ray Manzarek has died.   

I was born in 1966, and the psychedelic rock of the late 1960s and early 1970s was the music I grew up on.   Later I became more interested in jazz fusion, bebop, classical music and so forth -- but the psychedelic 60s/70s music (Hendrix, Doors, Floyd, Zeppelin) was where my love for music started.  This was the music that showed me the power of music to open up the mind to new realities and trans-realities, to bring the mind beyond itself into other worlds....

Hendrix was and probably always will be my greatest musical hero -- but Ray Manzarek was the first keyboardist who amazed me and showed me the power of wild and wacky keyboard improvisation.   I now spend probably 30-45 minutes a day improvising on the keyboard (and more on weekends!).  I don't have Ray's virtuosity, but even so, keyboard improv keeps my mind free and agile and my emotions on the right side of the border between sanity and madness.  Each day I sit at my desk working, working, working -- and when too much tension builds up in my body or I get stuck on a difficult point, I shift over to the piano or the synth and jam a while.   My frame of mind re-sets, through re-alignment with the other cosmos the music puts my mind in touch with.

The Doors and Ray had a lot of great songs.  But no individual song is really the point to me.  The point is the way Ray's music opens up your mind -- the way, if you close your eyes and let it guide you, you follow it on multiple trans-temporal pathways into other realms, beyond the petty concerns of yourself and society ... and when you return your body feels different and you see your everyday world from a whole new view....

The Singularity, if it comes, will bring us beyond petty human concerns into other realms in a dramatic, definitive way.   Heartfelt, imaginative improvisation like Ray Manzarek's can do something similar, in its own smaller (yet in another sense infinite) way -- opening up a short interval of time into something somehow much broader.

As Ray once said:

“Well, to me, my God, for anybody who was there it means it was a fantastic time, we thought we could actually change the world — to make it a more Christian, Islamic, Judaic Buddhist, Hindu, loving world. We thought we could. The children of the ’50s post-war generation were actually in love with life and had opened the doors of perception. And we were in love with being alive and wanted to spread that love around the planet and make peace, love and harmony prevail upon earth, while getting stoned, dancing madly and having as much sex as you could possibly have.” 


w00t! ... those times are gone, and I was too young in the late 60s early 70s to take part in the "getting stoned and having as much sex as you could possibly have" aspect (that came later for me, including some deep early-80s acid trips to Doors music), but my child self picked up the vibe of that era nonetheless ... all the crazy, creative hippies I saw and watched carefully back then affected more than just my hairstyle....   Somewhat like Steve Jobs, I see the things I'm doing now as embodying much of the spirit of that era.   Ray Manzarek and his kin of that generation wanted to transcend boring, limited legacy society and culture and revolutionize everything and make it all more ecstatic and amazing -- and so do I....


I recall a Simpsons episode where Homer gets to heaven and encounters Jimi Hendrix and Thomas Jefferson playing air hockey.  Maybe my memory has muddled the details, but no matter.   I hope very much that, post-Singularity, one of my uploaded clones will spend a few eons jamming on the keyboard with the uploaded, digi-resurrected Ray Manzarek.

Until then: Rest In Peace, Ray....


Sunday, May 19, 2013

Musing about Mental vs. Physical Energy


Hmmm....

I was talking with my pal Gino Yu at his daughter Oneira's birthday party yesterday … and Gino was sharing some of his interesting ideas about mental energy and force…

Among many other notions that I won't try to summarize here, he pointed out that, e.g., energy (in the sense he meant) is different from arousal as psychologists like to talk about it…  You can have a high-energy state without being particularly aroused -- i.e. you can be high-energy but still and quiescent.

This started me thinking about the relation between "mental energy" in the subjective sense Gino appeared to be intending, and "energy" in physics.

I have sometimes in the past been frustrated by people -- less precise in their thinking than Gino -- talking about "energy" in metaphorical or subjective ways, and equating their intuitive notion of "energy" with the physics notion of "energy."

Gino was being careful not to do this, and to distinguish his notion of mental energy from the separate notion of physical energy.   However, I couldn't help wondering about the connection.   I kept asking myself, during the conversation: Is there some general notion of energy which the physical and mental conceptions both instantiate?

Of course, this line of thinking is in some respects a familiar one, e.g. Freud is full of ideas about mental energy, mostly modeled on equilibrium thermodynamics (rather than far-from-equilibrium thermodynamics, which would be more appropriate as an analogical model for the brain/mind)…

Highly General Formulations of Force, Energy, Etc.

Anyway... here is my rough attempt to generalize energy and some other basic physics concepts beyond the domain of physics, while still capturing their essential meaning.

My central focus in this line of thinking is "energy", but I have found it necessary to begin with "force" ...

Force may, I propose, be generally conceived as that which causes some entity to deviate from its pattern of behavior ...

Note that I've used the term "cause" here, which is a thorny one.   I think causation must be understood subjectively: a mind M perceives A as causing B if according to that mind's world-model,

  • A is before B
  • P(B|A) > P(B)
  • there is some meaningful (to M) avenue of influence between A and B, as evidenced e.g. by many shared patterns between A and B
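To make these criteria concrete, here is a minimal toy sketch in Python (entirely my own illustration -- the WorldModel class and its fields are hypothetical, not part of any existing library) of how a mind M's world-model might flag A as a perceived cause of B:

```python
# Toy sketch: deciding whether a mind M "perceives A as causing B",
# per the three criteria above.  All data structures are hypothetical.

class WorldModel:
    def __init__(self, event_times, joint_prob, marginal_prob, shared_patterns):
        self.event_times = event_times          # event -> subjective time index
        self.joint_prob = joint_prob            # (A, B) -> P(A and B)
        self.marginal_prob = marginal_prob      # event -> P(event)
        self.shared_patterns = shared_patterns  # (A, B) -> number of shared patterns

    def prob_b_given_a(self, a, b):
        return self.joint_prob[(a, b)] / self.marginal_prob[a]

    def perceives_as_cause(self, a, b, min_shared_patterns=1):
        before = self.event_times[a] < self.event_times[b]                # criterion 1
        raises_prob = self.prob_b_given_a(a, b) > self.marginal_prob[b]   # criterion 2
        influence = self.shared_patterns[(a, b)] >= min_shared_patterns   # criterion 3
        return before and raises_prob and influence


# Example: "rain" perceived as causing "wet grass"
m = WorldModel(
    event_times={"rain": 0, "wet_grass": 1},
    joint_prob={("rain", "wet_grass"): 0.28},
    marginal_prob={"rain": 0.3, "wet_grass": 0.35},
    shared_patterns={("rain", "wet_grass"): 3},
)
print(m.perceives_as_cause("rain", "wet_grass"))  # True
```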

So, moving on ... force quickly gives us energy…

Energy, I suggest (not too originally), may be broadly conceived as a quantity that

  • is conserved in an isolated system (or to say it differently: is added or subtracted from a system only via interactions with other systems)
  • measures (in some sense) the amount of work that a certain force gets done, or (potential energy) the amount of work that a certain force is capable of getting done

Now, in the case of Newtonian mechanics,

  • an entity's default pattern of behavior is to move in a straight line at a constant velocity (conservation of momentum), therefore force takes the form of deviations from constant momentum, i.e. it is proportional to acceleration
  • "mass" is basically an entity's resistance to force…
  • energy transferred (i.e. work) = force * distance
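As a tiny sanity check on how these Newtonian definitions hang together (a toy numerical example of my own, not from the original post), one can verify that force times distance equals the change in kinetic energy for a constant force:

```python
# Toy check (my own example): for a constant force in Newtonian mechanics,
# work = force * distance equals the change in kinetic energy (1/2 m v^2).

m = 2.0      # mass (kg) -- the entity's resistance to force
F = 6.0      # constant applied force (N)
t = 5.0      # duration (s)

a = F / m                   # acceleration: deviation from constant velocity
v = a * t                   # final speed, starting from rest
d = 0.5 * a * t**2          # distance traveled

work = F * d                # "energy = force * distance"
delta_ke = 0.5 * m * v**2   # change in kinetic energy

print(work, delta_ke)       # both 225.0 -- the two bookkeepings agree
```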

However, the basic concepts of force and energy as described above are pertinent beyond the Newtonian context, e.g. to relativistic and quantum physics; and I suppose they may have meaning beyond the physics domain as well.

This leads me to thinking about a couple related concepts...

Entropy maximization: When a mind lacks knowledge about some aspect of the world, its generically best hypothesis is the one that maximizes entropy subject to whatever constraints it does know (this is the hypothesis that will lead to its being right the maximum percentage of the time).   This is Jaynes' MaxEnt principle of Bayesian inference.
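To illustrate the MaxEnt idea with a standard toy problem (my own worked example, not from the post): suppose all one knows about a six-sided die is that its long-run average is 4.5.  The maximum-entropy hypothesis is the exponential-family distribution fitted to that single constraint, which a few lines of Python can find numerically:

```python
import math

# Jaynes-style MaxEnt toy example (my own illustration): among all
# distributions over die faces 1..6 with mean 4.5, find the one with
# maximum entropy.  The solution has the form p_i proportional to
# exp(-lam * i); we solve for lam by bisection on the mean constraint.

faces = [1, 2, 3, 4, 5, 6]
target_mean = 4.5

def dist(lam):
    weights = [math.exp(-lam * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

def mean(p):
    return sum(i * pi for i, pi in zip(faces, p))

# mean(dist(lam)) decreases as lam increases, so bisect on lam.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(dist(mid)) > target_mean:
        lo = mid
    else:
        hi = mid

p = dist((lo + hi) / 2)
entropy = -sum(pi * math.log(pi) for pi in p)
print([round(pi, 3) for pi in p], round(mean(p), 3), round(entropy, 3))
```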

Maximum entropy production: When a mind lacks knowledge about the path of development of some system, its generically best hypothesis is that the system will follow the path of maximal entropy production (MEP).   It happens that this path often involves a lot of temporary order production; as Swenson said, "The world, in short, is in the order production business because ordered flow produces entropy faster than disordered flow"

Note that while entropy maximization and MEP are commonly thought of in terms of physics, they can actually be conceived as general inferential principles relevant to any mind confronting a mostly-opaque world.

Sooo... overall, what's the verdict?  Does it make sense to think about "mental energy", qualitatively, as something different from physical energy -- but still deserving the same word "energy"?   Is there a common abstract structure that both uses of the "energy" concept instantiate?

I suppose that there may well be, if the non-physical use of the term "energy" follows basic principles like I've outlined here.

This is in line with the general idea that subjective experiences can be described using their own language, different from that of physical objects and events -- yet with the possibility of drawing various correlations between the subjective and physical domains.  (Since in the end the subjective and physical can be viewed as different perspectives on the same universe … and as co-creators of each other…)

In What Sense Is Mental Energy Conserved?

But ... hmmm ... I wonder if the notion of "mental energy" -- in folk psychology or in whatever new version we want to create -- really obeys the principles suggested above?

In particular, the notion of "conservation in isolated systems" is a bit hard to grab onto in a psychological context, since there aren't really any isolated systems ... minds are coupled with their environments, and with other minds, by nature.

On the other hand, it seems that whenever physicists run across a situation where energy may seem not to be conserved, they invent a new form of energy to rescue energy conservation!   Which leads to the idea that within the paradigm of modern physics, "being conserved" is essentially part of the definition of "energy."

Also, note that above I used the phrasing that energy "is conserved in an isolated system (or to say it differently: is added or subtracted from a system only via interactions with other systems)."   The alternate parenthetical phrasing may, perhaps, be particularly relevant to the mental-energy case.

(Note for mathematical physicists: Noether's Theorem shows that energy conservation ensues from temporal translation invariance, but it only applies to systems governed by Lagrangians, and I don't want to assume that about the mind, at least not without some rather good reason to....) 

Stepping away from physics a bit, I'm tempted to consider the notion of mental energy in the context of the Vedantic hierarchy, which I wrote about in The Hidden Pattern (here's an excerpt from Page 31 ...)


In a Vedantic context, one could perhaps view the Realm of Bliss as being a source of mental energy that is in effect infinite from the human perspective.   So when a human mind needs more energy, it can potentially open itself to the Bliss domain and fill itself with energy that way (thus perhaps somewhat losing its self, in a different sense!).   This highlights the idea that, in a subjective-mind context, the notion of an "isolated system" may not make much sense.

But one could perhaps instead posit a principle such as

Increases or decreases in a mind-system's fund of mental energy are causally tied to that mind-system's interactions with the universe outside itself.

This sort of formulation captures the notion of energy conservation without the need to introduce the concept of an "isolated system."    (Of course, we still have to deal with the subjectivity of causality here -- but there's no escaping that, except by giving up on the notion of causality altogether!)

But -- well, OK -- that's enough musing and rambling for one Sunday early afternoon; it's time to walk the dogs, eat a bit of lunch, and then launch into removing the many LaTeX errors remaining in the (otherwise complete) Building Better Minds manuscript....

And so it goes...

-- This post was written while listening to Love Machine's version of "One More Cup of Coffee" by Bob Dylan ... and DMT Experience's version of "Red House" by Jimi Hendrix.   I'm not sure why, but it seems a "cover version" sort of afternoon...

Wednesday, May 15, 2013

Quasi-Mathematical Speculations on Contraction Maps and Hypothetical Friendly Super-AIs


While eating ramen soup with Ruiting in the Tai Po MegaMall tonight, I found myself musing about the possible use of the contraction mapping theorem to understand the properties of AGI systems that create other AGI systems that create other AGI systems that … etc. ….

It's a totally speculative line of thinking that may be opaque to anyone without a certain degree of math background.

But if it pans out, it ultimately could provide an answer to the question: When can an AGI system, creating new AGI systems or modifying itself in pursuit of certain goals, be reasonably confident that its new creations are going to continue respecting the goals for which they were created?

This question is especially interesting when the goals in question are things like "Create amazing new things and don't harm anybody in the process."   If we create an AGI with laudable goals like this, and then it creates a new AGI with the same goals, etc. -- when can we feel reasonably sure the sequence of AGIs won't diverge dramatically from the original goals?

Anyway, here goes…

Suppose that one has two goals, G and H

Given a goal G, let us use the notation " agi(G, C) " to denote the goal of creating an AGI system, operating within resources C, that will adequately figure out how to achieve goal G

Let d(,) denote a distance measure on the space of goals.  One reasonable hypothesis is that, if

d(G,H) = D

then generally speaking,

d( agi(G,C), agi(H,C) ) < k D

for some k < 1 ….  That is: because AGI systems are general in capability and good at generalization, if you change the goal of an AGI system by a moderate amount, you have to change the AGI system itself by less than that amount…

If this is true, then we have an interesting consequence….   We have the consequence that

F(X) = agi(X,C)

is a contraction mapping on the space of goals.   This means that, if we are working with a goal space that is a complete metric space, we have a fixed point G* so that

F(G*) = G*

i.e. so that

G* = agi(G*,C)

The fixed point G* is the goal of the following form:

G* = the goal of finding an AGI that can adequately figure out how to achieve G*

Of course, to make goal space a complete metric space one probably needs to admit some uncomputable goals, i.e. goals describable only by infinitely long computer programs.   So a goal like G* can never quite be achieved using ordinary computers, but only approximated.

Anyway, G* probably doesn't seem like a very interesting goal… apart from a certain novelty value….

However, one can vary on the above argument in a way that makes it possibly more useful.

Suppose we look at

agi(G,I,C)

-- i.e., the goal of creating an AGI that can adequately figure out how to achieve goals G and I within resources C.

Then it may also be the case that

d( agi(G,I,C), agi(H,I,C) ) < k d(G,H),    again with k < 1

If so, then we can show the existence of a fixed point goal G* so that

G* = agi(G*, I, C)

or in words,

G* = the goal of finding an AGI that can adequately figure out how to achieve both goal G* and goal I

The contraction mapping theorem shows that if we start with a goal G close enough to G*, we can converge toward G* via an iteration such as

G, I
agi(G, I, C)
agi( agi(G,I,C), I, C)
agi( agi( agi(G,I,C), I, C) , I, C)

etc.

At each stage of the iteration, the AGI becomes more and more intelligent, as it's dealing with more and more abstract learning problems.  But according to the contraction mapping theorem, the AGI systems in the series are getting closer and closer to each other -- the process is converging.
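For readers who want a concrete feel for this kind of fixed-point iteration, here is a toy numerical sketch (entirely my own illustration: goal space is obviously not really a low-dimensional vector space, and agi() is not really a linear map) in which the agi() mapping is stood in for by an arbitrary contraction on R^3, so the Banach-style iteration visibly converges:

```python
import numpy as np

# Toy illustration (my own, not a model of any real AGI): represent "goals"
# as vectors in R^3 and stand in for agi(G, I, C) with a fixed contraction
# map F having Lipschitz constant k = 0.6 < 1.  The contraction mapping
# theorem then guarantees that iterating F from any starting goal G
# converges to a unique fixed point G*.

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
A = 0.6 * A / np.linalg.norm(A, 2)   # rescale so the spectral norm is 0.6
b = rng.normal(size=3)
I_goal = rng.normal(size=3)          # the fixed auxiliary goal "I"

def agi(G):
    """Stand-in for agi(G, I, C): a contraction in G, with I and C held fixed."""
    return A @ G + 0.1 * I_goal + b

G = rng.normal(size=3)               # initial goal G
for step in range(50):
    G_next = agi(G)
    if step % 10 == 0:
        print(step, np.linalg.norm(G_next - G))   # distances shrink geometrically
    G = G_next

# Compare with the exact fixed point G* = (Id - A)^(-1) (0.1 * I_goal + b)
G_star = np.linalg.solve(np.eye(3) - A, 0.1 * I_goal + b)
print("converged to fixed point:", np.allclose(G, G_star))
```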

So then we have the conclusion: If one starts with a system smart enough to solve the problem agi(G,I, C) reasonably well for the given I and C -- then ongoing goal-directed creation of new AGI systems will lead to new systems that respect the goals for which they were created.

Which may seem a bit tautologous!   But the devil actually lies in the details -- which I have omitted here, because I haven't figured them out!   The devil lies in the little qualifiers "adequately" and "reasonably well" that I've used above.  Exactly how well does the problem agi(G,I,C) need to be solved for the contraction mapping property to hold?

And of course, it may be that the contraction mapping property doesn't actually hold in the simple form given above -- rather, some more complex property similar in spirit may hold, meaning that one has to use some generalization of the contraction mapping theorem, and everything becomes more of a mess, or at least subtler.

So, all this is not very rigorous -- at this stage, it's more like philosophy/poetry using the language of math, rather than real math.   But I think it points in an interesting direction.  It suggests to me that, if we want to create a useful mathematics of AGIs that try to achieve their goals by self-modifying or creating new AGIs, maybe we should be looking at the properties of mappings like agi() on the metric space of goals.   This is a different sort of direction than standard theoretical computer science -- it's an odd sort of discrete dynamical systems theory dealing with computational iterations that converge to infinite computer programs compactly describable as hypersets.

Anyway this line of thought will give me interesting dreams tonight ... I hope it does the same for you ;-) ...


Wednesday, May 01, 2013

The Dynamics of Attachment and Non-Attachment in Humans and AGIs



A great deal of human unhappiness and ineffectiveness is rooted in what Buddhists call "attachment"… roughly definable as an exaggerated desire not to be separated from someone, something, some idea, some feeling, etc.

Buddhists view attachment as ensuing largely from a lack of recognition of the oneness of all things.  If all things are one, then they can't really be separated anyway, so there's no reason to actively resist separation from some person or thing.

Zen teacher John Daido Loori put it as follows: "[A]ccording to the Buddhist point of view, nonattachment is exactly the opposite of separation. You need two things in order to have attachment: the thing you’re attaching to, and the person who’s attaching. In nonattachment, on the other hand, there’s unity. There’s unity because there’s nothing to attach to. If you have unified with the whole universe, there’s nothing outside of you, so the notion of attachment becomes absurd. Who will attach to what?"

That way of thinking makes plenty of sense to me (in a trans-sensible sort of way!).  However, I think one can also take a more prosaic and less cosmic, but quite compatible, approach to the attachment phenomenon...

In this blog post I will present a simple neural and cognitive model of attachment and its opposite.

I want to clarify that I'm not positing that the subjective experiences of attachment or non-attachment "reduce" to the neural/cognitive mechanisms I'll describe here -- not in a physics sense, nor in a basic ontological sense.  I prefer to think of the ideas presented here as pertaining to the "neural/cognitive correlates of the experiences of attachment and non-attachment."

After presenting my model of attachment and non-attachment, I will dig into AGI theory for a bit, and explain why I think advanced AGI systems would suffer from the attachment phenomenon far less than human beings.   Or in other words:
  • Enlightening human minds is, in practice, a chancy and difficult matter ...
  • Enlightening AGI minds may merely be a matter of reasonable cognitive architecture design...

Hebbian Learning

I will start with some quasi-biological speculation.  What might be the neural roots of attachment?

Let's begin with the concept of Hebbian learning, an idea from neural network theory.  Hebbian learning has to do with a network in which neurons are joined by weighted synapses.  The larger the positive weight on the synapse between neuron N1 and neuron N2, the more of N1's activity will spill over to N2.  The larger the negative weight on the synapse between neuron N1 and neuron N2, the more strongly N1's activity will inhibit activity in N2.

In basic Hebbian learning the following two rules obtain:
  1. If N1 and N2 are active at the same time, the link (synapse) between N1 and N2 has its weight increased
  2. If N1 is active but N2 is not, or N2 is active but N1 is not, the link between N1 and N2 has its weight decreased
The result is that, over time
  • pairs of neurons that are frequently simultaneously active will be joined by synapses with high positive weights (so when one of them becomes active, the other will tend to be)
  • pairs of neurons that are generally active at different times, will be joined by synapses with very negative weights (so when one of them becomes active, the other will tend not to be active)
This is a very basic form of pattern recognition, but it's been shown to be adequate to learn arbitrarily complex patterns.  In technical terms, Hebbian learning can learn to achieve any computable goal in any computable environment -- though it may be very slow at doing so, and may require a very large network of neurons.
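Here is a minimal sketch of the two Hebbian rules above operating on a tiny network (my own toy code, not a biologically faithful model -- the assembly labels and parameter values are purely illustrative):

```python
import random

# Minimal toy Hebbian network (my own illustration).  Rule 1: co-active
# neurons get the weight between them increased.  Rule 2: if only one of
# the pair is active, the weight is decreased.

N = 6          # number of neurons
rate = 0.1     # learning rate
w = {(i, j): 0.0 for i in range(N) for j in range(i + 1, N)}

def hebbian_update(w, activity):
    """activity: list of 0/1 activations, one per neuron."""
    for (i, j) in w:
        if activity[i] and activity[j]:
            w[(i, j)] += rate      # rule 1: fire together, wire together
        elif activity[i] != activity[j]:
            w[(i, j)] -= rate      # rule 2: one fires without the other
    return w

# Neurons 0-2 tend to fire together (assembly A); neurons 3-5 tend to
# fire together at other times (assembly B).
random.seed(0)
for _ in range(200):
    if random.random() < 0.5:
        activity = [1, 1, 1, 0, 0, 0]
    else:
        activity = [0, 0, 0, 1, 1, 1]
    hebbian_update(w, activity)

print(round(w[(0, 1)], 2), round(w[(3, 4)], 2))   # strongly positive: within-assembly pairs
print(round(w[(0, 3)], 2))                        # strongly negative: across assemblies
```

Run this and the within-assembly weights come out large and positive while the cross-assembly weights come out strongly negative -- exactly the assembly-formation effect discussed in the next paragraph.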

One of the interesting consequences of Hebbian learning is the formation of "cell assemblies" -- groups of neurons that are richly interconnected via high-positive-weight synapses, and hence tend to become activated as a whole.   Donald Hebb, who came up with the idea of Hebbian learning in the late 1940s, suggested that ideas in the mind are represented by neuronal cell assemblies in the brain.  60-odd years later, this still seems a sensible idea, and there is significant evidence in its favor.  The emergence of nonlinear dynamics has deepened the theory somewhat; it now seems likely that the cell assemblies representing ideas, memories and feelings in the human mind are associated with complex dynamical phenomena like strange attractors and strange transients.

Hebbian learning is a conceptual and mathematical model, but the basic idea is reflected in the brain in the form of long-term potentiation of synapses.  It may be found to be reflected in the brain in other ways as well, e.g. as our understanding of the roles of glia in memory increases.

So what does all this have to do with attachment?

Let's explore this via a simple example....

Suppose that Bob's girlfriend has left him.  He misses her.

While his girlfriend was with him, he woke up every morning, found her in the bed next to him, and put his arm around her.  He liked that.  The association between "wake up" and "put arm around girlfriend" became strong.   In Hebbian learning terms, the neurons in the "wake up" cell assembly got strongly positively weighted synapses to the neurons in the "put arm around girlfriend" cell assembly.  A larger assembly of the form "wake up and put arm around girlfriend" formed, linking together the two smaller assemblies.

Now, after the girlfriend left, what happens in Bob's brain?

According to straightforward Hebbian learning, the association between "wake up" and "put arm around girlfriend" should gradually decrease, until eventually there is no longer a positive weight between the two cell assemblies.  The larger assembly should fragment, leaving the "wake up" and "put arm around girlfriend" assemblies separate; and at the same time the "put arm around girlfriend" assembly should start to dissipate, as it no longer gets reinforcement via experience.

But this may not actually be what happens.  Suppose, for example, that Bob spends a lot of time thinking about his girlfriend (now his ex-girlfriend).  Suppose he lies awake at night in bed and dwells on the fact that he's the only one there.  In that case, the "wake up" cell assembly and the "put arm around girlfriend" assembly will be activated simultaneously a lot, and will retain their positive association.

What's happening here is that Bob's emotions are causing a cell assembly to remain highly active -- in a case where the external world, in the absence of these emotions, would drive the assembly to dwindle.

This, I suggest, is the key neural correlate of the psychological phenomenon of attachment.  Attachment occurs -- neurally speaking -- when there is a circuit binding a cell assembly to the brain's emotional center, in such a way that emotion keeps the circuit whole and flourishing even though otherwise it would dissipate.

Ideally, a mind with amazing powers of self-control would delete the association between "wake up" and "put arm around girlfriend" as soon as the relationship with the girlfriend ended.   However, a mind without emotional interference in its Hebbian network dynamics would do the next best thing: the association would gradually dwindle over time.   For a typical human mind, on the other hand, the coupling of the "wake up and put arm around girlfriend" network with the mind's emotional centers will cause this association to persist a long time after simple Hebbian dynamics would have caused it to dwindle.

The example of Bob and his girlfriend is somewhat simplistic of course, and I chose it largely because of its simplicity.  A more pernicious example is when a mind becomes attached to an aspect of its model of itself.  For example, someone who derives pleasure from being correct (say, because someone praises them for being correct) may then become emotionally attached to the idea of themselves as someone who knows the right answer.   They may then have trouble letting go of this idea, even in contexts where they genuinely do not know the answer, and would be better off admitting this to themselves as well as to others.   Becoming attached to inaccurate models of oneself causes all sorts of problems, including the creation of compounding, increasingly inaccurate self-models, as well as self-defeating behaviors.

A Semantic Network Perspective

Now let's take a leap from modeling brain to modeling mind.  I've been talking here about neural networks and brains -- but the core idea presented above could actually be relevant to minds with very different biological underpinnings.  It could also be relevant if Hebbian learning turns out to be a terrible model of the brain.

Regardless of how the brain works, one can model the mind as a network of nodes, connected by weighted links.  The nodes represent concepts, actions, and perceptions in the mind; the links represent relationships between these, including associative relationships.  The "semantic networks" often used in AI are a simplistic version of this kind of model, but one can articulate much richer versions, capable of capturing all documented aspects of human cognition.

This sort of model of the mind has been instrumental in my own thinking about AI and cognitive science.  I have articulated a specific network model of minds called SMEPH, Self-Modifying Evolving Probabilistic Hypergraphs.   I won't go into the details of that here, though -- I mention it only to point out that the model of attachment and non-attachment here may be interpreted two ways: as a neural model, and as a cognitive model.   These interpretations are related but far from identical.

COEX Systems

The model of attachment presented here relates closely to Stanislav Grof's notion of a "COEX (Condensed Experience) system."  Roughly, a COEX is a set of related experiences organized around a powerful emotional center.   The emotional center is generally one or a few highly emotionally impactful experiences.  The various experiences in the COEX, all reinforce each other, keeping each other energetic and relevant to the mind.

In a Hebbian perspective, a COEX system would be modeled as a system of cell assemblies, each representing a certain episodic memory, linked together via positive, reinforcing connections.  The memories in the COEX stimulate powerful emotions, and these emotions reinforce the memories -- thus maintaining a powerful, ongoing attachment to the memories.

But Why?

I have said that "Attachment occurs -- neurally speaking -- when there is a circuit binding a cell assembly to the brain's emotional center, in such a way that emotion keeps the circuit whole and flourishing even though otherwise it would dissipate."

But why would the human mind be that way?

Emotions, basically, are system-wide (body and mind inclusive) reactions to events regarding system goals/desires/aspirations.  We are happy when we are achieving goals better and better; especially happy when we're doing so better than expected.  We are sad when we're making progress worse than expected.   We're angry when someone or something stands in the way of our goal fulfillment.   We feel pity when we use our mind's power of analogy to feel someone ELSE's frustration at their inability to fulfill their goals….

So, it's only natural that the emotion-bearing cell assemblies and attractors wind up getting richly interlinked with other cell assemblies and attractors.

Let's say the "wake up", "put arm around girlfriend" and "happy emotion" assemblies all get richly interlinked.   Then there are multiple reverberating circuits joining all these  assemblies.  So even when the girlfriend goes away, these circuits will keep on cycling.

This won't be such a problem for an animal like a dog -- because in a dog, the associational cortex is not such a big part of its neural processing -- immediate perceptions and actions tend to hold sway.  But a larger and more complex associational cortex brings all sorts of new possibilities with it, including the possibility of more complex and persistent forms of attachment!

The Brains of the Enlightened

In recent years there has been an increasing amount of work studying the brains of experienced meditators, and of people capable of various "enlightened" states of consciousness.   One of the interesting findings here is that such individuals have unusual patterns of activity in a part of the brain called the posterior cingulate cortex (PCC).

The PCC does many different things, so the significance of this finding is not fully clear, and may be multidimensional.  However, it is noteworthy that ONE thing the PCC does is to regulate the interaction between memory and emotion.

The neural/cognitive theory presented above leads directly to the prediction that, if there's a key difference between the brains of attachment-prone versus non-attached people, it should indeed have to do with the interaction between memory and emotion.

I thus submit the hypothesis that ...  ONE of the significant factors in the neurodynamics of enlightened states is: a change in the function of the PCC, so that in relatively non-attached people, emotion plays a significantly lesser role in the maintenance and dissolution of cell assemblies and associated attractors representing memories.

Toward Enlightened Digital Minds

This line of thinking, if correct, suggests that it may be relatively straightforward to create digital minds without the persistent phenomenon of attachment that characterizes ordinary human minds.

First of all, a digital mind -- if its design is not slavishly tied to that of the human brain -- may be able to explicitly remove associations and other inferences that are no longer rationally judged as relevant.  In other words, when a well-designed robot's girlfriend leaves him, he will just be able to remove any newly irrelevant associations from his brain, so his post-breakup malaise will be brief or nonexistent.

Secondly, even if a digital mind lacks this level of deliberative, rational self-modification, there is no reason it needs to have the same level of coupling of emotion and memory as human beings have.  From an AI software design perspective, it is quite simple to make the coupling of memory and emotion optional, to a much greater degree than is the case in the human brain…

The interaction between memory and emotion is valuable for many purposes.  There is intelligence in emotional response, sometimes.  But there is no need, from a cognitive architecture perspective, for the formation and dissolution of memory attractors to be so inextricably tied to emotion.

Attachment in OpenCog

To explore the notion of attachment in digital minds more concretely, let's take a specific AGI design and muse on it in detail.   This exercise will also help us better understand why human minds get so extremely wrapped up in attachment as they do.

What if Bob's mind were a mature, fully functional OpenCog AGI engine, instead of a human?

(NOTE: to understand this example more thoroughly, take an hour or two and read the overview of the CogPrime cognitive architecture being gradually implemented in the OpenCog open-source AI framework....  Or, if you don't have time for that, just skim through the following instead, and you'll probably grok something!)

Then there would be an explicit link in OpenCog's Atomspace knowledge store, such as

PredictiveImplicationLink
   AND
      PredicateNode: wake_up
      PredicateNode: put_arm_around_girlfriend
   Happy
 
(NOTE: the actual nodes in the OpenCog knowledge base probably wouldn't have such evocative names, as they would be learned via experience --  but the basic structure would be like this.)

There would also be a bunch of HebbianLinks, similar to synapses in a neural network with Hebbian learning, going between various nodes related to wake_up and put_arm_around_girlfriend, and various nodes related to Happy.

When the girlfriend left, human-like attachment dynamics would likely be present, related to the HebbianLinks involved.   But the probabilistic truth value on the PredictiveImplicationLink would decrease.  It would decrease gradually via experience; or might be decreased very rapidly via reasoning (i.e. the AI could rationally infer that since the girlfriend is gone, putting its arm around her is not likely to be associated with happiness anymore).

The question then is: How rapidly and thoroughgoingly would this change in the OpenCog system's explicit knowledge (the PredictiveImplicationLink) cause a corresponding change in the system's implicit knowledge (the HebbianLinks between the assemblies or "maps" of nodes corresponding to "wake_up", "put_arm_around_girlfriend", and "Happy")?

Suppose the OpenCog system has a process that, whenever the truth value of a link changes dramatically, puts the link in the system's AttentionalFocus (the set of nodes and links in the system's memory that have the highest Short Term Importance (STI) values, and thus get the most attention from the system's cognitive processes).  Putting the link in the AttentionalFocus will cause STI to be spread to the nodes that the link connects, and to other nodes related to these.  This will then cause the HebbianLinks among these nodes to have their weights updated.  And this will gradually get rid of assemblies and attractors that are no longer relevant.
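As a very rough sketch of that dynamic (my own toy Python, emphatically NOT the real OpenCog API -- the class names, thresholds and update rule here are all invented for illustration):

```python
# Toy sketch (my own, not real OpenCog code) of the process just described:
# a large truth-value change pushes a link into the AttentionalFocus, STI
# spreads to the nodes it connects, and the HebbianLinks among high-STI
# nodes get their weights nudged toward the new explicit truth value --
# letting no-longer-relevant implicit associations dissipate.

TV_CHANGE_THRESHOLD = 0.3
ATTENTIONAL_FOCUS_SIZE = 10

class Atom:
    def __init__(self, name):
        self.name = name
        self.sti = 0.0              # Short Term Importance

class Link:
    def __init__(self, source, target, truth_value, hebbian_weight):
        self.source, self.target = source, target
        self.truth_value = truth_value
        self.hebbian_weight = hebbian_weight

def on_truth_value_change(link, new_tv, attentional_focus, hebbian_links):
    old_tv = link.truth_value
    link.truth_value = new_tv
    if abs(new_tv - old_tv) > TV_CHANGE_THRESHOLD:
        attentional_focus.append(link)                   # dramatic change -> attention
        del attentional_focus[:-ATTENTIONAL_FOCUS_SIZE]  # keep only the most recent items
        for atom in (link.source, link.target):
            atom.sti += 1.0                              # spread STI to connected nodes
        for hl in hebbian_links:
            if hl.source.sti > 0 and hl.target.sti > 0:
                # nudge the implicit association toward the new explicit value
                hl.hebbian_weight += 0.1 * (new_tv - hl.hebbian_weight)

# Toy usage: the girlfriend leaves, the PredictiveImplicationLink's truth
# value drops sharply, and the implicit HebbianLink starts to follow it down.
wake_up = Atom("wake_up")
put_arm = Atom("put_arm_around_girlfriend")
explicit = Link(wake_up, put_arm, truth_value=0.9, hebbian_weight=0.0)
implicit = Link(wake_up, put_arm, truth_value=0.0, hebbian_weight=0.8)

focus = []
on_truth_value_change(explicit, 0.1, focus, [implicit])
print(implicit.hebbian_weight)   # nudged from 0.8 toward 0.1
```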

So this process that triggers attention based on truth value change will serve directly to combat attachment.

Why Human Brains Get More Attached than a Smart OpenCog Would

In the human mind/brain, explicit knowledge is purely emergent from implicit knowledge -- different from the situation with OpenCog where the two kinds of knowledge exist in parallel, dynamically coupled together.  Obviously, given this, there must be neural mechanisms for changes in emergent explicit knowledge (derived via reasoning, for example) to cause changes in the corresponding underlying implicit knowledge.   But these mechanisms are apparently more complex and harder to control than the corresponding ones in OpenCog.

Evolutionarily, the reason for the difficulty the human brain has in coordinating explicit and implicit knowledge seems to be that the brain's mechanisms mostly evolved in the context of brains with a lot less associational cortex than the human brain has.  In the context of a dog or ape brain, a sloppy mechanism for coordinating explicit and implicit knowledge may not be so troublesome.   In the context of a human brain, this sloppy mechanism leads to various problems, such as excessive attachment to ideas, people, feelings, etc.   And these problems can be worked around, to a large extent, via difficult and time-consuming practices like meditation, psychotherapy, etc.  Perhaps future technologies like brain implants will enable the circumvention of excessive attachment and other problematic aspects of the human mind/brain architecture, without the need for as much effort and uncertainty as is involved in current mind-improving disciplines....

...

And
so
it
goes
.
.
.