
Friday, April 03, 2015

Easy as 1-2-3: Obsoleting the Hard Problem of Consciousness via Brain-Computer Interfacing and Second-Person Science

NOTE ADDED A FEW DAYS AFTER INITIAL POSTING: The subtitle of this post used to be "Solving the Hard Problem of Consciousness via Brain-Computer Interfacing and Second-Person Science" -- but after reading the comments to the post I changed the first word of the subtitle to "Obsoleting" instead.  I made this change because I realized my initial hope that second-person-experience-enabling technology would "solve" the "hard problem of consciousness" was pretty idealistic.   It might solve the "hard problem" to me, but everyone has their own interpretation of the "hard problem", and in the end, philosophical problems never get solved to everybody's satisfaction.   On the other hand, this seems a great example of my concept of Obsoleting the Dilemma from A Cosmist Manifesto.  A philosophical puzzle like the "hard problem" can't necessarily be finally and wholly resolved -- but it can be made irrelevant and boring to everybody and basically obsolete.  That is what, at minimum, I think second-person-oriented tech will do for the "hard problem."

The so-called “hard problem of consciousness” (thus labeled by philosopher David Chalmers) can be rephrased as the problem of connecting FIRST-PERSON and THIRD-PERSON views of consciousness, where

  • the FIRST-PERSON view of consciousness = what it feels like to be conscious; what it feels like to have a specific form of consciousness
  • the THIRD-PERSON view of consciousness = the physical (e.g. neural or computational) correlates of consciousness … e.g. what is happening inside a person’s brain, or a computer system’s software and hardware, when that person or computer system makes a certain apparently sincere statement about its state of consciousness

The “hard problem”  is the difficulty of bridging the gap between these.  This gap is sufficient that some people, taking only a third-person view, go so far as to deny that the first-person view has any meaning — a perspective that seems very strange to those who are accustomed to the first-person view.  

(To me, from my first-person perspective, for someone to tell me I don’t have any qualia, any conscious experience -- as some mega-materialist philosophers would do -- is much like someone telling me I don’t exist.   In some senses I might not “exist” — say, this universe could be a simulation so that I don’t have concrete physical existence as some theories assert I do.  But even if the universe is a simulation I still exist in some other useful sense within that simulation.  Similarly, no matter what you tell me about my own conscious experience from a third-person perspective, it’s not going to convince me that my own conscious awareness is nonexistent — I know it exists and has certain qualities, even more surely than I am aware of the existence and qualities of the words some materialist philosopher is saying to me…) 

So far, science and philosophy have not made much progress toward filling this gap between first- and third-person views of consciousness, either before or after Chalmers explicitly identified it.

What I’m going to suggest here is a somewhat radical approach to bridging the gap: To bridge the gap between 1 and 3, the solution may be 2.  

I.e., I suggest we should be paying more attention to:
  • the SECOND-PERSON view of consciousness = the experience of somebody else’s consciousness

Brain-Machine Interfacing and the Second Person Science of Consciousness

There is a small literature on “second-person neuroscience” which contains some interesting ideas.  Basically it’s about focusing on what people’s brains are doing while they’re socially interacting.

I also strongly recommend Evan Thompson’s edited book “Between Ourselves: Second-person issues in the study of consciousness”, which spans neurophysiology, phenomenology, neuropsychology and other fields.

What I mean though is something a little more radical than what these folks are describing.   I want to pull brain-computer (or at least brain-wire) interfacing into the picture!

Imagine, as a scientist, you have a brain monitor connected to your subject, Mr. X; and you are able to observe various neural correlates of Mr. X’s consciousness via the brain monitor’s read-out.  And imagine that you also have a wire (it may not actually be a physical wire, but let’s imagine this for concreteness) from your brain to Mr. X’s brain, allowing you to  experience what Mr. X experiences, but on the “fringe” of your own consciousness.  That is, you can feel Mr. X’s mind as something distinct from your own — but nevertheless you can subjectively feel it.  Mr. X’s experiences appear in your mind as a kind of second-person qualia.

Arguably we can have second-person qualia of this nature in ordinary life without need for wires connecting between brains.  This is what Martin Buber referred to as an “I-Thou” rather than “I-It” relationship.  But we don’t need to get into arguments about the genuineness of this kind of distinction or experience.  Though I do think I-Thou relationships in ordinary life have a kind of reality that isn’t captured fully in third-person views, you don’t have to agree with me on this to appreciate the particular second-person science ideas I’m putting forward here.  You just have to entertain the idea that direct wiring between two people’s brains can induce a kind of I-Thou experience, where one person can directly experience another’s consciousness.

If one wired two people’s brains together sufficiently closely (setting aside a host of pesky little practical details), one might end up with a single mind with a unified conscious experience.  But what I’m suggesting is to wire them together more loosely, so that each person’s consciousness appears on the *fringe* of the other’s consciousness.

The point, obviously, is that in this way, comparisons between first and third person aspects of consciousness can be made in a social rather than isolated, solipsistic way.

What is “Science”?

Science is a particular cultural institution that doesn’t necessarily fit into any specific formal definition.  But for the sake of discussion here, I’m going to propose a fairly well-defined formal characterization of what science is.

The essence of science, in my view, is a community of people agreeing on
  • a set of observations as valid 
  • a certain set of languages for expressing hypotheses about observations
  • some intuitive measures of the simplicity of observation-sets and hypotheses

Given such a community, science can then proceed via the search for hypotheses that the community will agree are simple ways of explaining certain sets of agreed-on observations.   The validity of hypotheses can then be explored statistically by the community.  
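Read operationally, this characterization can be caricatured in a few lines of code: a community's scientific knowledge is a search for the simplest hypothesis, under the community's own simplicity measure, that covers the observations the whole community accepts.  Everything in the sketch below is an invented stand-in (the function names, the set-based "explanation" relation, the integer simplicity costs), not a claim about any actual formalism:

```python
# Toy model of "science as community agreement": a hypothesis counts as
# scientific knowledge when it explains every agreed-on observation and
# is the simplest such candidate under the community's simplicity
# measure.  All names and measures here are illustrative stand-ins.

def agreed_observations(reports_by_member):
    """Keep only the observations every community member accepts as valid."""
    sets = [set(r) for r in reports_by_member]
    return set.intersection(*sets) if sets else set()

def best_hypothesis(hypotheses, observations, simplicity):
    """Pick the simplest hypothesis covering all agreed observations.

    `hypotheses` maps a name to the set of observations it explains;
    `simplicity` maps a name to a cost (lower = simpler).
    """
    covering = [h for h, explained in hypotheses.items()
                if observations <= explained]
    return min(covering, key=simplicity, default=None)

# Example: three members' accepted observations, two candidate hypotheses.
reports = [{"o1", "o2", "o3"}, {"o1", "o2"}, {"o2", "o1", "o4"}]
obs = agreed_observations(reports)          # {"o1", "o2"}
hyps = {"H_simple": {"o1", "o2"},
        "H_baroque": {"o1", "o2", "o3", "o4"}}
cost = {"H_simple": 1, "H_baroque": 5}.__getitem__
print(best_hypothesis(hyps, obs, cost))     # H_simple
```

The point of the caricature is only that each ingredient of the characterization above (agreed observations, a hypothesis language, a simplicity measure) plays a distinct, checkable role in what the community ends up calling knowledge.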

No Clear Route to “First Person Science”

The problem with first-person views of consciousness is that they can’t directly enter into science, because a first-person experience can’t be agreed-upon by a community as valid.  

Now, you might argue it’s not entirely IMPOSSIBLE for first-person aspects of consciousness to enter into science.   It’s possible because a certain community may decide, for example, to fully trust each other’s verbal reports of their subjective experiences.   This situation is approximated within various small groups of individuals who work together in various wisdom traditions, aimed at collectively improving their state of consciousness according to certain metrics.   Consider a group of close friends meditating together, sharing their states of consciousness, discussing their experiences and trying to collectively find reliable ways to achieve certain states.   Arguably the mystical strains of various religions have at various times contained groups of people operating in this sort of way.

A counter-argument against this kind of first-person science might be that there are loads of fake gurus around, claiming to have certain “enlightened” states of consciousness that they seem not to really have.   But of course, fraud occurs in third-person science too…

A stronger counter-argument, I think, is that even a group of close friends meditating together is not really operating in terms of a shared group of first-person observations.  They are operating in terms of third-person verbal descriptions and physical observations of each other’s states of consciousness — and maybe in terms of second-person I-Thou sensations of each other’s states of consciousness.

But There Likely Will Soon Come Robust Second-Person Science

On the other hand, second-person observations clearly do lie within the realm of science as I’ve characterized it above.   As long as any sane observer within the scientific community who wires their brain into Mr. X’s brain receives roughly the same impression of Mr. X’s state of mind, we can say that the second-person observation of Mr. X is part of science within that community.

You might argue this isn’t so, because how do we know what Ms. Y perceived in Mr. X’s brain, except by asking Ms. Y?  And if we’re relying on Ms. Y’s verbal reports, then aren’t we ultimately relying on third-person data?  But this objection doesn’t really hold water — because if we wanted to understand what Ms. Y was experiencing when second-person-experiencing Mr. X’s brain, we could always stick a wire into her brain at the same time as she’s wired into X, and experience her own experience of Mr. X vicariously.   Or we could stick a wire into her brain later, and directly experience her memory of what she co-experienced with Mr. X.  Etc.

Granted, if we follow a few levels of indirection things are going to get blurry — but still, the point is that, in the scenario I’m describing, members of a scientific community can fairly consider second-person observations achieved via brain-computer interfacing as part of the “observation-set collectively verifiable by the community.”   Note that scientific observations don’t need to be easily validated by every member of a community - it’s a lot of work to wire into Ms. Y’s brain, but it’s also a lot of work to set up a particle accelerator and replicate someone’s high-energy physics experiment, or to climb up on a mountain and peer through a telescope.   What matters in the context of science as I understand it is that the observations involved can in principle, and in some practical even if difficult way, be validated by any member of the scientific community.
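The inter-observer criterion in this section — a second-person observation counts as scientific when any observer wired into Mr. X receives roughly the same impression — can be caricatured as a simple agreement check.  Modeling an "impression" as a set of features and "roughly the same" as set overlap above a threshold is purely an illustrative choice, not a proposal about real BCI data:

```python
# Toy check of the inter-observer criterion for second-person
# observations: an observation of Mr. X's state is community-valid if
# every pair of observers' impressions is "roughly the same".
# Impressions are modeled as feature sets and similarity as Jaccard
# overlap -- both are placeholder choices for illustration.

from itertools import combinations

def jaccard(a, b):
    """Overlap between two impression feature-sets, in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def community_valid(impressions, threshold=0.7):
    """True if all pairs of observers' impressions roughly agree."""
    return all(jaccard(a, b) >= threshold
               for a, b in combinations(impressions, 2))

# Three observers wired into Mr. X report similar impressions...
agree = [{"calm", "curious", "focused"},
         {"calm", "curious", "focused", "tired"},
         {"calm", "curious", "focused"}]
print(community_valid(agree))      # True

# ...whereas wildly divergent impressions would fail the criterion.
diverge = [{"calm"}, {"terrified", "angry"}]
print(community_valid(diverge))    # False
```

This is the same structural move as replicating a physics experiment: the observation enters the community's agreed-on set exactly when independent observers, using the shared procedure, converge on it.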

Solving (Or at least Obsoleting) the Hard Problem of Consciousness

Supposing this kind of set-up is created, how does it relate to first and third person views of consciousness?

I presume that what would happen in this kind of scenario is that, most of the time, what X reports his state of consciousness to be will closely resemble what Y perceives X’s state of consciousness to be, when the two are neurally wired together.   Assuming this is the case, then we have a direct correlation between first-person observations about consciousness and second-person observations — where the latter are scientifically verifiable, even though the first are not.  And of course we can connect the second-person observations to third-person observations as well.

Thus it appears likely to me that the hard problem of consciousness can be "solved" in a meaningful and scientific way, via interpolating 2 between 1 and 3.   At the very least it can be obsoleted, and made as uninteresting as the problem of solipsism currently is (are other people really conscious like me?), or, say, the philosophical problem of whether time exists or not (we can't solve that one intellectually, but we don't spend much time arguing about it)....

Can Computers or Robots be Conscious in the Same Sense as Humans Are?

Of course, solving or obsoleting the hard problem of consciousness is not the only useful theoretical outcome that would ensue from this kind of BCI-enabled second-person science.

For instance, it's not hard to see how this sort of approach could be used to explore the question of whether digital computers, robots, quantum computers or whatever other artifact you like can be "genuinely conscious" in the same sense that people are.

Just wire your brain into the robot's brain, in a similar way to how you'd wire your brain into a human subject's brain.   What do you feel?  Anything?  Do you feel the robot's thoughts, on the fringe of your consciousness?   Or does it feel more similar to wiring your brain into a toaster?

Is Panpsychism Valid?

And what does it feel like, actually, to wire your brain into that toaster?   What is it like to be a toaster?  If you could wire some of your neurons into the toaster's sensors and actuators, could you get some sense of this?  Does it feel like nothing at all?  Or does it feel, on the fringe of your awareness, like some sort of simpler and less sophisticated consciousness?

When your friend hits the toaster with a sledgehammer, what is it you feel on the fringe of your awareness, where you (hypothetically) sense the toaster's being?   Do you just feel the toaster breaking?   Or do you feel some kind of painful sensation, at one remove?   Is the toaster crying out, even though (if not for your wiring your brain into it) nobody would normally hear...?

The second-person knowledge about the toaster's putative awareness would be verifiable across multiple observers, thus it would be valid scientific content.   Panpsychism, in a certain sense, could become a part of science....

Toward a Real Science of Consciousness

In sum -- to me the hard problem is about building a connection between the first person and third person accounts of consciousness, and I think the second person account can provide a rich connection.... 

 That is, I think a detailed theory of consciousness and its various states and aspects is going to come about much more nicely as a mix of first, second and third person perspectives, than if we just focus on first and third...


11 comments:

Dean Horak said...

Ben,

It seems to me that if it were possible to experience, or share, the conscious experience of another, then that would all but put the notion of panpsychism to rest. Since this BCI interface would presumably be implemented non-biologically (similar to today's BCIs), it would essentially transmit information from a sending subject to a receiving subject. This would validate the notion that consciousness is produced through information processing and not some mystical property of the universe. This revelation would also put to rest any notion that machines could not possess consciousness - it would be just a matter of the right information processing implementation.

Terren said...

Hey Ben -

Nice article... creative exploration of the I-thou possibilities in science.

My understanding however is that the relationship between 1 and 3 views is the "Easy" problem of consciousness. The Hard problem is explaining why there is a subjective experience in the first place, and it's not at all clear that what you're proposing makes much headway there, even if you were to create a robust new framework of 1-3 connections based on the empirical results of I-thou science. You would simply have a well-founded explanation of the neural correlates... i.e. the easy problem.

Benjamin Goertzel said...

Terren, regarding what is the "hard problem" of consciousness, I was using the standard terminology. E.g. in the Scholarpedia article I quoted above, we find that

"The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience"

Of course you are free to think that this problem is easy and some other problem is harder, but in the context of consciousness theory "the hard problem" is now a defined term with a specific meaning.

"Terren's hard problem" of "why there is subjective experience" seems a much looser sort of philosophical question. I'm not sure what kind of explanation would satisfy you actually?

Benjamin Goertzel said...

Terren, one more comment...

[I guess that consciousness is such a confusing term that even disambiguating what is meant by "the hard problem" itself becomes a hard problem ;-p ....]

Regarding your question of "why", it seems that what people accept as an answer to a "why" question depends exceedingly on their theoretical framework and presuppositions.

Is "objects on Earth fall down because of gravity" a good "why" explanation? If so why is it? Because it explains a specific phenomenon as a special case of a more general one? But why does gravity exist? The chain of "why" only ends when you want to decide it ends and move on to something else...

Once we have a good enough overall theory of consciousness, we will accept its terms and concepts as "why" explanations of aspects of consciousness. My bet is that second person exploration will be key in getting to this "good enough theory"

Terren said...

Hi Ben,

Maybe the focus on "why" makes things too hard... but I was referring to Wikipedia's entry http://en.wikipedia.org/wiki/Hard_problem_of_consciousness - and this reading is closer to my understanding of Chalmers' actual articulation of Hard vs Easy problems than the one you provided. For what that's worth.

Anyway, to your question of what sort of explanation would satisfy the Hard Problem as I see it - basically, a well-justified theory or framework for understanding what kinds of information-processing systems would give rise to awareness vs which wouldn't. I suppose you approach that with your 2nd person exploration of toasters etc., but that doesn't amount to an explanation or framework for thinking about it in theoretical terms.

Anyway, I don't mean to nitpick... it's overall an excellent post and I admire the creativity that went into it. I wouldn't have posted this except for the fact that you headed one section as "solving the hard problem".

Aaron Nitzkin said...

Hey guys!

I always love an original idea, and this one might teach us something; however, unfortunately, it wouldn't accomplish the goals you set for it -- I say with regret.

Call the receiver A and the sender B, for convenience.

A's experience of B's experience in your scheme cannot be demonstrated to actually be B's experience at all; it could be (and probably is) the construction by A's brain / consciousness of regular old electrical signals from B's brain. If A's experience of B matches with B's first person report, and with 3rd person data, then you can probably learn a few things; for example, you can establish that different humans' consciousnesses are very similar at least to the degree that they re-present the same data with more or less the same experiential qualities (if that is so). You could probably rule out a subset of mystical and religious claims based on the fact that you can turn their brain's data into consciousness without being in their soul . . .

You could probably make some progress towards identifying the necessary / sufficient neural correlates of consciousness by varying the details of the "wire" set-up between the brains.

But, you would probably be no closer to solving the hard problem, or determining the validity of pan-psychism. How do you determine whether your experience of the toaster is the toaster's experience, or your brain's reconstruction of the experience the toaster would have, if it had consciousness?

Whether you are talking about the hard problem is probably debatable. It is sometimes described as the gap between first- and third-person perspectives; however, that definition is ambiguous as to whether one means the gap between the third-person and first-person descriptions of one person's experience, or the gap between third-person materialistic science in general and the explanation of the existence of first-person perspectives in general. If one takes the second interpretation of the gap as the hard problem, then your experiment doesn't resolve it, but it would contribute some valuable data :-)

Benjamin Goertzel said...

Aaron, after reading your and Terren's comments I decided to change the title of the article and added a new prefatory note at the start...

It's clear to me now (and should have been before) that the "hard problem" means something different to different people. So "solving" a slippery philosophy puzzle whose meaning nobody can agree upon, is never going to happen in any definitive way.

However, philosophy problems CAN become boring and effectively obsolete. So the subtitle now reads "Obsoleting the Hard Problem of Consciousness..."

In other words, my contention is that once this second-person tech I envision becomes a reality and widely used and understood, the various flavors of today's "hard problem of consciousness" will all become utterly boring to people, in the same sense that we are now utterly bored by arguing with each other about whether each other are conscious....

Thanks for the feedback!


Anonymous said...

Ben you might find this piece interesting. An Idealist's perspective.

http://www.bernardokastrup.com/2015/04/cognitive-short-circuit-of-artificial-consciousness.html

Bob

Jesse said...

I will apologize ahead of time as I'm not going to be editing for grammatical errors due to time constraints of a low battery and absence of charger. imo, it makes sense for some attribution towards some people's thoughts of survival... though of course compounded in complexity. not something I am qualified to fully expound upon... it would make sense that consciousness is a vast culmination of both serialized processes, common rules to be reproduced layered from the atomic, cellular to neuro levels... mixed with a lot of random processes (which can be coded as well) based off daily environmental feedback saved in memory... since birth. I can only describe this in a simplified example: e.g i am at a coffee shop and within a 2 minute span, i can document some very random actions + thoughts i've conducted with no understanding as to why... notice a pretty girl sitting to my right, i correct my back posture, i take a sip of my coffee..and reflect on what i may look like from her third person perspective.... POV if i look intellectual or not to my other surrounding patrons...i check my cell phone for no reason...i think that my facial expression reflect someone really really concentrated in articulating an important post.... i just re-adjusted my glasses...why? why did i just take a look at the random vehicle parked right outside etc... these subtle random actions occur every moment and to everyone one, all the time... if i think about it... most of these reactions or actions are fundamentally just re-enactments of what i've learned to do, have done or have seen others do before... right?
The interesting part of all of this is that to create "conscious" AI we usually depict a system that leapfrogs the crucial years required by the biological norms of "growth", that is from insemination to a living breathing human being... something substantial must happen during this period of time that also attributes to this "consciousness"...and even then I'd like to access research on the differentials of neuro activity at different age ranges ... in contrast, compare these results with those born with sensory deficiencies like someone born blind or deaf... or further handicapped. If you were born without all or most of our 5 senses...what affect does that have on consciousness or the development of it? blind, deaf and perhaps extreme sensory neuropathy type 2 (no sense of touch) ... i am highly caroused off caffein and whiskey right now so i don't know if im making any sense. cheers and i enjoy your posts...

Judge Omega said...

In my view, much of philosophy boils down to semantics. Human language is often very flawed;
- a word may sometimes have a subtly different definition from person to person
- a word may contain subtle flaws, not precisely mapping onto the physical universe. Either through outright errors or from relevant information which should be included in its definition but is not.
- words may represent very loose associations, patterns, or abstractions. Such words do not have a simple referent in the physical universe.
- etc.

On the topic of consciousness, i believe consciousness falls into all the above categories of flawed words. Any debate over consciousness is going to center completely around how we define it, but that definition is completely arbitrary and man-made.
Even if we grant the existence of a pattern referred to as 'consciousness' which we miraculously somehow all agree on, it is still no more real than matrix operations. There is no law of nature that dictates how matrix manipulations should be performed, it is merely a useful construct of our intelligence. And its specifications are purely by convention.

Just as things that were once thought of as requiring intelligence to solve no longer are referred to as such once AI has attained superhuman ability in that area, i predict that true synthetic intelligence will push back what we refer to as consciousness until it is fully revealed that consciousness is just some outdated magical word.