The less of consciousness

August 13, 2019

David Chalmers, like many philosophers on the topic, is looking for a mechanism that explains consciousness. He sees the potential in artificial minds to help with that:

Once you’ve got an AI system that says, “I know on principle I’m just a bunch of silicon circuits, but from the first-person perspective, I feel like so much more,” then maybe we might be onto something in understanding the mechanisms of consciousness. Of course, if that just happens through somebody programming a machine to imitate superficial human behavior, then that’s not going to be so exciting. If, on the other hand, we get there via trying to figure out the mechanisms which are doing the job in the human case and getting an AI system to implement those mechanisms, then we find via some relatively natural process, that it A) finds consciousness in itself and B) is puzzled by this fact. That would at least be very interesting.

But what if the problem is one of what is missing, rather than one of what is there? Computer scientists have long known that what burns energy and increases entropy in computation is the irreversible step: deleting data, not creating it. There is always some of that when a system starts modeling itself and its environment, because it cannot simultaneously monitor every process that goes into the doing of that. I think this gets closer to the puzzle:
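The thermodynamic point here is Landauer's principle: erasing one bit of information dissipates at least kT ln 2 of energy, while logically reversible operations face no such floor. A minimal sketch of the contrast (the constant and the room-temperature figure are standard physics values, not from the post):

```python
import math

def landauer_limit_joules(temperature_kelvin: float = 300.0) -> float:
    """Minimum energy dissipated by erasing one bit: k_B * T * ln(2)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temperature_kelvin * math.log(2)

def xor_swap(a: int, b: int) -> tuple[int, int]:
    """A reversible operation: the inputs are fully recoverable from the outputs."""
    a ^= b
    b ^= a
    a ^= b
    return a, b

def overwrite(register: int, new_value: int) -> int:
    """An irreversible operation: the register's old contents are gone for good."""
    return new_value

print(f"{landauer_limit_joules():.3e} J per erased bit at 300 K")
```

The swap merely permutes the machine's state, so in principle it can be run backward for free; the overwrite destroys a bit, and that destruction is what carries the entropy cost.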

So far, the only research I know in this direction is a little project that was done last year by a couple of researchers, Luke Muehlhauser and Buck Shlegeris. They tried to build a little theorem prover, a little software agent that had a few basic axioms for modeling its perception of color and its own processes. It would give you reports like, “That’s red of such-and-such a shade,” and it would know it could sometimes go wrong. It could say, “I’m representing red of such-and-such a shade,” and from a certain number of basic axioms they managed to get it to generate a certain amount of puzzlement, such as, “how could my experience of this redness be the same as this underlying circuit?”
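The shape of such an agent can be caricatured in a few lines. This toy is purely illustrative and is not Muehlhauser and Shlegeris's actual system; every name in it is hypothetical. The point is only the two levels of report: a first-order report about the percept, and a meta-level report about the agent's own representing, which admits it can go wrong:

```python
class ColorReporter:
    """Toy self-modeling agent with first-order and meta-level reports."""

    def __init__(self, circuit_state: str):
        # The 'underlying circuit': whatever physical state encodes the percept.
        self.circuit_state = circuit_state

    def first_order_report(self) -> str:
        """Report about the world, as seen from inside."""
        return f"That's {self.circuit_state}."

    def meta_report(self) -> str:
        """Report about the agent's own representing, allowing for error."""
        return f"I'm representing {self.circuit_state}, and this representation may be mistaken."

agent = ColorReporter("red of such-and-such a shade")
print(agent.first_order_report())
print(agent.meta_report())
```

The puzzlement described in the quote arises when the agent compares these two levels: nothing in the meta-report explains why the represented redness should feel like anything at all, given that both reports bottom out in the same circuit state.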

Update: In an interview last year, Peter Carruthers argued that conscious thought is an illusion.
