Out of all the debates surrounding the mind, the topic of greatest controversy is qualia, or basic sensory perception. If we're trying to knock out a theory of mind, the most counterintuitive part of the project is understanding qualia in the same theoretical terms as just about anything else. Leibniz framed the problem succinctly long ago:
One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.
Today there are quite a few in-between positions and different takes. One of the most popular ways to conceive of the mind is in terms of a computer - computationalism (a brand of functionalism). Of course, that's a broad project, not one that claims computers as we currently know them could pull it off. But as a thought experiment that cuts against the grain of Leibniz's, I offer the scene from The Matrix where Cypher explains to Neo how he monitors what's going on inside the Matrix:
there's way too much information to decode the Matrix. You get used to it, though. Your brain does the translating. I don't even see the code. All I see is blonde, brunette, and redhead.
I'm not suggesting anyone could ever really read code like that, but let's just be clear about what's at stake here. Cypher is claiming to "see a perception".
Now, as a bridge between the two thought experiments, Leibniz's Mill and Cypher's Chair, consider the FARMS Mayan tour, which I base on a comment by Rorty from Philosophy and the Mirror of Nature:
Imagine strolling through the Mayan ruins alone. You encounter pots and squiggles on rocks, but nothing to explain the Mayan language and culture. Now imagine a walk with a FARMS apologist who is considered by everyone to be the world's foremost expert on Mesoamerica. He begins interpreting the squiggles and explaining the pots. Some thoughts begin to flow in your mind; you no longer see exactly the same squiggles you did before. And the FARMS apologist is seeing even more than you are - an entirely different world than you're seeing! Finally, imagine a native Mayan: the squiggles carry such force that he can almost feel them. That's not too hard to imagine if you consider curse words in your own language. If you speak English natively, the German equivalent just doesn't hit you the same way.
Rorty's summary would be that of course you're baffled by Leibniz's mill if you don't read brain language. And what I'm suggesting is that there aren't multiple kinds of consciousness but one kind that exists on a spectrum. Where we are tempted to report a feeling when we decode some instances of foul language, consider the hypothetical Cypher who can make ridiculously fast code translations - how else could he make the report but to say he "sees" them? And to the extent that it's unbelievable Cypher could pull it off, wouldn't we make the scenario more realistic by increasing his abilities? More neural connections in his brain, maybe - essentially, making him MORE like a computer rather than less like one?!
I believe the intuition from either of the first two thought experiments is derived primarily not from the force of the logic they impose, but from the way they direct (or misdirect) our imagination. Of course, if we blow up the mill larger than life and walk through gigantic gears and pulleys, we'll never "see" or "find the explanation" of a thought - or of an operating system, for that matter. But if instead of gears we talk about symbols, tiny ones that flash by in the millions before our eyes, our intuition is led to think that perhaps, with the right abilities to do the translation, a - as we label or mislabel it - "stream of consciousness" would fill in.
Nobody's Home
Another famous thought experiment in the philosophy of mind is Mary's Room (copied from Wikipedia):
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. [...] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?
I couldn't possibly do justice to all the commentary on this thought experiment. I feel guilty even having an opinion on it. But consider my thought experiment, Nobody's Home:
A room exists with nothing but black and white things in it, but it contains all the knowledge in the universe, in print and on black and white DVDs. The door is opened and a yellow banana is tossed in. Does the room contain any more knowledge than it did before?
Knowledge is not a ubiquitous substance that just exists in Mary's "mind". When we look at it that way, we've already presupposed what the answer is going to be. What this account lacks is a description of the physical tokening involved, by which we could realistically talk about Mary knowing anything. You can only believe there is knowledge in Mary's room because we've been conditioned by metaphors that make knowledge a static thing, a list of propositions someone can own or have; but when we take that position to its absurd extreme and literally make it a description - a bunch of textbooks - we see the problem. The reality is that Mary isn't "aware" 24/7 of every cold, hard, third-person fact about color. She thinks about this or that, consults a book, lumbers through some equations, and then goes to sleep. This is akin to our FARMS scholar cracking Mesoamerica. He'll never know "what it's like to be" Mayan. Mary might, however, like the FARMS scholar, be able to evoke a kind of "proto-feeling" that others can't. And that would be a significant achievement. Something akin to Gleick's comment on Richard Feynman:
Feynman seemed to possess a frightening ease with the substance behind the equations.
Math was so tacit to Feynman that we're tempted to see a substance. It makes us wonder who's thinking more: the guy who plays with math like putty, or the guy who spends hours balancing a checkbook? Wouldn't we be tempted to say a math wizard experiences something phenomenal that others don't when doing equations? Surely in Feynman's mind there were steps "skipped" that I'd stumble over for hours. In that situation, who's doing more thinking, him or me? We'd be tempted to say Feynman's mind was so efficient that he was less aware of the explicated rules and just "experienced" the math - what, as a continuous stream? Whether it's Feynman, Mary, or myself, knowledge doesn't just exist abstractly as a list of propositions; it's a complicated scene of physical tokenings in the brain. Efficiency is quale-like, and labored effort is "conscious-thinking"-like.
The Tokening Problem
John Searle once thought of an argument against functionalism: if the paint on his wall is informationally complex enough to describe anything, then it could be described as implementing a computer, and if the mind is a computer, then the paint must be conscious. There are a few subtle flaws here, but the most important is that all the talk about models and formulas can help us forget that a computer isn't recursive any more than a blueprint is a building. A computer is an actual running machine in space and time that physically tokens an instruction set, just as a brain is. And once we understand that, trying to get at exactly what we know and how we know it becomes vastly more complicated than standard considerations of "propositional knowledge". What I'm trying to say is that the knowledge doesn't exist in Mary's mind or her books, but somehow in this complicated tokening process of reading, thinking, and writing things down. And it is unlikely, given the grain of the thought experiment, that Mary would encounter her books and imagine things in her limited memory in such a way that she could produce a yellow banana. But just as the Mill turns its blades on size, Mary's room locks our minds in with test tubes and textbooks to bungle around with. It asks us to conjure up representation and knowledge in a particular way that isn't up to the task. But if she took some lessons from Cypher, she might be able to get that book knowledge into a form where it streams on her black and white monitor and she gets a yellow banana out of "pure information". I admit there are other ways she could get that knowledge if we allow this, and it's been suggested that we can only beg the question one way or another, but considering knowledge in relation to physical tokening, rather than as lists of propositions that exist on a page or "in a mind", helps us imagine how producing that knowledge might be possible.
Tokening and the Third Person
The tokening problem above is embedded in the talk of subjectivity, objectivity, first person, third person, and any language which tries to separate "us from them". I don't believe there is such a thing as a truly first-person or third-person explanation. The driest and most exacting descriptive accounts in English nevertheless wouldn't be fully translatable by an alien from another world (and here I follow Quine on the indeterminacy of translation). There is always a little "first person" buried tacitly within anything claiming to be objective, and vice versa; there is no meaning residing in some other world. And what's taken for granted by a community is very hard to emulate artificially for those on the outside.
Leibniz's mill confines us to learning Mayan from a first visit to the ruins. Mary's room augments our tool set with dictionaries and videos. But with Cypher as a guide, it may be possible to create the virtual environment that helps us token the knowledge in a way that facilitates translation from symbols to "qualia", though in reality it's probably not feasible. But I submit our FARMS experts know, literally, "what it's like to be" Mayan just a little bit more than the rest of us. And Mary knows "what it's like to see" yellow a little more than I would in her room. And a hypothetical Cypher "sees" red just like I do when reading code. So while I don't think a Cypher will ever exist, I can imagine the extreme end of the spectrum he represents. And I think it's not at all unreasonable to believe that one day "science" will have an account of "qualia". It won't be Cypher-level understanding, but on the level of a FARMS scholar who understands Mayan and doesn't think there is a property-dualist or pronoun problem preventing him from explaining what it's like to be Mayan. Because what falls out of this phenomenology is an epistemology where there are only varying degrees between tacit knowledge and propositional knowledge, and propositional knowledge can't stray too far from tacit knowledge. Where our information processing falls short of seeing Cypher's code, we might yet hit the mark on describing Leibniz's Mill.
In a nutshell, my current position, then, is twofold:
1) "Qualia" is the hyperspace setting on the spectrum of thinking and not thinking; it's an extreme form of "access consciousness", and if that can be explained computationally, phenomenal consciousness can be. The human brain isn't capable of thinking fast enough to get "true" qualia from thinking (like Cypher does), only a vague impression of it.
2) Once we get to that place in science, the ontological problems will transform along with language, such that questions like whether matter has two properties, or whether the account is truly descriptive, will remain undecided, because what seems so maddeningly first-person now will be tacit within language. Somewhat as life once seemed inexplicable, even in theory. Somehow our conceptual resources have evolved to the point where even young adults aren't baffled by the notion of a description of life.
My blog, which talks about this stuff too:
http://gadianton2.tripod.com
Main influences for this post, in lieu of proper citations:
Dennett
Searle
Block
Rorty