It’s human nature, I think, to want to understand everything. Of course, that’s quite impossible. There are things that we will probably never know: the Ramsey number Ramsey(xkcd, xkcd), to give an impossibly contrived example.
Main question. How would humanity react if scientists finally worked out how the human brain works? Suppose they understood how a neuron works, how qualia could exist, how consciousness could arise from a purely biological system, why certain personalities exist, why some people are simply nicer than others. And suppose the explanation is alarmingly simple. Oh no. How would I feel?
Certainly, I feel a bit of a loss of individuality. After all, all that makes me different from others is maybe the way my neurons are configured. Each time I think a thought, all that is happening is certain neurons firing in a certain pattern. Typing is also just neurons signaling the peripheral nervous system. All our interactions are just the presence or absence of neuron firings traveling through a hugely convoluted, unnecessarily physical path.
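To make "just neurons firing in a certain pattern" concrete, here is a toy sketch of a leaky integrate-and-fire neuron, the textbook cartoon of what a neuron mechanically does. The constants and inputs are made up for illustration; this is nowhere near real neuroscience, just a demonstration that "fire or don't fire" can be stated as a few lines of arithmetic.

```python
# Toy leaky integrate-and-fire neuron: a cartoon of the claim that thinking
# reduces to voltages crossing thresholds. Constants are arbitrary, not
# biologically calibrated.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Return a list of booleans: did the neuron fire at each time step?"""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = v * leak + current   # leak a little, then integrate the input
        if v >= threshold:
            spikes.append(True)  # fire...
            v = 0.0              # ...and reset
        else:
            spikes.append(False)
    return spikes

print(simulate([0.5, 0.5, 0.5, 0.0, 1.2]))
# → [False, False, True, False, True]
```

A brain, on this picture, is nothing more than about 86 billion of these wired together, which is exactly what makes the picture unsettling.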
I’m cool with that bit. At least we’re the ones controlling it. But what is that control? Is our control really control? Could we have no free will at all? I may have said this before, but free will is irrelevant as long as you don’t know what the outcome will be. Is that an illusion of free will?
I’m getting incoherent here, but anyway: if the brain turns out to be a more or less automatic, “brainless” organ, much like the heart, I guess we’d be surprised at how good the illusion is. That brings up another problem, though: philosophical zombies, or p-zeds for short.
A p-zed is an organism exactly like a human in every way, one that behaves exactly the same, EXCEPT that it has no actual consciousness. If the brain is purely biological, then a p-zed can’t exist. But just by looking at a mechanical brain, is there anything that necessarily indicates a conscious human rather than a p-zed? If science can solve that, congrats.

Wait, where was I? Yes: science completely determining how the brain works is scary, since the brain IS us. Not only is there no hope of an afterlife or anything, but we would have to accept that our reasons, our emotions, our actions, our decisions are all unreal—nothing but neurons. But love is love even if it’s caused by mundane oxytocin or whatever. Is it? If every citizen of China acted as a neuron to simulate the human brain, would China be sentient?
I guess that if science can answer that objectively, then I would be amazed. But reality is subjective anyway. Wait, there’s more.
What if science gave us the ability to produce a step-by-step explanation of, say, how Andrew Wiles made the final step in proving Fermat’s Last Theorem? Would that be awesome insight, or scary? If Wiles’s proof came down to this step, this step, and this step, each a perfectly typical firing pattern in his brain, then apparently there is nothing spectacular about the whole thing. Now so what? We could simulate brains! Humans would become obsolete! We could upload consciousnesses!
What would that feel like? You put your head under a brain scanner, your virtual brain is started up, and the material you is euthanized. It’s that last part I’m worried about.
So, at time A, you are copied. There are now two living yous, one in the computer and one in the scanner, and the latter is terminated at time B. What’s happening? Is the material you still living on in a different manifestation? Or was that you copied and then deleted? If the material you is still living on, then the copying is in fact “continuous”, which perhaps means the “continuous” living we are doing now is not actually “continuous” at all. At each instant in time, a “you” makes a decision, which is recorded in memory and carried out through the nerves, and then that “you” instantly ceases to exist.
Let’s temporarily ignore the euthanization part. Suppose you’re sitting under the brain copier. Certainly, as the machinery operates, you don’t expect to suddenly find yourself submerged in some electronic matrix experience. You expect to stay here under the brain copier. Meanwhile a new version of you, also expecting to stay under the brain copier, pops up in the matrix. Maybe you can look up and communicate with the new copy. Yep, that copy is you and knows what you know, yet how could a version of you have come out of the brain copier with a different perspective after it finished? The “fleeting consciousness” idea makes sense of this, yet it brings a terrible sense of depersonalization. All I am is a stream of momentary consciousnesses, each taking my mental state, applying the rules of nature for one chronon, and leaving my mental state that way for the next one. In that case, should I identify more with the stream of consciousnesses or with the mental state? I wonder.
This post is already long, but I haven’t gotten to the climax yet. What if technology allows us to “hack” our brain? What if we could adjust our neurons to make ourselves nicer, smarter, more studious, and less prone to worrying? What if we could hack our muscles the same way? What if we could provide our next bit of consciousness with a complex set of entirely faked memories? Would that body still be us in the future? Would that be effectively like killing our current personality and replacing it with a completely new individual, except knowing how we got to be here?
I think it would be, but I doubt that would stop most people from making themselves smarter if they could. And, of course, parents would probably be even more likely to do it to their children.
Everybody would be the same. Nice and smart and studious. Is that utopia? Or would it be the worst hell anybody has ever seen?