The Future

This is a vast and disturbing topic. I have a sort of timeline planned out: make IMO in 2011, study abroad at Stanford or something, become a mathematics professor, make people happy, come back to Taiwan and get math education off the ground. This is what my future would be like if the rest of the world stayed more or less stagnant.

Which is not going to happen. (Okay, I know, I left out the part about overestimating my own genius. Fine, maybe I don’t make IMO in 2011 or 2012 or ever. Beside the point.)

The world is totally unstable right now. U.S. troops all over the place, nuclear weapons flickering in and out of existence, gas and oil prices leaping madly higher. People developing invisibility cloaks, people developing little robots, people developing artificial intelligence. If WWIII really breaks out, the world will be shattered. But to be honest, that’s one of the least scary outcomes, even if it’s the easiest to imagine, and possibly the most plausible.

Advances in neuroscience may eventually allow people to stimulate their happy nerves artificially. That would be the ultimate drug. (Digression: Google results for “artificial happiness” are mostly for a book of the same name; the remaining results are mostly for a song of the same name. And then there was this really weird page with a Pokemon parody of Tik Tok. What?)

But even more, it would be another blurring between reality and, well, reality II. If the stimulator can perfectly simulate the feeling you get after dashing through ten laps on the local track, is that happiness “real”? Of course not, you would say, but why do we feel happy after ten laps anyway? If the only reason for that is that running happens to cause some particular type of endorphins or whatever to be released, then how is that happiness, caused by nothing more than a lot of chemical reactions, any more “real” than the happiness caused by the stimulator?

Welcome to the Mind-Body problem. No, just kidding. Hmm.

Of course, there is the worrying prospect of the Technological Singularity. Background: computing power increases more or less exponentially, and projected curves say it’s going to hit a divide-by-zero around something like 2045. Not that soon, anyway. Here come simulated universes, artificial intelligence, nanotechnology, and the obsolescence of the human race.
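Just to make the arithmetic behind that kind of projection concrete, here is a minimal sketch of the extrapolation in Python. The baseline year, baseline figure, and doubling period are made-up illustrative numbers, not actual measurements or anyone’s real forecast.

```python
# Toy extrapolation of exponentially growing computing power.
# All constants below are illustrative assumptions, not real data.

def projected_ops_per_second(year, base_year=2010, base_ops=1e15, doubling_years=1.5):
    """Raw operations per second, assuming a fixed doubling period."""
    return base_ops * 2 ** ((year - base_year) / doubling_years)

for year in (2010, 2025, 2045):
    print(year, f"{projected_ops_per_second(year):.2e} ops/sec")
```

The whole trick is just picking a doubling period; the numbers then explode on their own, which is where the "divide-by-zero" joke comes from.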

First, on simulated universes. Could terrestrial scientists build a simulator of, say, Earth at 1750? I personally don’t think that would be feasible. The universe doesn’t have that much computing power for you, and you can’t simulate a part of the universe inside a smaller part. Not going to work. Computational power has its limits. I think that if you tried to simulate, say, a human macroscopically—i.e. don’t simulate all the bacteria and all the neurons and whatever, but predict how they would behave at that larger scale—your human can’t be sentient. Many AI researchers think that organic material may be needed for self-consciousness. After all, the computers we use now are really just extremely fast and sophisticated abaci. The chess program on my drive does not know what chess is, or even that chess exists; all it does is perform a huge number of logic and arithmetic operations each second, and those operations happen to produce a pretty decent chess strategy. That’s all.
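To make the abacus point concrete, here is a toy sketch of the kind of search a chess engine runs: plain minimax over numbers. The hard-coded two-ply tree is a stand-in for a real move generator and evaluation function; nothing in it knows what chess is.

```python
# Plain minimax over an abstract game tree: nothing but comparisons of numbers.
# The tiny hard-coded tree is a stand-in for a real chess move generator and
# evaluation function; the program never "understands" the positions it scores.

def minimax(node, maximizing=True):
    """Best score reachable from this node, found by brute enumeration."""
    if isinstance(node, (int, float)):        # leaf: an evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 5], [2, 9], [0, 7]]               # hypothetical two-ply game
print(minimax(tree))                          # -> 3
```

A real engine is essentially this plus aggressive pruning, a far better evaluation function, and millions of nodes per second; it is arithmetic all the way down.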

I must acknowledge, though, that we could ourselves be simulated by higher-dimensional beings in a universe with far more computing power. What’s real anyway?

Okay, on to artificial intelligence. What if humans produce an artificial superintelligence that is thousands of times more intelligent than its creators and wants to survive? If it also had access, no matter how slight, to any sort of mechanical thing, humans would probably become extinct very soon, or be reduced to powerless pets or slaves. Or hacked into obedience. This could happen terribly easily. Artificial intelligence is a real edge in combat, so there is probably no hope of getting everybody to agree not to develop it.

I should say, though, that if humans were to develop something that was morally sound, reasonably intelligent, capable of survival, and mutually empathetic with humans, I would have no problem with humans becoming extinct. It might even be good. I have a lot of imperfections. Of course, I would have a problem with a murderous blob of gray goo rendering us obsolete. Or a completely digital floating object. And no human slaves, please.

Okay, this is enough for one post.
