This week in Futuristic Panic Mode:
There are a lot of possible consequences of a complete scientific understanding of the human mind that bother me. Lately it's been the idea of happiness being objectified. If we become able to reliably reproduce happiness, in fact any possible "flavor" of it (the kind you get after a solid workout, the kind you get after learning something mind-blowing, the kind you get after completing a long assignment or project), what would stop people from giving up those activities altogether?
Presumably exercise would have a much more comfortable and convenient replacement if we actually got that far, but science isn't going to develop on its own, and books don't write themselves. What would stop people from actually doing things?
But now I see that's a bit unrealistic. The moral implications of delegating all your life goals to a machine would be huge for everybody, I think. I already have qualms about having happiness injected into my head, and simply knowing that the feeling was artificially created automatically makes it different from the kind of happiness we'd be trying to reproduce, even if I were to support it. And if we combined it with something to make me forget, or never realize, that my happiness was artificial, that form of self-deception is far enough across the line, I think, for most of us to reject it.
I do worry that artificially created moods will become socially acceptable. Right now everybody (well, not literally everybody) knows that drugs are bad: they're addictive, have dangerous side effects, impair your thinking, and are already associated with people who do many other "bad things." But if all those negative qualities were eliminated, I don't know whether my concerns could be fully defended rationally. Suppose science could create any state of mind we want, safely and with no side effects: why not? But then why not just take the feeling instead of doing whatever you'd normally do to get it (provided that activity is socially acceptable)? And, practically speaking, would there be any difference to us?
And now I realize that, for me at least, the drugs we already have are similar enough to draw the analogy. Why not just train yourself to find happiness in the real world? I judge, by standards that may or may not be rational, that it's the only kind that's "real."
Thinking back to those feelings of irrational happiness, I wondered: what if it's already happening? What a great conspiracy theory: the government sending nanobots all over the place to make people happier without their knowledge or consent! I'm not sure whether "normal" people would actually be mad about something like that. It beats covering up discovered alien bodies. What am I saying? We could do that already with present-day drugs, I think. Errr, back to the subject.
What if instead we had a drug that made people better at releasing the happiness chemicals? Instead of simulating happiness, it would lower your threshold for feeling it. It's a little scary, but then stress and depression are real problems that kill some people and seriously impair many more. I'd pick being unnaturally happy over being dead any day.
Of course, it might not actually come in the form of chemicals; maybe the preferred method would be a couple of sessions with a trained psychiatrist, which I gather is already happening now; we're just not good at it yet (and by good, I mean Clarke's-third-law good). That makes it seem a lot less scary, but is it fundamentally different? Aren't we still manipulating our minds according to our scientific understanding to change a specific quality? And who decides how happy one has to be to count as normal?
And then, of course, who are we to call anything "natural" or not? I'm pretty sure agriculture is unique to our species (not counting extraterrestrial life); is it natural? The world is the way it is now because that's just how it developed; we're inside the system, and whatever we do is "natural" to an outside observer.
Well, I guess we'll see what society thinks about this when it actually becomes possible, say, around 2050. And the final thought is, as usual, that I should be doing my homework instead of wondering what happens then. *sigh*