So. The Singularity is when our technology goes berserk: computers think better than we can, we can simulate people and upload people and engineer humans capable of surviving underwater. People don’t even have to die! So why am I still worried?
There are many bad outcomes. Most obviously, people could abuse nanotechnology or genetic engineering or robotics or whatever, and totally destroy civilization. This would be, well, really bad. However, there’s room for hope here. The more awareness we have of this, the more we can prevent it, and I think a good many futurists have this in mind all the time.
Humans could also be altered immensely by the Singularity, losing many of the essential characteristics of humanity: love, care, sympathy, respect, equality. In fact, we could reprogram ourselves to do without them. We could become a completely objective race. From our current view, that would be just as bad. Objectively speaking, though, which is better? Is there a way of thinking outside of the subjective and the objective? What would it even look like?
Then, instead of objective humans, we could get an extremely powerful artificial intelligence which unfortunately wants to destroy humans. We’d be doomed, but I think that, what with all the dystopian sci-fi movies and stories, this scenario is on even more people’s minds, so it’s more likely to be prevented.
There are also more specific dangers: people becoming overly absorbed in simulations and artificial-happiness-inducers and the like. But these are easy problems compared to the others, just as the “easy” problems of consciousness are easy compared to the hard problem, so I don’t think I should worry about them. Another possible problem would be a division between the rich and the poor, but this is again “easy”. Now, I have to march into the impractical:
Suppose we can simulate people. Let’s simulate a person who’s conscious, and coerce him/her/it to be happy, by setting the appropriate variable. Is there any point in doing so?
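To caricature the thought experiment, here is a purely hypothetical sketch. Everything in it is made up for illustration: a real conscious simulation would presumably not expose happiness as one writable number, which is rather the point.

```python
from dataclasses import dataclass

@dataclass
class SimulatedPerson:
    # Hypothetical stand-in for a conscious simulation.
    name: str
    happiness: float  # 0.0 (miserable) to 1.0 (blissful)

def coerce_happiness(person: SimulatedPerson) -> SimulatedPerson:
    # "Setting the appropriate variable": the person now registers
    # maximal happiness, regardless of anything in their simulated life.
    person.happiness = 1.0
    return person

subject = SimulatedPerson(name="subject-1", happiness=0.3)
coerce_happiness(subject)
print(subject.happiness)  # prints 1.0
```

The assignment succeeds trivially, which is what makes the question bite: if happiness can be written directly, it’s unclear what, if anything, the write accomplishes for the person inside.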