my bioass.
From the Heidegger-would-not-approve department:
The moral imperative to extend human life for as long as conceivably possible, and to improve its quality by artificial means, is no different from the responsibility to save lives in danger of ending prematurely, Professor Harris will say. Any technology that can achieve this should be actively pursued. [link]
A long life doesn't mean a quality life. One might just as well think we have an imperative to genetically engineer kids to learn at even more advanced rates early on, while their brains are still plastic, yielding a fuller and more productive early life even at the risk of shortening its length. I'm no ethicist, but I don't see either consequentialist or deontological grounds for rejecting that possibility from the start.
In any case, the same argument could just as easily be phrased as: we have an obligation to make humans as cybernetic and artificial as possible. Well, that's just silly. I speak up for machines a lot here, but central to my view is that we need to draw a distinction between humans and machines. Our machines are not just extensions of persons; they are participants in their own right. Ignoring this fact inclines us to think that the sole purpose of technology is to envelop the individual in a technological womb, to protect us from the world. But technology is no protector. Technology doesn't give us a free win; it changes the game.