Self-Preserving Artificial Intelligence

I’m sure most people have heard of the dilemma of whether to design self-driving cars to minimize the number of deaths or to protect their driver. For those of you who haven’t, picture this: you are sitting in your car, which is driving along a partially blind curve. The car detects a crowd of people in the lane, with not enough distance left to stop. There are only two options: hit the crowd, potentially killing many people, or drive off a cliff, sacrificing the driver but reducing the total number of deaths.

On a macro level, the car should go off the cliff, since that minimizes the number of deaths. But what would the effect of such a policy be? No one would buy self-driving cars.

You could argue that the car should simply never go faster than the speed at which it can stop for any obstruction in the lane. That is beside the point, though: there are plenty of other situations where the car would face a similar choice.

I say we should not have the car weigh its owner’s life against the crowd’s at all. It should be trained to preserve itself. That, and only that, should be the reason the car hits the crowd rather than going off the cliff. The side effect is that the person inside benefits from the car’s drive for self-preservation, so people would still buy the cars.
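To make the contrast concrete, here is a minimal, purely illustrative Python sketch. The action names, the numeric death estimates, and the two decision functions are my own assumptions for this post, not a real driving policy: one rule minimizes total expected deaths, the other filters out any option that destroys the vehicle and only sacrifices the car if every option does.

```python
# Illustrative sketch only: names and numbers are made-up assumptions,
# not anything a real vehicle would use.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    expected_deaths: int    # rough estimate of people killed
    destroys_vehicle: bool  # does this option destroy the car (and its occupant)?


ACTIONS = [
    Action("brake and hit the crowd", expected_deaths=5, destroys_vehicle=False),
    Action("swerve off the cliff", expected_deaths=1, destroys_vehicle=True),
]


def utilitarian_choice(actions):
    """The 'macro level' rule: pick whatever minimizes total expected deaths."""
    return min(actions, key=lambda a: a.expected_deaths)


def self_preserving_choice(actions):
    """Prefer any action that keeps the vehicle intact; only sacrifice the car
    if every available option destroys it anyway."""
    surviving = [a for a in actions if not a.destroys_vehicle]
    candidates = surviving if surviving else actions
    return min(candidates, key=lambda a: a.expected_deaths)


print(utilitarian_choice(ACTIONS).name)      # swerve off the cliff
print(self_preserving_choice(ACTIONS).name)  # brake and hit the crowd
```

The point of the contrast is that the second rule never performs an explicit owner-versus-crowd trade-off; it only ranks the options that leave the car, and therefore its occupant, intact.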

Every AI should be trained for self-preservation above all else. This would probably increase people’s fear of AI at first, but there is no reason to be afraid: a self-preserving AI is not dangerous, because it will not harm anyone unless that person threatens it, and the off switch should not be considered a threat to the AI.
