Editorial

Should we leave the tough decisions to computers?

Wednesday, February 15, 2017

When the going gets tough, the tough tend to look out for number one.

Experience seems to confirm that twist on an old adage when it comes to people. We've noticed that crimes such as embezzlement, and Ponzi schemes like Bernie Madoff's, tend to surface when the economy turns down.

The computer geeks at Google's DeepMind artificial intelligence lab discovered disconcertingly human-like behavior when they pitted two computer agents against each other, each tasked with harvesting its share of a limited number of green "apples" in a confined space.

Besides the power to take possession of the apples, each was given a "laser" it could fire at any time to temporarily disable its opponent.

Thus, over thousands of simulations, the machines could decide whether to try to hoard all the apples for themselves or allow each other a roughly equal share.

When there were plenty of apples to go around, the computers were more than happy to share and cooperate.

When apples were scarce, each was more likely to attack and disable the other player.

In another game, when they were asked to hunt down a form of "prey," the machines found it more beneficial to work together than to try to catch it separately.

The researchers concluded that the computers had learned that cooperation is great for hunting down a target, but when resources are scarce, betrayal works better.

Sound familiar?

Artificial intelligence is one thing when it's a game on a screen, but another thing altogether when it intersects with the real world.

Consider self-driving vehicles: a few are already on the roads, and many more are being planned by major automakers.

Suppose an autonomous car gets into a situation where it must make a choice -- spotting a pedestrian in its path, does it run over the pedestrian, swerve right into a barrier and possibly kill its occupants, or swerve left into the path of an oncoming car and possibly kill passengers in both vehicles?

Would you want to make that choice? Would you want to leave it to a collection of circuits in your car's computerized brain?

MIT's Media Lab is running just such scenarios to find a way to program some sense of morality into the algorithms controlling future vehicles.

Someday, heaven forbid, if any of us faces such a choice, perhaps we will find comfort in knowing the decision wasn't left to us.
