Headline Nov 02, 2014

''' ROBO-ETHICS '''

ONE WAY of dealing with these difficult questions is to avoid them altogether:

By banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times.

Campaign groups such as the International Committee for Robot Arms Control have been formed in opposition to the growing use of drones.

But autonomous robots could do much more good than harm.

Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. 

Driverless cars are very likely to be safer than ordinary vehicles, just as autopilots have made planes safer.

Sebastian Thrun, a pioneer in the field, reckons driverless cars could save 1 million lives a year.

Instead, society needs to develop ways of dealing with the ethics of robotics, and to do so quickly.

In America, states have been scrambling to pass laws covering driverless cars, which have been operating in a legal grey area as the technology runs ahead of legislation.

It is clear that rules of the road are required in this difficult area, and not just for robots with wheels.

The best-known set of guidelines for robo-ethics is the ''three laws of robotics'' coined by Isaac Asimov, a science fiction writer, in 1942.

The laws require robots to protect humans, obey orders and preserve themselves, in that order. Unfortunately, the laws are of little use in the real world: battlefield robots would be required to violate the first law.

And Asimov's robot stories are fun precisely because they highlight the unexpected complications that arise when robots try to follow his apparently sensible rules.
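Asimov's laws amount to a strict priority ordering: an action is acceptable only if no higher-ranked law forbids it. A minimal sketch of that ordering in code might look like the following (the function names and the dictionary keys describing an action are entirely hypothetical, chosen only to illustrate the precedence structure):

```python
# Hypothetical sketch: Asimov's three laws as a strict priority ordering.
# An action is represented as a dict of boolean flags; an action is
# permitted only if no law, checked in priority order, forbids it.

def violates_law_1(action):
    """Law 1: a robot may not injure a human being."""
    return action.get("harms_human", False)

def violates_law_2(action):
    """Law 2: a robot must obey human orders."""
    return action.get("disobeys_order", False)

def violates_law_3(action):
    """Law 3: a robot must protect its own existence."""
    return action.get("endangers_self", False)

def permitted(action):
    # Check the laws in priority order; the first violation found
    # is enough to reject the action.
    for check in (violates_law_1, violates_law_2, violates_law_3):
        if check(action):
            return False
    return True
```

Even this toy version shows the problem the article points to: a battlefield robot's mission would require `harms_human` to be true, so under the first law it could never act at all.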

Regulating the development and use of autonomous robots will require a rather more elaborate framework. Progress is needed in three areas in particular.

First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident.

In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design:

It may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.
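The kind of detailed log the article calls for could be as simple as recording, for every decision, the inputs the system saw, the predefined rule that fired and the resulting action, with a timestamp. A minimal sketch, with all class and method names hypothetical:

```python
# Hypothetical sketch of a decision log for an autonomous system:
# each decision records its inputs, the rule that fired, the outcome
# and a timestamp, so the reasoning can be reconstructed afterwards.
import json
import time

class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, inputs, rule, outcome):
        """Append one decision to the log."""
        self.entries.append({
            "timestamp": time.time(),
            "inputs": inputs,
            "rule": rule,        # the predefined rule that was applied
            "outcome": outcome,  # the action the system took
        })

    def explain(self, index):
        """Return a human-readable account of one logged decision."""
        e = self.entries[index]
        return f"Applied rule '{e['rule']}' to {e['inputs']} -> {e['outcome']}"

    def export(self):
        """Serialise the whole log, e.g. for accident investigators."""
        return json.dumps(self.entries)
```

Note that this only works because each entry names an explicit rule; a neural network that cannot point to the rule it applied has nothing meaningful to put in that field, which is exactly the design implication the article raises.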

Second, where ethical systems are embedded into robots, the judgements they make need to be the ones that seem right to most people.

The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help.

Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.

Both ethicists and engineers stand to benefit from working together; ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.

Technology has driven mankind's progress, but each new advance has posed troubling new questions. Autonomous machines are no different.

The sooner the questions of moral agency they raise are answered, the easier it will be for mankind to enjoy the benefits that they will undoubtedly bring.

***So, as robots grow more and more autonomous, society needs to develop rules to manage them.***

With respectful dedication to all the  Research Scientists  in the world. See Ya all, Sirs, on !WOW!  -the World Students Society Computers-Internet-Wireless:

''' Smoke And Fire '''

'''Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless

