[Dprglist] Robots: for good or evil?

Murray Altheim murray18 at altheim.com
Sun Nov 22 11:14:50 PST 2020


  "On a summer night in Dallas in 2016, a bomb-handling robot made
   technological history. Police officers had attached roughly a
   pound of C-4 explosive to it, steered the device up to a wall
   near an active shooter and detonated the charge. In the explosion,
   the assailant, Micah Xavier Johnson, became the first person in
   the United States to be killed by a police robot.

  "Afterward, then-Dallas Police Chief David Brown called the
   decision sound. Before the robot attacked, Mr. Johnson had
   shot five officers dead, wounded nine others and hit two
   civilians, and negotiations had stalled. Sending the machine
   was safer than sending in human officers, Mr. Brown said."

  "But some robotics researchers were troubled. “Bomb squad”
   robots are marketed as tools for safely disposing of bombs,
   not for delivering them to targets."
   [...]

   Can We Make Our Robots Less Biased Than Us?
   A.I. developers are committing to end the injustices in how their
     technology is often made and used.
   David Berreby, 22 Nov. 2020, The New York Times
   https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html

----

This certainly raises a few questions. For instance: what is a robot?

The device used in the above description would be more properly described
as a telerobotic or remote-controlled platform (since it is entirely
controlled by a human). The article does acknowledge these distinctions,
and is more focused on the biases inherent in robot design due to the
biases of the designers, but also on the use of technology in general.

We often see almost any remote-controlled device nowadays being called a
"robot", which does disservice to that other aspect of robotics that is
(to us, I believe) more interesting: autonomous or semi-autonomous robots;
and to the thing that keeps Elon Musk up at night (“potentially more
dangerous than nukes”), the idea of AI somehow unleashed as a weapon upon
the world -- which is already happening, what with facial recognition,
computer-guided bombs, bomb-carrying drones, and weapon-carrying autonomous
firing platforms. The military clearly likes the idea of robots.

And then there's the combination of those fears: autonomous, weapon-wielding
robots.

Robert Oppenheimer, Douglas Engelbart and others have proffered the idea
that designers, engineers, and everyone else involved in the development
and support of a technology are also ultimately responsible for its usage,
that no one escapes that responsibility should the technology end up
causing damage to society. The gun, the nuclear bomb, AI, robotics: all
are tools with potentially beneficial as well as harmful uses. And if one
is using a tool against an enemy, is that harm justified? Is the police
use of a telerobotic platform to kill a shooter, or the security services'
use of facial recognition to track people's movements in order to locate
criminals and terrorists, really so different (apart from scale, of
course) from the use of the nuclear bombs on Hiroshima and Nagasaki
"to end a war"? Or even from the use of oil to power our cars? How much
"collateral damage" is permitted?

Most of us are simply hobbyists, so we aren't creating tools that would
likely ever be used against us (maybe against our cats). And while I
don't think there are any definitive answers, the question of the use
and abuse of technology still remains one of the more profoundly
important of our times.

Cheers,

Murray

...........................................................................
Murray Altheim <murray18 at altheim dot com>                       = =  ===
http://www.altheim.com/murray/                                     ===  ===
                                                                    = =  ===
     In the evening
     The rice leaves in the garden
     Rustle in the autumn wind
     That blows through my reed hut.
            -- Minamoto no Tsunenobu
