<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">John,<br>
<br>
<font size="2" color="black" face="Arial, Helvetica, sans-serif"><font
size="2">"Stay out of the ditch. Don't hit anything."</font></font><br>
<br>
words to live by...<br>
<br>
cheers!<br>
dpa<br>
<br>
<br>
On 04/19/2017 09:47 PM, John Swindle wrote:<br>
</div>
<blockquote
cite="mid:15b894220dc-2f1e-29be9@webprd-a36.mail.aol.com"
type="cite">
<font size="2" color="black" face="Arial, Helvetica, sans-serif">
<div> <font size="2">Dave,<br>
<br>
The Q&amp;A at the AT&amp;T/IBM Watson hackathon
presentation at DPRG a couple months ago showed that the
weights in the neural nets have no meaning to people. So,
though we think we can grade an AI system, we cannot
diagnose it. We can only correct it when it malfunctions. We
think that a low rate of false passes and false fails means
success. But when we try to diagnose why there were false
fails and false passes, we have no clue because the neural
net weights have no meaning. We can "fix" the problem by
massaging the weights, but we have no freaking clue about
what effect those changes have on other results. Indeed, we are not
even supposed to know!<br>
<br>
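Grading, in that sense, is the cheap half of the job. Here's a
minimal sketch, assuming boolean pass/fail predictions scored
against ground-truth labels; note that nothing in it says WHY
any single case went wrong:<br>
<pre>
# Grading: count false passes and false fails against known labels.
# This yields the RATE of failure, never the REASON for it.
def grade(predictions, labels):
    """predictions, labels: lists of booleans, True = pass."""
    false_pass = sum(p and not ok for p, ok in zip(predictions, labels))
    false_fail = sum(ok and not p for p, ok in zip(predictions, labels))
    n = len(labels)
    return false_pass / n, false_fail / n
</pre>
<br>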
Over lunch at a place where they do this sort of thing, my
student said, "but people make mistakes, too." It's a very
cogent argument, and it stopped me then.<br>
<br>
But now I have an answer, and the answer is the story. A
purpose-written program (non-AI) has a rationale, a story.
If it fails, we understand WHY it fails, because the story
was wrong. We understand it. We re-write it.<br>
<br>
Does our story give a better result than the opaque weights
of a neural net? I don't think so. But I think we can more
easily control and CORRECT a story that we understand, as
opposed to random weights that just happened to deconvolve a
problem that day.<br>
<br>
Separately,<br>
<br>
I have a completely different proposal from what any of the
car companies are doing:<br>
<br>
Instead of special-casing hundreds of thousands of traffic
conditions, I propose self-driving cars that:<br>
<br>
Drive the way people drive in India, Mexico, and Rome, and on
the dirt farms in Arkansas where I grew up:<br>
<br>
1) Stay out of the ditches<br>
<br>
and<br>
<br>
2) Don't hit anything.<br>
<br>
If a lane is available that will not cause the vehicle to
become stuck and does not ruin a neighbor's lawn, drive on.
Otherwise, stop.<br>
<br>
That's what I was taught. The rest is law and courtesy,
which I was also taught. But for a vehicle, the rules could
be dirt simple.<br>
<br>
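Dirt simple, as in the following sketch. The three predicates
are hypothetical stand-ins for whatever the perception side of
the vehicle actually reports; it illustrates the two rules, not
anybody's real control loop:<br>
<pre>
# The whole policy. Everything else is law and courtesy.
def choose_action(lane_is_clear, lane_gets_us_stuck, lane_is_a_lawn):
    if lane_gets_us_stuck or lane_is_a_lawn:  # rule 1: stay out of the ditch
        return "stop"
    if not lane_is_clear:                     # rule 2: don't hit anything
        return "stop"
    return "drive on"                         # a usable lane exists
</pre>
<br>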
Stay out of the ditch. Don't hit anything.<br>
<br>
Best to y'all.<br>
<br>
John Swindle<br>
</font><br>
</div>
<div> <br>
</div>
<div> <br>
</div>
<div
style="font-family:arial,helvetica;font-size:10pt;color:black">-----Original
Message-----<br>
From: David Anderson <a class="moz-txt-link-rfc2396E" href="mailto:davida@smu.edu"><davida@smu.edu></a><br>
To: DPRG <a class="moz-txt-link-rfc2396E" href="mailto:dprglist@dprg.org"><dprglist@dprg.org></a><br>
Sent: Sat, Apr 15, 2017 3:02 pm<br>
Subject: [Dprglist] fooling AI<br>
<br>
timely:
<a moz-do-not-send="true"
href="http://www.bbc.com/future/story/20170410-how-to-fool-artificial-intelligence"
target="_blank">http://www.bbc.com/future/story/20170410-how-to-fool-artificial-intelligence</a>
<<a moz-do-not-send="true"
href="http://www.bbc.com/future/story/20170410-how-to-fool-artificial-intelligence"
target="_blank">http://www.bbc.com/future/story/20170410-how-to-fool-artificial-intelligence</a>>
cheers,<br>
dpa<br>
</div>
</font>
</blockquote>
<br>
</body>
</html>