[Dprglist] AI in natural language, self-driving cars, etc.

David P. Anderson davida at smu.edu
Mon Oct 17 12:50:43 PDT 2022


Hi Murray!

Thanks for this thoughtful reply.   Your observations here have been my 
inclination for some time, as we have discussed before, though I haven't 
had much support in your absence... :)

I have been around long enough to see fads come and go in the AI 
community (remember Expert Systems?). So who knows what will be in and 
out 10 years from now (it's always 10 years from now).  More to the 
point, I'm not sure a general AI as currently constituted is actually 
required for what we want robots to do.   As has been discussed, a robot 
as capable and "intelligent" as a Honey Bee would be a huge step forward 
from where we are now.

And once life on Earth made it up to the insects, all the pieces were in 
place for what followed.   Throw in a few peas and carrots, maybe some 
onion, simmer and stir for a few hundred million years, and out pops 
Beethoven.   We've still got a long way to go.

Hope things are going well for you and yours in Japan.

Pax,

dpa



On 10/17/22 3:47 AM, Murray Altheim via DPRGlist wrote:
>
> Hi David,
>
> This is kinda my field (Knowledge Representation and computer-based 
> ontologies),
> and one of the things that neither AI-based automobiles nor language 
> systems have
> is anything like an underlying **model**. So while GPT-3 may be able 
> to mimic
> human writing by using massive data sets and "deep learning" (a horribly
> misleading term), as the article correctly states, there is no 
> underlying model
> of time or space, physical, mental, emotional, spiritual or other 
> entities, no
> notion of that seemingly archaic ontological modeling that was part of 
> the earlier
> systems like Cyc. There is of course research into hybrid systems, 
> but the fact that there is no existing computer-based ontology that 
> accurately and reliably models reality (based on what? language?) 
> means that even such hybrid systems will be plagued with all manner 
> of issues.
>
> It was only about a decade or so ago that the computer-based ontology 
> community began having yearly conferences on the subject of 
> "context", and my exposure to many of the commercial/industrial 
> ontologies in use shows that they typically reflect technology from 
> the early 1990s, where context simply could not be modeled.
> E.g., the scary Palantir system used in the intelligence and security 
> community is
> utterly unable to model things like context, opinion, disagreement 
> (either of facts
> or human expressions), as its extremely weak model theory has no such 
> facility, this
> even after spending putatively billions on its research and 
> development. If that's representative of best-of-class, it's still in 
> primary school.
>
> In other words, just like Elon Musk's silly claims about self-driving 
> electric cars,
> Thiel's claims about AI are similarly marketing fluff. Musk's and 
> others' fears of the imminent danger from AI are based on ignorance 
> of the current state of technology, or trolling, or both. My guess is 
> the former -- I've never thought Musk as intelligent as his cult 
> tends to think, simply based on his known education, known 
> experience, and the content of his public statements (which are quite 
> ignorant).
>
> Every article I've read in The Guardian, written by journalists who 
> often fancy that
> they've read a few books and perhaps interviewed somebody, touts the 
> advances of AI
> as if they're already here or soon to occur. I think I've referred 
> before to the
> writings of somebody who knows better, Rodney Brooks of the MIT 
> Robotics Lab, who
> in many of his online blogs and papers points out a lot of the 
> fallacies of these
> fluffy ideas, and has for many years kept a list of his predictions on 
> most of the
> AI-related technologies and when they may either exist or become 
> feasible enough to
> mass-market. He remains a skeptic as do I.
>
> So while the hybrid approach is, as they say in Japan, "trending", it 
> will still be plagued both by the problems inherent in ontology 
> modeling and by the problems inherent in knowledge networks built 
> from large data sets harvested from existing sources, which are often 
> acontextual and factually wrong, sometimes dangerously so, with no 
> means of discerning a single "common sense reality" from that data. 
> Cyc may have
> had its issues but it did have a single "common sense" because Doug 
> Lenat's views
> were paramount in its design: he has been the God in that system since 
> day 1. On
> the Web there are billions of Gods and they all disagree with each other.
>
> Cheers,
>
> Murray
>
> On 10/17/22 06:02, David P. Anderson via DPRGlist wrote:
>> Interesting take:
>>
>> https://undark.org/2022/10/07/interview-why-mastering-language-is-so-difficult-for-ai/ 
>>
>>
>> dpa
>>
>>
>> _______________________________________________
>> DPRGlist mailing list
>> DPRGlist at lists.dprg.org
>> http://lists.dprg.org/listinfo.cgi/dprglist-dprg.org
> ........................................................................... 
>
> Murray Altheim <murray18 at altheim dot com>                       = 
> =  ===
> http://www.altheim.com/murray/ 
> ===  ===
> = =  ===
>    In the evening
>    The rice leaves in the garden
>    Rustle in the autumn wind
>    That blows through my reed hut.
>           -- Minamoto no Tsunenobu
>
>

