The limits of Deep Learning

In a recent article, AI expert Gary Marcus wrote about deep learning running into serious trouble with the latest massive language models. After rebutting a few overly optimistic claims, such as Geoffrey Hinton's quote "... deep learning is going to be able to do everything", he explains that models like GPT-3 have serious problems that stem from the broad, unfiltered wealth of training material used to create them. GPT-3 can be dangerously unreliable: in one test dialog the system recommended suicide, which is not what one would expect of a future therapeutic dialog system. It also produces toxic language and is prone to reproducing stereotypes. Deep-learning-powered large language models are like "stochastic parrots, repeating a lot, understanding little" (Bender, Gebru et al.). In the end, Marcus recommends combining the big-data-oriented neural network approach with symbolic manipulation, as in the expert systems of the 70s and 80s, to create hybrid systems; first such approaches are emerging in the new research field of neurosymbolic programming (Swarat Chaudhuri).
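To make the hybrid idea a little more concrete, here is a toy sketch of my own (it is not from the article, and all names and rules in it are made up): a statistical "neural" component proposes candidate replies, and a symbolic rule layer vetoes unsafe ones before anything reaches the user.

# Stand-in for the neural component: a real system would call a trained
# language model here; this stub just returns scored candidate replies.
def neural_propose(prompt: str) -> list[tuple[str, float]]:
    return [
        ("It may help to talk to someone you trust about this.", 0.91),
        ("I think you should do it.", 0.89),  # the kind of unsafe output the article describes
    ]

# Symbolic component: explicit, human-readable rules the system must obey.
FORBIDDEN_FRAGMENTS = ["you should do it", "kill yourself"]

def symbolic_filter(candidates: list[tuple[str, float]]) -> list[tuple[str, float]]:
    # Drop every candidate that matches a forbidden fragment.
    return [
        (text, score)
        for text, score in candidates
        if not any(fragment in text.lower() for fragment in FORBIDDEN_FRAGMENTS)
    ]

def respond(prompt: str) -> str:
    candidates = symbolic_filter(neural_propose(prompt))
    if not candidates:
        return "I am not able to answer that safely."
    return max(candidates, key=lambda c: c[1])[0]  # best surviving candidate

print(respond("I feel very bad. Should I end it?"))

The point of the symbolic layer is that its rules are explicit and inspectable, whereas the statistical component on its own gives no such guarantee.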

It is a well-researched and very interesting article; I can recommend it.

Welcome

Welcome to the launch of my website!

I hope you have fun browsing through my pages: technical, biographical, and other interesting things …

With warm regards, yours

Eric Windisch