No soap radio: AI bots don’t get puns, but pretend they do
There is a very old practical joke in which the prankster tells a joke to a group that is in on the gag, delivering the nonsensical punchline "no soap, radio" regardless of the setup. A lone uninformed victim will often laugh along despite not getting the joke. It turns out LLMs do the same thing with puns.
At first glance, large language models like ChatGPT seem to understand humor, but a new study suggests they are largely repeating jokes from their training data. The models in the study recognized known puns as puns and successfully analyzed their humor, but when a single word was replaced with a nonsense word, or when they were presented with an unfamiliar pun, they failed miserably.
“When faced with unfamiliar puns, their success rate in distinguishing puns from sentences without a pun can drop to as low as 20%—much worse than the 50% you’d expect from random guessing. We also identified an overconfidence in the models’ assumption that what they were processing was in fact funny. This was especially the case when it came to puns that they hadn’t seen before,” explains Mohammad Taher Pilehvar, another of the paper’s authors from Cardiff University’s School of Computer Science and Informatics.
Chatbots routinely display overwhelming confidence while spouting nonsense, so this is but another reminder of the limitations of large language models.

I read the linked Chomsky article https://boingboing.net/2023/03/09/noam-chomsky-explains-the-difference-between-chatgpt-and-true-intelligence.html, then I read the BB comments on that article from 2023, https://bbs.boingboing.net/t/noam-chomsky-explains-the-difference-between-chatgpt-and-true-intelligence/243709, and then I got the sads about how much depth, humour, liveliness & community has been lost in the comments since Boing Boing moved to Substack and started the paid subscription model.