Miguel Afonso Caetano<p>In other words, generative AI and LLMs lack a sound epistemology, and that's very problematic:</p><p>"Bullshit and generative AI are not the same. They are similar, however, in the sense that both mix true, false, and ambiguous statements in ways that make it difficult or impossible to distinguish which is which. ChatGPT has been designed to sound convincing, whether right or wrong. As such, current AI is more about rhetoric and persuasiveness than about truth. Current AI is therefore closer to bullshit than it is to truth. This is a problem because it means that AI will produce faulty and ignorant results, even if unintentionally.<br>(...)<br>Judging by the available evidence, current AI – which is generative AI based on large language models – entails artificial ignorance more than artificial intelligence. That needs to change for AI to become a trusted and effective tool in science, technology, policy, and management. AI needs criteria for what truth is and what gets to count as truth. It is not enough to sound right, like current AI does. You need to be right. And to be right, you need to know the truth about things, like AI does not. This is a core problem with today's AI: it is surprisingly bad at distinguishing between truth and untruth – exactly like bullshit – producing artificial ignorance as much as artificial intelligence with little ability to discriminate between the two.<br>(...)<br>Nevertheless, the perhaps most fundamental question we can ask of AI is that if it succeeds in getting better than humans, as already happens in some areas, like playing AlphaZero, would that represent the advancement of knowledge, even when humans do not understand how the AI works, which is typical? Or would it represent knowledge receding from humans? 
If the latter, is that desirable and can we afford it?"</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5119382" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">papers.ssrn.com/sol3/papers.cf</span><span class="invisible">m?abstract_id=5119382</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Ignorance" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ignorance</span></a> <a href="https://tldr.nettime.org/tags/Epistemology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Epistemology</span></a> <a href="https://tldr.nettime.org/tags/Bullshit" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Bullshit</span></a></p>