snabelen.no is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Norwegian home for the decentralized microblogging platform.


#generativeAI


"My current conclusion, though preliminary in this rapidly evolving field, is that not only can seasoned developers benefit from this technology — they are actually in the optimal position to harness its power.

Here’s the fascinating part: The very experience and accumulated know-how in software engineering and project management — which might seem obsolete in the age of AI — are precisely what enable the most effective use of these tools.

While I haven’t found the perfect metaphor for these LLM-based programming agents in an AI-assisted coding setup, I currently think of them as “an absolute senior when it comes to programming knowledge, but an absolute junior when it comes to architectural oversight in your specific context.”

This means that it takes some strategic effort to make them save you a tremendous amount of work.

And who better to invest that effort in the right way than a senior software engineer?

As we’ll see, while we’re dealing with cutting-edge technology, it’s the time-tested, traditional practices and tools that enable us to wield this new capability most effectively."

manuel.kiessling.net/2025/03/3

The Log Book of Manuel Kießling · Senior Developer Skills in the AI Age: Leveraging Experience for Better Results. How time-tested software engineering practices amplify the effectiveness of AI coding assistants.

“The story of automation in the US is that it has mostly impacted on manual workers in manufacturing. For example, factory employees — such as carmakers — performing routine tasks have lost their jobs to robots — or lower-cost Asian competitors.

#IndustrialAutomation has tended to affect lower-skilled, #BlueCollar jobs in the “#rustbelt” heartlands and small-town, less-educated communities in the south and midwest.

But a recent study from the #BrookingsInstitution suggests that the communities most exposed to AI-driven job dislocation will be #WhiteCollar information workers. The researchers studied the usage of #OpenAI’s #GenerativeAI tools across more than 1,000 occupations and mapped this against where those jobs were most commonly located.

Their analysis suggests that many #coders, #lawyers, #FinancialAnalysts and #bureaucrats in cities such as San Jose, San Francisco, Durham, New York and Washington DC might want to rethink their futures. But #NonOffice-bound #workers in places such as Las Vegas, Toledo, Ohio and Fort Wayne, Indiana may be less exposed to AI disruption.”

My observation since 2022, when #ChristopherHohn, an influential shareholder, decided to *speak out* about “reducing its head count and paying (hi-tech) workers less”. [1]

This is the decade in which extreme (cost) pressure will be forced on white-collar workers by the introduction of AI.

<archive.md/YqF03> / <ft.com/content/04343a69-8204-4> (paywall)

[1] <forbes.com/sites/jonathanponci>

If you understand Virtue Epistemology (VE), you cannot accept any LLM output as "information".

VE is an attempt to correct the various omniscience problems inherent in classical epistemologies, all of which to some extent require a person to already know the Truth in order to evaluate whether some statement is true.

VE prescribes that we should look to how the information was obtained, particularly in two ways:
1) Was the information obtained using a well-known method that is known to produce good results?
2) Does the method appear to have been applied correctly in this particular case?

LLM output always fails on point 1. An LLM will not look for the truth; it will just look for a probable combination of words. This means that an LLM is just as likely to combine a number of true statements in a way that is probable but false as it is to combine them in a way that is probable and true.

LLMs only sample the probability of word combinations. They don't understand the input, and they don't understand their own output.
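The sampling claim above can be illustrated with a toy sketch (not a real LLM; the tokens and probabilities are invented for the example): a sampler weighted only by probability will happily emit a probable-but-false continuation, because truth is not a variable it ever sees.

```python
import random

# Invented continuation probabilities for a prompt like
# "The capital of France is ..." -- purely illustrative numbers.
next_token_probs = {
    "Paris": 0.6,      # probable and true
    "Lyon": 0.3,       # probable but false
    "Marseille": 0.1,  # probable but false
}

def sample_next_token(probs, rng):
    """Pick a token weighted only by probability; truth plays no role."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is reproducible
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]

# The sampler emits the false-but-probable tokens at roughly the rate
# of their assigned probability -- it optimizes likelihood, not truth.
```

Nothing in the sampling step can distinguish "Paris" from "Lyon"; any truthfulness in the output is inherited from the probabilities, never checked.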

Only a damned fool would use it for anything, ever.

#epistemology #LLM #generativeAI #ArtificialIntelligence #ArtificialStupidity @philosophy

What are some examples of where an existing app or service has added a generative AI powered feature that people genuinely find useful? I can think of countless examples of where AI has been added for the sake of it, and delivers little value, so I am keen to learn of cases where the opposite is true. #generativeAI

An #AI #Image Generator’s Exposed Database Reveals What People Really Used It For
An unsecured database used by a #generativeAI app revealed prompts and tens of thousands of explicit images—some of which are likely illegal. The company deleted its websites after WIRED reached out.
wired.com/story/genomis-ai-ima
archive.ph/sEmuA

WIRED · An AI Image Generator’s Exposed Database Reveals What People Really Used It For. By Matt Burgess

"Now consider the chatbot therapist: what are its privacy safeguards? Well, the companies may make some promises about what they will and won't do with the transcripts of your AI sessions, but they are lying. Of course they're lying! AI companies lie about what their technology can do (of course). They lie about what their technologies will do. They lie about money. But most of all, they lie about data.

There is no subject on which AI companies have been more consistently, flagrantly, grotesquely dishonest than training data. When it comes to getting more data, AI companies will lie, cheat and steal in ways that would seem hacky if you wrote them into fiction, like they were pulp-novel dope fiends:
(...)
But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone. All those business applications that AI companies are pushing, the kind where you entrust an AI with your firm's most commercially sensitive data? Are you crazy? These companies will not only leak that data, they'll sell it to your competition. Hell, Microsoft already does this with Office365 analytics:
(...)
These companies lie all the time about everything, but the thing they lie most about is how they handle sensitive data. It's wild that anyone has to be reminded of this. Letting AI companies handle your sensitive data is like turning arsonists loose in your library with a can of gasoline, a book of matches, and a pinky-promise that this time, they won't set anything on fire."

pluralistic.net/2025/04/01/doc

pluralistic.net · Pluralistic: Anyone who trusts an AI therapist needs their head examined (01 Apr 2025). Daily links from Cory Doctorow

In other words, Generative AI and LLMs lack a sound epistemology and that's very problematic...:

"Bullshit and generative AI are not the same. They are similar, however, in the sense that both mix true, false, and ambiguous statements in ways that make it difficult or impossible to distinguish which is which. ChatGPT has been designed to sound convincing, whether right or wrong. As such, current AI is more about rhetoric and persuasiveness than about truth. Current AI is therefore closer to bullshit than it is to truth. This is a problem because it means that AI will produce faulty and ignorant results, even if unintentionally.
(...)
Judging by the available evidence, current AI – which is generative AI based on large language models – entails artificial ignorance more than artificial intelligence. That needs to change for AI to become a trusted and effective tool in science, technology, policy, and management. AI needs criteria for what truth is and what gets to count as truth. It is not enough to sound right, like current AI does. You need to be right. And to be right, you need to know the truth about things, like AI does not. This is a core problem with today's AI: it is surprisingly bad at distinguishing between truth and untruth – exactly like bullshit – producing artificial ignorance as much as artificial intelligence with little ability to discriminate between the two.
(...)
Nevertheless, the perhaps most fundamental question we can ask of AI is that if it succeeds in getting better than humans, as already happens in some areas, like playing AlphaZero, would that represent the advancement of knowledge, even when humans do not understand how the AI works, which is typical? Or would it represent knowledge receding from humans? If the latter, is that desirable and can we afford it?"

papers.ssrn.com/sol3/papers.cf

papers.ssrn.com · AI as Artificial Ignorance. AI and bullshit (in the strong philosophical sense of Harry Frankfurt) are similar in the sense that both prioritize rhetoric over truth. They mix true, false, …