snabelen.no is one of many independent Mastodon servers you can use to participate in the decentralized social web.
A Norwegian home for the decentralized microblogging platform.

Server statistics: 367 active users

#generativeAI

56 posts · 42 participants · 0 posts today

“Jamie Dimon is unequivocal about the impact of #RemoteWorking on #training new bankers. “It doesn’t work in our business,” the chief executive of JPMorgan Chase told Stanford’s Graduate School of Business this year.

“Younger people [are] left behind.”
He has previously spoken of the importance of “the #apprenticeship model . . . which is almost impossible to replicate in the #Zoom world”.

In many #workplaces, that #ApprenticeshipModel is as simple as sitting near a more experienced #colleague or joining a #client meeting to watch how it is done, while also *learning the ropes* by taking on often more #repetitive and #basic #tasks.

But #OnTheJob learning is now facing the double threat of #HybridWorking, which means #JuniorStaff spend less time #observing and #listening to more senior #colleagues, and #GenerativeAI, which is making #obsolete many of the routine tasks that have long been building blocks of #ProfessionalKnowledge.”

#WhiteCollar / #ZeroHourWork / #AI <ft.com/content/071089b8-839a-4> (paywall) / <archive.md/WROMn>

Financial Times · On-the-job learning upended by AI and hybrid work · By Emma Jacobs

"While projects like Te Hiku are no doubt valuable, by definition they cannot be scaled-up alternatives to the collective power of American AI capital, which commands resources far greater than many of the world’s states. If it becomes normal for AI tools like ChatGPT to be governed by and for Silicon Valley, we risk seeing the primary means of content production concentrated in the hands of a tiny number of tech barons.

We therefore need to put big solutions on the table. Firstly, regulation: there must be a set of rules that place strict limits on where AI companies get their data from, how their models are trained, and how their algorithms are managed. In addition, all AI systems should be forced to operate within tightly regulated environmental limits: energy usage for generative AI cannot be a free-for-all on a planet under immense ecological stress. AI-powered automated weapons systems should be prohibited. All of this should be subject to stringent, independent audits to ensure compliance.

Secondly, although the concentration of market power in the AI industry took a blow from DeepSeek’s arrival, there remain strong tendencies within AI — and indeed in digital tech as a whole — towards monopolization. Breaking up the tech oligarchy would mean eliminating gatekeepers that concentrate power and control data flows.

Finally, the question of ownership should be a serious part of the debate. Te Hiku shows that when AI tools are built by organizations with entirely different incentive structures in place, they can produce wildly different results. As long as artificial intelligence is designed for the purposes of the competitive accumulation of capital, firms will continue to find ways to exploit labor, degrade the environment, take short cuts in data extraction, and compromise on safety, because if they don’t, one of their competitors will."

jacobin.com/2025/07/altman-ope

jacobin.com · Sam Altman’s AI Empire Relies on Brutal Labor Exploitation · Firms like OpenAI are developing AI in a way that has deeply ominous implications for workers in many different fields. The current trajectory of AI can only be changed through direct confrontation with the overweening power of the tech giants.

"Despite the often-poor quality of the content, she says clients are becoming used to the speed of AI and that is creating unrealistic expectations.

"AI really makes everyone think it's a few minutes' work," says Ms Barot, who says clients are using OpenAI's ChatGPT.

"However, good copyediting, like writing, takes time because you need to think and not curate like AI, which also doesn't understand nuance well because it's curating the data."

The hype around AI has prompted many companies to experiment without clear goals, adequate infrastructure, or a realistic understanding of what the technology can deliver, says Prof Li.

"For example, companies must assess whether they have the right data infrastructure, governance processes, and in-house capabilities to support AI use. Relying on off-the-shelf tools without understanding their limitations can lead to poor outcomes," he says."

bbc.com/news/articles/cyvm1dyp

www.bbc.com · 'I'm being paid to fix issues caused by AI' · Businesses that rush to use AI to write content or computer code often have to pay humans to fix it.

"The intoxicating buzz around artificial intelligence stocks over the last few years looks concerningly like the dot-com bubble, top investor Richard Bernstein warns.

The CIO at $15 billion Richard Bernstein Advisors wrote in a June 30 post that the AI trade is starting to look rich, and that it may be time for investors to turn their attention toward a more "boring" corner of the market: dividend stocks.

"Investors seem universally focused on 'AI' which seems eerily similar to the '.com' stocks of the Technology Bubble and the 'tronics' craze of the 1960s. Meanwhile, we see lots of attractive, admittedly boring, dividend-paying themes," Bernstein wrote.

Since ChatGPT hit the market in November 2022, the S&P 500 and Nasdaq 100 have risen 54% and 90%, respectively. Valuations, by some measures, have surged back toward record highs, rivaling levels seen during the dot-com bubble and the 1929 peak.

While Bernstein said he's not calling a top, trades eventually go the other way, and the best time to invest in something is when it's out of favor — not when a major rally has already occurred."

businessinsider.com/stock-mark

Business Insider · AI stocks look 'eerily similar' to the dot-com craze, CIO warns · By William Edwards

"Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters.

Google's AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. It began adding advertisements to AI Overviews last May.

The company is making its biggest bet by integrating AI into search but the move has sparked concerns from some content providers such as publishers.

The Independent Publishers Alliance document, dated June 30, sets out a complaint to the European Commission and alleges that Google abuses its market power in online search."

reuters.com/legal/litigation/g

"In May, researchers at Carnegie Mellon University released a paper showing that even the best-performing AI agent, Google's Gemini 2.5 Pro, failed to complete real-world office tasks 70 percent of the time. Factoring in partially completed tasks — which included work like responding to colleagues, web browsing, and coding — only brought Gemini's failure rate down to 61.7 percent.

And the vast majority of its competing agents did substantially worse.

OpenAI's GPT-4o, for example, had a failure rate of 91.4 percent, while Meta's Llama-3.1-405b had a failure rate of 92.6 percent. Amazon's Nova-Pro-v1 failed a ludicrous 98.3 percent of its office tasks.
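The gap between the 70 percent headline figure and the 61.7 percent partial-credit figure above is purely a scoring choice. A minimal sketch of how the two failure rates relate (the function name and the toy scores are my own illustration, not the CMU benchmark's actual code):

```python
def failure_rate(task_scores, partial_credit=False):
    """task_scores: one score per task in [0, 1], where 1.0 means fully completed."""
    if partial_credit:
        # Partially completed tasks count proportionally toward success.
        completed = sum(task_scores) / len(task_scores)
    else:
        # All-or-nothing: anything short of full completion is a failure.
        completed = sum(s == 1.0 for s in task_scores) / len(task_scores)
    return 1.0 - completed

# Toy example: one full success, one half-done task, two outright failures.
scores = [1.0, 0.5, 0.0, 0.0]
print(failure_rate(scores))                       # 0.75 (strict scoring)
print(failure_rate(scores, partial_credit=True))  # 0.625 (partial credit lowers the rate)
```

As with Gemini's numbers in the article, partial credit narrows the failure rate but cannot rescue a mostly-failing agent.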

Meanwhile, a recent report by Gartner, a tech consultant firm, predicts that over 40 percent of AI agent projects initiated by businesses will be cancelled by 2027 thanks to out-of-control costs, vague business value, and unpredictable security risks.

"Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied," said Anushree Verma, a senior director analyst at Gartner.

The report notes an epidemic of "agent washing," where existing products are rebranded as AI agents to cash in on the current tech hype. Examples include Apple's "Intelligence" feature on the iPhone 16, which it currently faces a class action lawsuit over, and investment firm Delphia's fake "AI financial analyst," for which it faced a $225,000 fine.

Out of thousands of AI agents said to be deployed in businesses throughout the globe, Gartner estimated that "only about 130" are real."

futurism.com/ai-agents-failing

Futurism · The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry · By Joe Wilkins

🚀 What if you could run Generative AI without being connected to the network? No more security risks, no sky-high costs, no complicated setup.

A Taiwanese startup just made that possible — and it’s changing the future of AI deployment. Curious how? 👀

Read the full story: sparknify.com/post/this-taiwan

“Suno, for those of you not familiar, is an #AI #SongGenerator: enter a text prompt (such as “a jazz, reggae, EDM pop song about my imagination”) and a song comes back. Like many #GenerativeAI companies, it is also being sued by all and sundry for ingesting #copyrighted #material. The parties in the suit — including major labels and the #RIAA — don’t have a smoking gun, since they can’t directly peek at Suno’s #TrainingData. But they have managed to generate some suspiciously similar-sounding AI generated materials, #mimicking (among others) “Johnny B. Goode,” “Great Balls of Fire,” and Jason Derulo’s habit of singing his own name.

#Suno essentially admits these songs were #regurgitated from #copyrighted source material, but it says such use was legal. “It is no secret that the tens of millions of #recordings that Suno’s model was trained on presumably included recordings whose rights are owned by the Plaintiffs in this case,” it says in its own legal filing. Whether AI training data constitutes fair use is a common but unsettled legal argument, and the plaintiffs contend Suno still amounts to “pervasive #illegal #copying” of artists’ works.”

#NYA / #music / #ElizabethLopatto / #amazon / #DataTheft <neilyoungarchives.com/news/3/a>

neilyoungarchives.com · Neil Young Archives

"I think this highlights a few interesting trends.

Firstly, the era of VC-subsidized tokens may be coming to an end, especially for products like Cursor which are way past demonstrating product-market fit.

Secondly, that $200/month plan for 20x the usage of the $20/month plan is an emerging pattern: Anthropic offers the exact same deal for Claude Code, with the same 10x price for 20x usage multiplier.

Professional software engineers may be able to justify one $200/month subscription, but I expect most will be unable to justify two. The pricing here becomes a significant form of lock-in - once you've picked your $200/month coding assistant you are less likely to evaluate the alternatives."
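The 10x-price-for-20x-usage arithmetic works out to half the per-unit cost at the higher tier; a quick sketch (tier numbers taken from the post above, variable names my own):

```python
pro_price, pro_usage = 20, 1        # $20/month Pro plan, baseline usage
ultra_price, ultra_usage = 200, 20  # $200/month Ultra plan, "20x more usage than Pro"

price_multiple = ultra_price / pro_price  # pay 10x more
usage_multiple = ultra_usage / pro_usage  # get 20x more usage
unit_cost_ratio = (ultra_price / ultra_usage) / (pro_price / pro_usage)

print(price_multiple, usage_multiple, unit_cost_ratio)  # 10.0 20.0 0.5
```

The halved per-unit cost is the carrot; the lock-in Willison describes is the stick, since few engineers will pay for two $200/month tiers at once.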

simonwillison.net/2025/Jul/5/c

Simon Willison’s Weblog · Cursor: Clarifying Our Pricing · Cursor changed their pricing plan on June 16th, introducing a new $200/month Ultra plan with "20x more usage than Pro" and switching their $20/month Pro plan from "request limits to …

"We often hear A.I. outputs described as “generic” or “bland,” but averageness is not necessarily anodyne. Vauhini Vara, a novelist and a journalist whose recent book “Searches” focussed in part on A.I.’s impact on human communication and selfhood, told me that the mediocrity of A.I. texts “gives them an illusion of safety and being harmless.” Vara (who previously worked as an editor at The New Yorker) continued, “What’s actually happening is a reinforcing of cultural hegemony.” OpenAI has a certain incentive to shave the edges off our attitudes and communication styles, because the more people find the models’ output acceptable, the broader the swath of humanity it can convert to paying subscribers. Averageness is efficient: “You have economies of scale if everything is the same,” Vara said.

With the “gentle singularity” Altman predicted in his blog post, “a lot more people will be able to create software, and art,” he wrote (...) But other studies have suggested the challenges of automating originality. Data collected at Santa Clara University, in 2024, examined A.I. tools’ efficacy as aids for two standard types of creative-thinking tasks: making product improvements and foreseeing “improbable consequences.” One set of subjects used ChatGPT to help them answer questions such as “How could you make a stuffed toy animal more fun to play with?” and “Suppose that gravity suddenly became incredibly weak, and objects could float away easily. What would happen?” The other set used Oblique Strategies, a set of abstruse prompts printed on a deck of cards, written by the musician Brian Eno and the painter Peter Schmidt, in 1975, as a creativity aid. The testers asked the subjects to aim for originality, but once again the group using ChatGPT came up with a more semantically similar, more homogenized set of ideas."
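"More semantically similar" in studies like the one described above is typically operationalized as cosine similarity between embedding vectors of the responses. A minimal, self-contained sketch, with made-up 2-D vectors standing in for real sentence embeddings (the Santa Clara study's actual pipeline is not public in this excerpt):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mean_pairwise_similarity(embeddings):
    """Higher mean pairwise cosine similarity = a more homogenized set of ideas."""
    n = len(embeddings)
    sims = [cosine(embeddings[i], embeddings[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

# Toy vectors: the first group's "ideas" all point roughly the same way.
homogeneous = [[1.0, 0.0], [0.9, 0.1], [1.0, 0.1]]
diverse = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
print(mean_pairwise_similarity(homogeneous) > mean_pairwise_similarity(diverse))  # True
```

Under this metric, the ChatGPT-assisted group's answers clustering together is exactly what "more homogenized" means.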

newyorker.com/culture/infinite

The New Yorker · A.I. Is Homogenizing Our Thoughts · By Kyle Chayka

In case you're forgetting, the IFPI - the international organization representing record labels - has its roots in Italian Fascism. So, yes, record labels should never be considered heroes by anyone. They're extortionist, rent-seeking organizations. They act like a sort of institutionalized and legal Mafia for the music industry. Nevertheless...:

"AI is cutting a swath across a number of creative industries — with AI-generated book covers, the Chicago Sun-Times publishing an AI-generated list of books that don’t exist, and AI-generated stories at CNET under real authors’ bylines. The music industry is no exception. But while many of these fields are mired in questions about whether AI models are illegally trained on pirated data, the music industry is coming at the issue from a position of unusual strength: the benefits of years of case law backing copyright protections, a regimented licensing system, and a handful of powerful companies that control the industry.

Record labels have chosen to fight several AI companies on copyright law, and they have a strong hand to play.

Historically, whatever the tech industry inflicts on the music industry will eventually happen to every other creative industry, too. If that’s true here, then all the AI companies that ganked copyrighted material are in a lot of trouble."

theverge.com/ai-artificial-int

A robot watches a band rock out.
The Verge · Can the music industry make AI the next Napster? · By Elizabeth Lopatto