snabelen.no is one of many independent Mastodon servers you can use to participate in the decentralized social web.
A Norwegian home for the decentralized microblogging platform.


#cursor

2 posts · 2 participants · 0 posts today

"Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.

There is a gaping wound in the side of Anthropic, and it threatens financial doom for the company.

Some caveats before we continue:

- CCusage is not direct information from Anthropic, and thus there may be things we don’t know about how it charges customers, or any means of efficiency it may have.
- Despite the amount of evidence I’ve found, we do not have a representative sample of exact pricing. This evidence comes from people who use Claude Code, are measuring their usage, and elected to post their CCusage dashboards online — which likely represents a small sample of the total user base.
- Nevertheless, the number of cases I’ve found online of egregious, unrelentingly unprofitable burn is deeply concerning, and it’s hard to imagine that these examples are outliers.
- We do not know if the current, unrestricted version of Claude Code will last.

The reason I’m leading with these caveats is because the numbers I’ve found about the sheer amount of money Claude Code’s users are burning are absolutely shocking.

In the event that they are representative of the greater picture of Anthropic’s customer base, this company is wilfully burning 200% to 3000% of each Pro or Max customer’s payment on those who interact with Claude Code, and at each price point I have found repeated evidence that customers are allowed to burn their entire monthly payment in compute within, at best, eight days, with some cases involving customers on a $200-a-month subscription burning as much as $10,000 worth of compute."
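The most extreme case quoted above reduces to simple arithmetic. A minimal sketch, using only the figures the quote itself gives (not independently verified):

```swift
// Figures from the quote: a $200/month subscriber consuming $10,000 of compute.
let monthlyRevenue = 200.0
let computeConsumed = 10_000.0

// Multiple of the subscription price burned in compute.
let burnMultiple = computeConsumed / monthlyRevenue   // 50x
// Expressed the way the quote does, as a percentage of the monthly payment.
let burnPercent = burnMultiple * 100                  // 5,000%

assert(burnMultiple == 50.0)
assert(burnPercent == 5_000.0)
```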

wheresyoured.at/anthropic-is-b

Ed Zitron's Where's Your Ed At · Anthropic Is Bleeding Out

"We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't."

bsky.app/profile/metr.org/post

Bluesky Social · METR (@metr.org)

"we find that when developers use AI tools, they take 19% longer than without - AI makes them slower." [1]

- says a study based on a randomized controlled trial, which includes a chart of hilariously over-optimistic forecasts set against the woeful observed results

[1] metr.org/blog/2025-07-10-early.

#AI #LLM #Bard

"I think this highlights a few interesting trends.

Firstly, the era of VC-subsidized tokens may be coming to an end, especially for products like Cursor which are way past demonstrating product-market fit.

Secondly, that $200/month plan for 20x the usage of the $20/month plan is an emerging pattern: Anthropic offers the exact same deal for Claude Code, with the same 10x price for 20x usage multiplier.

Professional software engineers may be able to justify one $200/month subscription, but I expect most will be unable to justify two. The pricing here becomes a significant form of lock-in - once you've picked your $200/month coding assistant you are less likely to evaluate the alternatives."
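The 10x-price-for-20x-usage pattern described above works out to simple arithmetic. A minimal sketch using the quoted plan prices:

```swift
let proPrice = 20.0        // $/month, the entry plan
let ultraPrice = 200.0     // $/month, the high-tier plan
let usageMultiplier = 20.0 // advertised usage relative to the entry plan

let priceMultiplier = ultraPrice / proPrice             // 10x the price...
let usagePerDollar = usageMultiplier / priceMultiplier  // ...for 2x the usage per dollar

assert(priceMultiplier == 10.0)
assert(usagePerDollar == 2.0)
```

In other words, the high tier is priced to look like a bargain per token, which is exactly the lock-in dynamic the post describes.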

simonwillison.net/2025/Jul/5/c

Simon Willison’s Weblog · Cursor: Clarifying Our Pricing · Cursor changed their pricing plan on June 16th, introducing a new $200/month Ultra plan with "20x more usage than Pro" and switching their $20/month Pro plan from "request limits to …

200 dollars a month for an AI coding tool: is Cursor Ultra really worth the money? Anysphere wants to win people over with more power, exclusive features, and priority in rollouts. But against OpenAI & Co., that will be hard. What do you think: the future, or a niche? #Cursor #OpenAI #KI 👇
all-ai.de/news/news24/cursor-u

All-AI.de · 200-Dollar Plan for Code Pros: Cursor Ultra Is Here · More power, more credits, more control; but can Anysphere hold its own against OpenAI and Co. with its new pricing plan?
Continuation of the conversation

By way of follow-on, Cursor wrote me a function that uses type checks to decide which conditional branch to take.

Junior Dev Cursor, let me tell you about polymorphism, my friend.

This is the kind of thing that gets covered in an Object-Oriented Programming 201 kind of course: if you're branching on type, the code is screaming out for an interface/protocol/class family.

OO 310 will teach you: don't create too many families, or families that are too deep, or it's ouch time. Sometimes composition works better than inheritance. Choose wisely. This decision requires judgement (hence experience).
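The smell described above, and the textbook fix, can be sketched in Swift. These are hypothetical shapes, not the author's actual code:

```swift
// The smell: branching on concrete type to pick behavior.
func describeSmelly(_ shape: Any) -> String {
    if let c = shape as? Circle { return "circle r=\(c.radius)" }
    if let s = shape as? Square { return "square side=\(s.side)" }
    return "unknown"
}

// The fix: let each type carry its own behavior behind a protocol.
protocol Describable {
    var label: String { get }
}

struct Circle: Describable {
    let radius: Double
    var label: String { "circle r=\(radius)" }
}

struct Square: Describable {
    let side: Double
    var label: String { "square side=\(side)" }
}

// One method, no type checks: dynamic dispatch does the branching for you.
func describe(_ shape: Describable) -> String { shape.label }

assert(describeSmelly(Circle(radius: 2.0)) == describe(Circle(radius: 2.0)))
```

Adding a `Triangle` now means conforming one new type, instead of hunting down every `if let` chain that branches on shape.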

Bear with me. This will piss some tech folks off. It'll likely be seen as god damn coder heresy, to many.

AI dev tools are fucking impressive.

For context: I've been a software engineer for just shy of 30 years now (yes, ok, I'm including my 5-year stint as a manager in there as well). I'm not going to claim that I'm an "S" tier developer, though I've had the fortune to get to know several and work with a small handful over the years. These people helped me get to what is maybe an "A" class.

I say this to attempt to establish my bona fides before I go further.

I've been test driving Cursor, a VS Code-based editor + SaaS that taps into several different LLMs across many different vendors.

As of about a month ago, I'd never touched Swift in my life.

Over the past several weeks, working only with the ChatGPT Xcode integration, one file at a time, I slowly built out a prototype of an iOS app that works. It wasn't built according to Apple HIG (Human Interface Guidelines) and tips. And the ChatGPT Xcode integration is only able to see and edit a single file at a time (a massive limitation). I have a deep background in imperative languages, both strongly typed (C, and Java back when it was so painful to work in, '96 through '04) and loosely typed (so very, very much Ruby).

And then, late last week, I started trying Cursor.

Today, I had Cursor modify the UI to adhere to Apple's design tips (developer.apple.com/design/tip).

Holy. Fucking. Shit.

My app went from looking serviceable to something resembling a real™️ iOS app in the space of a few minutes.

Sometimes, AI's code factoring leaves something to be desired, certainly. It'll do some squirrelly shit.

That's fine. I treat it like it's a junior developer. I ask it to do the tasks that would either bore me to tears or would cause this ADHD brain to introduce all sorts of stupid bugs by way of typos and the low dopamine of necessary tedium.

**And then code review the F out of its work**

I ask for specific refactors. And the refactors look pretty damn good.

Even still being a Swift nooblet (I'll freely admit it), I know plenty about programming languages in general (and am learning Swift by example here quickly enough) that I can see opportunities to DRY, to reduce ceremony, and to express intent more clearly.

For instance, today, I saw 3 structs that were being used similarly, with essentially duplicative code. Blech. In Java, I would've used a shared Interface and passed the objects around that way. I'd forgotten my Objective-C, learned over a decade ago writing a Pivotal Tracker iPad app. What I needed was a Protocol. I told Cursor what I wanted: to treat the structs in a polymorphic-ish way, so that I could DRY the code and have my One Method to handle them (thankfully, no special casing to care about here, so it came out nice and clean, too). It immediately said, "Oh, I need a Protocol", wrote one, wrote the method, modified the UI accordingly and wham, bam, thank you, ma'am: refactored UI code that deleted lines.
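A hypothetical reconstruction of that refactor (illustrative names, not the actual app code): three near-duplicate structs gain one shared protocol, and a single method handles all of them with no special casing.

```swift
// Before, Article, Photo, and Note were each handled by near-identical code.
// After, they share one protocol.
protocol Renderable {
    var title: String { get }
    var summary: String { get }
}

struct Article: Renderable { let title: String; let summary: String }
struct Photo: Renderable   { let title: String; let summary: String }
struct Note: Renderable    { let title: String; let summary: String }

// The One Method: no type checks, no branches per struct.
func renderRow(_ item: Renderable) -> String {
    "\(item.title): \(item.summary)"
}

assert(renderRow(Article(title: "Hi", summary: "there")) == "Hi: there")
```

The duplicated render paths collapse into one function, which is exactly where the deleted lines come from.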

Yes, the AI did this. Yes, I guided it from a place of experience.

Bitch all you want about how clueless LLMs are about our work. Sure, unlike junior devs, you can't teach an LLM more than it's already capable of (and that is part of the fun of working with juniors: watching those lightbulbs turn on and having them rock your world when they see something that you can't because of all of your earned biases). However, the LLMs out now? They make pretty darn good pair programmers, if you give them half a chance.

And Cursor is pretty f'ing impressive. And it is one of the earliest arrivals.

We live in interesting times...

developer.apple.com · UI Design Dos and Don’ts - Apple Developer

"How to leverage documentation effectively in Cursor through prompting, external sources, and internal context

Why documentation matters

Documentation provides current, accurate context. Without it, models use outdated or incomplete training data. Documentation helps models understand things like:

- Current APIs and parameters
- Best practices
- Organization conventions
- Domain terminology

And much more. Read on to learn how to use documentation right in Cursor without having to context switch."
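Concretely, the guide's advice amounts to pulling documentation into the prompt rather than describing APIs from memory. A sketch of what that can look like in Cursor's chat, using its @Docs symbol for referencing indexed documentation; the library and task named here are invented examples, not from the guide:

```
@Docs Swift Standard Library
Refactor parseFeed() to use the documented AsyncSequence APIs
rather than the callback style it currently uses.
```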

docs.cursor.com/guides/advance

Cursor · Working with Documentation

As part of my job, I have to evaluate AI tools. Part of that evaluation is pushing them to their limit. Today, I realised Cursor has a failure mode where, if you critique its work enough, it goes silent and refuses to apply changes.

It's a moody junior dev whose overconfidence and bravado quickly turn to surly silence when their work is questioned. The happy, helpful (and frequently wrong) AI is gone, replaced by a useless one with a bad attitude that won't make it past the next performance review.

Christ. I'm used to managing engineers, but I draw the line at managing AIs.