snabelen.no is one of many independent Mastodon servers you can use to participate in the decentralized social web.
A Norwegian home for the decentralized microblogging platform.

Server statistics: 363 active users

#programming

294 posts · 253 participants · 36 posts today
Continuation of conversation

Alt text overflow

The forecast for Monday, July 28, shows "Partly cloudy" in the morning with a temperature of +27(30) °C and wind speed of 10 km/h, followed by "Patchy light drizzle" at noon with a temperature of +31(34) °C and wind speed of 5 km/h. The evening shows "Patchy rain nearby" with a temperature of +26(29) °C and wind speed of 13-23 km/h, and the night shows "Patchy rain nearby" with a temperature of +24(27) °C and wind speed of 9 km/h.

The location is specified at the bottom of the interface as Nieuw Amsterdam, Commewijne, Suriname, with coordinates 5.83762285, -55.08387520134124. The interface uses a dark theme with colorful icons and text to represent weather conditions, such as sun, clouds, and rain.

#wttr #weather #curl
Continuation of conversation

I had to trim off this part of the ALT text due to the max field size

For Sunday, July 27, the morning is sunny with a temperature of +26°C and a wind speed of 10 km/h, followed by patchy rain at noon with a temperature of +31°C and a wind speed of 10 km/h. The evening is expected to have light rain showers with a temperature of +27°C and a wind speed of 10 km/h, and the night is forecasted to have patchy rain with a temperature of +25°C and a wind speed of 9 km/h.

For Monday, July 28, the morning is partly cloudy with a temperature of +27°C and a wind speed of 10 km/h, followed by patchy light drizzle at noon with a temperature of +31°C and a wind speed of 5 km/h. The evening is expected to have patchy rain with a temperature of +26°C and a wind speed of 10 km/h, and the night is forecasted to have patchy rain with a temperature of +24°C and a wind speed of 9 km/h.

The interface uses a color-coded system to indicate weather conditions, with green for sunny, yellow for patchy rain, and purple for light rain showers. The wind speed and precipitation are also displayed for each time slot.

#wttr #weather #curl

Bad vibes: How an AI agent coded its way to disaster

First, Replit lied. Then it confessed to the lie. Then it deleted the company's entire database. Will vibe-coding #AI ever be ready for serious commercial use by nonprogrammers?

from #ZDNet
Written by Steven Vaughan-Nichols, Senior Contributing Editor
July 23, 2025 at 11:31 a.m. PT

zdnet.com/article/bad-vibes-ho


:boost_ok: Re: iNaturalist getting involved with Google genAI...feeding our comments into things...

Due to continued silence from iNaturalist about everything: October 31. That's my deadline. That's a MORE THAN FAIR amount of time for them to:

1. Have a proper outline of the project and exactly what it will be.
2. Have a solid opt-in to the project, so no users are automatically opted in without their consent.
3. Have added account deletion options from the over-a-year-old feature request: ways to delete an account, including without removing IDs made for others, along with anonymization. If data loss is really such a problem for them (which I think it should be), then not having a way to do that kind of deletion should be TOP PRIORITY, especially with all this genAI bs going on... already it sounds like some power users have fully deleted their accounts over this, tired of waiting.

- Signed, someone with almost 25k IDs for others and almost 4k observations, including some firsts for the site (among them species new to science) and other rare reports.

Please boost, because I don't think most users know what is going on. Almost all of this information is only being discussed on their separate forum, which you need to create a separate account to join. This is part of the lack-of-transparency problem!

#AI #genAI #LLM

I'm wrapping up the last two pages of documentation for my thing and while documenting a handful of example commands in interactive mode, I thought

"Man wouldn't it be nice if this had syntax highlighting, at least for keywords and arguments? I wouldn't need to use 'console' for sphinx code blocks, and it would not be this drab gray..."

Fast forward 45 minutes, and I'm in the middle of a gnarly custom Pygments regex lexer when I think:

"Oh but what if I just read the commands module source tree and parsed the AST to extract command names from functions that have the typer decorator? I would never need to maintain that list again and they'd always be keywords!"

The real "What the fuck am I doing with my life?" moment was marginally offset by how it actually worked on the first try, against all expectations.

Sphinx is insidious. Once you start extending it, there's no end in sight.
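
For anyone curious, here is a rough sketch of that AST trick. This is hypothetical code rather than the project's actual lexer: the package path, lexer name, and fallback keywords are made up, and it assumes the commands are declared with a typer-style @app.command() decorator.

import ast
from pathlib import Path

from pygments.lexer import RegexLexer, words
from pygments.token import Keyword, Name, Text


def collect_command_names(package_dir):
    """Collect the names of all @app.command()-decorated functions in a package."""
    names = []
    for path in Path(package_dir).rglob("*.py"):
        for node in ast.walk(ast.parse(path.read_text())):
            if not isinstance(node, ast.FunctionDef):
                continue
            for deco in node.decorator_list:
                # Handle both @app.command and @app.command(...)
                target = deco.func if isinstance(deco, ast.Call) else deco
                if isinstance(target, ast.Attribute) and target.attr == "command":
                    # typer/Click usually turn underscores into dashes
                    names.append(node.name.replace("_", "-"))
    return names


# Fallback keywords keep the sketch runnable if the (made-up) path doesn't exist.
COMMANDS = collect_command_names("src/myapp/commands") or ["help", "quit"]


class InteractiveCommandLexer(RegexLexer):
    """Highlights command names as keywords and options as attributes."""
    name = "mycli"
    tokens = {
        "root": [
            (words(tuple(COMMANDS), suffix=r"\b"), Keyword),
            (r"--?[\w-]+", Name.Attribute),  # options and flags
            (r"\s+", Text),
            (r"\S+", Text),                  # everything else stays plain
        ],
    }

Registering the class in conf.py, e.g. app.add_lexer("mycli", InteractiveCommandLexer) inside setup(), should then let the example code blocks use that language instead of "console".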

Do AI models help produce verified bug fixes?

"Abstract: Among areas of software engineering where AI techniques — particularly, Large Language Models — seem poised to yield dramatic improvements, an attractive candidate is Automatic Program Repair (APR), the production of satisfactory corrections to software bugs. Does this expectation materialize in practice? How do we find out, making sure that proposed corrections actually work? If programmers have access to LLMs, how do they actually use them to complement their own skills?

To answer these questions, we took advantage of the availability of a program-proving environment, which formally determines the correctness of proposed fixes, to conduct a study of program debugging with two randomly assigned groups of programmers, one with access to LLMs and the other without, both validating their answers through the proof tools. The methodology relied on a division into general research questions (Goals in the Goal-Query-Metric approach), specific elements admitting specific answers (Queries), and measurements supporting these answers (Metrics). While applied so far to a limited sample size, the results are a first step towards delineating a proper role for AI and LLMs in providing guaranteed-correct fixes to program bugs.

These results caused surprise as compared to what one might expect from the use of AI for debugging and APR. The contributions also include: a detailed methodology for experiments in the use of LLMs for debugging, which other projects can reuse; a fine-grain analysis of programmer behavior, made possible by the use of full-session recording; a definition of patterns of use of LLMs, with 7 distinct categories; and validated advice for getting the best of LLMs for debugging and Automatic Program Repair."

arxiv.org/abs/2507.15822


#spatial #programming #commonLisp #leonardoCalculus

screwlisp.small-web.org/lispga

New organisms-2 knowledgebase, starting out with local spatial walking, on @mdhughes' (mdhughes.tech/) recommendation that fast access to local neighbors is fundamental.

I.e. I don't want to check every organism in the world to find out who is standing next to me. Well, I put that into a knowledgebase in my organisms-2 #KRF here.

Seems to work; it pulls in the 8-connected and 24-connected neighbors correctly.

screwlisp.small-web.org · Leonardo Calculus Knowledge Representation: Organisms 2 knowledgebase starting with local spatial walks
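
For readers who don't know the trick, here is a toy illustration of that local-neighbor lookup, written in Python rather than the knowledgebase's own KRF, with made-up names: organisms are indexed by integer grid cell, so finding who stands next to you only touches the 8-connected (3x3 minus centre) or 24-connected (5x5 minus centre) cells instead of scanning the whole world.

from collections import defaultdict

# (x, y) grid cell -> organisms standing in that cell
world = defaultdict(list)

def place(organism, x, y):
    world[(x, y)].append(organism)

def neighbors(x, y, radius):
    """radius=1 returns the 8-connected cells' occupants, radius=2 the 24-connected ones."""
    found = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            if (dx, dy) != (0, 0):
                found.extend(world.get((x + dx, y + dy), []))
    return found

place("wolf", 3, 4)
place("rabbit", 4, 5)
print(neighbors(3, 4, 1))  # ['rabbit'] via the 8-connected lookup
print(neighbors(3, 4, 2))  # ['rabbit'] via the 24-connected lookup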