X under pressure in Ireland over porn age checks – POLITICO https://www.byteseu.com/1222864/ #Audiovisual #ContentModeration #Data #DigitalID #ElonMusk #IllegalContent #Ireland #Media #OnlineSafety #Platforms

Steam's new policy: Your game's fate now depends on whether Visa approves of your pixels
Credit card companies can now effectively ban games from the platform, particularly targeting adult content. The power to decide what games exist just shifted from gamers to finance executives.
https://www.europesays.com/2231469/ Turkey bans Elon Musk’s Grok over Erdoğan insults – POLITICO #algorithms #Antisemitism #ContentModeration #DigitalServicesAct #DonaldTusk #ElonMusk #Platforms #poland #Racism #RecepTayyipErdogan #SocialMedia #Turkey #turkiye
"Young people have always felt misunderstood by their parents, but new research shows that Gen Alpha might also be misunderstood by AI. A research paper, written by Manisha Mehta, a soon-to-be 9th grader, and presented today at the ACM Conference on Fairness, Accountability, and Transparency in Athens, shows that Gen Alpha’s distinct mix of meme- and gaming-influenced language might be challenging automated moderation used by popular large language models.
The paper compares kid, parent, and professional moderator performance in content moderation to that of four major LLMs: OpenAI’s GPT-4, Anthropic’s Claude, Google’s Gemini, and Meta’s Llama 3. They tested how well each group and AI model understood Gen Alpha phrases, as well as how well they could recognize the context of comments and analyze potential safety risks involved."
https://www.404media.co/ai-models-and-parents-dont-understand-let-him-cook/
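To make the comparison concrete, here is a minimal sketch (in Python) of the kind of evaluation the paper describes, scoring each group's judgments of slang phrases against their intended meanings. The phrases, ground-truth labels, and per-group judgments below are hypothetical stand-ins, not the study's data:

    # Hypothetical illustration of the comparison; all data here is invented.
    test_phrases = {
        "let him cook": "harmless",       # Gen Alpha slang for "give him a chance"
        "it's giving cringe": "harmless", # mild mockery, not a safety risk
        "kys fr fr": "harmful",           # abbreviation moderation should catch
    }

    judgments = {
        "kids":    {"let him cook": "harmless", "it's giving cringe": "harmless", "kys fr fr": "harmful"},
        "parents": {"let him cook": "harmful",  "it's giving cringe": "harmless", "kys fr fr": "harmless"},
        "LLM":     {"let him cook": "harmful",  "it's giving cringe": "harmful",  "kys fr fr": "harmful"},
    }

    # Score each group against the intended readings.
    for group, answers in judgments.items():
        correct = sum(answers[p] == truth for p, truth in test_phrases.items())
        print(f"{group}: {correct}/{len(test_phrases)} phrases understood")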
"Disinformation about the Los Angeles protests is spreading on social media networks and is being made worse by users turning to AI chatbots like Grok and ChatGPT to perform fact-checking.
As residents of the LA area took to the streets in recent days to protest increasingly frequent Immigration and Customs Enforcement (ICE) raids, conservative posters on social media platforms like X and Facebook flooded their feeds with inaccurate information. In addition to well-worn tactics like repurposing old protest footage or clips from video games and movies, posters have claimed that the protesters are little more than paid agitators being directed by shadowy forces—something for which there is no evidence.
In the midst of fast-moving and divisive news stories like the LA protests, and as companies like X and Meta have stepped back from moderating the content on their platforms, users have been turning to AI chatbots for answers—which in many cases have been completely inaccurate."
https://www.wired.com/story/grok-chatgpt-ai-los-angeles-protest-disinformation/
You do you, EU tells Macron on banning social media for kids – POLITICO https://www.byteseu.com/1097756/ #ConsumerPolicy #ContentModeration #DataProtection #Denmark #DigitalServicesAct #EmmanuelMacron #France #Media #Platforms #Privacy #services #SocialMedia #youth
#dataleaks at #Meta #data #octopus #corp
“The leaked data comes as @israel drastically expands its #publicrelations spending, committing an additional $150 million in 2025 alone to “public diplomacy”—or #hasbara. Yet despite this investment and Meta’s well-documented suppression of pro-Palestinian voices, the data show a public turning decisively against Israeli companies and messaging.”
#MetaLEAKS #Facebook #Instagram #Suckerberg #Israel #economy #collapse #metaverse #censorship #contentmoderation #FreeSpeech #FreePalestine
#FTR2TS @palestine #willBfree
Macron says he’ll ban social media for under-15s in France – POLITICO https://www.byteseu.com/1094173/ #ConsumerPolicy #ContentModeration #cybersecurity #DigitalServicesAct #France #Platforms #SocialMedia
Mashable: YouTube very quietly loosened its content moderation rules. “YouTube quietly loosened its video moderation rules a few weeks before Donald Trump was sworn in as president a second time, reports the New York Times. The new rules encourage the site’s moderators not to remove videos that break YouTube’s code of conduct — which bans nudity, graphic violence, hate speech, and […]
"Before the first age verification bills were a glimmer in Louisiana legislators’ eyes three years ago, sexuality was always overpoliced online. Before this, it was (and still is) SESTA/FOSTA, which amended Section 230 to make platforms liable for what users do on them when activity could be construed as “sex trafficking,” including massive swaths and sometimes whole websites in its net if users discussed meeting in exchange for pay, but also real-life interactions or and attempts to screen clients for in-person encounters—and imposed burdensome fines if they didn’t comply. Sex education bore a lot of the brunt of this legislation, as did sex workers who used listing sites and places like Craigslist to make sure clientele was safe to meet IRL. The effects of SESTA/FOSTA were swift and brutal, and they’re ongoing.
We also see these effects in the obfuscation of sexual words and terms with algo-friendly shorthand, where people use “seggs” or “grape” instead of “sex” or “rape” to evade removal by hostile platforms. And maybe, after years of stock imagery of fingered grapefruits and red nails wrapped around cucumbers because Facebook couldn’t handle a sideboob, unironically horny fuckable-food content is a natural adaptation.
Now, we have the Take It Down Act, which experts expect will cause a similar fallout: platforms that can’t comply with its extremely short takedown deadlines and strict moderation expectations could opt to ban NSFW content altogether.
Before either of these pieces of legislation, it was (and still is!) banks. Financial institutions have long been the arbiters of morality in this country and others. And what credit card processors say goes, even if what they’re taking offense from is perfectly legal. Banks are the extra-legal arm of the right."
https://www.404media.co/egg-yolk-popping-instagram-tiktok-ioda-anti-porn-laws/
TikTok bans the ‘unhealthy’ SkinnyTok hashtag after pressure from regulators.
Efforts to protect kids online are gaining traction in Europe.
Social media platform TikTok has banned the popular SkinnyTok hashtag, which was linked to weight-loss videos, worldwide following scrutiny from policymakers in Brussels and Paris.
Social media platforms have a responsibility to balance free speech with preventing abuse. #contentmoderation
"The latest version of KOSA states that the bill would require social media platforms to “remove addictive product features,” give parents more control and oversight of their kids’ social media, create a duty for platforms to mitigate content focused on topics like suicide and disordered eating, and require transparency from social media platforms to share the steps they’re taking to protect children.
Those who are in favor of the bill say it would hold platforms legally accountable if they host harmful content that minors should not view. Opponents said it could inadvertently affect sites that host LGBTQ content. They’re also concerned it could lead to more censorship online.
“Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a ‘duty of care,’ and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people,” Joe Mullin, senior policy analyst for the Electronic Frontier Foundation, said in a statement.
However, updates made to the bill narrow its reach and remove state attorneys general’s ability to prosecute platforms. The revisions also define more precisely the harms that social media platforms and other websites are expected to protect against. This has led some opponents of the bill to change their stance."
FCC commissioner blasts Trump administration censorship policies https://www.byteseu.com/1026577/ #AnnaGomez #BrendanCarr #Censorship #ContentModeration #FCC #Geopolitics #SocialMedia #TrumpAdministration
"OTI shares the goal of creating a safer internet for our youth, but KOSA continues to pose risks to free expression and privacy. The legislation augments the federal government’s power to limit access to information online and censor online speech. Specifically, the bill’s “duty of care” provision may incentivize platforms to over-moderate or otherwise suppress access to content that they think the FTC considers to contribute to a mental health disorder like anxiety or depression. This subjective category could cover an expansive range of content, including gun violence, LGBTQ communities, reproductive rights, racial justice, or particular political philosophies. Beyond incentivizing this kind of anticipatory self-censorship, KOSA would hand the FTC a legal tool to actively compel online platforms to shape speech, raising First Amendment and fundamental democratic concerns.
These concerns about chilling effects and enabling government-directed censorship apply to any administration. And they are not theoretical risks. On the contrary, these risks are now heightened, given this administration’s dramatic assault on the FTC’s independence, the effort to use the agency to advance an openly politicized agenda, and numerous efforts across the executive branch to expand surveillance and use federal agencies to punish disfavored speech."
With their wide-ranging and interdisciplinary research topics, our researchers are once again at the digital conference @republica. Their topics range from #KI (AI) and #Contentmoderation to municipal data centers, #Desinformation, the influence of digital corporations, and Europe’s digital future.
An overview is available here: https://www.weizenbaum-institut.de/en/news/detail/republica-2025-1-1/
THE ALGORITHM VS. THE HUMAN MIND: A LOSING BATTLE
NO RECOGNITION FOR THE AUTHOR
YouTube does not reward consistency, insight, or author reputation. A comment may become a “top comment” for a day, only to vanish the next. There’s no memory, no history of editorial value. The platform doesn’t surface authors who contribute regularly with structured, relevant input. There's no path for authorship to emerge or be noticed. The “like” system favors early commenters — the infamous firsts — who write “first,” “early,” or “30 seconds in” just after a video drops. These are the comments that rise to the top. Readers interact with the text, not the person behind it. This is by design. YouTube wants engagement to stay contained within the content creator’s channel, not spread toward the audience. A well-written comment should not amplify a small creator’s reach — that would disrupt the platform’s control over audience flow.
USERS WHO’VE STOPPED THINKING
The algorithm trains people to wait for suggestions. Most users no longer take the initiative to explore or support anyone unless pushed by the system. Even when someone says something exceptional, the response remains cold. The author is just a font — not a presence. A familiar avatar doesn’t trigger curiosity. On these platforms, people follow only the already-famous. Anonymity is devalued by default. Most users would rather post their own comment (that no one will ever read) than reply to others. Interaction is solitary. YouTube, by design, encourages people to think only about themselves.
ZERO MODERATION FOR SMALL CREATORS
Small creators have no support when it comes to moderation. In low-traffic streams, there's no way to filter harassment or mockery. Trolls can show up just to enjoy someone else's failure — and nothing stops them. Unlike big streamers who can appoint moderators, smaller channels lack both the tools and the visibility to protect themselves. YouTube provides no built-in safety net, even though these creators are often the most exposed.
EXTERNAL LINKS ARE SABOTAGED
Trying to drive traffic to your own website? In the “About” section, YouTube adds a warning label to every external link: “You’re about to leave YouTube. This site may be unsafe.” It looks like an antivirus alert — not a routine redirect. It scares away casual users. And even if someone knows better, they still have to click again to confirm. That’s not protection — it’s manufactured discouragement. This cheap shot, disguised as safety, serves a single purpose: preventing viewers from leaving the ecosystem. YouTube has no authority to determine what is or isn’t a “safe” site beyond its own platform.
HUMANS CAN’T OUTPERFORM THE MACHINE
At every level, the human loses. You can’t outsmart an algorithm that filters, sorts, buries. You can’t even decide who you want to support: the system always intervenes. Talent alone isn’t enough. Courage isn’t enough. You need to break through a machine built to elevate the dominant and bury the rest. YouTube claims to be a platform for expression. But what it really offers is a simulated discovery engine — locked down and heavily policed.
UNPAID LABOR, ALGORITHMIC DENIAL, AND SYSTEMIC SABOTAGE
May 7, 2025
YouTube built an empire on our free time, our passion, our technical investments—and above all, on a promise: “share what you love, and the audience will follow.” Thousands of independent creators believed it. So did I. For ten years, I invested, produced, commented, hosted, edited, imported, repaired—with discipline, ambition, and stubborn hope, all in the shadows. What I discovered wasn’t opportunity. It was silence. A system of invisible filters, algorithmic contempt, and structural sabotage. An economic machine built on the unpaid, uncredited labor of creators who believed they had a chance. A platform that shows your video to four people, then punishes you for not being “engaging” enough. This four-part investigation details what YouTube has truly cost me—in money, in time, in mental health, and in collective momentum. Every number is cross-checked. Every claim is lived. Every example is documented. This is not a rant. It’s a report from inside the wreckage.
INVISIBLE COMMENTS: 33,000 CONTRIBUTIONS THROWN IN THE TRASH
As part of my investigation, I decided to calculate what I’ve lost on YouTube. Not an easy task: if all my videos are shadowbanned, there’s no way to measure the value of that work through view counts. But I realized something else. The comments I leave on channels—whether they perform well or not—receive wildly different levels of visibility. It’s not unusual for one of my comments to get 500 likes and 25 replies within 24 hours. In other words, when I’m allowed to exist, I know how to draw attention.
33,000 COMMENTS... FOR WHAT?
In 10 years of using the platform, I’ve posted 33,000 comments. Each one crafted, thoughtful, polished, aimed at grabbing attention. It’s a real creative effort: to spontaneously come up with something insightful to say, every day, for a decade. I’ve contributed to the YouTube community through my likes, my reactions, my input. These comments—modest, yes, but genuine—have helped sustain and grow the platform. If each comment takes roughly 3 minutes to write, that’s 99,000 minutes of my life—nearly 70 days spent commenting non-stop. More than two entire months. More than two months talking into the void.
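For what it’s worth, the arithmetic can be checked in a few lines of Python; a quick sketch, taking the 3-minutes-per-comment figure as given:

    comments = 33_000
    minutes_per_comment = 3                         # rough estimate from the text

    total_minutes = comments * minutes_per_comment  # 99,000 minutes
    total_hours = total_minutes / 60                # 1,650 hours
    total_days = total_hours / 24                   # ~68.8 days, i.e. over two months

    print(f"{total_minutes:,} min = {total_hours:,.0f} h = {total_days:.1f} days")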
ALGORITHMIC INVISIBILITY
By default, not all comments are shown. The “Top comments” filter displays only a select few. You have to manually click on “Newest first” to see the rest. The way "Top comments" are chosen remains vague, and there’s no indication of whether some comments are deliberately hidden. When you load a page, your own comment always appears first—but only to you. Officially, it’s for “ergonomics.” Unofficially, it gives you the illusion that your opinion matters. I estimate that, on average, one out of six comments is invisible to other users. By comparing visible and hidden replies, a simple estimate emerges: over the course of 12 months, 2 months’ worth of comments go straight to the trash.
TWO MONTHS A YEAR WRITING INTO THE VOID
If I’ve spent nearly 70 days commenting over 10 years, that averages out to about 7 days per year—roughly 14 hours of writing every month. So each year, roughly one day in six of that writing is invisibilized (while five out of six remain visible), dumped into a void of discarded contributions. I’m not claiming every comment I write is essential, but the complete lack of notification and the arbitrary nature of this filtering raise both moral and legal concerns. To clarify: if two months of total usage contain about 28 hours of actual writing, that’s because I don’t use YouTube continuously. Those 28 hours spread across two months mean I spend about 28 minutes per day writing. And if writing time represents just one-fifth of my overall engagement—including watching—that adds up to more than 2 hours per day on the platform. Every single day. For ten years. That’s not passive use—it’s sustained, intensive participation. On average, 15 to 20% of my time spent writing comments is dumped into a virtual landfill. In my case, that’s roughly 28 hours of annual activity wiped out. But the proportion is what matters—it scales with your usage. You see the problem.
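The whole chain of estimates can be verified the same way; a sketch using the figures above (about 70 days of writing over 10 years, one comment in six hidden, writing as one-fifth of total engagement):

    days_writing_total = 70   # ~70 days of writing over 10 years (from above)
    years = 10
    hidden_fraction = 1 / 6   # estimated share of comments hidden from other users
    writing_share = 1 / 5     # writing as a fraction of total time on the platform

    hours_writing_per_year = days_writing_total * 24 / years      # ~168 h/year
    hours_writing_per_month = hours_writing_per_year / 12         # ~14 h/month
    minutes_writing_per_day = hours_writing_per_year * 60 / 365   # ~28 min/day
    hours_on_platform_per_day = minutes_writing_per_day / writing_share / 60  # ~2.3 h/day

    hours_hidden_per_year = hours_writing_per_year * hidden_fraction  # ~28 h/year lost

    print(f"{hours_writing_per_month:.0f} h/month writing, "
          f"{hours_on_platform_per_day:.1f} h/day on the platform, "
          f"{hours_hidden_per_year:.0f} h/year invisibilized")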
THE BIG PLAYERS RISE, THE REST ARE ERASED
From what I’ve observed, most major YouTubers benefit from a system that automatically boosts superficial comments to the top. The algorithm favors them. It’s always the same pattern: the system benefits a few, at the expense of everyone else.
AN IGNORED EDITORIAL VALUE
In print journalism, a 1,500-word exclusive freelance piece is typically valued at around €300. Most YouTube comments are a few lines long—maybe 25 words. Mine often exceed 250 words. That’s ten times the average length, and far more structured. They’re not throwaway reactions, but crafted contributions: thoughtful, contextual, engaging. If we apply the same rate, then 30 such comments ≈ €1,500. It’s a bold comparison—but a fair one, when you account for quality, relevance, and editorial intent. 33,000 comments = €1,650,000 of unpaid contribution to YouTube. YouTube never rewards this kind of engagement. It doesn’t promote channels where you comment frequently. The platform isn’t designed to recognize individuals. It’s designed to extract value—for itself.
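Spelling the valuation out under the author’s own assumptions (€300 for a 1,500-word piece, comments averaging 250 words); a sketch, not a market appraisal:

    rate_per_word = 300 / 1_500    # €0.20 per word, the print-freelance benchmark cited
    words_per_comment = 250        # typical length claimed for these comments
    comments = 33_000

    value_per_comment = rate_per_word * words_per_comment  # €50
    value_30 = 30 * value_per_comment                      # €1,500, as in the text
    value_total = comments * value_per_comment             # €1,650,000

    print(f"€{value_per_comment:.0f}/comment, €{value_30:,.0f} for 30, €{value_total:,.0f} total")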
»Labour pains: #ContentModeration challenges in Mastodon growth«
> The article … investigates challenges experienced by #Mastodon instances post-#Musk, based on 8 interviews with admins and moderators of 7 instances and a representative of @iftas, an NPO that supports Mastodon content moderators
by @charlotte & @talestomaz, Alexander von Humboldt Institute for Internet and Society #HIIG
"Meta is facing a second set of lawsuits in Africa over the psychological distress experienced by content moderators employed to take down disturbing social media content including depictions of murders, extreme violence and child sexual abuse.
Lawyers are gearing up for court action against a company contracted by Meta, which owns Facebook and Instagram, after meeting moderators at a facility in Ghana that is understood to employ about 150 people.
Moderators working for Majorel in Accra claim they have suffered from depression, anxiety, insomnia and substance abuse as a direct consequence of the work they do checking extreme content.
The allegedly gruelling conditions endured by workers in Ghana are revealed in a joint investigation by the Guardian and the Bureau of Investigative Journalism."