Follow Slashdot stories on Twitter

 



Forgot your password?
typodupeerror

Submission + - Conde Nast fined €750,000 for placing cookies without consent (noyb.eu)

AmiMoJo writes: In December 2019, noyb had filed complaints against three providers of French websites, because they had implemented cookie banners that turned a clear “NO” into “fake consent”. Even if a user went through the trouble of rejecting countless cookies on the eCommerce page CDiscount, the movie guide Allocine.fr and the fashion magazine Vanity Fair, these websites sent digital signals to tracking companies claiming that users had agreed to being tracked online. CDiscount sent “fake consent” signals to 431 tracking companies per user, Allocine to 565, and Vanity Fair to 375, an analysis of the data flows had shown.

Today, almost six (!) years after these complaints had originally been filed, the French data protection authority CNIL has finally reached a decision in the case against Vanity Fair: Conde Nast, the publisher behind Vanity Fair, has failed to obtain user consent before placing cookies. In addition, the company failed to sufficiently inform its users about the purpose of supposedly “necessary” cookies. Thirdly, the implemented mechanisms for refusing and withdrawing consent was ineffective. Conde Nast must therefore pay a fine of €750.000.

Conde Nast also owns Ars Technica.

Submission + - 'Slop Evader' Lets You Surf the Web Like It's 2022 (404media.co)

alternative_right writes: AI slop feels inescapable — whether you’re watching TV, reading the news, or trying to find a new apartment.

That is, unless you’re using Slop Evader, a new browser tool that filters your web searches to only include results from before November 30, 2022 — the day that ChatGPT was released to the public.

The tool is available for Firefox and Chrome, and has one simple function: Showing you the web as it was before the deluge of AI-generated garbage. It uses Google search functions to index popular websites and filter results based on publication date, a scorched earth approach that virtually guarantees your searches will be slop-free.

Submission + - New Agent Workspace feature comes with security warning from Microsoft (scworld.com)

spatwei writes: An experimental new Windows feature that gives Microsoft Copilot access to local files comes with a warning about potential security risks.

The feature, which became available to Windows Insiders last week and is turned off by default, allows Copilot agents to work on apps and files in a dedicated space separate from the human user’s desktop. This dedicated space is called the Agent Workspace, while the agentic AI component is called Copilot Actions.

Turning on this feature creates an Agent Workspace and an agent account distinct from the user’s account, which can request access to six commonly used folders: Documents, Downloads, Desktop, Music, Pictures and Videos.

The Copilot agent can work directly with files in these folders to complete tasks such as resizing photos, renaming files or filling out forms, according to Microsoft. These tasks run in the background, isolated from the user’s main session, but can be monitored and paused by the user, allowing the user to take control as needed.

Windows documentation warns of the unique security risks associated with agentic AI, including cross-prompt injection (XPIA), where malicious instructions can be planted in documents or applications to trick the agent into performing unwanted actions like data exfiltration.

“Copilot agents’ access to files and applications greatly expands not only the scope of data that can be exfiltrated, but also the surface for an attacker to introduce an indirect prompt injection,” Shankar Krishnan, co-founder of PromptArmor, told SC Media.

Microsoft’s documentation about AI agent security emphasizes user supervision of agents’ actions, the use of least privilege principles when granting access to agent accounts and the fact that Copilot will request user approval before performing certain actions.

While Microsoft’s agentic security and privacy principles state that agents “are susceptible to attack in the same ways any other user or software components are,” Krishnan noted that the company provides “very little meaningful recommendations for customers” to address this risk when using Copilot Actions.

Submission + - AI Can't Think (theverge.com)

RossCWilliams writes:

The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own.

The article goes on to point out that we use language to communicate. We use it to create metaphors to describe our reasoning. That people who have lost their language ability can still show reasoning. That human beings create knowledge when they become dissatisfied with the current metaphor. Einstein's theory of relativity was not based on scientfic research. He developed it as thought experiment because he was dissatisfied with the existing metaphor. It quotes someone who said "common sense is a collection of dead metaphors." And that AI, at best, can rearrange those dead metaphors in interesting ways. But it will never be dissatisfied with the data it has or an existing metaphor.

A different critique has pointed out that even as a language model AI is flawed by its reliance on the internet. The languages used on the internet are unrepresentative the languages in the world. And other languages contain unique descriptions/metaphors that are not found on the internet. My metaphor for what she was talking about was the descriptions of kinds of snow that exist in Inuit languages that describe qualities nowhere found in European languages. If those metaphors aren't found on the internet AI will never be able create them.

This does not mean that AI isn't useful. But it is not remotely human intelligence. That is just a poor metaphor. We need a better one

Submission + - Shai Hulud 2.0 worm nibbling through npm supply chain (www.wiz.io)

BooleanMusic writes: In a rare glimpse into what might be happening constantly, a large-scale npm supply chain attack was uncovered on November 24, 2025. The campaign appears highly effective, exploiting the self-reinforcing nature of modern software development—like a snake devouring its own tail—to spread across repositories. Reported impact includes credential theft, exposure of internal system details, and other severe compromises.
While the attack references the earlier “Shai Hulud” campaign disclosed this year, it’s unclear whether the same actors are behind it.

Submission + - Crime Rings Enlist Hackers to Hijack Trucks (archive.is)

schwit1 writes: By breaking into carriers’ online systems, cyber-powered criminals are making off with truckloads of electronics, beverages and other goods

In the most recent tactics identified by cybersecurity firm Proofpoint, hackers posed as freight middlemen, posting fake loads to the boards. They slipped links with malicious software into email exchanges with bidders such as trucking companies. By clicking on the links, trucking companies unwittingly downloaded remote-access software that lets the hackers take control of their online systems.

Once inside, the hackers used the truckers’ accounts to bid on real shipments, such as electronics and energy drinks, said Selena Larson, a threat researcher at Proofpoint. “They know the business,” she said. “It’s a very convincing full-scale identity takeover.”

Slashdot Top Deals

"If you are afraid of loneliness, don't marry." -- Chekhov

Working...