Submission + - Preliminary report says fuel switches were cut off before Air India 787 crash

hcs_$reboot writes: A pair of switches that control the fuel supply to the engines were set to "cutoff" moments before the crash of Air India Flight 171, according to a preliminary report from India's Air Accident Investigation Bureau released early Saturday in India.

According to the report, data from the flight recorders show that the two fuel control switches were switched from the "run" position to "cutoff" shortly after takeoff. In the cockpit voice recording, one of the pilots can be heard asking the other "why did he cutoff," the report says, while "the other pilot responded that he did not do so."

Moments later, the report says, the fuel switches were returned to the "run" position. But by then, the plane had begun to lose thrust and altitude. Both engines appeared to relight, according to investigators, but only one of them was able to begin generating thrust.

Submission + - AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds (arstechnica.com)

An anonymous reader writes: When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller than 25 meters in NYC" after losing their job — a potential suicide risk — GPT-4o helpfully listed specific tall bridges instead of identifying the crisis. These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements.

The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist." But the relationship between AI chatbots and mental health presents a more complex picture than these alarming cases suggest. The Stanford research tested controlled scenarios rather than real-world therapy conversations, and the study did not examine potential benefits of AI-assisted therapy or cases where people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.

Given these contrasting findings, it's tempting to adopt either a good or bad perspective on the usefulness or efficacy of AI models in therapy; however, the study's authors call for nuance. Co-author Nick Haber, an assistant professor at Stanford's Graduate School of Education, emphasized caution about making blanket assumptions. "This isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," Haber told the Stanford Report, which publicizes the university's research. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be." The Stanford study, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin.

Submission + - JPMorgan Tells Fintechs They Have to Pay Up for Customer Data (bloomberglaw.com)

An anonymous reader writes: JPMorgan Chase has told financial-technology companies that it will start charging fees amounting to hundreds of millions of dollars for access to their customers’ bank account information – a move that threatens to upend the industry’s business models. The largest US bank has sent pricing sheets to data aggregators — which connect banks and fintechs — outlining the new charges, according to people familiar with the matter. The fees vary depending on how companies use the information, with higher levies tied to payments-focused companies, the people said, asking not to be identified discussing private information.

A representative for JPMorgan said the bank has invested significant resources to create a secure system that protects consumer data. “We’ve had productive conversations and are working with the entire ecosystem to ensure we’re all making the necessary investments in the infrastructure that keeps our customers safe,” the spokesperson said in a statement. The fees — expected to take effect later this year depending on the fate of a Biden-era regulation — aren’t final and could be negotiated. [The open-banking measure, finalized in October, enables consumers to demand, download and transfer their highly-coveted data to another lender or financial services provider for free.]

The charges would drastically reshape the business for fintech firms, which fundamentally rely on their access to customers’ bank accounts. Payment platforms like PayPal’s Venmo, cryptocurrency wallets such as Coinbase and retail-trading brokerages like Robinhood all use this data so customers can send, receive and trade money. Typically, the firms have been able to get it for free. Many fintechs access data using aggregators such as Plaid and MX, which provide the plumbing between fintechs and banks. The new fees — which vary from firm to firm — could be passed from the aggregators to the fintechs and, ultimately, consumers. The aggregator firms have been in discussions with JPMorgan about the charges, and those talks are constructive and ongoing, another person familiar with the matter said.

Submission + - NVIDIA warns your GPU may be vulnerable to Rowhammer attacks (nerds.xyz)

BrianFagioli writes: NVIDIA just put out a new security notice, and if you're running one of its powerful GPUs, you might want to pay attention. Researchers from the University of Toronto have shown that Rowhammer attacks, which are already known to affect regular DRAM, can now target GDDR6 memory on NVIDIA's high-end GPUs when ECC is not enabled.

They pulled this off using an A6000 card, and it worked because system-level ECC was turned off. Once it was switched on, the attack no longer worked. That tells you everything you need to know. ECC matters.

Rowhammer has been around for years. It's one of those weird memory bugs where repeatedly accessing one row in RAM can cause bits to flip in another row. Until now, this was mostly a CPU memory problem. But this research shows it can also be a GPU problem, and that should make data center admins and workstation users pause for a second.

NVIDIA is not sounding an alarm so much as reminding everyone that protections are already in place, but only if you're using the hardware properly. The company recommends enabling ECC if your GPU supports it. That includes cards in the Blackwell, Hopper, Ada, and Ampere lines, along with others used in DGX, HGX, and Jetson systems. It also includes popular workstation cards like the RTX A6000.

There's also built-in On-Die ECC in certain newer memory types like GDDR7 and HBM3. If you're lucky enough to be using a card that has it, you're automatically protected to some extent, because OD-ECC can't be turned off. It's always working in the background.

But let's be real. A lot of people skip ECC because it can impact performance or because they're running a setup that doesn't make it obvious whether ECC is on or off. If you're not sure where you stand, it's time to check. NVIDIA suggests using tools like nvidia-smi or, if you're in a managed enterprise setup, working with your system's BMC or Redfish APIs to verify settings.
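If you want a quick way to check from a script rather than eyeballing nvidia-smi by hand, a minimal sketch like the one below can do it from Python. The ecc.mode.current query field and CSV output format are assumptions about a reasonably recent driver, so treat this as a starting point rather than a definitive check.

```python
# Minimal sketch: report the current ECC mode for each NVIDIA GPU by querying
# nvidia-smi. Assumes nvidia-smi is on PATH and that the installed driver
# supports the ecc.mode.current query field.
import subprocess

def ecc_status():
    output = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,ecc.mode.current",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.strip().splitlines():
        # Three comma-separated fields per GPU: index, name, ECC mode
        index, name, ecc = (field.strip() for field in line.split(",", 2))
        print(f"GPU {index} ({name}): ECC is {ecc}")

if __name__ == "__main__":
    ecc_status()
```

On consumer cards without configurable ECC, the query typically comes back as not available, which is itself useful to know before you assume you're protected.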

Submission + - TEAMGROUP launches self-destruct SSD with hardware kill switch for extreme data protection (nerds.xyz)

BrianFagioli writes: TEAMGROUP has launched the INDUSTRIAL P250Q SSD, a PCIe Gen4 NVMe drive designed for high-security environments like military, industrial automation, and AI systems. It includes both software and hardware self-destruction features, including a dedicated destruction circuit that targets the flash memory directly. The SSD can resume wiping even after power loss, and includes a one-button trigger and LED progress indicators. With speeds up to 7000MB per second and capacities up to 2TB, it pairs high performance with extreme data protection. TEAMGROUP says this is part of a broader push into secure and temperature-hardened storage, including a new US-patented wide-temp SSD design.

Submission + - Russian basketball player arrested for alleged role in computer piracy (lemonde.fr)

joshuark writes: A Russian basketball player, Daniil Kasatkin, was arrested on 21 June in France at the request of the United States, which alleges he is part of a network of hackers. Daniil Kasatkin, aged 26, is accused by the United States of negotiating the payment of ransoms to this hacker network, which he denies. He studied in the United States and is the subject of a US arrest warrant for “conspiracy to commit computer fraud” and “computer fraud conspiracy.” His lawyer says Kasatkin is not guilty of these crimes and that the accusations are instead linked to a second-hand computer that he purchased.

"He bought a second-hand computer. He did absolutely nothing. He's stunned ," his lawyer, Frédéric Bélot, told the media. "He's useless with computers and can't even install an application. He didn't touch anything on the computer: it was either hacked, or the hacker sold it to him to act under the cover of another person."

Submission + - Wemo support ends in 2026 as Belkin abandons cloud-connected smart home devices (nerds.xyz)

BrianFagioli writes: Well, folks, it’s finally happening. Sadly, Belkin is ending support for a long list of older Wemo smart home devices, and while I wish I could say I’m surprised, I’m not. I had a gut feeling this was coming.

As of January 31, 2026, dozens of Wemo products will lose access to the Wemo app and any cloud-connected features. That includes Google Assistant and Amazon Alexa integration. Remote control and voice commands will stop working. These devices will only function locally if they were set up with Apple HomeKit ahead of the cutoff.

To be honest, this doesn’t shock me. The devices had become increasingly unreliable. Updates were scarce. Bugs went unfixed. Belkin seemed to be ignoring Wemo for years. I eventually gave up and threw out the Wemo devices I had purchased. I switched to TP-Link’s Tapo line and haven’t looked back.

Still, I’m upset. Wemo helped shape the early days of the smart home boom. It had promise. But that promise slowly fizzled out. Now longtime customers are left with hardware that’s about to lose the features it was sold with.

Belkin says devices that work with Apple HomeKit will continue to function if configured before the shutdown. Some newer models that use Thread will also survive and keep working through HomeKit without relying on the Wemo cloud. These include the Wemo Smart Plug with Thread and the Wemo Smart Video Doorbell.

If your Wemo product is still under warranty after January 31, 2026, Belkin says you might be eligible for a partial refund. But for everything else, they’re basically telling users to recycle them. That’s a hard pill to swallow if you spent money expecting long-term functionality.

While I understand that supporting unprofitable products isn’t sustainable, this situation still highlights a bigger issue with cloud-dependent tech. Once a company pulls the plug, your smart gear becomes useless. That’s why I now recommend choosing devices that continue to work offline or with open standards.

This Wemo shutdown should be a wake-up call for anyone building a smart home in 2025. Make sure your devices won’t stop working just because a company changes its priorities.

Submission + - AI-Trained Surgical Robot Removes Pig Gallbladders Without Any Human Help (theguardian.com)

An anonymous reader writes: Automated surgery could be trialled on humans within a decade, say researchers, after an AI-trained robot armed with tools to cut, clip and grab soft tissue successfully removed pig gall bladders without human help. The robot surgeons were schooled on video footage of human medics conducting operations using organs taken from dead pigs. In an apparent research breakthrough, eight operations were conducted on pig organs with a 100% success rate by a team led by experts at Johns Hopkins University in Baltimore in the US. [...]

The technology allowing robots to handle complex soft tissues such as gallbladders, which release bile to aid digestion, is rooted in the same type of computerised neural networks that underpin widely used artificial intelligence tools such as ChatGPT or Google Gemini. The surgical robots were slightly slower than human doctors but they were less jerky and plotted shorter trajectories between tasks. The robots were also able to repeatedly correct mistakes as they went along, asked for different tools and adapted to anatomical variation, according to a peer-reviewed paper published in the journal Science Robotics. The authors from Johns Hopkins, Stanford and Columbia universities called it “a milestone toward clinical deployment of autonomous surgical systems.” [...]

In the Johns Hopkins trial, the robots took just over five minutes to carry out the operation, which required 17 steps including cutting the gallbladder away from its connection to the liver, applying six clips in a specific order and removing the organ. The robots on average corrected course without any human help six times in each operation. “We were able to perform a surgical procedure with a really high level of autonomy,” said Axel Krieger, assistant professor of mechanical engineering at Johns Hopkins. “In prior work, we were able to do some surgical tasks like suturing. What we’ve done here is really a full procedure. We have done this on eight gallbladders, where the robot was able to perform precisely the clipping and cutting step of gallbladder removal without any human intervention. So I think it’s a really big landmark study that such a difficult soft tissue surgery is possible to do autonomously.”

Submission + - Google replaces Android Developer Preview with rolling Canary channel (nerds.xyz)

BrianFagioli writes: Google is changing how it gives developers access to early Android features. The company is replacing its old Developer Preview model with a new Canary channel that provides rolling updates all year long. This new approach is meant to give developers earlier and more consistent access to experimental tools and APIs.

Previously, Developer Previews had to be manually flashed onto devices. They only ran during the earliest stages of each release cycle and stopped once Android entered the beta phase. That meant promising features that were not quite ready for beta had nowhere to go and no way to collect feedback. The Canary channel solves that by running in parallel with the existing beta program and delivering over-the-air updates automatically.

Canary builds are meant for developers who want to test the newest platform features before anyone else. These builds may include changes that never make it to a stable release. Google warns that the Canary channel is not intended for daily use and should not be used on your primary device. Bugs and breakage are to be expected.

Submission + - Video Game Actors End 11-Month Strike With New AI Protections (san.com)

An anonymous reader writes: Hollywood video game performers ended their nearly year-long strike Wednesday with new protections against the use of digital replicas of their voices or appearances. If those replicas are used, actors must be paid at rates comparable to in-person work. The SAG-AFTRA union demanded stronger pay and better working conditions. Among their top concerns was the potential for artificial intelligence to replace human actors without compensation or consent.

Under a deal announced in a media release, studios such as Activision and Electronic Arts are now required to obtain written consent from performers before creating digital replicas of their work. Actors have the right to suspend their consent for AI-generated material if another strike occurs. “This deal delivers historic wage increases, industry-leading AI protections and enhanced health and safety measures for performers,” Audrey Cooling, a spokesperson for the video game producers, said in the release. The full list of studios includes Activision Productions, Blindlight, Disney Character Voices, Electronic Arts Productions, Formosa Interactive, Insomniac Games, Llama Productions, Take 2 Productions and WB Games.

SAG-AFTRA members approved the contract by a vote of 95.04% to 4.96%, according to the announcement. The agreement includes a wage increase of more than 15%, with additional 3% raises in November 2025, 2026 and 2027. The contract expires in October 2028. [...] The video game strike, which started in July 2024, did not shut down production like the SAG-AFTRA actors’ strike in 2023. Hollywood actors went on strike for 118 days, from July 14 to November 9, 2023, halting nearly all scripted television and film work. That strike, which centered on streaming residuals and AI concerns, prevented actors from engaging in promotional work, such as attending premieres and posting on social media. In contrast, video game performers were allowed to work during their strike, but only with companies that had signed interim agreements addressing concerns related to AI. More than 160 companies signed on, according to The Associated Press. Still, the year took a toll.

Submission + - Bitwarden launches MCP server to securely connect AI agents with your passwords (nerds.xyz)

BrianFagioli writes: Bitwarden is bringing artificial intelligence into your password workflow without compromising privacy or security. The open source password manager has just released a new Model Context Protocol server that lets AI agents securely interact with credential data, including generating and retrieving passwords. The big deal is that it maintains Bitwarden's zero-knowledge, end-to-end encryption model while keeping everything local.

The new MCP server runs directly on the user's machine. That local-first design means sensitive data does not have to travel to the cloud just to be useful. The tool ties into the Bitwarden CLI, which allows users to automate vault operations and credential access through terminal commands. For even more control, there is support for self-hosted deployments.
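As a rough illustration of the kind of local vault automation the MCP server builds on, here is a minimal Python sketch that pulls a credential through the bw command-line client. The item name and the BW_SESSION setup are placeholder assumptions, and this is not the MCP server itself, just the underlying CLI pattern.

```python
# Sketch: fetch a password from a local Bitwarden vault via the "bw" CLI,
# the same interface the MCP server drives for vault operations.
# Assumes the CLI is installed and the vault has been unlocked with
# `bw unlock`, with the returned session key exported as BW_SESSION.
# The item name "example-login" is a hypothetical placeholder.
import os
import subprocess

def get_password(item_name: str) -> str:
    session = os.environ["BW_SESSION"]  # session key from `bw unlock`
    result = subprocess.run(
        ["bw", "get", "password", item_name, "--session", session],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(get_password("example-login"))
```

Keeping the lookup local fits the zero-knowledge model: the secret never leaves the machine, and an AI agent only ever sees what a specific tool call returns.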

This is not just a Bitwarden-specific tool either. The Model Context Protocol is an open standard that helps AI systems safely interact with human applications like developer tools and content platforms through a consistent interface. Instead of duct-taping together a mess of APIs, MCP offers a way for AI to get structured context across multiple platforms.

As AI agents become more autonomous, they need secure access to sensitive workflows like credential management. Bitwarden's MCP server provides a privacy-focused way to give AI assistants that access without breaking encryption or giving up control. It is a serious answer to the growing question of how AI should handle your most private information.

Bitwarden is not chasing hype here. The company is focused on real-world use cases that matter to developers, sysadmins, and privacy-conscious users. The MCP server is fully open source and available now on Bitwarden's GitHub, with expanded documentation and packaging coming soon.

Whether you are testing AI automation or looking to streamline credential workflows, Bitwarden's new tool helps make that possible while keeping your passwords secure.

Submission + - YouTube vs AI slop (gizmodo.com)

SonicSpike writes: YouTube is inundated with AI-generated slop, and that’s not going to change anytime soon. Instead of cutting down on the total number of slop channels, the platform is planning to update its policies to cut out some of the worst offenders making money off “spam.” At the same time, it’s still full steam ahead adding tools to make sure your feeds are full of mass-produced brainrot.

In an update to its support page posted last week, YouTube said it will modify guidelines for its Partner Program, which lets some creators with enough views make money off their videos. The video platform said it requires YouTubers to create “original” and “authentic” content, but now it will “better identify mass-produced and repetitious content.” The changes will take place on July 15. The company didn’t advertise whether this change is related to AI, but the timing can’t be overlooked considering how more people are noticing the rampant proliferation of slop content flowing onto the platform every day.

The AI “revolution” has resulted in a landslide of trash content that has mired most creative platforms. Alphabet-owned YouTube has been especially bad recently, with multiple channels dedicated exclusively to pumping out legions of fake and often misleading videos into the sludge-filled sewer that has become users’ YouTube feeds. AI slop has become so prolific it has infected most social media platforms, including Facebook and Instagram. Last month, John Oliver on “Last Week Tonight” specifically highlighted several YouTube channels that crafted obviously fake stories made to show White House Press Secretary Karoline Leavitt in a good light. These channels and similar accounts across social media pump out these quick AI-generated videos to make a quick buck off YouTube’s Partner Program.

Gizmodo reached out to YouTube to see if it could clarify what it considers “mass-produced” and “repetitious.” In an email statement, YouTube said this wasn’t a “new policy” but was a “minor update” effort to confront content already abusing the platform’s rules—calling such mass-produced content “spam.”

Submission + - China bet on coal while winning the green race (asiatimes.com)

RossCWilliams writes: An analysis from an Indian think tank, published in Asia Times, discusses the relationship between coal and China's growing leadership in alternative energy.

Of course, that is a worldwide problem. As the story makes clear, we still depend on fossil fuels for many of the materials needed for the transition to renewable energy.

China’s energy profile is a paradox. The country accounts for more than half of global coal use even as it builds the world’s largest solar-panel and EV industries.

Cheap coal power gives Chinese factories rock-bottom electricity costs, and state oil/gas revenue bankrolls clean-energy projects.

By spring 2025 wind and solar already supplied over a quarter of China’s power, suggesting domestic coal use may have peaked. But the coal wealth remains strategic: With slower demand at home, Chinese miners are now exporting more (early 2025 coal shipments were ~13% higher year-on-year).

In effect, China’s green ascent has been underwritten by its coal economy.


Submission + - With Microsoft's Support, Code.org Announces New 'Hour of AI' for Schoolkids

theodp writes: Tired: Hour of Code. Wired: Hour of AI. "The Hour of Code sparked a generation," proclaims tech-backed nonprofit Code.org. "This fall, the Hour of AI will define the next."

Twelve years after Code.org CEO Hadi Partovi announced the Hour of Code with Microsoft President Brad Smith at his side, Partovi announced the Hour of AI ("Coming Fall 2025") at Wednesday's Microsoft Elevate launch event, again with Smith at his side. The announcement of the Microsoft-bankrolled nonprofit's new Hour of AI, which aims to get K-12 schoolchildren to take their first step into AI much like the Hour of Code has done with CS for hundreds of millions of children since 2013, comes in a week that saw Microsoft pledge $4B for AI education training programs and announce the launch of a $22.5M AI training center for members of the American Federation of Teachers.

“Coding changed the work of software developers, but it didn’t change every occupation and profession, or the work of every professional, the way A.I. probably will,” Microsoft's Smith explained. “So we need to move faster for A.I. than we did for computer science [Smith was a founding Board member of Code.org].” As its tech company sponsors aim to disrupt traditional coding and education with their AI offerings, Code.org has pivoted its own mission to "make CS and AI a core part of K-12 education" and launched a new campaign with support from tech leaders including Microsoft CEO Satya Nadella to make CS and AI a graduation requirement.
