Businesses

The New American Hustle: Dividends Over Day Jobs (bloomberg.com) 3

Young Americans are abandoning traditional retirement planning for dividend-focused ETFs that promise immediate income and freedom from traditional employment. Income-generating ETFs captured one in six dollars flowing into equity ETFs in 2025, pushing the sector to $750 billion -- with the most aggressive funds, those offering yields above 8%, quadrupling to $160 billion in assets over three years.

The r/dividends subreddit has grown tenfold to 780,000 members over five years, while YouTube channels and Discord servers dedicated to dividend investing proliferate. YieldMax's MSTY fund, offering a 90% distribution rate through complex derivatives, has underperformed MicroStrategy stock by 120 percentage points since February 2024 when dividends are reinvested -- nearly 200 points when payouts are withdrawn. Speaking to Bloomberg, finance professor Samuel Hartzmark identified this as the "free dividends fallacy," where investors fail to recognize that dividends reduce share prices rather than creating additional wealth.
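A minimal worked example makes the fallacy concrete (all numbers are hypothetical, not from the Bloomberg piece): on the ex-dividend date the share price drops by the amount of the payout, so a distribution moves money from the share price into cash rather than creating new wealth.

```python
# Toy illustration of the "free dividends fallacy".
# Idealized: the ex-dividend price drops by exactly the payout,
# ignoring taxes, fees, and unrelated market moves.

price_before = 100.00                    # share price just before ex-dividend
dividend = 8.00                          # cash distribution per share
price_after = price_before - dividend    # ex-dividend share price

shares = 10
wealth_before = shares * price_before                # 1000.0
cash_received = shares * dividend                    # 80.0
wealth_after = shares * price_after + cash_received  # 920.0 + 80.0

print(wealth_before, wealth_after)  # prints: 1000.0 1000.0
```

Whether the cash is withdrawn or reinvested, total wealth is unchanged at the moment of the payout; a high "distribution rate" is not income on top of the share price.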
Microsoft

Some Angry GitHub Users Are Rebelling Against GitHub's Forced Copilot AI Features (theregister.com) 27

Slashdot reader Charlotte Web shared this report from the Register: Among the software developers who use Microsoft's GitHub, the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories. The second most popular discussion — where popularity is measured in upvotes — is a bug report that seeks a fix for the inability of users to disable Copilot code reviews. Both of these questions, the first opened in May and the second opened a month ago, remain unanswered, despite an abundance of comments critical of generative AI and Copilot...

The author of the first, developer Andi McClure, published a similar request to Microsoft's Visual Studio Code repository in January, objecting to the reappearance of a Copilot icon in VS Code after she had uninstalled the Copilot extension... "I've been for a while now filing issues in the GitHub Community feedback area when Copilot intrudes on my GitHub usage," McClure told The Register in an email. "I deeply resent that on top of Copilot seemingly training itself on my GitHub-posted code in violation of my licenses, GitHub wants me to look at (effectively) ads for this project I will never touch. If something's bothering me, I don't see a reason to stay quiet about it. I think part of how we get pushed into things we collectively don't want is because we stay quiet about it."

It's not just the burden of responding to AI slop, an ongoing issue for Curl maintainer Daniel Stenberg. It's the permissionless copying and regurgitation of speculation as fact, mitigated only by small print disclaimers that generative AI may produce inaccurate results. It's also GitHub's disavowal of liability if Copilot code suggestions happen to have reproduced source code that requires attribution. It's what the Servo project characterizes in its ban on AI code contributions as the lack of code correctness guarantees, copyright issues, and ethical concerns. Similar objections have been used to justify AI code bans in GNOME's Loupe project, FreeBSD, Gentoo, NetBSD, and QEMU... Calls to shun Microsoft and GitHub go back a long way in the open source community, but moved beyond simmering dissatisfaction in 2022 when the Software Freedom Conservancy (SFC) urged free software supporters to give up GitHub, a position SFC policy fellow Bradley M. Kuhn recently reiterated.

McClure says that in the last six months their posts have drawn more community support — and tells The Register there's been a second shift in how people see GitHub within the last month. After GitHub moved from a distinct subsidiary to part of Microsoft's CoreAI group, "it seems to have galvanized the open source community from just complaining about Copilot to now actively moving away from GitHub."
IT

There's 50% Fewer Young Employees at Tech Companies Now Than Two Years Ago (fortune.com) 50

An anonymous reader shared this report from Fortune: The percentage of young Gen Z employees between the ages of 21 and 25 has been cut in half at technology companies over the past two years, according to recent data from Pave, a compensation-management software business, drawing on workforce data from more than 8,300 companies.

These young workers accounted for 15% of the workforce at large public tech firms in January 2023. By August 2025, they only represented 6.8%. The situation isn't pretty at big private tech companies, either — during that same time period, the proportion of early-career Gen Z employees dwindled from 9.3% to 6.8%. Meanwhile, the average age of a worker at a tech company has risen dramatically over those two and a half years. Between January 2023 and July 2025, the average age of all employees at large public technology businesses rose from 34.3 years to 39.4 years — more than a five-year difference. On the private side, the change was less drastic, with the typical age only increasing from 35.1 to 36.6 years old...

"If you're 35 or 40 years old, you're pretty established in your career, you have skills that you know cannot yet be disrupted by AI," Matt Schulman, founder and CEO of Pave, tells Fortune. "There's still a lot of human judgment when you're operating at the more senior level...If you're a 22-year-old that used to be an Excel junkie or something, then that can be disrupted. So it's almost a tale of two cities." Schulman points to a few reasons why tech company workforces are getting older and locking Gen Z out of jobs. One is that big companies — like Salesforce, Meta, and Microsoft — are becoming a lot more efficient thanks to the advent of AI. And despite their soaring trillion-dollar valuations, they're cutting employees at the bottom rungs in favor of automation. Entry-level jobs have also dwindled because of AI agents and because promotions have stalled at many companies looking to do more with less. Once technology companies weed out junior roles, occupied by Gen Zers, their workforces are bound to rise in age.

Schulman tells Fortune that Gen Z also has an advantage: tech corporations can see them as fresh talent that "can just break the rules and leverage AI to a much greater degree without the hindrance of years of bias." And Priya Rathod, workplace trends editor for LinkedIn, tells Fortune there are promising tech-industry entry-level roles in AI ethics, cybersecurity, UX, and product operations. "Building skills through certifications, gig work, and online communities can open doors....

"For Gen Z, the right certifications or micro credentials can outweigh a lack of years on the resume. This helps them stay competitive even when entry level opportunities shrink."
NASA

A New Four-Person Crew Will Simulate a Year-Long Mars Mission, NASA Announces (nasa.gov) 23

Somewhere in Houston, four research volunteers "will soon participate in NASA's year-long simulation of a Mars mission," NASA announced this week, saying it will provide "foundational data to inform human exploration of the Moon, Mars, and beyond."

The 378-day simulation will take place inside a 3D-printed, 1,700-square-foot habitat at NASA's Johnson Space Center in Houston — starting on October 19th and continuing until Halloween of 2026: Through a series of Earth-based missions called CHAPEA (Crew Health and Performance Exploration Analog), NASA aims to evaluate certain human health and performance factors ahead of future Mars missions. The crew will undergo realistic resource limitations, equipment failures, communication delays, isolation and confinement, and other stressors, along with simulated high-tempo extravehicular activities. These scenarios allow NASA to make informed trades between risks and interventions for long-duration exploration missions.

"As NASA gears up for crewed Artemis missions, CHAPEA and other ground analogs are helping to determine which capabilities could best support future crews in overcoming the human health and performance challenges of living and operating beyond Earth's resources — all before we send humans to Mars," said Sara Whiting, project scientist with NASA's Human Research Program at NASA Johnson. Crew members will carry out scientific research and operational tasks, including simulated Mars walks, growing a vegetable garden, robotic operations, and more. Technologies specifically designed for Mars and deep space exploration will also be tested, including a potable water dispenser and diagnostic medical equipment...

This mission, facilitated by NASA's Human Research Program, is the second one-year Mars surface simulation conducted through CHAPEA. The first mission concluded on July 6, 2024.

Microsoft

Microsoft's Analog Optical Computer Shows AI Promise (microsoft.com) 24

Four years ago a small Microsoft Research team started creating an analog optical computer. They used commercially available parts like sensors from smartphone cameras, optical lenses, and micro-LED lights finer than a human hair. "As the light passes through the sensor at different intensities, the analog optical computer can add and multiply numbers," explains a Microsoft blog post.

They envision the technology scaling to a computer that for certain problems is 100X faster and 100X more energy efficient — running AI workloads "with a fraction of the energy needed and at much greater speed than the GPUs running today's large language models." The results are described in a paper published in the scientific journal Nature, according to the blog post: At the same time, Microsoft is publicly sharing its "optimization solver" algorithm and the "digital twin" it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them. Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real analog optical computer [or "AOC"] behaves; it simulates the same inputs, processes and outputs, but in a digital environment — like a software version of the hardware. This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the analog optical computer hardware. "To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used," Parmigiani said.
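The digital-twin idea can be sketched in a few lines (this is a conceptual toy, not Microsoft's published model or API, and every name and number here is invented): treat the input vector as micro-LED intensities, the weights as optical attenuation, and each sensor pixel as summing one row of products, with Gaussian noise standing in for analog imperfection.

```python
# Conceptual toy "digital twin" of analog optical matrix-vector multiplication.
# Not Microsoft's released software -- just a sketch of the principle that
# light intensities (inputs) passing through attenuating elements (weights)
# sum on a sensor, with optional noise modeling analog error.
import random

def optical_matvec(weights, x, noise=0.0, seed=0):
    rng = random.Random(seed)
    out = []
    for row in weights:                                # one sensor pixel per row
        acc = sum(w * xi for w, xi in zip(row, x))     # multiply-accumulate "in light"
        acc += rng.gauss(0.0, noise)                   # analog imperfection
        out.append(acc)
    return out

W = [[0.2, 0.8], [0.5, 0.5]]   # transmission coefficients in [0, 1]
x = [1.0, 0.5]                 # micro-LED intensities
print([round(v, 6) for v in optical_matvec(W, x)])  # prints: [0.6, 0.75]
```

Setting `noise > 0` lets such a twin probe how much analog error a given optimization or AI workload can tolerate before its answers degrade, which is the kind of experimentation the team hopes outside researchers will pursue.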

Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K., said he believes the AOC could be a game changer. "We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare," he said. Further, "we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too." In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy...

As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.

Microsoft

Microsoft's Cloud Services Disrupted by Red Sea Cable Cuts (bbc.com) 39

An anonymous reader shared this report from the BBC: Microsoft's Azure cloud services have been disrupted by undersea cable cuts in the Red Sea, the US tech giant says.

Users of Azure — one of the world's leading cloud computing platforms — would experience delays because of problems with internet traffic moving through the Middle East, the company said. Microsoft did not explain what might have caused the damage to the undersea cables, but added that it had been able to reroute traffic through other paths.

Over the weekend, there were reports suggesting that undersea cable cuts had affected the United Arab Emirates and some countries in Asia.... On Saturday, NetBlocks, an organisation that monitors internet access, said a series of undersea cable cuts in the Red Sea had affected internet services in several countries, including India and Pakistan.

"We do expect higher latency on some traffic that previously traversed through the Middle East," Microsoft said in its status announcement — while stressing that traffic "that does not traverse through the Middle East is not impacted".
China

Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign (msn.com) 15

As America's trade talks with China were set to begin last July, a "puzzling" email reached several U.S. government agencies, law firms, and trade groups, reports the Wall Street Journal. It appeared to be from the chair of a U.S. Congressional committee, Representative John Moolenaar, asking recipients to review an alleged draft of upcoming legislation — sent as an attachment. "But why had the chairman sent the message from a nongovernment address...?"

"The cybersecurity firm Mandiant determined the spyware would allow the hackers to burrow deep into the targeted organizations if any of the recipients had opened the purported draft legislation, according to documents reviewed by The Wall Street Journal." It turned out to be the latest in a series of alleged cyber espionage campaigns linked to Beijing, people familiar with the matter said, timed to potentially deploy spyware against organizations giving input on President Trump's trade negotiations. The FBI and the Capitol Police are investigating the Moolenaar emails, and cyber analysts traced the embedded malware to a hacker group known as APT41 — believed to be a contractor for Beijing's Ministry of State Security... The hacking campaign appeared to be aimed at giving Chinese officials an inside look at the recommendations Trump was receiving from outside groups. It couldn't be determined whether the attackers had successfully breached any of the targets.

A Federal Bureau of Investigation spokeswoman declined to provide details but said the bureau was aware of the incident and was "working with our partners to identify and pursue those responsible...." The alleged campaign comes as U.S. law-enforcement officials have been surprised by the prolific and creative nature of China's spying efforts. The FBI revealed last month that a Beijing-linked espionage campaign that hit U.S. telecom companies and swept up Trump's phone calls actually targeted more than 80 countries and reached across the globe...

The Moolenaar impersonation comes as several administration officials have recently faced impostors of their own. The State Department warned diplomats around the world in July that an impostor was using AI to imitate Secretary of State Marco Rubio's voice in messages sent to foreign officials. Federal authorities are also investigating an effort to impersonate White House chief of staff Susie Wiles, the Journal reported in May... The FBI issued a warning that month that "malicious actors have impersonated senior U.S. officials" targeting contacts with AI-generated voice messages and texts.

And in January, the article points out, all the staffers on Moolenaar's committee "received emails falsely claiming to be from the CEO of Chinese crane manufacturer ZPMC, according to people familiar with the episode."

Thanks to long-time Slashdot reader schwit1 for sharing the news.
The Media

Publishers Demand 'AI Overview' Traffic Stats from Google, Alleging 'Forced' Deals (theguardian.com) 14

AI Overviews have lowered click-through traffic to Daily Mail sites by as much as 89%, the publisher told a UK government body that regulates competition. So they've joined other top news organizations (including Guardian Media Group and the magazine trade body the Periodical Publishers Association) in asking the regulators "to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers," reports the Guardian: Publishers — already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news — argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or "drop out of all search results", according to several sources... In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content. However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers' overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies. "Google Discover is of zero product importance to Google at all," he says. "It allows Google to funnel more traffic to publishers as traffic from search declines ... Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want."

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models. The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the "value being scraped" out of the £125bn sector. Some publishers have struck bilateral licensing deals with AI companies — such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI — while others such as the BBC have taken action against AI companies alleging copyright theft. "It is a two-pronged attack on publishers, a sort of pincer movement," says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. "Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis."

"At the moment the AI and tech community are showing no signs of supporting publisher revenue," says the chief executive of the UK's Periodical Publishers Association...
Linux

Linus Torvalds Expresses Frustration With 'Garbage' Link Tags In Git Commits (phoronix.com) 63

"I have not pulled this, I'm annoyed by having to even look at this, and if you actually expect me to pull this I want a real explanation and not a useless link," Linus Torvalds posted Friday on the Linux kernel mailing list.

Phoronix explains: It's become a common occurrence to see "Link: " tags within Git commits for the Linux kernel that merely point to the mailing-list submission of that same patch... Linus Torvalds has had enough and will be stricter about accepting pull requests whose link tags add no value. He commented yesterday on a block pull request that he pulled and then backed out of:

"And dammit, this commit has that promising 'Link:' argument that I hoped would explain why this pointless commit exists, but AS ALWAYS that link only wasted my time by pointing to the same damn information that was already there. I was hoping that it would point to some oops report or something that would explain why my initial reaction was wrong.

"Stop this garbage already. Stop adding pointless Link arguments that waste people's time. Add the link if it has *ADDITIONAL* information....

"Yes, I'm grumpy. I feel like my main job — really my only job — is to try to make sense of pull requests, and that's why I absolutely detest these things that are automatically added and only make my job harder."

A longer discussion ensued...
  • Torvalds: [A] "perfect" model might be to actually have some kind of automation of "unless there was actual discussion about it". But I feel such a model might be much too complicated, unless somebody *wants* to explore using AI because their job description says "Look for actual useful AI uses". In today's tech world, I assume such job descriptions do exist. Sigh...
  • Torvalds: I do think it makes sense for patch series that (a) are more than a small handful of patches and (b) have some real "story" to them (ie a cover letter that actually explains some higher-level issues)...

Torvalds also had two responses to a poster who'd said "IMHO it's better to have a Link and it _potentially_ being useful than not to have it and then need to search around for it."

  • Torvalds: No. Really. The issue is "potentially — but very likely not — useful" vs "I HIT THIS TEN+ TIMES EVERY SINGLE F%^& RELEASE".

    There is just no comparison. I have literally *never* found the original submission email to be useful, and I'm tired of the "potentially useful" argument that has nothing to back it up with. It's literally magical thinking of "in some alternate universe, pigs can fly, and that link might be useful"
  • Torvalds: And just to clarify: the hurt is real. It's not just the disappointment. It's the wasted effort of following a link and having to then realize that there's nothing useful there. Those links *literally* double the effort for me when I try to be careful about patches...

    The cost is real. The cost is something I've complained about before... Yes, it's literally free to you to add this cost. No, *YOU* don't see the cost, and you think it is helpful. It's not. It's the opposite of helpful. So I want commit messages to be relevant and explain what is going on, and I want them to NOT WASTE MY TIME.

    And I also don't want to ignore links that are actually *useful* and give background information. Is that really too much to ask for?

Torvalds points out he's brought this up four times before — once in 2022.

  • Torvalds: I'm a bit frustrated, exactly because this _has_ been going on for years. It's not a new peeve.

    And I don't think we have a good central place for that kind of "don't do this". Yes, there's the maintainer summit, but that's a pretty limited set of people. I guess I could mention it in my release notes, but I don't know who actually reads those either.. So I end up just complaining when I see it.
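For illustration, here is a hypothetical commit message (all identifiers invented) showing the distinction Torvalds is drawing, between a trailer that points back at the patch's own submission and one that supplies real background:

```
block: fix request ordering on flush

Link: https://lore.kernel.org/all/<patch-msgid>/   <- the patch's own
                                                      posting: adds nothing
Link: https://lore.kernel.org/all/<oops-msgid>/    <- the oops report that
                                                      motivated the fix
```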

Biotech

Scientists Discuss Next Steps to Prevent Dangerous 'Mirror Life' Research (msn.com) 70

USA Today has an update on the curtailing of "mirror life" research: Kate Adamala had been working on something dangerous. At her synthetic biology lab, Adamala had been taking preliminary steps toward creating a living cell from scratch with one key twist: All the organism's building blocks would be flipped. Changing these molecules would create an unnatural mirror image of a cell, as different as your right hand from your left. The endeavor was not only a fascinating research challenge, but it also could be used to improve biotechnology and medicine. As Adamala and her colleagues talked with biosecurity experts about the project, however, grave concerns began brewing. "They started to ask questions like, 'Have you considered what happens if that cell gets released or what would happen if it infected a human?'" said Adamala, an associate professor at the University of Minnesota. They hadn't.

So researchers brought together dozens of experts in a variety of disciplines from around the globe, including two Nobel laureates, who worked for months to determine the risks of creating "mirror life" and the chances those dangers could be mitigated. Ultimately, they concluded, mirror cells could inflict "unprecedented and irreversible harm" on our world. "We cannot rule out a scenario in which a mirror bacterium acts as an invasive species across many ecosystems, causing pervasive lethal infections in a substantial fraction of plant and animal species, including humans," the scientists wrote in a paper published in the journal Science in December alongside a 299-page technical report...

[Report co-author Vaughn Cooper, a professor at the University of Pittsburgh who studies how bacteria adapt to new environments] said it's not yet possible to build a cell from scratch, mirror or otherwise, but researchers have begun the process by synthesizing mirror proteins and enzymes. He and his colleagues estimated that given enough resources and manpower, scientists could create a complete mirror bacteria within a decade. But for now, the world is probably safe from mirror cells. Adamala said virtually everyone in the small scientific community that was interested in developing such cells has agreed not to as a result of the findings.

The paper prompted nearly 100 scientists and ethicists from around the world to gather in Paris in June to further discuss the risks of creating mirror organisms. Many felt self-regulation is not enough, according to the institution that hosted the event, and researchers are gearing up to meet again in Manchester, England, and Singapore to discuss next steps.

AI

AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds (phys.org) 53

How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student). They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning.
The researchers say their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses."

Other results from the survey:
  • 47 respondents (20.3%) never used AI assistants in this course.
  • Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low."
  • "Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."
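The Spearman analysis the researchers describe is easy to reproduce on toy data (the numbers below are invented for illustration, not taken from the study): Spearman's rho is just the Pearson correlation of the two variables' rank vectors, so heavier AI use paired with lower grades pushes it toward -1.

```python
# Illustrative Spearman rank correlation on synthetic data,
# mimicking the kind of analysis in the Tartu study.

def rank(xs):
    # Fractional ranking: tied values get the average of their ranks.
    sorted_xs = sorted(xs)
    ranks = []
    for x in xs:
        first = sorted_xs.index(x) + 1          # 1-based rank of first occurrence
        count = sorted_xs.count(x)
        ranks.append(first + (count - 1) / 2)   # average rank over the tie group
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: weekly AI-chat sessions vs. final grade (0-100)
ai_use = [0, 1, 2, 3, 5, 8, 10]
grade = [88, 90, 85, 80, 72, 65, 60]
print(round(spearman(ai_use, grade), 3))  # prints: -0.964
```

A real analysis would also report significance (e.g. the p-value from `scipy.stats.spearmanr`); correlation alone cannot distinguish "AI use hinders learning" from "struggling students use AI more", which is exactly the caveat the researchers raise.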

Firefox

New In Firefox Nightly Builds: Copilot Chatbot, New Tab Widgets, JPEG-XL Support (omgubuntu.co.uk) 36

The blog OMG Ubuntu notes that Microsoft Copilot chatbot support has been added in the latest Firefox Nightly builds. "Firefox's sidebar already offers access to popular chatbots, including OpenAI's ChatGPT, Anthropic's Claude, Mistral's Le Chat and Google's Gemini. It previously offered HuggingChat too." As the testing bed for features Mozilla wants to add to stable builds (though not all make it — eh, rounded bottom window corners?), this is something you can expect to find in a future stable update... Copilot in Firefox offers the same features as other chatbots: text prompts, file and image uploads, image generation, and voice input (for those who fancy their voice patterns being analysed and trained on). And like those other chatbots, there are usage limits, privacy policies, and (for some) account creation needed. In testing, Copilot would only generate half a summary for a webpage, telling me it was too long to produce without me signing in/up for an account.

On a related note, Mozilla has updated stable builds to let third-party chatbots summarise web pages when browsing (an in-app callout alerts users to the 'new' feature). Users yet to enable chatbots are subtly nudged to do so each time they right-click on a web page. [Between "Take Screenshot" and "View Page Source" there's a menu option for "Ask an AI Chatbot."] Despite making noise about its own (sluggish, but getting faster) on-device AI features that are privacy-orientated, Mozilla is bullish on the need for external chatbots.

The article suggests Firefox wants to keep up with Edge and Chrome (which can "infuse first-party AI features directly.") But it adds that Firefox's nightly build is also testing some non-AI features, like new task and timer widgets on Firefox's New Tab page. And "In Firefox Labs, there is an option to enable JPEG XL support, a super-optimised version of JPEG that is gaining traction (despite Google's intransigence)."

Other Firefox news:
  • Google "can keep paying companies like Mozilla to make Google the default search engine, as long as these deals aren't exclusive anymore," reports the blog It's FOSS News. (The judge wrote that "Cutting off payments from Google almost certainly will impose substantial — in some cases, crippling — downstream harms to distribution partners..." according to CNBC — especially since the non-profit Mozilla Foundation gets most of its annual revenue from its Google search deal.)
  • Don't forget you can now search your tabs, bookmarks and browsing history right from the address bar with keywords like @bookmarks, @tabs, and @history. (And @actions pulls up a list of actions like "Open private window" or "Restart Firefox").

Programming

32% of Senior Developers Say Half Their Shipped Code is AI-Generated (infoworld.com) 55

In July, 791 professional coders were surveyed by Fastly about their use of AI coding tools, reports InfoWorld. The results?

"About a third of senior developers (10+ years of experience) say over half their shipped code is AI-generated," Fastly writes, "nearly two and a half times the rate reported by junior developers (0-2 years of experience), at 13%." "AI will bench test code and find errors much faster than a human, repairing them seamlessly. This has been the case many times," one senior developer said...

Senior developers were also more likely to say they invest time fixing AI-generated code. Just under 30% of seniors reported editing AI output enough to offset most of the time savings, compared to 17% of juniors. Even so, 59% of seniors say AI tools help them ship faster overall, compared to 49% of juniors. Just over 50% of junior developers say AI makes them moderately faster. By contrast, only 39% of more senior developers say the same.

But senior devs are more likely to report significant speed gains: 26% say AI makes them a lot faster, double the 13% of junior devs who agree. One reason for this gap may be that senior developers are simply better equipped to catch and correct AI's mistakes... Nearly 1 in 3 developers (28%) say they frequently have to fix or edit AI-generated code enough that it offsets most of the time savings. Only 14% say they rarely need to make changes. And yet, over half of developers still feel faster with AI tools like Copilot, Gemini, or Claude.

Fastly's survey isn't alone in calling AI productivity gains into question. A recent randomized controlled trial (RCT) of experienced open-source developers found something even more striking: when developers used AI tools, they took 19% longer to complete their tasks. This disconnect may come down to psychology. AI coding often feels smooth... but the early speed gains are often followed by cycles of editing, testing, and reworking that eat into any gains. This pattern is echoed both in conversations we've had with Fastly developers and in many of the comments we received in our survey...

Yet, AI still seems to improve developer job satisfaction. Nearly 80% of developers say AI tools make coding more enjoyable... Enjoyment doesn't equal efficiency, but in a profession wrestling with burnout and backlogs, that morale boost might still count for something.

Fastly quotes one developer who said their AI tool "saves time by using boilerplate code, but it also needs manual fixes for inefficiencies, which keep productivity in check."

The study also found the practice of green coding "goes up sharply with experience. Just over 56% of junior developers say they actively consider energy use in their work, while nearly 80% among mid- and senior-level engineers consider this when coding."
Science

Switching Off One Crucial Protein Appears to Reverse Brain Aging in Mice (sciencealert.com) 22

A research team has discovered that older mice have more of the protein FTL1 in their hippocampus, reports ScienceAlert. The hippocampus is the region of the brain involved in memory and learning. And the researchers' paper says their new data raises "the exciting possibility that the beneficial effects of targeting neuronal ferritin light chain 1 (FTL1) at old age may extend more broadly, beyond cognitive aging, to neurodegenerative disease conditions in older people." FTL1 is known to be related to storing iron in the body, but it hadn't come up in relation to brain aging before... To test its involvement after their initial findings, the researchers used genetic editing to overexpress the protein in young mice, and reduce its level in old mice. The results were clear: the younger mice showed signs of impaired memory and learning abilities, as if they were getting old before their time, while in the older mice there were signs of restored cognitive function — some of the brain aging was effectively reversed...

"It is truly a reversal of impairments," says biomedical scientist Saul Villeda, from the University of California, San Francisco. "It's much more than merely delaying or preventing symptoms." Further tests on cells in petri dishes showed how FTL1 stopped neurons from growing properly, with neural wires lacking the branching structures that typically provide links between nerve cells and improve brain connectivity...

"We're seeing more opportunities to alleviate the worst consequences of old age," says Villeda. "It's a hopeful time to be working on the biology of aging."

The research was led by a team from the University of California, San Francisco — and published in Nature Aging.
Security

First AI-Powered 'Self-Composing' Ransomware Was Actually Just a University Research Project (tomshardware.com) 6

Cybersecurity company ESET thought they'd discovered the first AI-powered ransomware in the wild, which they'd dubbed "PromptLock". But it turned out to be the work of university security researchers...

"Unlike conventional malware, the prototype only requires natural language prompts embedded in the binary," the researchers write in a research paper, calling it "Ransomware 3.0: Self-Composing and LLM-Orchestrated." Their prototype "uses the gpt-oss:20b model from OpenAI locally" (using the Ollama API) to "generate malicious Lua scripts on the fly." Tom's Hardware said that would help PromptLock evade detection: If they had to call an API on [OpenAI's] servers every time they generate one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts are running on someone else's system.
The whole thing was actually an experiment by researchers at NYU's Tandon School of Engineering. So "While it is the first to be AI-powered," the school said in an announcement, "the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment."

An NYU spokesperson told Tom's Hardware that a Ransomware 3.0 sample was uploaded to the malware-analysis platform VirusTotal, where the ESET researchers mistook it for malware in the wild. But the malware does work: NYU said "a simulation malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems." Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof-of-concept and actual criminals using the same technique in real-world attacks. Now the study will likely inspire the ne'er-do-wells to adopt similar approaches, especially since it seems to be remarkably affordable.

"The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models."

As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers...

"The study serves as an early warning to help defenders prepare countermeasures," NYU said in an announcement, "before bad actors adopt these AI-powered techniques."

ESET posted on Mastodon that "Nonetheless, our findings remain valid — the discovered samples represent the first known case of AI-powered ransomware."

And the ESET researcher who'd mistakenly thought the ransomware was "in the wild" had warned that looking ahead, ransomware "will likely become more sophisticated, faster spreading, and harder to detect.... This makes cybersecurity awareness, regular backups, and stronger digital hygiene more important than ever."
