Supercomputing

New Advances Promise Secure Quantum Computing At Home (phys.org) 27

Scientists from Oxford University Physics have developed a breakthrough in cloud-based quantum computing that could allow it to be harnessed by millions of individuals and companies. The findings have been published in the journal Physical Review Letters. Phys.Org reports: In the new study, the researchers use an approach dubbed "blind quantum computing," which connects two totally separate quantum computing entities -- potentially an individual at home or in an office accessing a cloud server -- in a completely secure way. Importantly, their new methods could be scaled up to large quantum computations. "Using blind quantum computing, clients can access remote quantum computers to process confidential data with secret algorithms and even verify the results are correct, without revealing any useful information. Realizing this concept is a big step forward in both quantum computing and keeping our information safe online," said study lead Dr. Peter Drmota, of Oxford University Physics.

The researchers created a system comprising a fiber network link between a quantum computing server and a simple device detecting photons, or particles of light, at an independent computer remotely accessing its cloud services. This allows so-called blind quantum computing over a network. Every computation incurs a correction that must be applied to all that follow and needs real-time information to comply with the algorithm. The researchers used a unique combination of quantum memory and photons to achieve this. The results could ultimately lead to commercial development of devices to plug into laptops, to safeguard data when people are using quantum cloud computing services.
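For readers curious what "a correction that must be applied to all that follow" looks like in practice, here is a minimal, purely classical sketch of the client-side control loop used in measurement-based blind quantum computing protocols such as UBQC. It is an illustration only: the angle bookkeeping follows the standard recipe, but the dependency sets, the server stand-in, and all names below are hypothetical, and nothing quantum is actually simulated.

```python
import random
import math

# Schematic sketch (classical bookkeeping only) of the control loop in
# measurement-based blind quantum computing: the client hides each true
# measurement angle phi behind a random pre-rotation theta and a random bit r,
# and every outcome the server reports feeds Pauli corrections into the angles
# sent later. The server below just returns random bits; the real quantum
# behaviour and the exact correction dependencies (set by the computation
# graph) are not modelled -- this only shows why real-time feedback is needed.
ANGLES = [k * math.pi / 4 for k in range(8)]

def server_measure(delta):
    return random.randint(0, 1)                    # placeholder, not physics

def client_run(phi, x_deps, z_deps):
    """phi[j]: true angle for qubit j; x_deps/z_deps[j]: earlier qubits whose
    outcomes flip/shift qubit j's angle (assumed known from the pattern)."""
    theta = [random.choice(ANGLES) for _ in phi]   # secret pre-rotations
    r     = [random.randint(0, 1) for _ in phi]    # secret outcome flips
    s     = []                                     # corrected outcomes so far
    for j, angle in enumerate(phi):
        s_x = sum(s[i] for i in x_deps[j]) % 2
        s_z = sum(s[i] for i in z_deps[j]) % 2
        adjusted = (-1) ** s_x * angle + s_z * math.pi       # apply corrections
        delta = (adjusted + theta[j] + r[j] * math.pi) % (2 * math.pi)
        b = server_measure(delta)                  # server only ever sees delta
        s.append(b ^ r[j])                         # un-blind the outcome
    return s

# Example: qubit 2's angle depends on the outcome of qubit 0.
# outcomes = client_run(phi=[0.0, math.pi / 4, math.pi / 2],
#                       x_deps=[[], [], [0]], z_deps=[[], [0], []])
```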
"We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity," said Professor David Lucas, who co-heads the Oxford University Physics research team and is lead scientist at the UK Quantum Computing and Simulation Hub, led from Oxford University Physics.
Crime

Former Google Engineer Indicted For Stealing AI Secrets To Aid Chinese Companies 28

Linwei Ding, a former Google software engineer, has been indicted for stealing trade secrets related to AI to benefit two Chinese companies. He faces up to 10 years in prison and a $250,000 fine on each criminal count. Reuters reports: Ding's indictment was unveiled a little over a year after the Biden administration created an interagency Disruptive Technology Strike Force to help stop advanced technology from being acquired by countries such as China and Russia or from being used to threaten national security. "The Justice Department just will not tolerate the theft of our trade secrets and intelligence," U.S. Attorney General Merrick Garland said at a conference in San Francisco.

According to the indictment, Ding stole detailed information about the hardware infrastructure and software platform that lets Google's supercomputing data centers train large AI models through machine learning. The stolen information included details about chips and systems, and software that helps power a supercomputer "capable of executing at the cutting edge of machine learning and AI technology," the indictment said. Google designed some of the allegedly stolen chip blueprints to gain an edge over cloud computing rivals Amazon.com and Microsoft, which design their own, and reduce its reliance on chips from Nvidia.

Hired by Google in 2019, Ding allegedly began his thefts three years later, while he was being courted to become chief technology officer for an early-stage Chinese tech company, and by May 2023 had uploaded more than 500 confidential files. The indictment said Ding founded his own technology company that month, and circulated a document to a chat group that said "We have experience with Google's ten-thousand-card computational power platform; we just need to replicate and upgrade it." Google became suspicious of Ding in December 2023 and took away his laptop on Jan. 4, 2024, the day before Ding planned to resign.
A Google spokesperson said: "We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets. After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement."
Supercomputing

Investors Threw 50% Less Money At Quantum Last Year (theregister.com) 32

Dan Robinson reports via The Register: Quantum companies received 50 percent less venture capital funding last year as investors switched to generative AI or shied away from risky bets on Silicon Valley startups. Progress in quantum computing is being made, but practical applications of the technology are still likely years away. Investment in quantum technology reached a high of $2.2 billion in 2022, as confidence (or hype) grew in this emerging market, but that funding fell to about $1.2 billion last year, according to the latest State of Quantum report, produced by The Quantum Insider, with quantum computing company IQM, plus VCs OpenOcean and Lakestar. The picture is even starker in the US, where there was an 80 percent decline in venture capital for quantum, while the APAC region dropped by 17 percent, and EMEA grew slightly by three percent.

But the report denies that we have reached a "quantum winter," comparable with the "AI winter" periods of scarce funding and little progress. Instead, the quantum industry continues to progress towards useful quantum systems, just at a slower pace, and the decline in funding must be seen as part of broader venture capital trends, it insists. "Calendar year 2023 was an interesting year with regards to quantum," Heather West, research manager for Quantum Computing, Infrastructure Systems, Platforms, and Technology at IDC, told The Register. "With the increased interest in generative AI, we started to observe that some of the funding that was being invested into quantum was transferred to AI initiatives and companies. Generative AI was seen as the new disruptive technology which end users could use immediately to gain an advantage or value, whereas quantum, while expected to be a disruptive technology, is still very early in development," West said.

Gartner Research vice president Matthew Brisse agreed. "It's due to the slight shift of CIO priorities toward GenAI. If organizations were spending 10 innovation dollars on quantum, now they are spending five. Not abandoning it, but looking at GenAI to provide value sooner to the organization than quantum," he told us. Meanwhile, venture capitalists in America are fighting shy of risky bets on Silicon Valley startups and instead keeping their powder dry as they look to more established technology companies or else shore up their existing portfolio of investments, according to the Financial Times.

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts while assisting Clyso with "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]: I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which was quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...
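The FIO tests "directly against the drives" mentioned above are the usual way to take Ceph out of the picture entirely. Below is a rough sketch of such a test; the device path, queue depth, job count, and runtime are illustrative assumptions, and pointing it at a drive will overwrite data on it.

```python
import json
import subprocess

# Minimal sketch of a direct-to-drive FIO test: 4K random writes against a raw
# NVMe namespace, bypassing Ceph. WARNING: this writes to the device -- only
# point it at a disk you are prepared to wipe. Parameters are illustrative.
def fio_randwrite(device="/dev/nvme0n1", iodepth=32, jobs=4, seconds=60):
    cmd = [
        "fio", "--name=randwrite", f"--filename={device}",
        "--ioengine=libaio", "--direct=1", "--rw=randwrite", "--bs=4k",
        f"--iodepth={iodepth}", f"--numjobs={jobs}",
        f"--runtime={seconds}", "--time_based", "--group_reporting",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    return job["write"]["iops"], job["write"]["bw"]  # bw reported in KiB/s

# iops, bw = fio_randwrite()  # compare against the same drive under an OSD
```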

For over a week, we looked at everything from BIOS settings and NVMe multipath to low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There was one minor fix and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states." (A quick way to inspect c-state configuration from Linux is sketched after this list.)
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."
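As a footnote to Fix One, here is a small read-only sketch, assuming a Linux node exposing the standard sysfs cpuidle interface, for listing which c-states are currently enabled. Whether the BIOS is in its maximum-performance profile is a separate, vendor-specific setting that sysfs does not show.

```python
from pathlib import Path

# Read-only check of Fix One on a Linux node: list each cpuidle state for a
# given CPU and whether it is currently disabled. Uses the standard sysfs
# paths; the CPU name is an illustrative default.
def cstate_report(cpu="cpu0"):
    base = Path("/sys/devices/system/cpu") / cpu / "cpuidle"
    if not base.exists():
        print("cpuidle not exposed (driver disabled or c-states off in BIOS)")
        return
    for state in sorted(base.glob("state*")):
        name = (state / "name").read_text().strip()
        disabled = (state / "disable").read_text().strip() == "1"
        latency = (state / "latency").read_text().strip()
        print(f"{state.name}: {name:<12} exit latency {latency:>5} us "
              f"{'(disabled)' if disabled else '(enabled)'}")

cstate_report()
```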

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...


Supercomputing

Quantum Computing Startup Says It Will Beat IBM To Error Correction (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: On Tuesday, the quantum computing startup QuEra laid out a road map that will bring error correction to quantum computing in only two years and enable useful computations using it by 2026, years ahead of when IBM plans to offer the equivalent. Normally, this sort of thing should be dismissed as hype. Except the company is QuEra, which is a spinoff of the Harvard University lab that demonstrated the ability to identify and manage errors using hardware that's similar in design to what QuEra is building. Also notable: QuEra uses the same type of qubit that a rival startup, Atom Computing, has already scaled up to over 1,000 qubits. So, while the announcement should be viewed cautiously -- several companies have promised rapid scaling and then failed to deliver -- there are some reasons it should be viewed seriously as well. [...]

As our earlier coverage described, the Harvard lab where the technology behind QuEra's hardware was developed has already demonstrated a key step toward error correction. It created logical qubits from small collections of atoms, performed operations on them, and determined when errors occurred (those errors were not corrected in these experiments). But that work relied on operations that are relatively easy to perform with trapped atoms: two qubits were superimposed, and both were exposed to the same combination of laser lights, essentially performing the same manipulation on both simultaneously. Unfortunately, only a subset of the operations that are likely to be desired for a calculation can be done that way. So, the road map includes a demonstration of additional types of operations in 2024 and 2025. At the same time, the company plans to rapidly scale the number of qubits. Its goal for 2024 hasn't been settled on yet, but [QuEra's Yuval Boger] indicated that the goal is unlikely to be much more than double the current 256. By 2025, however, the road map calls for over 3,000 qubits and over 10,000 a year later. This year's small step will add pressure to the need for progress in the ensuing years.

If things go according to plan, the 3,000-plus qubits of 2025 can be combined to produce 30 logical qubits, meaning about 100 physical qubits per logical one. This allows fairly robust error correction schemes and has undoubtedly been influenced by QuEra's understanding of the error rate of its current atomic qubits. That's not enough to perform any algorithms that can't be simulated on today's hardware, but it would be more than sufficient to allow people to get experience with developing software using the technology. (The company will also release a logical qubit simulator to help here.) QuEra will undoubtedly use this system to develop its error correction process -- Boger indicated that the company expected it would be transparent to the user. In other words, people running operations on QuEra's hardware can submit jobs knowing that, while they're running, the system will be handling the error correction for them. Finally, the 2026 machine will enable up to 100 logical qubits, which is expected to be sufficient to perform useful calculations, such as the simulation of small molecules. More general-purpose quantum computing will need to wait for higher qubit counts still.

Supercomputing

How a Cray-1 Supercomputer Compares to a Raspberry Pi (roylongbottom.org.uk) 145

Roy Longbottom worked for the U.K. government's Central Computer Agency from 1960 to 1993, and "from 1972 to 2022 I produced and ran computer benchmarking and stress testing programs..." Known as the official design authority for the Whetstone benchmark, Longbottom writes that "In 2019 (aged 84), I was recruited as a voluntary member of Raspberry Pi pre-release Alpha testing team."

And this week — now at age 87 — Longbottom has created a web page titled "Cray 1 supercomputer performance comparisons with home computers, phones and tablets." And one statistic really captures the impact of our decades of technological progress.

"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1."


Thanks to long-time Slashdot reader bobdevine for sharing the link.
Supercomputing

Quantum Computer Sets Record For Largest Ever Number of 'Logical Quantum Bits' (newscientist.com) 16

An anonymous reader quotes a report from New Scientist: Another quantum computing record has been broken. A team has built a quantum computer with the largest ever number of so-called logical qubits (quantum bits). Unlike standard qubits, logical qubits are better able to carry out computations unmarred by errors, making the new device a potentially important step towards practical quantum computing. How complicated a calculation a quantum computer can complete depends on the number of qubits it contains. Recently, IBM and California-based Atom Computing unveiled devices with more than 1000 qubits, nearly tripling the size of the previously largest quantum computers. But the existence of these devices has not led to an immediate and dramatic increase in computing capability, because larger quantum computers often also make more errors.

To make a quantum computer that can correct its errors, researchers from the quantum computing start-up QuEra in Boston and several academics focused instead on increasing its number of logical qubits, which are groups of qubits that are connected to each other through quantum entanglement. In conventional computers, error-correction relies on keeping multiple redundant copies of information, but quantum information is fundamentally different and cannot be copied -- so researchers use entanglement to spread it across several qubits, which achieves a similar redundancy, says Dolev Bluvstein at Harvard University in Massachusetts, who was part of the team. To make their quantum computer, the researchers started with several thousand rubidium atoms in an airless container. They then used forces from lasers and magnets to cool the atoms to temperatures close to absolute zero where their quantum properties are most prominent. Under these conditions, they could control the atoms' quantum states very precisely by again hitting them with lasers. Accordingly, they first created 280 qubits from the atoms and then went a step further by using another laser pulse to entangle groups of those -- for instance, 7 qubits at a time -- to make a logical qubit. By doing this, the researchers were able to make as many as 48 logical qubits at one time. This is more than 10 times the number of logical qubits that have ever been created before.

"It's a big deal to have that many logical qubits. A very remarkable result for any quantum computing platform" says Mark Saffman at the University of Wisconsin-Madison. He says that the new quantum computer greatly benefits from being made of atoms that are controlled by light because this kind of control is very efficient. QuEra's computer makes its qubits interact and exchange information by moving them closer to each other inside the computer with optical "tweezers" made of laser beams. In contrast, chip-based quantum computers, like those made by IBM and Google, must use multiple wires to control each qubit. Bluvstein and his colleagues implemented several computer operations, codes and algorithms on the new computer to test the logical qubits' performance. He says that though these tests were more preliminary than the calculations that quantum computers will eventually perform, the team already found that using logical qubits led to fewer errors than seen in quantum computers using physical qubits.
The research has been published in the journal Nature.
IBM

IBM Claims Quantum Computing Research Milestone (ft.com) 33

Quantum computing is starting to fulfil its promise as a crucial scientific research tool, IBM researchers claim, as the US tech group attempts to quell fears that the technology will fail to match high hopes for it. From a report: The company is due to unveil 10 projects on Monday that point to the power of quantum calculation when twinned with established techniques such as conventional supercomputing, said Dario Gil, its head of research. "For the first time now we have large enough systems, capable enough systems, that you can do useful technical and scientific work with it," Gil said in an interview. The papers presented on Monday are the work of IBM and partners including the Los Alamos National Laboratory, University of California, Berkeley, and the University of Tokyo. They focus mainly on areas such as simulating quantum physics and solving problems in chemistry and materials science.

Expectations that quantum systems would by now be close to commercial uses prompted a wave of funding for the technology in recent years. But signs that business applications are further off than expected have led to warnings of a possible "quantum winter" of waning investor confidence and financial backing. IBM's announcements suggest the technology's main applications have not yet fully extended to the broad range of commercialisable computing tasks many in the field want to see. "It's going to take a while before we go from scientific value to, let's say, business value," said Jay Gambetta, IBM's vice-president of quantum. "But in my opinion the difference between research and commercialisation is getting tighter."

China

China's Secretive Sunway Pro CPU Quadruples Performance Over Its Predecessor (tomshardware.com) 73

An anonymous reader shares a report: Earlier this year, the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) launched its new supercomputer based on the enhanced China-designed Sunway SW26010 Pro processors with 384 cores. Sunway's SW26010 Pro CPU not only packs more cores than its non-Pro SW26010 predecessor, but it more than quadrupled FP64 compute throughput due to microarchitectural and system architecture improvements, according to Chips and Cheese. However, while the manycore CPU is good on paper, it has several performance bottlenecks.

The first details of the manycore Sunway SW26010 Pro CPU and supercomputers that use it emerged back in 2021. Now, the company has showcased actual processors and disclosed more details about their architecture and design, which represent a significant leap in performance, recently at SC23. The new CPU is expected to enable China to build high-performance supercomputers based entirely on domestically developed processors. Each Sunway SW26010 Pro has a maximum FP64 throughput of 13.8 TFLOPS, which is massive. For comparison, AMD's 96-core EPYC 9654 has a peak FP64 performance of around 5.4 TFLOPS.

The SW26010 Pro is an evolution of the original SW26010, so it maintains the foundational architecture of its predecessor but introduces several key enhancements. The new SW26010 Pro processor is based on an all-new proprietary 64-bit RISC architecture and packs six core groups (CG) and a protocol processing unit (PPU). Each CG integrates 64 2-wide compute processing elements (CPEs) featuring a 512-bit vector engine as well as 256 KB of fast local store (scratchpad cache) for data and 16 KB for instructions; one management processing element (MPE), which is a superscalar out-of-order core with a vector engine, 32 KB/32 KB L1 instruction/data cache, 256 KB L2 cache; and a 128-bit DDR4-3200 memory interface.
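The quoted 13.8 TFLOPS is consistent with the usual peak-throughput arithmetic: cores times FP64 lanes per vector times two FLOPs per fused multiply-add times clock speed. In the sketch below the core count and 512-bit vector width come from the article, while the clock (about 2.25 GHz) and the assumption of one FMA pipe per CPE are illustrative guesses chosen to reproduce the published figure.

```python
# Peak FP64 throughput = cores x FP64 lanes per vector x 2 (FMA) x clock.
# Core counts and 512-bit vectors are from the article; the clock and the
# single-FMA-pipe-per-CPE assumption are NOT published here and are guesses.
core_groups   = 6
cpes_per_cg   = 64
fp64_lanes    = 512 // 64        # 8 doubles per 512-bit vector
flops_per_fma = 2                # one fused multiply-add counts as 2 FLOPs
clock_hz      = 2.25e9           # assumed

cores = core_groups * cpes_per_cg                      # 384 compute cores
peak  = cores * fp64_lanes * flops_per_fma * clock_hz  # FLOP/s
print(f"{peak / 1e12:.1f} TFLOPS FP64 (vs. the ~13.8 TFLOPS quoted)")
```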

Supercomputing

Linux Foundation Announces Intent to Form 'High Performance Software Foundation' (linuxfoundation.org) 5

This week the Linux Foundation "announced the intention to form the High Performance Software Foundation."

"Through a series of technical projects, the High Performance Software Foundation aims to build, promote, and advance a portable software stack for high performance computing by increasing adoption, lowering barriers to contribution, and supporting development efforts." As use of high performance computing becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation intends to leverage investments made by the United States Department of Energy's Exascale Computing Project, the EuroHPC Joint Undertaking, and other international projects in accelerated high performance computing to exploit the performance of this diversifying set of architectures. As an umbrella project under the Linux Foundation, HPSF intends to provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate together on the scientific software stack.

The High Performance Software Foundation already benefits from strong support across the high performance computing landscape, including leading companies and organizations like Amazon Web Services, Argonne National Laboratory, CEA, CIQ, Hewlett Packard Enterprise, Intel, Kitware, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NVIDIA, Oak Ridge National Laboratory, Sandia National Laboratory, and the University of Oregon.

Its first open source technical projects include:
  • Spack: the high performance computing package manager.
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize.
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high-performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world's largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack.
  • Charliecloud: a high performance computing-tailored, lightweight, fully unprivileged container implementation.

Microsoft

Microsoft Partners With VCs To Give Startups Free AI Chip Access (techcrunch.com) 4

In the midst of an AI chip shortage, Microsoft wants to give a privileged few startups free access to "supercomputing" resources from its Azure cloud for developing AI models. From a report: Microsoft today announced it's updating its startup program, Microsoft for Startups Founders Hub, to include a no-cost Azure AI infrastructure option for "high-end," Nvidia-based GPU virtual machine clusters to train and run generative models, including large language models along the lines of ChatGPT. Y Combinator and its community of startup founders will be the first to gain access to the clusters in private preview. Why Y Combinator? Annie Pearl, Microsoft's VP of growth and ecosystems, called YC the "ideal initial partner," given its track record working with startups "at the earliest stages."

"We're working closely with Y Combinator to prioritize the asks from their current cohort, and then alumni, as part of our initial preview," Pearl said. "The focus will be on tasks like training and fine-tuning use cases that unblock innovation." It's not the first time Microsoft's attempted to curry favor with Y Combinator startups. In 2015, the company said it would give $500,000 in Azure credits to YC's Winter 2015 batch, a move that at the time was perceived as an effort to draw these startups away from rival clouds. One might argue the GPU clusters for AI training and inferencing are along the same self-serving vein.

Encryption

Scientist Claims Quantum RSA-2048 Encryption Cracking Breakthrough (tomshardware.com) 129

Mark Tyson reports via Tom's Hardware: A commercial smartphone or Linux computer can be used to crack RSA-2048 encryption, according to a prominent research scientist. Dr Ed Gerck is preparing a research paper with the details but couldn't hold off from bragging about his incredible quantum computing achievement (if true) on his LinkedIn profile. Let us be clear: the claims seem spurious, but it should be recognized that the world isn't ready for an off-the-shelf system that can crack RSA-2048, as major firms, organizations, and governments haven't yet transitioned to encryption tech that is secured for the post-quantum era.

In his social media post, Gerck states that a humble device like a smartphone can crack the strongest RSA encryption keys in use today due to a mathematical technique that "has been hidden for about 2,500 years -- since Pythagoras." He went on to make clear that no cryogenics or special materials were used in the RSA-2048 key-cracking feat. BankInfoSecurity reached out to Gerck in search of some more detailed information about his claimed RSA-2048 breakthrough and in the hope of some evidence that what is claimed is possible and practical. Gerck shared an abstract of his upcoming paper. This appears to show that instead of using Shor's algorithm to crack the keys, a system based on quantum mechanics was used, and it can run on a smartphone or PC.

In some ways, it is good that the claimed breakthrough doesn't claim to use Shor's algorithm. Alan Woodward, a professor of computer science at the University of Surrey, told BankInfoSecurity that no quantum computer in existence has enough gates to implement Shor's algorithm and break RSA-2048. So at least this part of Gerck's explanation checks out. However, the abstract of Gerck's paper looks like it is "all theory proving various conjectures - and those proofs are definitely in question," according to Woodward. The BankInfoSecurity report on Gerck's "QC Algorithms: Faster Calculation of Prime Numbers" paper quotes other skeptics, most of whom are waiting for more information and proofs before they organize a standing ovation for Gerck.

Supercomputing

Atom Computing Is the First To Announce a 1,000+ Qubit Quantum Computer (arstechnica.com) 20

John Timmer reports via Ars Technica: Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180 qubit quantum computer and will be making it available to customers next year. The system represents a major step forward for the company, which had only built one prior system based on neutral atom qubits -- a system that operated using only 100 qubits. The error rate for individual qubit operations is high enough that it won't be possible to run an algorithm that relies on the full qubit count without it failing due to an error. But it does back up the company's claims that its technology can scale rapidly and provides a testbed for work on quantum error correction. And, for smaller algorithms, the company says it'll simply run multiple instances in parallel to boost the chance of returning the right answer. [...]

Atom Computing is now using the system internally and plans to open it up for public use next year. The system has moved from a 10x10 grid to a 35x35 grid, bringing the potential sites for atoms up to 1,225. So far, testing has taken place with up to 1,180 atoms present, making it the largest machine that anyone has publicly acknowledged (at least in terms of qubit count). The qubits are housed in a 12x5 foot box that contains the lasers and optics, along with the vacuum system and a bit of unused space -- Atom CEO Rob Hays quipped that "there's a lot of air inside that box." It does not, however, contain the computer hardware that controls the system and its operations. The grid of atoms it's used to create, by contrast, is only about 100 microns per side, so it won't strain the hardware to keep increasing the qubit count.

China

Chinese Scientists Claim Record-Smashing Quantum Computing Breakthrough (scmp.com) 44

From the South China Morning Post: Scientists in China say their latest quantum computer has solved an ultra-complicated mathematical problem within a millionth of a second — more than 20 billion years quicker than the world's fastest supercomputer could achieve the same task. The JiuZhang 3 prototype also smashed the record set by its predecessor in the series, with a one million-fold increase in calculation speed, according to a paper published on Tuesday by the peer-reviewed journal Physical Review Letters...

The series uses photons — tiny particles that travel at the speed of light — as the physical medium for calculations, with each one carrying a qubit, the basic unit of quantum information... The fastest classical supercomputer Frontier — developed in the US and named the world's most powerful in mid-2022 — would take over 20 billion years to complete the same task, the researchers said.

The article claims the number of photons grew from 76 to 113 across the first two versions, rising to 255 in the latest iteration.

Thanks to long-time Slashdot reader hackingbear for sharing the news.
Supercomputing

Europe's First Exascale Supercomputer Will Run On ARM Instead of X86 (extremetech.com) 40

An anonymous reader quotes a report from ExtremeTech: One of the world's most powerful supercomputers will soon be online in Europe, but it's not just the raw speed that will make the Jupiter supercomputer special. Unlike most of the Top 500 list, the exascale Jupiter system will rely on ARM cores instead of x86 parts. Intel and AMD might be disappointed, but Nvidia will get a piece of the Jupiter action. [...] Jupiter is a project of the European High-Performance Computing Joint Undertaking (EuroHPC JU), which is working with computing firms Eviden and ParTec to assemble the machine. Europe's first exascale computer will be installed at the Jülich Supercomputing Centre in Germany, and assembly could start as soon as early 2024.

EuroHPC has opted to go with SiPearl's Rhea processor, which is based on ARM architecture. Most of the top 10 supercomputers in the world are running x86 chips, and only one is running on ARM. While ARM designs were initially popular in mobile devices, the compact, efficient cores have found use in more powerful systems. Apple has recently finished moving all its desktop and laptop computers to the ARM platform, and Qualcomm has new desktop-class chips on its roadmap. Rhea is based on ARM's Neoverse V1 CPU design, which was developed specifically for high-performance computing (HPC) applications, and packs 72 cores. It supports HBM2e high-bandwidth memory, as well as DDR5, and the cache tops out at an impressive 160MB.
The report says the Jupiter system "will have Nvidia's Booster Module, which includes GPUs and Mellanox ultra-high bandwidth interconnects," and will likely include the current-gen H100 chips. "When complete, Jupiter will be near the very top of the supercomputer list."
United States

Los Alamos's New Project: Updating America's Aging Nuclear Weapons (apnews.com) 192

During World War II, "Los Alamos was the perfect spot for the U.S. government's top-secret Manhattan Project," remembers the Associated Press.

"The community is facing growing pains again, 80 years later, as Los Alamos National Laboratory takes part in the nation's most ambitious nuclear weapons effort since World War II." The mission calls for modernizing the arsenal with droves of new workers producing plutonium cores — key components for nuclear weapons. Some 3,300 workers have been hired in the last two years, with the workforce now topping more than 17,270. Close to half of them commute to work from elsewhere in northern New Mexico and from as far away as Albuquerque, helping to nearly double Los Alamos' population during the work week... While the priority at Los Alamos is maintaining the nuclear stockpile, the lab also conducts a range of national security work and research in diverse fields of space exploration, supercomputing, renewable energy and efforts to limit global threats from disease and cyberattacks...

The headline grabber, though, is the production of plutonium cores. Lab managers and employees defend the massive undertaking as necessary in the face of global political instability. With most people in Los Alamos connected to the lab, opposition is rare. But watchdog groups and non-proliferation advocates question the need for new weapons and the growing price tag... Aside from pressing questions about the morality of nuclear weapons, watchdogs argue the federal government's modernization effort already has outpaced spending predictions and is years behind schedule. Independent government analysts issued a report earlier this month that outlined the growing budget and schedule delays.

"A hairline scratch on a warhead's polished black cone could send the bomb off course..." notes an earlier article.

"The U.S. will spend more than $750 billion over the next 10 years replacing almost every component of its nuclear defenses, including new stealth bombers, submarines and ground-based intercontinental ballistic missiles in the country's most ambitious nuclear weapons effort since the Manhattan Project."
Encryption

Google Releases First Quantum-Resilient FIDO2 Key Implementation (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: Google has announced the first open-source quantum resilient FIDO2 security key implementation, which uses a unique ECC/Dilithium hybrid signature schema co-created with ETH Zurich. FIDO2 is the second major version of the Fast IDentity Online authentication standard, and FIDO2 keys are used for passwordless authentication and as a multi-factor authentication (MFA) element. Google explains that a quantum-resistant FIDO2 security key implementation is a crucial step towards ensuring safety and security as the advent of quantum computing approaches and developments in the field follow an accelerating trajectory.

To protect against quantum computers, a new hybrid algorithm was created by combining the established ECDSA algorithm with the Dilithium algorithm. Dilithium is a quantum-resistant cryptographic signature scheme that NIST included in its post-quantum cryptography standardization proposals, praising its strong security and excellent performance, making it suitable for use in a wide array of applications. This hybrid signature approach that blends classic and quantum-resistant features wasn't simple to implement, Google says. Designing a Dilithium implementation that's compact enough for security keys was incredibly challenging. Its engineers, however, managed to develop a Rust-based implementation that needs only 20KB of memory, making the endeavor practical; they also noted its high-performance potential.
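Google's actual implementation is written in Rust inside OpenSK, but the hybrid idea itself is simple: sign with both schemes and accept a signature only if both halves verify, so an attacker must break ECDSA and Dilithium. The sketch below illustrates that structure; the ECDSA half uses the real `cryptography` package, while `dilithium_sign` and `dilithium_verify` are hypothetical placeholders standing in for any Dilithium implementation.

```python
from dataclasses import dataclass
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Conceptual sketch of a hybrid (classical + post-quantum) signature wrapper.
# The Dilithium callables are HYPOTHETICAL stand-ins; OpenSK's real API is Rust
# and is not reproduced here.

@dataclass
class HybridSignature:
    ecdsa_sig: bytes
    dilithium_sig: bytes

def hybrid_sign(message: bytes, ecdsa_key, dilithium_sign) -> HybridSignature:
    """Sign with both schemes; forging requires breaking ECDSA AND Dilithium."""
    return HybridSignature(
        ecdsa_sig=ecdsa_key.sign(message, ec.ECDSA(hashes.SHA256())),
        dilithium_sig=dilithium_sign(message),  # hypothetical PQ signer
    )

def hybrid_verify(message: bytes, sig: HybridSignature,
                  ecdsa_pub, dilithium_verify) -> bool:
    """Accept only if BOTH component signatures verify."""
    try:
        ecdsa_pub.verify(sig.ecdsa_sig, message, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return dilithium_verify(message, sig.dilithium_sig)  # hypothetical verifier

# Example wiring with a fresh ECDSA key and dummy Dilithium callables:
# key = ec.generate_private_key(ec.SECP256R1())
# sig = hybrid_sign(b"hello", key, dilithium_sign=lambda m: b"placeholder")
# ok  = hybrid_verify(b"hello", sig, key.public_key(),
#                     dilithium_verify=lambda m, s: True)
```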

The hybrid signature schema was first presented in a 2022 paper (PDF) and recently gained recognition at the ACNS (Applied Cryptography and Network Security) 2023, where it won the "best workshop paper" award. This new hybrid implementation is now part of the OpenSK, Google's open-source security keys implementation that supports the FIDO U2F and FIDO2 standards. The tech giant hopes that its proposal will be adopted by FIDO2 as a new standard and supported by major web browsers with large user bases. The firm calls the application of next-gen cryptography at the internet scale "a massive undertaking" and urges all stakeholders to move quickly to maintain good progress on that front.

Supercomputing

Can Computing Clean Up Its Act? (economist.com) 107

Long-time Slashdot reader SpzToid shares a report from The Economist: "What you notice first is how silent it is," says Kimmo Koski, the boss of the Finnish IT Centre for Science. Dr Koski is describing LUMI -- Finnish for "snow" -- the most powerful supercomputer in Europe, which sits 250km south of the Arctic Circle in the town of Kajaani in Finland. LUMI, which was inaugurated last year, is used for everything from climate modeling to searching for new drugs. It has tens of thousands of individual processors and is capable of performing up to 429 quadrillion calculations every second. That makes it the third-most-powerful supercomputer in the world. Powered by hydroelectricity, and with its waste heat used to help warm homes in Kajaani, it even boasts negative emissions of carbon dioxide. LUMI offers a glimpse of the future of high-performance computing (HPC), both on dedicated supercomputers and in the cloud infrastructure that runs much of the internet. Over the past decade the demand for HPC has boomed, driven by technologies like machine learning, genome sequencing and simulations of everything from stockmarkets and nuclear weapons to the weather. It is likely to carry on rising, for such applications will happily consume as much computing power as you can throw at them. Over the same period the amount of computing power required to train a cutting-edge AI model has been doubling every five months. All this has implications for the environment.

HPC -- and computing more generally -- is becoming a big user of energy. The International Energy Agency reckons data centers account for between 1.5% and 2% of global electricity consumption, roughly the same as the entire British economy. That is expected to rise to 4% by 2030. With its eye on government pledges to reduce greenhouse-gas emissions, the computing industry is trying to find ways to do more with less and boost the efficiency of its products. The work is happening at three levels: that of individual microchips; of the computers that are built from those chips; and the data centers that, in turn, house the computers. [...] The standard measure of a data centre's efficiency is the power usage effectiveness (PUE), the ratio between the data centre's overall power consumption and how much of that is used to do useful work. According to the Uptime Institute, a firm of IT advisers, a typical data centre has a PUE of 1.58. That means that about two-thirds of its electricity goes to running its computers while a third goes to running the data centre itself, most of which will be consumed by its cooling systems. Clever design can push that number much lower.

Most existing data centers rely on air cooling. Liquid cooling offers better heat transfer, at the cost of extra engineering effort. Several startups even offer to submerge circuit boards entirely in specially designed liquid baths. Thanks in part to its use of liquid cooling, Frontier boasts a PUE of 1.03. One reason LUMI was built near the Arctic Circle was to take advantage of the cool sub-Arctic air. A neighboring computer, built in the same facility, makes use of that free cooling to reach a PUE rating of just 1.02. That means 98% of the electricity that comes in gets turned into useful mathematics. Even the best commercial data centers fall short of such numbers. Google's, for instance, have an average PUE value of 1.1. The latest numbers from the Uptime Institute, published in June, show that, after several years of steady improvement, global data-centre efficiency has been stagnant since 2018.
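The PUE figures above translate directly into the share of electricity that reaches the computers, since that share is simply 1/PUE. A quick check of the numbers quoted in the article:

```python
# Back-of-the-envelope PUE arithmetic using the figures quoted above.
# PUE = total facility power / power delivered to IT equipment, so the
# fraction of electricity doing "useful mathematics" is 1 / PUE.
def it_fraction(pue: float) -> float:
    return 1.0 / pue

for label, pue in [("typical data centre (Uptime Institute)", 1.58),
                   ("Google average", 1.10),
                   ("Frontier", 1.03),
                   ("LUMI's neighbour in Kajaani", 1.02)]:
    print(f"{label}: PUE {pue:.2f} -> "
          f"{it_fraction(pue):.1%} of power reaches the computers")
# 1/1.58 is about 63% (roughly two-thirds), and 1/1.02 is about 98%,
# matching the article's figures.
```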
The report notes that the U.S., Britain and the European Union, among others, are considering new rules that "could force data centers to become more efficient." Germany has proposed the Energy Efficiency Act, which would cap data-centre PUE at 1.5 by 2027 and 1.3 by 2030.
Supercomputing

Intel To Start Shipping a Quantum Processor (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: Intel does a lot of things, but it's mostly noted for making and shipping a lot of processors, many of which have been named after bodies of water. So, saying that the company is set to start sending out a processor called Tunnel Falls would seem unsurprising if it weren't for some key details. Among them: The processor's functional units are qubits, and you shouldn't expect to be able to pick one up on Newegg. Ever. Tunnel Falls appears to be named after a waterfall near Intel's Oregon facility, where the company's quantum research team does much of its work. It's a 12-qubit chip, which places it well behind the qubit count of many of Intel's competitors -- all of which are making processors available via cloud services. But Jim Clarke, who heads Intel's quantum efforts, said these differences were due to the company's distinct approach to developing quantum computers.

Intel, in contrast, is attempting to build silicon-based qubits that can benefit from the developments that most of the rest of the company is working on. The company hopes to "ride the coattails of what the CMOS industry has been doing for years," Clarke said in a call with the press and analysts. The goal, according to Clarke, is to make sure the answer to "what do we have to change from our silicon chip in order to make it?" is "as little as possible." The qubits are based on quantum dots, structures that are smaller than the wavelength of an electron in the material. Quantum dots can be used to trap individual electrons, and the properties of the electron can then be addressed to store quantum information. Intel uses its fabrication expertise to craft the quantum dot and create all the neighboring features needed to set and read its state and perform manipulations.

However, Clarke said there are different ways of encoding a qubit in a quantum dot (Loss-DiVincenzo, singlet-triplet, and exchange-only, for those curious). This gets at another key difference with Intel's efforts: While most of its competitors are focused solely on fostering a software developer community, Intel is simultaneously trying to develop a community that will help it improve its hardware. (For software developers, the company also released a software developer kit.) To help get this community going, Intel will send Tunnel Falls processors out to a few research sites: the University of Maryland, the University of Rochester, the University of Wisconsin, and Sandia National Laboratories will be the first to receive the new chip, and the company is interested in signing up others. The hope is that researchers at these sites will help Intel characterize sources of error and which forms of qubits provide the best performance.
"Overall, Intel has made a daring choice for its quantum strategy," concludes Ars' John Timmer. "Electron-based qubits have been more difficult to work with than many other technologies because they tend to have shorter life spans before they decohere and lose the information they should be holding. Intel is counting on rapid iteration, a large manufacturing capacity, and a large community to help it figure out how to overcome this. But testing quantum computing chips and understanding why their qubits sometimes go wrong is not an easy process; it requires highly specialized refrigeration hardware that takes roughly a day to get the chips down to a temperature where they can be used."

"The company seems to be doing what it needs to overcome that bottleneck, but it's likely to need more than three universities to sign up if the strategy is going to work."
Supercomputing

Iran Unveils 'Quantum' Device That Anyone Can Buy for $589 on Amazon (vice.com) 67

What Iran's military called "the first product of the quantum processing algorithm" of the Naval university appears to be a stock development board, available widely online for around $600. Motherboard reports: According to multiple state-linked news agencies in Iran, the computer will help Iran detect disturbances on the surface of water using algorithms. Iranian Rear Admiral Habibollah Sayyari showed off the board during the ceremony and spoke of Iran's recent breakthroughs in the world of quantum technology. The touted quantum device appears to be a development board manufactured by a company called Digilent. The brand "ZedBoard" appears clearly in pictures. According to the company's website, the ZedBoard has everything the beginning developer needs to get started working in Android, Linux, and Windows. It does not appear to come with any of the advanced qubits that make up a quantum computer, and suggested uses include "video processing, reconfigurable computing, motor control, software acceleration," among others.

"I'm sure this board can work perfectly for people with more advanced [Field Programmable Gate Arrays] experience, however, I am a beginner and I can say that this is also a good beginner-friendly board," said one review on Diligent's website. Those interested in the board can buy one on Amazon for $589. It's impossible to know if Iran has figured out how to use off-the-shelf dev boards to make quantum algorithms, but it's not likely.
