Learning the Hamiltonian governing a quantum system is a central task in quantum metrology, sensing, and device characterization. Existing Heisenberg-limited Hamiltonian learning protocols either require multi-qubit operations that are prone to noise, or single-qubit operations whose frequency or strength increases with the desired precision. These two requirements limit the applicability of Hamiltonian learning on near-term quantum platforms. We present a protocol that learns a quantum Hamiltonian with the optimal Heisenberg-limited scaling using only single-qubit control in the form of static fields with strengths that are independent of the target precision. Our protocol is robust against state preparation and measurement (SPAM) errors. By overcoming these limitations, our protocol provides new tools for device characterization and quantum sensing. We demonstrate that our method achieves the Heisenberg-limited scaling through rigorous mathematical proof and numerical experiments. We also prove an information-theoretic lower bound showing that a non-vanishing static field strength is necessary for achieving the Heisenberg limit unless one employs an extensive number of discrete control operations.
Quantum low-density parity-check (qLDPC) codes have emerged as a promising approach for realizing low-overhead logical quantum memories. Recent theoretical developments have established shift automorphisms as a fundamental building block for completing the universal set of logical gates for qLDPC codes. However, practical challenges remain because the existing SWAP-based shift automorphism yields logical error rates that are orders of magnitude higher than those for fault-tolerant idle operations. In this work, we address this issue by dynamically varying the syndrome measurement circuits to implement the shift automorphisms without reducing the circuit distance. We benchmark our approach on both twisted and untwisted weight-6 generalized toric codes, including the gross code family. Our time-dynamic circuits for shift automorphisms achieve performance comparable to the idle operations under the circuit-level noise model (SI1000). Specifically, the dynamic circuits achieve more than an order of magnitude reduction in logical error rates relative to the SWAP-based scheme for the gross code at a physical error rate of $10^{-3}$, employing the BP-OSD decoder. Our findings improve both the error resilience and the time overhead of the shift automorphisms in qLDPC codes. Furthermore, our work can lead to alternative syndrome extraction circuit designs, such as leakage removal protocols, providing a practical pathway to utilizing dynamic circuits that extend beyond surface codes towards qLDPC codes.
Dicke states serve as a critical resource in quantum metrology, communication, and computation. However, unitary preparation of Dicke states is limited to logarithmic depth in standard circuit models and existing constant-depth protocols require measurement and feed-forward. In this work, we present the first unitary, constant-depth protocols for exact Dicke state preparation. We overcome the logarithmic-depth barrier by moving beyond the standard circuit model and leveraging global interactions (native to architectures such as neutral atoms and trapped ions). Specifically, utilizing unbounded CZ gates (i.e. within the QAC$^0$ circuit class), we offer circuits for exact computation of constant-weight Dicke states, using polynomial ancillae, and approximation of weight-1 Dicke states (i.e. $W$ states), using only constant ancillae. Granted additional access to the quantum FAN-OUT operation (i.e. upgrading to the QAC$_f^0$ circuit class), we also achieve exact preparation of arbitrary-weight Dicke states, with polynomial ancillae. These protocols distinguish the constant-depth capabilities of quantum architectures based on connectivity and offer a novel path toward resolving a long-standing quantum complexity conjecture.
Magic states are essential for universal quantum computation and are widely viewed as a key source of quantum advantage, yet in realistic devices they are inevitably noisy. In this work, we characterize how noise on injected magic resources changes the classical simulability of quantum circuits and when it induces a transition from classically intractable behavior to efficient classical simulation. We adopt a resource-centric noise model in which only the injected magic components are noisy, while the baseline states, operations, and measurements belong to an efficiently simulable family. Within this setting, we develop an approximate classical sampling algorithm with controlled error and prove explicit noise-dependent conditions under which the algorithm runs in polynomial time. Our framework applies to both qubit circuits with Clifford baselines and fermionic circuits with matchgate baselines, covering representative noise channels such as dephasing and particle loss. We complement the analysis with numerical estimates of the simulation cost, providing concrete thresholds and runtime scaling across practically relevant parameter regimes.
Consider quantum channels with input dimension $d_1$, output dimension $d_2$ and Kraus rank at most $r$. Any such channel must satisfy the constraint $rd_2\geq d_1$, and the parameter regime $rd_2=d_1$ is called the boundary regime. In this paper, we show an optimal query lower bound $\Omega(rd_1d_2/\varepsilon^2)$ for quantum channel tomography to within diamond norm error $\varepsilon$ in the away-from-boundary regime $rd_2\geq 2d_1$, matching the existing upper bound $O(rd_1d_2/\varepsilon^2)$. In particular, this lower bound fully settles the query complexity for the commonly studied case of equal input and output dimensions $d_1=d_2=d$ with $r\geq 2$, in sharp contrast to the unitary case $r=1$ where Heisenberg scaling $\Theta(d^2/\varepsilon)$ is achievable.
The evolution of an isolated quantum system inevitably exhibits recurrence: the state returns to the vicinity of its initial condition after finite time. Despite its fundamental nature, a rigorous quantitative understanding of recurrence has been lacking. We establish upper bounds on the recurrence time, $t_{\mathrm{rec}} \lesssim t_{\mathrm{exit}}(\epsilon)(1/\epsilon)^d$, where $d$ is the Hilbert-space dimension, $\epsilon$ the neighborhood size, and $t_{\mathrm{exit}}(\epsilon)$ the escape time from this neighborhood. For pure states evolving under a Hamiltonian $H$, estimating $t_{\mathrm{exit}}$ is equivalent to an inverse quantum speed limit problem: finding upper bounds on the time a time-evolved state $\psi_t$ needs to depart from the $\epsilon$-vicinity of the initial state $\psi_0$. We provide a partial solution, showing that under mild assumptions $t_{\mathrm{exit}}(\epsilon) \approx \epsilon /\sqrt{ \Delta(H^2)}$, with $\Delta(H^2)$ the Hamiltonian variance in $\psi_0$. We show that our upper bound on $t_{\mathrm{rec}}$ is generically saturated for random Hamiltonians. Finally, we analyze the impact of coherence of the initial state in the eigenbasis of $H$ on recurrence behavior.
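As a sanity check on the exit-time estimate above, it is consistent with the standard short-time expansion of the survival amplitude (a heuristic sketch in units with $\hbar=1$, assuming the neighborhood is measured by the Fubini-Study-type distance $\sqrt{1-|\langle\psi_0|\psi_t\rangle|^2}$; this is not the paper's proof):
\[
|\langle\psi_0|\psi_t\rangle|^2 = 1 - t^2\,\Delta(H^2) + O(t^4)
\quad\Longrightarrow\quad
\sqrt{1-|\langle\psi_0|\psi_t\rangle|^2} \approx t\sqrt{\Delta(H^2)},
\]
so the state leaves the $\epsilon$-vicinity of $\psi_0$ after a time $t_{\mathrm{exit}}(\epsilon) \approx \epsilon/\sqrt{\Delta(H^2)}$, in agreement with the scaling quoted above.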
Studying the complexity of states sampled from various ensembles is a central component of quantum information theory. In this work we establish the average-case hardness of learning, in the statistical query model, the Born distributions of states sampled uniformly from the circular and (fermionic) Gaussian ensembles. These ensembles of states are induced variously by the uniform measures on the compact symmetric spaces of type AI, AII, and DIII. This finding complements analogous recent results for states sampled from the classical compact groups. On the technical side, we employ a somewhat unconventional approach to integrating over the compact groups which may be of some independent interest. For example, our approach allows us to exactly evaluate the total variation distances between the output distributions of Haar random unitary and orthogonal circuits and the constant distribution, which were previously known only approximately.
Kai Zhang, Zhengzhong Yi, Shaojun Guo, Linghang Kong, Situ Wang, Xiaoyu Zhan, Tan He, Weiping Lin, Tao Jiang, Dongxin Gao, Yiming Zhang, Fangming Liu, Fang Zhang, Zhengfeng Ji, Fusheng Chen, Jianxin Chen
Fast, reliable decoders are pivotal components for enabling fault-tolerant quantum computation (FTQC). Neural network decoders like AlphaQubit have demonstrated potential, achieving higher accuracy than traditional human-designed decoding algorithms. However, existing implementations of neural network decoders lack the parallelism required to decode the syndrome stream generated by a superconducting logical qubit in real time. Moreover, integrating AlphaQubit with sliding-window-based parallel decoding schemes presents non-trivial challenges: AlphaQubit is trained solely to output a single bit corresponding to the global logical correction for an entire memory experiment, rather than local physical corrections that can be easily integrated. We address this issue by training a recurrent, transformer-based neural network specifically tailored for parallel window decoding. While it still outputs a single bit, we derive training labels from a consistent set of local corrections and train on various types of decoding windows simultaneously. This approach enables the network to self-coordinate across neighboring windows, facilitating high-accuracy parallel decoding of arbitrarily long memory experiments. As a result, we overcome the throughput bottleneck that previously precluded the use of AlphaQubit-type decoders in FTQC. Our work presents the first scalable, neural-network-based parallel decoding framework that simultaneously achieves state-of-the-art accuracy and the stringent throughput required for real-time quantum error correction. Using an end-to-end experimental workflow, we benchmark our decoder on the Zuchongzhi 3.2 superconducting quantum processor on surface codes with distances up to 7, demonstrating its superior accuracy. Moreover, we demonstrate that, using our approach, a single TPU v6e is capable of decoding surface codes with distances up to 25 within 1 $\mu$s per decoding round.
A key challenge in quantum sensing is the detection of weak time-dependent signals, particularly those that arise as specific frequency perturbations over a background field. Conventional methods usually demand complex dynamical control of the quantum sensor and heavy classical post-processing. We propose a quantum sensor that leverages time-independent interactions and entanglement to function as a passive, tunable, thresholded frequency filter. By encoding the frequency selectivity and thresholding behavior directly into the dynamics, the sensor is responsive only to a target frequency of choice whose amplitude is above a threshold. This approach circumvents the need for complex control schemes and reduces the post-processing overhead.
We introduce a quantum Maxwell erasure decoder for CSS quantum low-density parity-check (qLDPC) codes that extends peeling with bounded guessing. Guesses are tracked symbolically and can be eliminated by restrictive checks, giving a tunable tradeoff between complexity and performance via a guessing budget: an unconstrained budget recovers Maximum-Likelihood (ML) performance, while a constant budget yields linear-time decoding and approximates ML. We provide theoretical guarantees on asymptotic performance and demonstrate strong performance on bivariate bicycle and quantum Tanner codes.
Several photonic quantum technologies rely on the ability to generate multiple indistinguishable photons. Benchmarking the level of indistinguishability of these photons is essential for scalability. The Hong-Ou-Mandel dip provides a benchmark for the indistinguishability between two photons, and extending this test to the multi-photon setting has so far resulted in a protocol that computes the genuine n-photon indistinguishability (GI). However, this protocol has a sample complexity that increases exponentially with the number of input photons for an estimation of GI up to a given additive error. To address this problem, we introduce new theorems that strengthen our understanding of the relationship between distinguishability and the suppression laws of the quantum Fourier transform interferometer (QFT). Building on this, we propose a protocol using the QFT for benchmarking GI that achieves constant sample complexity for the estimation of GI up to a given additive error for prime photon numbers, and sub-polynomial scaling otherwise, representing an exponential improvement over the state of the art. We prove the optimality of our protocol in many relevant scenarios and validate our approach experimentally on Quandela's reconfigurable photonic quantum processor, where we observe a clear advantage in runtime and precision over the state of the art. We therefore establish the first scalable method for computing multi-photon indistinguishability, which applies naturally to current and near-term photonic quantum hardware.
Pinsker's inequality sets a lower bound on the Umegaki divergence of two quantum states in terms of their trace distance. In this work, we formulate corresponding estimates for a variety of quantum and classical divergences, including $f$-divergences such as the Hellinger and $\chi^2$-divergences, as well as Rényi divergences and special cases thereof such as the Umegaki divergence, collision divergence, and max divergence. We further provide a strategy for adapting these bounds to smoothed divergences.
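For orientation, the classical prototype referred to in the first sentence is the quantum Pinsker inequality, which in natural-logarithm units reads (a textbook statement, not a new result of this work):
\[
D(\rho\,\|\,\sigma) \;\ge\; \tfrac{1}{2}\,\|\rho-\sigma\|_1^2 \;=\; 2\,T(\rho,\sigma)^2,
\qquad T(\rho,\sigma) = \tfrac{1}{2}\,\|\rho-\sigma\|_1,
\]
where $D$ is the Umegaki divergence and $T$ is the trace distance; the bounds developed in the paper extend this pattern to the other divergences listed above.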
Historically, the focus in entanglement distillation has predominantly been on the distillable entanglement, and the framework assumes complete knowledge of the initial state. In this paper, we study the reliability function of entanglement distillation, which specifies the optimal exponent of the decay of the distillation error when the distillation rate is below the distillable entanglement. Furthermore, to capture greater operational significance, we extend the framework from the standard setting of known states to a black-box setting, where distillation is performed from a set of possible states. We establish an exact finite blocklength result connecting to composite correlated hypothesis testing without any redundant correction terms. Based on this, the reliability function of entanglement distillation is characterized by the regularized quantum Hoeffding divergence. In the special case of a pure initial state, our result reduces to the error exponent for entanglement concentration derived by Hayashi et al. in 2003. Given full prior knowledge of the state, we construct a concrete optimal distillation protocol. Additionally, we analyze the strong converse exponent of entanglement distillation. While all the above results assume the free operations to be non-entangling, we also investigate other free operation classes, including PPT-preserving, dually non-entangling, and dually PPT-preserving operations.
The aim of this work is to study the zero-error capacity of pure-state classical-quantum channels in the setting of list decoding. We provide an achievability bound for list-size two and a converse bound holding for every fixed list size. The two bounds coincide for channels whose pairwise absolute state overlaps form a positive semi-definite matrix. Finally, we discuss a remarkable peculiarity of the classical-quantum case: differently from the fully classical setting, the rate at which the sphere-packing bound diverges might not be achievable by zero-error list codes, even when we take the limit of fixed but arbitrarily large list size.
This paper presents a systematic study of adversarial hypothesis testing for both quantum-quantum (QQ) and classical-quantum (CQ) channels. Unlike conventional channel discrimination, we consider a framework where the sender, Alice, selects the channel input adversarially to minimize Bob's distinguishability. We analyze this problem across four settings based on whether Alice employs i.i.d. or general inputs and whether the receiver, Bob, is informed of the specific input choice (allowing his measurement to depend on the input). We characterize the Stein exponents for each setting and reveal a striking distinction in behavior: for QQ channels with i.i.d. inputs, Bob's knowledge of the input significantly enhances distinguishability, yet this advantage vanishes when general inputs are permitted. In contrast, for CQ channels, Bob being informed provides a consistent advantage over the corresponding entanglement-breaking channels for both i.i.d. and general inputs. These results demonstrate a unique phenomenon in adversarial hypothesis testing where the CQ channel does not merely behave as a special case of the QQ channel.
The transition to the fault-tolerant era exposes the limitations of homogeneous quantum systems, where no single qubit modality simultaneously offers optimal operation speed, connectivity, and scalability. In this work, we propose a strategic approach to Heterogeneous Quantum Architectures (HQA) that synthesizes the distinct advantages of the superconducting (SC) and neutral atom (NA) platforms. We explore two architectural role assignment strategies based on hardware characteristics: (1) We offload the latency-critical Magic State Factory (MSF) to fast SC devices while performing computation on scalable NA arrays, a design we term MagicAcc, which effectively mitigates the resource-preparation bottleneck. (2) We explore a Memory-Compute Separation (MCSep) paradigm that utilizes NA arrays for high-density qLDPC memory storage and SC devices for fast surface-code processing. Our evaluation, based on a comprehensive end-to-end cost model, demonstrates that principled heterogeneity yields significant performance gains. Specifically, our designs achieve $752\times$ speedup over NA-only baselines on average and reduce the physical qubit footprint by over $10\times$ compared to SC-only systems. These results chart a clear pathway for leveraging cross-modality interconnects to optimize the space-time efficiency of future fault-tolerant quantum computers.
Calculating bounds of properties of many-body quantum systems is of paramount importance, since they guide our understanding of emergent quantum phenomena and complement the insights obtained from estimation methods. Recent semidefinite programming approaches enable probabilistic bounds from finite-shot measurements of easily accessible, yet informationally incomplete, observables. Here we render these methods scalable in the number of qubits by instead utilizing moment-matrix relaxations. After introducing the general formalism, we show how the approach can be adapted with specific knowledge of the system, such as it being the ground state of a given Hamiltonian, possessing specific symmetries or being the steady state of a given Lindbladian. Our approach defines a scalable real-world certification scheme leveraging semidefinite programming relaxations and experimental estimations which, unavoidably, contain shot noise.
Scaling laws have played a major role in the modern AI revolution, providing practitioners with predictive power over how model performance will improve with increasing data, compute, and number of model parameters. This has spurred an intense interest in the origin of neural scaling laws, with a common suggestion being that they arise from power-law structure already present in the data. In this paper we study scaling laws for transformers trained to predict random walks (bigrams) on graphs with tunable complexity. We demonstrate that this simplified setting already gives rise to neural scaling laws even in the absence of power-law structure in the data correlations. We further consider dialing down the complexity of natural language systematically, by training on sequences sampled from increasingly simplified generative language models, from 4-, 2-, and 1-layer transformer language models down to language bigrams, revealing a monotonic evolution of the scaling exponents. Our results also include scaling laws obtained from training on random walks on random graphs drawn from Erdős-Rényi and scale-free Barabási-Albert ensembles. Finally, we revisit conventional scaling laws for language modeling, demonstrating that several essential results can be reproduced using 2-layer transformers with a context length of 50, provide a critical analysis of various fits used in prior literature, demonstrate an alternative method for obtaining compute-optimal curves as compared with current practice in the published literature, and provide preliminary evidence that maximal update parameterization may be more parameter-efficient than standard parameterization.
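As a concrete picture of the synthetic data described above, the sketch below samples random-walk (bigram) sequences on an Erdős-Rényi graph. The graph size, walk length, and self-loop fallback for isolated nodes are illustrative assumptions, not the authors' exact data pipeline.

# Minimal sketch: random-walk (bigram) token sequences on a G(n, p) graph.
import numpy as np

def erdos_renyi_adjacency(n, p, rng):
    """Symmetric 0/1 adjacency matrix of G(n, p) with no self-loops."""
    upper = rng.random((n, n)) < p
    adj = np.triu(upper, k=1)
    return adj | adj.T

def sample_walks(adj, num_walks, length, rng):
    """Uniform random walks on the graph; each walk is one training sequence
    for a next-token (bigram) prediction task."""
    n = adj.shape[0]
    walks = np.empty((num_walks, length), dtype=np.int64)
    for w in range(num_walks):
        node = rng.integers(n)
        for t in range(length):
            walks[w, t] = node
            neighbors = np.flatnonzero(adj[node])
            node = rng.choice(neighbors) if neighbors.size else node  # stay put if isolated
    return walks

rng = np.random.default_rng(0)
adj = erdos_renyi_adjacency(n=64, p=0.1, rng=rng)
data = sample_walks(adj, num_walks=1000, length=50, rng=rng)  # illustrative sizes only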
Fast and high fidelity shuttling of spin qubits has been demonstrated in semiconductor quantum dot devices. Several architectures based on shuttling have been proposed; it has been suggested that singlet-triplet (dual-spin) qubits could be optimal for the highest shuttling fidelities. Here we present a fault-tolerant framework for quantum error correction based on such dual-spin qubits, establishing them as a natural realisation of erasure qubits within semiconductor architectures. We introduce a hardware-efficient leakage-detection protocol that automatically projects leaked qubits back onto the computational subspace, without the need for measurement feedback or increased classical control overheads. When combined with the XZZX surface code and leakage-aware decoding, we demonstrate a twofold increase in the error correction threshold and achieve orders-of-magnitude reductions in logical error rates. This establishes the singlet-triplet encoding as a practical route toward high-fidelity shuttling and erasure-based, fault-tolerant quantum computation in semiconductor devices.
The classification of mixed-state topological order requires indices that behave monotonically under finite-depth quantum channels. In two dimensions, a braided $C^*$-tensor category, which corresponds to strong symmetry, arises from a state satisfying approximate Haag duality. In this note, we show that the $S$-matrix and topological twists of the braided $C^*$-tensor category are quantities that are monotone under finite-depth quantum channels.
We introduce SpinPulse, an open-source Python package for simulating spin-qubit-based quantum computers at the pulse level. SpinPulse models the specific physics of spin qubits, particularly through the inclusion of classical non-Markovian noise. This enables realistic simulations of native gates and quantum circuits, in order to support hardware development. In SpinPulse, a quantum circuit is first transpiled into the native gate set of our model and then converted to a pulse sequence. This pulse sequence is subsequently integrated numerically in the presence of a simulated noisy experimental environment. We showcase workflows including transpilation, pulse-level compilation, hardware benchmarking, quantum error mitigation, and large-scale simulations via integration with the tensor-network library quimb. We expect SpinPulse to be a valuable open-source tool for the quantum computing community, fostering efforts to devise high-fidelity quantum circuits and improved strategies for quantum error mitigation and correction.
Product code construction is a powerful tool for constructing quantum stabilizer codes, which serve as a promising paradigm for realizing fault-tolerant quantum computation. Furthermore, the natural mapping between stabilizer codes and the ground states of exactly solvable spin models also motivates the exploration of many-body orders in the stabilizer codes. In this work, we investigate the fracton topological orders in a family of codes obtained by a recently proposed general construction. More specifically, this code family can be regarded as a class of generalized hypergraph product (HGP) codes. We term the corresponding exactly solvable spin models \textit{orthoplex models}, based on the geometry of the stabilizers. In the 3D orthoplex model, we identify a series of intriguing properties within this model family, including non-monotonic ground state degeneracy (GSD) as a function of system size and non-Abelian lattice defects. Most remarkably, in 4D we discover \textit{fragmented topological excitations}: while such excitations manifest as discrete, isolated points in real space, their projections onto lower-dimensional subsystems form connected objects such as loops, revealing the intrinsic topological nature of these excitations. Therefore, fragmented excitations constitute an intriguing intermediate class between point-like and spatially extended topological excitations. In addition, these rich features establish the generalized HGP codes as a versatile and analytically tractable platform for studying the physics of fracton orders.
We study the computation of the $\alpha$-Rényi capacity of a classical-quantum (c-q) channel for $\alpha\in(0,1)$. We propose an exponentiated-gradient (mirror descent) iteration that generalizes the Blahut-Arimoto algorithm. Our analysis establishes relative smoothness with respect to the entropy geometry, guaranteeing a global sublinear convergence of the objective values. Furthermore, under a natural tangent-space nondegeneracy condition (and a mild spectral lower bound in one regime), we prove local linear (geometric) convergence in Kullback-Leibler divergence on a truncated probability simplex, with an explicit contraction factor once the local curvature constants are bounded.
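To make the update rule concrete, the following is a minimal sketch of a generic exponentiated-gradient (mirror descent) iteration on the probability simplex. The gradient oracle and the toy objective are placeholders, not the Rényi-capacity objective or the step-size analysis of the paper.

# Generic exponentiated-gradient (mirror descent) update on the simplex.
import numpy as np

def exponentiated_gradient(grad_f, p0, step, num_iters):
    p = np.asarray(p0, dtype=float)
    for _ in range(num_iters):
        g = grad_f(p)
        p = p * np.exp(-step * g)   # multiplicative (entropy-geometry) update
        p /= p.sum()                # renormalize back onto the simplex
    return p

# Toy usage: minimize the quadratic <p, A p> over the probability simplex.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
p_star = exponentiated_gradient(lambda p: 2 * A @ p, p0=[0.5, 0.5], step=0.1, num_iters=200)
print(p_star)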
Pre-execution calibration is a major bottleneck for operating superconducting quantum processors, and qubit frequency allocation is especially challenging due to crosstalk-coupled objectives. We establish that the widely-used Snake optimizer is mathematically equivalent to Block Coordinate Descent (BCD), providing a rigorous theoretical foundation for this calibration strategy. Building on this formalization, we present a topology-aware block ordering obtained by casting order selection as a Sequence-Dependent Traveling Salesman Problem (SD-TSP) and solving it efficiently with a nearest-neighbor heuristic. The SD-TSP cost reflects how a given block choice expands the reduced-circuit footprint required to evaluate the block-local objective, enabling orders that minimize per-epoch evaluation time. Under local crosstalk/bounded-degree assumptions, the method achieves linear complexity in qubit count per epoch, while retaining calibration quality. We formalize the calibration objective, clarify when reduced experiments are equivalent or approximate to the full objective, and analyze convergence of the resulting inexact BCD with noisy measurements. Simulations on multi-qubit models show that the proposed BCD-NNA ordering attains the same optimization accuracy at markedly lower runtime than graph-based heuristics (BFS, DFS) and random orders, and is robust to measurement noise and tolerant to moderate non-local crosstalk. These results provide a scalable, implementation-ready workflow for frequency calibration on NISQ-era processors.
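To illustrate the ordering heuristic, here is a minimal sketch of a nearest-neighbor tour with a sequence-dependent cost. The toy cost function (a span-expansion penalty) is a hypothetical stand-in for the reduced-circuit footprint cost described above, not the paper's calibration model.

# Nearest-neighbor heuristic for a sequence-dependent ordering problem: the cost
# of visiting the next block may depend on the blocks already visited.
def nearest_neighbor_order(blocks, cost, start):
    order, visited = [start], {start}
    while len(order) < len(blocks):
        current = order[-1]
        nxt = min((b for b in blocks if b not in visited),
                  key=lambda b: cost(visited, current, b))
        order.append(nxt)
        visited.add(nxt)
    return order

# Toy usage: blocks on a line; cost = distance plus a penalty for enlarging the
# span of visited blocks (a crude sequence-dependent term).
blocks = list(range(6))
def toy_cost(visited, current, candidate):
    span_before = max(visited) - min(visited)
    span_after = max(visited | {candidate}) - min(visited | {candidate})
    return abs(candidate - current) + (span_after - span_before)

print(nearest_neighbor_order(blocks, toy_cost, start=0))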
The scalable and deterministic preparation of large Fock-number states represents a long-standing frontier in quantum science, with direct implications for quantum metrology, communication, and simulation. Despite significant progress in small-scale implementations, extending such state generation to large excitation numbers while maintaining high fidelity remains a formidable challenge. Here, we present a scalable protocol for generating large Fock states with fidelities exceeding 0.9 up to photon numbers on the order of 100, achieved using only native control operations and, when desired, further enhanced by an optional post-selection step. Our method employs a hybrid Genetic-Adam optimization framework that combines the global search efficiency of genetic algorithms with the adaptive convergence of Adam to optimize multi-pulse control sequences comprising Jaynes-Cummings interactions and displacement operations, both of which are native to leading experimental platforms. The resulting control protocols achieve high fidelities with shallow circuit depths and strong robustness against parameter variations. These results establish an efficient and scalable pathway toward high-fidelity non-classical state generation for precision metrology and fault-tolerant quantum technologies.
Neutral atom quantum processors have emerged as a promising platform for scalable quantum information processing, offering high-fidelity operations and exceptional qubit scalability. A key challenge in realizing practical applications is efficiently extracting readout outcomes while maintaining high system throughput, i.e., the rate of quantum task executions. In this work, we develop a theoretical framework to quantify the trade-off between readout fidelity and atomic retention. Moreover, we introduce a metric, the quantum circuit iteration rate (qCIR), and employ normalized quantum Fisher information to characterize the overall system performance. Further, by carefully balancing fidelity and retention, we demonstrate a readout strategy for optimizing information acquisition efficiency. Considering experimentally feasible parameters for $^{87}$Rb atoms, we demonstrate that qCIRs of 197.2 Hz and 154.5 Hz are achievable using single-photon detectors and cameras, respectively. These results provide practical guidance for constructing scalable and high-throughput neutral atom quantum processors for applications in sensing, simulation, and near-term algorithm implementation.
Yifang Xu, Yilong Zhou, Ziyue Hua, Lida Sun, Jie Zhou, Weiting Wang, Weizhou Cai, Hongwei Huang, Lintao Xiao, Guangming Xue, Haifeng Yu, Ming Li, Chang-Ling Zou, Luyan Sun
The manipulation of distinct degrees of freedom of photons plays a critical role in both classical and quantum information processing. While the principles of wave optics provide elegant and scalable control over classical light in spatial and temporal domains, engineering quantum states in Fock space has been largely restricted to few-photon regimes, hindered by the computational and experimental challenges of large Hilbert spaces. Here, we introduce ``Fock-space optics'', establishing a conceptual framework of wave propagation in the quantum domain by treating photon number as a synthetic dimension. Using a superconducting microwave resonator, we experimentally demonstrate Fock-space analogues of optical propagation, refraction, lensing, dispersion, and interference with up to 180 photons. These results establish a fundamental correspondence between Schrödinger evolution in a single bosonic mode and classical paraxial wave propagation. By mapping intuitive optical concepts onto high-dimensional quantum state engineering, our work opens a path toward scalable control of large-scale quantum systems with thousands of photons and advanced bosonic information processing.
The high overhead of fault-tolerant measurement sequences (FTMSs) poses a major challenge for implementing quantum stabilizer codes. Here, we address this problem by constructing efficient FTMSs for the class of quantum Hamming codes $[\![2^r-1, 2^r-1-2r, 3]\!]$ with $r=3k+1$ ($k \in \mathbb{Z}^+$). Our key result demonstrates that the sequence length can be reduced to exactly $2r+1$, only one additional measurement beyond the original non-fault-tolerant sequence, establishing a tight lower bound. The proposed method leverages cyclic matrix transformations to systematically combine rows of the initial stabilizer matrix while preserving a self-dual CSS-like symmetry analogous to that of the original quantum Hamming codes. This induced symmetry enables hardware-efficient circuit reuse: the measurement circuits for the first $r$ stabilizers are transformed into circuits for the remaining $r$ stabilizers simply by toggling boundary Hadamard gates, eliminating redundant hardware. For distance-3 fault-tolerant error correction, our approach simultaneously reduces the time overhead, by shortening the FTMS length, and the hardware overhead, through symmetry-enabled circuit multiplexing. These results provide an important advance on the open problem of designing minimal FTMSs for quantum Hamming codes and may shed light on similar challenges in other quantum stabilizer codes.
Model counting of Disjunctive Normal Form (DNF) formulas is a critical problem in applications such as probabilistic inference and network reliability. For example, it is often used for query evaluation in probabilistic databases. Due to the computational intractability of exact DNF counting, there has been a line of research into a variety of approximation algorithms. These include Monte Carlo approaches such as the classical algorithms of Karp, Luby, and Madras (1989), as well as methods based on hashing (Soos et al. 2023), and heuristic approximations based on neural nets (Abboud, Ceylan, and Lukasiewicz 2020). We develop a new Monte Carlo approach with an adaptive stopping rule and short-circuit formula evaluation. We prove it achieves Probably Approximately Correct (PAC) learning bounds and is asymptotically more efficient than the previous methods. We also show experimentally that it outperforms prior algorithms by orders of magnitude, and can scale to much larger problems with millions of variables.
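For context, the sketch below implements the classical Karp-Luby style Monte Carlo estimator that this line of work builds on; it uses a fixed sample budget in place of the paper's adaptive stopping rule and omits short-circuit evaluation.

# Classical Karp-Luby style unbiased estimator for DNF model counting.
import random

def dnf_count_estimate(clauses, n_vars, samples=100_000, rng=random.Random(0)):
    """clauses: list of clauses, each a dict {var_index: required_bool}."""
    weights = [2 ** (n_vars - len(c)) for c in clauses]  # satisfying assignments per clause
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Sample a clause proportionally to its weight, then a uniform assignment satisfying it.
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        assignment = {v: rng.random() < 0.5 for v in range(n_vars)}
        assignment.update(clauses[i])
        # Count the sample only if clause i is the first clause it satisfies,
        # which avoids double-counting assignments covered by several clauses.
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == b for v, b in c.items()))
        hits += (first == i)
    return total * hits / samples

# Toy usage: (x0 AND x1) OR (NOT x2) over 3 variables has 5 satisfying assignments.
clauses = [{0: True, 1: True}, {2: False}]
print(dnf_count_estimate(clauses, n_vars=3))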
We present a unified scheme for generating high-fidelity entangling gates in superconducting platforms by continuous dynamical decoupling (CDD) combined with variational minimal-energy optimal control. During the CDD stage, we suppress residual couplings, calibration drift, and quasistatic noise, resulting in a stable effective Hamiltonian that preserves the designed ZZ interaction produced by tunable couplers. In this stable $\mathrm{SU}(4)$ manifold, we calculate smooth low-energy single-qubit control functions using a variational geodesic optimization process that directly minimizes gate infidelity. We illustrate the methodology by applying it to CZ, CX, and generic entangling gates, achieving virtually unit fidelity and robustness under restricted single-qubit action, with experimentally realistic control fields. These results establish CDD-enhanced variational geometric optimal control as a practical and noise-resilient scheme for designing superconducting entangling gates.
We analyze quantum state preservation in open quantum systems using quantum error-correcting (QEC) codes that are explicitly embedded into microscopic system-bath models. Instead of abstract quantum channels, we consider multi-qubit registers coupled to bosonic thermal environments, derive a second-order master equation for the reduced dynamics, and use it to benchmark the five-qubit, Steane, and toric codes under local and collective noise. We compute state fidelities for logical qubits as functions of coupling strength, bath temperature, and the number of correction cycles. In the low-temperature regime, we find that repeated error-correction with the five-qubit code strongly suppresses decoherence and relaxation, while in the high-temperature regime, thermal excitations dominate the dynamics and reduce the benefit of all codes, though the five-qubit code still outperforms the Steane and toric codes. For two-qubit Werner states, we identify a critical evolution time before which QEC does not improve fidelity, and this time increases as entanglement grows. After this critical time, QEC does improve fidelity. Comparative analysis further reveals that the five-qubit code (the smallest perfect code) offers consistently higher fidelities than topological and concatenated architectures in these open-system settings. These findings establish a quantitative framework for evaluating QEC under realistic noise environments and provide guidance for developing noise-resilient quantum architectures in near-term quantum technologies.
A holographic entropy inequality (HEI) is a linear inequality obeyed by Ryu-Takayanagi holographic entanglement entropies, or equivalently by the minimum cut function on weighted graphs. We establish a new combinatorial framework for studying HEIs, and use it to prove several properties they share, including two majorization-related properties as well as a necessary and sufficient condition for an inequality to be an HEI. We thereby resolve all the conjectures presented in [arXiv:2508.21823], proving two of them and disproving the other two. In particular, we show that the null reduction of any superbalanced HEI passes the majorization test defined in [arXiv:2508.21823], thereby providing strong new evidence that all HEIs are obeyed in time-dependent holographic states.
Diffusion models have shown remarkable empirical success in sampling from rich multi-modal distributions. Their inference relies on numerically solving a certain differential equation. This differential equation cannot be solved in closed form, and its resolution via discretization typically requires many small iterations to produce \emph{high-quality} samples. More precisely, prior works have shown that the iteration complexity of discretization methods for diffusion models scales polynomially in the ambient dimension and the inverse accuracy $1/\varepsilon$. In this work, we propose a new solver for diffusion models relying on a subtle interplay between low-degree approximation and the collocation method (Lee, Song, Vempala 2018), and we prove that its iteration complexity scales \emph{polylogarithmically} in $1/\varepsilon$, yielding the first ``high-accuracy'' guarantee for a diffusion-based sampler that only uses (approximate) access to the scores of the data distribution. In addition, our bound does not depend explicitly on the ambient dimension; more precisely, the dimension affects the complexity of our solver through the \emph{effective radius} of the support of the target distribution only.
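For orientation, the differential equation underlying diffusion-model inference is, in the standard score-based formulation, the probability-flow ODE (a generic statement of the setting, assuming the usual forward noising $\mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t$; the paper may work with an equivalent variant):
\[
\frac{\mathrm{d}x}{\mathrm{d}t} \;=\; f(x,t) \;-\; \tfrac{1}{2}\,g(t)^2\,\nabla_x \log p_t(x),
\]
where $\nabla_x \log p_t$ is the score of the noised data distribution, which the solver accesses only approximately.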
We investigate late-time cosmic acceleration in $f(R,L_m)$ gravity driven by nonlinear matter contributions, focusing on the class $f(R,L_m)=R/2+c_1 L_m+c_n L_m^{n}+c_0$ with the explicit choice $L_m=\rho_m$ and an uncoupled radiation sector. We analyze two realizations: (i) Case A: $f(R,L_m)=R/2+\beta \rho_m^{n}+\gamma$, where $\gamma$ acts as a vacuum term, and (ii) Case B: $f(R,L_m)=R/2+\beta \rho_m+\gamma \rho_m^{n}$, where the nonlinear sector can mimic dark energy without an explicit cosmological constant. For each case, we construct a bounded autonomous system, classify all critical points and their stability, and compute cosmographic diagnostics. The phase-space analysis shows that Case A reproduces the standard radiation$\to$matter$\to$de~Sitter sequence only for $n\gtrsim 4/5$, with acceleration essentially enforced by the vacuum term. In contrast, Case~B admits a qualitatively distinct and phenomenologically appealing branch: for $0<n<1/2$ the system possesses a physical \emph{scaling} de~Sitter future attractor inside the bounded simplex, yielding radiation$\to$matter$\to$acceleration with $q=-1$ and $\omega_{\rm eff}=-1$ and without introducing $c_0$. We confront both models with background data (CC, Union3, DESI BAO, plus a BBN prior on $\Omega_b h^2$) using nested sampling and perform model comparison via Bayesian evidence and AIC/BIC. The full data combination constrains $n=1.08\pm0.05$ in Case A and $n=0.05\pm0.10$ in Case B (68\% CL), the latter lying within the accelerating window while remaining statistically consistent with $\Lambda$CDM kinematics at the background level. We also record minimal consistency conditions for stability (tensor no-ghost and luminal propagation) and motivate a dedicated perturbation-level analysis as the next step to test growth and lensing observables.
Coupling mechanical motion to an optical resonator enables displacement measurements approaching the standard quantum limit (SQL). However, increasing the optomechanical coupling strength will inevitably lead to probing of the nonlinear response of the optical resonator. Thermal intermodulation noise (TIN) arising from the nonlinear mixing of thermomechanical motion can further increase the imprecision well above the SQL and has hitherto been canceled up to second order of nonlinearity via operation at the "magic detuning". In this work, we record the output of a membrane-in-the-middle microcavity system operating at room temperature and achieving high cooperativity, $C>n_\text{th}$, and apply a nonlinear transform that removes all orders of TIN, improving the mechanical signal-to-noise ratio by nearly 10 dB. Our results can be applied to experiments affected by third-order TIN, which we expect to be the dominating intrinsic source of noise in high-cooperativity room-temperature cavity optomechanical systems.
We develop a one-out-of-two oblivious transfer protocol over the binary-input additive white Gaussian noise channel using polar codes. The scheme uses two decoder views linked by automorphisms of the polar transform and publicly draws the encoder at random from the corresponding automorphism group. This yields perfect receiver privacy at any finite blocklength, since the public encoder distribution is independent of the receiver's choice bit. Sender privacy is obtained asymptotically via channel polarization combined with privacy amplification. Because the construction deliberately injects randomness on selected bad bit-channels, we derive a relaxed reliability criterion and evaluate finite-blocklength performance. Finally, we characterize the polar-transform automorphisms as bit-level permutations of bit-channel indices, and exploit this structure to derive and optimize an achievable finite-blocklength OT rate.
We examine the pertinent geometric characteristics of entanglement that arise from stationary Hamiltonian evolutions transitioning from separable to maximally entangled two-qubit quantum states. From a geometric perspective, each evolution is characterized by means of geodesic efficiency, speed efficiency, and curvature coefficient. Conversely, from the standpoint of entanglement, these evolutions are quantified using various metrics, such as concurrence, entanglement power, and entangling capability. Overall, our findings indicate that time-optimal evolution trajectories are marked by high geodesic efficiency, with no energy resource wastage, no curvature (i.e., zero bending), and an average path entanglement that is less than that observed in time-suboptimal evolutions. Additionally, when analyzing separable-to-maximally entangled evolutions between nonorthogonal states, time-optimal evolutions demonstrate a greater short-time degree of nonlocality compared to time-suboptimal evolutions between the same initial and final states. Interestingly, the reverse is generally true for separable-to-maximally entangled evolutions involving orthogonal states. Our investigation suggests that this phenomenon arises because suboptimal trajectories between orthogonal states are characterized by longer path lengths with smaller curvature, which are traversed with a higher energy resource wastage compared to suboptimal trajectories between nonorthogonal states. Consequently, a higher initial degree of nonlocality in the unitary time propagators appears to be essential for achieving the maximally entangled state from a separable state. Furthermore, when assessing optimal and suboptimal evolutions...
The Landau--Zener (LZ) model describes a two-level quantum system that undergoes an avoided crossing. In the adiabatic limit, the transition probability vanishes. An auxiliary control field $H_\text{CD}$ can be reverse-engineered so that the full Hamiltonian $H_0 + H_\text{CD}$ reproduces adiabaticity for all parameter values. Our aim is to construct a single control field $H_1$ that drives an ensemble of LZ-type Hamiltonians with a distribution of energy gaps. $H_1$ works best statistically, minimizing the average transition probability. We restrict our attention to a special class of $H_1$ controls, motivated by $H_\text{CD}$. We find a systematic trade-off between instantaneous adiabaticity and the final transition probability. Certain limiting cases with a linear sweep can be treated analytically, one of them being the LZ system with a Dirac $\delta(t)$ function. Comprehensive and systematic numerical simulations support and extend the analytic results.
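For concreteness, under one common parametrization of the LZ problem (a textbook statement, not a result specific to this work), the counterdiabatic field referred to above takes the form
\[
H_0(t) = \tfrac{1}{2}\bigl(\varepsilon(t)\,\sigma_z + \Delta\,\sigma_x\bigr),
\qquad
H_\text{CD}(t) = \tfrac{\hbar}{2}\,\dot\theta(t)\,\sigma_y
= -\frac{\hbar}{2}\,\frac{\Delta\,\dot\varepsilon(t)}{\varepsilon(t)^2+\Delta^2}\,\sigma_y,
\qquad
\theta(t)=\arctan\!\frac{\Delta}{\varepsilon(t)},
\]
which cancels nonadiabatic transitions exactly for a single gap $\Delta$; the ensemble problem studied here arises because a single $H_1$ cannot do this simultaneously for a distribution of gaps.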
It is known that the continuous-time variant of Grover's search algorithm is characterized by quantum search frameworks that are governed by stationary Hamiltonians, which result in search trajectories confined to the two-dimensional subspace of the complete Hilbert space formed by the source and target states. Specifically, the search approach is ineffective when the source and target states are orthogonal. In this paper, we employ normalization, orthogonality, and energy limitations to demonstrate that it is unfeasible to breach time-optimality between orthogonal states with constant Hamiltonians when the evolution is limited to the two-dimensional space spanned by the initial and final states. Deviations from time-optimality for unitary evolutions between orthogonal states can only occur with time-dependent Hamiltonian evolutions or, alternatively, with constant Hamiltonian evolutions in higher-dimensional subspaces of the entire Hilbert space. Ultimately, we employ our quantitative analysis to provide meaningful insights regarding the relationship between time-optimal evolutions and analog quantum search methods. We determine that the challenge of transitioning between orthogonal states with a constant Hamiltonian in a sub-optimal time is closely linked to the shortcomings of analog quantum search when the source and target states are orthogonal and not interconnected by the search Hamiltonian. In both scenarios, the fundamental cause of the failure lies in the existence of an inherent symmetry within the system.
Jan 16 2026
math.OA arXiv:2601.10654v1
We show that there exists a completely bounded (c.b. in short) homomorphism $u$ from a $C^*$-algebra $C$ with the lifting property (in short LP) into a QWEP von Neumann algebra $N$ that is not strongly similar to a $*$-homomorphism, i.e. the similarities that ``orthogonalize" $u$ (which exist since $u$ is c.b.) cannot belong to the von Neumann algebra $N$. Moreover, the map $u$ does not admit any c.b. lifting up into the WEP $C^*$-algebra of which $N$ is a quotient. We can take $C=C^*(G)$ (full $C^*$-algebra) where $G$ is any nonabelian free group and $N= B(H)\bar \otimes M$ where $M$ is the von Neumann algebra generated by the reduced $C^*$-algebra of $G$.
Combinatorial optimization augmented machine learning (COAML) has recently emerged as a powerful paradigm for integrating predictive models with combinatorial decision-making. By embedding combinatorial optimization oracles into learning pipelines, COAML enables the construction of policies that are both data-driven and feasibility-preserving, bridging the traditions of machine learning, operations research, and stochastic optimization. This paper provides a comprehensive overview of the state of the art in COAML. We introduce a unifying framework for COAML pipelines, describe their methodological building blocks, and formalize their connection to empirical cost minimization. We then develop a taxonomy of problem settings based on the form of uncertainty and decision structure. Using this taxonomy, we review algorithmic approaches for static and dynamic problems, survey applications across domains such as scheduling, vehicle routing, stochastic programming, and reinforcement learning, and synthesize methodological contributions in terms of empirical cost minimization, imitation learning, and reinforcement learning. Finally, we identify key research frontiers. This survey aims to serve both as a tutorial introduction to the field and as a roadmap for future research at the interface of combinatorial optimization and machine learning.
We study a binary distributed hypothesis testing problem where two agents observe correlated binary vectors and communicate compressed information at the same rate to a central decision maker. In particular, we study linear compression schemes and show that simple truncation is the best linear scheme in two cases: (1) testing opposite signs of the same magnitude of correlation, and (2) testing for or against independence. We conjecture, supported by numerical evidence, that truncation is the best linear code for testing any correlations of opposite signs. Further, for testing against independence, we also compute classical random coding exponents and show that truncation, and consequently any linear code, is strictly suboptimal.
We study the high-dimensional training dynamics of a shallow neural network with quadratic activation in a teacher-student setup. We focus on the extensive-width regime, where the teacher and student network widths scale proportionally with the input dimension, and the sample size grows quadratically. This scaling aims to describe overparameterized neural networks in which feature learning still plays a central role. In the high-dimensional limit, we derive a dynamical characterization of the gradient flow, in the spirit of dynamical mean-field theory (DMFT). Under $\ell_2$-regularization, we analyze these equations at long times and characterize the performance and spectral properties of the resulting estimator. This result provides a quantitative understanding of the effect of overparameterization on learning and generalization, and reveals a double descent phenomenon in the presence of label noise, where generalization improves beyond interpolation. In the small regularization limit, we obtain an exact expression for the perfect recovery threshold as a function of the network widths, providing a precise characterization of how overparameterization influences recovery.
We analyze quantum systems under a broad class of protocols in which the temperature and a Hamiltonian control parameter are ramped simultaneously and, in general, in a nonlinear fashion toward a quantum critical point. Using an open-system version of a Kitaev quantum wire as an example, we show that, unlike finite-temperature protocols at fixed temperature, these protocols allow us to probe, in an out-of-equilibrium situation and at finite temperature, the universality class (characterized by the critical exponents $\nu$ and $z$) of an equilibrium quantum phase transition at zero temperature. Key to this is the identification of ramps in which both coherent and incoherent parts of the open-system dynamics affect the excitation density in a non-negligible way. We also identify the specific ramps for which subleading corrections to the asymptotic scaling laws are suppressed, which serves as a guide to dynamically probing quantum critical exponents in experimentally realistic finite-temperature situations.
Jan 16 2026
math.OC arXiv:2601.10390v1
This paper addresses the study of algebraic versions of Farkas lemma and strong duality results in the very broad setting of infinite-dimensional conic linear programming in dual pairs of vector spaces. To this end, purely algebraic properties of perturbed optimal value functions of both primal and dual problems and their corresponding hypergraph/epigraph are investigated. The newly developed hypergraphical/epigraphical sets, inspired by Kretschmer's closedness conditions \cite{Kretschmer61}, together with their novel convex separation-type characterizations, give rise to various perturbed Farkas-type lemmas which allow us to derive complete characterizations of ``zero duality gap''. Principally, when certain structures of algebraic or topological duals are imposed, illuminating implications of the developed condition are also explored.
We formulate a dynamic reinsurance problem in which the insurer seeks to control the terminal distribution of its surplus while minimizing the L2-norm of the ceded risk. Using techniques from martingale optimal transport, we show that, under suitable assumptions, the problem admits a tractable solution analogous to the Bass martingale. We first consider the case where the insurer wants to match a given terminal distribution of the surplus process, and then relax this condition by only requiring certain moment or risk-based constraints.
We investigate the open-system dynamics of a micromaser quantum battery operating in the ultrastrong coupling (USC) regime under environmental dissipation. The battery consists of a single-mode electromagnetic cavity sequentially interacting, via the Rabi Hamiltonian, with a stream of qubits acting as chargers. Dissipative effects arise from the weak coupling of the qubit-cavity system to a thermal bath. Non-negligible in the USC regime, the counter-rotating terms substantially improve the charging speed, but also lead, in the absence of dissipation, to unbounded energy growth and highly mixed cavity states. Dissipation during each qubit-cavity interaction mitigates these detrimental effects, yielding a steady state of finite energy and ergotropy. Optimal control of qubit preparation and interaction times enhances the battery's performance in two ways: (i) maximizing the stored ergotropy through an optimized charging protocol; (ii) stabilizing the stored ergotropy against dissipative losses through an optimized measurement-based passive-feedback strategy. Overall, our numerical results demonstrate that the interplay of ultrastrong light-matter coupling, controlled dissipation, and optimized control strategies enables micromaser quantum batteries to achieve both enhanced charging performance and long-term stability under realistic conditions.
We investigate the coherence properties of parity-protected $\cos(2\varphi)$ qubits based on interferences between two Josephson elements in a superconducting loop. We show that qubit implementations of a $\cos(2\varphi)$ potential using a single loop, such as those employing semiconducting junctions, rhombus circuits, flowermon and KITE structures, can be described by the same Hamiltonian as two multi-harmonic Josephson junctions in a SQUID geometry. We find that, despite the parity protection arising from the suppression of single Cooper pair tunneling, there exists a fundamental trade-off between charge and flux noise dephasing channels. Using numerical simulations, we examine how relaxation and dephasing rates depend on external flux and circuit parameters, and we identify the best compromise for maximum coherence. With currently existing circuit parameters, the qubit lifetime $T_1$ can exceed milliseconds while the dephasing time $T_\varphi$ remains limited to only a few microseconds due to either flux or charge noise. Our findings establish practical limits on the coherence of this class of qubits and raise questions about the long-term potential of this approach.
We study the Fano effect in dissipative cavity quantum electrodynamics, which originates from the interference between the emitter's direct radiation and that mediated by a cavity mode. Starting from a two-level system coupled to a structured reservoir, we show that a quantum master equation previously derived within the Born-Markov approximation can be rederived by introducing a single auxiliary mode via a pseudomode approach. We identify the corresponding spectral function of the system--environment interaction and demonstrate that it consists of a constant and a non-Lorentzian contribution forming the Fano profile. The constant term is shown to be essential for obtaining a Lindblad master equation and is directly related to the rate associated with this Fano interference. Furthermore, by applying Fano diagonalization to a common-environment setup including an explicit cavity mode, we independently derive the same spectral function in the strongest-interference regime. Our results establish a unified framework for describing the Fano effect in single-mode cavity QED systems and clarify its non-Markovian origin encoded in the spectral function.
Jan 16 2026
math.OC arXiv:2601.10086v1
In this paper, we propose a novel reformulation of the smooth nonconvex-strongly-concave (NC-SC) minimax problems that casts the problem as a joint minimization. We show that our reformulation preserves not only first-order stationarity, but also global and local optimality, second-order stationarity, and the Kurdyka-Łojasiewicz (KL) property, of the original NC-SC problem, which is substantially stronger than its nonsmooth counterpart in the literature. With these enhanced structures, we design a versatile parameter-free and nonmonotone line-search framework that does not require evaluating the inner maximization. Under mild conditions, global convergence rates can be obtained, and, with KL property, full sequence convergence with asymptotic rates is also established. In particular, we show our framework is compatible with the gradient descent-ascent (GDA) algorithm. By equipping GDA with Barzilai-Borwein (BB) step sizes and nonmonotone line-search, our method exhibits superior numerical performance against the compared benchmarks.
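For reference, the GDA baseline mentioned above is the plain simultaneous descent-ascent iteration, sketched below on a toy smooth minimax problem; the BB step sizes and nonmonotone line search of the proposed method are not shown, and the toy objective is only illustrative.

# Plain (simultaneous) gradient descent-ascent on min_x max_y f(x, y).
import numpy as np

def gda(grad_x, grad_y, x0, y0, eta_x, eta_y, iters):
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta_x * gx      # descent step on the min variable
        y = y + eta_y * gy      # ascent step on the max variable
    return x, y

# Toy usage: f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 (strongly concave in y, saddle at the origin).
gx = lambda x, y: x + y
gy = lambda x, y: x - y
print(gda(gx, gy, x0=[1.0], y0=[1.0], eta_x=0.1, eta_y=0.1, iters=500))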