
Mathematical Research at the University of Cambridge


In the noisy intermediate-scale quantum (NISQ) era, quantum error mitigation (QEM) is essential for producing reliable outputs from quantum circuits. We present a statistical signal processing approach to QEM that estimates the most likely noiseless outputs from noisy quantum measurements. Our model assumes the circuit is deep enough that gate noise is well approximated by a depolarizing channel, producing corrupted observations that resemble samples from a uniform distribution, together with classical bit-flip errors from readout. Our method consists of two steps: a filtering stage that discards uninformative depolarizing noise, followed by an expectation-maximization (EM) algorithm that computes a maximum likelihood (ML) estimate over the remaining data. We demonstrate the effectiveness of this approach on small-qubit systems using circuit simulations in Qiskit and data from IBM quantum processing units (QPUs), and compare its performance to contemporary statistical QEM techniques. We also show that our method scales to larger qubit counts using synthetically generated data consistent with our noise model. These results suggest that principled statistical methods can offer scalable and interpretable solutions for quantum error mitigation in realistic NISQ settings. Finally, while this talk addresses a problem that arises on quantum computers, the solution technique does not require a quantum background: researchers in information theory, signal processing, and machine learning should be able to follow and appreciate the material.
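To make the two-step pipeline concrete, the sketch below is a minimal, illustrative example and not the speakers' implementation. It assumes a global depolarizing (uniform) background mixed with a tensor-product symmetric bit-flip readout model, subtracts the uniform component as a crude stand-in for the filtering stage, and runs a standard EM (Richardson-Lucy-style) update for maximum-likelihood readout deconvolution. All names and parameters here (readout_confusion, filter_depolarizing, em_readout_mitigation, flip_prob, depol_weight) are hypothetical choices for this example.

```python
# Illustrative sketch only, under assumed noise model: depolarizing background
# plus independent symmetric readout bit flips. Not the speakers' code.
import numpy as np

def readout_confusion(n_qubits: int, flip_prob: float) -> np.ndarray:
    """Column-stochastic readout matrix A with A[j, i] = P(read j | true i),
    assuming independent symmetric bit flips on each qubit."""
    a1 = np.array([[1 - flip_prob, flip_prob],
                   [flip_prob, 1 - flip_prob]])
    A = np.array([[1.0]])
    for _ in range(n_qubits):
        A = np.kron(A, a1)
    return A

def filter_depolarizing(counts: np.ndarray, depol_weight: float) -> np.ndarray:
    """Step 1 (filtering): remove an assumed uniform depolarizing background
    and renormalize -- a crude stand-in for the talk's filtering stage."""
    d = counts.size
    q = counts / counts.sum()
    filtered = np.clip(q - depol_weight / d, 0.0, None)
    return filtered / filtered.sum()

def em_readout_mitigation(q: np.ndarray, A: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Step 2 (EM): maximum-likelihood estimate of the noiseless distribution r
    under the multinomial model q ~ A r (Richardson-Lucy-style multiplicative update)."""
    d = q.size
    r = np.full(d, 1.0 / d)               # uniform initial guess
    for _ in range(n_iter):
        pred = A @ r                       # predicted measured distribution
        ratio = np.divide(q, pred, out=np.zeros_like(q), where=pred > 0)
        r = r * (A.T @ ratio)              # combined E- and M-step
        r /= r.sum()
    return r

# Toy usage: 3 qubits, ideal output concentrated on |000> and |111> (GHZ-like).
rng = np.random.default_rng(0)
n, shots = 3, 20_000
ideal = np.zeros(2**n); ideal[0] = ideal[-1] = 0.5
A = readout_confusion(n, flip_prob=0.03)
depol = 0.2
noisy_dist = (1 - depol) * (A @ ideal) + depol / 2**n
counts = rng.multinomial(shots, noisy_dist).astype(float)

q_filt = filter_depolarizing(counts, depol_weight=depol)
r_hat = em_readout_mitigation(q_filt, A)
print(np.round(r_hat, 3))                  # should concentrate near |000> and |111>
```

In this toy setting the EM update is the standard nonnegative maximum-likelihood deconvolution for a multinomial observation model; the talk's method presumably differs in how the depolarizing component is identified and filtered, which this example only approximates by subtracting a known uniform fraction.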

Further information

Time:

Oct 15th 2025
14:00 to 15:00

Venue:

MR5, CMS Pavilion A

Series:

Information Theory Seminar