Maestro 0.2.5
Unified interface for quantum circuit simulation
Maestro is a high-performance C++ library that provides a unified interface and intelligent orchestration layer for quantum circuit simulation. It automates the complexity of selecting and configuring simulators, enabling researchers and developers to execute quantum circuits efficiently across CPUs, GPUs, and distributed HPC environments — without manual tuning.
| Feature | Description |
|---|---|
| Unified Abstraction | Write your circuit once (Qiskit / OpenQASM) and Maestro compiles it to the native format of the target backend |
| Prediction Engine | Automatically analyses circuit features (gate density, entanglement, locality) to predict and select the fastest backend |
| High-Performance | Multi-threading, multi-processing, and optimised state sampling to maximise throughput |
| GPU Acceleration | Integrated NVIDIA cuStateVec, custom MPS, tensor network, stabiliser, and Pauli propagator GPU kernels |
| Distributed QC | p-block composite simulation to break the memory ceiling of monolithic simulators |
| Extensible | New backends can be added by implementing a single C++ interface |
| Python Bindings | High-performance nanobind-based Python API |
The documentation is organised into the following thematic modules. Click on a module name to browse its classes, functions, and detailed documentation.
| Module | Description |
|---|---|
| Core API | Core types, orchestration entry point, and library interface |
| Circuit Representation | Quantum circuit IR — gates, operations, measurements |
| Simulation Backends | Simulator interfaces and all backend implementations |
| Distributed Quantum Computing | Hosts, controllers, and network topology for multi-node simulation |
| Tensor Network Simulation | Tensor network simulation and contraction strategies |
| Job Scheduling | Job scheduling and parallel execution |
| Runtime Estimation | Runtime estimation and automatic backend selection |
| OpenQASM Interop | OpenQASM 2.0 parsing and serialisation |
| Utilities | Thread pool, tensors, logging, and other utilities |
| Python Bindings | High-performance Python bindings |
| QuEST Backend | Dynamic QuEST CPU simulator with MPI support |
| GPU Backend | CUDA-accelerated simulation (closed-source) |
For detailed build options (GPU support, optional dependencies, etc.), see the Installation Guide.
For a comprehensive walkthrough, see the Tutorial.
Maestro integrates or wraps the following simulation technologies:
| Category | Technology | Notes |
|---|---|---|
| CPU Statevector | QCSim, Qiskit Aer | Full state-vector simulation |
| CPU MPS | QCSim, Qiskit Aer | Matrix Product State with configurable bond dimension |
| CPU Stabilizer | QCSim, Qiskit Aer | Efficient Clifford-only simulation |
| CPU Tensor Network | QCSim, Qiskit Aer | Tensor network contraction |
| CPU Pauli Propagator | QCSim | Pauli propagation simulation |
| CPU Extended Stabilizer | Qiskit Aer | Near-Clifford decomposition |
| GPU Statevector | NVIDIA cuStateVec | Via dynamic GPU library loading |
| GPU MPS | Custom CUDA | Custom GPU MPS implementation |
| GPU Tensor Network | Custom CUDA | Tensor network simulation on GPU |
| GPU Pauli Propagator | Custom CUDA | Pauli propagation simulation on GPU |
| QuEST | QuEST library | CPU statevector with MPI distribution |
| Composite | p-block | Distributed state across sub-simulators |
Each backend is accessed through a C++ adapter implementing the Simulators::ISimulator interface.
Maestro supports the QuEST simulator as an alternative CPU backend. QuEST is loaded dynamically as a shared library (libmaestroquest), so it is available only when the library has been built and installed.
QuEST natively supports MPI-distributed statevector simulation, enabling execution across multiple nodes for larger qubit counts.
QuEST only supports Statevector simulation. Attempting to use it with MPS, Stabilizer, or other simulation types will raise an error.
See the Python Guide for QuEST usage from Python, including maestro.init_quest() and maestro.is_quest_available().
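A typical availability check from Python can be sketched as follows. This is a hedged sketch: it assumes the maestro bindings are importable, and uses only the two functions named above; exact return values and initialisation semantics are documented in the Python Guide.

```python
# Hedged sketch: assumes the maestro Python bindings are importable;
# libmaestroquest is present only when QuEST was built and installed.
try:
    import maestro
    quest_ready = maestro.is_quest_available()
    if quest_ready:
        maestro.init_quest()  # load the libmaestroquest shared library
except ImportError:
    quest_ready = False

# QuEST supports statevector simulation only (no MPS / stabiliser).
print("QuEST backend:", "available" if quest_ready else "not available")
```

Guarding the import keeps scripts portable across installations built with and without QuEST.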
Maestro supports GPU-accelerated simulation via a dynamically loaded CUDA library (libmaestro_gpu_simulators). The GPU backend is optional and loaded at runtime only when requested.
Unlike QuEST, the GPU backend supports multiple simulation types:
| SimulationType | GPU Support |
|---|---|
| kStatevector | ✅ |
| kMatrixProductState | ✅ |
| kTensorNetwork | ✅ |
| kPauliPropagator | ✅ |
| kStabilizer | ❌ |
| kExtendedStabilizer | ❌ |
On Linux, GetMaestroObject() automatically attempts to load the GPU library; it can also be initialised explicitly.
See the Python Guide for GPU usage from Python, including maestro.init_gpu() and maestro.is_gpu_available().
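Combining the two functions named above with the support table gives a simple dispatch pattern. This is a hedged sketch, not the library's prescribed usage: the maestro bindings are assumed importable, and the `gpu_types` set simply transcribes the table.

```python
# Hedged sketch: assumes the maestro Python bindings are importable.
try:
    import maestro
    gpu_ready = maestro.is_gpu_available()
    if not gpu_ready:
        maestro.init_gpu()  # explicit load of libmaestro_gpu_simulators
        gpu_ready = maestro.is_gpu_available()
except ImportError:
    gpu_ready = False

# Stabiliser-family simulations stay on the CPU even when a GPU is
# present (see the support table above).
gpu_types = {"kStatevector", "kMatrixProductState",
             "kTensorNetwork", "kPauliPropagator"}
requested = "kStatevector"
use_gpu = gpu_ready and requested in gpu_types
print("run on GPU" if use_gpu else "run on CPU")
```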
Maestro's prediction engine analyses circuit features (gate density, entanglement, locality) to estimate each backend's runtime and automatically select the fastest one.
The model normalises performance features to reduce hardware dependence and can be recalibrated at installation time.
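As a rough illustration of the kind of features such an engine consumes, the sketch below computes gate density, an entanglement proxy, and a locality proxy for a small circuit. The metric definitions here are illustrative only, not Maestro's actual model.

```python
# Conceptual sketch of circuit-feature extraction, NOT Maestro's
# real predictor. A circuit is a list of (gate, qubits) tuples.
def circuit_features(circuit, num_qubits):
    total = len(circuit)
    two_qubit = [qs for _, qs in circuit if len(qs) == 2]
    # Gate density: crude proxy of gates per qubit.
    gate_density = total / num_qubits
    # Entanglement proxy: fraction of two-qubit gates.
    entangling_fraction = len(two_qubit) / total if total else 0.0
    # Locality proxy: mean index distance between interacting qubits.
    dists = [abs(a - b) for a, b in two_qubit]
    locality = sum(dists) / len(dists) if dists else 0.0
    return {
        "gate_density": gate_density,
        "entangling_fraction": entangling_fraction,
        "mean_interaction_distance": locality,
    }

ghz = [("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2))]
print(circuit_features(ghz, num_qubits=3))
```

A real predictor would feed such normalised features into a calibrated runtime model per backend.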
| Page | Description |
|---|---|
| Python Guide | Comprehensive Python API reference and examples |
| Installation | Build instructions and optional dependency setup |
| Tutorial | Step-by-step API usage guide |
| Contributing | How to contribute to Maestro |
| Code of Conduct | Community guidelines |
If you use Maestro in your research, please cite:
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for the full text.