Core QGML Framework
This section documents the core components of the Quantum Geometric Machine Learning (QGML) framework.
Architecture Overview
The QGML framework is built on a hierarchical architecture that promotes code reuse and modular development:
Base Framework
- BaseQuantumMatrixTrainer: Hermitian matrix operations, error Hamiltonian construction, ground state computation, quantum state analysis

Specialized Trainers (derived from BaseQuantumMatrixTrainer)
- UnsupervisedMatrixTrainer: manifold learning, dimension estimation, reconstruction-based loss
- SupervisedMatrixTrainer: regression/classification, target operator learning, prediction-based loss
- QuantumGeometryTrainer: advanced geometric features, topological analysis, quantum information measures
- ChromosomalInstabilityTrainer (extends SupervisedMatrixTrainer): genomic applications, mixed loss functions, POVM framework

Analysis Modules (built on QuantumGeometryTrainer)
- TopologicalAnalyzer: Berry curvature, Chern numbers, phase transitions
- QuantumInformationAnalyzer: von Neumann entropy, Fisher information, coherence measures
Core Classes
BaseQuantumMatrixTrainer
The foundation class that implements core quantum matrix operations.
- class qgml.core.base_quantum_trainer.BaseQuantumMatrixTrainer(N: int, D: int, device: str = 'cpu', dtype: dtype = torch.complex64, seed: int | None = None)
Base class for Quantum Matrix Machine Learning (QMML) models.
Implements core quantum matrix operations:
- Hermitian matrix initialization and projection (see the sketch below)
- Error Hamiltonian construction: H(x) = 1/2 Σₖ (Aₖ - xₖI)²
- Ground state computation via eigendecomposition
- Quantum state expectation values
Subclasses implement specific learning objectives:
- Unsupervised: manifold learning via reconstruction
- Supervised: regression/classification via target operators
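As a rough sketch of the Hermitian projection step with standalone tensors (illustrative names, not the class internals):

import torch

# Sketch: project an arbitrary complex matrix onto the Hermitian matrices
# by averaging it with its conjugate transpose.
N = 8
M = torch.randn(N, N, dtype=torch.complex64)
A = 0.5 * (M + M.conj().T)  # Hermitian: A equals its conjugate transpose
assert torch.allclose(A, A.conj().T)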
- __init__(N: int, D: int, device: str = 'cpu', dtype: dtype = torch.complex64, seed: int | None = None)
Initialize base quantum matrix trainer.
- Parameters:
N – Dimension of Hilbert space (matrix size N×N)
D – Number of features/input dimensions
device – Computation device ('cpu' or 'cuda')
dtype – Matrix data type (default torch.complex64, i.e. torch.cfloat, for complex Hermitian matrices)
seed – Random seed for reproducibility
- compute_error_hamiltonian(x: Tensor) → Tensor
Compute error Hamiltonian for input point x.
The error Hamiltonian encodes the quantum geometric structure: H(x) = 1/2 Σₖ (Aₖ - xₖI)²
This Hamiltonian has minimum eigenvalue 0 when the quantum state perfectly encodes the classical input through operator expectations.
- Parameters:
x – Input point tensor of shape (D,)
- Returns:
Error Hamiltonian H(x) of shape (N, N)
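A minimal sketch of the same construction with standalone tensors (the trainer builds H(x) from its learned operators internally; the helper below is illustrative):

import torch

def error_hamiltonian(ops, x):
    # ops: list of D Hermitian (N, N) tensors; x: real tensor of shape (D,)
    N = ops[0].shape[0]
    I = torch.eye(N, dtype=ops[0].dtype)
    H = torch.zeros(N, N, dtype=ops[0].dtype)
    for A_k, x_k in zip(ops, x):
        M = A_k - x_k * I      # (Aₖ - xₖI)
        H = H + 0.5 * (M @ M)  # accumulate 1/2 (Aₖ - xₖI)²
    return H                   # Hermitian and positive semi-definite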
- compute_ground_state(x: Tensor) → Tensor
Compute ground state |ψ₀(x)⟩ for input x.
The ground state minimizes the error Hamiltonian and provides the optimal quantum encoding of the classical input.
- Parameters:
x – Input point tensor of shape (D,)
- Returns:
Ground state vector |ψ₀⟩ of shape (N,)
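Continuing the sketch above, the ground state falls out of a standard eigendecomposition; torch.linalg.eigh returns eigenvalues in ascending order for Hermitian matrices:

# Continuing the previous sketch: H = error_hamiltonian(ops, x)
evals, evecs = torch.linalg.eigh(H)  # ascending eigenvalues for Hermitian H
psi0 = evecs[:, 0]                   # ground state |ψ₀(x)⟩ (normalized)
ground_energy = evals[0]             # λ_min(x); 0 when the encoding is exact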
- compute_eigensystem(x: Tensor) → Tuple[Tensor, Tensor]
Compute full eigendecomposition of error Hamiltonian.
Useful for analyzing quantum geometric properties and excited state contributions.
- Parameters:
x – Input point tensor of shape (D,)
- Returns:
Tuple of (eigenvalues, eigenvectors):
- eigenvalues: Real eigenvalues sorted ascending, shape (N,)
- eigenvectors: Eigenvector matrix, shape (N, N)
- get_feature_expectations(x: Tensor) → Tensor
Compute feature operator expectations ⟨ψ₀|Aₖ|ψ₀⟩.
These expectation values represent the quantum encoding of the classical input in the learned matrix representation.
- Parameters:
x – Input point tensor of shape (D,)
- Returns:
Feature expectations of shape (D,)
- compute_quantum_fidelity(x1: Tensor, x2: Tensor) → Tensor
Compute quantum fidelity between ground states of two inputs.
Fidelity F(ψ₁, ψ₂) = |⟨ψ₁|ψ₂⟩|² measures quantum similarity.
- Parameters:
x1 – First input point of shape (D,)
x2 – Second input point of shape (D,)
- Returns:
Quantum fidelity F ∈ [0, 1]
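The computation reduces to a squared overlap. A one-line sketch, assuming psi1 and psi2 are normalized ground-state vectors obtained as above:

# |⟨ψ₁|ψ₂⟩|²; torch.vdot conjugates its first argument
fidelity = torch.abs(torch.vdot(psi1, psi2)) ** 2  # in [0, 1]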
- get_quantum_state_properties(x: Tensor) → Dict[str, Any]
Analyze quantum properties of the ground state.
Returns comprehensive quantum information for debugging and understanding the learned representation.
- Parameters:
x – Input point tensor of shape (D,)
- Returns:
Dictionary containing:
- ground_energy: Minimum eigenvalue
- energy_gap: Gap to first excited state
- feature_expectations: ⟨ψ|Aₖ|ψ⟩ values
- quantum_purity: Tr(ρ²) for the ground state
- entanglement_entropy: von Neumann entropy (if applicable)
- abstractmethod forward(x: Tensor) → Tensor
Forward pass - must be implemented by subclasses.
- Parameters:
x – Input tensor
- Returns:
Model output (predictions for supervised, reconstructions for unsupervised)
Key Features:
- Hermitian Matrix Operations: Initialization, projection, and manipulation
- Error Hamiltonian: Construction of \(H(x) = \frac{1}{2} \sum_k (A_k - x_k I)^2\)
- Ground State Computation: Eigendecomposition and quantum state analysis
- Quantum Expectation Values: Computation of \(\langle \psi | A_k | \psi \rangle\)
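Since forward is abstract, BaseQuantumMatrixTrainer is not instantiated directly; its methods are exercised through a concrete subclass. A minimal sketch, assuming the defaults behave as documented:

import torch
from qgml.learning.unsupervised_trainer import UnsupervisedMatrixTrainer

trainer = UnsupervisedMatrixTrainer(N=8, D=3, seed=0)
x = torch.tensor([0.5, -0.2, 0.1])

H = trainer.compute_error_hamiltonian(x)            # (8, 8) Hermitian matrix
psi0 = trainer.compute_ground_state(x)              # (8,) ground state
expectations = trainer.get_feature_expectations(x)  # (3,) ⟨ψ₀|Aₖ|ψ₀⟩ values
props = trainer.get_quantum_state_properties(x)     # diagnostic dictionary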
UnsupervisedMatrixTrainer
Extends the base framework for unsupervised manifold learning.
- class qgml.learning.unsupervised_trainer.UnsupervisedMatrixTrainer(N: int, D: int, learning_rate: float = 0.001, commutation_penalty: float = 0.1, optimizer_type: str = 'adam', device: str = 'cpu', **kwargs)
Bases: BaseQuantumMatrixTrainer
Unsupervised QMML trainer for manifold learning and dimension estimation.
Learns feature operators {Aₖ} that minimize reconstruction error:
L = Σᵢ ||xᵢ - X_A(xᵢ)||² + λ · commutation_penalty
Where X_A(x) = {⟨ψ₀(x)|Aₖ|ψ₀(x)⟩} is the quantum point cloud.
- __init__(N: int, D: int, learning_rate: float = 0.001, commutation_penalty: float = 0.1, optimizer_type: str = 'adam', device: str = 'cpu', **kwargs)
Initialize unsupervised QMML trainer.
- Parameters:
N – Hilbert space dimension
D – Feature space dimension
learning_rate – Learning rate for optimization
commutation_penalty – Weight for commutator penalty term
optimizer_type – Optimizer type ('adam', 'sgd', 'adamw')
device – Computation device
**kwargs – Additional arguments for base class
- forward(x: Tensor) → Tensor
Forward pass: compute quantum point cloud reconstruction.
- Parameters:
x – Input point of shape (D,)
- Returns:
Reconstructed point X_A(x) of shape (D,)
- compute_reconstruction_loss(points: Tensor) → Tensor
Compute reconstruction loss for a batch of points.
L_reconstruction = (1/n) Σᵢ ||xᵢ - X_A(xᵢ)||², averaged over the n points in the batch (n is the batch size, not the Hilbert dimension N)
- Parameters:
points – Batch of input points, shape (batch_size, D)
- Returns:
Mean reconstruction loss
- compute_commutation_penalty() → Tensor
Compute commutation penalty to encourage classical structure.
Penalty = Σᵢⱼ ||[Aᵢ, Aⱼ]||²_F where [A,B] = AB - BA
Large commutators indicate non-classical quantum correlations. Small commutators suggest classical geometric structure.
- Returns:
Commutation penalty term
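A sketch of this penalty over a list of standalone operators, summing each unordered pair once (the class may use a different pairing convention):

import torch

def commutation_penalty(ops):
    penalty = torch.zeros(())
    for i in range(len(ops)):
        for j in range(i + 1, len(ops)):
            C = ops[i] @ ops[j] - ops[j] @ ops[i]             # [Aᵢ, Aⱼ]
            penalty = penalty + torch.sum(torch.abs(C) ** 2)  # ||·||²_F
    return penalty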
- compute_loss(points: Tensor) → Dict[str, Tensor]
Compute total loss with reconstruction and commutation terms.
L_total = L_reconstruction + λ * L_commutation
- Parameters:
points – Batch of training points, shape (batch_size, D)
- Returns:
Dictionary containing individual loss components and total loss
- train_epoch(points: Tensor, batch_size: int | None = None) → Dict[str, float]
Train for one epoch with optional batching.
- Parameters:
points – Training points, shape (n_points, D)
batch_size – Batch size for training (None = full batch)
- Returns:
Dictionary with epoch metrics
- fit(points: Tensor, n_epochs: int = 200, batch_size: int | None = None, validation_split: float = 0.0, verbose: bool = True, save_history: bool = True) → Dict[str, List[float]]
Train the unsupervised QMML model.
- Parameters:
points – Training data, shape (n_points, D)
n_epochs – Number of training epochs
batch_size – Batch size (None = full batch)
validation_split – Fraction of data for validation
verbose – Whether to print training progress
save_history – Whether to save training history
- Returns:
Training history dictionary
- estimate_intrinsic_dimension(points: Tensor, energy_threshold: float = 0.001) → Dict[str, Any]
Estimate intrinsic dimension by analyzing eigenvalue gaps.
The intrinsic dimension corresponds to the number of small eigenvalues in the average error Hamiltonian spectrum.
- Parameters:
points – Test points for dimension estimation
energy_threshold – Threshold for small eigenvalues
- Returns:
Dictionary with dimension estimation results
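The underlying idea can be sketched from the documented compute_eigensystem method; the helper name and the simple thresholding below are illustrative, and the real method returns a richer dictionary:

import torch

def sketch_estimate_dimension(trainer, points, threshold=1e-3):
    # Average the error-Hamiltonian spectrum over the test points ...
    spectra = torch.stack([trainer.compute_eigensystem(x)[0] for x in points])
    mean_spectrum = spectra.mean(dim=0)
    # ... and count the eigenvalues below the threshold
    return int((mean_spectrum < threshold).sum())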
Key Features:
- Reconstruction Loss: \(L = \sum_i \|x_i - X_A(x_i)\|^2\)
- Commutation Penalty: Regularization via \(\sum_{i,j} \|[A_i, A_j]\|_F^2\)
- Intrinsic Dimension Estimation: Eigenvalue gap analysis
- Manifold Reconstruction: Quantum point cloud generation
SupervisedMatrixTrainer
Implements supervised learning with target operators.
- class qgml.learning.supervised_trainer.SupervisedMatrixTrainer(N: int, D: int, task_type: str = 'regression', loss_type: str = 'mae', learning_rate: float = 0.001, commutation_penalty: float = 0.1, optimizer_type: str = 'adam', device: str = 'cpu', **kwargs)
Bases: BaseQuantumMatrixTrainer
Supervised QMML trainer for regression and classification tasks.
Learns feature operators {Aₖ} and target operator B to minimize:
L = Σᵢ |yᵢ - ⟨ψ₀(xᵢ)|B|ψ₀(xᵢ)⟩|ᵖ + λ · commutation_penalty
Where ψ₀(x) is the ground state of H(x) = 1/2 Σₖ (Aₖ - xₖI)².
- __init__(N: int, D: int, task_type: str = 'regression', loss_type: str = 'mae', learning_rate: float = 0.001, commutation_penalty: float = 0.1, optimizer_type: str = 'adam', device: str = 'cpu', **kwargs)
Initialize supervised QMML trainer.
- Parameters:
N – Hilbert space dimension
D – Feature space dimension
task_type – 'regression' or 'classification'
loss_type – 'mae', 'mse', 'huber', 'cross_entropy'
learning_rate – Learning rate for optimization
commutation_penalty – Weight for commutator penalty term
optimizer_type – Optimizer type ('adam', 'sgd', 'adamw')
device – Computation device
**kwargs – Additional arguments for base class
- forward(x: Tensor) → Tensor
Forward pass: compute prediction via target operator expectation.
ŷ = ⟨ψ₀(x)|B|ψ₀(x)⟩
- Parameters:
x – Input point of shape (D,)
- Returns:
Prediction (scalar for regression, logits for classification)
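The prediction rule is a single expectation value. A sketch, assuming psi0 is the ground state of H(x) and B the learned Hermitian target operator:

# ŷ = ⟨ψ₀(x)|B|ψ₀(x)⟩, real-valued for Hermitian B
y_hat = torch.real(torch.vdot(psi0, B @ psi0))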
- predict_batch(X: Tensor) → Tensor
Predict for a batch of inputs.
- Parameters:
X – Batch of inputs, shape (batch_size, D)
- Returns:
Predictions, shape (batch_size,)
- compute_prediction_loss(X: Tensor, y: Tensor) → Tensor
Compute prediction loss for a batch.
- Parameters:
X – Input batch, shape (batch_size, D)
y – Target batch, shape (batch_size,)
- Returns:
Mean prediction loss
- compute_commutation_penalty() → Tensor
Compute commutation penalty including target operator.
Penalty = Σᵢⱼ ||[Aᵢ, Aⱼ]||²_F + Σᵢ ||[Aᵢ, B]||²_F
- Returns:
Total commutation penalty
- compute_loss(X: Tensor, y: Tensor) → Dict[str, Tensor]
Compute total loss with prediction and commutation terms.
L_total = L_prediction + λ * L_commutation
- Parameters:
X – Input batch, shape (batch_size, D)
y – Target batch, shape (batch_size,)
- Returns:
Dictionary containing loss components
- train_epoch(X: Tensor, y: Tensor, batch_size: int | None = None) → Dict[str, float]
Train for one epoch.
- Parameters:
X – Training inputs, shape (n_samples, D)
y – Training targets, shape (n_samples,)
batch_size – Batch size (None = full batch)
- Returns:
Dictionary with epoch metrics
- evaluate(X: Tensor, y: Tensor) → Dict[str, float]
Evaluate model performance.
- Parameters:
X – Test inputs, shape (n_samples, D)
y – Test targets, shape (n_samples,)
- Returns:
Dictionary with evaluation metrics
- fit(X: Tensor, y: Tensor, n_epochs: int = 200, batch_size: int | None = None, validation_split: float = 0.2, X_val: Tensor | None = None, y_val: Tensor | None = None, verbose: bool = True, save_history: bool = True) → Dict[str, List[float]]
Train the supervised QMML model.
- Parameters:
X – Training inputs, shape (n_samples, D)
y – Training targets, shape (n_samples,)
n_epochs – Number of training epochs
batch_size – Batch size (None = full batch)
validation_split – Fraction for validation (if X_val not provided)
X_val – Validation inputs
y_val – Validation targets
verbose – Whether to print progress
save_history – Whether to save training history
- Returns:
Training history dictionary
Key Features:
- Target Operator Learning: Hermitian matrix \(B\) for predictions
- Prediction Function: \(\hat{y} = \langle \psi_0(x) | B | \psi_0(x) \rangle\)
- Multiple Loss Functions: MAE, MSE, Huber, Cross-entropy
- Regression and Classification: Unified framework for both tasks
Mathematical Foundations
The mathematical foundation of QGML rests on several key concepts:
Quantum Matrix Geometry
Classical data points \(x \in \mathbb{R}^D\) are encoded in quantum states through the error Hamiltonian:
\[ H(x) = \frac{1}{2} \sum_{k=1}^{D} (A_k - x_k I)^2 \]
The ground state \(|\psi_0(x)\rangle\) satisfies:
\[ H(x)\,|\psi_0(x)\rangle = \lambda_{\min}(x)\,|\psi_0(x)\rangle \]
where \(\lambda_{\min}(x)\) is the minimum eigenvalue.
Quantum Point Cloud
The quantum encoding maps classical points to quantum expectation values:
\[ X_A(x) = \big( \langle \psi_0(x) | A_1 | \psi_0(x) \rangle, \ldots, \langle \psi_0(x) | A_D | \psi_0(x) \rangle \big) \]
This creates a point cloud \(\mathcal{D}_X = \{X_A(x^i) | x^i \in \mathcal{X}\}\) in the quantum feature space.
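In code, the point cloud can be assembled from the documented get_feature_expectations method; trainer and X_train here stand for any fitted trainer and data tensor, as in the usage examples below:

# Quantum point cloud: one D-dimensional expectation vector per input point
point_cloud = torch.stack([trainer.get_feature_expectations(x) for x in X_train])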
Loss Functions
Unsupervised Learning:
\[ L = \sum_i \|x_i - X_A(x_i)\|^2 + \lambda \sum_{j,k} \|[A_j, A_k]\|_F^2 \]
Supervised Learning:
\[ L = \sum_i \ell\big(y_i, \langle \psi_0(x_i) | B | \psi_0(x_i) \rangle\big) + \lambda \Big( \sum_{j,k} \|[A_j, A_k]\|_F^2 + \sum_j \|[A_j, B]\|_F^2 \Big) \]
where \(\ell(\cdot, \cdot)\) is the task-specific loss function.
Common Usage Patterns
Basic Initialization
from qgml.core.base_quantum_trainer import BaseQuantumMatrixTrainer
from qgml.learning.unsupervised_trainer import UnsupervisedMatrixTrainer
from qgml.learning.supervised_trainer import SupervisedMatrixTrainer

# Unsupervised manifold learning
unsup_trainer = UnsupervisedMatrixTrainer(
    N=8,                  # Hilbert space dimension
    D=3,                  # Feature dimension
    learning_rate=0.001,
    commutation_penalty=0.1,
)

# Supervised regression
sup_trainer = SupervisedMatrixTrainer(
    N=8, D=3,
    task_type='regression',
    loss_type='mae',
    learning_rate=0.001,
)
Training Workflow
import torch

# Generate or load data
X_train = torch.randn(100, 3)  # 100 samples, 3 features
y_train = torch.randn(100)     # Regression targets

# Unsupervised training
unsup_history = unsup_trainer.fit(
    points=X_train,
    n_epochs=200,
    batch_size=32,
    validation_split=0.2,
)

# Supervised training
sup_history = sup_trainer.fit(
    X=X_train, y=y_train,
    n_epochs=200,
    batch_size=32,
    validation_split=0.2,
)
Analysis and Evaluation
# Unsupervised analysis
dim_results = unsup_trainer.estimate_intrinsic_dimension(X_train)
print(f"Estimated dimension: {dim_results['estimated_intrinsic_dimension']}")
# Reconstruction quality
original, reconstructed = unsup_trainer.reconstruct_manifold(X_train[:10])
reconstruction_error = torch.mean(torch.norm(original - reconstructed, dim=1))
# Supervised evaluation
X_test = torch.randn(20, 3)
y_test = torch.randn(20)
test_metrics = sup_trainer.evaluate(X_test, y_test)
print(f"Test R²: {test_metrics['r2_score']:.4f}")
print(f"Test MAE: {test_metrics['mae']:.4f}")
Advanced Features
Quantum State Analysis
# Analyze quantum properties of learned representations
x_sample = torch.tensor([0.5, -0.2, 0.1])
# Get quantum state properties
properties = unsup_trainer.get_quantum_state_properties(x_sample)
print(f"Ground energy: {properties['ground_energy']:.6f}")
print(f"Energy gap: {properties['energy_gap']:.6f}")
print(f"Reconstruction error: {properties['reconstruction_error']:.6f}")
# Quantum fidelity between states
x1 = torch.tensor([0.0, 0.0, 0.0])
x2 = torch.tensor([0.1, 0.1, 0.1])
fidelity = unsup_trainer.compute_quantum_fidelity(x1, x2)
print(f"Quantum fidelity: {fidelity:.6f}")
Custom Loss Functions
# Custom loss for specialized applications
class CustomQuantumTrainer(SupervisedMatrixTrainer):
    def compute_custom_loss(self, X, y, weights=None):
        """Custom loss with additional regularization."""
        # Standard prediction loss
        pred_loss = self.compute_prediction_loss(X, y)

        # Custom quantum regularization: penalize weight in excited states
        quantum_penalty = 0.0
        for x in X:
            eigenvals, _ = self.compute_eigensystem(x)
            quantum_penalty += torch.sum(eigenvals[1:])  # excited-state energies
        quantum_penalty /= len(X)

        return pred_loss + 0.01 * quantum_penalty
Performance Optimization
Memory Management
# For large systems, manage memory carefully
trainer = UnsupervisedMatrixTrainer(
    N=16,  # Larger Hilbert space
    D=5,
    device='cuda' if torch.cuda.is_available() else 'cpu',
)

# Process data in smaller batches
batch_size = 16  # Adjust based on available memory
n_epochs = 100
for epoch in range(n_epochs):
    for batch_start in range(0, len(X_train), batch_size):
        batch_end = min(batch_start + batch_size, len(X_train))
        X_batch = X_train[batch_start:batch_end]
        # Training step on this batch only
        trainer.train_epoch(X_batch, batch_size=None)
GPU Acceleration
# Automatic GPU detection and usage
device = 'cuda' if torch.cuda.is_available() else 'cpu'
trainer = SupervisedMatrixTrainer(
    N=8, D=3,
    device=device,
)

# Ensure data is on the same device as the model
X_train = X_train.to(device)
y_train = y_train.to(device)

# Training runs on the GPU when one is available
history = trainer.fit(X_train, y_train, n_epochs=100)
Error Handling and Debugging
Common Issues
- Numerical Instability: Large condition numbers in matrices
- Memory Overflow: Large Hilbert space dimensions
- Convergence Problems: Poor initialization or learning rates
- Device Mismatch: Tensors on different devices (see the sketches below)
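A few illustrative mitigation sketches for the issues above (not library utilities; X_train and H stand for the data tensor and an error Hamiltonian from earlier examples):

import torch

# Device mismatch: keep data on the trainer's device before training.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
X_train = X_train.to(device)

# Numerical instability: eigendecompose in double precision, then cast back.
H64 = H.to(torch.complex128)
evals, evecs = torch.linalg.eigh(H64)
psi0 = evecs[:, 0].to(torch.complex64)

# Memory overflow: prefer a smaller Hilbert dimension N or smaller batches
# (see "Memory Management" above).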
Debugging Tools
# Check model health
def debug_quantum_trainer(trainer, X_sample):
    """Debug quantum trainer state."""
    print("=== Quantum Trainer Debug ===")

    # Check feature operators (Hermitian, so eigvalsh gives real eigenvalues)
    for i, A in enumerate(trainer.feature_operators):
        eigenvals = torch.linalg.eigvalsh(A)
        print(f"Feature operator {i}:")
        print(f"  Eigenvalue range: [{torch.min(eigenvals):.4f}, {torch.max(eigenvals):.4f}]")
        abs_vals = torch.abs(eigenvals)
        print(f"  Condition number: {torch.max(abs_vals) / torch.min(abs_vals):.2f}")

    # Check ground state computation
    x = X_sample[0]
    try:
        psi = trainer.compute_ground_state(x)
        norm = torch.norm(psi)
        print(f"Ground state norm: {norm:.6f}")

        # Check the error-Hamiltonian spectrum
        eigenvals, _ = trainer.compute_eigensystem(x)
        print(f"Ground energy: {eigenvals[0]:.6f}")
        print(f"Energy gap: {eigenvals[1] - eigenvals[0]:.6f}")
    except Exception as e:
        print(f"Error in ground state computation: {e}")

# Usage
debug_quantum_trainer(trainer, X_train[:5])
Extending the Framework
Creating Custom Trainers
class MyCustomTrainer(BaseQuantumMatrixTrainer):
    """Custom trainer for specialized applications."""

    def __init__(self, N, D, custom_param=1.0, **kwargs):
        super().__init__(N, D, **kwargs)
        self.custom_param = custom_param
        # Custom initialization
        self.initialize_custom_operators()

    def initialize_custom_operators(self):
        """Custom operator initialization."""
        # Override default initialization if needed
        pass

    def forward(self, x):
        """Custom forward pass."""
        # Implement custom prediction/reconstruction logic
        return self.get_feature_expectations(x)

    def compute_loss(self, X, y=None):
        """Custom loss function."""
        # Implement domain-specific loss
        pass
Adding Custom Analysis
class CustomAnalyzer:
    """Custom analysis module."""

    def __init__(self, trainer):
        self.trainer = trainer

    def analyze_custom_property(self, X):
        """Analyze custom quantum property."""
        results = []
        for x in X:
            psi = self.trainer.compute_ground_state(x)
            # Compute custom quantum property of the ground state
            custom_value = self.compute_custom_measure(psi)
            results.append(custom_value)
        return torch.tensor(results)

    def compute_custom_measure(self, psi):
        """Compute custom quantum measure."""
        # Implement custom analysis (placeholder example)
        return torch.real(torch.sum(psi**2))
See Also
Quantum Geometry Trainer API - Advanced quantum geometric features
Topological Analysis API - Topological invariant computation
Quantum Information Analysis API - Quantum information measures
../user_guide/tutorials - Step-by-step tutorials
../math/quantum_matrix_geometry - Mathematical foundations