QGML Testing and Validation Guide

This section documents how to test and validate QGML framework components.

Overview

The QGML testing framework ensures:

  • Mathematical Correctness: Quantum state computations and geometric properties

  • Numerical Stability: Robust behavior across different parameter ranges

  • Performance Validation: Efficient computation and memory usage

  • Integration Reliability: Seamless component interactions

Test Categories

Integration Tests

End-to-end validation of complete workflows:
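
A minimal sketch of such a test, assuming the SupervisedMatrixTrainer API shown in the benchmarking section below; that fit returns a per-epoch loss history is an assumption, adjust to the actual interface:

# tests/test_integration/test_training_workflow.py
import torch
from qgml.learning.supervised_trainer import SupervisedMatrixTrainer

def test_end_to_end_training():
    """Train on synthetic data and check that optimization makes progress."""
    torch.manual_seed(0)  # fixed seed for reproducibility
    trainer = SupervisedMatrixTrainer(N=4, D=2)
    X = torch.randn(64, 2)
    y = torch.randn(64)
    history = trainer.fit(X, y, n_epochs=10)  # assumption: fit returns per-epoch losses
    assert history[-1] <= history[0]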

Unit Tests

Individual component validation:
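
For example, a sketch that checks constructor behavior in isolation; that the trainer exposes N and D as attributes is an assumption about its interface:

# tests/test_unit/test_trainer_init.py
from qgml.learning.supervised_trainer import SupervisedMatrixTrainer

def test_trainer_initialization():
    """Constructing a trainer should store the requested dimensions."""
    trainer = SupervisedMatrixTrainer(N=8, D=3)
    assert trainer.N == 8  # assumption: dimensions are exposed as attributes
    assert trainer.D == 3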

Validation Tests

Mathematical and numerical correctness:
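
The sketch below shows the kind of invariant these tests assert, using a generic pure-state density matrix rather than any specific QGML API:

# tests/test_validation/test_state_properties.py
import torch

def test_density_matrix_properties():
    """A valid density matrix is Hermitian, unit-trace, and positive semi-definite."""
    torch.manual_seed(0)
    psi = torch.randn(8, dtype=torch.complex64)
    psi = psi / torch.linalg.norm(psi)  # normalized state vector
    rho = torch.outer(psi, psi.conj())  # pure-state density matrix
    assert torch.allclose(rho, rho.conj().T, atol=1e-6)                          # Hermitian
    assert torch.allclose(torch.trace(rho).real, torch.tensor(1.0), atol=1e-6)   # unit trace
    assert (torch.linalg.eigvalsh(rho) >= -1e-6).all()                           # PSD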

Running Tests

Basic Test Execution

# Run all tests
pytest

# Run specific test category
pytest tests/test_integration/

# Run with verbose output
pytest -v

# Run with coverage report
pytest --cov=qgml --cov-report=html

Test Configuration

# pytest.ini configuration
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = -v --tb=short

Continuous Integration

GitHub Actions

Automated testing on every push and pull request:

name: QGML Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11"]  # quoted so YAML does not read 3.10 as 3.1
    steps:
    - uses: actions/checkout@v4
    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        pip install -e .
        pip install pytest pytest-cov
    - name: Run tests
      run: pytest --cov=qgml

Performance Benchmarking

Benchmark Suite

Comprehensive performance testing:

# benchmarks/performance_tests.py
import time
import torch
from qgml.learning.supervised_trainer import SupervisedMatrixTrainer

def benchmark_training_time():
    """Benchmark training performance."""
    trainer = SupervisedMatrixTrainer(N=8, D=3)
    X = torch.randn(1000, 3)
    y = torch.randn(1000)

    start = time.perf_counter()  # monotonic clock; preferred over time.time() for benchmarks
    trainer.fit(X, y, n_epochs=100)
    return time.perf_counter() - start
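
If the pytest-benchmark plugin is available (an assumption, not a stated project dependency), the same measurement can be made repeatable with calibrated rounds:

# benchmarks/test_fit_benchmark.py
import torch
from qgml.learning.supervised_trainer import SupervisedMatrixTrainer

def test_fit_speed(benchmark):
    """Time trainer.fit with pytest-benchmark's calibrated runner."""
    trainer = SupervisedMatrixTrainer(N=8, D=3)
    X = torch.randn(1000, 3)
    y = torch.randn(1000)
    benchmark(trainer.fit, X, y, n_epochs=10)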

Memory Profiling

Memory usage analysis:

# tests/test_memory.py
import memory_profiler
from qgml.geometry.quantum_geometry_trainer import QuantumGeometryTrainer

@memory_profiler.profile
def test_memory_usage():
    """Profile memory usage during training."""
    trainer = QuantumGeometryTrainer(N=16, D=4)
    # Training code here

if __name__ == "__main__":
    test_memory_usage()  # invoke directly so the profiler has something to measure
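
With the decorator in place, a line-by-line memory report can be produced by running the module directly (memory_profiler must be installed):

python -m memory_profiler tests/test_memory.py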

Troubleshooting

Common Issues

Import Errors

Ensure QGML is installed (e.g. with pip install -e .) and importable from your Python path.

Backend Issues

Check PyTorch/JAX installation and backend configuration.

Memory Issues

Reduce batch size or Hilbert space dimension for large models.

Numerical Instability

Adjust learning rates and regularization parameters.
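
A hypothetical sketch of the last two fixes; the learning_rate keyword and the effect of a smaller N are assumptions about the trainer API, not documented behavior:

# Hypothetical tuning sketch; keyword names are assumptions
from qgml.geometry.quantum_geometry_trainer import QuantumGeometryTrainer

trainer = QuantumGeometryTrainer(
    N=8,                 # smaller Hilbert space dimension to reduce memory
    D=4,
    learning_rate=1e-4,  # lower learning rate to damp numerical instability
)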

Debug Mode

# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)

# Run tests with debug output
pytest -s --log-cli-level=DEBUG

Test Data

Synthetic Data

Standard test datasets for validation:

from qgml.utils.data_generation import (
    generate_spiral_data,
    generate_sphere_data,
    generate_hypercube_data
)

# Generate test datasets
X_spiral = generate_spiral_data(n_points=200, noise=0.1)
X_sphere = generate_sphere_data(n_points=150, radius=1.0)
X_hypercube = generate_hypercube_data(n_points=100, dim=3)

Real Data Integration

Integration with real-world datasets:

  • Genomics Data: Chromosomal instability analysis

  • Performance Benchmarks: Backend comparison data

  • Validation Datasets: Standard ML benchmarks

Best Practices

Test Design

  1. Isolation: Tests should not depend on each other

  2. Determinism: Use fixed random seeds for reproducible results (see the fixture sketch after this list)

  3. Coverage: Aim for >90% code coverage

  4. Performance: Keep test execution time reasonable
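
A minimal conftest.py sketch supporting point 2, assuming tests draw randomness from torch and numpy:

# tests/conftest.py
import numpy as np
import pytest
import torch

@pytest.fixture(autouse=True)
def fixed_seeds():
    """Seed every RNG before each test so results are reproducible."""
    torch.manual_seed(0)
    np.random.seed(0)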

Test Documentation

  1. Clear Purpose: Document what each test validates (see the docstring example after this list)

  2. Expected Behavior: Specify expected outcomes

  3. Edge Cases: Test boundary conditions and error cases

  4. Performance Expectations: Include timing and memory benchmarks
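
A short sketch of a docstring covering all four points; the tolerance and timing figures are illustrative, not measured values:

# Docstring structure covering purpose, expectations, edge cases, and performance
def test_state_normalization():
    """Validate that trained quantum states remain normalized.

    Expected: every state vector has unit norm within 1e-6.
    Edge cases: N=1 (trivial Hilbert space) and large N.
    Performance: should complete in well under a second on CPU.
    """
    ...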

Contributing

Adding New Tests

  1. Follow naming convention: test_<component>_<functionality>.py

  2. Include comprehensive docstrings

  3. Add to continuous integration pipeline

  4. Update documentation

Test Review Process

  1. Code Review: All test code must be reviewed

  2. Coverage Analysis: Ensure adequate test coverage

  3. Performance Impact: Verify tests don’t slow down CI

  4. Documentation Updates: Update test documentation
