GALAHAD: Geometry-Adaptive Lyapunov-Assured Hybrid Optimizer

Author: Richard A. Feiss
Version: 1.0.0
License: MIT
Institution: Minnesota Center for Prion Research and Outreach (MNPRO), University of Minnesota


Overview

GALAHAD is a geometry-aware optimizer designed for models with heterogeneous parameter spaces — combining log-scaled, positive-only, and unconstrained Euclidean variables.
Conventional solvers assume a uniform Euclidean structure, often causing instability in biological model fitting.
This package introduces a Lyapunov-stable framework that adapts each update to the parameter’s geometry, improving convergence on small, noisy, or ill-conditioned datasets.
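
The difficulty is easiest to see on a single positive-only parameter: a uniform Euclidean step can leave the feasible region entirely, while a step taken in the parameter's own (log) geometry cannot. The snippet below is a minimal base-R illustration of that point only, not GALAHAD code; the parameter value, gradient, and step size are made up for the example.

    # Illustration only (not the GALAHAD API): a uniform Euclidean step can
    # push a strictly positive parameter out of its feasible region, while a
    # log-space (multiplicative) step keeps it positive by construction.
    rate <- 0.05                                 # strictly positive parameter, e.g. a germination rate
    grad <- 4                                    # hypothetical gradient of the loss at this point
    step <- 0.02                                 # hypothetical step size

    rate_euclidean <- rate - step * grad         # -0.03: negative, i.e. infeasible
    rate_logspace  <- rate * exp(-step * grad)   # ~0.046: still a valid rate

    c(euclidean = rate_euclidean, log_space = rate_logspace)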

The algorithm originated during germination model fitting under contaminant exposure at MNPRO.
Earlier work on the Osmotic Stress Response Index (OSRI) and Prion Stress Response Index (PSRI) revealed that mixed-geometry parameters produced divergence in standard optimizers.
Through iterative refinement and stability monitoring, the workflow evolved into a general-purpose optimization framework now formalized as GALAHAD 1.0.0.


Algorithmic Core

Component                 Description
------------------------  ------------------------------------------------------
Per-geometry updates      Log-space natural gradient (T), entropy mirror
                          descent (P), Euclidean descent (E)
Trust-region projection   Limits step length by curvature and scaling
Lyapunov stability check  Ensures ΔV ≤ 0 at every iteration
Step-size control         Combines Polyak and Barzilai–Borwein heuristics
                          for adaptive rates
Halpern averaging         Reduces oscillations in small or noisy datasets
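
To make these components concrete, below is a minimal sketch in base R of one iteration that strings them together. It is illustrative only and is not the package's internal code: the per-geometry updates are simplified to a single reparameterization (log-scaled and positive-only parameters are assumed to have already been mapped to log coordinates, so the step is taken on an unconstrained working vector z), the trust-region radius is fixed, and the Polyak fallback assumes the loss is non-negative so that zero can serve as its lower bound. The function and variable names (galahad_like_step, loss, grad) are hypothetical.

    # Sketch only (not the GALAHAD implementation). `loss` is a scalar
    # objective of an unconstrained working vector z; `g` and `g_prev`
    # are its gradients at the current iterate z and the previous one.
    galahad_like_step <- function(z, z_prev, g, g_prev, loss,
                                  radius = 0.5, anchor = NULL, k = 1) {
      # Step-size control: Barzilai-Borwein when the curvature pair (s, y)
      # is informative, otherwise a Polyak-style step that treats 0 as a
      # lower bound on the loss (valid for sum-of-squares objectives).
      s   <- z - z_prev
      y   <- g - g_prev
      eta <- if (sum(s * y) > 1e-12) {
        sum(s * s) / sum(s * y)               # BB1 step size
      } else {
        loss(z) / max(sum(g * g), 1e-12)      # Polyak step with f* taken as 0
      }

      # Candidate descent step, then trust-region projection: cap the length.
      step <- -eta * g
      len  <- sqrt(sum(step^2))
      if (len > radius) step <- step * (radius / len)
      z_new <- z + step

      # Lyapunov stability check: require Delta V <= 0 with V = loss,
      # backtracking (halving the step) whenever the loss would increase.
      while (loss(z_new) > loss(z) && sqrt(sum(step^2)) > 1e-12) {
        step  <- step / 2
        z_new <- z + step
      }

      # Halpern averaging: blend the iterate with a fixed anchor using
      # weight 1/(k + 1) to damp oscillations on small or noisy problems.
      if (!is.null(anchor)) {
        lam   <- 1 / (k + 1)
        z_new <- lam * anchor + (1 - lam) * z_new
      }
      z_new
    }

A toy run on a two-parameter exponential-decay fit, with both positive-only parameters handled in log coordinates and gradients approximated by central finite differences:

    # Hypothetical data: y = 2 * exp(-0.3 * t) plus a little noise.
    set.seed(1)
    t_obs <- 1:10
    y_obs <- 2 * exp(-0.3 * t_obs) + rnorm(length(t_obs), sd = 0.02)

    # Sum-of-squares loss in log coordinates: z = (log amplitude, log rate).
    loss <- function(z) sum((y_obs - exp(z[1]) * exp(-exp(z[2]) * t_obs))^2)
    grad <- function(z, h = 1e-6) {
      vapply(seq_along(z), function(i) {
        e <- replace(numeric(length(z)), i, h)
        (loss(z + e) - loss(z - e)) / (2 * h)
      }, numeric(1))
    }

    z_prev <- c(log(1), log(0.1))
    g_prev <- grad(z_prev)
    z      <- z_prev + 0.05
    anchor <- z
    for (k in 1:200) {
      g      <- grad(z)
      z_next <- galahad_like_step(z, z_prev, g, g_prev, loss, anchor = anchor, k = k)
      z_prev <- z
      g_prev <- g
      z      <- z_next
    }
    exp(z)   # amplitude and decay-rate estimates, close to the true 2 and 0.3

In this sketch the Halpern anchor is simply the starting point; its weight decays like 1/(k + 1), so later iterations are dominated by the trust-region descent steps.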

Applications

GALAHAD was developed during germination model fitting under contaminant exposure at MNPRO, including the Osmotic Stress Response Index (OSRI) and Prion Stress Response Index (PSRI) analyses. More generally, it is intended for fitting models whose parameters mix log-scaled, positive-only, and unconstrained components, particularly with small, noisy, or ill-conditioned datasets.


Development Transparency

Development followed an iterative human–machine refinement process.
All mathematical design, algorithmic logic, and validation were performed by the author.
AI tools were used solely to improve documentation structure, grammar, and reproducibility wording.

Interactive sessions with Anthropic Claude (Sonnet 4.5) and OpenAI GPT-5 supported this editorial and diagnostic work.

AI systems did not generate algorithms, mathematical content, or scientific results — they functioned only as editorial and diagnostic assistants under continuous human direction.


Acknowledgements

Developed at the Minnesota Center for Prion Research and Outreach (MNPRO), University of Minnesota.
This project is independent of the Fortran “GALAHAD” library by Gould et al.
All work, testing, and validation were conducted in R 4.4.0+ under Windows 11.


References

Amari, S. (1998). Natural gradient works efficiently in learning. Neural Computation, 10(2), 251–276.
Beck, A., & Teboulle, M. (2003). Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3), 167–175.
Conn, A. R., Gould, N. I. M., & Toint, P. L. (2000). Trust-Region Methods. SIAM.
Schulman, J., Levine, S., Moritz, P., Jordan, M. I., & Abbeel, P. (2015). Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML).
Walne, P. L., et al. (2020). In vitro seed germination response of corn hybrids to osmotic stress conditions. Agrosystems, Geosciences & Environment, 3(1), e20087. https://doi.org/10.1002/agg2.20087
Wilson, A. C., Recht, B., & Jordan, M. I. (2016). A Lyapunov analysis of momentum methods in optimization. arXiv preprint arXiv:1611.02635.