Augmented Krylov subspace methods are a family of techniques which have been proposed for solving both well-posed and ill-posed linear problems, with mathematically similar methods at times being proposed independently in different communities. However, the goals when using such methods in these two contexts differ. In any Krylov subspace method, one iteratively generates a subspace from which solution approximations are drawn. In an augmentation method, one seeks to enrich this space with additional information. For well-posed problems, this is done both to damp the influence of parts of the spectrum which can slow convergence and to mitigate the effects of "restarting", wherein one must discard the generated subspace due to memory constraints. For ill-posed problems (e.g., in image and signal reconstruction), these methods have been proposed to improve the reconstruction and to accelerate semiconvergence, particularly when one augments with known sharp-edge (i.e., large gradient-norm) features and jumps.
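To make the idea concrete, here is a minimal sketch (not from the talk itself) of one common augmentation pattern: build a Krylov basis, adjoin user-supplied augmentation vectors, and draw the approximation from the enlarged space by least-squares projection. The names `arnoldi` and `augmented_lsq_solve` are illustrative choices, not methods defined in this abstract.

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V for the Krylov subspace
    K_m(A, b) = span{b, A b, ..., A^(m-1) b}, via modified Gram-Schmidt."""
    n = b.size
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against earlier basis vectors
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

def augmented_lsq_solve(A, b, W, m):
    """Sketch: draw the approximation from span(W) + K_m(A, b) by
    least-squares projection. W holds the augmentation vectors, e.g.
    known sharp-edge features in an ill-posed reconstruction problem."""
    V = arnoldi(A, b, m)
    Z = np.hstack([W, V])               # augmented search space
    y, *_ = np.linalg.lstsq(A @ Z, b, rcond=None)
    return Z @ y
```

Since the augmented space contains the plain Krylov space, the least-squares residual can only decrease (or stay the same) when augmentation vectors are added.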
In this talk, we place all of these methods into a common framework, which allows us to relate them to one another more easily. This understanding can then be used to standardize the process of combining an augmentation strategy with other existing iterative methods, which we demonstrate with, e.g., the Arnoldi-Tikhonov method. Numerical examples from both the well-posed and ill-posed problem communities will be presented.
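For orientation, the (unaugmented) Arnoldi-Tikhonov method mentioned above can be sketched as follows, assuming the standard formulation: project the Tikhonov problem min ||A x - b||^2 + lam^2 ||x||^2 onto the Krylov subspace using the Arnoldi relation A V_m = V_{m+1} H_m, then solve the small regularized least-squares problem. This is an illustrative sketch, not the specific variant presented in the talk.

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi relation A @ V[:, :m] = V @ H, with V (n x (m+1))
    orthonormal and H ((m+1) x m) upper Hessenberg."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, m, lam):
    """Sketch: solve min ||H y - beta e1||^2 + lam^2 ||y||^2 in the
    Krylov coordinates y, then map back via x = V[:, :m] @ y."""
    V, H = arnoldi(A, b, m)
    rhs = np.zeros(m + 1)
    rhs[0] = np.linalg.norm(b)              # beta e1
    # stacked least-squares form of the regularized projected problem
    M = np.vstack([H, lam * np.eye(m)])
    r = np.concatenate([rhs, np.zeros(m)])
    y, *_ = np.linalg.lstsq(M, r, rcond=None)
    return V[:, :m] @ y
```

With lam = 0 this reduces to a GMRES-like projection; increasing lam shrinks the norm of the computed solution, which is the usual Tikhonov trade-off.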