Profile
Dr. Andreas Tillmann
I am a postdoc at the Visual Computing Institute and the Chair of Operations Research (see my homepage there), and head the "Data" unit of the profile area CompSE (Computational Science and Engineering) at RWTH Aachen University. I received my doctoral degree from TU Darmstadt in December 2013 and was an interim/visiting professor of mathematical optimization at TU Braunschweig in the winter term 2014/15. I am one of the organizers of the Workshop on Optimization, Machine Learning and Data Science, taking place at TU Braunschweig on April 12-13, 2018.
My research interests focus on theoretical and practical aspects of optimization and complexity theory in areas like compressed sensing, signal and image processing, machine learning, operations research, and (discrete) geometrical problems. I am affiliated with the project "EXPRESS – EXploiting structure in comPREssed Sensing using Side constraints" within the DFG priority program CoSIP ("Compressed Sensing in Information Processing", SPP 1798). I am also the principal investigator of the project "Efficient exact maximum-likelihood decoding and minimum-distance computation for binary linear codes" funded by the RWTH ERS StartUp/DFG Excellence Initiative.
You can also find me on Google Scholar and on ResearchGate.
Presentations
Event | Type | Title |
---|---|---|
SPARS 2019 | Poster | Structured Discrete Shape Approximation |
SPARS 2019 | Poster | Exact Recovery of Partially Sparse Signals |
EURO 2019 | Invited Talk | Airplane Boarding: Complexity, Approximation and Exact Solution |
ISMP 2018 | Invited Talk | Spark-MIP: Mixed-Integer Programming for the (Vector) Matroid Girth Problem |
SPARS 2017 | Talk | Computing the Spark of a Matrix |
SPARS 2017 | Poster | L1-HOUDINI: A New Homotopy Method for L1-Minimization |
The Aussois C.O.W. 2017 | Talk | Sparse Signal Reconstruction with Integrality Constraints |
DWCAA 2016 | Invited Talk | Exploiting hidden sparsity for image reconstruction from nonlinear measurements |
Data Science meets Optimization 2016 | Invited Talk | New Applications of Sparsity-Based Learning |
ICASSP 2016 | Poster | Dictionary Learning from Phaseless Measurements |
CSA 2015 | Poster | Dictionary Learning for Phase Retrieval |
ISMP 2015 | Invited Talk | Branch & Cut Methods for Exact Sparse Recovery |
SPARS 2015 | Poster | Heuristic Optimality Checks for Noise-Aware Sparse Recovery by L1-Minimization |
GAMM 2015 | Invited Talk | Computational Aspects of Sparse Recovery |
Sparse Tomographic Reconstruction 2015 | Invited Talk | Computational Aspects of Sparse Recovery |
ICASSP 2014 | Poster | Projection onto the Cosparse Set is NP-hard |
SPARS 2013 | Poster | Projection onto the k-Cosparse Set is NP-hard |
SPARS 2013 | Talk | The Computational Complexity of Spark, RIP, and NSP (Highlight Talk) |
MATHEON-W. Sparse Representation [...] 2012 | Invited Talk | Branch & Cut for L0-Minimization |
ISMP 2012 | Invited Talk | Heuristic Optimality Check and Computational Solver Comparison for Basis Pursuit |
SIAM Conf. on Applied Linear Algebra 2012 | Invited Talk | Solving Basis Pursuit |
SPARS 2011 | Poster | An Infeasible-Point Subgradient Algorithm and a Computational Solver Comparison for l1-Minimization |
SIAM Conf. on Optimization 2011 | Poster | An Infeasible-Point Subgradient Method Using Approximate Projections |
CSSIP 2010 | Talk | An Infeasible-Point Subgradient Method and Computational Comparison for L1-Minimization |
Publications
Structured Discrete Shape Approximation: Theoretical Complexity and Practical Algorithm
We consider the problem of approximating a two-dimensional shape contour (or curve segment) using discrete assembly systems, which allow one to build geometric structures based on limited sets of node and edge types subject to edge length and orientation restrictions. We show that already deciding feasibility of such approximation problems is NP-hard, and that it remains intractable even for very simple setups. We then devise an algorithmic framework that combines shape sampling with exact cardinality minimization to obtain good approximations using few components. As a particular application and showcase example, we discuss approximating shape contours using the classical Zometool construction kit and provide promising computational results, demonstrating that our algorithm is capable of obtaining good shape representations within reasonable time, in spite of the problem's general intractability. We conclude the paper with an outlook on possible extensions of the developed methodology, in particular regarding 3D shape approximation tasks.
Code available upon request.
@article{TillmannKobbelt2021,
author = {A. M. Tillmann and L. Kobbelt},
title = {{Structured Discrete Shape Approximation: Theoretical Complexity and Practical Algorithm}},
journal = {{Computational Geometry}},
volume = {99},
pages = {Art. No. 101795},
year = {2021},
note = {DOI: 10.1016/j.comgeo.2021.101795}
}
Computing the Spark: Mixed-Integer Programming for the (Vector) Matroid Girth Problem
We investigate the NP-hard problem of computing the spark of a matrix (i.e., the smallest number of linearly dependent columns), a key parameter in compressed sensing and sparse signal recovery. To that end, we identify polynomially solvable special cases, gather upper and lower bounding procedures, and propose several exact (mixed-)integer programming models and linear programming heuristics. In particular, we develop a branch-and-cut scheme to determine the girth of a matroid, focusing on the vector matroid case, for which the girth is precisely the spark of the representation matrix. Extensive numerical experiments demonstrate the effectiveness of our specialized algorithms compared to general-purpose black-box solvers applied to several mixed-integer programming models.
Code and test instances available upon request; they will be made directly available on this page in the near future.
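To illustrate the combinatorial nature of the problem, here is a brute-force spark computation in Python. This is only a didactic sketch, not the specialized branch-and-cut method from the paper: it enumerates column subsets by increasing size and checks for a rank deficiency, which is viable only for very small matrices.

```python
import itertools
import numpy as np

def spark_bruteforce(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A,
    or n + 1 if all n columns are linearly independent."""
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # a subset of k columns is dependent iff its rank is below k
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1
```

For example, for the matrix with columns (1,0), (0,1), (1,1), any two columns are independent but all three are dependent, so the spark is 3.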
@article{Tillmann2019,
author = {Andreas M. Tillmann},
title = {{Computing the Spark: Mixed-Integer Programming for the (Vector) Matroid Girth Problem}},
journal = {{Computational Optimization and Applications}},
volume = {to appear},
year = {2019}
}
Joint Antenna Selection and Phase-Only Beamforming Using Mixed-Integer Nonlinear Programming
In this paper, we consider the problem of joint antenna selection and analog beamformer design in a downlink single-group multicast network. Our objective is to reduce the hardware requirement by minimizing the number of required phase shifters at the transmitter while fulfilling given quality-of-service constraints. We formulate the problem as an L0-minimization problem and devise a novel branch-and-cut based algorithm to solve the formulated mixed-integer nonlinear program to optimality. We also propose a suboptimal heuristic algorithm that solves the above problem with low computational complexity. Furthermore, the performance of the suboptimal method is evaluated against the developed optimal method, which serves as a benchmark.
@inproceedings{FischerHegdeMatterPesaventoPfetschTillmann2018,
author = {T. Fischer and G. Hegde and F. Matter and M. Pesavento and M. E. Pfetsch and A. M. Tillmann},
title = {{Joint Antenna Selection and Phase-Only Beamforming Using Mixed-Integer Nonlinear Programming}},
booktitle = {{Proc. WSA 2018}},
year = {2018},
note = {to appear}
}
A Primal-Dual Homotopy Algorithm for L1-Minimization with L-infinity-Constraints
In this paper we propose a primal-dual homotopy method for L1-minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints and we show that there exists a piecewise linear solution path with finitely many break points for the primal problem and a respective piecewise constant path for the dual problem. We show that by solving a small linear program, one can jump to the next primal break point and then, solving another small linear program, a new optimal dual solution is calculated which enables the next such jump in the subsequent iteration. Using a theorem of the alternative, we show that the method never gets stuck and indeed calculates the whole path in a finite number of steps. Numerical experiments demonstrate the effectiveness of our algorithm. In many cases, our method significantly outperforms commercial LP solvers; this is possible since our approach employs a sequence of considerably simpler auxiliary linear programs that can be solved efficiently with specialized active-set strategies.
Matlab-Code and test instances for our method can be found below.
@article{BrauerLorenzTillmann2018,
author = {C. Brauer and D. A. Lorenz and A. M. Tillmann},
title = {{A Primal-Dual Homotopy Algorithm for $\ell_1$-Minimization with $\ell_\infty$-Constraints}},
journal = {{Computational Optimization and Applications}},
volume = {70},
number = {2},
pages = {443--478},
note = {DOI: 10.1007/s10589-018-9983-4},
year = {2018}
}
Sparse Recovery with Integrality Constraints
In this paper, we investigate conditions for the unique recoverability of sparse integer-valued signals from few linear measurements. Both the objective of minimizing the number of nonzero components (the so-called L0-norm) and its popular substitute, the L1-norm, are covered. Furthermore, integer constraints and possible bounds on the variables are investigated. Our results show that the additional prior knowledge of signal integrality allows for recovering more signals than what can be guaranteed by the established recovery conditions from (continuous) compressed sensing. Moreover, even though the considered problems are NP-hard in general (even with an L1-objective), we investigate testing the L0-recovery conditions via some numerical experiments; it turns out that the corresponding problems are quite hard to solve in practice. However, medium-sized instances of L0- and L1-minimization with binary variables can be solved exactly within reasonable time.
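As a rough illustration of the binary case, L0-minimization with binary variables can be posed directly as a small mixed-integer linear program, since for x in {0,1}^n the number of nonzeros equals the sum of the entries. The sketch below uses SciPy's generic MILP interface purely for illustration; it is not the code used in the paper.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def binary_l0_min(A, b):
    """min 1^T x  s.t.  A x = b,  x in {0,1}^n.
    For binary x, the objective equals the number of nonzero entries."""
    n = A.shape[1]
    res = milp(c=np.ones(n),
               constraints=LinearConstraint(A, b, b),  # equality: b <= Ax <= b
               integrality=np.ones(n),                 # all variables integer ...
               bounds=Bounds(0, 1))                    # ... and binary
    return res.x
```

For instance, with A = [[1,2,3],[3,2,1]] and b = (4,4), the unique binary solution is x = (1,0,1) with two nonzeros.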
@misc{LangePfetschSeibTillmann2016,
author = {J.-H. Lange and M. E. Pfetsch and B. M. Seib and A. M. Tillmann},
title = {{Sparse Recovery with Integrality Constraints}},
howpublished = {arXiv:1608.08678 [cs.IT]},
month = {August},
year = {2016}
}
DOLPHIn -- Dictionary Learning for Phase Retrieval
We propose a new algorithm to learn a dictionary for reconstructing and sparsely encoding signals from measurements without phase. Specifically, we consider the task of estimating a two-dimensional image from squared-magnitude measurements of a complex-valued linear transformation of the original image. Several recent phase retrieval algorithms exploit underlying sparsity of the unknown signal in order to improve recovery performance. In this work, we consider such a sparse signal prior in the context of phase retrieval, when the sparsifying dictionary is not known in advance. Our algorithm jointly reconstructs the unknown signal (possibly corrupted by noise) and learns a dictionary such that each patch of the estimated image can be sparsely represented. Numerical experiments demonstrate that our approach can obtain significantly better reconstructions for phase retrieval problems with noise than methods that cannot exploit such "hidden" sparsity. Moreover, on the theoretical side, we provide a convergence result for our method.
Accompanying software to be found below:
- DOLPHIn - Matlab implementation of the dictionary learning method for 2D noisy (sparse) phase retrieval. (Version 1.10 of 07/25/2016; requires SPAMS package obtainable from http://spams-devel.gforge.inria.fr/)
- the test images from the paper
- a document with lots of result tables of supplementary numerical experiments
@article{TillmannEldarMairal2016b,
author = {A. M. Tillmann and Y. C. Eldar and J. Mairal},
title = {{DOLPHIn -- Dictionary Learning for Phase Retrieval}},
journal = {{IEEE Transactions on Signal Processing}},
volume = {64},
number = {24},
pages = {6485--6500},
year = {2016},
note = {DOI: 10.1109/TSP.2016.2607180}
}
Dictionary Learning from Phaseless Measurements
We propose a new algorithm to learn a dictionary along with sparse representations from signal measurements without phase. Specifically, we consider the task of reconstructing a two-dimensional image from squared-magnitude measurements of a complex-valued linear transformation of the original image. Several recent phase retrieval algorithms exploit underlying sparsity of the unknown signal in order to improve recovery performance. In this work, we consider sparse phase retrieval when the sparsifying dictionary is not known in advance, and we learn a dictionary such that each patch of the reconstructed image can be sparsely represented. Our numerical experiments demonstrate that our proposed scheme can obtain significantly better reconstructions for noisy phase retrieval problems than methods that cannot exploit such "hidden" sparsity.
@inproceedings{TillmannEldarMairal2016a,
author = {A. M. Tillmann and Y. C. Eldar and J. Mairal},
title = {{Dictionary Learning from Phaseless Measurements}},
booktitle = {{Proc. ICASSP 2016}},
pages = {4702--4706},
year = {2016},
note = {DOI: 10.1109/ICASSP.2016.7472569}
}
Equivalence of Linear Programming and Basis Pursuit
In this note, we show that linear programming and the prominent Basis Pursuit problem (i.e., minimizing the L1-norm of a vector x subject to an underdetermined linear equation system Ax = b) are theoretically equivalent, and briefly discuss possible ramifications regarding computational complexity and practical applicability.
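One direction of the equivalence is the classical reformulation of Basis Pursuit as a linear program via variable splitting x = u - v with u, v >= 0; a minimal sketch using SciPy's LP interface (illustrative only, not the code from the note):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, b):
    """min ||x||_1 s.t. Ax = b, solved as the LP
    min 1^T (u + v)  s.t.  A (u - v) = b,  u, v >= 0."""
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```

At an LP optimum, u and v have disjoint supports, so the LP objective sum(u + v) indeed equals ||x||_1 for x = u - v.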
@inproceedings{Tillmann2015,
author = {A. M. Tillmann},
title = {{Equivalence of Linear Programming and Basis Pursuit}},
booktitle = {{PAMM (Proceedings in Applied Mathematics and Mechanics)}},
volume = {15},
number = {1},
pages = {735--738},
year = {2015},
note = {DOI: 10.1002/PAMM.201510351}
}
On the Computational Intractability of Exact and Approximate Dictionary Learning
The efficient sparse coding and reconstruction of signal vectors via linear observations has received a tremendous amount of attention over the last decade. In this context, the automated learning of a suitable basis or overcomplete dictionary from training data sets of certain signal classes for use in sparse representations has turned out to be of particular importance regarding practical signal processing applications. Most popular dictionary learning algorithms involve NP-hard sparse recovery problems in each iteration, which may give some indication about the complexity of dictionary learning but does not constitute an actual proof of computational intractability. In this technical note, we show that learning a dictionary with which a given set of training signals can be represented as sparsely as possible is indeed NP-hard. Moreover, we also establish hardness of approximating the solution to within large factors of the optimal sparsity level. Furthermore, we give NP-hardness and non-approximability results for a recent dictionary learning variation called the sensor permutation problem. Along the way, we also obtain a new non-approximability result for the classical sparse recovery problem from compressed sensing.
@article{Tillmann2015a,
author = {A. M. Tillmann},
title = {{On the Computational Intractability of Exact and Approximate Dictionary Learning}},
journal = {{IEEE Signal Processing Letters}},
volume = {22},
number = {1},
pages = {45--49},
year = {2015},
note = {DOI: 10.1109/LSP.2014.2345761}
}
Solving Basis Pursuit: Heuristic Optimality Check and Solver Comparison
The problem of finding a minimum L1-norm solution to an underdetermined linear system is an important problem in compressed sensing, where it is also known as basis pursuit. We propose a heuristic optimality check as a general tool for L1-minimization, which often allows for early termination by “guessing” a primal-dual optimal pair based on an approximate support. Moreover, we provide an extensive numerical comparison of various state-of-the-art L1-solvers that have been proposed during the last decade, on a large test set with a variety of explicitly given matrices and several right-hand sides per matrix reflecting different levels of solution difficulty. The results, as well as improvements by the proposed heuristic optimality check, are analyzed in detail to provide an answer to the question which algorithm is the best.
Accompanying software to be found below:
- HOC-Suite - Matlab package containing code for Heuristic Optimality Checks (HOCs) for Basis Pursuit, Basis Pursuit Denoising, and L1-Regularized Least-Squares. HOC can improve speed and accuracy of existing solvers for these problems (see README file for details, and results in the paper, along with the HOC Demo for BP). (Version 1.0 of 09/30/2013).
- ISAL1 - Matlab implementation of the Infeasible-Point Subgradient Algorithm for Basis Pursuit; also includes prototype code (ISAL1bpdn) for BP Denoising. (Version 1.00 of 09/30/2013 – current release; the paper used Version 0.91 of 10/08/2012).
- L1-Testset - The test set we used for our L1-solver comparison, as ASCII or Matlab binary files, accompanied by Matlab routines for data handling. (Size of zip files: 313 MB (ASCII), 1 GB (MAT))
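The core idea of the heuristic optimality check can be sketched in a few lines: guess the support from the largest-magnitude entries of an approximate solution, compute a candidate primal-dual pair on that support, and certify optimality via the dual feasibility condition ||A^T y||_inf <= 1. The following Python sketch is a simplified illustration of this recipe, not the HOC-Suite code:

```python
import numpy as np

def hoc_basis_pursuit(A, b, x_approx, tol=1e-8):
    """Try to certify an exact Basis Pursuit optimum from an approximate solution."""
    # 1. guess the support from the dominant entries of the approximate solution
    S = np.abs(x_approx) > tol * np.max(np.abs(x_approx))
    # 2. candidate primal: solve A_S x_S = b in the least-squares sense
    x = np.zeros(A.shape[1])
    x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]
    # 3. candidate dual: solve A_S^T y = sign(x_S) (optimality condition on the support)
    y = np.linalg.lstsq(A[:, S].T, np.sign(x[S]), rcond=None)[0]
    # 4. certify: primal feasibility and dual feasibility ||A^T y||_inf <= 1
    ok = np.allclose(A @ x, b, atol=tol) and np.max(np.abs(A.T @ y)) <= 1 + tol
    return x, ok
```

If the check succeeds, the pair (x, y) certifies that x is optimal and the solver can terminate early; if it fails, nothing is lost and the solver simply continues.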
@article{LorenzPfetschTillmann2015,
author = {D. A. Lorenz and M. E. Pfetsch and A. M. Tillmann},
title = {{Solving Basis Pursuit: Heuristic Optimality Check and Solver Comparison}},
journal = {{ACM Transactions on Mathematical Software}},
volume = {41},
number = {2},
pages = {Art. No. 8},
year = {2015},
note = {DOI: 10.1145/2689662}
}
Projection Onto The Cosparse Set is NP-Hard
The computational complexity of a problem arising in the context of sparse optimization is considered, namely, the projection onto the set of k-cosparse vectors w.r.t. some given matrix M. It is shown that this projection problem is (strongly) NP-hard, even in the special cases in which the matrix M contains only ternary or bipolar coefficients. Interestingly, this is in contrast to the projection onto the set of k-sparse vectors, which is trivially solved by keeping only the k largest coefficients.
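For contrast, the easy k-sparse projection mentioned at the end is just hard thresholding; a quick numpy sketch:

```python
import numpy as np

def project_k_sparse(x, k):
    """Euclidean projection onto the set of k-sparse vectors:
    keep the k largest-magnitude entries, zero out the rest."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]  # indices of the k largest magnitudes
    out[keep] = x[keep]
    return out
```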
@inproceedings{TillmannGribonvalPfetsch2014,
author = {A. M. Tillmann and R. Gribonval and M. E. Pfetsch},
title = {{Projection Onto The Cosparse Set is NP-Hard}},
booktitle = {{Proc. ICASSP 2014}},
pages = {7148--7152},
year = {2014},
note = {DOI: 10.1109/ICASSP.2014.6854987}
}
An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations we use adaptive approximate projections, which only require the iterates to move within a certain distance of the exact projections (a distance that decreases in the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum L1-norm solution to an underdetermined linear system, an important problem in Compressed Sensing.
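For orientation, a plain projected subgradient method for the Basis Pursuit application (min ||x||_1 s.t. Ax = b) with exact projections looks as follows; the paper's method replaces the exact projection with adaptive approximate ones and additionally allows infeasible iterates. The sketch below is only a baseline under that simplification, not the ISAL1 code.

```python
import numpy as np

def l1_projected_subgradient(A, b, iters=500):
    """Baseline: minimize ||x||_1 over {x : Ax = b} with exact projections."""
    pinv = np.linalg.pinv(A)
    project = lambda z: z - pinv @ (A @ z - b)  # exact projection onto the affine set
    x = project(np.zeros(A.shape[1]))           # feasible starting point
    best = x.copy()
    for t in range(1, iters + 1):
        g = np.sign(x)            # a subgradient of the L1-norm at x
        x = project(x - g / t)    # diminishing step size 1/t
        if np.linalg.norm(x, 1) < np.linalg.norm(best, 1):
            best = x.copy()
    return best
```

The diminishing step sizes guarantee convergence of the best iterate's objective value; the exact projection step is exactly what the paper relaxes, since it can dominate the cost per iteration for large problems.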
@article{LorenzPfetschTillmann2014,
author = {D. A. Lorenz and M. E. Pfetsch and A. M. Tillmann},
title = {{An infeasible-point subgradient method using adaptive approximate projections}},
journal = {{Computational Optimization and Applications}},
volume = {57},
number = {2},
pages = {271--306},
year = {2014},
note = {DOI: 10.1007/s10589-013-9602-3}
}
The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing
This paper deals with the computational complexity of conditions which guarantee that the NP-hard problem of finding the sparsest solution to an underdetermined linear system can be solved by efficient algorithms. In the literature, several such conditions have been introduced. The most well-known ones are the mutual coherence, the restricted isometry property (RIP), and the nullspace property (NSP). While evaluating the mutual coherence of a given matrix is easy, it has been suspected for some time that evaluating RIP and NSP is computationally intractable in general. We confirm these conjectures by showing that for a given matrix A and positive integer k, computing the best constants for which the RIP or NSP hold is, in general, NP-hard. These results are based on the fact that determining the spark of a matrix is NP-hard, which is also established in this paper. Furthermore, we also give several complexity statements about problems related to the above concepts.
A preliminary version of this work won the Best Student Paper Award at SPARS'13.
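To make the object concrete: the order-k RIP constant is the smallest delta with (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2 for all k-sparse x, and the obvious way to evaluate it sweeps all k-column submatrices, consistent with the NP-hardness shown in the paper. A brute-force sketch for tiny instances:

```python
import itertools
import numpy as np

def rip_constant(A, k):
    """Brute-force RIP constant of order k: the largest deviation of the
    eigenvalues of A_S^T A_S from 1 over all column subsets S of size k."""
    delta = 0.0
    for S in itertools.combinations(range(A.shape[1]), k):
        eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
        delta = max(delta, abs(eig[-1] - 1.0), abs(1.0 - eig[0]))
    return delta
```

For an orthonormal matrix the constant is 0 at every order; for general matrices the number of subsets, and hence the runtime, grows exponentially in k.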
@article{TillmannPfetsch2014,
author = {A. M. Tillmann and M. E. Pfetsch},
title = {{The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing}},
journal = {{IEEE Transactions on Information Theory}},
volume = {60},
number = {2},
pages = {1248--1259},
year = {2014},
note = {DOI: 10.1109/TIT.2013.2290112}
}
Computational Aspects of Compressed Sensing
A central task in the novel field of Compressed Sensing is sparse recovery, i.e., finding the sparsest exact or approximate solution to an underdetermined linear equation system. Over the past decade, several conditions have been identified under which this generally intractable problem can be solved efficiently.
In this thesis, we investigate the computational complexity of ubiquitous sparse recovery conditions based on the spark, the restricted isometry property and the nullspace property of the sensing matrix. Furthermore, we provide an extensive numerical comparison of various state-of-the-art solvers for the most popular sparse recovery approach, the Basis Pursuit (L1-norm minimization) model, and demonstrate how a novel heuristic optimality check can significantly improve both speed and accuracy of most solvers. In addition, we introduce a new subgradient algorithm involving adaptive approximate projections for general nonsmooth convex constrained optimization and discuss its application to Basis Pursuit and variants.
@misc{Tillmann2013,
author = {A. M. Tillmann},
title = {{Computational Aspects of Compressed Sensing}},
howpublished = {Doctoral Dissertation},
school = {TU Darmstadt, Germany},
year = {2013}
}
Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography
The 3D visualization of astronomical nebulae is a challenging problem since only a single 2D projection is observable from our fixed vantage point on Earth. We attempt to generate plausible and realistic looking volumetric visualizations via a tomographic approach that exploits the spherical or axial symmetry prevalent in some relevant types of nebulae. Different types of symmetry can be implemented by using different randomized distributions of virtual cameras. Our approach is based on an iterative compressed sensing reconstruction algorithm that we extend with support for position-dependent volumetric regularization and linear equality constraints. We present a distributed multi-GPU implementation that is capable of reconstructing high-resolution datasets from arbitrary projections. Its robustness and scalability are demonstrated for astronomical imagery from the Hubble Space Telescope. The resulting volumetric data is visualized using direct volume rendering. Compared to previous approaches, our method preserves a much higher amount of detail and visual variety in the 3D visualization, especially for objects with only approximate symmetry.
@article{WengerAmentGutheLorenzTillmannWeiskopfMagnor2012,
author = {S. Wenger and M. Ament and S. Guthe and D. A. Lorenz and A. M. Tillmann and D. Weiskopf and M. Magnor},
title = {{Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography}},
journal = {{IEEE Transactions on Visualization and Computer Graphics}},
volume = {18},
number = {12},
pages = {2188--2197},
year = {2012},
note = {DOI: 10.1109/TVCG.2012.281}
}