A longstanding problem in the floating-point implementation of numerical programs is to provide an efficient yet precise analysis of output errors. We present a framework to compute lower bounds on absolute roundoff errors for a particular rounding model. The method applies to numerical programs implementing polynomial functions with box-constrained input variables. Our study relies on semidefinite programming (SDP) relaxations and complements over-approximation frameworks, which obtain upper bounds for the absolute roundoff error. Combining the results of both frameworks yields interval enclosures for upper bounds of roundoff errors. The under-approximation framework is based on a new hierarchy of convergent robust SDP approximations for certain classes of polynomial optimization problems. Each problem in this hierarchy can be solved exactly via SDP. Using this hierarchy, one can obtain a monotone non-decreasing sequence of lower bounds converging to the absolute roundoff error of a program implementing a polynomial function under the given rounding model. We investigate the efficiency and precision of our method on non-trivial polynomial programs coming from space control, optimization, and computational biology.
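The key fact behind an under-approximation framework is that any evaluated input yields a valid lower bound on the maximum absolute roundoff error. The following minimal sketch (not the SDP hierarchy itself, just random sampling over the box) illustrates this for a hypothetical polynomial x*y + x*x, using exact rational arithmetic as the reference:

```python
from fractions import Fraction
import random

def f_float(x, y):
    # Polynomial evaluated in double precision (rounded arithmetic).
    return x * y + x * x

def f_exact(x, y):
    # The same polynomial evaluated in exact rational arithmetic.
    xf, yf = Fraction(x), Fraction(y)
    return xf * yf + xf * xf

random.seed(0)
# Box-constrained inputs: x, y in [1, 2].
best = Fraction(0)
for _ in range(1000):
    x = random.uniform(1.0, 2.0)
    y = random.uniform(1.0, 2.0)
    # Exact difference between the rounded result and the true value;
    # every sample is a certified lower bound on the maximum error.
    err = abs(Fraction(f_float(x, y)) - f_exact(x, y))
    best = max(best, err)
print(float(best))
```

Sampling converges slowly and gives no monotone guarantee per iteration, which is why the abstract's SDP hierarchy, with its provably convergent sequence of bounds, is the actual contribution.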
The EASAL software implements a suite of algorithms for understanding the structure and geometric properties of rigid point sets that are pairwise constrained by distance intervals. The algorithms explore the configuration spaces of these constraint systems using classical results for the stratification of configuration spaces, new results for efficient sampling by convex parametrization, and intuitive visualization. This paper provides a high-level description of the state of the art, the key theoretical underpinnings, the major algorithms and their implementation, and major applications such as the free energy and kinetics of assembly of supramolecular structures or of clusters in colloidal and soft materials.
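The basic objects above are point configurations subject to pairwise distance-interval constraints. A minimal sketch (illustrative only; the `satisfies_constraints` helper and the example data are invented here, not EASAL's API) of testing whether a configuration is feasible:

```python
import math

def satisfies_constraints(points, intervals):
    """Check whether a point configuration satisfies pairwise
    distance-interval constraints.

    points:    dict mapping point id -> (x, y, z) coordinates
    intervals: dict mapping (i, j) -> (lo, hi) allowed distance range
    """
    for (i, j), (lo, hi) in intervals.items():
        d = math.dist(points[i], points[j])
        if not (lo <= d <= hi):
            return False
    return True

pts = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
cons = {(0, 1): (0.9, 1.1), (0, 2): (0.9, 1.1), (1, 2): (1.3, 1.5)}
print(satisfies_constraints(pts, cons))  # dist(1,2) = sqrt(2) lies in [1.3, 1.5]
```

The actual algorithms in the paper go far beyond membership testing: they stratify and sample the whole feasible region of such systems.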
In this work we develop a validated numerics method for the solution of linear ordinary differential equations (LODEs). A wide range of algorithms (e.g., Runge-Kutta, collocation, and spectral methods) exist for numerically computing approximations of the solutions. Most of these come with proofs of asymptotic convergence, but the error bounds they provide are usually non-constructive. However, in some domains, such as critical systems and computer-aided mathematical proofs, validated effective error bounds are needed. We focus on both the theoretical and practical complexity analysis of a so-called a posteriori quasi-Newton validation method, which relies mainly on a fixed-point argument for a contracting map. Specifically, given a polynomial approximation, obtained by some numerical algorithm and expressed in the Chebyshev basis, our algorithm efficiently computes an accurate and rigorous error bound. For this, we study theoretical properties such as the compactness, convergence, and invertibility of the associated linear integral operators and their truncations in a suitable coefficient space of Chebyshev series. We then analyze the almost-banded matrix structure of these operators, which allows for very efficient numerical algorithms both for the numerical solution of LODEs and for the rigorous computation of the approximation error. Finally, several representative examples show the advantages of our algorithms as well as their theoretical and practical limits.
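The core idea of a posteriori validation can be seen in a toy setting. The sketch below (power basis instead of Chebyshev, and a crude coefficient-sum bound instead of the paper's operator machinery; the Gronwall constant `e` is a hand-derived assumption for this specific LODE) bounds the error of a truncated series solution of y' = y, y(0) = 1 by bounding its residual:

```python
from math import factorial, e

# Truncated Taylor polynomial p of degree N for y' = y, y(0) = 1 (i.e., exp).
N = 10
p = [1 / factorial(k) for k in range(N + 1)]        # power-basis coefficients

# Residual r(t) = p'(t) - p(t), in the power basis.
dp = [k * p[k] for k in range(1, N + 1)] + [0.0]    # coefficients of p'
r = [dp[k] - p[k] for k in range(N + 1)]

# Crude a posteriori bound on |r| over [-1, 1]: sum of |coefficients|.
res_bound = sum(abs(c) for c in r)

# For this equation a Gronwall-type estimate gives |y - p| <= e * res_bound
# on [-1, 1]; the factor e bounds the associated integral operator.
err_bound = e * res_bound
print(res_bound, err_bound)
```

The paper's contribution is making this kind of argument rigorous, general, and efficient: operator truncations with almost-banded structure replace the naive coefficient-sum bound, and the contraction constant is computed, not assumed.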
We experimentally study the fundamental problem of computing the volume of a convex polytope given as an intersection of linear halfspaces. We implement and evaluate randomized polynomial-time algorithms for accurately approximating the polytope's volume in high dimensions (e.g., one hundred) based on Hit-and-run random walks. To carry this out efficiently, we experimentally correlate the effect of parameters, such as random walk length and number of sample points, with accuracy and runtime. Our method is based on Monte Carlo algorithms with guaranteed speed and provably high probability of success for arbitrarily high precision. We exploit the problem's features in implementing a practical rounding procedure for polytopes, in computing only partial "generations" of random points, and in designing fast polytope boundary oracles. Our publicly available software is significantly faster than exact computation and more accurate than existing approximation methods. For illustration, volume approximations of the Birkhoff polytopes B(11),..., B(15) are computed, in dimension up to 196, whereas exact methods have only computed volumes up to B(10).
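One Hit-and-run step picks a random direction through the current point, intersects that line with the polytope {y : Ay <= b}, and samples uniformly on the resulting chord. A minimal dense sketch (the paper's implementation adds rounding, boundary oracles, and careful parameter tuning on top of this):

```python
import random, math

def hit_and_run_step(A, b, x):
    """One Hit-and-run step inside the polytope {y : A y <= b}.
    A: list of constraint rows, b: list of bounds, x: interior point."""
    n = len(x)
    # Random direction, uniform on the unit sphere.
    d = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(di * di for di in d))
    d = [di / norm for di in d]
    # The chord {x + t d} stays inside while a_i.(x + t d) <= b_i for all i.
    t_min, t_max = -float("inf"), float("inf")
    for a_i, b_i in zip(A, b):
        ad = sum(ai * di for ai, di in zip(a_i, d))
        ax = sum(ai * xi for ai, xi in zip(a_i, x))
        if ad > 1e-12:
            t_max = min(t_max, (b_i - ax) / ad)
        elif ad < -1e-12:
            t_min = max(t_min, (b_i - ax) / ad)
    t = random.uniform(t_min, t_max)
    return [xi + t * di for xi, di in zip(x, d)]

# Unit cube [0, 1]^3 written as A x <= b.
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]]
b = [1, 1, 1, 0, 0, 0]
random.seed(1)
x = [0.5, 0.5, 0.5]
for _ in range(100):
    x = hit_and_run_step(A, b, x)
print(x)  # a near-uniform point in the cube after the walk mixes
```

Volume estimation then wraps such walks in a multi-phase Monte Carlo scheme (ratios of sample counts between nested bodies), which is where the paper's "generations" of random points come in.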
This paper has two main objectives: the first is to describe some extensions of an adaptive Algebraic Multigrid (AMG) method of the form previously proposed by the first and third authors; the second is to present a new software framework, named BootCMatch, which implements all the components needed to build and apply the described adaptive AMG, both as a stand-alone solver and as a preconditioner in a Krylov method. The adaptive AMG presented is meant to handle general symmetric positive definite (SPD) sparse linear systems without assuming any a priori information about the problem and its origin; the goal of adaptivity is to achieve a method with a prescribed convergence rate. The method exploits a general coarsening process based on aggregation of unknowns, obtained by a maximum weight matching in the adjacency graph of the system matrix. More specifically, a maximum product matching is employed to define an effective smoother subspace (complementary to the coarse space), a process referred to as compatible relaxation, at every level of the recursive two-level hierarchical AMG process. Results on a large variety of test cases and comparisons with related work demonstrate the reliability and efficiency of the method and of the software.
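The matching-based aggregation can be illustrated on a tiny graph. The sketch below uses a greedy approximation rather than an exact maximum weight matching (BootCMatch uses exact and approximate matching algorithms; the function name, edge weights, and example matrix here are invented for illustration):

```python
def greedy_matching_aggregates(edges, n):
    """Pair unknowns into aggregates via a greedy approximation of
    maximum weight matching on the system matrix's adjacency graph.

    edges: list of (weight, i, j) tuples; n: number of unknowns.
    Returns pairs of matched unknowns plus singletons for unmatched ones.
    """
    matched = [False] * n
    aggregates = []
    for w, i, j in sorted(edges, reverse=True):   # heaviest edges first
        if not matched[i] and not matched[j]:
            matched[i] = matched[j] = True
            aggregates.append((i, j))
    aggregates += [(i,) for i in range(n) if not matched[i]]
    return aggregates

# Adjacency graph of a small 1D Laplacian-like matrix on 5 unknowns,
# with weights taken from off-diagonal magnitudes.
edges = [(1.0, 0, 1), (0.5, 1, 2), (1.0, 2, 3), (0.5, 3, 4)]
print(greedy_matching_aggregates(edges, 5))
```

Each pair becomes one coarse unknown; repeating the process recursively builds the AMG hierarchy, with the matching weights chosen so that the complementary (smoother) space is handled well by relaxation.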
Computational models play a key role in engineering, but managing large codes and integrating them to solve multidisciplinary problems can be a bottleneck. Computational frameworks alleviate this problem by facilitating a modular implementation of computational models; however, existing frameworks are not designed to couple multiple large-scale models efficiently, especially when derivatives are needed. These limitations motivate the modular analysis and unified derivatives (MAUD) architecture. MAUD allows computational frameworks to efficiently couple heterogeneous computational models and to compute the derivatives needed for optimization in a semi-automated way. To do this, MAUD formulates the multidisciplinary model as a nonlinear system of equations, which leads to a linear equation that unifies all methods for computing derivatives, including the adjoint method. This formulation enables a compact framework design that provides parallel data passing, supports hierarchical decomposition, and efficiently solves coupled systems. These features are demonstrated by solving two problems using a Python implementation of MAUD: nanosatellite optimization with more than 2 million unknowns and 25,000 design variables, and aircraft operations optimization involving over 6,000 design variables and 23,000 constraints. MAUD is now implemented in OpenMDAO, NASA's open-source framework, and it has been used to solve aircraft, satellite, wind turbine, and jet engine design problems.
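The unification works by writing all coupled "disciplines" as one residual system R(u; x) = 0 and differentiating implicitly: du/dx = -(dR/du)^{-1}(dR/dx). A two-state toy sketch (the residuals and coefficients are invented for illustration; this is the underlying linear equation, not MAUD's or OpenMDAO's API):

```python
def solve_states(x):
    # Two coupled "disciplines" written as residuals R(u; x) = 0:
    #   R1 = u1 - 0.5*u2 - x
    #   R2 = u2 - 0.3*u1 - 1
    u1 = (x + 0.5) / (1 - 0.15)     # closed-form solution of the coupling
    u2 = 0.3 * u1 + 1
    return u1, u2

# Unified derivative equation: (dR/du) (du/dx) = -(dR/dx).
J = [[1.0, -0.5], [-0.3, 1.0]]      # dR/du
rhs = [1.0, 0.0]                    # -(dR/dx)
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
du_dx = [( J[1][1] * rhs[0] - J[0][1] * rhs[1]) / det,   # Cramer's rule
         (-J[1][0] * rhs[0] + J[0][0] * rhs[1]) / det]
print(du_dx)
```

Solving this linear system with the transposed Jacobian and a different right-hand side recovers the adjoint method, which is the sense in which one equation unifies the derivative computation strategies.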
This work consists of key derivations and computational tests that relate the roundoff-error-free (REF) LU factorization framework to the rational arithmetic LU factorization approach used to validate basic solutions in state-of-the-art exact linear programming solvers. Most significantly, the featured findings reveal that the integer-preserving REF factorization framework solves dense systems of linear equations one order of magnitude faster than the exact rational arithmetic approach while requiring half the memory. Additionally, this paper develops and analyzes an efficient implementation of basic solution validation based on Edmonds' Q-matrices. Further experiments demonstrate that the REF LU factorization framework remains significantly more efficient than this alternative integer-preserving approach in terms of memory requirements and computational effort. Altogether, the contributions of this paper strongly indicate that the REF factorization framework should be the go-to tool for basic solution validation within exact linear and mixed-integer programming.
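The integer-preserving idea can be illustrated with fraction-free (Bareiss-style) elimination, which is a relative of REF factorization, not the REF LU algorithm itself: every intermediate entry stays an integer because each division is known to be exact, avoiding rational arithmetic's growing numerators and denominators. A minimal sketch (no pivoting, so it assumes nonzero leading pivots):

```python
def bareiss(M):
    """Fraction-free (Bareiss) elimination on a square integer matrix.
    All intermediate values remain integers; the final entry A[n-1][n-1]
    equals det(M). Assumes nonzero leading pivots (no pivoting)."""
    A = [row[:] for row in M]
    n = len(A)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The division by the previous pivot is provably exact.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A

M = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
R = bareiss(M)
print(R[2][2])  # determinant of M
```

Entry growth is bounded by minors of the original matrix (Hadamard-type bounds), which is why integer-preserving approaches can beat rational arithmetic on both time and memory, as the paper's experiments quantify for the REF framework specifically.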
Permutation problems are combinatorial optimization problems whose solutions are naturally encoded as permutations. Owing to their complexity, which stems principally from the factorial cardinality of the search space, they have been a recurring topic for the artificial intelligence and operations research communities. Recently, among the vast number of metaheuristic algorithms, new advances in estimation of distribution algorithms (EDAs) have shown outstanding performance when solving some permutation problems. These novel EDAs implement distance-based exponential probability models such as the Mallows and Generalized Mallows models. In this paper, we present a Matlab package, perm_mateda, for estimation of distribution algorithms on permutation problems, implemented as an extension of the Mateda-2.0 toolbox of EDAs. In particular, we provide implementations of the Mallows and Generalized Mallows EDAs under the Kendall's tau, Cayley, and Ulam distances. In addition, four classical permutation problems have also been implemented: the Traveling Salesman Problem, the Permutation Flowshop Scheduling Problem, the Linear Ordering Problem, and the Quadratic Assignment Problem.
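The Mallows model assigns each permutation a probability that decays exponentially with its distance from a central permutation. A minimal sketch under the Kendall's tau distance (Python rather than Matlab, with a brute-force normalization constant that only works for tiny n; perm_mateda uses closed-form normalization and efficient sampling):

```python
from math import exp
from itertools import permutations

def kendall_tau(p, q):
    """Kendall's tau distance: number of discordant pairs between p and q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]          # p expressed in q's ordering
    n = len(r)
    return sum(1 for i in range(n) for j in range(i + 1, n) if r[i] > r[j])

def mallows_prob(p, center, theta):
    """Mallows probability of p around 'center' with spread parameter theta.
    Brute-force normalization: feasible only for very small n."""
    num = exp(-theta * kendall_tau(p, center))
    Z = sum(exp(-theta * kendall_tau(q, center))
            for q in permutations(center))
    return num / Z

center = (0, 1, 2, 3)
print(mallows_prob(center, center, theta=1.0))        # the mode: most likely
print(mallows_prob((3, 2, 1, 0), center, theta=1.0))  # farthest: least likely
```

An EDA built on this model alternates estimating the center and spread from the best solutions found so far with sampling new candidate permutations from the fitted distribution.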