Abstract: We present three results which support the conjecture that a graph is minimally rigid in d-dimensional \(\ell_p\)-space, where \(p\in (1,\infty)\) and \(p\not=2\), if and only if it is (d, d)-tight. Firstly, we introduce a graph bracing operation which preserves independence in the generic rigidity matroid when passing from \(\ell_p^d\) to \(\ell_p^{d+1}\). We then prove that every (d, d)-sparse graph with minimum degree at most \(d+1\) and maximum degree at most \(d+2\) is independent in \(\ell_p^d\). Finally, we prove that every triangulation of the projective plane is minimally rigid in \(\ell_p^3\). A catalogue of rigidity-preserving graph moves is also provided for the more general class of strictly convex and smooth normed spaces, and we show that every triangulation of the sphere is independent for 3-dimensional spaces in this class. PubDate: 2021-03-13

Abstract: Nuclear Magnetic Resonance (NMR) spectroscopy is the second most used technique (after X-ray crystallography) for the structural determination of proteins. A computational challenge in this technique involves solving a discrete optimization problem that assigns a resonance frequency to each atom in the protein. This paper introduces LIAN (LInear programming Assignment for NMR), a novel linear programming formulation of the problem which yields state-of-the-art results on simulated and experimental datasets. PubDate: 2021-03-11

Abstract: New mixed-integer nonlinear optimization models for the Euclidean Steiner tree problem in d-space (with \(d\ge 3\)) are presented in this work. All models feature a nonsmooth objective function, but the continuous relaxations of their sets of feasible solutions are convex. From these models, four convex mixed-integer linear and nonlinear relaxations are considered. Each relaxation has the same set of feasible solutions as the model from which it is derived. Finally, preliminary computational results highlighting the main features of the presented relaxations are discussed. PubDate: 2021-03-09

Abstract: We study the problem of satisfying the maximum number of distance geometry constraints with minimum experimental error. This models the determination of the shape of proteins from atomic distance data, which are obtained from nuclear magnetic resonance experiments and exhibit both experimental and systematic errors. Experimental errors are represented by interval constraints on Euclidean distances. Systematic errors arise from the misassignment of distances to wrong atomic pairs: we represent such errors by maximizing the number of satisfiable distance constraints. We present several mathematical programming formulations, as well as a “matheuristic” algorithm based on reformulations, relaxations, restrictions and refinement. We show that this algorithm works on protein graphs with hundreds of atoms and thousands of distances. PubDate: 2021-03-06

Abstract: Let P be a set of n points in the plane. We compute the value of \(\theta \in [0,2\pi )\) for which the rectilinear convex hull of P, denoted by \(\mathcal {RH}_{P}({\theta })\), has minimum (or maximum) area in optimal \(O(n\log n)\) time and O(n) space, improving the previous \(O(n^2)\) bound. Let \(\mathcal {O}\) be a set of k lines through the origin sorted by slope, and let \(\alpha_i\) be the sizes of the 2k angles defined by pairs of consecutive lines, \(i=1, \ldots , 2k\). Let \(\Theta_{i}=\pi -\alpha_i\) and \(\Theta =\min \{\Theta_i : i=1,\ldots ,2k\}\). We obtain the following results. (1) Given a set \(\mathcal {O}\) such that \(\Theta \ge \frac{\pi }{2}\), we provide an algorithm to compute the \(\mathcal {O}\)-convex hull of P in optimal \(O(n\log n)\) time and O(n) space; if \(\Theta < \frac{\pi }{2}\), the time and space complexities are \(O(\frac{n}{\Theta }\log n)\) and \(O(\frac{n}{\Theta })\), respectively. (2) Given a set \(\mathcal {O}\) such that \(\Theta \ge \frac{\pi }{2}\), we compute and maintain the boundary of the \({\mathcal {O}}_{\theta }\)-convex hull of P for \(\theta \in [0,2\pi )\) in \(O(kn\log n)\) time and O(kn) space, or, if \(\Theta < \frac{\pi }{2}\), in \(O(k\frac{n}{\Theta }\log n)\) time and \(O(k\frac{n}{\Theta })\) space. (3) Finally, given a set \(\mathcal {O}\) such that \(\Theta \ge \frac{\pi }{2}\), we compute, in \(O(kn\log n)\) time and O(kn) space, the angle \(\theta \in [0,2\pi )\) PubDate: 2021-03-01

Abstract: The aim of this paper is to open up a new perspective on set and shape optimization by establishing a theory of Galerkin approximations to the space of convex and compact subsets of \({\mathbb {R}}^d\) with favorable properties, from both a theoretical and a computational perspective. Galerkin spaces consisting of polytopes with fixed facet normals are first explored in depth and then used to approximately solve optimization problems in the space of convex and compact subsets of \({\mathbb {R}}^d\). PubDate: 2021-03-01

Abstract: We consider the lower-bounded inverse optimal value problem on the minimum spanning tree under the unit \(l_{\infty }\) norm. Given an edge-weighted connected undirected network \(G=(V,E,\varvec{w})\), a spanning tree \(T^0\), a lower bound vector \(\varvec{l}\) and a value K, we aim to find a new weight vector \(\bar{\varvec{w}}\) respecting the lower bound such that \(T^0\) is a minimum spanning tree under \(\bar{\varvec{w}}\) with weight K, where the objective is to minimize the modification cost under the unit \(l_{\infty }\) norm. We present a mathematical model of the problem. After analyzing optimality conditions, we develop a strongly polynomial time algorithm with running time \(O(|V||E|)\). Finally, we give an example to demonstrate the algorithm and present numerical experiments. PubDate: 2021-03-01

Abstract: This paper develops a powerful new variant, dubbed \(\varDelta\)-DOGS(\(\varOmega_Z\)), of our Delaunay-based Derivative-free Optimization via Global Surrogates family of algorithms, and uses it to identify a new low-storage, high-accuracy Implicit/Explicit Runge–Kutta (IMEXRK) time integration scheme for the stiff ODEs arising in high performance computing applications, such as the simulation of turbulence. The \(\varDelta\)-DOGS(\(\varOmega_Z\)) algorithm, which we prove to be globally convergent under appropriate assumptions, combines (a) the essential ideas of our \(\varDelta\)-DOGS(\(\varOmega\)) algorithm, which is designed to efficiently optimize a nonconvex objective function f(x) within a nonconvex feasible domain \(\varOmega\) defined by a number of constraint functions \(c_\kappa (x)\), with (b) our \(\varDelta\)-DOGS(Z) algorithm, which reduces the number of function evaluations on the boundary of the search domain via the restriction that all function evaluations lie on a Cartesian grid, which is successively refined as the iterations proceed. The optimization of the parameters of low-storage IMEXRK schemes involves a complicated set of nonconvex constraints, leading to a challenging disconnected feasible domain and a highly nonconvex objective function; our simulations indicate significantly faster convergence using \(\varDelta\)-DOGS(\(\varOmega_Z\)) than with the original \(\varDelta\)-DOGS(\(\varOmega\)) optimization algorithm on the problem of tuning the parameters of such schemes. A low-storage third-order IMEXRK scheme with remarkably good stability and accuracy properties is ultimately identified using this approach, and is briefly tested on Burgers’ equation. PubDate: 2021-03-01

Abstract: This paper discusses the second-order cone tensor eigenvalue complementarity problem. We reformulate this problem as two constrained polynomial optimization problems. For these two reformulations, Lasserre-type semidefinite relaxation methods are proposed to compute all second-order cone tensor complementarity eigenpairs. The proposed algorithms terminate when there are finitely many second-order cone complementarity eigenvalues. Numerical examples are reported to show the efficiency of the proposed algorithms. PubDate: 2021-03-01

Abstract: On the basis of the notion of an approximating family of cones and a generalized type of Gerstewitz’s/Tammer’s nonlinear scalarization functional, we establish variants of the Ekeland variational principle (EVP, for short) involving set perturbations for a type of approximate proper solutions, in the sense of Henig, of a vector equilibrium problem. Initially, these results are obtained for both an unconstrained and a constrained vector equilibrium problem, where the objective function takes values in a real locally convex Hausdorff topological linear space. After that, we consider the special cases in which the objective function takes values in a normed space and in a finite-dimensional vector space. For a finite-dimensional objective space with a polyhedral ordering cone, we give explicit representations of the variants of the EVP depending on matrices, and in this way some selected applications to multiobjective optimization problems and vector variational inequality problems are also derived. PubDate: 2021-03-01

Abstract: Introduced in 1993, the DIRECT global optimization algorithm provided a fresh approach to minimizing a black-box function subject to lower and upper bounds on the variables. In contrast to the plethora of nature-inspired heuristics, DIRECT was deterministic and had only one hyperparameter (the desired accuracy). Moreover, the algorithm was simple, easy to implement, and usually performed well on low-dimensional problems (up to six variables). Most importantly, DIRECT balanced local and global search (exploitation vs. exploration) in a unique way: in each iteration, several points were sampled, some for global and some for local search. This approach eliminated the need for “tuning parameters” that set the balance between local and global search. However, the very same features that made DIRECT simple and conceptually attractive also created weaknesses. For example, it was commonly observed that, while DIRECT is often fast to find the basin of the global optimum, it can be slow to fine-tune the solution to high accuracy. In this paper, we identify several such weaknesses and survey the work of various researchers to extend DIRECT so that it performs better. All of the extensions show substantial improvement over DIRECT on various test functions. An outstanding challenge is to improve performance robustly across problems of different degrees of difficulty, ranging from simple (unimodal, few variables) to very hard (multimodal, sharply peaked, many variables). Opportunities for further improvement may lie in combining the best features of the different extensions. PubDate: 2021-03-01
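The actual DIRECT algorithm trisects hyperrectangles and selects "potentially optimal" ones via a lower-convex-hull argument, which is beyond the scope of an abstract. As a purely illustrative caricature of the local/global balance described above (this is not DIRECT's real selection rule, and the 1-D function and domain are invented), one can trisect both the interval holding the best centre sample (local search) and the widest interval (global search) at every iteration:

```python
def toy_direct_1d(f, lo, hi, iters=30):
    """Toy caricature of DIRECT's exploitation/exploration balance in 1-D.
    Each iteration trisects BOTH the interval with the best centre value
    (local refinement) and the widest interval (global coverage).
    NOT the real potentially-optimal-interval selection rule of DIRECT."""
    intervals = [(lo, hi)]
    evals = {}  # centre -> f(centre), avoids re-evaluating shared centres

    def centre_val(a, b):
        c = 0.5 * (a + b)
        if c not in evals:
            evals[c] = f(c)
        return evals[c]

    for _ in range(iters):
        best = min(intervals, key=lambda ab: centre_val(*ab))
        widest = max(intervals, key=lambda ab: ab[1] - ab[0])
        for a, b in {best, widest}:  # may coincide; split each once
            intervals.remove((a, b))
            t = (b - a) / 3.0
            intervals += [(a, a + t), (a + t, b - t), (b - t, b)]
    c_best = min(evals, key=evals.get)
    return c_best, evals[c_best]

# Invented example: a simple unimodal function on [0, 1].
x_best, f_best = toy_direct_1d(lambda x: (x - 0.7) ** 2, 0.0, 1.0, iters=30)
```

Note how neither step needs a balance-setting parameter, mirroring the property the abstract highlights.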

Abstract: In this paper, we study the optimum correction of absolute value equations by making minimal changes, measured in the spectral norm, to the coefficient matrix and the right-hand side vector. This problem can be formulated as a non-differentiable, non-convex, unconstrained fractional quadratic programming problem. Regularized least squares is applied to stabilize the solution of the fractional problem. The regularized problem is reduced to a unimodal single-variable minimization problem, and a bisection algorithm is proposed to solve it. The main difficulty of the algorithm is a complicated constrained optimization subproblem, for which two novel methods are suggested. We also present optimality conditions and bounds for the norm of the optimal solutions. Numerical experiments are given to demonstrate the effectiveness of the suggested methods. PubDate: 2021-03-01
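The reduction to a unimodal single-variable problem is what makes interval-shrinking schemes viable. The paper's own bisection rule is not reproduced here; the sketch below uses plain ternary search on an invented unimodal function to illustrate how such a one-dimensional minimization proceeds:

```python
def ternary_search_min(f, lo, hi, tol=1e-8):
    """Locate the minimizer of a unimodal function f on [lo, hi] by
    repeatedly discarding one third of the search interval."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2   # minimizer must lie in [lo, m2]
        else:
            lo = m1   # minimizer must lie in [m1, hi]
    return 0.5 * (lo + hi)

# Invented example: a unimodal function with minimizer x = 2 on [0, 5].
x_star = ternary_search_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Unimodality is essential here: without it, discarding a subinterval based on two samples could discard the global minimizer.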

Abstract: This paper proposes an inertial Bregman proximal gradient method for minimizing the sum of two possibly nonconvex functions. The method includes two different inertial steps and adopts Bregman regularization in solving the subproblem. Under some general parameter constraints, we prove subsequential convergence, i.e., that each generated sequence converges to a stationary point of the considered problem. To overcome the parameter constraints, we further propose a nonmonotone line search strategy that makes the parameter selection more flexible. The subsequential convergence of the proposed method with line search is established. When the line search is monotone, we prove stronger global convergence and a linear convergence rate under the Kurdyka–Łojasiewicz framework. Moreover, numerical results on SCAD and MCP nonconvex penalty problems are reported to demonstrate the effectiveness and superiority of the proposed methods and the line search strategy. PubDate: 2021-03-01

Abstract: In this paper we consider the multiple geometrical object detection problem. Given a set \(\mathcal {A}\) of data points coming from, and scattered among, a number of geometrical objects not known in advance, the task is to reconstruct or detect those objects. A new efficient method for solving this problem is proposed, based on the popular RANSAC method and using parameters from the DBSCAN method. Instead of using classical indexes to recognize the most appropriate partition, we use parameters from the DBSCAN method to define the necessary conditions, which proves to be far more efficient. In particular, the method is applied to the multiple circle detection problem. In this case, we give both the conditions for the existence of the best circle as a representative of the data set and explicit formulas for the parameters of the best circle. In an illustrative example, we consider the multiple circle detection problem for a data point set \(\mathcal {A}\) coming from five intersecting circles not known in advance. The method has been tested on numerous artificial data sets and has shown high efficiency. A comparison of the proposed method with other well-known circle detection methods on real-world images also indicates a significant advantage for our method. PubDate: 2021-03-01
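The paper's existence conditions and best-circle formulas are its own contribution; the underlying RANSAC loop for a single circle, however, is standard and can be sketched as follows (the inlier tolerance eps plays a role loosely analogous to a DBSCAN radius parameter; all names and data are invented for illustration):

```python
import math
import random

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points,
    solved from the perpendicular-bisector (circumcentre) equations."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # (near-)collinear sample, no finite circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def ransac_circle(points, eps=0.05, iters=500, seed=0):
    """RANSAC: repeatedly fit a circle to 3 random points and keep the
    model whose ring of width 2*eps contains the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        model = circle_through(*rng.sample(points, 3))
        if model is None:
            continue
        (ux, uy), r = model
        inliers = sum(abs(math.hypot(x - ux, y - uy) - r) <= eps
                      for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

For several circles the loop is typically repeated, removing the inliers of each detected circle before searching for the next one.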

Abstract: We study the problem of minimizing a convex function on a nonempty, finite subset of the integer lattice when the function cannot be evaluated at noninteger points. We propose a new underestimator that does not require access to (sub)gradients of the objective; such information is unavailable when the objective is a blackbox function. Rather, our underestimator uses secant linear functions that interpolate the objective function at previously evaluated points. These linear mappings are shown to underestimate the objective in disconnected portions of the domain. Therefore, the union of these conditional cuts provides a nonconvex underestimator of the objective. We propose an algorithm that alternates between updating the underestimator and evaluating the objective function. We prove that the algorithm converges to a global minimum of the objective function on the feasible set. We present two approaches for representing the underestimator and compare their computational effectiveness. We also compare implementations of our algorithm with existing methods for minimizing functions on a subset of the integer lattice. We discuss the difficulty of this problem class and provide insights into why a computational proof of optimality is challenging even for moderate problem sizes. PubDate: 2021-02-27
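To see why secants give "conditional cuts" valid only on disconnected portions of the domain: for a convex function, the chord through two evaluated points lies above the function between them but below it outside that interval. A one-dimensional sketch (the function and interpolation points are invented for illustration):

```python
def secant(x1, f1, x2, f2):
    """Linear function interpolating the points (x1, f1) and (x2, f2)."""
    slope = (f2 - f1) / (x2 - x1)
    return lambda x: f1 + slope * (x - x1)

# Invented convex example: f(x) = x^2 evaluated at the integers 1 and 3.
f = lambda x: x * x
cut = secant(1, f(1), 3, f(3))  # the chord 4x - 3

# Between the interpolation points the chord OVER-estimates f ...
assert cut(2) >= f(2)
# ... while outside [1, 3] it UNDER-estimates f, so the chord is a valid
# cut only on the disconnected region x <= 1 or x >= 3.
assert cut(0) <= f(0) and cut(5) <= f(5)
```

Taking the union of such chords over many evaluated pairs yields a piecewise-linear, generally nonconvex underestimator of the kind the abstract describes.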

Abstract: Solving mixed-integer nonlinear optimization problems (MINLPs) to global optimality is extremely challenging. An important step towards enabling their solution is the design of convex relaxations of the feasible set. Known solution approaches based on spatial branch-and-bound become more effective the tighter the relaxations used. Relaxations are commonly established by convex underestimators, where each constraint function is considered separately. Instead, a considerably tighter relaxation can be found via so-called simultaneous convexification, where convex underestimators are derived for more than one constraint function at a time. In this work, we present a global solution approach for mixed-integer nonlinear problems that uses simultaneous convexification. We introduce a separation method that relies on determining the convex envelope of linear combinations of the constraint functions and on solving a nonsmooth convex problem. In particular, we apply the method to quadratic absolute value functions and derive their convex envelopes. The practicality of the proposed approach is demonstrated on several test instances from gas network optimization, where the method outperforms standard approaches that use separate convex relaxations. PubDate: 2021-02-22

Abstract: We present two bilevel programming formulations for the aircraft deconfliction problem: one based on speed regulation in k dimensions, the other on heading angle changes in 2 dimensions. We propose three reformulations of each problem based on KKT conditions and on two different duals of the lower-level subproblems. We also propose a cut generation algorithm to solve the bilevel formulations. Finally, we present computational results on a variety of instances. PubDate: 2021-02-18

Abstract: The paper is dedicated to the study of strong duality for a problem of linear copositive programming. Based on the recently introduced concept of the set of normalized immobile indices, an extended dual problem is deduced. The dual problem satisfies the strong duality relations and does not require any additional regularity assumptions such as constraint qualifications. The main difference from previously obtained results is that the extended dual problem uses neither the immobile indices themselves nor explicit information about the convex hull of these indices. The strong duality formulations presented in the paper for linear copositive problems have a structure and properties similar to those proposed in the works of M. Ramana, L. Tuncel, and H. Wolkowicz for semidefinite programming. PubDate: 2021-02-18

Abstract: Relaxed correlation clustering (RCC) is a vertex partitioning problem that aims at minimizing the so-called relaxed imbalance in signed graphs. RCC is considered to be an NP-hard unsupervised learning problem with applications in biology, economics, image recognition and social network analysis. To solve it, we propose two integer linear programming formulations and a local search-based metaheuristic. The latter relies on auxiliary data structures to efficiently perform move evaluations during the search process. Extensive computational experiments on existing and newly proposed benchmark instances demonstrate the superior performance of the proposed approaches compared to those available in the literature. While the exact approaches obtained optimal solutions for open problems, the proposed heuristic algorithm was capable of finding high-quality solutions within reasonable CPU time. In addition, we report improved results for the symmetric version of the problem. Moreover, we show the benefits of implementing the efficient move evaluation procedure, which makes the proposed metaheuristic scalable even for large instances. PubDate: 2021-02-13