Abstract: We propose a general solution approach for min-max-robust counterparts of combinatorial optimization problems with uncertain linear objectives. We focus on the discrete scenario case, but our approach can be extended to other types of uncertainty sets such as polytopes or ellipsoids. Concerning the underlying certain problem, the algorithm is entirely oracle-based, i.e., our approach only requires a (primal) algorithm for solving the certain problem. It is thus particularly useful when the certain problem is well-studied but its combinatorial structure cannot be directly exploited in a tailored robust optimization approach, or when the underlying problem is only defined implicitly by given software. The idea of our algorithm is to solve the convex relaxation of the robust problem by a simplicial decomposition approach, the main challenge being the non-differentiability of the objective function in the case of discrete or polytopal uncertainty. The resulting dual bounds are then used within a tailored branch-and-bound framework for solving the robust problem to optimality. In a computational evaluation, we show that our method outperforms straightforward linearization approaches on the robust minimum spanning tree problem. Moreover, using the Concorde solver as the certain-problem oracle, our approach computes much better dual bounds for the robust traveling salesman problem in the same amount of time. PubDate: 2023-01-25
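The oracle-based idea can be illustrated with a toy sketch (not the paper's algorithm, and a made-up certain problem rather than its MST/TSP oracles): for discrete cost scenarios, each scenario's certain optimum is a lower bound on the min-max value, and evaluating any oracle solution under its worst scenario is an upper bound. All data below are hypothetical.

```python
from itertools import combinations

# Toy certain problem: choose exactly 2 of 4 items minimizing total cost.
# The "oracle" solves the certain problem for any given cost vector.
def oracle(cost):
    return min(combinations(range(4), 2), key=lambda s: sum(cost[i] for i in s))

scenarios = [[4, 1, 3, 2], [1, 5, 2, 4]]  # discrete cost scenarios

def worst_case(sol):
    # adversarial value of a fixed solution over all scenarios
    return max(sum(c[i] for i in sol) for c in scenarios)

# Lower bound on min_x max_i c_i^T x: each scenario's certain optimum.
lb = max(sum(c[i] for i in oracle(c)) for c in scenarios)
# Upper bound: any oracle solution is feasible for the robust problem.
ub = min(worst_case(oracle(c)) for c in scenarios)
print(lb, ub)  # the robust optimum lies between these bounds
```

A branch-and-bound scheme, as in the paper, would then close the gap between such oracle-derived bounds.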
Abstract: The minimum sum-of-squares clustering problem is a very important problem in data mining and machine learning, with many applications in, e.g., medicine or the social sciences. However, it is known to be NP-hard in all relevant cases and notoriously hard to solve to global optimality in practice. In this paper, we develop and test different tailored mixed-integer programming techniques to improve the performance of state-of-the-art MINLP solvers when applied to the problem, among them cutting planes, propagation techniques, branching rules, and primal heuristics. Our extensive numerical study shows that our techniques significantly improve the performance of the open-source MINLP solver SCIP. Consequently, using our novel techniques, we can solve many instances that are not solvable with SCIP alone, and we obtain much smaller gaps for those instances that still cannot be solved to global optimality. PubDate: 2023-01-10
Abstract: In this paper, we propose a new algorithm combining the Douglas–Rachford (DR) algorithm and the Frank–Wolfe algorithm, also known as the conditional gradient (CondG) method, for solving the classic convex feasibility problem. Within the algorithm, which will be named the Approximate Douglas–Rachford (ApDR) algorithm, the CondG method is used as a subroutine to compute feasible inexact projections on the sets under consideration, and the ApDR iteration is defined based on the DR iteration. The ApDR algorithm generates two sequences: the main sequence, based on the DR iteration, and its corresponding shadow sequence. When the intersection of the feasible sets is nonempty, the main sequence converges to a fixed point of the usual DR operator, and the shadow sequence converges to the solution set. We provide some numerical experiments to illustrate the behaviour of the sequences produced by the proposed algorithm. PubDate: 2023-01-06
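A minimal sketch of the plain DR feasibility iteration that ApDR builds on, using two lines in the plane with exact projections (the paper replaces these with inexact Frank–Wolfe projections); the sets and starting point are hypothetical illustrations:

```python
def P_A(x):  # projection onto the line x1 = x2
    m = (x[0] + x[1]) / 2
    return (m, m)

def P_B(x):  # projection onto the x-axis (x2 = 0)
    return (x[0], 0.0)

def reflect(P, x):
    # reflection R = 2P - I across the set projected onto by P
    p = P(x)
    return (2 * p[0] - x[0], 2 * p[1] - x[1])

x = (1.0, 0.0)  # main DR sequence: x <- (x + R_B(R_A(x))) / 2
for _ in range(60):
    r = reflect(P_B, reflect(P_A, x))
    x = tuple((xi + ri) / 2 for xi, ri in zip(x, r))

shadow = P_A(x)  # shadow sequence lies in A and tends to A ∩ B = {(0, 0)}
print(shadow)
```

Here the intersection is the origin, so the shadow sequence converges there, matching the nonempty-intersection case described in the abstract.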
Abstract: A new framework for portfolio diversification is introduced which goes beyond the classical mean-variance approach and portfolio allocation strategies such as risk parity. It is based on a novel concept called portfolio dimensionality that connects diversification to the non-Gaussianity of portfolio returns and can typically be defined in terms of the ratio of risk measures which are homogeneous functions of equal degree. The latter arises naturally due to our requirement that diversification measures should be leverage invariant. We introduce this new framework and argue its benefits relative to existing measures of diversification in the literature, before addressing the question of optimizing diversification or, equivalently, dimensionality. Maximising portfolio dimensionality leads to highly non-trivial optimization problems with objective functions which are typically non-convex and potentially have multiple local optima. Two complementary global optimization algorithms are thus presented. For problems of moderate size that are more akin to asset allocation problems, a deterministic branch-and-bound algorithm is developed, whereas for problems of larger size a stochastic global optimization algorithm based on Gradient Langevin Dynamics is given. We demonstrate analytically and through numerical experiments that the framework reflects the desired properties often discussed in the literature. PubDate: 2023-01-01
Abstract: We consider a knapsack-constrained maximization problem of a nonnegative monotone DR-submodular function f over a bounded integer lattice \([\varvec{B}]\) in \({\mathbb {R}}_+^n\) , \(\max \{f({\varvec{x}}): {\varvec{x}}\in [\varvec{B}] \text {~and~} \sum _{i=1}^n {\varvec{x}}(i)c(i)\le 1\}\) , where n is the cardinality of a ground set N and \(c(\cdot )\) is a cost function defined on N. Soma and Yoshida [Math. Program., 172 (2018), pp. 539-563] present a \((1-e^{-1}-O(\epsilon ))\) -approximation algorithm for this problem by combining a threshold greedy algorithm with a partial element enumeration technique. Although the approximation ratio is almost tight, their algorithm runs in \(O(\frac{n^3}{\epsilon ^3}\log ^3 \tau [\log ^3 \left\Vert \varvec{B}\right\Vert _\infty + \frac{n}{\epsilon }\log \left\Vert \varvec{B}\right\Vert _\infty \log \frac{1}{\epsilon c_{\min }}])\) time, where \(c_{\min }=\min _i c(i)\) and \(\tau \) is the ratio of the maximum value of f to the minimum nonzero increase in the value of f. Besides, Ene and Nguyễn [arXiv:1606.08362, 2016] indirectly give a \((1-e^{-1}-O(\epsilon ))\) -approximation algorithm with \(O({(\frac{1}{\epsilon })}^{ O(1/\epsilon ^4)}n \log {\Vert \varvec{B}\Vert }_\infty \log ^2{(n \log {\Vert \varvec{B}\Vert }_\infty )})\) time, but their algorithm is randomized. In this paper, we make full use of the DR-submodularity over a bounded integer lattice, carry the greedy idea forward in the continuous process, and provide a simple deterministic rounding method that yields a feasible solution of the original problem without loss of objective value. We present a deterministic algorithm and theoretically reduce its running time to a new record, \(O\big ((\frac{1}{\epsilon })^{O({1}/{\epsilon ^5})} \cdot n \log \frac{1}{c_{\min }} \log {\Vert \varvec{B}\Vert _\infty }\big )\) , with the same approximation ratio. PubDate: 2023-01-01
Abstract: The online optimization problem with non-convex loss functions over a closed convex set, coupled with a set of inequality (possibly non-convex) constraints, is a challenging online learning problem. A proximal method of multipliers with quadratic approximations (named OPMM) is presented to solve this online non-convex optimization problem with long-term constraints. Regrets of the violation of the Karush-Kuhn-Tucker conditions of OPMM for solving online non-convex optimization problems are analyzed. Under mild conditions, it is shown that this algorithm exhibits \({{\mathcal {O}}}(T^{-1/8})\) Lagrangian gradient violation regret, \({{\mathcal {O}}}(T^{-1/8})\) constraint violation regret and \({{\mathcal {O}}}(T^{-1/4})\) complementarity residual regret if the parameters in the algorithm are properly chosen, where T denotes the number of time periods. For the case that the objective is a convex quadratic function, we demonstrate that the regret of the objective reduction can be established even when the feasible set is non-convex. For the case when the constraint functions are convex, if the solution of the subproblem in OPMM is obtained by solving its dual, OPMM is proved to be an implementable projection method for solving the online non-convex optimization problem. PubDate: 2023-01-01
Abstract: In this paper, we present DC (difference of convex functions) auxiliary principle methods for solving lexicographic equilibrium problems. Under strong monotonicity and Lipschitz-type assumptions on the cost bifunction, we study the convergence of the sequence generated by the proposed algorithms to the unique solution of the considered lexicographic equilibrium problem. Moreover, we also study the asymptotic behavior of the algorithm for solving the considered problem in the presence of computational errors. Finally, we give some numerical experiments to illustrate the behaviour of the proposed algorithms and provide their comparison with some known algorithms. PubDate: 2023-01-01
Abstract: Coexhausters are families of convex compact sets that allow one to represent an approximation of the increment of a function at a given point as a minmax or maxmin of affine functions. We demonstrate that this representation can be used to define a piecewise affine function, and therefore coexhausters are a natural tool for studying the problem of finding a global minimum of piecewise affine functions. All the conditions and methods in the current study are obtained by means of coexhauster theory. Firstly, we apply coexhauster-based conditions to state and prove necessary and sufficient conditions for a piecewise affine function to be bounded from below. Secondly, we use coexhausters to construct a simple method which allows one to obtain the minimum value of the studied function and the corresponding set of all its global minimizers. Illustrative numerical examples are provided throughout the paper. PubDate: 2023-01-01
Abstract: The aim of this paper is to establish scalar characterizations of minimal, weak minimal and strict minimal solutions in terms of a generalized oriented distance function defined on sets in a real normed linear space with respect to a point in the space. Further, we study the lower and upper semicontinuity of the generalized oriented distance function. PubDate: 2023-01-01
Abstract: The problem of packing a given set of freely translated and rotated convex polygons in a minimum-perimeter convex polygon (in particular the minimum-perimeter convex hull) is introduced. A mathematical model of the problem using the phi-function technique is provided. Problem instances with up to 6 convex polygons are solved by the global NLP solver BARON to get a minimum-perimeter convex hull. Numerical experiments for larger instances are reported using the local NLP solver IPOPT. PubDate: 2023-01-01
Abstract: Huemer et al. (Discrete Mathematics, 2019) proved that for any two point sets R and B with \(|R| = |B|\) , the perfect matching that matches points of R with points of B and maximizes the total squared Euclidean distance of the matched pairs has the property that all the disks induced by the matching have a common point. Each pair of matched points \(p\in R\) and \(q\in B\) induces the disk of smallest diameter that covers p and q. Following this research line, in this paper we consider the perfect matching that maximizes the total Euclidean distance. First, we prove that this new matching for R and B does not always ensure the common intersection property of the disks. Second, we extend the study of this new matching to sets of 2n uncolored points in the plane, where a matching is just a partition of the points into n pairs. As the main result, we prove that in this case all disks of the matching do have a common point. PubDate: 2023-01-01
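The uncolored setting can be sketched on a tiny hypothetical point set: enumerate all pairings, keep the one maximizing total Euclidean distance, and check pairwise intersection of the induced disks. Pairwise intersection is only a necessary condition for a common point (by Helly's theorem in the plane one would need triples), so this is a finite illustration consistent with the theorem, not its proof; the points are made up.

```python
from itertools import combinations
from math import dist

pts = [(0, 0), (1, 0), (0, 1), (2, 2)]  # 2n = 4 uncolored points

def pairings(items):
    # enumerate all partitions of the points into pairs
    if not items:
        yield []
        return
    a, rest = items[0], items[1:]
    for i, b in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + tail

best = max(pairings(pts), key=lambda m: sum(dist(p, q) for p, q in m))
# each matched pair induces the smallest disk covering both endpoints:
# center at the midpoint, radius half the pair's distance
disks = [(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), dist(p, q) / 2)
         for p, q in best]
# necessary condition for a common point: every two disks intersect
ok = all(dist(c1, c2) <= r1 + r2
         for (c1, r1), (c2, r2) in combinations(disks, 2))
print(best, ok)
```

For this instance the max-distance pairing matches (0,0) with (2,2) and (1,0) with (0,1), and its two disks indeed intersect.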
Abstract: In this paper, we study an uncertain inequality system, where the input data are uncertain and belong to prescribed uncertainty sets. Using the deterministic approach in robust optimization, we treat this uncertain system by examining the so-called robust system. This approach enables us to compute the second order tangent sets for the solution set of the robust system and then obtain the second order epi-subderivative for the indicator function of its solution set. In this way, we are able to calculate the graphical derivative for the normal cone mapping of the solution set of the robust system under certain qualification conditions. As applications, we establish second order necessary and sufficient optimality conditions, and derive necessary and sufficient conditions for stability properties such as the isolated calmness of optimization problems involving uncertain constraints under weak qualification conditions. PubDate: 2023-01-01
Abstract: In this paper, we derive second-order necessary optimality conditions and a second-order sufficient optimality condition for local Pareto solutions to a class of multi-objective optimal control problems with mixed pointwise constraints. The necessary optimality conditions for the multi-objective optimal control problem are established via optimality conditions of a specific vector optimization problem in Banach spaces, which is proved directly without using the scalarization method. PubDate: 2023-01-01
Abstract: For nonconvex quadratically constrained quadratic programs (QCQPs), we first show that, under certain feasibility conditions, the standard semidefinite programming (SDP) relaxation is exact for QCQPs with bipartite graph structures. The exact optimal solutions are obtained by examining the dual SDP relaxation and the rank of the optimal solution of this dual SDP relaxation under strong duality. Our results generalize the previous results on QCQPs with sign-definite bipartite graph structures, QCQPs with forest structures, and QCQPs with nonpositive off-diagonal data elements. Second, we propose a conversion method from QCQPs with no particular structure to ones with bipartite graph structures. As a result, we demonstrate that a wider class of QCQPs can be exactly solved by the SDP relaxation. Numerical instances are presented for illustration. PubDate: 2022-12-31
Abstract: This paper presents the distributed primal outer approximation (DiPOA) algorithm for solving sparse convex programming (SCP) problems with separable structures efficiently and in a decentralized manner. The DiPOA algorithm is developed by embedding the recently proposed relaxed hybrid alternating direction method of multipliers (RH-ADMM) algorithm into the outer approximation (OA) algorithm. We also propose two main improvements to control the quality and the number of the cutting planes that approximate the nonlinear functions. In particular, the RH-ADMM algorithm acts as a distributed numerical engine inside the DiPOA algorithm. DiPOA takes advantage of the multi-core architecture of modern processors to speed up optimization algorithms. On the application side, the proposed distributed algorithm makes the solution of SCP problems in learning and control practical. The paper presents a performance analysis of DiPOA for distributed sparse logistic regression and quadratically constrained optimization problems, and concludes with a numerical comparison with state-of-the-art optimization solvers. PubDate: 2022-12-24
Abstract: We introduce a generalization of separability for global optimization, presented in the context of a simple branch-and-bound method. Our results apply to continuously differentiable objective functions implemented as computer programs. A significant search-space reduction can be expected to yield an acceleration of any global optimization method. We show how to utilize interval derivatives calculated by adjoint algorithmic differentiation to examine the monotonicity of the objective with respect to so-called structural separators, and how to verify the latter automatically. PubDate: 2022-12-24
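The monotonicity argument behind such interval-derivative pruning can be sketched with a hand-coded derivative interval (a hypothetical one-dimensional objective, not the paper's adjoint-AD machinery): when the derivative interval over a box excludes zero, the objective is monotone there and the box interior can be discarded.

```python
# Hypothetical objective f(x) = x^2 + 3x on the box [1, 2].
def f(x):
    return x * x + 3 * x

def fprime_interval(lo, hi):
    # f'(x) = 2x + 3 is increasing in x, so its range on [lo, hi]
    # is just [2*lo + 3, 2*hi + 3] -- a trivial interval enclosure
    return (2 * lo + 3, 2 * hi + 3)

box = (1.0, 2.0)
dlo, dhi = fprime_interval(*box)
# Derivative interval [5, 7] excludes zero: f is strictly monotone
# on the box, so only one endpoint can be a global minimizer.
candidate = box[0] if dlo > 0 else box[1] if dhi < 0 else None
print(candidate)
```

A branch-and-bound method would apply this test per box and per structural separator, shrinking the search space before branching.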
Abstract: The quadratically constrained quadratic programming problem often appears in fields such as engineering practice, management science and network communication. This paper considers a non-convex quadratic programming problem with convex quadratic constraints. Firstly, the objective function of the problem is reconstructed into a form composed of only one convex function and several linear functions using the matrix eigenvalue decomposition technique. The reconstructed problem is then converted into an equivalent problem with a simple concave quadratic objective function in the outcome space by introducing appropriate auxiliary variables, and its feasible domain is convex. Based on a branch-and-bound framework, which guarantees the global optimality of the solution, a global optimization algorithm for solving the equivalent problem is proposed, integrating an effective relaxation process and a branching process related to the outer approximation technique. Finally, the effectiveness and feasibility of the algorithm are illustrated by numerical experiments. PubDate: 2022-12-19
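The eigenvalue-decomposition step can be sketched on a hypothetical 2x2 indefinite quadratic whose spectrum is known in closed form: the positive eigendirection yields a convex squared linear form, the negative one a concave squared linear form, and the split reproduces the original objective exactly. The matrix and point below are illustrations, not from the paper.

```python
from math import sqrt

# Hypothetical indefinite objective f(x) = x^T Q x with Q = [[0, 1], [1, 0]]:
# eigenvalues +1 and -1, eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).
v_pos = (1 / sqrt(2), 1 / sqrt(2))   # eigenvalue +1 -> convex term
v_neg = (1 / sqrt(2), -1 / sqrt(2))  # eigenvalue -1 -> concave term

def f_direct(x):
    return 2 * x[0] * x[1]           # x^T Q x for this particular Q

def f_split(x):
    dot = lambda u, w: u[0] * w[0] + u[1] * w[1]
    # convex part + (negative eigenvalue) * (linear form)^2;
    # introducing y = v_neg^T x as an auxiliary variable gives the
    # concave quadratic objective in the outcome space
    return dot(v_pos, x) ** 2 - dot(v_neg, x) ** 2

x = (1.0, 2.0)
print(f_direct(x), f_split(x))  # the two evaluations agree
```

In the paper's reformulation, each negative-eigenvalue direction contributes one such auxiliary linear form, leaving a single convex function plus concave squares of linear functions.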
Abstract: In this paper, we consider solving the tensor complementarity problem (TCP). We first introduce the concept of the implicit Z-tensor, which is a generalization of the Z-tensor. Then, based on a new fixed point reformulation of the TCP, we design an iterative algorithm for solving the TCP with an implicit Z-tensor under the assumption that the feasible set of the problem involved is nonempty. We prove that the proposed fixed point iterative method converges monotonically downward to a solution of the TCP. Furthermore, we establish the global linear rate of convergence of the proposed method under some reasonable assumptions. Compared with the existing related studies, the proposed method not only solves a wider range of TCPs, but also has a lower computational cost. The numerical results verify our theoretical findings. PubDate: 2022-12-16
Abstract: Bayesian optimization is a popular tool for optimizing time-consuming objective functions with a limited number of function evaluations. In real-life applications like engineering design, the designer often wants to take multiple objectives as well as input uncertainty into account to find a set of robust solutions. While this is an active topic in single-objective Bayesian optimization, it is less investigated in the multi-objective case. We introduce a novel Bayesian optimization framework to perform multi-objective optimization considering input uncertainty. We propose a robust Gaussian Process model to infer the Bayes risk criterion to quantify robustness, and we develop a two-stage Bayesian optimization process to search for a robust Pareto frontier, i.e., solutions that have good average performance under input uncertainty. The complete framework supports various distributions of the input uncertainty and takes full advantage of parallel computing. We demonstrate the effectiveness of the framework through numerical benchmarks. PubDate: 2022-12-09