Abstract: We study a PDE-constrained optimal control problem that involves functions of bounded variation as controls and includes the TV seminorm of the control in the objective. We apply a path-following inexact Newton method to the problems that arise from smoothing the TV seminorm and adding an \(H^1\) regularization. We prove in an infinite-dimensional setting that, first, the solutions of these auxiliary problems converge to the solution of the original problem and, second, that an inexact Newton method enjoys fast local convergence when applied to a reformulation of the auxiliary optimality systems in which the control appears as an implicit function of the adjoint state. We show convergence of a finite element approximation, provide a globalized preconditioned inexact Newton method as a solver for the discretized auxiliary problems, and embed it into an inexact path-following scheme. We construct a two-dimensional test problem with a fully explicit solution and present numerical results to illustrate the accuracy and robustness of the approach. PubDate: 2022-05-11
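A common way to realize the smoothing described above, shown here only as a plausible sketch (the symbols \(\beta\), \(\gamma\), and \(\varepsilon\) are assumed names, and the paper's exact formulation may differ), replaces the TV seminorm by a differentiable approximation and adds the \(H^1\) term:

\[
\min_{u}\; J(y(u)) \;+\; \beta \int_{\Omega} \sqrt{|\nabla u|^2 + \gamma^{-2}}\,dx \;+\; \frac{\varepsilon}{2}\,\|u\|_{H^1}^2 ,
\]

where \(J(y(u))\) denotes the cost of the PDE-constrained problem; driving \(\gamma \to \infty\) and \(\varepsilon \to 0\) along the path recovers the original TV-regularized problem.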
Abstract: Although the performance of popular optimization algorithms such as the Douglas–Rachford splitting (DRS) and the ADMM is satisfactory in convex and well-scaled problems, ill conditioning and nonconvexity pose a severe obstacle to their reliable employment. Expanding on recent convergence results for DRS and ADMM applied to nonconvex problems, we propose two linesearch algorithms to enhance and robustify these methods by means of quasi-Newton directions. The proposed algorithms are suited for nonconvex problems, require the same black-box oracle as DRS and ADMM, and maintain their (subsequential) convergence properties. Numerical evidence shows that the employment of L-BFGS in the proposed framework greatly improves convergence of DRS and ADMM, making them robust to ill conditioning. Under regularity and nondegeneracy assumptions at the limit point, superlinear convergence is shown when quasi-Newton Broyden directions are adopted. PubDate: 2022-05-11
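The sketch below illustrates the general idea under stated assumptions: a DRS fixed-point iteration whose step is replaced by a candidate quasi-Newton direction when a linesearch on the fixed-point residual accepts it, with a fallback to the nominal DRS step. This is a minimal illustration, not the paper's algorithm (which uses an envelope-based merit function); `prox_f`, `prox_g`, and `direction` are hypothetical callables.

```python
import numpy as np

def drs_linesearch(prox_f, prox_g, s, direction, max_backtracks=10, tol=1e-8, iters=500):
    """Generic DRS iteration with a residual-decrease linesearch on a
    candidate (e.g., quasi-Newton) direction.  Illustrative only: the
    paper's linesearch uses a different, envelope-based merit function."""
    def residual(s):
        u = prox_f(s)
        v = prox_g(2.0 * u - s)
        return v - u
    for _ in range(iters):
        r = residual(s)
        if np.linalg.norm(r) < tol:
            break
        d = direction(s, r)              # candidate direction, e.g. from L-BFGS
        tau = 1.0
        for _ in range(max_backtracks):
            s_new = s + tau * d
            if np.linalg.norm(residual(s_new)) <= (1 - 1e-4 * tau) * np.linalg.norm(r):
                break                    # sufficient residual decrease: accept
            tau *= 0.5
        else:
            s_new = s + r                # fall back to the nominal DRS step
        s = s_new
    return prox_f(s)
```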
Abstract: The paper deals with the random time-dependent oligopolistic market equilibrium problem. For such a problem, the firms' point of view has been analyzed in Barbagallo and Guarino Lo Bianco (Optim. Lett. 14:2479–2493, 2020), while here the policymaker's point of view is studied. The random dynamic optimal control equilibrium conditions are expressed by means of an inverse stochastic time-dependent variational inequality, which is proved to be equivalent to a stochastic time-dependent variational inequality. Some existence and well-posedness results for optimal regulatory taxes are obtained. Moreover, a numerical scheme to compute the solution to the stochastic time-dependent variational inequality is presented. Finally, an example is discussed. PubDate: 2022-05-05
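For orientation, a stochastic time-dependent variational inequality of the kind referenced above typically takes the following generic form (the operator \(F\) and the constraint set \(\mathbb{K}\) are placeholder names; the paper's precise spaces and conditions differ):

\[
\text{find } u^* \in \mathbb{K} \text{ such that } \int_0^T \mathbb{E}\big[ \langle F(t,\omega,u^*(t,\omega)),\, v(t,\omega) - u^*(t,\omega) \rangle \big]\,dt \;\ge\; 0 \quad \text{for all } v \in \mathbb{K}.
\]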
Abstract: In this two-part study, we discuss possible extensions of the main ideas and methods of constrained DC optimization to the case of nonlinear semidefinite programming problems and more general nonlinear cone constrained optimization problems. In the first paper, we analyse two different approaches to the definition of DC matrix-valued functions (namely, order-theoretic and componentwise), study some properties of convex and DC matrix-valued mappings and demonstrate how to compute DC decompositions of some nonlinear semidefinite constraints appearing in applications. We also compute a DC decomposition of the maximal eigenvalue of a DC matrix-valued function. This DC decomposition can be used to reformulate DC semidefinite constraints as DC inequality constraints. Finally, we study local optimality conditions for general cone constrained DC optimization problems. PubDate: 2022-05-04
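The reformulation mentioned above can be summarized as follows (a sketch in standard notation; \(g\) and \(h\) are assumed names for the convex components): a semidefinite constraint \(A(x) \preceq 0\) is equivalent to the scalar constraint \(\lambda_{\max}(A(x)) \le 0\), so once a DC decomposition \(\lambda_{\max}(A(x)) = g(x) - h(x)\) with convex \(g, h\) is available, the matrix constraint becomes the DC inequality

\[
g(x) - h(x) \le 0 .
\]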
Abstract: We prove some new results about the asymptotic behavior of the steepest descent algorithm for general quadratic functions. Some well-known results of this theory are developed and extended to non-convex functions. We propose an efficient strategy for choosing initial points in the algorithm and show that this strategy can dramatically enhance the performance of the method. Furthermore, a modified version of the steepest descent algorithm equipped with a pre-initialization step is introduced. We show that an initial guess near the optimal solution does not necessarily imply fast convergence. We also propose a new approach to investigate the behavior of the method for non-convex quadratic functions. Moreover, some interesting results about the role of initial points in convergence to saddle points are presented. Finally, we investigate the probability of divergence for uniform random initial points. PubDate: 2022-05-02
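As a reference point for the analysis above, here is the classical steepest descent iteration with exact line search for a quadratic \(f(x) = \tfrac{1}{2}x^\top A x - b^\top x\); a minimal sketch of the textbook method only, not the paper's pre-initialization strategy:

```python
import numpy as np

def steepest_descent_quadratic(A, b, x0, iters=1000, tol=1e-10):
    """Steepest descent with exact line search for f(x) = 0.5 x'Ax - b'x.
    For indefinite (non-convex) A the exact step g'g / g'Ag may be negative
    or undefined; this sketch only covers the classical convex case."""
    x = x0.astype(float)
    for _ in range(iters):
        g = A @ x - b                       # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))     # exact minimizing step length
        x = x - alpha * g
    return x
```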
Abstract: The limited memory BFGS (L-BFGS) method is one of the most popular methods for large-scale unconstrained optimization. Since the standard L-BFGS method uses a line search to guarantee its global convergence, it sometimes requires a large number of function evaluations. To overcome this difficulty, we propose a new L-BFGS method with a certain regularization technique. We show its global convergence under the usual assumptions. In order to make the method more robust and efficient, we also extend it with several techniques, such as the nonmonotone technique and simultaneous use of the Wolfe line search. Finally, we present some numerical results for test problems in CUTEst, which show that the proposed method is robust in terms of solving more problems. PubDate: 2022-05-01
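For context, the core of any L-BFGS variant is the standard two-loop recursion below, which applies the implicit inverse-Hessian approximation to a gradient; this is the textbook building block, with the paper's regularization modifying how the resulting direction is used:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H_k @ grad, where H_k is
    the implicit inverse-Hessian approximation built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (newest pairs last)."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                             # initial scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest to newest
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```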
Abstract: As we approach the physical limits predicted by Moore's law, a variety of specialized hardware is emerging to tackle specialized tasks in different domains. Within combinatorial optimization, adiabatic quantum computers, complementary metal-oxide semiconductor annealers, and optical parametric oscillators are a few emerging specialized hardware technologies for solving optimization problems. The Ising optimization model provides a unifying mathematical framework for all of this emerging special-purpose optimization hardware: the devices are all designed to solve optimization problems expressed in the Ising model or, equivalently, as a quadratic unconstrained binary optimization (QUBO) model. Due to various constraints specific to each type of hardware, these devices usually share a major limitation: the number of variables they can handle is very small. The local search meta-heuristic is one approach to tackling large-scale problems. However, a general optimization step within local search is not traditionally formulated in the Ising form. In this work, we introduce a new framework for modeling local search heuristics on special-purpose hardware. In particular, we propose models that take the limitations of the Ising model and current hardware into account. We demonstrate the advantage of our approach over previous methods by carrying out experiments showing that our local search models produce higher-quality solutions. PubDate: 2022-05-01
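As a point of reference for the local search being modeled, here is a plain greedy one-flip local search on a QUBO objective; an illustrative software baseline only, whereas the paper's contribution is to encode such local search moves themselves as small Ising/QUBO subproblems for hardware:

```python
import numpy as np

def one_flip_local_search(Q, x):
    """Greedy 1-flip local search for the QUBO objective x'Qx with binary x.
    Assumes Q is symmetric.  Flips any bit whose flip decreases the objective,
    until no single flip improves."""
    x = x.copy()
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            # change in x'Qx from flipping bit i (d = 1 - 2*x[i] is +/-1)
            delta = (1 - 2 * x[i]) * (Q[i, i] + 2 * (Q[i] @ x) - 2 * Q[i, i] * x[i])
            if delta < 0:
                x[i] = 1 - x[i]
                improved = True
    return x
```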
Abstract: Particle Swarm Optimization (PSO) is a population-based metaheuristic belonging to the class of Swarm Intelligence (SI) algorithms. Nowadays, its effectiveness on many hard problems no longer needs to be proven. Nevertheless, it is known to be strongly sensitive to the choice of its settings and weak at local search. In this paper, we propose a new algorithm, called QUAntum Particle Swarm Optimization (QUAPSO), based on quantum superposition to set the velocity parameters of PSO, simplifying the settings of the algorithm. Another improvement, inspired by the Kangaroo Algorithm (KA), was added to PSO in order to improve its efficiency in local search. QUAPSO was compared with a set of six well-known algorithms from the literature (two parameter sets of classical PSO, KA, Differential Evolution, Simulated Annealing Particle Swarm Optimization, Bat Algorithm and Simulated Annealing Gaussian Bat Algorithm). The experimental results show that QUAPSO outperforms the competing algorithms on a set of 30 test functions. PubDate: 2022-04-27
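For readers unfamiliar with the settings being tuned, the classical PSO update that QUAPSO builds on is sketched below; the fixed parameters \(w\), \(c_1\), \(c_2\) are exactly what the quantum-superposition mechanism (not reproduced here) replaces:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Classical PSO velocity/position update for one particle:
    inertia term plus attraction to the particle's best (pbest)
    and the swarm's best (gbest) positions."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```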
Abstract: We consider the numerical solution of shape optimization problems for Dirichlet Laplace eigenvalues subject to volume and perimeter constraints. By combining a level set method with a relaxation approach, the algorithm can perform shape and topological changes on a fixed grid. We use the volume expressions of Eulerian derivatives in shape gradient descent algorithms. Finite element methods are used for the discretizations. Two- and three-dimensional numerical examples are presented to illustrate the effectiveness of the algorithms. PubDate: 2022-04-25
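In level set methods of this kind, the shape is represented as the negative region of a function \(\phi\), and the descent step advects \(\phi\) by the standard Hamilton–Jacobi transport equation (a textbook sketch; \(V_n\) denotes the normal velocity supplied by the shape gradient):

\[
\frac{\partial \phi}{\partial t} + V_n\, |\nabla \phi| = 0, \qquad \Omega(t) = \{ x : \phi(x,t) < 0 \},
\]

which is what allows topological changes (merging and splitting of components) on a fixed grid.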
Abstract: Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient computation for one of these smooth components without slowing down the overall optimal rate of convergence. This result is much sharper than the classic black-box CP complexity results, especially when the difference between the two Lipschitz constants associated with these components is large. We then consider an important class of bilinear saddle point problems whose objective function is given by the summation of a smooth component and a nonsmooth one with a bilinear saddle point structure. Using the aforementioned AGS method for smooth composite optimization and Nesterov’s smoothing technique, we show that one only needs \({{\mathcal{O}}}(1/\sqrt{\varepsilon })\) gradient computations for the smooth component while still preserving the optimal \({{\mathcal{O}}}(1/\varepsilon )\) overall iteration complexity for solving these saddle point problems. We demonstrate that even more significant savings on gradient computations can be obtained for strongly convex smooth and bilinear saddle point problems. PubDate: 2022-04-12
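Nesterov's smoothing technique, as invoked above, replaces the nonsmooth bilinear term by a smooth approximation of the following standard form (the symbols are the usual ones from the smoothing literature, not necessarily the paper's notation):

\[
f_\eta(x) = \max_{y \in Y} \;\big\{ \langle A x, y \rangle - \hat{g}(y) - \eta\, d(y) \big\},
\]

where \(d\) is a strongly convex prox-function on \(Y\); the resulting \(f_\eta\) is smooth with a gradient Lipschitz constant of order \(\|A\|^2/\eta\), so choosing \(\eta = O(\varepsilon)\) trades approximation accuracy against smoothness, which is where the gradient-computation savings arise.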
Abstract: We develop a globalized Proximal Newton method for composite and possibly non-convex minimization problems in Hilbert spaces. Additionally, we impose less restrictive assumptions on the differentiability and convexity of the composite objective functional than in existing theory. As far as differentiability of the smooth part of the objective function is concerned, we introduce the notion of second order semi-smoothness and discuss why it constitutes an adequate framework for our Proximal Newton method. Nevertheless, both global convergence and local acceleration still hold in this setting. Finally, the convergence properties of our algorithm are illustrated by solving a toy model problem in function space. PubDate: 2022-04-09
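For orientation, a Proximal Newton step for a composite objective \(f + g\) (with \(f\) smooth and \(g\) convex) solves a subproblem of the following standard form; this is the generic template rather than the paper's specific globalized variant:

\[
x_{k+1} \in \arg\min_{y} \; \nabla f(x_k)^\top (y - x_k) + \frac{1}{2} (y - x_k)^\top H_k \,(y - x_k) + g(y),
\]

where \(H_k\) is the (approximate) second derivative of \(f\) at \(x_k\); globalization then controls the step, for instance by damping or proximal regularization of \(H_k\).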
Abstract: We extend the Malitsky-Tam forward-reflected-backward (FRB) splitting method for inclusion problems of monotone operators to nonconvex minimization problems. By assuming the generalized concave Kurdyka-Łojasiewicz (KL) property of a quadratic regularization of the objective, we show that the FRB method converges globally to a stationary point of the objective and enjoys the finite length property. Convergence rates are also given. The sharpness of our approach is guaranteed by virtue of the exact modulus associated with the generalized concave KL property. Numerical experiments suggest that FRB is competitive compared to the Douglas-Rachford method and the Boţ-Csetnek inertial Tseng’s method. PubDate: 2022-04-04
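Applied to a minimization problem \(\min f + g\) with \(f\) smooth, the FRB iteration takes the form sketched below; a minimal sketch under stated assumptions (the signature `prox_g(point, step)` and the step-size choice \(\lambda < 1/(2L)\) are the usual conventions, not necessarily the paper's):

```python
import numpy as np

def frb(grad_f, prox_g, x0, lam, iters=1000, tol=1e-9):
    """Forward-reflected-backward splitting for min f(x) + g(x), f smooth.
    The reflected term 2*grad_f(x_k) - grad_f(x_{k-1}) is the Malitsky-Tam
    correction; initializing x_{-1} = x_0 makes the first step a plain
    proximal gradient step."""
    x = x0.copy()
    g = grad_f(x)
    g_prev = g.copy()
    for _ in range(iters):
        x_new = prox_g(x - lam * (2.0 * g - g_prev), lam)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        g_prev = g
        x = x_new
        g = grad_f(x)
    return x
```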
Abstract: We study two NP-complete graph partition problems: k-equipartition problems and graph partition problems with knapsack constraints (GPKC). We introduce tight SDP relaxations with nonnegativity constraints to get lower bounds; the SDP relaxations are solved by an extended alternating direction method of multipliers (ADMM). In this way, we obtain high-quality lower bounds for k-equipartition on large instances with up to \(n =1000\) vertices within as little as 5 minutes and for GPKC problems with up to \(n=500\) vertices within as little as 1 hour. In contrast, interior point methods already fail to solve instances with \(n=300\) due to memory requirements. We also design heuristics to generate upper bounds from the SDP solutions, giving us tighter upper bounds than other methods proposed in the literature at low computational expense. PubDate: 2022-03-17
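The key per-iteration subroutine in ADMM-type SDP solvers of this kind is the projection onto the positive semidefinite cone, sketched below; the nonnegativity constraints are typically handled by a separate elementwise projection such as `np.maximum(X, 0)` (a generic building block, not the paper's full extended ADMM):

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone by
    clipping negative eigenvalues to zero: M -> V diag(max(w, 0)) V'."""
    w, V = np.linalg.eigh(M)               # eigendecomposition, M symmetric
    return (V * np.maximum(w, 0.0)) @ V.T
```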
Abstract: This paper gives an algorithm for computing local saddle points of unconstrained polynomial optimization problems. It is based on optimality conditions and Lasserre’s hierarchy of semidefinite relaxations. It can determine the existence of local saddle points. When there are several different local saddle point values, the algorithm can compute them in order, from the smallest to the largest. PubDate: 2022-03-14
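Recall the standard definition being computed here: for a function \(f(x,y)\) to be minimized in \(x\) and maximized in \(y\), a pair \((x^*, y^*)\) is a local saddle point if

\[
f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)
\]

for all \((x, y)\) in a neighborhood of \((x^*, y^*)\); the quantity \(f(x^*, y^*)\) is the associated local saddle point value.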
Abstract: The optimization of shape functionals under convexity, diameter or constant width constraints poses numerical challenges. The support function can be used to approximate solutions to such problems by finite-dimensional optimization problems under various constraints. We propose a numerical framework in dimensions two and three and present applications from the field of convex geometry. We consider the optimization of functionals depending on the volume, perimeter and Dirichlet Laplace eigenvalues under the aforementioned constraints. In particular, we numerically confirm Meissner’s conjecture regarding three-dimensional bodies of constant width with minimal volume. PubDate: 2022-03-12
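In the support function parametrization, the geometric constraints become linear or pointwise conditions on \(h\); for example (standard facts of convex geometry, stated here as a sketch), constant width \(w\) and, in two dimensions, convexity read

\[
h(u) + h(-u) = w \quad \text{for all unit directions } u, \qquad h''(\theta) + h(\theta) \ge 0 \quad (d = 2),
\]

which is what makes a finite-dimensional discretization of \(h\) amenable to constrained optimization.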
Abstract: Output-based controllers are known to be fragile with respect to model uncertainties. The standard \(\mathcal{H}_{\infty}\)-control theory provides a general approach to robust controller design based on the solution of the \(\mathcal{H}_{\infty}\)-Riccati equations. In view of stabilizing incompressible flows in simulations, two major challenges have to be addressed: the high-dimensional nature of the spatially discretized model and the differential-algebraic structure that comes with the incompressibility constraint. This work demonstrates the synthesis of low-dimensional robust controllers with guaranteed robustness margins for the stabilization of incompressible flow problems. The performance and the robustness of the reduced-order controller with respect to linearization and model reduction errors are investigated and illustrated in numerical examples. PubDate: 2022-03-09
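For reference, the state-feedback \(\mathcal{H}_{\infty}\)-Riccati equation has the standard algebraic form below for a plant \(\dot{x} = Ax + B_1 w + B_2 u\) with performance output \(z = Cx\) (textbook notation; the paper additionally has to cope with the differential-algebraic structure of the discretized flow equations):

\[
A^\top X + X A + C^\top C + X \left( \gamma^{-2} B_1 B_1^\top - B_2 B_2^\top \right) X = 0,
\]

where \(\gamma\) bounds the \(\mathcal{H}_{\infty}\) norm of the closed loop and a stabilizing solution \(X \succeq 0\) yields the controller gain.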
Abstract: In this paper, an inexact proximal-point penalty method is studied for constrained optimization problems, where the objective function is non-convex and the constraint functions can also be non-convex. This method approximately solves a sequence of subproblems, each of which is formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-based method. The computational complexity of this approach is analyzed separately for the cases of convex constraints and non-convex constraints. For both cases, the complexity results are established in terms of the number of proximal gradient steps needed to find an \(\varepsilon\)-stationary point. When the constraint functions are convex, we show a complexity result of \(\tilde{O}(\varepsilon ^{-5/2})\) to produce an \(\varepsilon\)-stationary point under Slater’s condition. When the constraint functions are non-convex, the complexity becomes \({\tilde{O}}(\varepsilon ^{-3})\) if a non-singularity condition holds on the constraints, and otherwise \(\tilde{O}(\varepsilon ^{-4})\) if a feasible initial solution is available. PubDate: 2022-03-03 DOI: 10.1007/s10589-022-00358-y
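A subproblem of the kind described above plausibly takes the following form for constraints \(c_i(x) \le 0\) (the parameters \(\beta_k\) and \(\rho_k\) are assumed names; the paper's exact weights may differ):

\[
x_{k+1} \approx \arg\min_{x} \; f(x) + \frac{\beta_k}{2} \| x - x_k \|^2 + \frac{\rho_k}{2} \sum_i \big[ \max\{ 0,\, c_i(x) \} \big]^2 ,
\]

where the proximal term with sufficiently large \(\beta_k\) turns a weakly convex objective into a strongly convex one, so an optimal first-order method applies to each subproblem.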
Abstract: In this paper we propose an adaptive trust-region method for smooth unconstrained optimization. The update rule for the trust-region radius relies only on gradient evaluations. Assuming that the gradient of the objective function is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations required by the proposed method to generate approximate stationary points. As a corollary, we establish a global convergence result. We also present numerical results on benchmark problems. In terms of the number of calls of the oracle, the proposed method compares favorably with trust-region methods that use evaluations of the objective function. PubDate: 2022-03-02 DOI: 10.1007/s10589-022-00356-0
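For contrast with the gradient-only update rule above, here is the classical trust-region loop whose radius update relies on the function-value ratio test; a sketch of the standard baseline only, not the paper's method (which dispenses with the objective evaluations in the `rho` computation):

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point for the subproblem min_p g'p + 0.5 p'Bp, ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    gBg = g @ (B @ g)
    tau = 1.0 if gBg <= 0 else min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, hess, x, delta=1.0, iters=200, tol=1e-8):
    """Classical trust-region method with the actual/predicted ratio test."""
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ (B @ p))      # predicted reduction (> 0)
        rho = (f(x) - f(x + p)) / pred           # actual / predicted
        if rho > 0.1:                            # accept the step
            x = x + p
        if rho < 0.25:                           # poor model fit: shrink
            delta *= 0.25
        elif rho > 0.75 and np.linalg.norm(p) > 0.99 * delta:
            delta *= 2.0                         # good fit at the boundary: expand
    return x
```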
Abstract: In this paper, we propose a new method for a class of difference-of-convex (DC) optimization problems whose objective is the sum of a smooth function and a possibly non-prox-friendly DC function. The method sequentially solves subproblems constructed from a quadratic approximation of the smooth function and a linear majorization of the concave part of the DC function. We allow the subproblem to be solved inexactly and propose a new inexact rule to characterize the inexactness of the approximate solution. For several classical algorithms applied to the subproblem, we derive practical termination criteria so as to obtain solutions satisfying the inexact rule. We also present some convergence results for our method, including global subsequential convergence and a non-asymptotic complexity analysis. Finally, numerical experiments are conducted to illustrate the efficiency of our method. PubDate: 2022-03-02 DOI: 10.1007/s10589-022-00357-z
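Under stated assumptions (the symbols below are generic names, not necessarily the paper's), for an objective \(F = f + g_1 - g_2\) with \(f\) smooth and \(g_1, g_2\) convex, the subproblem described above reads

\[
x_{k+1} \approx \arg\min_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \frac{L}{2} \| x - x_k \|^2 - \langle \xi_k, x - x_k \rangle + g_1(x), \qquad \xi_k \in \partial g_2(x_k),
\]

where the linear term in \(\xi_k\) majorizes the concave part \(-g_2\) and the quadratic term approximates \(f\); the inexact rule then governs how accurately this convex subproblem must be solved at each iteration.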