
Journal of Global Optimization
Journal Prestige (SJR): 1.311
Citation Impact (citeScore): 2
Number of Followers: 6  
 
  Hybrid journal (may contain Open Access articles)
ISSN (Print) 0925-5001 - ISSN (Online) 1573-2916
Published by Springer-Verlag
  • The interpolating element-free Galerkin method for the p-Laplace double
           obstacle mixed complementarity problem

      Abstract: In this paper, the interpolating element-free Galerkin method is presented for the p-Laplace double obstacle mixed complementarity problem when \(1<p<2\) and \(p>2\). First, a nonlinear power penalty equation is obtained by a power penalty approximation method, and the existence and uniqueness of the solution to the power penalty equation are proved for \(1<p<2\) and \(p>2\). The convergence of the power penalty solution to the solution of the original problem and the penalty estimates are analyzed. Second, the interpolating element-free Galerkin method is constructed for the nonlinear power penalty equation. The numerical implementation is described in detail, and the convergence of the interpolating element-free Galerkin method is established. Error estimates indicate that the convergence order depends not only on the spatial step h and the number of basis functions m in the interpolating element-free Galerkin method, but also on the index k in the penalty term, the penalty factor \(\lambda\) and p. For different p, a method for choosing the optimal k and \(\lambda\) is also given. Numerical examples verify the error estimates and illustrate the influence of each parameter on the solution.
      PubDate: 2022-12-03
       
  • A linear programming approach to approximating the infinite time reachable
           set of strictly stable linear control systems

      Abstract: The infinite time reachable set of a strictly stable linear control system is the Hausdorff limit of the finite time reachable set of the origin as time tends to infinity. By definition, it encodes useful information on the long-term behavior of the control system. Its characterization as a limit set gives rise to numerical methods for its computation that are based on forward iteration of approximate finite time reachable sets. These methods tend to be computationally expensive, because they essentially perform a Minkowski sum in every single forward step. We develop a new approach to computing the infinite time reachable set that is based on the invariance properties of the control system and the desired set. These allow us to characterize a polyhedral outer approximation as the unique solution to a linear program with constraints that incorporate the system dynamics. In particular, this approach does not rely on forward iteration of finite time reachable sets.
      PubDate: 2022-12-02
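The per-step cost of the forward-iteration baseline described above can be made concrete with support functions: for \(x^+ = Ax + Bu\) with a box input set, the support function of the N-step reachable set is a sum with one term per step. A minimal sketch under these assumptions (box input of radius `u_bound`; this illustrates the baseline, not the paper's LP method):

```python
import numpy as np

def support_reachable(A, B, d, u_bound, steps):
    """Support function, in direction d, of the `steps`-step reachable set of
    x+ = A x + B u from the origin with inputs in the box |u_i| <= u_bound:
    h(d) = sum_k u_bound * ||B^T (A^T)^k d||_1."""
    val, v = 0.0, np.asarray(d, dtype=float)
    for _ in range(steps):
        val += u_bound * float(np.abs(B.T @ v).sum())  # support of A^k B U
        v = A.T @ v                                    # next power of A^T
    return val
```

For strictly stable A the sum converges, and its limit describes the infinite time reachable set; every extra step adds one more term, which is the accumulating cost the LP approach avoids.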
       
  • A non-convex piecewise quadratic approximation of $$\ell _{0}$$
           regularization: theory and accelerated algorithm

      Abstract: Non-convex regularization has been recognized as an especially important approach in recent studies to promote sparsity. In this paper, we study the non-convex piecewise quadratic approximation (PQA) regularization for sparse solutions of the linear inverse problem. It is shown that exact recovery of sparse signals and stable recovery of compressible signals are possible through a local optimum of this regularization. After developing a thresholding representation theory for PQA regularization, we propose an iterative PQA thresholding algorithm (PQA algorithm) to solve this problem. The PQA algorithm converges to a local minimizer of the regularization, with an eventually linear convergence rate. Furthermore, we adopt the idea of the accelerated gradient method to design the accelerated iterative PQA thresholding algorithm (APQA algorithm), which is also linearly convergent, but with a faster convergence rate. Finally, we carry out a series of numerical experiments to assess the performance of both algorithms for PQA regularization. The results show that PQA regularization outperforms \(\ell _1\) and \(\ell _{1/2}\) regularizations in terms of accuracy and sparsity, while the APQA algorithm is demonstrated to be significantly better than the PQA algorithm.
      PubDate: 2022-12-02
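The PQA thresholding operator itself is not reproduced here, but the iterative-thresholding template it instantiates can be illustrated with the classical \(\ell_1\) case (soft thresholding), which the abstract uses as a comparison baseline. A minimal sketch:

```python
import numpy as np

def ista(A, b, lam, step, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

A PQA-type method would replace the soft-threshold line with the thresholding operator derived from the piecewise quadratic penalty; the surrounding loop is unchanged.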
       
  • Discretization and global optimization for mixed integer bilinear
           programming

      Abstract: We consider global optimization of mixed-integer bilinear programs (MIBLP) using discretization-based mixed-integer linear programming (MILP) relaxations. We start from the widely used radix-based discretization formulation (called R-formulation in this paper), where the base R may be any natural number, but we do not require the discretization level to be a power of R. We prove the conditions under which the R-formulation is locally sharp, and then propose an \(R^+\)-formulation that is always locally sharp. We also propose an H-formulation that allows multiple bases and prove that it is also always locally sharp. We develop a global optimization algorithm with adaptive discretization (GOAD) where the discretization level of each variable is determined according to the solution of previously solved MILP relaxations. The computational study shows the computational advantage of GOAD over the general-purpose global solvers BARON and SCIP.
      PubDate: 2022-12-01
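The radix encoding behind such R-formulations can be shown in a few lines: a discretization level n is written over K digits in base R, each digit an integer in {0, ..., R-1}. A sketch of the digit decomposition (illustration of the encoding only, not the paper's MILP):

```python
def radix_digits(n, R, K):
    """Digits d_0..d_{K-1} of n in base R, so that n = sum_k d_k * R**k."""
    digits = []
    for _ in range(K):
        digits.append(n % R)  # least significant digit first
        n //= R
    return digits
```

In an R-formulation, a continuous variable x on [lb, ub] discretized with step h becomes lb + h * sum_k d_k R^k, so a bilinear product x*y reduces to products of y with bounded integer digits, which MILP machinery can linearize.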
       
  • Convergence and rate analysis of a proximal linearized ADMM for nonconvex
           nonsmooth optimization

      Abstract: In this paper, we consider a proximal linearized alternating direction method of multipliers, or PL-ADMM, for solving linearly constrained nonconvex and possibly nonsmooth optimization problems. The algorithm is generalized by using variable metric proximal terms in the primal updates and an over-relaxation step in the multiplier update. Extended results based on the augmented Lagrangian, including subgradient bounds, limiting continuity, descent and monotonicity properties, are established. We prove that the PL-ADMM sequence is bounded. Under the powerful Kurdyka-Łojasiewicz inequality we show that the PL-ADMM sequence has finite length and thus converges, and we derive its convergence rates.
      PubDate: 2022-12-01
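The plain convex ADMM template that PL-ADMM generalizes can be sketched on a one-dimensional toy problem; the proximal linearization, variable metrics and over-relaxation of the paper are omitted here:

```python
def admm_lasso_1d(b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5*(x - b)**2 + lam*|z|  s.t.  x = z."""
    soft = lambda t, k: max(abs(t) - k, 0.0) * (1.0 if t >= 0 else -1.0)
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # x-update: quadratic prox
        z = soft(x + u, lam / rho)             # z-update: soft threshold
        u += x - z                             # dual update on x = z
    return z
```

Each primal update is a prox step on one block with the other fixed; PL-ADMM replaces these exact prox steps with linearized, metric-weighted ones so that nonconvex, nonsmooth blocks remain tractable.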
       
  • Tight bounds on the maximal perimeter and the maximal width of convex
           small polygons

      Abstract: A small polygon is a polygon of unit diameter. The maximal perimeter and the maximal width of a convex small polygon with \(n=2^s\) vertices are not known when \(s \ge 4\). In this paper, we construct a family of convex small n-gons, \(n=2^s\) and \(s\ge 3\), and show that the perimeters and the widths obtained cannot be improved for large n by more than \(a/n^6\) and \(b/n^4\) respectively, for certain positive constants a and b. In addition, assuming that a conjecture of Mossinghoff is true, we formulate the maximal perimeter problem as a nonlinear optimization problem involving trigonometric functions and, for \(n=2^s\) with \(3 \le s\le 7\), we provide global optimal solutions.
      PubDate: 2022-12-01
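For context, the classical upper bound on the perimeter of a convex small n-gon (due to Reinhardt) is \(2n\sin(\pi/(2n))\), which tends to \(\pi\) as n grows; constructions such as those described above approach this bound for \(n = 2^s\). A quick numeric check of the bound's behavior:

```python
import math

def reinhardt_bound(n):
    """Classical upper bound 2*n*sin(pi/(2*n)) on the perimeter of a
    convex polygon with n vertices and unit diameter."""
    return 2.0 * n * math.sin(math.pi / (2.0 * n))
```

The bound increases with n and converges to \(\pi\), the perimeter of the Reuleaux-like limit of unit diameter.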
       
  • A genetic algorithm with a self-reproduction operator to solve systems of
           nonlinear equations

      Abstract: A genetic algorithm for solving systems of nonlinear equations that uses a self-reproduction operator based on residual approaches is presented and analyzed. To ensure convergence, the elitist model is used, and a convergence analysis is given. With the aim of showing the advantages of the proposed genetic algorithm, an extensive set of numerical experiments with standard test problems and some specific applications is reported.
      PubDate: 2022-12-01
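A toy version of a residual-based elitist GA for F(x) = 0 might look like the sketch below (illustration only: tournament selection, blend crossover and Gaussian mutation are generic choices, and the paper's self-reproduction operator is not reproduced):

```python
import random

def ga_solve(F, dim, bounds, pop_size=60, gens=300, seed=0):
    """Elitist GA minimizing the squared residual of the system F(x) = 0."""
    rng = random.Random(seed)
    lo, hi = bounds
    fit = lambda x: sum(r * r for r in F(x))            # residual-based fitness
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fit)
    for _ in range(gens):
        new = [best[:]]                                 # elitism keeps the best
        while len(new) < pop_size:
            a = min(rng.sample(pop, 3), key=fit)        # tournament selection
            b = min(rng.sample(pop, 3), key=fit)
            child = [ai + rng.random() * (bi - ai) for ai, bi in zip(a, b)]
            new.append([c + rng.gauss(0.0, 0.1) for c in child])  # mutation
        pop = new
        best = min(pop + [best], key=fit)               # never lose the best point
    return best
```

The elitist line is what the convergence analysis of such methods typically leans on: the best residual found is monotonically nonincreasing across generations.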
       
  • Kernel density estimation based distributionally robust mean-CVaR
           portfolio optimization

      Abstract: In this paper, a distributionally robust mean-CVaR portfolio optimization model based on weighted kernel density estimation (KDE) is investigated, where the weighted KDE is used to approximate the continuous probability density function (PDF) of the portfolio loss and to compute the corresponding approximate Conditional Value-at-Risk (CVaR). The distributional uncertainty set (DUS) is defined indirectly by imposing a \(\phi \)-divergence constraint on the weights in the weighted KDE, so that the corresponding infinite-dimensional space of PDFs is converted into the finite-dimensional space of weights. This makes the corresponding distributionally robust optimization (DRO) problem computationally tractable. We also prove that the optimal value and solution set of the KDE-based DRO problem converge to those of the portfolio optimization problem under the true distribution. Preliminary empirical test results show that the proposed model is meaningful.
      PubDate: 2022-12-01
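The weighted sample-based CVaR computation underlying such models can be sketched as a plain weighted empirical CVaR (the KDE smoothing and the DRO layer over the weights are omitted):

```python
import numpy as np

def cvar(losses, alpha, weights=None):
    """Weighted empirical CVaR_alpha: probability-weighted mean of the worst
    (1 - alpha) tail of the loss distribution. `weights` must sum to 1."""
    losses = np.asarray(losses, dtype=float)
    if weights is None:
        weights = np.full(losses.size, 1.0 / losses.size)
    order = np.argsort(losses)[::-1]              # sort losses, worst first
    l, w = losses[order], np.asarray(weights, dtype=float)[order]
    tail = 1.0 - alpha
    cum = np.cumsum(w)
    take = np.clip(tail - (cum - w), 0.0, w)      # tail mass taken per sample
    return float(np.dot(take, l) / tail)
```

Robustifying over a \(\phi\)-divergence ball then means maximizing this quantity over the weight vector `weights`, which is the finite-dimensional problem the abstract refers to.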
       
  • Compact mixed-integer programming formulations in quadratic optimization

      Abstract: We present a technique for producing valid dual bounds for nonconvex quadratic optimization problems. The approach leverages an elegant piecewise linear approximation for univariate quadratic functions due to Yarotsky (Neural Netw 94:103–114, 2017), formulating this (simple) approximation using mixed-integer programming (MIP). Notably, the number of constraints, binary variables, and auxiliary continuous variables used in this formulation grows logarithmically in the approximation error. Combining this with a diagonal perturbation technique to convert a nonseparable quadratic function into a separable one, we present a mixed-integer convex quadratic relaxation for nonconvex quadratic optimization problems. We study the strength (or sharpness) of our formulation and the tightness of its approximation. Further, we show that our formulation represents feasible points via a Gray code. We close with computational results on problems with quadratic objectives and/or constraints, showing that our proposed method (i) across the board outperforms existing MIP relaxations from the literature, and (ii) on hard instances produces better bounds than exact solvers within a fixed time budget.
      PubDate: 2022-12-01
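Yarotsky's construction approximates \(t^2\) on [0, 1] by subtracting scaled compositions of a "hat" function; each extra level quarters the error, which is why formulation size can grow only logarithmically in the target accuracy. A minimal sketch of the approximation itself:

```python
def sq_approx(x, depth):
    """Yarotsky's piecewise linear approximation of x**2 on [0, 1]:
    x**2 ~= x - sum_{s=1..depth} g_s(x) / 4**s, where g_s is the s-fold
    composition of the hat function; the error is at most 4**(-depth-1)."""
    hat = lambda t: 2.0 * t if t <= 0.5 else 2.0 * (1.0 - t)
    approx, g = x, x
    for s in range(1, depth + 1):
        g = hat(g)                 # one more composition of the hat function
        approx -= g / 4.0 ** s
    return approx
```

The approximation is exact at the dyadic breakpoints k/2^depth, which is what a MIP formulation of the piecewise linear pieces encodes with binary variables.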
       
  • On the maxima of Motzkin–Straus programs and cliques of graphs

      Abstract: In this paper, we establish a connection between the local maximizers (global maximizers) of a Motzkin–Straus quadratic program and a specific type of regular multipartite cliques. Our work extends a remarkable connection between the maximum cliques and the global maximizers of the Motzkin–Straus quadratic program. This connection and its extensions can be successfully employed in optimization to provide heuristics for finding maximal cliques in graphs. We provide two counterexamples to results from previous work about the global and local maximizers of the Motzkin–Straus quadratic program. We then amend the previous theorems by introducing a new structure. We also answer two questions raised by Pelillo and Jagota about the maxima of the Motzkin–Straus quadratic program.
      PubDate: 2022-12-01
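The Motzkin–Straus theorem states that the maximum of \(x^\top A x\) over the standard simplex equals \(1 - 1/\omega(G)\), attained at the point uniform on a maximum clique (A the adjacency matrix). A small check of that value:

```python
def ms_value(adj, support):
    """Motzkin-Straus objective x^T A x at the point that puts weight 1/k on
    each of the k vertices in `support` and 0 elsewhere."""
    k = len(support)
    return sum(adj[i][j] for i in support for j in support) / float(k * k)
```

For a clique of size k the uniform point yields k(k-1)/k^2 = 1 - 1/k, so evaluating on candidate supports gives the heuristic link to cliques that the abstract mentions.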
       
  • A generalized block-iterative projection method for the common fixed point
           problem induced by cutters

      Abstract: The block-iterative projections (BIP) method of Aharoni and Censor [Block-iterative projection methods for parallel computation of solutions to convex feasibility problems, Linear Algebra and its Applications 120, (1989), 165–175] is an iterative process for finding asymptotically a point in the nonempty intersection of a family of closed convex subsets. It employs orthogonal projections onto the individual subsets in an algorithmic regime that uses “blocks” of operators and has great flexibility in constructing specific algorithms from it. We extend this algorithmic scheme to handle a family of continuous cutter operators and to find a common fixed point of them. Since the family of continuous cutters includes several important specific operators, our generalized scheme, which ensures global convergence and retains the flexibility of BIP, can handle, in particular, metric (orthogonal) projectors and continuous subgradient projections, which are very important in applications. We also allow a certain kind of adaptive perturbations to be included, and along the way we derive a perturbed Fejér monotonicity lemma which is of independent interest.
      PubDate: 2022-12-01
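A sequential special case of such projection schemes (one halfspace at a time, without blocks, weights or general cutters) can be sketched as:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0.0 else x - (viol / (a @ a)) * a

def cyclic_projections(x0, halfspaces, sweeps=100):
    """Cyclic orthogonal projections onto an intersection of halfspaces,
    a sequential special case of block-iterative projection schemes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for a, b in halfspaces:
            x = project_halfspace(x, np.asarray(a, dtype=float), float(b))
    return x
```

BIP generalizes this by projecting onto whole blocks of sets in parallel and combining the results; the cutter extension replaces the metric projectors with more general operators.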
       
  • Alternative regularizations for Outer-Approximation algorithms for convex
           MINLP

      Abstract: In this work, we extend the regularization framework from Kronqvist et al. (Math Program 180(1):285–310, 2020) by incorporating several new regularization functions and develop a regularized single-tree search method for solving convex mixed-integer nonlinear programming (MINLP) problems. We propose a set of regularization functions based on distance metrics and Lagrangean approximations, used in the projection problem for finding new integer combinations to be used within the Outer-Approximation (OA) method. The new approach, called Regularized Outer-Approximation (ROA), has been implemented as part of the open-source Mixed-integer nonlinear decomposition toolbox for Pyomo—MindtPy. We compare the OA method with seven regularization function alternatives for ROA. Moreover, we extend the LP/NLP Branch and Bound method proposed by Quesada and Grossmann (Comput Chem Eng 16(10–11):937–947, 1992) to include regularization in an algorithm denoted RLP/NLP. We provide convergence guarantees for both ROA and RLP/NLP. Finally, we perform an extensive computational experiment considering all convex MINLP problems in the benchmark library MINLPLib. The computational results show clear advantages of using regularization combined with the OA method.
      PubDate: 2022-12-01
       
  • Two-step inertial Bregman alternating minimization algorithm for nonconvex
           and nonsmooth problems

      Abstract: In this paper, we propose an algorithm combining the Bregman alternating minimization algorithm with a two-step inertial force for solving a minimization problem composed of two nonsmooth functions and a smooth one in the absence of convexity. For solving nonconvex and nonsmooth problems, we give an abstract convergence theorem for general descent methods satisfying a sufficient decrease assumption and allowing a relative error tolerance. Our result holds under the assumption that the objective function satisfies the Kurdyka–Łojasiewicz inequality. The proposed algorithm is shown to satisfy the requirements of our abstract convergence theorem, and its convergence is obtained provided an appropriate regularization of the objective function satisfies the Kurdyka–Łojasiewicz inequality. Finally, numerical results are reported to show the effectiveness of the proposed algorithm.
      PubDate: 2022-12-01
       
  • The restricted inverse optimal value problem on shortest path under
           $$l_1$$ norm on trees

      Abstract: We consider the restricted inverse optimal value problem on shortest path under weighted \(l_1\) norm on trees (RIOVSPT\(_1\)). It aims at adjusting some edge weights to minimize the total cost under weighted \(l_1\) norm on the premise that the length of the shortest root-leaf path of the tree is lower-bounded by a given value D, which is just the restriction on the length of a given root-leaf path \(P_0\). If we ignore the restriction on the path \(P_0\), then we obtain the minimum cost shortest path interdiction problem on trees (MCSPIT\(_1\)). We analyze some properties of the problem (RIOVSPT\(_1\)) and explore the relationship of the optimal solutions between (MCSPIT\(_1\)) and (RIOVSPT\(_1\)). We first take the optimal solution of the problem (MCSPIT\(_1\)) as an initial infeasible solution of the problem (RIOVSPT\(_1\)). Then we consider a slack problem (RIOVSPT\(_1^s\)), where the length of the path \(P_0\) is greater than D. We obtain feasible solutions gradually approaching an optimal solution of the problem (RIOVSPT\(_1\)) by solving a series of subproblems (RIOVSPT\(_1^i\)), each of which determines the only weight-decreasing edge on the path \(P_0\) with the minimum cost so that the length of the shortest root-leaf path is no less than D. Each subproblem can be solved by searching for a minimum cost cut in O(n) time. The iterations continue until the length of the path \(P_0\) equals D. Consequently, the time complexity of the algorithm is \(O(n^2)\), and we present some numerical experiments to show the efficiency of the algorithm. Additionally, we devise a linear time algorithm for the problem (RIOVSPT\(_{u1}\)) under unit \(l_1\) norm.
      PubDate: 2022-11-30
       
  • Nonconvex sensitivity-based generalized Benders decomposition

      Abstract: This paper considers general separable pseudoconvex optimization problems with continuous complicating variables in which the primal and projected problems are both pseudoconvex problems. A novel decomposition method based on generalized Benders decomposition, named nonconvex sensitivity-based generalized Benders decomposition, is developed and rigorously proved to obtain optimal solutions of the general separable pseudoconvex optimization problems of interest without constructing surrogate models. By the use of a reformulation strategy (introducing an extra equality constraint and constructing several subproblems), the algorithm handles the nonconvexity by direct manipulations of consistent linear Benders cuts, the check of optimality conditions, and the approximation of the feasible region of complicating variables by supporting hyperplanes. The master problems of the new algorithm are always linear programming problems, and the solution of the algorithm contains sensitivity information about the complicating variables. Moreover, the new algorithm can also be used as a tool to check the nonconvexity of an optimization problem. Two cases are given to confirm the validity and applicability of the proposed algorithm.
      PubDate: 2022-11-29
       
  • KKT-based primal-dual exactness conditions for the Shor relaxation

      Abstract: In this work we present some exactness conditions for the Shor relaxation of diagonal (or, more generally, diagonalizable) QCQPs, which extend the conditions introduced in different recent papers on the same topic. It is shown that the Shor relaxation is equivalent to two convex quadratic relaxations. Then, sufficient conditions for the exactness of the relaxations are derived from their KKT systems. It is shown that, in some cases, this derivation extends previous conditions in the literature, which can be viewed as dual conditions since they only involve the Lagrange multipliers appearing in the KKT systems, to primal-dual conditions, which also involve the primal variables appearing in the KKT systems.
      PubDate: 2022-11-29
       
  • Extension of forward-reflected-backward method to non-convex mixed
           variational inequalities

      Abstract: This paper presents a modification of a recently studied forward-reflected-backward splitting method to solve non-convex mixed variational inequalities. We give global convergence results and a nonasymptotic O(1/k) rate of convergence of the proposed method under appropriate conditions, and present some numerical illustrations, one of which is derived from oligopolistic equilibrium problems, to show the efficiency of our proposed method.
      PubDate: 2022-11-23
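The forward-reflected-backward template can be shown on a smooth monotone toy problem (convex case, identity prox, for illustration only; the paper's setting is non-convex):

```python
def frb(F, prox, x0, lam, iters=200):
    """Forward-reflected-backward iteration for 0 in F(x) + dg(x):
    x_{k+1} = prox_{lam*g}(x_k - lam * (2*F(x_k) - F(x_{k-1})))."""
    x_prev = x = x0
    for _ in range(iters):
        x_prev, x = x, prox(x - lam * (2.0 * F(x) - F(x_prev)), lam)
    return x
```

The "reflected" term 2F(x_k) - F(x_{k-1}) reuses the previous operator evaluation, so each iteration needs only one call to F and one prox, which is the method's main appeal over forward-backward-forward schemes.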
       
  • Efficient smooth minorants for global optimization of univariate functions
           with the first derivative satisfying the interval Lipschitz condition

      Abstract: In Sergeyev (Math Program 81(1):127–146, 1998), a smooth piecewise quadratic minorant was proposed for multiextremal functions f(x) with the first derivative \(f'(x)\) satisfying the Lipschitz condition with a constant L, i.e., \(f'(x)\) cannot increase with a slope higher than L or decrease with a slope smaller than \(-L\). This minorant has been successfully applied in several efficient global optimization algorithms and used in engineering applications. In the present paper, it is supposed that the first derivative \(f'(x)\) cannot increase with a slope higher than a constant \(\beta \) or decrease with a slope smaller than \(\alpha \). The interval \([\alpha ,\beta ]\) is called the Lipschitz interval (clearly, in this case the Lipschitz constant is \(L = \max \{ \alpha , \beta \}\)). For this class of functions, smooth piecewise estimators (minorants and majorants) are proposed and applied in global optimization. Both theoretically and experimentally (on 200 randomly generated test problems) it is shown that in cases where \( \alpha \ne \beta \) the new estimators can give a significant improvement over those proposed in Sergeyev (Math Program 81(1):127–146, 1998), for example, in the framework of branch-and-bound global optimization methods.
      PubDate: 2022-11-21
       
  • Generalized damped Newton algorithms in nonsmooth optimization via
           second-order subdifferentials

      Abstract: The paper proposes and develops new globally convergent algorithms of the generalized damped Newton type for solving important classes of nonsmooth optimization problems. These algorithms are based on the theory and calculations of second-order subdifferentials of nonsmooth functions, employing the machinery of second-order variational analysis and generalized differentiation. First we develop a globally superlinearly convergent damped Newton-type algorithm for the class of continuously differentiable functions with Lipschitzian gradients, which are nonsmooth of second order. Then we design such a globally convergent algorithm to solve a structured class of nonsmooth quadratic composite problems with extended-real-valued cost functions, which typically arise in machine learning and statistics. Finally, we present the results of numerical experiments and compare the performance of our main algorithm applied to an important class of Lasso problems with those achieved by other first-order and second-order optimization algorithms.
      PubDate: 2022-11-09
       
  • On piecewise linear approximations of bilinear terms: structural
           comparison of univariate and bivariate mixed-integer programming
           formulations

      Abstract: Bilinear terms naturally appear in many optimization problems. Their inherent non-convexity typically makes them challenging to solve. One approach to tackle this difficulty is to use bivariate piecewise linear approximations for each variable product, which can be represented via mixed-integer linear programming (MIP) formulations. Alternatively, one can reformulate the variable products as a sum of univariate functions. Each univariate function can again be approximated by a piecewise linear function and modelled via an MIP formulation. In the literature, heterogeneous results are reported concerning which approach works better in practice, but little theoretical analysis is provided. We fill this gap by structurally comparing bivariate and univariate approximations with respect to two criteria. First, we compare the number of simplices sufficient for an \(\varepsilon\)-approximation. We derive upper bounds for univariate approximations and compare them to a lower bound for bivariate approximations. We prove that for a small prescribed approximation error \(\varepsilon\), univariate \(\varepsilon\)-approximations require fewer simplices than bivariate \(\varepsilon\)-approximations. The second criterion is the tightness of the continuous relaxations (CR) of the corresponding sharp MIP formulations. Here, we prove that the CR of a bivariate MIP formulation describes the convex hull of a variable product, the so-called McCormick relaxation. In contrast, we show by a volume argument that the CRs corresponding to univariate approximations are strictly looser. This allows us to explain many of the computational effects observed in the literature and to give theoretical evidence on when to use which kind of approximation.
      PubDate: 2022-11-03
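The univariate reformulation referred to above rests on the identity \(xy = \tfrac14\big((x+y)^2 - (x-y)^2\big)\): the bilinear term becomes a difference of univariate squares, each of which can be approximated piecewise linearly on its own axis. As a quick identity check:

```python
def product_via_squares(x, y):
    """x*y rewritten as a combination of univariate functions:
    x*y = ((x + y)**2 - (x - y)**2) / 4."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0
```

After substituting s = x + y and t = x - y, only the univariate functions s**2 and t**2 need piecewise linear models, which is the formulation family the paper compares against bivariate triangulations.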
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 



JournalTOCs © 2009-