Abstract: We consider a class of conditionally well-posed inverse problems characterized by a Hölder estimate of conditional stability on a convex compact set in a Hilbert space. The input data and the operator of the forward problem are available only with errors. We investigate the discretized residual functional constructed according to a general scheme of finite-dimensional approximation. We prove that each of its stationary points that is not too far from the finite-dimensional approximation of the solution to the original inverse problem generates an approximation from a small neighborhood of this solution. The diameter of this neighborhood is estimated in terms of the characteristics of the approximation scheme. This partially removes the need to iterate over local minimizers of the residual functional when implementing the discrete quasi-solution method for the inverse problem. The developed theory is illustrated by numerical examples. PubDate: 2022-09-01

Abstract: In this paper, we present a branch-and-bound based algorithm that generates all nondominated points of a multiobjective integer programming problem with convex objective functions and linear constraints (MOICP). The principle is to solve a single-objective program (P) obtained from the original MOICP by relaxing the integrality constraints. Whenever an integer solution is found during the branching process, a node is created in the search tree for each criterion by adding a cutting plane that locally approximates that criterion, so as to exclude a subset of dominated points. The nodes are traversed according to the depth-first strategy, and the same process is repeated for the resulting programs as for (P). Finally, to illustrate the efficiency of the suggested algorithm, we present an experimental study assessing it on randomly generated quadratic multiobjective integer problems with linear constraints. PubDate: 2022-09-01
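The cuts used in the branching step are first-order linearizations of a convex criterion; since a convex function lies above all of its tangents, such a cut is a global under-estimator and cannot exclude any point with a strictly better criterion value. A minimal sketch of this property, using a hypothetical quadratic criterion (not an instance from the paper):

```python
import numpy as np

# Hypothetical convex criterion f(x) = ||x||^2 (stands in for one convex
# objective of the MOICP); its gradient is 2x.
def f(x):
    return float(x @ x)

def grad_f(x):
    return 2.0 * x

def tangent_cut(x_bar):
    """Return (a, b) such that a @ x + b <= f(x) for all x.

    The affine function a @ x + b is the first-order Taylor expansion of the
    convex f at x_bar, hence a global under-estimator touching f at x_bar.
    """
    a = grad_f(x_bar)
    b = f(x_bar) - a @ x_bar
    return a, float(b)

# Check the under-estimation property on random sample points.
rng = np.random.default_rng(0)
x_bar = np.array([1.0, -2.0])
a, b = tangent_cut(x_bar)
samples = rng.normal(size=(100, 2))
gap = np.array([f(x) - (a @ x + b) for x in samples])   # should be >= 0
```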

Abstract: The simplex covering optimization problem (SCO) models the covering of a simplex by m given balls. It contains the maximin dispersion problem as a special case. In this paper, we prove that (SCO) is NP-hard. We present an enumeration method (EM) that globally solves (SCO) and show that its complexity is strongly polynomial when m is fixed. Numerical experiments demonstrate that EM outperforms CPLEX when m is small. For larger m, we propose an efficient incomplete enumeration method based on linear programming relaxation. PubDate: 2022-09-01

Abstract: In this paper, an open-source solver for mixed-integer nonlinear programming (MINLP) problems is presented. The Supporting Hyperplane Optimization Toolkit (SHOT) combines a dual strategy based on polyhedral outer approximations (POA) with primal heuristics. The POA is achieved by expressing the nonlinear feasible set of the MINLP problem with linearizations obtained with the extended supporting hyperplane (ESH) and extended cutting plane (ECP) algorithms. The dual strategy can be tightly integrated with the mixed-integer programming (MIP) subsolver in a so-called single-tree manner, i.e., only a single MIP optimization problem is solved, with the polyhedral linearizations added as lazy constraints through callbacks in the MIP solver. This enables the MIP solver to reuse the branching tree in each iteration, in contrast to most other POA-based methods. SHOT is available as a COIN-OR open-source project, and it utilizes a flexible task-based structure that makes it easy to extend and modify. It is currently available in GAMS, and can also be used from AMPL, Pyomo and JuMP through its ASL interface. The main functionality and solution strategies implemented in SHOT are described in this paper, and their impact on performance is illustrated through numerical benchmarks on 406 convex MINLP problems from the MINLPLib problem library. Many of the features introduced in SHOT can be utilized in other POA-based solvers as well. To show the overall effectiveness of SHOT, it is also compared to other state-of-the-art solvers on the same benchmark set. PubDate: 2022-09-01
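The core ESH step mentioned above can be sketched in a few lines: given an interior point of the convex feasible set {x : g(x) ≤ 0} and an exterior trial point, a line search locates a boundary point, and linearizing g there yields a supporting hyperplane. The constraint g below is an assumed toy example, not SHOT's internals:

```python
import numpy as np

# Hypothetical convex constraint g(x) <= 0 with g(x) = ||x||^2 - 1.
def g(x):
    return float(x @ x) - 1.0

def grad_g(x):
    return 2.0 * x

def esh_cut(x_int, x_out, tol=1e-10):
    """One extended-supporting-hyperplane step (sketch).

    Bisect on the segment between an interior point (g < 0) and an exterior
    point (g > 0) to locate a boundary point, then linearize g there. The
    resulting hyperplane a @ x + b <= 0 supports the feasible set.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(x_int + mid * (x_out - x_int)) > 0:
            hi = mid
        else:
            lo = mid
    x_b = x_int + lo * (x_out - x_int)   # approximately on the boundary
    a = grad_g(x_b)
    b = g(x_b) - a @ x_b
    return x_b, a, float(b)

x_b, a, b = esh_cut(np.array([0.0, 0.0]), np.array([2.0, 0.0]))
# The cut keeps the interior point feasible and separates the exterior point.
```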

Abstract: Given an objective function (differentiable or nondifferentiable), an important optimization problem is to find its global minimizer, and a natural issue is to identify functions with this global property. In this paper, we study real-valued functions defined on a Hilbert space and provide several sufficient conditions ensuring the existence of a global minimizer of such functions. For a proper lower semicontinuous function, we prove that a global minimizer is guaranteed if the function is bounded below, has the primal-lower-nice property, and satisfies the generalized Palais-Smale condition. This result covers the classical differentiable case, whose proof depends heavily on the global existence of solutions to an ordinary differential equation. Several examples are constructed to show that the global minimizer may fail to exist if any of the three conditions above is dropped. PubDate: 2022-09-01

Abstract: This paper presents a novel technique to compute Lagrangian bounds for nonconvex mixed-integer quadratically constrained quadratic programming problems with a separable structure (i.e., separable problems), such as those arising in deterministic equivalent representations of two-stage stochastic programming problems. In general, the nonconvex nature of these models still poses a challenge to the available solvers, which do not consistently perform well on larger-scale instances. Therefore, we propose an appealing alternative algorithm that overcomes these computational performance issues. Our novel technique, named the p-Lagrangian decomposition, is a decomposition method that combines Lagrangian decomposition with mixed-integer programming-based relaxations. These relaxations are obtained using the reformulated normalised multiparametric disaggregation technique and can be made arbitrarily precise by means of a precision parameter p. We provide a technical analysis showing the convergent behaviour of the approach as the approximation is made increasingly precise. We observe that the proposed method yields significant reductions in computational time when compared with previously proposed techniques in the literature and with the direct employment of a commercial solver. Moreover, our computational experiments show that the employment of a simple heuristic can recover solutions with small duality gaps. PubDate: 2022-09-01

Abstract: Sparse tensor best rank-1 approximation (BR1Approx), a sparsity generalization of the dense tensor BR1Approx and a higher-order extension of the sparse matrix BR1Approx, is one of the most important problems in sparse tensor decomposition and related problems arising from statistics and machine learning. By exploiting the multilinearity as well as the sparsity structure of the problem, four polynomial-time approximation algorithms are proposed, which are easy to implement, of low computational complexity, and can serve as initialization procedures for iterative algorithms. In addition, theoretically guaranteed approximation lower bounds are derived for all the algorithms. We provide numerical experiments on synthetic and real data to illustrate the efficiency and effectiveness of the proposed algorithms; in particular, serving as initialization procedures, the approximation algorithms can help improve the solution quality of iterative algorithms while reducing the computational time. PubDate: 2022-09-01

Abstract: Friedlander, Macêdo, and Pong recently introduced the projected polar proximal point algorithm (P4A) for solving optimization problems by using the closed perspective transforms of convex objectives. We analyse a generalization (GP4A) that replaces the closed perspective transform with a more general closed gauge. We decompose GP4A into the iterative application of two separate operators and analyse it as a splitting method. By showing that GP4A and its under-relaxations exhibit global convergence whenever a fixed point exists, we obtain convergence guarantees for P4A by letting the gauge specialize to the closed perspective transform of a convex function. We then provide easy-to-verify sufficient conditions for the existence of fixed points of GP4A, using the Minkowski function representation of the gauge. Conveniently, the approach reveals that the global minimizers of the objective function of P4A form an exposed face of the dilated fundamental set of the closed perspective transform. PubDate: 2022-09-01

Abstract: We consider higher-order conditions and sensitivity analysis for solutions to equilibrium problems. The conditions for solutions are expressed in terms of quasi-contingent derivatives and involve higher-order complementarity slackness for both the objective and the constraints, under Hölder metric subregularity assumptions. For sensitivity analysis, a formula for this type of derivative of the solution map to a parametric equilibrium problem is established in terms of the same types of derivatives of the data of the problem. Here, the concepts of a quasi-contingent derivative and of critical directions are new. We consider open-cone solutions and proper solutions. We also study an important and typical special case: weak solutions of a vector minimization problem with mixed constraints. The results are significantly new and improve on recent corresponding results in many respects. PubDate: 2022-09-01

Abstract: In this note we study multiple-ratio fractional 0–1 programs, a broad class of \(\mathcal {NP}\) -hard combinatorial optimization problems. In particular, under some relatively mild assumptions we provide a complete characterization of the conditions that ensure that a single-ratio function is submodular. We then illustrate our theoretical results with the assortment optimization and facility location problems, and discuss practical situations that guarantee submodularity in the considered application settings. In such cases, near-optimal solutions for multiple-ratio fractional 0–1 programs can be found via simple greedy algorithms. PubDate: 2022-09-01
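The "simple greedy algorithm" for a monotone submodular objective repeatedly adds the element with the largest marginal gain; for a cardinality constraint this is the classical (1 - 1/e)-approximation. A minimal sketch using a toy coverage function (coverage is monotone submodular; the ground set and data are illustrative, not from the paper):

```python
# Toy coverage data: element i covers the set cover[i].
cover = {
    0: {1, 2, 3},
    1: {3, 4},
    2: {4, 5, 6},
    3: {1, 6},
}

def f(S):
    """Coverage function: number of items covered by the selection S."""
    covered = set()
    for i in S:
        covered |= cover[i]
    return len(covered)

def greedy(k):
    """Pick up to k elements, each time adding the largest marginal gain."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0
        for e in sorted(set(cover) - S):     # sorted for determinism
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:                      # no element improves coverage
            break
        S.add(best)
    return S

S = greedy(2)   # picks element 0 (gain 3), then element 2 (gain 3)
```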

Abstract: In this paper, an active set strategy is presented to address the ill-conditioning of smoothing methods for solving finite linear minimax problems. Based on the first-order optimality conditions, the concept of a strongly active set composed of a subset of the active indices is introduced. In the active set strategy, a strongly active set is obtained by solving a linear system or a linear programming problem; an optimal solution, together with its active set and Lagrange multipliers, is then computed by an iterative process. A hybrid algorithm combining a smoothing algorithm and the active set strategy is proposed for solving finite linear minimax problems, in which an approximate solution is obtained by the smoothing algorithm and an optimal solution is then computed by the active set strategy. Convergence of the active set strategy and of the hybrid algorithm is established for general finite linear minimax problems. Preliminary numerical experiments show that the active set strategy and the hybrid algorithm are effective and robust, and that the active set strategy can effectively address the ill-conditioning of smoothing methods for solving general finite linear minimax problems. PubDate: 2022-08-03
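For reference, a finite linear minimax problem min_x max_i (a_i·x + b_i) can always be restated as the linear program min t subject to a_i·x + b_i ≤ t, which is the standard epigraph reformulation underlying such methods. A minimal sketch on an assumed toy instance (max(x, 1 - x), minimized at x = 0.5):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: minimize max(a_i @ x + b_i) over x, with a_i as rows.
A = np.array([[1.0], [-1.0]])    # n = 1 variable, m = 2 pieces
b = np.array([0.0, 1.0])         # pieces: x and -x + 1

m, n = A.shape
# Decision variables (x, t); objective: minimize t.
c = np.zeros(n + 1)
c[-1] = 1.0
# Constraints a_i @ x - t <= -b_i, i.e. a_i @ x + b_i <= t.
A_ub = np.hstack([A, -np.ones((m, 1))])
res = linprog(c, A_ub=A_ub, b_ub=-b, bounds=[(None, None)] * (n + 1))
x_opt, t_opt = res.x[:n], res.x[-1]   # expected: x = 0.5, value 0.5
```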

Abstract: A global optimization approach for solving non-monotone equilibrium problems (EPs) is proposed. The class of (regularized) gap functions is used to reformulate any EP as a constrained global optimization program, and some bounds on the Lipschitz constant of such functions are provided. The proposed global optimization approach combines an improved version of the DIRECT algorithm, which exploits local bounds on the Lipschitz constant of the objective function, with local minimizations. Unlike most existing solution methods for EPs, no monotonicity-type condition is assumed in this paper. Preliminary numerical results on several classes of EPs show the effectiveness of the approach. PubDate: 2022-08-01
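The role a Lipschitz bound plays in such global schemes is easiest to see in one dimension: if L bounds the slope of the objective, evaluating it on a grid of spacing h certifies a global lower bound of (grid minimum) - Lh/2. A minimal sketch with an assumed toy objective (not a gap function from the paper):

```python
import numpy as np

# Toy objective: f(x) = sin(3x) + 0.1x on [-2, 2]; |f'(x)| <= 3.1, so L = 3.1
# is a valid Lipschitz constant.
def f(x):
    return np.sin(3.0 * x) + 0.1 * x

lo, hi, L = -2.0, 2.0, 3.1
grid = np.linspace(lo, hi, 2001)
h = grid[1] - grid[0]
best = f(grid).min()
# Any point is within h/2 of a grid point, so the global minimum on [lo, hi]
# cannot lie more than L*h/2 below the best grid value.
lower_bound = best - L * h / 2.0
```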

Abstract: The solution to a biobjective optimization problem is composed of a collection of trade-off solutions called the Pareto set. Based on a computer-assisted proof methodology, the present work studies the question of certifying numerically that a conjectured set is close to the Pareto set. Two situations are considered. First, we analyze the case where the conjectured set is directly provided: one objective is explicitly given as a function of the other. Second, we analyze the situation where the conjectured set is parameterized: both objectives are explicitly given as functions of a parameter. In both cases, we formulate the question of verifying that the conjectured set is close to the Pareto set as a global optimization problem. These situations are illustrated on a new class of extremal problems over convex polygons in the plane, where the objectives are to maximize the area and the perimeter of a polygon with a fixed diameter, for a given number of sides. PubDate: 2022-08-01
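As a point of comparison for what a Pareto set is, the nondominated subset of a finite sample of objective vectors can be extracted by a simple filter (both objectives minimized here; the sample data is illustrative, not the polygon instance from the paper):

```python
def pareto_filter(points):
    """Keep the points not dominated by any other point (minimization).

    A point p is dominated if some other point q is at least as good in both
    objectives (and q != p).
    """
    nd = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            nd.append(p)
    return nd

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_filter(pts)   # (3, 3) is dominated by (2, 2)
```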

Abstract: We consider a bilevel continuous knapsack problem where the leader controls the capacity of the knapsack and the follower chooses an optimal packing according to his own profits, which may differ from those of the leader. To this bilevel problem, we add uncertainty in a natural way, assuming that the leader does not have full knowledge about the follower’s problem. More precisely, adopting the robust optimization approach and assuming that the follower’s profits belong to a given uncertainty set, our aim is to compute a solution that optimizes the worst-case follower’s reaction from the leader’s perspective. By investigating the complexity of this problem with respect to different types of uncertainty sets, we make first steps towards better understanding the combination of bilevel optimization and robust combinatorial optimization. We show that the problem can be solved in polynomial time for both discrete and interval uncertainty, but that the same problem becomes NP-hard when each coefficient can independently assume only a finite number of values. In particular, this demonstrates that replacing uncertainty sets by their convex hulls may change the problem significantly, in contrast to the situation in classical single-level robust optimization. For general polytopal uncertainty, the problem again turns out to be NP-hard, and the same is true for ellipsoidal uncertainty even in the uncorrelated case. All presented hardness results already apply to the evaluation of the leader’s objective function. PubDate: 2022-08-01
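The follower's problem here is the classical continuous knapsack, solved greedily by profit density; this is the subproblem the leader must anticipate for each profit scenario. A minimal sketch of the follower's reaction only (item data is illustrative; the leader's worst-case layer is not shown):

```python
def follower(capacity, profits, weights):
    """Continuous knapsack: pack items fractionally by decreasing
    profit-to-weight ratio until the capacity is exhausted."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total, x = 0.0, [0.0] * len(profits)
    for i in order:
        take = min(weights[i], capacity)   # amount of weight to pack
        x[i] = take / weights[i]           # packed fraction of item i
        total += profits[i] * x[i]
        capacity -= take
        if capacity <= 0:
            break
    return total, x

# Densities 6, 5, 4: items 0 and 1 fit whole, item 2 at fraction 2/3.
val, x = follower(5.0, [6.0, 10.0, 12.0], [1.0, 2.0, 3.0])
```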

Abstract: We consider the asymptotic behavior of the infimal values and the statistical estimators of the solutions to a general stochastic optimization problem. We establish the epi-convergence of the performance criteria of approximate problems when the approximate probability laws, obtained by sampling the values of the random variable, converge weakly and tightly. Based on this key convergence, consistency properties of the infimal values and the estimators of the solutions to the approximate problems are obtained. Applying these results and properties of epi/hypo-convergence of bifunctions to Lagrangians of stochastic mathematical programs, we obtain the consistency of the saddle points of approximate Lagrangians and hence the consistency of the optimal values and the estimators of the solutions of approximate mathematical programs and their dual programs. PubDate: 2022-08-01
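The consistency phenomenon described above can be seen in the simplest sample-average approximation: for min_x E[(x - ξ)²], replacing the law of ξ by the empirical law of N samples gives the sample mean as the estimated solution, which converges to the true minimizer E[ξ]. A minimal numerical sketch (the distribution is an assumed toy example):

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 1.5   # the true minimizer of E[(x - xi)^2] is E[xi]

def saa_minimizer(N):
    """Minimizer of the sample-average objective (1/N) sum (x - xi_k)^2,
    which is simply the sample mean of the drawn scenarios."""
    xi = rng.normal(loc=true_mean, scale=1.0, size=N)
    return xi.mean()

small, large = saa_minimizer(10), saa_minimizer(100_000)
# As N grows, the estimator concentrates around the true minimizer 1.5.
```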

Abstract: Multiple instance learning (MIL) is a variation of supervised learning in which the data consist of labeled bags, each containing a set of instances. Unlike traditional supervised learning, labels are not known for the individual instances in MIL. Existing approaches in the literature make use of certain assumptions regarding the instance labels and propose mixed-integer quadratic programs, which introduce computational difficulties. In this study, we present a novel quadratic programming (QP)-based approach to classify bags. The solution of our QP formulation links the instance-level contributions to the bag label estimates, and provides a linear bag classifier along with a decision threshold. Our approach imposes no additional constraints relating instance labels to bag labels and can be adapted to learning applications with different MIL assumptions. Unlike existing specialized heuristic approaches to solving previous MIL formulations, our QP models can be solved directly to optimality using any commercial QP solver. Our computational experiments show that the proposed QP formulation is efficient in terms of solution time, overcoming a main drawback of previous optimization algorithms for MIL. We demonstrate the classification success of our approach compared to the state-of-the-art methods on a wide range of real-world datasets. PubDate: 2022-08-01

Abstract: In this paper, we develop an interactive algorithm to support a decision maker in finding a most preferred lightly robust efficient solution when solving uncertain multiobjective optimization problems. It extends the interactive NIMBUS method. The main idea underlying the designed algorithm, called LR-NIMBUS, is to ask the decision maker for a most acceptable (typical) scenario, find an efficient solution for this scenario that satisfies the decision maker, and then use the derived efficient solution to generate a lightly robust efficient solution. The preferences of the decision maker are incorporated by classifying the objective functions. A lightly robust efficient solution is generated by solving an augmented weighted achievement scalarizing function. We establish the tractability of the algorithm for important classes of objective functions and uncertainty sets. As an illustrative example, we model and solve a robust optimization problem in stock investment (portfolio selection). PubDate: 2022-08-01
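A common form of the augmented weighted achievement scalarizing function (used here for minimized objectives) is s(f) = max_i w_i (f_i - z_i) + ρ Σ_i w_i (f_i - z_i), where z is a reference point and ρ > 0 is a small augmentation term that rules out weakly efficient solutions. A minimal sketch with illustrative values (the exact scalarization in LR-NIMBUS may differ in detail):

```python
import numpy as np

def asf(f, z, w, rho=1e-4):
    """Augmented weighted achievement scalarizing function (minimization)."""
    d = w * (np.asarray(f) - np.asarray(z))
    return d.max() + rho * d.sum()

z = np.array([0.0, 0.0])          # reference (aspiration) point
w = np.array([1.0, 1.0])          # preference weights
candidates = [np.array([1.0, 3.0]),
              np.array([2.0, 2.0]),
              np.array([3.0, 1.0])]
# The balanced point (2, 2) has the smallest max-deviation from z.
best = min(candidates, key=lambda f: asf(f, z, w))
```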

Abstract: In this paper, we replace the \(\ell _0\) norm with a variation of the generalized Gaussian function \(\Phi _\alpha (x)\) in sparse signal recovery. We first show that \(\Phi _\alpha (x)\) is a type of non-convex sparsity-promoting function and demonstrate the equivalence among the three minimization models \((\mathrm{P}_0):\min \limits _{x\in {\mathbb {R}}^n}\Vert x\Vert _0\) subject to \( Ax=b\) , \({\mathrm{(E}_\alpha )}:\min \limits _{x\in {\mathbb {R}}^n}\Phi _\alpha (x)\) subject to \(Ax=b\) and \((\mathrm{E}^{\lambda }_\alpha ):\min \limits _{x\in {\mathbb {R}}^n}\frac{1}{2}\Vert Ax-b\Vert ^{2}_{2}+\lambda \Phi _\alpha (x).\) The established equivalence theorems show that \((\mathrm{P}_0)\) can be completely solved via the continuous minimization \((\mathrm{E}_\alpha )\) for suitable values of \(\alpha \) , while the latter is computable by solving the regularized minimization \((\mathrm{E}^{\lambda }_\alpha )\) under certain conditions. Second, based on the DC algorithm and the iterative soft thresholding algorithm, an algorithm for the regularized minimization \((\mathrm{E}^{\lambda }_\alpha )\) , called the DCS algorithm, is given. Finally, extensive simulations are conducted to compare this algorithm with two classical algorithms, the half algorithm and the soft algorithm, and the experimental results show that the DCS algorithm performs well in sparse signal recovery. PubDate: 2022-08-01
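The soft-thresholding operator is the building block of the iterative scheme mentioned above; shown here in its classical role inside ISTA for \(\ell_1\)-regularized least squares (the paper's DCS algorithm combines such steps with a DC decomposition of its nonconvex penalty, which is not reproduced here; the data below is a synthetic toy instance):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, iters=3000):
    """Iterative soft-thresholding for min 0.5||Ax - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Synthetic 3-sparse recovery from 40 Gaussian measurements in R^100.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```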

Abstract: Based on a class of smoothing approximations to the projection function onto the second-order cone, an approximate lower order penalty approach for solving second-order cone linear complementarity problems (SOCLCPs) is proposed, and four kinds of specific smoothing approximations are considered. In this approach, the SOCLCP is approximated by asymptotic lower order penalty equations with a penalty parameter and a smoothing parameter. When the penalty parameter tends to positive infinity and the smoothing parameter monotonically decreases to zero, we show that the solution sequence of the asymptotic lower order penalty equations converges to the solution of the SOCLCP at an exponential rate under a mild assumption. A corresponding algorithm is constructed, and numerical results are reported to illustrate the feasibility of this approach. The performance profiles of the four specific smoothing approximations are presented, and generalizations of two of the approximations are also investigated. PubDate: 2022-08-01
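The function being smoothed here, the projection onto the second-order cone K = {(x0, x̄) : ||x̄|| ≤ x0}, has a well-known closed form via the spectral decomposition of x. A minimal sketch of the exact (unsmoothed) projection; the paper's specific smoothing approximations are not reproduced:

```python
import numpy as np

def proj_soc(x):
    """Exact projection of x = (x0, xbar) onto the second-order cone
    K = {(x0, xbar) : ||xbar||_2 <= x0}."""
    x0, xbar = x[0], x[1:]
    nx = np.linalg.norm(xbar)
    if nx <= x0:                  # x already lies in K
        return x.copy()
    if nx <= -x0:                 # x lies in the polar cone -K*
        return np.zeros_like(x)
    t = (x0 + nx) / 2.0           # otherwise: project onto the boundary
    return np.concatenate(([t], (t / nx) * xbar))

# Boundary case: x0 = 0, ||xbar|| = 5, so t = 2.5 and xbar is scaled by 1/2.
p = proj_soc(np.array([0.0, 3.0, 4.0]))
```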