Abstract: In this paper, we present a Levenberg–Marquardt algorithm for nonlinear equations where the exact Jacobians are unavailable, but their model approximations can be built in some random fashion. We study the complexity of the algorithm and show that the expected number of iterations needed to obtain a first-order stationary point is bounded by \(O(\epsilon ^{-3})\). PubDate: 2024-07-18
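
For orientation, the sketch below shows the classical Levenberg–Marquardt iteration for nonlinear equations that the paper builds on; it is an illustrative implementation with an exact Jacobian and a simple damping update, not the paper's randomized-model algorithm, and the test system is an invented example.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu0=1.0, tol=1e-8, max_iter=100):
    """Basic LM iteration for F(x) = 0; J returns the (model) Jacobian."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        g = Jx.T @ Fx                      # gradient of 0.5 * ||F(x)||^2
        if np.linalg.norm(g) < tol:
            break
        # LM step: solve (J^T J + mu I) d = -J^T F
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -g)
        if np.linalg.norm(F(x + d)) < np.linalg.norm(Fx):
            x, mu = x + d, mu * 0.5        # accept step, relax damping
        else:
            mu *= 2.0                      # reject step, increase damping
    return x

# Toy system: x^2 - 2 = 0, y^2 - 3 = 0
F = lambda x: np.array([x[0] ** 2 - 2, x[1] ** 2 - 3])
J = lambda x: np.diag([2 * x[0], 2 * x[1]])
root = levenberg_marquardt(F, J, [1.0, 1.0])
```

In the setting of the paper, `J(x)` would be replaced by a randomly built model approximation of the Jacobian.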

Abstract: One of the most significant challenges in logistics today is managing the complexity, uncertainty, and dynamism of the market. External and internal factors strongly influence the efficiency of operations processes. Cross-docking distribution centers are one way of adopting a real-time philosophy. This work provides a Stability Approach (SA) for solving multi-dock truck scheduling problems under truck release-date uncertainty. We consider time-based objectives for optimization and propose two new heuristics to obtain viable solutions. To evaluate the performance of the SA, we develop and analyze two other scheduling policies, namely Without Adjustments and Perfect Information. Extensive computational experiments allow us to compare the SA with these policies and show that the methodology can support managers in their daily cross-docking operations, efficiently handling dynamic and uncertain data and making good decisions quickly. PubDate: 2024-07-09

Abstract: The Flying Sidekick Traveling Salesman Problem (FSTSP) consists of using one truck and one drone to perform deliveries to a set of customers. The drone is limited to delivering to one customer at a time, after which it returns to the truck, from where it can be launched again. The goal is to minimize the time required to service all customers and return both vehicles to the depot. In the literature, we can find heuristics for this problem that follow the order-first split-second approach: find a Hamiltonian cycle h with all customers, and then remove some customers to be handled by the drone while deciding from where the drone will be launched and where it will be retrieved. Indeed, these heuristics optimally solve the h-FSTSP, a variation that consists of solving the FSTSP while respecting a given initial cycle h. We present the Lazy Drone Property, which guarantees that only some combinations of nodes for the launch and retrieval of the drone need to be considered by algorithms for the h-FSTSP. We also present an algorithm that uses the property, and we show experimental results that corroborate its effectiveness in decreasing the running time of such algorithms. Our algorithm was shown to be more than 84 times faster than the previously best-known algorithms on the literature benchmark. Moreover, on average, it considered a number of launch-and-retrieval pairs that is linear in the number of customers, indicating that the algorithm’s performance should be sustainable for larger instances. PubDate: 2024-07-08

Abstract: This paper presents improved approaches to the treatment of combinatorial challenges associated with the search process for conservative cuts arising in disjoint bilinear programming. We introduce a new randomized approach that leverages the active-constraint information within a hyperplane containing the given local solution. It restricts the search process to a single dimension and mitigates the computational burden imposed by growing degeneracy. The use of recursion further refines our strategy by systematically reducing the number of adjacent vertices available for exchange. Extensive computational experiments validate that these approaches can significantly enhance computational efficiency, down to the scale of \(10^{-3}\) s, particularly for problems with high dimension and degree of degeneracy. PubDate: 2024-07-06

Abstract: In this paper, we consider the modified second accelerated proximal gradient algorithm (APG \(_{s}\)) introduced in Lin and Liu (Optim Lett 13(4):805–824, 2019), discuss the behaviour of this method in more general cases, and prove its convergence properties under weaker assumptions. Finally, numerical experiments are performed to support our theoretical results. PubDate: 2024-07-06
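
As background, the following is a sketch of the standard accelerated proximal gradient (FISTA-style) iteration for composite problems \(\min f(x) + g(x)\), not the modified APG \(_{s}\) variant studied in the paper; the soft-thresholding example problem is an illustrative choice.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg(grad_f, prox_g, x0, step, n_iter=200):
    """Accelerated proximal gradient with the classical momentum sequence."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)      # proximal step
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2        # momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)     # extrapolation
        x, t = x_new, t_new
    return x

# Minimize 0.5 * ||x - b||^2 + lam * ||x||_1 (closed form: soft_threshold(b, lam))
b, lam = np.array([3.0, -0.2, 1.5]), 0.5
x = apg(grad_f=lambda x: x - b,
        prox_g=lambda v, s: soft_threshold(v, lam * s),
        x0=np.zeros(3), step=1.0)
```

The computed `x` can be checked against the closed-form minimizer `soft_threshold(b, lam)`.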

Abstract: Subspace minimization conjugate gradient (SMCG) methods are a class of quite efficient iterative methods for unconstrained optimization. Orthogonality is an important property of the linear conjugate gradient method; it is often observed, however, that the orthogonality of the gradients in the linear conjugate gradient method is lost, which usually causes slow convergence. Based on SMCG_BB (Liu and Liu in J Optim Theory Appl 180(3):879–906, 2019), we combine the subspace minimization conjugate gradient method with the limited memory technique and present a limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization. The proposed method includes two types of iterations: SMCG iterations and quasi-Newton (QN) iterations. In the SMCG iteration, the search direction is determined by solving a quadratic approximation problem, in which the important parameter is estimated based on some properties of the objective function at the current iterate. In the QN iteration, a modified quasi-Newton method in the subspace is proposed to improve orthogonality. Additionally, a modified strategy for choosing the initial stepsize is exploited. The global convergence of the proposed method is established under weak conditions. Numerical results indicate that, for the tested functions in the CUTEr library, the proposed method improves greatly over SMCG_BB, is comparable to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) (Hager and Zhang in SIAM J Optim 23(4):2150–2168, 2013), and is also superior to the well-known limited memory BFGS (L-BFGS) method. PubDate: 2024-07-02
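
For readers unfamiliar with nonlinear conjugate gradient methods, the sketch below shows a generic Fletcher–Reeves iteration with a simple Armijo backtracking line search; it illustrates the family of methods being improved, not the SMCG or limited memory machinery of the paper, and the safeguard, parameters, and test problem are illustrative choices.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, n_iter=100, tol=1e-8):
    """Fletcher-Reeves nonlinear CG with Armijo backtracking (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                     # safeguard: restart with steepest descent
            d = -g
        a = 1.0                            # Armijo backtracking line search
        while f(x + a * d) > f(x) + 1e-4 * a * (g @ d):
            a *= 0.5
        x_new = x + a * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quadratic test problem with minimizer (1, -2)
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x = nonlinear_cg(f, grad, [0.0, 0.0])
```

An SMCG method would instead choose the step by minimizing a quadratic model over a low-dimensional subspace spanned by recent directions.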

Abstract: Projection is often used in solving variational inequalities. When the projection onto the feasible set is not easy to calculate, projection algorithms are replaced by relaxed projection algorithms. However, these relaxed projection algorithms are not feasible, and to ensure their convergence one must assume, in addition to basic conditions (such as the Slater condition holding for the feasible set and the mapping being pseudomonotone and Lipschitz continuous), additional conditions relating the mapping to the feasible set. In this paper, by replacing the projection onto the feasible set with the projection onto a ball (which changes from iteration to iteration) contained in the feasible set, a new feasible moving ball projection algorithm for pseudomonotone variational inequalities is obtained. Since the projection onto a ball has an explicit expression, this algorithm is easy to implement. At the same time, all the balls are contained in the feasible set, so the iteration points generated by this algorithm all lie in the feasible set, which ensures the feasibility of the algorithm. The convergence of the algorithm is proved when the Slater condition holds for the feasible set and the mapping is pseudomonotone and Lipschitz continuous. The fundamental difference between this moving ball projection algorithm and previous relaxed projection algorithms lies in that the previous algorithms project onto a half-space containing the feasible set, whereas the moving ball projection algorithm projects onto a ball contained in the feasible set. In particular, this algorithm does not need any additional conditions relating the mapping to the feasible set. Finally, some numerical examples are given to illustrate the effectiveness of the algorithm. PubDate: 2024-07-01
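
The explicit expression for the projection onto a ball, which makes the algorithm easy to implement, can be sketched as follows (the function name and test values are illustrative):

```python
import numpy as np

def project_onto_ball(x, center, radius):
    """Projection onto the closed ball B(center, radius):
    return x if inside, otherwise the nearest boundary point."""
    d = x - center
    norm = np.linalg.norm(d)
    if norm <= radius:
        return x.copy()
    return center + (radius / norm) * d

p = project_onto_ball(np.array([3.0, 4.0]), center=np.zeros(2), radius=1.0)
q = project_onto_ball(np.array([0.1, 0.2]), center=np.zeros(2), radius=1.0)
```

Here `p` lands on the boundary of the unit ball, while `q` is already feasible and is returned unchanged.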

Abstract: Universum linear discriminant analysis was recently proposed to improve linear discriminant analysis by incorporating Universum information. However, it obtains each discriminant direction using samples from only two classes, while the other class samples are treated as Universum. This not only ignores some discriminant information, but also restricts the number of extracted features to at most \(0.5k(k-1)\), where k is the number of classes. To fully explore discriminant information from all classes, this paper studies a novel Universum linear discriminant analysis based on a unified model that uses all classes simultaneously. Compared to the existing Universum linear discriminant analysis, all Universum information is fully utilized in the proposed model when obtaining each discriminant direction, where the Universum can be self-constructed or given in advance, of any type. The constrained “concave-convex” procedure can be used to solve the proposed method, which makes the algorithm convergent to a local minimum. By employing a recursive technique, an arbitrary number of discriminant directions can be obtained. Experimental results on real-world benchmark datasets and image datasets illustrate the advantages of the proposed method. PubDate: 2024-07-01

Abstract: We address the 1-line Steiner tree problem with minimum number of Steiner points. Given a line l, a point set P of n terminals in \({\mathbb {R}}^2\) and a positive constant K, we are asked to find a Steiner tree \(T_{l}\) interconnecting the line l and the n terminals such that the Euclidean length of each edge in \(T_{l}\) is no more than the given positive constant K, except for edges connecting two points on the line l. The objective is to minimize the total number of Steiner points in \(T_{l}\), i.e. \(\min _{T_{l}}\{ S_{out} + S_{on} \}\), where \( S_{out} \) and \( S_{on} \) are the numbers of Steiner points located outside of the line l and on the line l, respectively. We design a 4-approximation algorithm with time complexity \(O(n^3)\) for the 1-line Steiner tree problem with minimum number of Steiner points. PubDate: 2024-07-01

Abstract: This paper introduces CLS, a new line search along an arbitrary smooth search path that starts at the current iterate tangentially to a descent direction. Like the Goldstein line search and unlike the Wolfe line search, the new line search uses, beyond the gradient at the current iterate, only function values. Using this line search with search directions satisfying the bounded angle condition, global convergence to a stationary point is proved for continuously differentiable objective functions that are bounded below and have Lipschitz continuous gradients. The standard complexity bounds are proved under several natural assumptions. PubDate: 2024-07-01
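
For context, the classical Goldstein line search mentioned in the abstract can be sketched as below; it uses only function values beyond the directional derivative at the current iterate, searching along a straight direction rather than CLS's smooth path, and the parameter choices and test function are illustrative.

```python
def goldstein_line_search(f, fx, slope, x, d, c=0.25, alpha=1.0, max_iter=50):
    """Goldstein-type line search for a descent direction d at x.
    `fx` = f(x), `slope` = <grad f(x), d> < 0; only f-values are used beyond that."""
    lo, hi = 0.0, float('inf')
    for _ in range(max_iter):
        fa = f([xi + alpha * di for xi, di in zip(x, d)])
        if fa > fx + c * alpha * slope:          # insufficient decrease: shrink
            hi = alpha
        elif fa < fx + (1 - c) * alpha * slope:  # step too conservative: grow
            lo = alpha
        else:
            return alpha                         # both Goldstein conditions hold
        alpha = (lo + hi) / 2 if hi < float('inf') else 2 * lo
    return alpha

# 1-D example: f(x) = x^2 from x = 1 along d = -1, so slope = -2
f = lambda v: v[0] ** 2
a = goldstein_line_search(f, fx=1.0, slope=-2.0, x=[1.0], d=[-1.0])
```

On this quadratic, the initial trial step `alpha = 1.0` already satisfies both Goldstein conditions and is accepted.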

Abstract: An existence theorem for maximum points of a set preordered (not necessarily partially ordered) by a convex cone of a real linear space is presented. The proof of the theorem differs from the usual technique, namely the separation theorem, as used in Khazayel and Farajzadeh (Optim Lett 15:847–858, 2021) and Araya (Appl Math Lett 22:501–504, 2009). The main result gives an affirmative answer to the open problem raised by Corley (J Optim Theory Appl 31(2):277–281, 1980), and this paper can also be viewed as a new version of the main theorem of the above papers, with mild assumptions and without using separation theory or the notions of topological or algebraic interior. PubDate: 2024-07-01

Abstract: An inherent limitation of many popular community detection methods, such as the walktrap and spin glass algorithms, is that they do not allow vertices to have membership in more than one community. Clique percolation remedies this limitation by allowing overlapping communities, but it does not necessarily produce solutions in accordance with the standard definition of ‘community’ (i.e., a dense subgraph of the network), often fails to assign all vertices to at least one community, and presents formidable model selection challenges. In this paper, we propose a set-covering approach to overlapping community detection that enables overlapping communities to be assembled from maximal cliques, or from candidate communities formed from k-1 adjacent cliques. The promise of this new approach is demonstrated via comparison to clique percolation in a simulation experiment, as well as through application to an empirical psychological network. PubDate: 2024-07-01
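
The two ingredients of the proposed approach, maximal-clique enumeration and set covering, can be sketched as follows; the paper's covering model is presumably solved exactly, so the greedy cover here is only an illustrative stand-in, and the toy graph is an invented example.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques of a graph
    given as an adjacency dict {vertex: set of neighbors}."""
    cliques = []
    def bk(R, P, X):
        if not P and not X:
            cliques.append(frozenset(R))
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return cliques

def greedy_cover(universe, candidates):
    """Greedy set cover: repeatedly pick the candidate community covering
    the most uncovered vertices; chosen communities may overlap."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda c: len(c & uncovered))
        if not best & uncovered:
            break
        chosen.append(best)
        uncovered -= best
    return chosen

# Two triangles sharing vertex 2
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
cliques = maximal_cliques(adj)
cover = greedy_cover(adj.keys(), cliques)
```

On this graph the two triangles are the maximal cliques, and both are selected, so vertex 2 belongs to two overlapping communities.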

Abstract: In this article, we study higher-order optimality conditions for robust Benson properly efficient solutions in uncertain vector optimization problems. We first introduce a new epiderivative of set-valued maps, the higher-order weak radial epiderivative, and investigate some of its properties. The concept of robust Benson properly efficient solutions is proposed for uncertain vector optimization problems. In addition, applying the higher-order weak radial epiderivative, we establish higher-order sufficient and necessary optimality conditions for robust Benson properly efficient solutions in uncertain vector optimization problems without any convexity assumption. PubDate: 2024-07-01

Abstract: Constrained Bayesian optimization optimizes a black-box objective function subject to black-box constraints. For simplicity, most existing works assume that multiple constraints are independent. To ask when and how dependence between constraints helps, we remove this assumption and implement the probability of feasibility with dependence (Dep-PoF) by applying multiple-output Gaussian processes (MOGPs) as surrogate models and using expectation propagation to approximate the probabilities. We compare Dep-PoF with the independent version, PoF. We propose two new acquisition functions incorporating Dep-PoF and test them on synthetic and practical benchmarks. Our results are largely negative: incorporating dependence between the constraints does not help much. Empirically, incorporating dependence between constraints may be useful if (i) the solution is on the boundary of the feasible region(s) or (ii) the feasible set is very small. When these conditions are satisfied, the predictive covariance matrix from the MOGP may be poorly approximated by a diagonal matrix and the off-diagonal elements may become important. Dep-PoF may apply to settings where (i) the constraints and their dependence are totally unknown and (ii) experiments are so expensive that any slightly better Bayesian optimization procedure is preferred. But, in most cases, Dep-PoF is indistinguishable from PoF. PubDate: 2024-07-01
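
The independent PoF baseline that Dep-PoF generalizes can be sketched as below: under independent Gaussian posteriors for the constraints, the probability of feasibility factorizes into a product of univariate normal CDFs (the posterior means/stds in the example are invented values, not from the paper).

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pof_independent(means, stds, thresholds):
    """Independent probability of feasibility: the product over constraints
    of P(c_i(x) <= threshold_i) under Gaussian posteriors. This is the
    diagonal approximation that Dep-PoF replaces with a full covariance."""
    p = 1.0
    for mu, sigma, t in zip(means, stds, thresholds):
        p *= norm_cdf((t - mu) / sigma)
    return p

# Two constraints c_i(x) <= 0 with illustrative posterior means/stds
p = pof_independent(means=[-1.0, 0.0], stds=[1.0, 2.0], thresholds=[0.0, 0.0])
```

Dep-PoF instead evaluates the joint probability under the full MOGP predictive covariance, which matters precisely when the off-diagonal elements are non-negligible.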

Abstract: The development of an efficient and practical method to recognize matrices originating from generalized graphs, in the same way that network matrices are defined over directed graphs, remains an open question. In this paper we present such a recognition procedure for binet matrices, which are known to characterize an important class of integral polyhedra. A mathematical programming formulation for the recognition problem is provided, where the special structure of these matrices is expressed via a set of constraints. Finally, given that binet matrices represent certain classes of matroids, we present a set of areas, ranging from combinatorial optimization to matroid theory, that can benefit from our approach. PubDate: 2024-07-01

Abstract: Hyperparameter tuning in machine learning is often done with naive techniques, such as random search and grid search. However, these methods seldom lead to an optimal set of hyperparameters and often become very expensive. The hyperparameter optimization problem is inherently a bilevel optimization task, and several studies have attempted bilevel solution methodologies for it. These techniques often assume a unique set of weights that minimizes the loss on the training set, an assumption that is violated by deep learning architectures. We propose a bilevel solution method for the hyperparameter optimization problem that does not suffer from this drawback. The proposed method is general and can be easily applied to any class of machine learning algorithms with continuous hyperparameters. The idea is based on approximating the lower-level optimal value function mapping, which helps reduce the bilevel problem to a single-level constrained optimization task. The single-level constrained optimization problem is then solved using the augmented Lagrangian method. We perform an extensive computational study on three datasets that confirms the efficiency of the proposed method. A comparative study against grid search, random search, the Tree-structured Parzen Estimator and the Quasi Monte Carlo Sampler shows that the proposed algorithm is multiple times faster and leads to models that generalize better on the testing set. PubDate: 2024-07-01
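
The augmented Lagrangian method used for the reduced single-level problem can be sketched on a toy equality-constrained problem; this is a generic textbook iteration with invented parameters and an invented test problem, not the paper's hyperparameter formulation.

```python
def augmented_lagrangian(f_grad, h, h_grad, x, rho=10.0, lam=0.0,
                         outer=20, inner=500, lr=0.01):
    """Augmented Lagrangian for min f(x) s.t. h(x) = 0 (single constraint).
    Each inner subproblem min_x f(x) + lam*h(x) + (rho/2)*h(x)^2
    is minimized by plain gradient descent."""
    for _ in range(outer):
        for _ in range(inner):
            hx = h(x)
            gL = [gf + (lam + rho * hx) * gh
                  for gf, gh in zip(f_grad(x), h_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, gL)]
        lam += rho * h(x)        # multiplier update
    return x

# Toy problem: min x^2 + y^2  s.t.  x + y = 1  (optimum at (0.5, 0.5))
sol = augmented_lagrangian(
    f_grad=lambda v: [2 * v[0], 2 * v[1]],
    h=lambda v: v[0] + v[1] - 1.0,
    h_grad=lambda v: [1.0, 1.0],
    x=[0.0, 0.0])
```

In the paper's setting, the constraint would encode the approximated lower-level optimal value function rather than a simple linear equation.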

Abstract: We propose a multistart algorithm to identify all local minima of a constrained, nonconvex stochastic optimization problem. The algorithm uniformly samples points in the domain and then starts a local stochastic optimization run from any point that is the “probabilistically best” point in its neighborhood. Under certain conditions, our algorithm is shown to asymptotically identify all local optima with high probability; this holds even though our algorithm is shown to almost surely start only finitely many local stochastic optimization runs. We demonstrate the performance of an implementation of our algorithm on nonconvex stochastic optimization problems, including identifying optimal variational parameters for the quantum approximate optimization algorithm. PubDate: 2024-07-01
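
A much-simplified one-dimensional sketch of the multistart idea follows, with a deterministic objective standing in for the stochastic one: sample uniformly and start a local run only from samples that are best within their neighborhood. All functions, parameters, and the double-well test objective are illustrative choices, not the paper's algorithm.

```python
import random

def grad_descent(f, x, lr=0.01, steps=2000, eps=1e-6):
    """Plain gradient descent with a central-difference gradient."""
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * g
    return x

def multistart_minimize(f, local_opt, bounds, n_samples=200, radius=0.2, seed=0):
    """Sample uniformly; start a local run only from samples that are
    best within `radius` of themselves ("best in neighborhood")."""
    rng = random.Random(seed)
    pts = [rng.uniform(*bounds) for _ in range(n_samples)]
    vals = [f(p) for p in pts]
    minima = set()
    for i, p in enumerate(pts):
        best_nearby = min(vals[j] for j, q in enumerate(pts)
                          if abs(q - p) <= radius)
        if vals[i] <= best_nearby:
            minima.add(round(local_opt(f, p), 3))
    return sorted(minima)

# Deterministic double-well objective with local minima at x = -1 and x = 1
f = lambda x: (x * x - 1) ** 2
minima = multistart_minimize(f, grad_descent, (-2.0, 2.0))
```

The neighborhood filter keeps the number of local runs small while still launching at least one run per basin, so both minima are recovered.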

Abstract: Understanding and modeling an agent’s risk/reward preferences is a central problem in various applications of risk management, including investment science and portfolio theory in particular. One approach is to axiomatically define a set of performance measures and to use a benchmark to identify a particular measure from that set by either inverse optimization or functional dominance. For example, such a benchmark could be the rate of return of an existing attractive financial instrument. This work introduces deviation and drawdown measures that incorporate rates of return of indicated financial instruments (benchmarks). For discrete distributions and discrete sample paths, portfolio problems with such measures are reduced to linear programs and solved based on historical data, in cases of a single benchmark and of three benchmarks used simultaneously. The optimal portfolios and corresponding benchmarks have similar expected/cumulative rates of return in sample and out of sample, but the former are considerably less volatile. PubDate: 2024-06-25
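
To ground the notion of drawdown for readers, the basic quantity on a discrete sample path can be computed as below; the paper's drawdown measures additionally incorporate benchmark rates of return and are optimized via linear programs, which this sketch does not attempt (the wealth path is an invented example).

```python
def max_drawdown(wealth_path):
    """Maximum drawdown of a discrete wealth (cumulative-return) path:
    the largest peak-to-trough decline observed so far."""
    peak, mdd = float('-inf'), 0.0
    for w in wealth_path:
        peak = max(peak, w)          # running peak
        mdd = max(mdd, peak - w)     # decline from the running peak
    return mdd

path = [100, 105, 102, 110, 95, 100]
dd = max_drawdown(path)              # peak 110, trough 95
```

For the path above the maximum drawdown is the drop from the peak of 110 to the trough of 95.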

Abstract: We consider the problem of determining the optimal node interception strategy during influence propagation over a (directed) network \(G=(V,A)\). More specifically, this work aims to find an interception set \(D \subseteq V\) such that the influence spread over the remaining network \(G \backslash D\) under the linear threshold diffusion model is minimized. We prove its NP-hardness, even when G is an undirected graph with unit edge weights. An exact algorithm based on integer linear programming and delayed constraint generation is proposed to determine the most critical nodes in the influence propagation process. Additionally, we investigate the technique of lifting inequalities of minimal activation sets. Experiments on connected Watts–Strogatz small-world networks and real-world networks are also conducted to validate the effectiveness of our methodology. PubDate: 2024-05-13 DOI: 10.1007/s11590-024-02117-w
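
The linear threshold diffusion model at the heart of this problem can be sketched with a small simulation: a node activates once the total weight of its active in-neighbors reaches its threshold, and an interception set simply never activates. The function, graph, and thresholds are illustrative (the paper's exact algorithm is an integer program, not a simulation).

```python
def linear_threshold_spread(weights, thresholds, seeds, blocked=frozenset()):
    """Linear threshold diffusion on a directed graph. `weights` maps
    arcs (u, v) to influence weights; nodes in `blocked` (the
    interception set) never activate."""
    active = set(seeds) - set(blocked)
    changed = True
    while changed:
        changed = False
        for v in thresholds:
            if v in active or v in blocked:
                continue
            influence = sum(w for (u, t), w in weights.items()
                            if t == v and u in active)
            if influence >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# Path 0 -> 1 -> 2 with unit weights
weights = {(0, 1): 1.0, (1, 2): 1.0}
thresholds = {0: 0.5, 1: 0.5, 2: 0.5}
spread = linear_threshold_spread(weights, thresholds, seeds={0})
blocked_spread = linear_threshold_spread(weights, thresholds,
                                         seeds={0}, blocked={1})
```

Intercepting node 1 cuts the cascade: without it the seed activates the whole path, with it the spread stops at the seed, which is exactly the effect the interception set D is chosen to maximize.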