Subjects -> COMPUTER SCIENCE (Total: 2313 journals)
    - ANIMATION AND SIMULATION (33 journals)
    - ARTIFICIAL INTELLIGENCE (133 journals)
    - AUTOMATION AND ROBOTICS (116 journals)
    - CLOUD COMPUTING AND NETWORKS (75 journals)
    - COMPUTER ARCHITECTURE (11 journals)
    - COMPUTER ENGINEERING (12 journals)
    - COMPUTER GAMES (23 journals)
    - COMPUTER PROGRAMMING (25 journals)
    - COMPUTER SCIENCE (1305 journals)
    - COMPUTER SECURITY (59 journals)
    - DATA BASE MANAGEMENT (21 journals)
    - DATA MINING (50 journals)
    - E-BUSINESS (21 journals)
    - E-LEARNING (30 journals)
    - ELECTRONIC DATA PROCESSING (23 journals)
    - IMAGE AND VIDEO PROCESSING (42 journals)
    - INFORMATION SYSTEMS (109 journals)
    - INTERNET (111 journals)
    - SOCIAL WEB (61 journals)
    - SOFTWARE (43 journals)
    - THEORY OF COMPUTING (10 journals)

COMPUTER PROGRAMMING (25 journals)

Showing 1–27 of 27 journals, sorted alphabetically
ACM SIGPLAN Fortran Forum     Full-text available via subscription   (Followers: 4)
ACM Transactions on Programming Languages and Systems (TOPLAS)     Hybrid Journal   (Followers: 18)
Acta Informatica     Hybrid Journal   (Followers: 5)
Advances in Image and Video Processing     Open Access   (Followers: 24)
Algorithmica     Hybrid Journal   (Followers: 9)
An International Journal of Optimization and Control: Theories & Applications     Open Access   (Followers: 12)
Computer Methods and Programs in Biomedicine     Hybrid Journal   (Followers: 6)
Constraints     Hybrid Journal  
Grey Systems: Theory and Application     Hybrid Journal  
International Journal of Parallel Programming     Hybrid Journal   (Followers: 6)
International Journal of People-Oriented Programming     Full-text available via subscription  
International Journal of Soft Computing and Software Engineering     Open Access   (Followers: 14)
Journal of Computer Languages     Hybrid Journal   (Followers: 5)
Journal of Functional Programming     Hybrid Journal   (Followers: 1)
Journal of Logical and Algebraic Methods in Programming     Hybrid Journal   (Followers: 1)
Linux Journal     Full-text available via subscription   (Followers: 25)
Mathematical and Computational Applications     Open Access   (Followers: 3)
Mathematical Programming     Hybrid Journal   (Followers: 15)
Optimization: A Journal of Mathematical Programming and Operations Research     Hybrid Journal   (Followers: 6)
Proceedings of the ACM on Programming Languages     Open Access   (Followers: 7)
Programming and Computer Software     Hybrid Journal   (Followers: 16)
Python Papers     Open Access   (Followers: 11)
Python Papers Monograph     Open Access   (Followers: 4)
Python Papers Source Codes     Open Access   (Followers: 9)
Science of Computer Programming     Hybrid Journal   (Followers: 14)
Scientific Programming     Open Access   (Followers: 12)
Theory and Practice of Logic Programming     Hybrid Journal   (Followers: 3)
Similar Journals
Journal Cover
Mathematical Programming
Journal Prestige (SJR): 2.49
Citation Impact (citeScore): 3
Number of Followers: 15  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 0025-5610 - ISSN (Online) 1436-4646
Published by Springer-Verlag  [2468 journals]
  • New lower bounds on crossing numbers of \(K_{m,n}\) from semidefinite programming

      Abstract: In this paper, we use semidefinite programming and representation theory to compute new lower bounds on the crossing number of the complete bipartite graph \(K_{m,n}\), extending a method from de Klerk et al. (SIAM J Discrete Math 20:189–202, 2006) and the subsequent reduction by De Klerk, Pasechnik and Schrijver (Math Prog Ser A and B 109:613–624, 2007). We exploit the full symmetry of the problem using a novel decomposition technique. This results in a full block-diagonalization of the underlying matrix algebra, which we use to improve bounds on several concrete instances. Our results imply that \(\mathrm{cr}(K_{10,n}) \ge 4.87057 n^2 - 10n\), \(\mathrm{cr}(K_{11,n}) \ge 5.99939 n^2 - 12.5n\), \(\mathrm{cr}(K_{12,n}) \ge 7.25579 n^2 - 15n\), and \(\mathrm{cr}(K_{13,n}) \ge 8.65675 n^2 - 18n\) for all n. The latter three bounds are computed using a new and well-performing relaxation of the original semidefinite programming bound. This new relaxation is obtained by only requiring one small matrix block to be positive semidefinite.
      PubDate: 2023-11-20
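The four quadratic bounds quoted above are easy to evaluate numerically. A minimal sketch of ours (the coefficients are copied verbatim from the abstract; the function name is hypothetical):

```python
# Lower bounds on cr(K_{m,n}) stated in the abstract, each of the form a*n^2 - b*n.
# Coefficients (a, b), keyed by m, copied from the stated results.
BOUNDS = {
    10: (4.87057, 10.0),
    11: (5.99939, 12.5),
    12: (7.25579, 15.0),
    13: (8.65675, 18.0),
}

def crossing_lower_bound(m: int, n: int) -> float:
    """Stated lower bound on cr(K_{m,n}) for m in 10..13."""
    a, b = BOUNDS[m]
    return a * n ** 2 - b * n
```

For instance, `crossing_lower_bound(10, 100)` evaluates 4.87057·100² − 10·100.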
       
  • State polynomials: positivity, optimization and nonlinear Bell
           inequalities

      Abstract: This paper introduces state polynomials, i.e., polynomials in noncommuting variables and formal states of their products. A state analog of Artin’s solution to Hilbert’s 17th problem is proved showing that state polynomials, positive over all matrices and matricial states, are sums of squares with denominators. Somewhat surprisingly, it is also established that a Krivine–Stengle Positivstellensatz fails to hold in the state polynomial setting. Further, archimedean Positivstellensätze in the spirit of Putinar and Helton–McCullough are presented leading to a hierarchy of semidefinite relaxations converging monotonically to the optimum of a state polynomial subject to state constraints. This hierarchy can be seen as a state analog of the Lasserre hierarchy for optimization of polynomials, and the Navascués–Pironio–Acín scheme for optimization of noncommutative polynomials. The motivation behind this theory arises from the study of correlations in quantum networks. Determining the maximal quantum violation of a polynomial Bell inequality for an arbitrary network is reformulated as a state polynomial optimization problem. Several examples of quadratic Bell inequalities in the bipartite and the bilocal tripartite scenario are analyzed. To reduce the size of the constructed SDPs, sparsity, sign symmetry and conditional expectation of the observables’ group structure are exploited. To obtain the above-mentioned results, techniques from noncommutative algebra, real algebraic geometry, operator theory, and convex optimization are employed.
      PubDate: 2023-11-03
       
  • Publisher Correction to: A new perspective on low-rank optimization

      PubDate: 2023-11-01
       
  • Publisher Correction to: Lyapunov stability of the subgradient method with
           constant step size

      PubDate: 2023-11-01
       
  • Discrete potential mean field games: duality and numerical resolution

      Abstract: We propose and investigate a general class of discrete time and finite state space mean field game (MFG) problems with potential structure. Our model incorporates interactions through a congestion term and a price variable. It also allows hard constraints on the distribution of the agents. We analyze the connection between the MFG problem and two optimal control problems in duality. We present two families of numerical methods and detail their implementation: (i) primal-dual proximal methods (and their extension with nonlinear proximity operators), (ii) the alternating direction method of multipliers (ADMM) and a variant called ADM-G. We give some convergence results. Numerical results are provided for two examples with hard constraints.
      PubDate: 2023-11-01
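ADMM, one of the two method families the abstract mentions, is easiest to see on a toy problem. The sketch below is not the paper's MFG solver; it is a scalar consensus example of our own, running scaled ADMM on min ½(x − a)² + λ|z| subject to x = z:

```python
# Scaled ADMM for: min 0.5*(x - a)^2 + lam*|z|  s.t.  x = z.
# Both primal updates are closed-form; u is the scaled dual variable.
a, lam, rho = 3.0, 1.0, 1.0
x = z = u = 0.0

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*|.|"""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)  # argmin of 0.5(x-a)^2 + (rho/2)(x-z+u)^2
    z = soft(x + u, lam / rho)             # argmin of lam|z| + (rho/2)(x-z+u)^2
    u = u + x - z                          # dual ascent on the consensus constraint
# the solution is the soft-threshold of a: here z converges to 2.0
```

The MFG duality structure the abstract describes plays the role of the f/g splitting here; the real solvers operate on distributions over a finite state space rather than scalars.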
       
  • A trust region method for noisy unconstrained optimization

      Abstract: Classical trust region methods were designed to solve problems in which function and gradient information are exact. This paper considers the case when there are errors (or noise) in the above computations and proposes a simple modification of the trust region method to cope with these errors. The new algorithm only requires information about the size/standard deviation of the errors in the function evaluations and incurs no additional computational expense. It is shown that, when applied to a smooth (but not necessarily convex) objective function, the iterates of the algorithm visit a neighborhood of stationarity infinitely often, assuming errors in the function and gradient evaluations are bounded. It is also shown that, after visiting the above neighborhood for the first time, the iterates cannot stray too far from it, as measured by the objective value. Numerical results illustrate how the classical trust region algorithm may fail in the presence of noise, and how the proposed algorithm ensures steady progress towards stationarity in these cases.
      PubDate: 2023-11-01
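One way to use a known noise bound, as the abstract describes, is to relax the classical acceptance ratio by that bound. The sketch below is our own 1-D illustration with a bounded deterministic "noise" term, not the paper's algorithm; `eps_f` plays the role of the assumed bound on the function-evaluation error:

```python
import math

def f_true(x):
    return x * x

def noise(x):
    # bounded deterministic surrogate for evaluation error, |noise| <= 1e-3
    return 1e-3 * math.sin(1000.0 * x)

def f(x):          # noisy function values
    return f_true(x) + noise(x)

def grad(x):       # gradient kept exact here for simplicity
    return 2.0 * x

eps_f = 1e-3       # assumed known bound on the evaluation error
x, delta = 2.0, 1.0
for _ in range(50):
    g = grad(x)
    if abs(g) < 1e-8:
        break
    s = max(-delta, min(delta, -g / 2.0))  # Newton step for f_true, clipped to the region
    pred = -(g * s + s * s)                # model decrease m(0) - m(s), Hessian = 2
    rho = (f(x) - f(x + s) + eps_f) / (pred + eps_f)  # noise-relaxed ratio test
    if rho > 0.1:
        x = x + s                          # accept the step
    delta = 2.0 * delta if rho > 0.75 else (delta / 2.0 if rho < 0.1 else delta)
```

Without the `eps_f` relaxation, a noisy numerator can make `rho` arbitrarily bad once true decreases shrink below the noise level, stalling the classical method near stationarity.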
       
  • Inequality constrained stochastic nonlinear optimization via active-set
           sequential quadratic programming

      Abstract: We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks. We propose an active-set stochastic sequential quadratic programming (StoSQP) algorithm that utilizes a differentiable exact augmented Lagrangian as the merit function. The algorithm adaptively selects the penalty parameters of the augmented Lagrangian, and performs a stochastic line search to decide the stepsize. The global convergence is established: for any initialization, the KKT residuals converge to zero almost surely. Our algorithm and analysis further develop the prior work of Na et al. (Math Program, 2022. https://doi.org/10.1007/s10107-022-01846-z). Specifically, we allow nonlinear inequality constraints without requiring the strict complementarity condition; refine some of the designs in Na et al. (2022) such as the feasibility error condition and the monotonically increasing sample size; strengthen the global convergence guarantee; and improve the sample complexity on the objective Hessian. We demonstrate the performance of the designed algorithm on a subset of nonlinear problems from the CUTEst test set and on constrained logistic regression problems.
      PubDate: 2023-11-01
       
  • First- and second-order optimality conditions for second-order cone and
           semidefinite programming under a constant rank condition

      Abstract: The well-known constant rank constraint qualification [Math. Program. Study 21:110–126, 1984] introduced by Janin for nonlinear programming has recently been extended to a conic context by exploiting the eigenvector structure of the problem. In this paper we propose a more general and geometric approach for defining a new extension of this condition to the conic context. The main advantage of our approach is that we are able to recast the strong second-order properties of the constant rank condition in a conic context. In particular, we obtain a second-order necessary optimality condition that is stronger than the classical one obtained under Robinson’s constraint qualification, in the sense that it holds for every Lagrange multiplier, even though our condition is independent of Robinson’s condition.
      PubDate: 2023-11-01
       
  • Analysis of the optimization landscape of Linear Quadratic Gaussian (LQG)
           control

      Abstract: This paper revisits the classical Linear Quadratic Gaussian (LQG) control from a modern optimization perspective. We analyze two aspects of the optimization landscape of the LQG problem: (1) Connectivity of the set of stabilizing controllers \(\mathcal{C}_n\); and (2) Structure of stationary points. It is known that similarity transformations do not change the input-output behavior of a dynamic controller or LQG cost. This inherent symmetry by similarity transformations makes the landscape of LQG very rich. We show that (1) The set of stabilizing controllers \(\mathcal{C}_n\) has at most two path-connected components and they are diffeomorphic under a mapping defined by a similarity transformation; (2) There might exist many strictly suboptimal stationary points of the LQG cost function over \(\mathcal{C}_n\) that are not controllable and not observable; (3) All controllable and observable stationary points are globally optimal and they are identical up to a similarity transformation. These results shed some light on the performance analysis of direct policy gradient methods for solving the LQG problem.
      PubDate: 2023-11-01
       
  • A gradient sampling algorithm for stratified maps with applications to
           topological data analysis

      Abstract: We introduce a novel gradient descent algorithm refining the well-known Gradient Sampling algorithm on the class of stratifiably smooth objective functions, which are defined as locally Lipschitz functions that are smooth on some regular pieces—called the strata—of the ambient Euclidean space. On this class of functions, our algorithm achieves a sub-linear convergence rate. We then apply our method to objective functions based on the (extended) persistent homology map computed over lower-star filters, which is a central tool of Topological Data Analysis. For this, we propose an efficient exploration of the corresponding stratification by using the Cayley graph of the permutation group. Finally, we provide benchmarks and novel topological optimization problems that demonstrate the utility and applicability of our framework.
      PubDate: 2023-11-01
       
  • Correction: Global convergence of the gradient method for functions
           definable in o-minimal structures

      PubDate: 2023-11-01
       
  • Efficient joint object matching via linear programming

      Abstract: Joint object matching, also known as multi-image matching, namely, the problem of finding consistent partial maps among all pairs of objects within a collection, is a crucial task in many areas of computer vision. This problem subsumes bipartite graph matching and graph partitioning as special cases and is NP-hard, in general. We develop scalable linear programming (LP) relaxations with theoretical performance guarantees for joint object matching. We start by proposing a new characterization of consistent partial maps; this in turn enables us to formulate joint object matching as an integer linear programming (ILP) problem. To construct strong LP relaxations, we study the facial structure of the convex hull of the feasible region of this ILP, which we refer to as the joint matching polytope. We present an exponential family of facet-defining inequalities that can be separated in strongly polynomial time, hence obtaining a partial characterization of the joint matching polytope that is both tight and cheap to compute. To analyze the theoretical performance of the proposed LP relaxations, we focus on permutation group synchronization, an important special case of joint object matching. We show that under the random corruption model for the input maps, a simple LP relaxation, that is, an LP containing only a very small fraction of the proposed facet-defining inequalities, recovers the ground truth with high probability if the corruption level is below 40%. Finally, via a preliminary computational study on synthetic data, we show that the proposed LP relaxations outperform a popular SDP relaxation both in terms of recovery and tightness.
      PubDate: 2023-11-01
       
  • Global convergence of the gradient method for functions definable in
           o-minimal structures

      Abstract: We consider the gradient method with variable step size for minimizing functions that are definable in o-minimal structures on the real field and differentiable with locally Lipschitz gradients. We prove that global convergence holds if continuous gradient trajectories are bounded, with the minimum gradient norm vanishing at the rate o(1/k) if the step sizes are greater than a positive constant. If additionally the gradient is continuously differentiable, all saddle points are strict, and the step sizes are constant, then convergence to a local minimum holds almost surely over any bounded set of initial points.
      PubDate: 2023-11-01
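The o(1/k) rate on the minimum gradient norm can be observed numerically. A small illustration of ours (not from the paper) on the definable function f(x, y) = x² + y⁴ with a constant step size:

```python
import math

def grad(x, y):
    # gradient of the definable function f(x, y) = x^2 + y^4
    return 2.0 * x, 4.0 * y ** 3

step = 0.1
x, y = 1.0, 1.0
min_grad = float("inf")
K = 10000
for k in range(K):
    gx, gy = grad(x, y)
    min_grad = min(min_grad, math.hypot(gx, gy))
    x, y = x - step * gx, y - step * gy
# min_grad * K stays bounded (and in fact tends to 0), consistent with o(1/k):
# the flat quartic direction drives the gradient norm down like k^(-3/2)
```

Note f is not strongly convex along y, so a plain O(1/k) bound would be the generic expectation; the observed k·min_grad shrinking toward 0 is the little-o behavior the theorem asserts.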
       
  • Data perturbations in stochastic generalized equations: statistical
           robustness in static and sample average approximated models

      Abstract: Sample average approximation, also known as the Monte Carlo method, has been widely used for solving stochastic programming and equilibrium problems. In a data-driven environment, samples are often drawn from empirical data and hence may be potentially contaminated. Consequently, it is legitimate to ask whether statistical estimators obtained from solving the sample average approximated problems are statistically robust, that is, whether the difference between the laws of the statistical estimators based on contaminated data and real data is controllable under some metrics. In Guo and Xu (Math Program 190:679–720, 2021), we address the issue for the estimators of the optimal values of a wide range of stochastic programming problems. In this paper, we complement the research by investigating the optimal solution estimators and we do so by considering stochastic generalized equations (SGE) as a unified framework. Specifically, we look into the impact of a single data perturbation on the solutions of the SGE using the notion of influence function in robust statistics. Since the SGE may have multiple solutions, we use the proto-derivative of a set-valued mapping to introduce the notion of generalized influence function (GIF) and derive sufficient conditions under which the GIF is well defined, bounded and uniformly bounded. We then move on to quantitative statistical analysis of the SGE when all of the sample data are potentially contaminated and demonstrate, under moderate conditions, quantitative statistical robustness of the solutions obtained from solving the sample average approximated SGE.
      PubDate: 2023-11-01
       
  • Recognizing even-cycle and even-cut matroids

      Abstract: Even-cycle matroids are elementary lifts of graphic matroids and even-cut matroids are elementary lifts of cographic matroids. We present a polynomial algorithm to check if a binary matroid is an even-cycle matroid and we present a polynomial algorithm to check if a binary matroid is an even-cut matroid. These two algorithms rely on a polynomial algorithm (to be described in a pair of follow-up papers) to check if a binary matroid is pinch-graphic.
      PubDate: 2023-11-01
       
  • A novel reformulation for the single-sink fixed-charge transportation
           problem

      Abstract: The single-sink fixed-charge transportation problem is known to have many applications in the area of manufacturing and transportation as well as being an important subproblem of the fixed-charge transportation problem. However, even the best algorithms from the literature do not fully leverage the structure of this problem, to the point of being surpassed by modern general-purpose mixed-integer programming solvers for large instances. We introduce a novel reformulation of the problem and study its theoretical properties. This reformulation leads to a range of new upper and lower bounds, dominance relations, linear relaxations, and filtering procedures. The resulting algorithm includes a heuristic phase and an exact phase, the main step of which is to solve a very small number of knapsack subproblems. Computational experiments are presented for existing and new types of instances. These tests indicate that the new algorithm systematically reduces the resolution time of the state-of-the-art exact methods by several orders of magnitude.
      PubDate: 2023-11-01
       
  • A new perspective on low-rank optimization

      Abstract: A key question in many low-rank problems throughout optimization, machine learning, and statistics is to characterize the convex hulls of simple low-rank sets and judiciously apply these convex hulls to obtain strong yet computationally tractable relaxations. We invoke the matrix perspective function—the matrix analog of the perspective function—to characterize explicitly the convex hull of epigraphs of simple matrix convex functions under low-rank constraints. Further, we combine the matrix perspective function with orthogonal projection matrices—the matrix analog of binary variables which capture the row-space of a matrix—to develop a matrix perspective reformulation technique that reliably obtains strong relaxations for a variety of low-rank problems, including reduced rank regression, non-negative matrix factorization, and factor analysis. Moreover, we establish that these relaxations can be modeled via semidefinite constraints and thus optimized over tractably. The proposed approach parallels and generalizes the perspective reformulation technique in mixed-integer optimization and leads to new relaxations for a broad class of problems.
      PubDate: 2023-11-01
       
  • \(2\times 2\)-Convexifications for convex quadratic optimization with indicator variables

      Abstract: In this paper, we study the convex quadratic optimization problem with indicator variables. For the \(2\times 2\) case, we describe the convex hull of the epigraph in the original space of variables, and also give a conic quadratic extended formulation. Then, using the convex hull description for the \(2\times 2\) case as a building block, we derive an extended SDP relaxation for the general case. This new formulation is stronger than other SDP relaxations proposed in the literature for the problem, including the optimal perspective relaxation and the optimal rank-one relaxation. Computational experiments indicate that the proposed formulations are quite effective in reducing the integrality gap of the optimization problems.
      PubDate: 2023-11-01
       
  • Lyapunov stability of the subgradient method with constant step size

      Abstract: We consider the subgradient method with constant step size for minimizing locally Lipschitz semi-algebraic functions. In order to analyze the behavior of its iterates in the vicinity of a local minimum, we introduce a notion of discrete Lyapunov stability and propose necessary and sufficient conditions for stability.
      PubDate: 2023-11-01
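The stability question is visible even in one dimension: with a constant step, the subgradient method on f(x) = |x| does not converge but settles into an oscillation around the minimizer whose radius is about the step size. A minimal illustration of ours (not the paper's general semi-algebraic setting):

```python
def subgrad_abs(x):
    # a subgradient of f(x) = |x|
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

t = 0.1            # constant step size
x = 1.0
traj = []
for _ in range(100):
    x = x - t * subgrad_abs(x)
    traj.append(x)
# after the initial descent, iterates remain in a neighborhood of 0 of radius ~t
```

This trapped-but-not-convergent behavior is exactly what a discrete notion of Lyapunov stability is designed to capture.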
       
  • Scalable adaptive cubic regularization methods

      Abstract: Adaptive cubic regularization (ARC) methods for unconstrained optimization compute steps from linear systems involving a shifted Hessian in the spirit of the Levenberg-Marquardt and trust-region methods. The standard approach consists in performing an iterative search for the shift akin to solving the secular equation in trust-region methods. Such a search requires computing the Cholesky factorization of a tentative shifted Hessian at each iteration, which limits the size of problems that can be reasonably considered. We propose a scalable implementation of ARC named ARC\(_q\)K in which we solve a set of shifted systems concurrently by way of an appropriate modification of the Lanczos formulation of the conjugate gradient (CG) method. At each iteration of ARC\(_q\)K to solve a problem with \(n\) variables, a range of \(m \ll n\) shift parameters is selected. The computational overhead in CG beyond the Lanczos process is thirteen scalar operations to update five vectors of length \(m\), and two \(n\)-vector updates for each value of the shift. The CG variant only requires one Hessian-vector product and one dot product per iteration, independently of \(m\). Solves corresponding to inadequate shift parameters are interrupted early. All shifted systems are solved inexactly. Such modest cost makes our implementation scalable and appropriate for large-scale problems. We provide a new analysis of the inexact ARC method including its worst case evaluation complexity, global and asymptotic convergence. We describe our implementation and report numerical experience confirming that our implementation of ARC\(_q\)K outperforms a classic Steihaug-Toint trust-region method and the ARC method of the GALAHAD library. The latter solves the subproblem in nested Krylov subspaces by a Lanczos-based method, which requires the storage of a dense matrix that can be comparable to or larger than the two dense arrays required by our approach if the problem is large or requires many Lanczos iterations. Finally, we generalize our convergence results to inexact Hessians and nonlinear least-squares problems.
      PubDate: 2023-10-31
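The role of the shift parameters can be sketched without any of the Lanczos machinery: the minimizer s of the cubic-regularized model satisfies (H + σ‖s‖I)s = −g, so one can solve a family of shifted systems and keep the shift that best satisfies this secular condition. A toy version of ours with a diagonal 2×2 Hessian (ARC\(_q\)K solves all shifted systems concurrently from a single Lanczos run; here the solves are explicit):

```python
import math

# Cubic-regularized model: min_s g's + 0.5 s'Hs + (sigma/3)*||s||^3.
# Its minimizer satisfies the secular condition lmbda = sigma*||s(lmbda)||
# with s(lmbda) = -(H + lmbda*I)^{-1} g. A diagonal H keeps each solve trivial.
H = [2.0, 1.0]          # diagonal of a 2x2 SPD Hessian
g = [1.0, 1.0]
sigma = 1.0

def solve_shifted(lmbda):
    # (H + lmbda*I) s = -g, componentwise for diagonal H
    return [-gi / (hi + lmbda) for gi, hi in zip(g, H)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Scan a range of shifts and keep the one closest to the secular condition.
residual, lmbda = min(
    (abs(l - sigma * norm(solve_shifted(l))), l)
    for l in (i * 0.01 for i in range(1, 500))
)
s = solve_shifted(lmbda)
```

Because l − σ‖s(l)‖ is strictly increasing in l, the scan has a single crossing; the concurrent-shift idea replaces a sequential secular-equation search with one pass over all candidate shifts.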
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


JournalTOCs © 2009-