Authors: Gábor Bacsó; Daniel Lokshtanov; Dániel Marx; Marcin Pilipczuk; Zsolt Tuza; Erik Jan van Leeuwen Pages: 421–438 Abstract: In algorithmic graph theory, a classic open question is to determine the complexity of the Maximum Independent Set problem on \(P_t\)-free graphs, that is, on graphs not containing any induced path on t vertices. So far, polynomial-time algorithms are known only for \(t\le 5\) (Lokshtanov et al., in: Proceedings of the twenty-fifth annual ACM-SIAM symposium on discrete algorithms, SODA 2014, Portland, OR, USA, January 5–7, 2014, pp 570–581, 2014), and an algorithm for \(t=6\) was announced recently (Grzesik et al. in Polynomial-time algorithm for maximum weight independent set on \({P}_6\)-free graphs. CoRR, arXiv:1707.05491, 2017). Here we study the existence of subexponential-time algorithms for the problem: we show that for any \(t\ge 1\), there is an algorithm for Maximum Independent Set on \(P_t\)-free graphs whose running time is subexponential in the number of vertices. Even the weighted version MWIS is solvable in \(2^{\mathcal {O}(\sqrt{tn \log n})}\) time on \(P_t\)-free graphs. A similar time bound is proved for the approximation of MIS on broom-free graphs. Scattered Set is the generalization of Maximum Independent Set in which the vertices of the solution are required to be at distance at least d from each other. We give a complete characterization of those graphs H for which d-Scattered Set on H-free graphs can be solved in time subexponential in the size of the input (that is, in the number of vertices plus the number of edges): if every component of H is a path, then d-Scattered Set on H-free graphs with n vertices and m edges can be solved in time \(2^{\mathcal {O}(|V(H)| \sqrt{n+m}\log (n+m))}\), even if d is part of the input; otherwise, assuming the Exponential-Time Hypothesis (ETH), there is no \(2^{o(n+m)}\)-time algorithm for d-Scattered Set for any fixed \(d\ge 3\) on H-free graphs with n vertices and m edges. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0479-5 Issue No: Vol. 81, No. 2 (2019)
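As a concrete illustration of the d-Scattered Set requirement above, here is a small verification routine (our sketch, not code from the paper); for \(d=2\) the condition reduces to checking that the chosen vertices form an independent set.

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_d_scattered(adj, vertices, d):
    """Check that every pair of chosen vertices is at distance at least d."""
    vs = list(vertices)
    for i, u in enumerate(vs):
        dist = bfs_dist(adj, u)
        for v in vs[i + 1:]:
            if dist.get(v, float("inf")) < d:
                return False
    return True
```

On the path 0-1-2-3-4, the set {0, 2} is 2-scattered but not 3-scattered.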

Authors: Serge Gaspers; Joachim Gudmundsson; Mitchell Jones; Julián Mestre; Stefan Rümmele Pages: 439–475 Abstract: A widely used class of algorithms for computing tree decompositions of graphs are heuristics that compute an elimination order, i.e., a permutation of the vertex set. In this paper, we propose to turbocharge these heuristics. For a target treewidth k, suppose the heuristic has already computed a partial elimination order of width at most k, but extending it by one more vertex exceeds the target width k. At this moment of regret, we solve a subproblem, which is to recompute the last c positions of the partial elimination order such that it can be extended without exceeding width k. We show that this subproblem is fixed-parameter tractable when parameterized by k and c, but that it is para-NP-hard and W[1]-hard when parameterized by only k or c, respectively. Our experimental evaluation of the FPT algorithm shows that we can trade a reasonable increase in running time for improved solution quality. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0499-1 Issue No: Vol. 81, No. 2 (2019)
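The width of an elimination order referred to above can be computed by simulating the eliminations: removing a vertex turns its remaining neighbourhood into a clique (the fill-in), and the width is the largest such neighbourhood over all steps. A standard routine, sketched here by us:

```python
def elimination_width(adj, order):
    """Width of an elimination order: when a vertex is eliminated, its
    remaining neighbors are joined into a clique; the width is the
    largest neighborhood size seen at any elimination step."""
    # work on a mutable copy of the adjacency sets
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    width = 0
    for v in order:
        nbrs = g[v]
        width = max(width, len(nbrs))
        # fill-in: the neighbors of v become pairwise adjacent
        for u in nbrs:
            g[u] |= nbrs - {u}
            g[u].discard(v)
        del g[v]
    return width
```

The treewidth of a graph is the minimum width over all elimination orders; the heuristics in the abstract search this space of permutations.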

Authors: Arne Meier; Sebastian Ordyniak; M. S. Ramanujan; Irena Schindler Pages: 476–496 Abstract: In the present paper, we introduce the backdoor set approach into the field of temporal logic for the global fragment of linear temporal logic. We study the parameterized complexity of the satisfiability problem parameterized by the size of the backdoor. We distinguish between backdoor detection and the evaluation of backdoors into the fragments of Horn and Krom formulas, and classify the operator fragments built from the globally operators for past, future, and always, as well as their combinations. Detection is shown to be fixed-parameter tractable, whereas the complexity of evaluation behaves differently. We show that for Krom formulas the problem is para-NP-complete. For Horn formulas, the complexity is shown to be either fixed-parameter tractable or para-NP-complete, depending on the considered operator fragment. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0515-5 Issue No: Vol. 81, No. 2 (2019)

Authors: Michał Włodarczyk Pages: 497–518 Abstract: We introduce the non-commutative subset convolution, a convolution of functions that is useful when working with determinant-based algorithms. In order to compute it efficiently, we take advantage of Clifford algebras, a generalization of quaternions used mainly in quantum field theory. We apply this tool to speed up algorithms for counting subgraphs parameterized by the treewidth of a graph. We present an \(O^*((2^\omega + 1)^{tw})\)-time algorithm for counting Steiner trees and an \(O^*((2^\omega + 2)^{tw})\)-time algorithm for counting Hamiltonian cycles, both of which improve the previously known upper bounds. These are also the best known running times of deterministic algorithms for the decision versions of these problems, and they match the best running times obtained for the pathwidth parameterization under the assumption \(\omega = 2\). PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0489-3 Issue No: Vol. 81, No. 2 (2019)

Authors: Kitty Meeks Pages: 519–540 Abstract: Many combinatorial problems involve determining whether a universe of n elements contains a witness consisting of k elements which have some specified property. In this paper we investigate the relationship between the decision and enumeration versions of such problems: efficient methods are known for transforming a decision algorithm into a search procedure that finds a single witness, but even finding a second witness is not so straightforward in general. We show that, if the decision version of the problem can be solved in time \(f(k) \cdot poly(n)\), there is a randomised algorithm which enumerates all witnesses in time \(e^{k + o(k)} \cdot f(k) \cdot poly(n) \cdot N\), where N is the total number of witnesses. If the decision version of the problem is solved by a randomised algorithm which may return false negatives, then the same method allows us to output a list of witnesses in which any given witness will be included with high probability. The enumeration algorithm also gives rise to an efficient algorithm to count the total number of witnesses when this number is small. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0404-y Issue No: Vol. 81, No. 2 (2019)
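The reduction from deciding to enumerating can be illustrated with a naive deterministic self-reduction (our sketch; the paper's randomised algorithm avoids the overhead this simple version incurs): branch on whether an element belongs to the witness, and prune any branch in which a decision query reports that no witness survives.

```python
from itertools import combinations

def enumerate_witnesses(universe, k, is_witness):
    """All k-subsets of `universe` satisfying is_witness, found by
    branching on one element at a time and pruning branches in which a
    decision oracle reports that no witness survives.  The oracle is
    simulated here by brute force; the point is that each branch issues
    only decision queries."""
    def exists(allowed, required):
        # decision oracle: is there a witness W with required ⊆ W ⊆ required ∪ allowed?
        need = k - len(required)
        if need < 0 or need > len(allowed):
            return False
        return any(is_witness(required | set(c))
                   for c in combinations(allowed, need))

    def branch(allowed, required):
        if len(required) == k:
            yield frozenset(required)
            return
        if not allowed:
            return
        x, rest = allowed[0], allowed[1:]
        if exists(rest, required | {x}):   # witnesses containing x
            yield from branch(rest, required | {x})
        if exists(rest, required):         # witnesses avoiding x
            yield from branch(rest, required)

    yield from branch(list(universe), set())
```

For example, the 2-subsets of {1, ..., 5} summing to 6 are exactly {1, 5} and {2, 4}.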

Authors: Cornelius Brand; Holger Dell; Marc Roth Pages: 541–556 Abstract: Jaeger et al. (Math Proc Camb Philos Soc 108(1):35–53, 1990) proved a dichotomy for the complexity of evaluating the Tutte polynomial at fixed points: the evaluation is #P-hard almost everywhere, and the remaining points admit polynomial-time algorithms. Dell, Husfeldt, and Wahlén (in: ICALP 2010, vol. 6198, pp. 426–437, Springer, Berlin, Heidelberg, 2010) and Husfeldt and Taslaman (in: IPEC 2010, vol. 6478, pp. 192–203, Springer, Berlin, Heidelberg, 2010), in combination with the results of Curticapean (in: ICALP 2015, pp. 380–392, Springer, 2015), extended the #P-hardness results to tight lower bounds under the counting exponential time hypothesis #ETH, with the exception of the line \(y=1\), which was left open. We complete the dichotomy theorem for the Tutte polynomial under #ETH by proving that the number of all acyclic subgraphs of a given n-vertex graph cannot be determined in time \(\exp (o(n))\) unless #ETH fails. Another dichotomy theorem we strengthen is the one of Creignou and Hermann (Inf Comput 125(1):1–12, 1996) for counting the number of satisfying assignments to a constraint satisfaction problem instance over the Boolean domain. We prove that the #P-hard cases cannot be solved in time \(\exp (o(n))\) unless #ETH fails. The main ingredient is to prove that the number of independent sets in bipartite graphs with n vertices cannot be computed in time \(\exp (o(n))\) unless #ETH fails. In order to prove our results, we use the block interpolation idea by Curticapean and transfer it to systems of linear equations that might not directly correspond to interpolation. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0472-z Issue No: Vol. 81, No. 2 (2019)

Authors: Archontia C. Giannopoulou; Michał Pilipczuk; Jean-Florent Raymond; Dimitrios M. Thilikos; Marcin Wrochna Pages: 557–588 Abstract: Cutwidth is one of the classic layout parameters for graphs. It measures how well one can order the vertices of a graph in a linear manner, so that the maximum number of edges between any prefix and its complement suffix is minimized. As graphs of cutwidth at most k are closed under taking immersions, the results of Robertson and Seymour imply that there is a finite list of minimal immersion obstructions for admitting a cut layout of width at most k. We prove that every minimal immersion obstruction for cutwidth at most k has size at most \(2^{{O}(k^3\log k)}\). As an interesting algorithmic byproduct, we design a new fixed-parameter algorithm for computing the cutwidth of a graph that runs in time \(2^{{O}(k^2\log k)}\cdot n\), where k is the optimum width and n is the number of vertices. While being slower by a \(\log k\)-factor in the exponent than the fastest known algorithm, given by Thilikos et al. (J Algorithms 56(1):1–24, 2005; J Algorithms 56(1):25–49, 2005), our algorithm has the advantage of being simpler and self-contained; arguably, it explains better the combinatorics of optimum-width layouts. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0424-7 Issue No: Vol. 81, No. 2 (2019)
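For a fixed linear layout, the cutwidth definition above translates directly into code (our sketch): the cutwidth of the graph is the minimum of this quantity over all vertex orders.

```python
def cutwidth_of_order(edges, order):
    """Cutwidth of a fixed linear layout: the maximum, over all prefixes
    of the order, of the number of edges with exactly one endpoint in
    the prefix."""
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for i in range(len(order) - 1):
        # count edges crossing the cut between positions i and i+1
        cut = sum(1 for u, v in edges
                  if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
        width = max(width, cut)
    return width
```

For instance, a path laid out in its natural order has cutwidth 1, while a star with its center placed second in the order yields width 2.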

Authors: Benjamin Doerr; Christian Gießen; Carsten Witt; Jing Yang Pages: 593–631 Abstract: We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms in discrete search spaces. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in the subpopulation which contains the best offspring. We analyze how the \((1+\lambda )\) evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the \((1+\lambda )\) EA finds the optimum in an expected optimization time (number of fitness evaluations) of \(O(n\lambda /\log \lambda +n\log n)\). This time is asymptotically smaller than the optimization time of the classic \((1+\lambda )\) EA. Previous work shows that this performance is best-possible among all \(\lambda \)-parallel mutation-based unbiased black-box algorithms. This result shows that the new way of adjusting the mutation rate can find optimal dynamic parameter values on the fly. Since our adjustment mechanism is simpler than the ones previously used for adjusting the mutation rate and does not have parameters itself, we are optimistic that it will find other applications. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0502-x Issue No: Vol. 81, No. 2 (2019)
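The two-rate adjustment scheme described above is easy to state in code. The following simplified sketch is ours; the paper's algorithm additionally randomizes the rate update, and the caps 2 and n/4 on the rate parameter are assumptions here.

```python
import random

def one_max(x):
    return sum(x)

def self_adjusting_ea(n, lam, seed=0):
    """(1+lambda) EA with the two-rate self-adjusting mutation scheme:
    half the offspring mutate with rate 2r/n, half with rate r/(2n),
    and r is updated to the rate of the subpopulation that produced the
    best offspring.  Returns the number of fitness evaluations used on
    OneMax."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, r, evals = one_max(x), 2.0, 0
    while fx < n:
        best, best_f, best_r = None, -1, r
        for i in range(lam):
            sub_r = 2 * r if i < lam // 2 else r / 2
            rate = sub_r / n
            y = [b ^ (rng.random() < rate) for b in x]
            fy = one_max(y)
            evals += 1
            if fy > best_f:
                best, best_f, best_r = y, fy, sub_r
        if best_f >= fx:          # elitist acceptance
            x, fx = best, best_f
        r = min(max(best_r, 2.0), n / 4)   # assumed caps on r
    return evals
```

The update needs no parameters of its own, which is the simplicity the authors highlight.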

Authors: Carsten Witt Pages: 632–667 Abstract: The Univariate Marginal Distribution Algorithm (UMDA) is a randomized search heuristic that builds a stochastic model of the underlying optimization problem by repeatedly sampling \(\lambda \) solutions and adjusting the model according to the best \(\mu \) samples. We present a running time analysis of the UMDA on the classical OneMax benchmark function for wide ranges of the parameters \(\mu \) and \(\lambda \). If \(\mu \ge c\log n\) for some constant \(c>0\) and \(\lambda =(1+\varTheta (1))\mu \), we obtain a general bound \(O(\mu n)\) on the expected running time. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval \([1/n,1-1/n]\). If \(\mu \ge c' \sqrt{n}\log n\) for a constant \(c'>0\) and \(\lambda =(1+\varTheta (1))\mu \), the behavior of the algorithm changes and the bound on the expected running time becomes \(O(\mu \sqrt{n})\), which typically holds even if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound \(\varOmega (\mu \sqrt{n}+n\log n)\) by Krejca and Witt (Proceedings of FOGA 2017, ACM Press, New York, pp 65–79, 2017) and turn out to be tight for the two very different choices \(\mu =c\log n\) and \(\mu =c'\sqrt{n}\log n\). They also improve the previously best known upper bound \(O(n\log n\log \log n)\) by Dang and Lehre (Proceedings of GECCO ’15, ACM Press, New York, pp 513–518, 2015) that was established for \(\mu =c\log n\) and \(\lambda =(1+\varTheta (1))\mu \). PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0463-0 Issue No: Vol. 81, No. 2 (2019)
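The UMDA with marginals confined to \([1/n,1-1/n]\), as in the first bound above, can be sketched as follows (our illustration; the parameter choices in the test are arbitrary):

```python
import random

def umda_onemax(n, mu, lam, seed=0, max_gens=5000):
    """Univariate Marginal Distribution Algorithm on OneMax: sample lam
    bit strings from a product distribution, select the best mu, and set
    each marginal to the frequency of ones among the selected, clamped
    to [1/n, 1-1/n] as in the abstract.  Returns the generation in which
    the optimum was first sampled, or None on timeout."""
    rng = random.Random(seed)
    p = [0.5] * n
    for gen in range(1, max_gens + 1):
        pop = [[int(rng.random() < p[i]) for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=sum, reverse=True)
        if sum(pop[0]) == n:
            return gen
        sel = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in sel) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)   # marginal borders
    return None
```

The borders prevent any marginal from freezing at 0 or 1 and thus keep every bit string reachable, which is the assumption the \(O(\mu n)\) bound relies on.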

Authors: Duc-Cuong Dang; Per Kristian Lehre; Phan Trung Hai Nguyen Pages: 668–702 Abstract: Estimation of Distribution Algorithms (EDAs) are stochastic heuristics that search for optimal solutions by learning and sampling from probabilistic models. Despite their popularity in real-world applications, there is little rigorous understanding of their performance. Even for the Univariate Marginal Distribution Algorithm (UMDA), a simple population-based EDA assuming independence between decision variables, the optimisation time on the linear problem OneMax was until recently undetermined. The incomplete theoretical understanding of EDAs is mainly due to the lack of appropriate analytical tools. We show that the recently developed level-based theorem for non-elitist populations, combined with anti-concentration results, yields upper bounds on the expected optimisation time of the UMDA. This approach results in the bound \(\mathcal {O}\left( n\lambda \log \lambda +n^2\right) \) on the LeadingOnes and BinVal problems for population sizes \(\lambda >\mu =\varOmega (\log n)\), where \(\mu \) and \(\lambda \) are parameters of the algorithm. We also prove that the UMDA with population sizes \(\mu \in \mathcal {O}\left( \sqrt{n}\right) \cap \varOmega (\log n)\) optimises OneMax in expected time \(\mathcal {O}\left( \lambda n\right) \), and for larger population sizes \(\mu =\varOmega (\sqrt{n}\log n)\), in expected time \(\mathcal {O}\left( \lambda \sqrt{n}\right) \). The facility and generality of our arguments suggest that this is a promising approach to derive bounds on the expected optimisation time of EDAs. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0507-5 Issue No: Vol. 81, No. 2 (2019)

Authors: Benjamin Doerr; Carola Doerr; Timo Kötzing Pages: 703–748 Abstract: Following up on previous work of Cathabard et al. (in: Proceedings of foundations of genetic algorithms (FOGA’11), ACM, 2011), we analyze variants of the (1 + 1) evolutionary algorithm (EA) for problems with unknown solution length. For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield, for both benchmark functions OneMax and LeadingOnes, an expected optimization time that is of the same order as that of the (1 + 1) EA knowing the solution length. More than this, we show that almost the same run times can be achieved even if no a priori information on the solution length is available. We also regard the situation in which neither the number nor the positions of the bits with an influence on the fitness function are known. Solving an open problem from Cathabard et al., we show that, for arbitrary \(s\in {\mathbb {N}}\), such OneMax and LeadingOnes instances can be solved, simultaneously for all \(n\in {\mathbb {N}}\), in expected time \(O(n (\log (n))^2 \log \log (n) \ldots \log ^{(s-1)}(n) (\log ^{(s)}(n))^{1+\varepsilon })\) and \(O(n^2 \log (n) \log \log (n) \ldots \log ^{(s-1)}(n) (\log ^{(s)}(n))^{1+\varepsilon })\), respectively; that is, in almost the same time as if n and the relevant bit positions were known. For the LeadingOnes case, we prove lower bounds of the same asymptotic order of magnitude apart from the \((\log ^{(s)}(n))^{\varepsilon }\) factor. Aiming at closing this arbitrarily small remaining gap, we realize that there is no asymptotically best performance for this problem: for any algorithm solving, for all n, all instances of size n in expected time at most T(n), there is an algorithm doing the same in time \(T'(n)\) with \(T'=o(T)\). For OneMax we show results of a similar flavor. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0477-7 Issue No: Vol. 81, No. 2 (2019)

Authors: Chao Qian; Chao Bian; Wu Jiang; Ke Tang Pages: 749–795 Abstract: In many real-world optimization problems, the objective function evaluation is subject to noise, and we cannot obtain the exact objective value. Evolutionary algorithms (EAs), a type of general-purpose randomized optimization algorithm, have been shown to be able to solve noisy optimization problems well. However, previous theoretical analyses of EAs mainly focused on noise-free optimization, which makes the theoretical understanding largely insufficient for the noisy case. Meanwhile, the few existing theoretical studies under noise often considered the one-bit noise model, which flips a randomly chosen bit of a solution before evaluation, while in many realistic applications several bits of a solution can be changed simultaneously. In this paper, we study a natural extension of one-bit noise, the bit-wise noise model, which independently flips each bit of a solution with some probability. We analyze the running time of the \((1+1)\)-EA solving OneMax and LeadingOnes under bit-wise noise for the first time, and derive the ranges of the noise level for polynomial and super-polynomial running time bounds. The analysis on LeadingOnes under bit-wise noise can be easily transferred to one-bit noise, and improves the previously known results. Since our analysis discloses that the \((1+1)\)-EA can be efficient only under low noise levels, we also study whether the sampling strategy can bring robustness to noise. We prove that using sampling can significantly increase the largest noise level allowing a polynomial running time, that is, sampling is robust to noise. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0488-4 Issue No: Vol. 81, No. 2 (2019)
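The bit-wise noise model and the sampling strategy studied above can both be stated in a few lines (our illustration):

```python
import random

def noisy_onemax(x, q, rng):
    """Bit-wise noise model from the abstract: each bit of x is flipped
    independently with probability q before the (exact) evaluation."""
    return sum(b ^ (rng.random() < q) for b in x)

def sampled_eval(x, q, m, rng):
    """Sampling strategy: evaluate m independent noisy copies and use
    the mean, which concentrates around the expected noisy value as m
    grows."""
    return sum(noisy_onemax(x, q, rng) for _ in range(m)) / m
```

With q = 0 the model degenerates to exact OneMax; one-bit noise instead flips exactly one randomly chosen bit.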

Authors: Tomáš Gavenčiak; Barbara Geissmann; Johannes Lengler Pages: 796–827 Abstract: We study sorting of permutations by random swaps if each comparison gives the wrong result with some fixed probability \(p<1/2\). We use this process as a prototype for the behaviour of randomized, comparison-based optimization heuristics in the presence of noisy comparisons. As quality measure, we compute the expected fitness of the stationary distribution. To measure the runtime, we compute the minimal number of steps after which the average fitness approximates the expected fitness of the stationary distribution. We study the process where in each round a random pair of elements at distance at most r is compared. We give theoretical results for the extreme cases \(r=1\) and \(r=n\), and experimental results for the intermediate cases. We find a trade-off between faster convergence (for large r) and better quality of the solution after convergence (for small r). PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0429-2 Issue No: Vol. 81, No. 2 (2019)
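The process studied above can be sketched as follows (our illustration; measuring fitness as the number of correctly ordered pairs is one natural choice, not necessarily the paper's):

```python
import random

def fitness(perm):
    """Number of correctly ordered pairs (maximal for the identity)."""
    return sum(perm[i] < perm[j]
               for i in range(len(perm)) for j in range(i + 1, len(perm)))

def noisy_swap_sort(perm, r, p, steps, seed=0):
    """Sorting by random swaps with noisy comparisons: repeatedly pick a
    random pair at distance at most r, compare (each comparison is wrong
    with probability p < 1/2), and swap when the comparison reports the
    pair out of order."""
    rng = random.Random(seed)
    perm = list(perm)
    n = len(perm)
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.randrange(max(0, i - r), min(n, i + r + 1))
        if i == j:
            continue
        i, j = min(i, j), max(i, j)
        # noisy comparison: the true outcome, flipped with probability p
        reported = (perm[i] > perm[j]) != (rng.random() < p)
        if reported:
            perm[i], perm[j] = perm[j], perm[i]
    return perm
```

With p = 0 every swap removes an inversion, so the process sorts; with p > 0 it only approaches a noisy stationary distribution, which is the regime the paper analyses.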

Authors: Feng Shi; Martin Schirneck; Tobias Friedrich; Timo Kötzing; Frank Neumann Pages: 828–857 Abstract: Rigorous runtime analysis is a major approach towards understanding evolutionary computing techniques, and in this area linear pseudo-Boolean objective functions play a central role. Having an additional linear constraint is then equivalent to the NP-hard Knapsack problem; certain classes thereof have been studied in recent works. In this article, we present a dynamic model of optimizing linear functions under uniform constraints. Starting from an optimal solution with respect to a given constraint bound, we investigate the runtimes that different evolutionary algorithms need to recompute an optimal solution when the constraint bound changes by a certain amount. The classical \((1{+}1)\) EA and several population-based algorithms are designed for that purpose, and are shown to recompute an optimal solution efficiently. Furthermore, a variant of the \((1{+}(\lambda ,\lambda ) )\) GA for the dynamic optimization problem is studied, whose performance is better when the change of the constraint bound is small. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0451-4 Issue No: Vol. 81, No. 2 (2019)
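Under a uniform constraint, a linear function is optimized by simply selecting the bound many largest weights, which is what the recomputation above exploits. The sketch below is ours and is not the paper's evolutionary algorithms; it only mirrors the local add/drop moves those algorithms must perform when the bound changes.

```python
def optimal_under_uniform_constraint(weights, bound):
    """Optimum of a linear function under the uniform constraint
    |x|_1 <= bound: select the `bound` largest positive weights."""
    chosen = sorted(range(len(weights)),
                    key=lambda i: weights[i], reverse=True)[:bound]
    return {i for i in chosen if weights[i] > 0}

def recompute(weights, solution, new_bound):
    """Recompute after the constraint bound changes: greedily add the
    heaviest unused items or drop the lightest chosen ones."""
    sol = set(solution)
    while len(sol) < new_bound:
        rest = [i for i in range(len(weights))
                if i not in sol and weights[i] > 0]
        if not rest:
            break
        sol.add(max(rest, key=lambda i: weights[i]))
    while len(sol) > new_bound:
        sol.remove(min(sol, key=lambda i: weights[i]))
    return sol
```

The number of moves needed is exactly the change in the bound, which is why small changes favour the \((1{+}(\lambda ,\lambda ))\) GA variant.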

Authors: Samadhi Nallaperuma; Pietro S. Oliveto; Jorge Pérez Heredia; Dirk Sudholt Pages: 858–885 Abstract: We investigate popular trajectory-based algorithms inspired by biology and physics to answer a question of general significance: when is it beneficial to reject improvements? A distinguishing factor of SSWM (strong selection weak mutation), a popular model from population genetics, compared to the Metropolis algorithm (MA), is that the former can reject improvements, while the latter always accepts them. We investigate when one strategy outperforms the other. Since we prove that both algorithms converge to the same stationary distribution, we concentrate on identifying a class of functions inducing large mixing times, where the algorithms will outperform each other over a long period of time. The outcome of the analysis is the definition of a function where SSWM is efficient, while Metropolis requires at least exponential time. The identified function favours algorithms that prefer high-quality improvements over smaller ones, revealing similarities in the optimisation strategies of SSWM and Metropolis with best-improvement (BILS) and first-improvement (FILS) local search, respectively. We conclude the paper with a comparison of the performance of these algorithms and a \((1,\lambda )\) RLS on the identified function. The latter algorithm favours the steepest gradient with a probability that increases with the size of its offspring population. The results confirm that BILS excels and that the \((1,\lambda )\) RLS is efficient only for large enough population sizes. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0462-1 Issue No: Vol. 81, No. 2 (2019)
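The distinguishing factor named above, that SSWM may reject improvements while Metropolis never does, is visible directly in the acceptance probabilities. A sketch under common parameterizations (with \(\beta \) the selection strength and N the population size; exact conventions vary across papers):

```python
import math

def metropolis_accept(delta, temperature):
    """Metropolis: improvements are always accepted; worsenings are
    accepted with probability exp(delta / temperature)."""
    return 1.0 if delta >= 0 else math.exp(delta / temperature)

def sswm_accept(delta, beta, N):
    """Strong selection weak mutation: fixation probability of a mutant
    with fitness difference delta in a population of size N.  Unlike
    Metropolis, even strict improvements are accepted with probability
    strictly less than 1."""
    if delta == 0:
        return 1.0 / N
    return ((1 - math.exp(-2 * beta * delta))
            / (1 - math.exp(-2 * N * beta * delta)))
```

Since the SSWM acceptance probability grows with delta, larger improvements are accepted more often, which is the BILS-like behaviour the abstract points to.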

Authors: Benjamin Doerr; Philipp Fischbeck; Clemens Frahnow; Tobias Friedrich; Timo Kötzing; Martin Schirneck Pages: 886–915 Abstract: Island models in evolutionary computation solve problems by a careful interplay of independently running evolutionary algorithms on the islands and an exchange of good solutions between the islands. In this work, we conduct rigorous run time analyses for such island models, trying to simultaneously obtain good run times and low communication effort. We improve the existing upper bounds for both measures (i) by improving the run time bounds via a careful analysis, (ii) by balancing individual computation and communication in a more appropriate manner, and (iii) by replacing the usual communicate-with-all approach with randomized rumor spreading. In the latter, each island contacts a randomly chosen neighbor. This epidemic communication paradigm is known to lead to very fast and robust information dissemination in many applications. Our results concern island models running simple \((1+1)\) evolutionary algorithms to optimize the classic test functions OneMax and LeadingOnes. We investigate binary trees, d-dimensional tori, and complete graphs as communication topologies. PubDate: 2019-02-01 DOI: 10.1007/s00453-018-0445-2 Issue No: Vol. 81, No. 2 (2019)
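The push variant of randomized rumor spreading mentioned above can be sketched in a few lines (our illustration): in every round, each informed island forwards the rumor to a single uniformly random neighbour instead of contacting all of them.

```python
import random

def push_rounds(adj, start, rng):
    """Randomized rumor spreading (push protocol): in every round each
    informed node sends the rumor to one uniformly random neighbour.
    Returns the number of rounds until all nodes are informed."""
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        for u in list(informed):
            informed.add(rng.choice(adj[u]))
    return rounds
```

On a complete graph the number of informed nodes at most doubles per round, so at least \(\log _2 n\) rounds are needed, while the total number of messages stays linear in the number of informed nodes per round.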

Authors: Hugo Gilbert; Olivier Spanjaard Abstract: This paper deals with fairness in stable marriage problems. The idea studied here is to achieve fairness thanks to a Generalized Gini Index (GGI), a well-known criterion in inequality measurement that includes both the egalitarian and utilitarian criteria as special cases. We show that determining a stable marriage optimizing a GGI criterion of agents’ disutilities is an NP-hard problem. We then provide a polynomial-time 2-approximation algorithm in the general case, as well as an exact algorithm which runs in polynomial time when the number of non-zero weights parametrizing the GGI criterion is constant. PubDate: 2019-02-08 DOI: 10.1007/s00453-019-00550-3
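A GGI criterion can be illustrated as follows (our sketch): disutilities are sorted in non-increasing order and weighted by a non-increasing weight vector, so the weights \((1,0,\dots ,0)\) give the egalitarian criterion (the worst-off agent's disutility) and equal weights give the utilitarian one, matching the special cases named in the abstract.

```python
def ggi(disutilities, weights):
    """Generalized Gini Index of a disutility vector: sort disutilities
    in non-increasing order and take the weighted sum with a
    non-increasing weight vector, so larger disutilities receive larger
    weights."""
    ranked = sorted(disutilities, reverse=True)
    return sum(w * d for w, d in zip(weights, ranked))
```

Minimizing this quantity over stable marriages is the NP-hard problem the paper studies; the exact algorithm applies when only a constant number of the weights are non-zero.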

Authors: R. Ganian; N. S. Narayanaswamy; S. Ordyniak; C. S. Rahul; M. S. Ramanujan Abstract: Let G be an undirected simple graph having n vertices and let \(f:V(G)\rightarrow \{0,\dots , n-1\}\) be a function. An f-factor of G is a spanning subgraph H such that \(d_H(v)=f(v)\) for every vertex \(v\in V(G)\). The subgraph H is called a connected f-factor if, in addition, H is connected. A classical result of Tutte (Can J Math 6(1954):347–352, 1954) is the polynomial-time algorithm to check whether a given graph has a specified f-factor. However, checking for the presence of a connected f-factor is easily seen to generalize Hamiltonian Cycle and hence is \(\mathsf {NP}\)-complete. In fact, the Connected f-Factor problem remains \(\mathsf {NP}\)-complete even when we restrict f(v) to be at least \(n^{\epsilon }\) for each vertex v and constant \(0\le \epsilon <1\); on the other side of the spectrum of nontrivial lower bounds on f, the problem is known to be polynomial-time solvable when f(v) is at least \(\frac{n}{3}\) for every vertex v. In this paper, we extend this line of work and obtain new complexity results based on restrictions on the function f. In particular, we show that when f(v) is restricted to be at least \(\frac{n}{(\log n)^c}\), the problem can be solved in quasi-polynomial time in general and in randomized polynomial time if \(c\le 1\). Furthermore, we show that when \(c>1\), the problem is \(\mathsf {NP}\)-intermediate. PubDate: 2019-01-30 DOI: 10.1007/s00453-019-00546-z