Authors: J. Ian Munro; Gonzalo Navarro; Jesper Sindahl Nielsen; Rahul Shah; Sharma V. Thankachan Pages: 379 - 393 Abstract: Let \(\mathcal {D} = \{\mathsf {T}_1,\mathsf {T}_2, \ldots ,\mathsf {T}_D\}\) be a collection of D string documents of n characters in total, drawn from an alphabet set \(\varSigma =[\sigma ]\). The top-k document retrieval problem is to preprocess \(\mathcal{D}\) into a data structure that, given a query \((P[1\ldots p],k)\), can return the k documents of \(\mathcal{D}\) most relevant to the pattern P. The relevance is captured using a predefined ranking function, which depends on the set of occurrences of P in \(\mathsf {T}_d\). For example, it can be the term frequency (i.e., the number of occurrences of P in \(\mathsf {T}_d\)), the term proximity (i.e., the distance between the closest pair of occurrences of P in \(\mathsf {T}_d\)), or a pattern-independent importance score of \(\mathsf {T}_d\) such as PageRank. Linear-space and optimal-query-time solutions already exist for the general top-k document retrieval problem. Compressed and compact space solutions are also known, but only for a few ranking functions such as term frequency and importance. However, space-efficient data structures for term-proximity-based retrieval have been elusive. In this paper we present the first sub-linear space data structure for this relevance function, which uses only o(n) bits on top of any compressed suffix array of \(\mathcal{D}\) and solves queries in \(O((p+k) {{\mathrm{polylog}}}\,\,n)\) time. We also show that scores consisting of a weighted combination of term proximity, term frequency, and document importance can be handled using twice the space required to represent the text collection. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0167-2 Issue No: Vol. 78, No. 2 (2017)
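To make the ranking functions concrete, here is a minimal brute-force sketch of term frequency, term proximity, and top-k retrieval by frequency. It is only an illustration of the definitions; the paper's actual data structure is compressed and built on top of a suffix array, not a linear scan.

```python
def term_frequency(T, P):
    """Number of (possibly overlapping) occurrences of pattern P in document T."""
    return sum(1 for i in range(len(T) - len(P) + 1) if T[i:i + len(P)] == P)

def term_proximity(T, P):
    """Distance between the closest pair of occurrences of P in T,
    or None if P occurs fewer than twice."""
    occ = [i for i in range(len(T) - len(P) + 1) if T[i:i + len(P)] == P]
    if len(occ) < 2:
        return None
    return min(b - a for a, b in zip(occ, occ[1:]))

def top_k(docs, P, k):
    """Brute-force top-k by term frequency (higher frequency = more relevant)."""
    ranked = sorted(range(len(docs)), key=lambda d: -term_frequency(docs[d], P))
    return ranked[:k]
```

For term-proximity ranking one would instead sort by ascending proximity; the point is only that both scores depend on the occurrence positions of P in each document.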

Authors: Paola Bonizzoni; Gianluca Della Vedova; Yuri Pirola; Marco Previtali; Raffaella Rizzi Pages: 394 - 424 Abstract: Some recent results (Bauer et al. in Algorithms in bioinformatics, Springer, Berlin, pp 326–337, 2012; Cox et al. in Algorithms in bioinformatics, Springer, Berlin, pp 214–224, 2012; Rosone and Sciortino in The nature of computation. Logic, algorithms, applications, Springer, Berlin, pp 353–364, 2013) have introduced external-memory algorithms to compute self-indexes of a set of strings, mainly via computing the Burrows–Wheeler transform of the input strings. The motivations for those results stem from Bioinformatics, where a large number of short strings (called reads) are routinely produced and analyzed. In that field, a fundamental problem is to assemble a genome from a large set of much shorter samples extracted from the unknown genome. The approaches currently used to tackle this problem are memory-intensive, which is at odds with the ongoing increase in the availability of genomic data. A data structure used in genome assembly is the string graph, where vertices correspond to samples and arcs connect pairs of overlapping samples. In this paper we address an open problem of Simpson and Durbin (Bioinformatics 26(12):i367–i373, 2010): to design an external-memory algorithm to compute the string graph. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0165-4 Issue No: Vol. 78, No. 2 (2017)
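To illustrate what a string-graph arc represents, here is a naive in-memory sketch that connects two reads when a suffix of one matches a prefix of the other. The quadratic all-pairs scan and the `min_len` threshold are illustrative simplifications; the paper's contribution is doing this in external memory via the BWT.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of read a that is a prefix of read b,
    provided it is at least min_len; otherwise 0."""
    for L in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-L:] == b[:L]:
            return L
    return 0

def string_graph(reads, min_len=3):
    """Arcs (i, j, overlap_length) between pairs of overlapping reads."""
    arcs = []
    for i, a in enumerate(reads):
        for j, b in enumerate(reads):
            if i != j:
                L = overlap(a, b, min_len)
                if L:
                    arcs.append((i, j, L))
    return arcs
```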

Authors:Allan Borodin; Mark Braverman; Brendan Lucier; Joel Oren Pages: 425 - 452 Abstract: Motivated by applications to word-of-mouth advertising, we consider a game-theoretic scenario in which competing advertisers want to target initial adopters in a social network. Each advertiser wishes to maximize the resulting cascade of influence, modeled by a general network diffusion process. However, competition between products may adversely impact the rate of adoption for any given firm. The resulting framework gives rise to complex preferences that depend on the specifics of the stochastic diffusion model and the network topology. We study this model from the perspective of a central mechanism, such as a social networking platform, that can optimize seed placement as a service for the advertisers. We ask: given the reported budgets of the competing firms, how should a mechanism choose seeds to maximize overall efficiency? Beyond the algorithmic problem, competition raises issues of strategic behaviour: rational agents should be incentivized to truthfully report their advertising budget. For a general class of influence spread models, we show that when there are two players, the social welfare can be \(\frac{e}{e-1}\) -approximated by a polynomial-time strategyproof mechanism. Our mechanism uses a dynamic programming procedure to randomize the order in which advertisers are allocated seeds according to a greedy method. For three or more players, we demonstrate that under an additional assumption (satisfied by many existing models of influence spread) there exists a simpler strategyproof \(\frac{e}{e-1}\) -approximation mechanism; notably, this natural greedy mechanism is not necessarily strategyproof when there are only two players. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0169-0 Issue No:Vol. 78, No. 2 (2017)

Authors:Timothy M. Chan; Meng He; J. Ian Munro; Gelin Zhou Pages: 453 - 491 Abstract: In the path minimum problem, we preprocess a tree on n weighted nodes, such that given an arbitrary path, the node with the smallest weight along this path can be located. We design novel succinct indices for this problem under the indexing model, for which weights of nodes are read-only and can be accessed with ranks of nodes in the preorder traversal sequence of the input tree. We present an index within O(m) bits of additional space that supports queries in \(O(\varvec{\alpha }(m, n))\) time and \(O(\varvec{\alpha }(m, n))\) accesses to the weights of nodes, for any integer \(m \ge n\) ; and an index within \(2n + o(n)\) bits of additional space that supports queries in \(O(\varvec{\alpha }(n))\) time and \(O(\varvec{\alpha }(n))\) accesses to the weights of nodes. Here \(\varvec{\alpha }(m, n)\) is the inverse-Ackermann function, and \(\varvec{\alpha }(n) = \varvec{\alpha }(n, n)\) . These indices give us the first succinct data structures for the path minimum problem. Following the same approach, we also develop succinct data structures for semigroup path sum queries, for which a query asks for the sum of weights along a given query path. One of our data structures requires \(n\lg \sigma + 2n + o(n\lg \sigma )\) bits of space and \(O(\varvec{\alpha }(n))\) query time, where \(\sigma \) is the size of the semigroup. In the path reporting problem, queries ask for the nodes along a query path whose weights are within a two-sided query range. Using the succinct indices for path minimum queries, we achieve three different time/space tradeoffs for path reporting by designing an O(n)-word data structure with \(O(\lg ^\epsilon n + occ \cdot \lg ^\epsilon n)\) query time; an \(O(n\lg \lg n)\) -word data structure with \(O(\lg \lg n + occ \cdot \lg \lg n)\) query time; and an \(O(n \lg ^\epsilon n)\) -word data structure with \(O(\lg \lg n + occ)\) query time. 
Here occ is the number of nodes reported and \(\epsilon \) is an arbitrary constant between 0 and 1. These tradeoffs match the state of the art of two-dimensional orthogonal range reporting queries (Chan et al. 2011), which can be treated as a special case of path reporting queries. When the number of distinct weights is much smaller than n, we further improve both the query time and the space cost of these three results. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0170-7 Issue No:Vol. 78, No. 2 (2017)
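For contrast with the succinct inverse-Ackermann-time indices above, the trivial baseline answers a path minimum query by walking both endpoints to the root, in time proportional to the tree depth and with no preprocessing. A minimal sketch, assuming the tree is given as a parent array with `parent[root] == -1`:

```python
def path_min(parent, weight, u, v):
    """Naive path-minimum query: collect the root paths of u and v,
    cut them at the lowest common ancestor, and take the minimum weight."""
    def ancestors(x):
        path = []
        while x != -1:
            path.append(x)
            x = parent[x]
        return path

    au, av = ancestors(u), ancestors(v)
    common = set(au) & set(av)
    on_path = []
    for lst in (au, av):
        for x in lst:
            on_path.append(x)
            if x in common:  # reached the LCA; the rest is shared
                break
    return min(weight[x] for x in on_path)
```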

Authors: Susanne Albers; Matthias Hellwig Pages: 492 - 520 Abstract: Online makespan minimization is a classical problem in which a sequence of jobs \(\sigma = J_1, \ldots , J_n\) has to be scheduled on m identical parallel machines so as to minimize the maximum completion time of any job. In this paper we investigate the problem with an essentially new model of resource augmentation. More specifically, an online algorithm is allowed to build several schedules in parallel while processing \(\sigma \). At the end of the scheduling process the best schedule is selected. This model can be viewed as providing an online algorithm with extra space, which is invested to maintain multiple solutions. The setting is of particular interest in parallel processing environments where each processor can maintain a single or a small set of solutions. As a main result we develop a \((4/3+\varepsilon )\)-competitive algorithm, for any \(0<\varepsilon \le 1\), that uses a constant number of schedules. The constant is equal to \(1/\varepsilon ^{O(\log (1/\varepsilon ))}\). We also give a \((1+\varepsilon )\)-competitive algorithm, for any \(0<\varepsilon \le 1\), that builds \((m/\varepsilon )^{O(\log (1/\varepsilon ) / \varepsilon )}\) schedules; this number is polynomial in m and independent of the input \(\sigma \). The performance guarantees are nearly best possible. We show that any algorithm that achieves a competitiveness smaller than 4/3 must construct \(\Omega (m)\) schedules. Our algorithms make use of novel guessing schemes that (1) predict the optimum makespan of a job sequence \(\sigma \) to within a factor of \(1+\varepsilon \) and (2) guess the job processing times and their frequencies in \(\sigma \). In (2) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant.
We remark that the competitive ratios achieved using parallel schedules are considerably smaller than those in the standard problem without resource augmentation. Furthermore, they are at least as good and in most cases better than the ratios obtained with other means of resource augmentation for makespan minimization. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0172-5 Issue No: Vol. 78, No. 2 (2017)
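The classical single-schedule baseline this is compared against is Graham's greedy list scheduling, which assigns each arriving job to the currently least-loaded machine and is \((2-1/m)\)-competitive. A minimal sketch:

```python
def greedy_makespan(jobs, m):
    """Graham's online list scheduling: place each arriving job on the
    currently least-loaded of the m machines; returns the makespan."""
    loads = [0] * m
    for p in jobs:
        i = loads.index(min(loads))
        loads[i] += p
    return max(loads)
```

For example, on processing times [2, 3, 4, 5] with two machines greedy reaches makespan 8 while the optimum is 7; the parallel-schedules model above hedges against exactly such unlucky commitments by maintaining several schedules at once.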

Authors:Jelena Marašević; Clifford Stein; Gil Zussman Pages: 521 - 557 Abstract: This paper considers max-min fair rate allocation and routing in energy harvesting networks where fairness is required among both the nodes and the time slots. Unlike most previous work on fairness, we focus on multihop topologies and consider different routing methods. We assume a predictable energy profile and focus on the design of efficient and optimal algorithms that can serve as benchmarks for distributed and approximate algorithms. We first develop an algorithm that obtains a max-min fair rate assignment for any routing that is specified at the input. We then turn to the problem of determining a “good” routing. For time-invariable unsplittable routing, we develop an algorithm that finds routes that maximize the minimum rate assigned to any node in any slot. For fractional routing, we derive a combinatorial algorithm for the time-invariable case with constant rates. We show that the time-variable case is at least as hard as the 2-commodity feasible flow problem and design an FPTAS to combat the high running time. Finally, we show that finding an unsplittable routing or a routing tree that provides lexicographically maximum rate assignment (i.e., the best in the max-min fairness terms) is NP-hard, even for a time horizon of a single slot. Our analysis provides insights into the problem structure and can be applied to other related fairness problems. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0171-6 Issue No:Vol. 78, No. 2 (2017)

Authors: Benjamin Doerr; Frank Neumann; Andrew M. Sutton Pages: 561 - 586 Abstract: We contribute to the theoretical understanding of randomized search heuristics by investigating their optimization behavior on satisfiable random k-satisfiability instances both in the planted solution model and the uniform model conditional on satisfiability. Denoting the number of variables by n, our main technical result is that the simple \((1+1)\) evolutionary algorithm with high probability finds a satisfying assignment in time \(O(n \log n)\) when the clause-variable density is at least logarithmic. For low density instances, evolutionary algorithms seem to be less effective, and all we can show is a subexponential upper bound on the runtime for densities below \(\frac{1}{k(k-1)}\). We complement these mathematical results with numerical experiments on a broader density spectrum. They indicate that, indeed, the \((1+1)\) EA is less efficient on lower densities. Our experiments also suggest that the implicit constants hidden in our main runtime guarantee are low. Our main result extends and considerably improves the result obtained by Sutton and Neumann (Lect Notes Comput Sci 8672:942–951, 2014) in terms of runtime, minimum density, and clause length. These improvements are made possible by establishing a close fitness-distance correlation in certain parts of the search space. This approach might be of independent interest and could be useful for other average-case analyses of randomized search heuristics. While the notion of a fitness-distance correlation has been around for a long time, to the best of our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0190-3 Issue No: Vol. 78, No. 2 (2017)

Authors: Christian Gießen; Carsten Witt Pages: 587 - 609 Abstract: The \((1+\lambda )\) EA with mutation probability c/n, where \(c>0\) is an arbitrary constant, is studied for the classical OneMax function. Its expected optimization time is analyzed exactly (up to lower order terms) as a function of c and \(\lambda \). It turns out that 1/n is the only optimal mutation probability if \(\lambda =o(\ln n\ln \ln n/\ln \ln \ln n)\), which is the cut-off point for linear speed-up. However, if \(\lambda \) is above this cut-off point then the standard mutation probability 1/n is no longer the only optimal choice. Instead, the expected number of generations is (up to lower order terms) independent of c, irrespective of whether c is less than or greater than 1. The theoretical results are obtained by a careful study of order statistics of the binomial distribution and variable drift theorems for upper and lower bounds. Experimental supplements shed light on the optimal mutation probability for small problem sizes. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0214-z Issue No: Vol. 78, No. 2 (2017)
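The algorithm under analysis is easy to state in full: in each generation, create \(\lambda \) offspring by flipping each bit of the current search point independently with probability c/n, and keep the best offspring if it is at least as good. A minimal simulation sketch (the parameter choices in the usage are illustrative, not from the paper's experiments):

```python
import random

def onemax(x):
    """Number of one-bits; the fitness to be maximized."""
    return sum(x)

def one_plus_lambda_ea(n, lam, c, rng, max_gens=100000):
    """(1+lambda) EA on OneMax with mutation probability c/n.
    Returns the generation in which the all-ones string is first found."""
    x = [rng.randint(0, 1) for _ in range(n)]
    p = c / n
    for gen in range(1, max_gens + 1):
        # generate lam offspring by standard bit mutation, keep the fittest
        best = max(
            ([bit ^ (rng.random() < p) for bit in x] for _ in range(lam)),
            key=onemax,
        )
        if onemax(best) >= onemax(x):  # elitist (1+lambda) selection
            x = best
        if onemax(x) == n:
            return gen
    return max_gens
```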

Authors: Carola Doerr; Johannes Lengler Pages: 610 - 640 Abstract: Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each one designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, we consider in this work how the combination of several algorithmic restrictions influences the black-box complexity of a problem. Our testbed is the class of so-called OneMax functions, which require minimizing the Hamming distance to an unknown target string \(z \in \{0,1\}^n\). We analyze in particular the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While its isolated (1+1) memory-restricted as well as its ranking-based black-box complexity for bit strings of length n is only of order \(n/\log n\), the combined model does not allow for algorithms faster than linear in n, as can easily be seen by standard information-theoretic considerations. Our main result is a matching upper bound. That is, we show that the (1+1) memory-restricted ranking-based black-box complexity of OneMax is linear. We also analyze its black-box complexity for memory sizes other than (1+1). Moreover, we show that these results apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model [Doerr/Lengler GECCO 2015] that combines the above-mentioned memory restrictions and ranking-based decisions with an enforced usage of truncation selection. Finally, we also provide improved lower bounds for the complexity of OneMax in the models considered. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0168-1 Issue No: Vol. 78, No. 2 (2017)

Authors: Andrei Lissovoi; Carsten Witt Pages: 641 - 659 Abstract: A simple island model with \(\lambda \) islands and migration occurring after every \(\tau \) iterations is studied on the dynamic fitness function Maze. This model is equivalent to a \((1+\lambda )\) EA if \(\tau =1\), i.e., if migration occurs during every iteration. It is proved that even for an increased offspring population size up to \(\lambda =O(n^{1-\epsilon })\), the \((1+\lambda )\) EA is still not able to track the optimum of Maze. If the migration interval is chosen carefully, the algorithm is able to track the optimum even for logarithmic \(\lambda \). The relationship between \(\tau \), \(\lambda \), and the ability of the island model to track the optimum is then investigated more closely. Finally, experiments are performed to supplement the asymptotic results and investigate the impact of the migration topology. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0262-4 Issue No: Vol. 78, No. 2 (2017)

Authors:Duc-Cuong Dang; Thomas Jansen; Per Kristian Lehre Pages: 660 - 680 Abstract: Real-world optimisation problems are often dynamic. Previously good solutions must be updated or replaced due to changes in objectives and constraints. It is often claimed that evolutionary algorithms are particularly suitable for dynamic optimisation because a large population can contain different solutions that may be useful in the future. However, rigorous theoretical demonstrations for how populations in dynamic optimisation can be essential are sparse and restricted to special cases. This paper provides theoretical explanations of how populations can be essential in evolutionary dynamic optimisation in a general and natural setting. We describe a natural class of dynamic optimisation problems where a sufficiently large population is necessary to keep track of moving optima reliably. We establish a relationship between the population-size and the probability that the algorithm loses track of the optimum. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0187-y Issue No:Vol. 78, No. 2 (2017)

Authors:Tiago Paixão; Jorge Pérez Heredia; Dirk Sudholt; Barbora Trubenová Pages: 681 - 713 Abstract: Evolutionary algorithms (EAs) form a popular optimisation paradigm inspired by natural evolution. In recent years the field of evolutionary computation has developed a rigorous analytical theory to analyse the runtimes of EAs on many illustrative problems. Here we apply this theory to a simple model of natural evolution. In the Strong Selection Weak Mutation (SSWM) evolutionary regime the time between occurrences of new mutations is much longer than the time it takes for a mutated genotype to take over the population. In this situation, the population only contains copies of one genotype and evolution can be modelled as a stochastic process evolving one genotype by means of mutation and selection between the resident and the mutated genotype. The probability of accepting the mutated genotype then depends on the change in fitness. We study this process, SSWM, from an algorithmic perspective, quantifying its expected optimisation time for various parameters and investigating differences to a similar evolutionary algorithm, the well-known (1+1) EA. We show that SSWM can have a moderate advantage over the (1+1) EA at crossing fitness valleys and study an example where SSWM outperforms the (1+1) EA by taking advantage of information on the fitness gradient. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0212-1 Issue No:Vol. 78, No. 2 (2017)
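The abstract leaves the acceptance rule implicit. In the SSWM literature the mutant genotype is accepted with the classical fixation probability, which depends on the fitness change \(\Delta f\), a scaling parameter \(\beta \), and the population size N. A sketch of this rule (the default values of N and beta below are illustrative, not the paper's):

```python
import math

def p_accept(delta_f, N=10.0, beta=1.0):
    """SSWM acceptance: classical fixation probability
    (1 - e^{-2*beta*df}) / (1 - e^{-2*N*beta*df}),
    with the limit 1/N when df == 0."""
    if delta_f == 0:
        return 1.0 / N
    num = 1 - math.exp(-2 * beta * delta_f)
    den = 1 - math.exp(-2 * N * beta * delta_f)
    return num / den
```

Note the contrast with the (1+1) EA: fitness-decreasing mutants are accepted with small but positive probability, and among improvements, larger gains are accepted more readily, which is what lets SSWM exploit fitness-gradient information.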

Authors:Dogan Corus; Jun He; Thomas Jansen; Pietro S. Oliveto; Dirk Sudholt; Christine Zarges Pages: 714 - 740 Abstract: Understanding which function classes are easy and which are hard for a given algorithm is a fundamental question for the analysis and design of bio-inspired search heuristics. A natural starting point is to consider the easiest and hardest functions for an algorithm. For the (1+1) EA using standard bit mutation (SBM) it is well known that OneMax is an easiest function with unique optimum while Trap is a hardest. In this paper we extend the analysis of easiest function classes to the contiguous somatic hypermutation (CHM) operator used in artificial immune systems. We define a function MinBlocks and prove that it is an easiest function for the (1+1) EA using CHM, presenting both a runtime and a fixed budget analysis. Since MinBlocks is, up to a factor of 2, a hardest function for standard bit mutations, we consider the effects of combining both operators into a hybrid algorithm. We rigorously prove that by combining the advantages of k operators, several hybrid algorithmic schemes have optimal asymptotic performance on the easiest functions for each individual operator. In particular, the hybrid algorithms using CHM and SBM have optimal asymptotic performance on both OneMax and MinBlocks. We then investigate easiest functions for hybrid schemes and show that an easiest function for a hybrid algorithm is not just a trivial weighted combination of the respective easiest functions for each operator. PubDate: 2017-06-01 DOI: 10.1007/s00453-016-0201-4 Issue No:Vol. 78, No. 2 (2017)
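The two mutation operators can be sketched side by side. The exact form of CHM varies in the literature; the variant below, which flips every bit in a uniformly random contiguous region, is one simple illustrative choice rather than the paper's precise definition:

```python
import random

def sbm(x, rng):
    """Standard bit mutation: flip each bit independently with prob 1/n."""
    n = len(x)
    return [b ^ (rng.random() < 1.0 / n) for b in x]

def chm(x, rng):
    """Contiguous somatic hypermutation (one simple variant): pick a random
    contiguous region and flip every bit inside it."""
    n = len(x)
    i, j = sorted((rng.randrange(n), rng.randrange(n)))
    return x[:i] + [1 - b for b in x[i:j + 1]] + x[j + 1:]
```

The hybrid schemes discussed above apply, in each step, one of the k operators (here k = 2), e.g. chosen uniformly at random, and thereby inherit the asymptotically optimal behavior of each operator on its own easiest function.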

Authors: Thomas Bläsius; Tobias Friedrich; Anton Krohmer Abstract: Most complex real-world networks display scale-free features. This characteristic motivated the study of numerous random graph models with a power-law degree distribution. There is, however, no established and simple model which also has a high clustering of vertices as typically observed in real data. Hyperbolic random graphs bridge this gap. This natural model has recently been introduced by Krioukov et al. (in Phys Rev E 82(3):036106, 2010) and has been shown, theoretically and empirically, to fulfill all typical properties of real-world networks, including power-law degree distribution and high clustering. We study cliques in hyperbolic random graphs G and present new results on the expected number of k-cliques \(\mathbb {E}\left[ K_k\right] \) and the size of the largest clique \(\omega (G)\). We observe that there is a phase transition at power-law exponent \(\beta = 3\). More precisely, for \(\beta \in (2,3)\) we prove \(\mathbb {E}\left[ K_k\right] =n^{k (3-\beta )/2} \varTheta (k)^{-k}\) and \(\omega (G)=\varTheta (n^{(3-\beta )/2})\), while for \(\beta \geqslant 3\) we prove \(\mathbb {E}\left[ K_k\right] =n \, \varTheta (k)^{-k}\) and \(\omega (G)=\varTheta (\log (n)/ \log \log n)\). Furthermore, we show that for \(\beta \geqslant 3\), cliques in hyperbolic random graphs can be computed in time \(\mathcal {O}(n)\). If the underlying geometry is known, cliques can be found with worst-case runtime \(\mathcal {O}(m \cdot n^{2.5})\) for all values of \(\beta \). PubDate: 2017-05-17 DOI: 10.1007/s00453-017-0323-3

Authors: Till Bruckdorfer; Stefan Felsner; Michael Kaufmann Abstract: Bus graphs are used for the visualization of hypergraphs, for example in VLSI design. Formally, they are specified by bipartite graphs \(G=(B \cup V,E)\). The bus vertices B are realized by horizontal and vertical segments, and the connector vertices V are realized by points and connected orthogonally to the bus segments without any bend; this is called a bus realization. Deciding whether a bipartite graph admits a bus realization, where connections may cross, is NP-complete. In this paper we show that, in contrast, whether a planar bipartite graph admits a planar bus realization can be decided in polynomial time. First we deal with plane instances, i.e., with the case where a planar embedding is prescribed. We identify three necessary conditions on the partition \(B=B_{\mathrm {V}}\cup B_{\mathrm {H}}\) of the bus vertices, where \(B_{\mathrm {V}}\) denotes the vertical and \(B_{\mathrm {H}}\) the horizontal buses. We provide a test for whether a good partition, i.e., a partition obeying these conditions, exists. The test is based on the computation of a maximum matching on some auxiliary graph. Given a good partition, we can construct a non-crossing realization of the bus graph on an \(O(n)\times O(n)\) grid in linear time. In the second part we use SPQR-trees to solve the problem for general planar bipartite graphs. PubDate: 2017-05-16 DOI: 10.1007/s00453-017-0321-5

Authors:Marcin Pilipczuk; Michał Pilipczuk; Marcin Wrochna Abstract: In the Edge Bipartization problem one is given an undirected graph G and an integer k, and the question is whether k edges can be deleted from G so that it becomes bipartite. Guo et al. (J Comput Syst Sci 72(8):1386–1396, 2006) proposed an algorithm solving this problem in time \(\mathcal {O}(2^k\cdot {m}^2)\) ; today, this algorithm is a textbook example of an application of the iterative compression technique. Despite extensive progress in the understanding of the parameterized complexity of graph separation problems in the recent years, no significant improvement upon this result has been yet reported. We present an algorithm for Edge Bipartization that works in time \(\mathcal {O}(1.977^k\cdot {nm})\) , which is the first algorithm with the running time dependence on the parameter better than \(2^k\) . To this end, we combine the general iterative compression strategy of Guo et al. (2006), the technique proposed by Wahlström (in: Proceedings of SODA’14, SIAM, 2014) of using a polynomial-time solvable relaxation in the form of a Valued Constraint Satisfaction Problem to guide a bounded-depth branching algorithm, and an involved Measure&Conquer analysis of the recursion tree. PubDate: 2017-05-12 DOI: 10.1007/s00453-017-0319-z
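The problem itself is easy to state operationally: delete at most k edges so that the remainder has no odd cycle. A brute-force baseline that tries every edge subset of size at most k makes the definition concrete; it runs in time roughly \(m^k\), far slower than the \(\mathcal {O}(2^k\cdot m^2)\) iterative-compression algorithm or the paper's \(\mathcal {O}(1.977^k\cdot nm)\) algorithm, and is meant only as an illustration:

```python
from itertools import combinations

def is_bipartite(n, edges):
    """2-color the graph by DFS; True iff there is no odd cycle."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    for s in range(n):
        if color[s] != -1:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def edge_bipartization_naive(n, edges, k):
    """Brute force: is there a set of at most k edges whose deletion
    leaves a bipartite graph?"""
    for r in range(k + 1):
        for drop in combinations(range(len(edges)), r):
            dropped = set(drop)
            kept = [e for i, e in enumerate(edges) if i not in dropped]
            if is_bipartite(n, kept):
                return True
    return False
```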

Authors:Samuel Fiorini; R. Krithika; N. S. Narayanaswamy; Venkatesh Raman Abstract: Given an undirected simple graph G, a set of vertices is an r-clique transversal if it has at least one vertex from every r-clique. Such sets generalize vertex covers as a vertex cover is a 2-clique transversal. Perfect graphs are a well-studied class of graphs on which a minimum weight vertex cover can be obtained in polynomial time. Further, an r-clique transversal in a perfect graph is also a set of vertices whose deletion results in an \((r-1)\) -colorable graph. In this work, we study the problem of finding a minimum weight r-clique transversal in a perfect graph. This problem is known to be \(\mathsf {NP}\) -hard for \(r \ge 3\) and admits a straightforward r-approximation algorithm. We describe two different \(\frac{r+1}{2}\) -approximation algorithms for the problem. Both the algorithms are based on (different) linear programming relaxations. The first algorithm employs the primal–dual method while the second uses rounding based on a threshold value. We also show that the problem is APX-hard and describe hardness results in the context of parameterized algorithms and kernelization. PubDate: 2017-05-08 DOI: 10.1007/s00453-017-0315-3
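The straightforward r-approximation mentioned above works as follows: while the current solution leaves some r-clique uncovered, add all r of its vertices. Since any transversal must contain at least one vertex of each clique picked, the result is within a factor r of optimal. A sketch for the unweighted case (the brute-force clique search makes it suitable only for small graphs):

```python
from itertools import combinations

def find_r_clique(n, adj, r, excluded):
    """Brute-force search for an r-clique avoiding the excluded vertices."""
    for cand in combinations([v for v in range(n) if v not in excluded], r):
        if all(adj[u][v] for u, v in combinations(cand, 2)):
            return cand
    return None

def r_clique_transversal_approx(n, edges, r):
    """Greedy r-approximation: repeatedly take all r vertices of an
    uncovered r-clique until no r-clique remains uncovered."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    sol = set()
    while True:
        clique = find_r_clique(n, adj, r, sol)
        if clique is None:
            return sol
        sol |= set(clique)
```

On \(K_4\) with \(r=3\) this returns a transversal of size 3 although two vertices suffice, illustrating why the paper's \(\frac{r+1}{2}\)-approximations are an improvement.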

Authors:Magnus Bordewich; Charles Semple; Nihan Tokac Abstract: A tree-child network is a phylogenetic network with the property that each non-leaf vertex is the parent of a tree vertex or a leaf. In this paper, we show that a tree-child network on taxa (leaf) set X with an outgroup and a positive real-valued weighting of its edges is essentially determined by the multi-set of all path-length distances between elements in X provided, for each reticulation, the edges directed into it have equal weight. Furthermore, we give a polynomial-time algorithm for reconstructing such a network from this inter-taxa distance information. Such constructions are of central importance in evolutionary biology where phylogenetic networks represent the ancestral history of a collection of present-day taxa. PubDate: 2017-05-08 DOI: 10.1007/s00453-017-0320-6

Authors: Sudeshna Kolay; Daniel Lokshtanov; Fahad Panolan; Saket Saurabh Abstract: Let \({\mathcal {F}}\) be a family of graphs. Given an n-vertex input graph G and a positive integer k, testing whether G has a vertex subset S of size at most k, such that \(G-S\) belongs to \({\mathcal {F}}\), is a prototype vertex deletion problem. These types of problems have attracted a lot of attention in recent times in the domain of parameterized complexity. In this paper, we study two such problems: when \({\mathcal {F}}\) is either the family of forests of cacti or the family of forests of odd cacti. A graph H is called a forest of cacti if every pair of cycles in H intersects in at most one vertex. Furthermore, a forest of cacti H is called a forest of odd cacti if every cycle of H is of odd length. Let us denote by \({\mathcal {C}}\) and \({{\mathcal {C}}}_\mathsf{odd}\) the families of forests of cacti and forests of odd cacti, respectively. The vertex deletion problems corresponding to \({\mathcal {C}}\) and \({{\mathcal {C}}}_\mathsf{odd}\) are called Diamond Hitting Set and Even Cycle Transversal, respectively. In this paper we design randomized algorithms with worst case run time \(12^k n^{\mathcal {O}(1)}\) for both these problems. Our algorithms considerably improve the running time for Diamond Hitting Set and Even Cycle Transversal, compared to the previously known bounds. PubDate: 2017-05-08 DOI: 10.1007/s00453-017-0317-1