Authors:Yasushi Kawase; Atsushi Miyauchi Pages: 3461 - 3480 Abstract: In the densest subgraph problem, given an edge-weighted undirected graph \(G=(V,E,w)\) , we are asked to find \(S\subseteq V\) that maximizes the density, i.e., \(w(S)/|S|\) , where w(S) is the sum of weights of the edges in the subgraph induced by S. This problem has often been employed in a wide variety of graph mining applications. However, the problem has a drawback: it may happen that the obtained subset is too large or too small in comparison with the size desired in the application at hand. In this study, we address the size issue of the densest subgraph problem by generalizing the density of \(S\subseteq V\) . Specifically, we introduce the f-density of \(S\subseteq V\) , which is defined as \(w(S)/f(|S|)\) , where \(f:\mathbb {Z}_{\ge 0}\rightarrow \mathbb {R}_{\ge 0}\) is a monotonically non-decreasing function. In the f-densest subgraph problem (f-DS), we aim to find \(S\subseteq V\) that maximizes the f-density \(w(S)/f(|S|)\) . Although f-DS does not explicitly specify the size of the output subset of vertices, we can handle the above size issue by choosing a convex/concave size function f appropriately. For f-DS with convex function f, we propose a nearly-linear-time algorithm with a provable approximation guarantee. On the other hand, for f-DS with concave function f, we propose an LP-based exact algorithm, a flow-based \(O(|V|^3)\) -time exact algorithm for unweighted graphs, and a nearly-linear-time approximation algorithm. PubDate: 2018-12-01 DOI: 10.1007/s00453-017-0400-7 Issue No:Vol. 80, No. 12 (2018)
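The f-density itself is straightforward to evaluate. A minimal sketch, assuming the graph is given as a list of (u, v, weight) triples; all names here are illustrative, not from the paper:

```python
import math

def f_density(edges, S, f):
    """Evaluate w(S) / f(|S|) for a vertex subset S."""
    S = set(S)
    # w(S): total weight of edges with both endpoints inside S
    w = sum(wt for u, v, wt in edges if u in S and v in S)
    return w / f(len(S))

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 1.0)]
triangle = {0, 1, 2}
# f(x) = x recovers the classical density: 3 edges / 3 vertices
assert f_density(edges, triangle, lambda x: x) == 1.0
# a concave f such as sqrt penalizes size less steeply, favoring larger subsets
assert abs(f_density(edges, triangle, math.sqrt) - 3 / math.sqrt(3)) < 1e-9
```

Choosing f convex biases the maximizer toward smaller subsets, and f concave toward larger ones, which is exactly how f-DS addresses the size issue.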

Authors:Pål Grønås Drange; Michał Pilipczuk Pages: 3481 - 3524 Abstract: We give a kernel with \(O(k^7)\) vertices for Trivially Perfect Editing, the problem of adding or removing at most k edges in order to make a given graph trivially perfect. This answers in the affirmative an open question posed by Nastos and Gao (Soc Netw 35(3):439–450, 2013), and by Liu et al. (Tsinghua Sci Technol 19(4):346–357, 2014). Our general technique also implies the existence of kernels of the same size for the related Trivially Perfect Completion and Trivially Perfect Deletion problems. Whereas for the former an \(O(k^3)\) kernel was given by Guo (in: ISAAC 2007, LNCS, vol 4835, Springer, pp 915–926, 2007), for the latter no polynomial kernel was known. We complement our study of Trivially Perfect Editing by proving that, contrary to Trivially Perfect Completion, it cannot be solved in time \(2^{o(k)}\cdot n^{O(1)}\) unless the exponential time hypothesis fails. In this manner we complete the picture of the parameterized and kernelization complexity of the classic edge modification problems for the class of trivially perfect graphs. PubDate: 2018-12-01 DOI: 10.1007/s00453-017-0401-6 Issue No:Vol. 80, No. 12 (2018)

Authors:Carlo Comin; Romeo Rizzi Pages: 3525 - 3562 Abstract: The first output-sensitive algorithm for the Maximal Clique Listing problem was given by Tsukiyama et al. (SIAM J Comput 6(3):505–517, 1977). Like any algorithm falling within the Reverse Search paradigm, it performs a DFS visit of a directed tree (the RS-tree) having the objects to be listed (i.e., maximal cliques) as its nodes. In a recursive implementation, the RS-tree corresponds to the recursion tree of the algorithm. The time delay is given by the cost of generating the next child of a node, and Tsukiyama et al. showed it is O(mn). Makino and Uno (in: Hagerup, Katajainen (eds) Algorithm theory: SWAT 2004. Lecture notes in computer science, Springer, Berlin, pp 260–272, 2004) sharpened the time delay to \(O(n^{\omega })\) by generating all the children of a node in one single shot, performed via a single fast square matrix multiplication. In this paper we further improve the asymptotics for the exploration of the same RS-tree by grouping the offspring computations even further. Our idea is to rely on fast rectangular matrix multiplication in order to compute all children of \(n^2\) nodes in one single shot. According to the current upper bounds on square and rectangular fast matrix multiplication, with this the time delay improves from \(O(n^{2.3728639})\) to \(O(n^{2.093362})\) , keeping a polynomial work space. PubDate: 2018-12-01 DOI: 10.1007/s00453-017-0402-5 Issue No:Vol. 80, No. 12 (2018)
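For concreteness, the objects being listed can also be generated by the classical Bron–Kerbosch recursion; the sketch below only illustrates what maximal clique listing computes, and is not the output-sensitive Reverse Search algorithm discussed above:

```python
def maximal_cliques(adj):
    """List all maximal cliques of a graph given as {vertex: set_of_neighbors}."""
    out = []
    def bk(R, P, X):
        # R: current clique, P: candidate extensions, X: already-processed vertices
        if not P and not X:
            out.append(sorted(R))
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return sorted(out)

# triangle {0, 1, 2} with a pendant vertex 3 attached to vertex 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert maximal_cliques(adj) == [[0, 1, 2], [2, 3]]
```

Unlike the Reverse Search approach, this recursion gives no per-clique delay guarantee, which is precisely what the output-sensitive algorithms above improve.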

Authors:Susanne Albers; Dario Frascaria Pages: 3563 - 3596 Abstract: The classical paging problem is to maintain a two-level memory system so that a sequence of requests to memory pages can be served with a small number of faults. Standard competitive analysis gives overly pessimistic results as it ignores the fact that real-world input sequences exhibit locality of reference. Initiated by a paper of Borodin et al. (J Comput Syst Sci 50:244–258, 1995) there has been considerable research interest in paging with locality of reference. In this paper we study the paging problem using an intuitive and simple locality model that records inter-request distances in the input. A characteristic vector \(\mathcal{C}\) defines a class of request sequences with certain properties on these distances. The concept was introduced by Panagiotou and Souza (In: Proceedings of 38th annual ACM symposium on theory of computing (STOC), 2006). As a main contribution we develop new and improved bounds on the performance of important paging algorithms. A strength and novelty of the results is that they express algorithm performance in terms of locality parameters. In a first step we develop a new lower bound on the number of page faults incurred by an optimal offline algorithm opt. The bound is tight up to a small additive constant. Technically, the result relies on a new approach of relating the number of page faults to the number of memory hits and amortizing suitably. Based on these expressions for opt’s cost, we obtain nearly tight upper and lower bounds on lru’s competitiveness, given any characteristic vector \(\mathcal{C}\) . Furthermore, we compare lru to fifo and fwf. For the first time we show bounds that quantify the difference between lru’s performance and that of the other two strategies. The results imply that lru is strictly superior on inputs with a high degree of locality of reference. 
There exist general input families for which lru achieves constant competitive ratios whereas the guarantees of fifo and fwf tend to k, the size of the fast memory. Finally, we report on an experimental study that demonstrates that our theoretical bounds are very close to the experimentally observed ones. Hence our contributions bring competitive paging again closer to practice. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0406-9 Issue No:Vol. 80, No. 12 (2018)

Authors:Pavel Dvořák; Dušan Knop Pages: 3597 - 3617 Abstract: We study the Minimum Length-Bounded Cut problem, where the task is to find a set of edges of a graph such that after removal of this set, the shortest path between two prescribed vertices is at least \(L + 1\) long. We show the problem can be solved in \(\mathsf {FPT}\) time with respect to L and the tree-width of the input graph G as parameters and with linear dependence on \(|V(G)|\) (i.e., in time \(f (L, {\text {tw}}(G))\cdot |V(G)|\) for a computable function f). We derive an \(\mathsf {FPT}\) algorithm for a more general multi-commodity length-bounded cut problem when additionally parameterized by the number of terminals. For the former problem we show a \(\mathsf {W}[1]\) -hardness result when the parameterization is done by the path-width only (instead of the tree-width) and that this problem does not admit a polynomial kernel when parameterized by path-width and L. We also derive an \(\mathsf {FPT}\) algorithm for the Minimum Length-Bounded Cut problem when parameterized by the tree-depth, thus showing an interesting behavior for this problem and the parameters tree-depth and path-width. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0408-7 Issue No:Vol. 80, No. 12 (2018)
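Verifying a candidate length-bounded cut takes a single BFS. A small sketch for the unweighted undirected case (names are illustrative):

```python
from collections import deque

def dist_after_cut(n, edges, cut, s, t):
    """Shortest s-t distance after removing the 'cut' edges (None if disconnected)."""
    removed = {frozenset(e) for e in cut}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if frozenset((u, v)) not in removed:
            adj[u].append(v)
            adj[v].append(u)
    dist = [None] * n
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist[t]

def is_length_bounded_cut(n, edges, cut, s, t, L):
    """True iff removing 'cut' makes every s-t path at least L + 1 long."""
    d = dist_after_cut(n, edges, cut, s, t)
    return d is None or d >= L + 1

# 4-cycle 0-1-2-3-0: cutting the shortcut edge (0, 3) stretches the 0-3 distance to 3
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
assert is_length_bounded_cut(4, edges, [(0, 3)], 0, 3, 2)
assert not is_length_bounded_cut(4, edges, [], 0, 3, 2)
```

Finding a minimum-size such cut is the hard part; the check above is only the feasibility test.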

Authors:Maria Chudnovsky; Celina M. H. de Figueiredo; Sophie Spirkl Pages: 3618 - 3645 Abstract: We consider the graph sandwich problem and introduce almost monotone properties, for which the sandwich problem can be reduced to the recognition problem. We show that the property of containing a graph in \({\mathcal {C}}\) as an induced subgraph is almost monotone if \({\mathcal {C}}\) is the set of thetas, the set of pyramids, or the set of prisms and thetas. We show that the property of containing a hole of length \(\equiv j \mod n\) is almost monotone if and only if \(j \equiv 2 \mod n\) or \(n \le 2\) . Moreover, we show that the imperfect graph sandwich problem, also known as the Berge trigraph recognition problem, can be solved in polynomial time. We also study the complexity of several graph decompositions related to perfect graphs, namely clique cutset, (full) star cutset, homogeneous set, homogeneous pair, and 1-join, with respect to the partitioned and unpartitioned probe problems. We show that the clique cutset and full star cutset unpartitioned probe problems are NP-hard. We show that for these five decompositions, the partitioned probe problem is in P, and the homogeneous set, 1-join, 1-join in the complement, and full star cutset in the complement unpartitioned probe problems can be solved in polynomial time as well. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0409-6 Issue No:Vol. 80, No. 12 (2018)

Authors:Maurice Chandoo Pages: 3646 - 3672 Abstract: We show that computing canonical representations for circular-arc (CA) graphs reduces to computing certain subsets of vertices called flip sets. For a broad class of CA graphs, which we call uniform, it suffices to compute a CA representation to find such flip sets. As a consequence canonical representations for uniform CA graphs can be obtained in polynomial-time. We then investigate what kind of CA graphs pose a challenge to this approach. This leads us to introduce the notion of restricted CA matrices and show that the canonical representation problem for CA graphs is logspace-reducible to that of restricted CA matrices. As a byproduct, we obtain the result that CA graphs without induced 4-cycles can be canonized in logspace. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0410-0 Issue No:Vol. 80, No. 12 (2018)

Authors:Petra Berenbrink; Tom Friedetzky; Peter Kling; Frederik Mallmann-Trenn; Lars Nagel; Chris Wastell Pages: 3673 - 3703 Abstract: A fundamental problem in distributed computing is the distribution of requests to a set of uniform servers without a centralized controller. Classically, such problems are modeled as static balls into bins processes, where m balls (tasks) are to be distributed among n bins (servers). In a seminal work, Azar et al. (SIAM J Comput 29(1):180–200, 1999. https://doi.org/10.1137/S0097539795288490) proposed the sequential strategy \(\textsc {Greedy}[{d}]\) for \(n=m\) . Each ball queries the load of d random bins and is allocated to a least loaded of them. Azar et al. (1999) showed that \(d=2\) yields an exponential improvement compared to \(d=1\) . Berenbrink et al. (SIAM J Comput 35(6):1350–1385, 2006. https://doi.org/10.1137/S009753970444435X) extended this to \(m\gg n\) , showing that for \(d=2\) the maximal load difference is independent of m (in contrast to the \(d=1\) case). We propose a new variant of an infinite balls-into-bins process. In each round an expected number of \(\lambda n\) new balls arrive and are distributed (in parallel) to the bins. Subsequently, each non-empty bin deletes one of its balls. This setting models a set of servers processing incoming requests, where clients can query a server’s current load but receive no information about parallel requests. We study the \(\textsc {Greedy}[{d}]\) distribution scheme in this setting and show a strong self-stabilizing property: for any arrival rate \(\lambda =\lambda (n)<1\) , the system load is time-invariant. Moreover, for any (even super-exponential) round t, the maximum system load is (w.h.p.) for \(d=1\) and for \(d=2\) . In particular, \(\textsc {Greedy}[{2}]\) has an exponentially smaller system load for high arrival rates. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0411-z Issue No:Vol. 80, No. 12 (2018)
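The process described above is easy to simulate. The sketch below (parameter names are ours) runs the arrival/deletion rounds with Greedy[d] allocation, so the d = 1 and d = 2 behaviors can be compared empirically:

```python
import random

def simulate_greedy(n, lam, d, rounds, seed=0):
    """Run the infinite balls-into-bins process for a fixed number of rounds
    and return the maximum bin load at the end."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(rounds):
        # ~Binomial(n, lam) arrivals, i.e., an expected lam * n new balls
        arrivals = sum(1 for _ in range(n) if rng.random() < lam)
        for _ in range(arrivals):
            # each ball queries d random bins and joins a least loaded one
            choices = [rng.randrange(n) for _ in range(d)]
            best = min(choices, key=lambda b: load[b])
            load[best] += 1
        # subsequently, every non-empty bin deletes (serves) one ball
        load = [x - 1 if x > 0 else 0 for x in load]
    return max(load)

max_load_d1 = simulate_greedy(100, 0.95, 1, 1000)
max_load_d2 = simulate_greedy(100, 0.95, 2, 1000)
```

With a high arrival rate such as λ = 0.95, the d = 2 run typically ends with a markedly smaller maximum load, consistent with the exponential separation shown in the paper.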

Authors:Ali Dehghan; Mohammad-Reza Sadeghi; Arash Ahadi Pages: 3704 - 3727 Abstract: A Not-All-Equal decomposition of a graph G is a decomposition of the vertices of G into two parts such that each vertex in G has at least one neighbor in each part. Also, a 1-in-Degree decomposition of a graph G is a decomposition of the vertices of G into two parts A and B such that each vertex in the graph G has exactly one neighbor in part A. Among our results, we show that for a given graph G, if G does not have any cycle of length congruent to 2 mod 4, then there is a polynomial-time algorithm to decide whether G has a 1-in-Degree decomposition. In sharp contrast, we prove that for every r, \(r\ge 3\) , for a given r-regular bipartite graph G determining whether G has a 1-in-Degree decomposition is \( \mathbf {NP} \) -complete. These complexity results have been especially useful in proving \( \mathbf {NP} \) -completeness of various graph-related problems for restricted classes of graphs. As a consequence of these results, we show that for a given bipartite 3-regular graph G determining whether there is a vector in the null-space of the 0,1-adjacency matrix of G such that its entries belong to \(\{ \pm \, 1,\pm \,2\}\) is \(\mathbf {NP} \) -complete. Among other results, we introduce a new version of Planar 1-in-3 SAT and we prove that this version is also \( \mathbf {NP} \) -complete. As a consequence of this result, we show that for a given planar (3, 4)-semiregular graph G determining whether there is a vector in the null-space of the 0,1-incidence matrix of G such that its entries belong to \(\{ \pm \,1,\pm \,2\}\) is \(\mathbf {NP} \) -complete. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0412-y Issue No:Vol. 80, No. 12 (2018)
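For small graphs the Not-All-Equal condition can be checked by exhaustive search over 2-colorings, which makes the definition concrete (an illustrative sketch, not an algorithm from the paper):

```python
from itertools import product

def has_nae_decomposition(adj):
    """True iff the vertices split into two parts so that every vertex
    has at least one neighbor in each part. adj: {vertex: set_of_neighbors}."""
    verts = list(adj)
    for bits in product((0, 1), repeat=len(verts)):
        part = dict(zip(verts, bits))
        if all(any(part[u] == 0 for u in adj[v]) and
               any(part[u] == 1 for u in adj[v])
               for v in verts):
            return True
    return False

# the 4-cycle admits one (parts {0,1} and {2,3}); the triangle does not,
# since the vertex alone in its part has no neighbor on its own side
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert has_nae_decomposition(c4)
assert not has_nae_decomposition(k3)
```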

Authors:Meng He; J. Ian Munro; Gelin Zhou Pages: 3728 - 3765 Abstract: In the path reporting problem, we preprocess a tree on n nodes each of which is assigned a weight, such that given an arbitrary path and a weight range, we can report the nodes whose weights are within the range. We consider this problem in dynamic settings, and propose the first non-trivial linear-space solution that supports path reporting in \(O((\lg n{/}\lg \lg n)^2 + occ \lg n{/}\lg \lg n)\) time, where occ is the output size, and the insertion and deletion of a node of an arbitrary degree in \(O(\lg ^{2+\epsilon } n)\) amortized time, for any constant \(\epsilon \in (0, 1)\) . Obvious solutions based on directly dynamizing solutions to the static version of this problem all require \(\Omega ((\lg n{/}\lg \lg n)^2)\) time for each node reported, and thus our query time is much faster. We also design data structures that support path counting and path reporting queries in \(O((\lg n{/}\lg \lg n)^2)\) time, and insertions and deletions in \(O((\lg n{/}\lg \lg n)^2)\) amortized time. This matches the best known results for dynamic two-dimensional range counting (He and Munro in Comput Geom 47(2):268–281, 2014) and range selection (He et al., in: Proceedings of the 22nd international symposium on algorithms and computation, ISAAC, Yokohama, Japan, 2011), which can be viewed as special cases of path counting and path selection. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0413-x Issue No:Vol. 80, No. 12 (2018)
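A naive baseline makes the query concrete: walk the tree path via BFS parent pointers and filter by weight. This is O(n) per query, far from the bounds above; all names are illustrative:

```python
from collections import deque

def path_report(adj, weight, u, v, lo, hi):
    """Report nodes on the tree path from u to v whose weights lie in [lo, hi].
    adj: {node: [neighbors]} of a tree; weight: {node: weight}."""
    # BFS from u to record parent pointers
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    # walk back from v to u, filtering by the weight range
    path = []
    x = v
    while x is not None:
        path.append(x)
        x = parent[x]
    return [x for x in path if lo <= weight[x] <= hi]

# path tree 0-1-2-3 with weights 5, 1, 7, 3; query range [2, 6]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
weight = {0: 5, 1: 1, 2: 7, 3: 3}
assert sorted(path_report(adj, weight, 0, 3, 2, 6)) == [0, 3]
```

The dynamic structures in the paper answer the same query in polylogarithmic time per reported node while also supporting insertions and deletions.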

Authors:Amin Gheibi; Anil Maheshwari; Jörg-Rüdiger Sack; Christian Scheffer Pages: 3766 - 3802 Abstract: In this paper, we study the weighted region problem (WRP), which is to compute a shortest path in a weighted partitioning of the plane. Recent results show that WRP is not solvable in any algebraic computation model over the rational numbers. Therefore, it is unlikely that WRP can be solved in polynomial time. Research has thus focused on determining approximate solutions for WRP. Approximate solutions for WRP typically show qualitatively different behaviors. We first formulate two qualitative criteria for weighted shortest paths. Then, we show how to produce a path that is quantitatively close-to-optimal and qualitatively satisfactory. More precisely, we propose an algorithm to transform any given approximate linear path into a linear path with the same (or shorter) weighted length for which we can prove that it satisfies the required qualitative criteria. This algorithm has a linear time complexity in the size of the given path. Finally, we report on experiments with triangular irregular networks (TINs) of the Earth's terrain. The results show that using the proposed algorithm saves, on average, 51% in query time and 69% in memory usage in comparison with the existing method. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0414-9 Issue No:Vol. 80, No. 12 (2018)

Authors:Facundo Mémoli; Anastasios Sidiropoulos; Vijay Sridhar Pages: 3803 - 3824 Abstract: We study generalizations of classical metric embedding results to the case of quasimetric spaces; that is, spaces that do not necessarily satisfy symmetry. Quasimetric spaces arise naturally from the shortest-path distances on directed graphs. Perhaps surprisingly, very little is known about low-distortion embeddings for quasimetric spaces. Random embeddings into ultrametric spaces are arguably one of the most successful geometric tools in the context of algorithm design. We extend this to the quasimetric case as follows. We show that any n-point quasimetric space supported on a graph of treewidth t admits a random embedding into quasiultrametric spaces with distortion \(O(t \log ^2 n)\) , where quasiultrametrics are a natural generalization of ultrametrics. This result allows us to obtain \(t\log ^{O(1)} n\) -approximation algorithms for the Directed Non-Bipartite Sparsest Cut and the Directed Multicut problems on n-vertex graphs of treewidth t, with running time polynomial in both n and t. The above results are obtained by considering a generalization of random partitions to the quasimetric case, which we refer to as random quasipartitions. Using this definition and a construction of Chuzhoy and Khanna (JACM 56(2):6, 2009) we derive a polynomial lower bound on the distortion of random embeddings of general quasimetric spaces into quasiultrametric spaces. Finally, we establish a lower bound for embedding the shortest-path quasimetric of a graph G into graphs that exclude G as a minor. This lower bound is used to show that several embedding results from the metric case do not have natural analogues in the quasimetric setting. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0415-8 Issue No:Vol. 80, No. 12 (2018)

Authors:Imed Kacem; Hans Kellerer Pages: 3825 - 3843 Abstract: In this paper, we consider four single-machine scheduling problems with release times, with the aim of minimizing the maximum lateness. In the first problem we have a common deadline for all the jobs. The second problem looks for the Pareto frontier with respect to the two objective functions maximum lateness and makespan. The third problem is associated with a non-availability constraint. In the fourth one, the non-availability interval is related to the operator who is organizing the execution of jobs on the machine (no job can start, nor can any job complete, during the operator's non-availability period). For each of the four problems, we establish the existence of a polynomial time approximation scheme. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0417-6 Issue No:Vol. 80, No. 12 (2018)
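Evaluating the maximum-lateness objective for a fixed job order is a single pass over the schedule. A sketch with no non-availability interval and illustrative data; it also shows why earliest-due-date ordering helps:

```python
def max_lateness(jobs, order):
    """jobs: {job_id: (release, processing, due)}; single machine, no preemption.
    Returns the maximum lateness of the schedule that runs jobs in 'order'."""
    t = 0
    lmax = float('-inf')
    for j in order:
        r, p, d = jobs[j]
        t = max(t, r) + p        # wait for the release time, then process
        lmax = max(lmax, t - d)  # lateness = completion time - due date
    return lmax

jobs = {1: (0, 3, 6), 2: (0, 1, 2)}
assert max_lateness(jobs, [1, 2]) == 2   # job 2 completes at time 4, due at 2
assert max_lateness(jobs, [2, 1]) == -1  # earliest-due-date order is better
```

With release times and non-availability constraints the problem becomes hard, which is why the paper settles for polynomial time approximation schemes.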

Authors:Saket Saurabh; Meirav Zehavi Pages: 3844 - 3860 Abstract: Max-Cut is a well-known classical NP-hard problem. This problem asks whether the vertex-set of a given graph \(G=(V,E)\) can be partitioned into two disjoint subsets, A and B, such that there exist at least p edges with one endpoint in A and the other endpoint in B. It is well known that if \(p\le |E|/2\) , the answer is necessarily positive. A widely-studied variant of particular interest to parameterized complexity, called \((k,n-k)\) -Max-Cut, restricts the size of the subset A to be exactly k. For the \((k,n-k)\) -Max-Cut problem, we obtain an \(\mathcal{O}^*(2^p)\) -time algorithm, improving upon the previous best \(\mathcal{O}^*(4^{p+o(p)})\) -time algorithm, as well as the first polynomial kernel. Our algorithm relies on a delicate combination of methods and notions, including independent sets, depth-search trees, bounded search trees, dynamic programming and treewidth, while our kernel relies on examination of the closed neighborhood of the neighborhood of a certain independent set of the graph G. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0418-5 Issue No:Vol. 80, No. 12 (2018)
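The restricted problem is easy to state in code. A brute-force sketch, exponential in n and for intuition only (the paper's algorithm runs in \(\mathcal{O}^*(2^p)\) time):

```python
from itertools import combinations

def max_cut_k(n, edges, k):
    """Maximum number of crossing edges over all vertex subsets A of size exactly k."""
    best = 0
    for A in combinations(range(n), k):
        A = set(A)
        # count edges with exactly one endpoint in A
        crossing = sum(1 for u, v in edges if (u in A) != (v in A))
        best = max(best, crossing)
    return best

# 4-cycle: A = {0, 2} cuts all 4 edges; any single vertex cuts only its 2 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert max_cut_k(4, edges, 2) == 4
assert max_cut_k(4, edges, 1) == 2
```

A \((k,n-k)\)-Max-Cut instance (n, edges, k, p) is then a yes-instance exactly when max_cut_k(n, edges, k) >= p.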

Authors:Katherine Edwards; W. Sean Kennedy; Iraj Saniee Pages: 3889 - 3907 Abstract: We provide a quasilinear time algorithm for the p-center problem with an additive error less than or equal to 3 times the input graph’s hyperbolic constant. Specifically, for the graph \(G=(V,E)\) with n vertices, m edges and hyperbolic constant \(\delta \) , we construct an algorithm for p-centers in time \(O(p(\delta +1)(n+m)\log (n))\) with radius not exceeding \(r_p + \delta \) when \(p \le 2\) and \(r_p + 3\delta \) when \(p \ge 3\) , where \(r_p\) are the optimal radii. Prior work identified p-centers with accuracy \(r_p+\delta \) but with time complexity \(O((n^3\log n + n^2m)\log ({{\mathrm{diam}}}(G)))\) which is impractical for large graphs. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0425-6 Issue No:Vol. 80, No. 12 (2018)
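As a point of comparison, the classical farthest-first traversal of Gonzalez gives a multiplicative 2-approximation for p-center under the triangle inequality — a weaker guarantee than the additive \(\delta\)-bounds above, but a useful baseline sketch (graph given as {vertex: [neighbors]}, unit edge lengths):

```python
from collections import deque

def bfs_dist(adj, s):
    """Unweighted single-source shortest-path distances."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def gonzalez_p_centers(adj, p):
    """Farthest-first traversal: repeatedly add the vertex farthest from the
    current centers. Returns (centers, covering radius)."""
    verts = list(adj)
    centers = [verts[0]]
    dist = bfs_dist(adj, centers[0])
    for _ in range(p - 1):
        far = max(verts, key=lambda v: dist[v])
        centers.append(far)
        dfar = bfs_dist(adj, far)
        dist = {v: min(dist[v], dfar[v]) for v in verts}
    return centers, max(dist.values())

# path 0-1-2-3-4-5 with p = 2: picks the endpoints, radius 2 (optimum is 1)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
centers, radius = gonzalez_p_centers(adj, 2)
assert radius == 2
```

On hyperbolic graphs the algorithm of this paper replaces the multiplicative factor with the small additive error \(\delta\) or \(3\delta\), while remaining quasilinear.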

Authors:Koyo Hayashi; Satoru Iwata Pages: 3908 - 3919 Abstract: In a directed graph \(D = (V, A)\) with a specified vertex \(r \in V\) , an arc subset \(B \subseteq A\) is called an r-arborescence if B has no arc entering r and there is a unique path from r to v in (V, B) for each \(v \in V \backslash \{ r \}\) . The problem of finding a minimum weight r-arborescence in a weighted digraph has been studied for decades starting with Chu and Liu (Sci Sin 14:1396–1400, 1965), Edmonds (J Res Natl Bur Stand 71B:233–240, 1967) and Bock (Developments in operations research, Gordon and Breach, New York, pp 29–44, 1971). In this paper, we focus on the number of minimum weight arborescences. We present an algorithm for counting minimum weight r-arborescences in \(O(n^{\omega })\) time, where n is the number of vertices of an input digraph and \(\omega \) is the matrix multiplication exponent. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0426-5 Issue No:Vol. 80, No. 12 (2018)

Authors:Eden Chlamtáč; Zachary Friggstad; Konstantinos Georgiou Pages: 3920 - 3942 Abstract: We study the applicability of lift-and-project methods to the Set Cover and Knapsack problems. Inspired by recent work of Karlin et al. (IPCO 2011), who examined this connection for Knapsack, we consider the applicability and limitations of these methods for Set Cover, as well as extend the existing results for Knapsack. For the Set Cover problem, Cygan et al. (IPL 2009) gave sub-exponential-time approximation algorithms with approximation ratios better than \(\ln n\) . We present a very simple combinatorial algorithm which has nearly the same time-approximation tradeoff as the algorithm of Cygan et al. We then adapt this to an LP-based algorithm using the LP hierarchy of Lovász and Schrijver. However, our approach involves the trick of “lifting the objective function”. We show that this trick is essential, by demonstrating an integrality gap of \((1-\varepsilon )\ln n\) at level \(\Omega (n)\) of the stronger LP hierarchy of Sherali and Adams (when the objective function is not lifted). Finally, we show that the SDP hierarchy of Lovász and Schrijver ( \(\mathrm{LS}_+\) ) reduces the integrality gap for Knapsack to \((1+\varepsilon )\) at level O(1). This stands in contrast to Set Cover (where the work of Alekhnovich et al. (STOC 2005) rules out any improvement using \(\mathrm{LS}_+\) ), and extends the work of Karlin et al., who demonstrated such an improvement only for the more powerful SDP hierarchy of Lasserre. Our \(\mathrm{LS}_+\) -based rounding and analysis are quite different from theirs (in particular, not relying on the decomposition theorem they prove for the Lasserre hierarchy), and to the best of our knowledge represents the first explicit demonstration of such a reduction in the integrality gap of \(\mathrm{LS}_+\) relaxations after a constant number of rounds. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0427-4 Issue No:Vol. 80, No. 12 (2018)

Authors:Andrzej Lingas; Mia Persson Pages: 3943 - 3957 Abstract: We study the problem of computing the so called minimum and maximum witnesses for Boolean vector convolution. We also consider a generalization of the problem which is to determine for each positive value at a coordinate of the convolution vector, q smallest (largest) witnesses, where q is the minimum of a parameter k and the number of witnesses for this coordinate. We term this problem the smallest k-witness problem or the largest k-witness problem, respectively. We also study the corresponding smallest and largest k-witness problems for Boolean matrix product. First, we present an \(\tilde{O}(n^{1.5}k^{0.5})\) -time algorithm for the smallest or largest k-witness problem for the Boolean convolution of two n-dimensional vectors, where the notation \(\tilde{O}(\ )\) suppresses factors polylogarithmic in n. In consequence, we obtain new upper time bounds on reporting positions of mismatches in potential string alignments and on computing restricted cases of the \((\min , +)\) vector convolution. Next, we present a fast (substantially subcubic in n and linear in k) algorithm for the smallest or largest k-witness problem for the Boolean matrix product of two \(n\times n\) Boolean matrices. It yields fast algorithms for reporting k lightest (heaviest) triangles in a vertex-weighted graph. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0492-8 Issue No:Vol. 80, No. 12 (2018)
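The notion of a minimum witness is easy to pin down with a naive quadratic computation — a definitional sketch, not the \(\tilde{O}(n^{1.5}k^{0.5})\)-time algorithm of the paper:

```python
def min_witnesses(a, b):
    """For each coordinate i of the Boolean convolution c[i] = OR_j (a[j] AND b[i-j]),
    return the smallest witness j with a[j] = b[i-j] = 1, or None if c[i] = 0."""
    n = len(a)
    res = [None] * (2 * n - 1)
    for i in range(2 * n - 1):
        for j in range(max(0, i - n + 1), min(i, n - 1) + 1):
            if a[j] and b[i - j]:
                res[i] = j  # smallest j found first, so stop immediately
                break
    return res

assert min_witnesses([1, 0, 1], [0, 1, 1]) == [None, 0, 0, 2, 2]
```

Scanning j downward instead yields maximum witnesses, and collecting the first q hits instead of one gives the smallest k-witness variant.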

Authors:Yuan Xue; Boting Yang; Farong Zhong; Sandra Zilles Pages: 3959 - 3981 Abstract: Research on graph searching has recently gained interest in computer science, mathematics, and physics. This paper studies fast searching of a fugitive in a graph, a model that was introduced by Dyer et al. (in: Fleischer, Xu (eds.) Algorithmic aspects in information and management, Springer, New York, 2008). We provide lower bounds and upper bounds on the fast search number (i.e., the minimum number of searchers required for capturing the fugitive) of complete k-partite graphs. We also investigate some special classes of complete k-partite graphs, such as complete bipartite graphs and complete split graphs. We solve the open problem of determining the fast search number of complete bipartite graphs, and present upper and lower bounds on the fast search number of complete split graphs. PubDate: 2018-12-01 DOI: 10.1007/s00453-018-0456-z Issue No:Vol. 80, No. 12 (2018)