Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Uwe Schauz. As a starting point, we formulate a corollary to the Quantitative Combinatorial Nullstellensatz. This corollary does not require the consideration of any coefficients of polynomials, only evaluations of polynomial functions. In certain situations, our corollary is more directly applicable and more ready-to-go than the Combinatorial Nullstellensatz itself. It is also of interest from a numerical point of view. We use it to explain a well-known connection between the sign of 1-factorizations (edge colorings) and the List Edge Coloring Conjecture. For efficient calculations and a better understanding of the sign, we then introduce and characterize the sign of single 1-factors. We show that the product of the signs of all the 1-factors in a 1-factorization is the sign of that 1-factorization. Using this result in an algorithm, we attempt to prove the List Edge Coloring Conjecture for all graphs with up to 10 vertices. This leaves us with some exceptional cases that need to be attacked with other methods.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Narayan Vikas. In this paper, we show a very close relationship between the compaction, vertex-compaction, and retraction problems for reflexive and bipartite graphs. Similar to the long-standing open problem of whether the compaction and retraction problems are polynomially equivalent, the relationships that we present relate to the questions of whether the compaction and vertex-compaction problems are polynomially equivalent, and whether the vertex-compaction and retraction problems are polynomially equivalent. The relationships that we present also relate to the constraint satisfaction problem, providing evidence that, as with the compaction and retraction problems, it is likely to be difficult to give a complete computational complexity classification of the vertex-compaction problem for every fixed reflexive or bipartite graph. In this paper, however, we give a complete computational complexity classification of the vertex-compaction problem for all graphs, including even partially reflexive graphs, with four or fewer vertices; our proofs rely mostly on the computational complexity classification of the compaction problem for such graphs, determined earlier by the author. Our results show that the compaction, vertex-compaction, and retraction problems are polynomially equivalent for every graph with four or fewer vertices.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Sven Mallach. We consider the task of computing the pathwidth of a graph, which is known to be equivalent to solving the vertex separation problem. The latter is naturally modeled as a linear ordering problem with respect to the vertices of the graph. Existing mixed-integer programs for the vertex separation problem express linear orders using either position or set assignment variables. However, as we show, the lower bound on the pathwidth obtained when solving their linear programming relaxations is zero for any directed graph. This is one reason for their limited utility in solving larger instances to optimality. We then present a new formulation that is based on conventional linear ordering variables and a slightly different perspective on the problem. Its relaxation provably delivers non-zero lower bounds for any graph whose pathwidth is non-zero. Further structural results for and extensions to this formulation are discussed. Finally, an experimental evaluation of three mixed-integer programs, each representing one of the different existing modeling approaches, displays their potentials and limitations when used to solve the problem to optimality in practice.
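The pathwidth / vertex separation equivalence mentioned in this abstract can be made concrete: the vertex separation of a linear order counts, at each prefix, the prefix vertices that still have a neighbour outside the prefix, and the pathwidth equals the minimum of this quantity over all orders. A minimal brute-force sketch (exponential, for intuition only; the function names are ours, not from the paper):

```python
from itertools import permutations

def vertex_separation(order, edges):
    """Vertex separation of one linear order: the maximum, over all
    proper prefixes, of the number of prefix vertices that still have
    a neighbour outside the prefix."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    worst = 0
    for i in range(1, len(order)):
        prefix, rest = set(order[:i]), set(order[i:])
        cut = sum(1 for u in prefix if nbrs.get(u, set()) & rest)
        worst = max(worst, cut)
    return worst

def pathwidth(vertices, edges):
    """Pathwidth = minimum vertex separation over all linear orders
    (exponential brute force; illustration only)."""
    return min(vertex_separation(p, edges) for p in permutations(vertices))
```

For example, a path on four vertices has pathwidth 1, while a triangle has pathwidth 2; the mixed-integer programs discussed in the paper encode the same minimisation over orders without enumerating them.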

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Hanyu Gu, Alexander Kononov, Julia Memar, Yakov Zinder. The paper is concerned with minimising the total weighted completion time for the two-stage flow shop with a buffer. In contrast to the vast literature on this topic, the buffer requirement varies from job to job, and a job occupies the buffer continuously from the start of its first operation until the completion of its second operation, rather than only between operations. Such problems arise in supply chains requiring unloading and loading of minerals and in some multimedia systems. The problem is NP-hard in the strong sense, and we prove that if the order of jobs is fixed for one of the stages, then even for the criteria of maximum completion time or total completion time the problem remains NP-hard in the strong sense. A straightforward integer programming approach is computationally infeasible even for modest problem sizes. The paper presents a Lagrangian relaxation based decomposition approach that allows a very fast algorithm to be used for each problem obtained by this decomposition. Several Lagrangian heuristics are evaluated by means of computational experiments.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Hiroe Inoue, Yuto Nakashima, Takuya Mieno, Shunsuke Inenaga, Hideo Bannai, Masayuki Takeda. A palindrome is a string that reads the same forward and backward. A palindromic substring P of a string S is called a shortest unique palindromic substring (SUPS) for an interval [s,t] in S, if P occurs exactly once in S, this occurrence of P contains interval [s,t], and every palindromic substring of S which contains interval [s,t] and is shorter than P occurs at least twice in S. The SUPS problem is, given a string S, to preprocess S so that for any subsequent query interval [s,t] all the SUPSs for interval [s,t] can be answered quickly. We present an optimal solution to this problem. Namely, we show how to preprocess a given string S of length n in O(n) time and space so that all SUPSs for any subsequent query interval can be answered in O(α+1) time, where α is the number of outputs. We also discuss the number of SUPSs in a string.
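The SUPS definition can be checked directly against small strings. The following brute-force sketch (our own illustration, nowhere near the paper's O(n)-preprocessing solution) enumerates palindromic substrings whose unique occurrence covers the query interval and keeps the shortest ones:

```python
def occurrences(S, P):
    """Number of (possibly overlapping) occurrences of P in S."""
    count, k = 0, S.find(P)
    while k != -1:
        count += 1
        k = S.find(P, k + 1)
    return count

def sups(S, s, t):
    """Naive illustration of the SUPS definition: return all shortest
    palindromic substrings of S that occur exactly once in S, with that
    unique occurrence containing the interval [s, t] (0-based,
    inclusive)."""
    candidates = []
    for i in range(s + 1):                  # occurrence starts at or before s
        for j in range(t + 1, len(S) + 1):  # ...and ends at or after t
            P = S[i:j]
            if P == P[::-1] and occurrences(S, P) == 1:
                candidates.append(P)
    m = min(map(len, candidates), default=0)
    return [P for P in candidates if len(P) == m]
```

For S = "aabbaa" and query [2,3], the only SUPS is "bb": it is a palindrome, occurs once, and every shorter palindrome covering the interval would have to occur at least twice.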

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Andrei Kelarev, Joe Ryan, Leanne Rylands, Jennifer Seberry, Xun Yi. This article gives a survey of discrete and combinatorial algorithms and methods for database security related to the work of Mirka Miller. The main contributions of Mirka Miller and coauthors to the security of statistical databases include the introduction of Static Audit Expert and theorems determining the time complexity of its combinatorial algorithms, a polynomial time algorithm for deciding whether the maximum possible usability can be achieved in a statistical database with a special class of answerable statistics, NP-completeness of similar problems concerning several other types of statistical databases, sharp upper bounds on the number of compromise-free queries in certain categories of statistical databases, and analogous results on applications of Static Audit Expert for the prevention of relative compromise.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Robin Milosz, Sylvie Hamel. Given a set A⊆Sn of m permutations of {1,2,…,n} and a distance function d, the median problem consists of finding a permutation π⁎ that is "closest" to the m given permutations, i.e., that minimizes the sum of its distances to them. Here, we study the problem under the Kendall-τ distance, which counts the number of order disagreements between pairs of elements of permutations. In this article, which is an extended version of the conference paper [28], we explore this NP-complete problem using four different approaches: a well-parameterized heuristic, an improved search space reduction technique, a refined branch-and-bound solver, and an integer linear programming approach using IBM CPLEX. All approaches are tested on synthetic and real-life data sets. Combining those approaches, we were able to solve the "ED-00015-00000004.soc" instance from PrefLib which, to our knowledge, had never been solved before.
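For intuition, the Kendall-τ distance and the exhaustive median can be written down in a few lines; this sketch is ours and is feasible only for tiny n, unlike the solvers evaluated in the paper:

```python
from itertools import combinations, permutations

def kendall_tau(p, q):
    """Kendall-tau distance: the number of pairs {a, b} that the two
    permutations p and q order differently."""
    pos_p = {v: i for i, v in enumerate(p)}
    pos_q = {v: i for i, v in enumerate(q)}
    return sum(
        1
        for a, b in combinations(p, 2)
        if (pos_p[a] < pos_p[b]) != (pos_q[a] < pos_q[b])
    )

def brute_force_median(A):
    """Exhaustive Kendall-tau median of a list of permutations:
    minimizes the sum of distances over all n! candidates."""
    elems = sorted(A[0])
    return min(permutations(elems),
               key=lambda pi: sum(kendall_tau(pi, a) for a in A))
```

Note that the median ranges over all of Sn, not just the input permutations, which is what makes the problem NP-complete in general.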

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Benoît Darties, Nicolas Champseix, Annie Chateau, Rodolphe Giroudeau, Mathias Weller. This article is devoted to the study of the complexity of Power Edge Set (PES), a problem occurring when monitoring power networks. We show that PES remains NP-hard in bipartite planar graphs with bounded degree. This result is extended to unit disk graphs and grids with maximum degree three. To the best of our knowledge, this is the most restricted class of graphs known so far on which Power Edge Set is NP-complete. We also show that there is no 2^o(n)-time algorithm for Power Edge Set, and no 2^o(k)·n^O(1)-time algorithm for its parameterized version, even in bipartite planar subcubic graphs and unit disk graphs, assuming the ETH. Finally, we show a polynomial-time algorithm for graphs of bounded treewidth (i.e., XP with respect to treewidth), such as series-parallel graphs or outerplanar graphs.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Federico Della Croce, Ulrich Pferschy, Rosario Scatamacchia. The 0–1 Incremental Knapsack Problem (IKP) is a generalization of the standard 0–1 Knapsack Problem (KP) in which the capacity grows over time periods and, if an item is placed in the knapsack in a certain period, it cannot be removed afterwards. The problem calls for maximizing the sum of the profits over the whole time horizon. In this work, we consider the case with three time periods, where we assume that each item can be packed also in the first time period. We propose an approximation algorithm with a tight approximation ratio of 30/37. We strongly rely on Linear Programming (LP) to derive this bound, showing how the proposed LP-based analysis can be seen as a valid alternative to more formal proof systems.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Alexandre Blondin Massé, Julien de Carufel, Alain Goupil. We present recursive formulas giving the maximal number of leaves in tree-like polyforms living in two-dimensional regular lattices and in tree-like polycubes in the three-dimensional cubic lattice. We call these tree-like polyforms and polycubes fully leafed. The proof relies on a combinatorial algorithm that enumerates rooted directed trees that we call abundant. In the last part, we concentrate on the particular case of polyforms and polycubes, which we call saturated: the family of fully leafed structures that maximize the ratio (number of leaves)/(number of cells). In the polyomino case, we present a bijection between the set of saturated tree-like polyominoes of size 4k+1 and the set of tree-like polyominoes of size k. We exhibit a similar bijection between the set of saturated tree-like polycubes of size 41k+28 and a family of polycubes, called 4-trees, of size 3k+2.

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Matěj Konečný, Stanislav Kučera, Jana Novotná, Jakub Pekárek, Štěpán Šimsa, Martin Töpfer. A graph G is called a sum graph if there is a sum labeling of G, i.e., an injective function ℓ:V(G)→N such that for every u,v∈V(G) it holds that uv∈E(G) if and only if there exists a vertex w∈V(G) such that ℓ(u)+ℓ(v)=ℓ(w). We say that a sum labeling ℓ is minimal if there is a vertex u∈V(G) such that ℓ(u)=1. In this paper, we show that if we relax the conditions (either allow non-injective labelings or consider graphs with loops), then there are sum graphs without a minimal labeling, which partially answers a question posed by Miller, Ryan and Smyth.
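The sum-labeling condition is easy to verify mechanically. A small checker, assuming simple loopless graphs (the function name and example are ours):

```python
def is_sum_labeling(edges, labels):
    """Check whether `labels` (vertex -> positive integer, injective)
    is a sum labeling of the graph given by `edges`: uv is an edge
    iff label(u) + label(v) equals the label of some vertex w."""
    values = set(labels.values())
    if len(values) != len(labels):
        return False  # a sum labeling must be injective
    edge_set = {frozenset(e) for e in edges}
    verts = list(labels)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            is_edge = frozenset((u, v)) in edge_set
            if is_edge != (labels[u] + labels[v] in values):
                return False
    return True
```

For instance, labels 1, 2, 3 on vertices a, b, c realize exactly the single edge ab (since 1+2=3 is a label but 1+3 and 2+3 are not); this labeling is also minimal in the abstract's sense, since label 1 is used.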

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): Tatsuya Ohno, Kensuke Sakai, Yoshimasa Takabatake, Tomohiro I, Hiroshi Sakamoto. The run-length encoded Burrows–Wheeler Transform (RLBWT) is a powerful tool for processing highly repetitive strings. We propose a new algorithm for online RLBWT working in run-compressed space, which runs in O(n lg r) time and O(r lg n) bits of space, where n is the length of the input string S received so far and r is the number of runs in the BWT of the reversed S. We improve the state-of-the-art algorithm for online RLBWT in terms of empirical construction time. Adopting a dynamic list for maintaining a total order, we can replace rank queries in a dynamic wavelet tree on a run-length compressed string by the direct comparison of labels in a dynamic list. Using the proposed online RLBWT, we can efficiently compute the LZ77 factorization in run-compressed space. The empirical results show the efficiency of both our online RLBWT and LZ77 parsing, especially for highly repetitive strings.
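For intuition about the parameter r (the number of BWT runs): a naive offline BWT followed by run-length encoding. This sketch is ours and purely illustrative; the paper's algorithm is online and never materializes the uncompressed BWT.

```python
def bwt(s):
    """Naive Burrows-Wheeler Transform: sort all rotations of
    s + sentinel '$' and take the last column (fine for short
    examples, quadratic-space and useless for long inputs)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def rle(s):
    """Run-length encode a string as (character, run length) pairs;
    the number of pairs is the r of run-compressed indexes."""
    runs = []
    for c in s:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return [(c, k) for c, k in runs]
```

The BWT groups similar contexts together, so repetitive strings yield long runs: r stays small even when n grows, which is exactly what run-compressed space exploits.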

Abstract: Publication date: September 2018. Source: Journal of Discrete Algorithms, Volumes 52–53. Author(s): R. Lewis, K. Smith-Miles. This paper proposes a heuristic algorithm for designing real-world school transport schedules. It extends previously considered problem models by adding several important but hitherto overlooked features, including the splitting and merging of routes, the gauging of vehicle dwell times, the selection of stopping points, and the minimisation of walking distances. We show that this formulation contains a number of interacting combinatorial subproblems, including the time-constrained vehicle routing problem, set covering, and bin packing. As a result, a number of new and necessary algorithmic operators are proposed for this problem, which are then used alongside other recognised heuristics. Primarily, the aim of this algorithm is to minimise the number of vehicles used by each school, though secondary issues concerning journey lengths and walking distances are also considered through the employment of suitable multiobjective techniques.

Abstract: Publication date: July 2018. Source: Journal of Discrete Algorithms, Volume 51. Author(s): Dario Catalano, Mario Di Raimondo, Simone Faro. In this paper we consider a scenario where a user wants to outsource her documents to the cloud so that she can later reliably delegate (to the cloud) pattern matching operations on these documents. We propose an efficient solution to this problem that relies on the homomorphic MAC for polynomials proposed by Catalano and Fiore (EuroCrypt 2013). Our main contributions are new methods to express pattern matching operations (in both their exact and approximate variants) as low-degree polynomials, i.e., polynomials whose degree depends solely on the size of the pattern. To better assess the practicality of our schemes, we propose a concrete implementation that further optimizes the efficiency of the homomorphic MAC of Catalano and Fiore. Our implementation shows that the proposed protocols are extremely efficient for the client, while remaining feasible on the server side.

Abstract: Publication date: July 2018. Source: Journal of Discrete Algorithms, Volume 51. Author(s): Neerja Mhaskar, W.F. Smyth. In this paper we introduce the notion of an optimal cover. For a string w, let M denote the maximum number of positions in w covered by any repeating substring of w. Then a longest (shortest) optimal cover u is a longest (shortest) repeating substring of w that covers M positions. The advantage of this notion is not only that it is applicable to all strings, but also that it does not share the deficiencies of the existing definitions of covers. We show that both the longest and the shortest optimal covers of a given string w of length n can be computed efficiently in O(n log n) time and O(n) space. We further show that the data structures used to compute optimal covers also compute the α-partial covers introduced in [10].
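The definitions of M and of optimal covers can be illustrated by naive enumeration over all substrings (our own sketch; the paper achieves O(n log n) with suffix-based data structures):

```python
def optimal_covers(w):
    """Naive illustration of optimal covers: among all repeating
    substrings of w (occurring at least twice, overlaps counted),
    compute the maximum number M of covered positions and return
    (M, a shortest optimal cover, a longest optimal cover).
    Assumes w has at least one repeating substring."""
    n = len(w)
    covered_count = {}  # repeating substring -> number of covered positions
    for i in range(n):
        for j in range(i + 1, n + 1):
            u = w[i:j]
            if u in covered_count:
                continue
            occs = [k for k in range(n - len(u) + 1) if w[k:k + len(u)] == u]
            if len(occs) >= 2:
                covered = {p for k in occs for p in range(k, k + len(u))}
                covered_count[u] = len(covered)
    M = max(covered_count.values())
    opt = [u for u, c in covered_count.items() if c == M]
    return M, min(opt, key=len), max(opt, key=len)
```

For w = "aabaa", both "a" and "aa" cover the same four positions, so M = 4 with shortest optimal cover "a" and longest "aa"; unlike classical covers, this is defined for every string that has any repeat at all.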

Abstract: Publication date: July 2018. Source: Journal of Discrete Algorithms, Volume 51. Author(s): Ghada Ben Hassine, Pierre Bergé, Arpad Rimmel, Joanna Tomasik. Given a positive integer K, the weak Schur problem is to find the largest weakly sum-free K-partition of the positive integers. Our goal is to increase the known values of n for which the set {1,…,n} can be divided into weakly sum-free sets, i.e., sets for which there are no three distinct elements x, y, z in the same set such that x+y=z. Our main contribution consists in designing a recursive deterministic algorithm which improves the results reported in the literature. It constructs a solution out of "perfect" or, at times, "perforated" series of integers using arbitrary rules. We call it the Deterministic Weak Schur Series (DWSS) algorithm. When analyzing our algorithm, we use the Gros sequence to bound from above the number of operations it executes. Consequently, we prove that its complexity is in O(3^(3K)). With DWSS, we discovered larger weakly sum-free partitions, breaking the literature records for K=8 to K=12. We next hybridize DWSS by coupling it with a Monte Carlo-based method which explores the neighborhood of the solution obtained deterministically. We call it the Randomized Weak Schur Series (RWSS) algorithm. As this algorithm requires more computational effort than DWSS, we are not able to compute partitions for K>10. We exceed our own DWSS results for K=8 to K=10.
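The weakly sum-free condition (no three distinct x, y, z in one part with x + y = z) is straightforward to verify. A small checker (our own illustration), shown on the classical partition of {1,…,8} into two weakly sum-free sets:

```python
def is_weakly_sum_free(part):
    """A set is weakly sum-free if it contains no three *distinct*
    elements x, y, z with x + y = z (x + x = z is allowed)."""
    s = sorted(part)
    return not any(
        x + y in part
        for i, x in enumerate(s)
        for y in s[i + 1:]  # x < y, so x, y and x + y are pairwise distinct
    )

def check_partition(parts, n):
    """Check that `parts` is a weakly sum-free partition of {1, ..., n}."""
    union = sorted(x for p in parts for x in p)
    return union == list(range(1, n + 1)) and all(
        is_weakly_sum_free(set(p)) for p in parts
    )
```

A checker like this certifies any candidate partition found by a search algorithm, whatever heuristic produced it; the hard part, which DWSS and RWSS address, is finding large n for which such a K-partition exists at all.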

Abstract: Publication date: July 2018. Source: Journal of Discrete Algorithms, Volume 51. Author(s): Jacqueline Banks, Scott M. Garrabrant, Mark L. Huber, Anne Perizzolo. A linear extension of a poset P is a permutation of the elements of the set that respects the partial order. Let L(P) be the set of linear extensions. It is a #P-complete problem to determine the size of L(P) exactly for an arbitrary poset, and so randomized approximation algorithms are used that employ samples drawn uniformly from the set of linear extensions. In this work, a new randomized approximation algorithm is proposed where the linear extensions are not drawn uniformly, but instead come from a distribution indexed by a continuous parameter β. The introduction of β allows the use of a more efficient method for approximating #L(P) called TPA. Our primary result is that it is possible to sample from this continuous embedding in time that is as fast as or faster than the best known methods for sampling uniformly from linear extensions. For a poset containing n elements, this means we can approximate #L(P) to within a factor of 1+ϵ with probability at least 1−δ using an expected O(n^3 (ln n)(ln #L(P))^2 ϵ^−2 ln δ^−1) number of random uniforms and comparisons.
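The quantity #L(P) being approximated can be counted by brute force for tiny posets; this exhaustive sketch (ours, not the paper's TPA-based estimator) makes the object concrete:

```python
from itertools import permutations

def count_linear_extensions(elems, relations):
    """Brute-force #L(P): count permutations of `elems` in which every
    order relation (a, b), meaning a < b in P, places a before b.
    Exponential in n; only illustrates what #L(P) counts."""
    count = 0
    for perm in permutations(elems):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count
```

An antichain on n elements has n! linear extensions and a chain has exactly one; it is the intermediate posets, where this count blows up, that motivate the randomized approximation schemes discussed above.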