Algorithmica
Journal Prestige (SJR): 0.56 · Citation Impact (CiteScore): 1 · Hybrid journal (may contain Open Access articles) · ISSN (Print) 0178-4617 · ISSN (Online) 1432-0541 · Published by Springer-Verlag
 Upward Book Embeddability of st-Graphs: Complexity and Algorithms

Abstract: A k-page upward book embedding (k-UBE) of a directed acyclic graph G is a book embedding of G on k pages with the additional requirement that the vertices appear in a topological ordering along the spine of the book. The k-UBE Testing problem, which asks whether a graph admits a k-UBE, was introduced in 1999 by Heath, Pemmaraju, and Trenk (SIAM J Comput 28(4), 1999). In a companion paper, Heath and Pemmaraju (SIAM J Comput 28(5), 1999) proved that the problem is linear-time solvable for \(k=1\) and NP-complete for \(k = 6\). Closing this gap has been a central question in algorithmic graph theory since then. In this paper, we make a major contribution towards a definitive answer to the above question by showing that k-UBE Testing is NP-complete for \(k\ge 3\), even for st-graphs, i.e., acyclic directed graphs with a single source and a single sink. Indeed, our result, together with a recent work of Bekos et al. (Theor Comput Sci 946, 2023) that proves the NP-completeness of 2-UBE for planar st-graphs, closes the question about the complexity of the k-UBE problem for every k. Motivated by this hardness result, we then focus on 2-UBE Testing for planar st-graphs. On the algorithmic side, we present an \(O(f(\beta )\cdot n+n^3)\)-time algorithm for 2-UBE Testing, where \(\beta \) is the branchwidth of the input graph and f is a singly-exponential function of \(\beta \). Since the treewidth and the branchwidth of a graph are within a constant factor of each other, this result immediately yields an FPT algorithm for st-graphs of bounded treewidth. Furthermore, we describe an O(n)-time algorithm to test whether a plane st-graph whose faces have a special structure admits a 2-UBE that additionally preserves the plane embedding of the input st-graph. On the combinatorial side, we present two notable families of plane st-graphs that always admit an embedding-preserving 2-UBE.
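To make the definition concrete, here is a small checker for a candidate k-UBE: it verifies that the spine order is a topological ordering and that no two edges assigned to the same page cross. The input format and function name are illustrative assumptions; the paper's hardness results concern deciding whether such an embedding exists at all.

```python
from itertools import combinations

def is_kube(edges, spine, page):
    """Check whether (spine, page) is a valid k-page upward book embedding.

    edges: list of directed edges (u, v); spine: vertex order along the spine;
    page: dict mapping each edge to its page index. Illustrative checker only.
    """
    pos = {v: i for i, v in enumerate(spine)}
    # Upward condition: the spine order must be a topological ordering.
    if any(pos[u] >= pos[v] for u, v in edges):
        return False
    # Planarity per page: no two edges on the same page may cross.
    for e, f in combinations(edges, 2):
        if page[e] != page[f]:
            continue
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        if a < c < b < d or c < a < d < b:  # strictly interleaved endpoints
            return False
    return True
```

For the diamond st-graph with edges s→a, s→b, a→t, b→t, the spine order s, a, b, t admits a 2-UBE, but putting all four edges on one page fails because (s,b) and (a,t) cross.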
PubDate: 2023-12-01

 The Online Broadcast Range-Assignment Problem

Abstract: Let \(P=\{p_0,\ldots ,p_{n-1}\}\) be a set of points in \({\mathbb R}^d\), modeling devices in a wireless network. A range assignment assigns a range \(r(p_i)\) to each point \(p_i\in P\), thus inducing a directed communication graph \(\mathcal {G}_r\) in which there is a directed edge \((p_i,p_j)\) iff \({{\,\textrm{dist}\,}}(p_i, p_j) \leqslant r(p_i)\), where \({{\,\textrm{dist}\,}}(p_i,p_j)\) denotes the distance between \(p_i\) and \(p_j\). The range-assignment problem is to assign the transmission ranges such that \(\mathcal {G}_r\) has a certain desirable property, while minimizing the cost of the assignment; here the cost is given by \(\sum _{p_i\in P} r(p_i)^{\alpha }\), for some constant \(\alpha >1\) called the distance-power gradient. We introduce the online version of the range-assignment problem, where the points \(p_j\) arrive one by one, and the range assignment has to be updated at each arrival. Following the standard in online algorithms, resources given out cannot be taken away: in our case this means that the transmission ranges never decrease. The property we want to maintain is that \(\mathcal {G}_r\) has a broadcast tree rooted at the first point \(p_0\). Our results include the following. We prove that already in \({\mathbb R}^1\), a 1-competitive algorithm does not exist. In particular, for distance-power gradient \(\alpha =2\) any online algorithm has competitive ratio at least 1.57. For points in \({\mathbb R}^1\) and \({\mathbb R}^2\), we analyze two natural strategies for updating the range assignment upon the arrival of a new point \(p_j\). The strategies do not change the assignment if \(p_j\) is already within range of an existing point; otherwise they increase the range of a single point, as follows: Nearest-Neighbor (nn) increases the range of \({{\,\textrm{nn}\,}}(p_j)\), the nearest neighbor of \(p_j\), to \({{\,\textrm{dist}\,}}(p_j, {{\,\textrm{nn}\,}}(p_j))\) ...
PubDate: 2023-12-01

 A Combinatorial Cut-Toggling Algorithm for Solving Laplacian Linear
Systems
Abstract: Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form \({{\textbf {L}}}{{\textbf {x}}} = {{\textbf {b}}}\), where \({{\textbf {L}}}\) is the Laplacian matrix of a weighted graph with weights \({w(i,j)}>{0}\) on the edges. The solution \({{\textbf {x}}}\) of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge (i, j) is 1/w(i, j). Kelner et al. (in: Proceedings of the 45th Annual ACM Symposium on the Theory of Computing, pp 911–920, 2013. https://doi.org/10.1145/2488608.2488724) give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law and gradually enforces the Kirchhoff Potential Law by updating flows around cycles (cycle toggling). In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law and gradually enforces the Kirchhoff Current Law by cut toggling: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations. We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem, which has been conjectured to be difficult for dynamic algorithms (Henzinger et al., in: Proceedings of the 47th Annual ACM Symposium on the Theory of Computing, pp 21–30, 2015. https://doi.org/10.1145/2746539.2746609). The conjecture implies that the data structure does not have an \(O(n^{1-\epsilon })\) time algorithm for any \(\epsilon > 0\), and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration. To circumvent the lower bound, we batch update steps and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an \({\widetilde{O}}(m^{1.5})\) time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to \(O(m^{1 + \epsilon })\) for any \(\epsilon > 0\). Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
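A toy version of a single cut-toggling step may help fix ideas: shifting all potentials on one side of a cut by a uniform amount trivially preserves the Kirchhoff Potential Law, and the shift can be chosen so that the net current across that cut matches the demand. This is a sketch under assumed NumPy interfaces; the paper's contribution is the iteration analysis and the batched data structures, not this step itself.

```python
import numpy as np

def cut_toggle(x, S, W, b):
    """One cut-toggling step: shift all potentials on side S of a cut so that
    the net current leaving S matches the demand b(S).

    x: NumPy potential vector; W: symmetric weight (conductance) matrix;
    b: demand vector; S: list of vertices on one side of the cut.
    """
    n = len(x)
    T = [v for v in range(n) if v not in S]
    w_cut = sum(W[i, j] for i in S for j in T)               # cut conductance
    cur = sum(W[i, j] * (x[i] - x[j]) for i in S for j in T)  # current out of S
    delta = (sum(b[i] for i in S) - cur) / w_cut
    x = x.copy()
    x[list(S)] += delta  # toggle the whole side at once
    return x
```

On a 3-vertex path with unit weights and demand b = (1, 0, -1), toggling the two fundamental cuts once each already solves Lx = b exactly, because the graph is a tree.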
PubDate: 2023-12-01

 The Subfield and Extended Codes of a Subclass of Optimal Three-Weight
Cyclic Codes
Abstract: A class of optimal three-weight \([q^k-1, k+1, q^{k-1}(q-1)-1]\) cyclic codes over \(\mathbb {F}_q\), with \(k\ge 2\), achieving the Griesmer bound, was presented by Heng and Yue (IEEE Trans Inf Theory 62(8):4501–4513, 2016. https://doi.org/10.1109/TIT.2016.2550029). In this paper we study some of the subfield codes of this class of optimal cyclic codes when \(k=2\). The weight distributions of the subfield codes are settled. It turns out that some of these codes are optimal and others have the best known parameters. The duals of the subfield codes are also investigated and found to be almost optimal with respect to the sphere-packing bound. In addition, the covering structure of the studied subfield codes is determined. Some of these codes are found to have the important property that every nonzero codeword is minimal, which is a desirable property that is useful in the design of a secret sharing scheme based on a linear code. Moreover, a specific example of a secret sharing scheme based on one of these subfield codes is given. Finally, a class of optimal two-weight linear codes over \(\mathbb {F}_q\), achieving the Griesmer bound, whose duals are almost optimal with respect to the sphere-packing bound, is presented. Through a different approach, this class of optimal two-weight linear codes was reported very recently by Heng (IEEE Trans Inf Theory 69(2):978–994, 2023. https://doi.org/10.1109/TIT.2022.3203380). Furthermore, it is shown that these optimal codes can be used to construct strongly regular graphs.
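The Griesmer bound referenced above is easy to check numerically; the sketch below verifies that the cited parameters attain it (a sanity check for the reader, not part of the paper):

```python
from math import ceil

def griesmer_bound(q, k, d):
    """Griesmer lower bound on the length n of a q-ary [n, k, d] linear code:
    n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum(ceil(d / q**i) for i in range(k))

def meets_griesmer(q, k):
    """Check that the [q^k - 1, k + 1, q^(k-1)(q-1) - 1] parameters from the
    abstract attain the Griesmer bound for small q and k."""
    n, dim, d = q**k - 1, k + 1, q**(k - 1) * (q - 1) - 1
    return n == griesmer_bound(q, dim, d)
```

For instance, q = 3, k = 2 gives an [8, 3, 5] code, and indeed 5 + 2 + 1 = 8.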
PubDate: 2023-12-01

 Inapproximability of Positive Semidefinite Permanents and Quantum State
Tomography
Abstract: Matrix permanents are hard to compute or even estimate in general. It had been previously suggested that the permanents of Positive Semidefinite (PSD) matrices may have efficient approximations. By relating PSD permanents to a task in quantum state tomography, we show that PSD permanents are NP-hard to approximate within a constant factor, and so admit no polynomial-time approximation scheme (unless P = NP). We also establish that several natural tasks in quantum state tomography, even approximately, are NP-hard in the dimension of the Hilbert space. These state tomography tasks therefore remain hard even with only logarithmically few qubits.
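For context, the permanent is the determinant's sum-over-permutations without signs; the sketch below computes it exactly for tiny matrices (exponential time, which is precisely why approximation is the interesting question):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Exact permanent by summing over all permutations: O(n! * n) time,
    fine only for tiny matrices. Works for PSD and general matrices alike."""
    n = len(A)
    return sum(
        prod(A[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )
```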
PubDate: 2023-12-01

 Peak Demand Minimization via Sliced Strip Packing

Abstract: We study the Non-preemptive Peak Demand Minimization (NPDM) problem, where we are given a set of jobs, specified by their processing times and energy requirements. The goal is to schedule all jobs within a fixed time period such that the peak load (the maximum total energy requirement at any time) is minimized. This problem has recently received significant attention due to its relevance in smart grids. Theoretically, the problem is related to the classical strip packing problem (SP). In SP, a given set of axis-aligned rectangles must be packed into a fixed-width strip such that the height of the strip is minimized. NPDM can be modeled as strip packing with a slicing and a stacking constraint: each rectangle may be cut vertically into multiple slices, and the slices may be packed into the strip as individual pieces. The stacking constraint forbids solutions in which two slices of the same rectangle are intersected by the same vertical line. Non-preemption forces the slices to be placed at contiguous horizontal positions (but they may be placed at different vertical positions). We obtain a \((5/3+\varepsilon )\)-approximation algorithm for the problem. We also provide an asymptotic efficient polynomial-time approximation scheme (AEPTAS) which generates a schedule for almost all jobs with energy consumption \((1+\varepsilon ) {\textrm{OPT}}\); the remaining jobs fit into a thin container of height 1. This AEPTAS is used as a subroutine to obtain the \((5/3+\varepsilon )\)-approximation algorithm. The previous best result for NPDM was a 2.7-approximation based on FFDH (Ranjan et al., in: 2015 IEEE Symposium on Computers and Communication (ISCC), pp 758–763, IEEE, 2015). One of our key ideas is providing several new lower bounds on the optimal solution of a geometric packing, which could be useful in other related problems. These lower bounds help us to obtain approximate solutions based on Steinberg's algorithm in many cases. In addition, we show how to split schedules generated by the AEPTAS into few segments and to rearrange the corresponding jobs to insert the thin container mentioned above, such that the bound of \((5/3+\varepsilon ) {\textrm{OPT}}\) is not exceeded.
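For reference, the FFDH (First-Fit Decreasing Height) heuristic behind the earlier 2.7-approximation can be sketched for classical strip packing as follows; this shelf-based sketch ignores the slicing and stacking constraints that distinguish NPDM:

```python
def ffdh(rects, strip_width):
    """First-Fit Decreasing Height for classical strip packing.

    rects: list of (width, height) pairs. Rectangles are placed on shelves in
    order of decreasing height; each goes on the first shelf with enough
    remaining width. Returns the total strip height used.
    """
    shelves = []  # (remaining_width, shelf_height) per shelf
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for i, (rem, sh) in enumerate(shelves):
            if w <= rem:  # first shelf with room
                shelves[i] = (rem - w, sh)
                break
        else:
            shelves.append((strip_width - w, h))  # open a new shelf
    return sum(sh for _, sh in shelves)
```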
PubDate: 2023-12-01

 Opinion Dynamics with Limited Information

Abstract: We study opinion formation games based on the famous model proposed by Friedkin and Johnsen (FJ model). In today's huge social networks the assumption that in each round agents update their opinions by taking into account the opinions of all their friends is unrealistic. So, we are interested in the convergence properties of simple and natural variants of the FJ model that use limited information exchange in each round and converge to the same stable point. As in the FJ model, we assume that each agent i has an intrinsic opinion \(s_i \in [0,1]\) and maintains an expressed opinion \(x_i(t) \in [0,1]\) in each round t. To model limited information exchange, we consider an opinion formation process where each agent i meets one random friend j in each round t and learns only her current opinion \(x_j(t)\). The amount of influence j imposes on i is reflected by the probability \(p_{ij}\) with which i meets j. Then, agent i suffers a disagreement cost that is a convex combination of \((x_i(t) - s_i)^2\) and \((x_i(t) - x_j(t))^2\). An important class of dynamics in this setting are no-regret dynamics, i.e. dynamics that ensure vanishing regret against the experienced disagreement cost to the agents. We show an exponential gap between the convergence rate of no-regret dynamics and that of more general dynamics that do not ensure no regret. We prove that no-regret dynamics require roughly \(\varOmega (1/\varepsilon )\) rounds to be within distance \(\varepsilon \) from the stable point of the FJ model. On the other hand, we provide an opinion update rule that does not ensure no regret and converges to \(x^*\) in \(\tilde{O}(\log ^2(1/\varepsilon ))\) rounds. Finally, in our variant of the FJ model, we show that the agents can adopt a simple opinion update rule that ensures no regret with respect to the experienced disagreement cost and results in an opinion vector that converges within distance \(\varepsilon \) of the stable point \(x^*\) of the FJ model in \(\textrm{poly}(1/\varepsilon )\) rounds. In view of our lower bound for no-regret dynamics, this rate of convergence is close to best possible.
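Minimizing the stated disagreement cost myopically gives a simple limited-information update, x_i ← α·s_i + (1−α)·x_j. Below is a toy deterministic simulation of that rule, with fixed meeting partners standing in for the paper's random meetings; this is an illustration of the update, not any of the paper's analyzed dynamics.

```python
def fj_limited_info(s, friend, alpha=0.5, rounds=100):
    """Toy limited-information FJ simulation: each agent i sees one friend
    friend[i] per round and plays the minimizer of
    alpha*(x_i - s_i)^2 + (1 - alpha)*(x_i - x_j)^2,
    i.e. x_i <- alpha*s_i + (1 - alpha)*x_j.

    s: intrinsic opinions; friend[i]: i's (fixed) partner each round.
    """
    x = list(s)
    for _ in range(rounds):
        x = [alpha * s[i] + (1 - alpha) * x[friend[i]]
             for i in range(len(s))]
    return x
```

For two agents with intrinsic opinions 0 and 1 who always meet each other, these dynamics converge to the stable point (1/3, 2/3).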
PubDate: 2023-12-01

 On Colorful Vertex and Edge Cover Problems

Abstract: In this paper, we study two generalizations of Vertex Cover and Edge Cover, namely Colorful Vertex Cover and Colorful Edge Cover. In the Colorful Vertex Cover problem, given an n-vertex edge-colored graph G with colors from \(\{1, \ldots , \omega \}\) and coverage requirements \(r_1, r_2, \ldots , r_\omega \), the goal is to find a minimum-sized set of vertices that are incident on at least \(r_i\) edges of color i, for each \(1 \le i \le \omega \), i.e., we need to cover at least \(r_i\) edges of color i. Colorful Edge Cover is similar to Colorful Vertex Cover, except that here we are given a vertex-colored graph and the goal is to cover at least \(r_i\) vertices of color i, for each \(1 \le i \le \omega \), by a minimum-sized set of edges. These problems have several applications in fair covering and hitting of geometric set systems involving points and lines that are divided into multiple groups. Here, "fairness" ensures that the coverage (resp. hitting) requirement of every group is fully satisfied. We obtain a \((2+\epsilon )\)-approximation for the Colorful Vertex Cover problem in time \(n^{O(\omega /\epsilon )}\). Thus, for a constant number of colors, the problem admits a \((2+\epsilon )\)-approximation in polynomial time. Next, for the Colorful Edge Cover problem, we design an \(O(\omega n^3)\)-time exact algorithm, via a chain of reductions to a matching problem. For all intermediate problems in this chain of reductions, we design polynomial-time algorithms, which might be of independent interest.
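A brute-force baseline makes the Colorful Vertex Cover objective concrete (exponential time over all vertex subsets; the abstract's algorithms are the interesting part):

```python
from itertools import combinations

def colorful_vertex_cover_brute(edges, colors, req, n):
    """Smallest vertex set covering at least req[c] edges of each color c,
    found by brute force over subsets in increasing size.

    edges: list of (u, v) on vertices 0..n-1; colors: parallel list of edge
    colors; req: dict color -> required number of covered edges.
    """
    def covered(S):
        cnt = {}
        for (u, v), c in zip(edges, colors):
            if u in S or v in S:
                cnt[c] = cnt.get(c, 0) + 1
        return all(cnt.get(c, 0) >= r for c, r in req.items())

    for k in range(n + 1):
        for S in map(set, combinations(range(n), k)):
            if covered(S):
                return S
    return None
```

On a triangle with two edges of color 1 and one of color 2 and requirements r_1 = 2, r_2 = 1, no single vertex suffices, but two do.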
PubDate: 2023-12-01

 Unique Response Roman Domination: Complexity and Algorithms

Abstract: A function \(f :V(G) \rightarrow \{0, 1, 2\}\) is called a Roman dominating function on \(G=(V(G),E(G))\) if for every vertex v with \(f(v) = 0\), there exists a vertex \(u\in N_G(v)\) such that \(f(u) = 2\). A function \(f :V(G) \rightarrow \{0, 1, 2\}\) induces an ordered partition \((V_0,V_1,V_2)\) of V(G), where \(V_i=\{v\in V(G):f(v)=i\}\) for \(i\in \{0,1,2\}\). A function \(f :V(G) \rightarrow \{0, 1, 2\}\) with ordered partition \((V_0, V_1, V_2)\) is called a unique response Roman function if for every vertex v with \(f(v)=0\), \(|N_G(v)\cap V_2| \le 1\), and for every vertex v with \(f(v)=1\) or 2, \(|N_G(v)\cap V_2| = 0\). A function \(f :V(G) \rightarrow \{0, 1, 2\}\) is called a unique response Roman dominating function (URRDF) on G if it is both a unique response Roman function and a Roman dominating function on G. The weight of a unique response Roman dominating function f is the sum \(f(V(G))=\sum _{v\in V(G)}f(v)\), and the minimum weight of a unique response Roman dominating function on G is called the unique response Roman domination number of G, denoted by \(u_{R}(G)\). Given a graph G, the MinURRDF problem asks to find a unique response Roman dominating function of minimum weight on G. In this paper, we study the algorithmic aspects of MinURRDF. We show that the decision version of MinURRDF remains NP-complete for chordal graphs and bipartite graphs. We show that for a given graph with n vertices, MinURRDF cannot be approximated within a ratio of \(n^{1-\varepsilon }\) for any \(\varepsilon >0\) unless \(\mathsf {P=NP}\). We also show that MinURRDF can be approximated within a factor of \(\varDelta +1\) for graphs having maximum degree \(\varDelta \). On the positive side, we design a line...
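The URRDF conditions translate directly into a simple verifier: combining Roman domination with the unique response condition, a vertex labelled 0 must see exactly one neighbour in V_2, and a vertex labelled 1 or 2 must see none. A sketch with an illustrative adjacency-dict interface:

```python
def is_urrdf(adj, f):
    """Check that f: V -> {0, 1, 2} is a unique response Roman dominating
    function, per the definitions in the abstract.

    adj: dict vertex -> set of neighbours; f: dict vertex -> label.
    """
    for v, nbrs in adj.items():
        twos = sum(1 for u in nbrs if f[u] == 2)
        if f[v] == 0:
            # Roman domination (some neighbour in V_2) plus unique
            # response (at most one) means exactly one.
            if twos != 1:
                return False
        else:
            # Vertices labelled 1 or 2 must see no neighbour in V_2.
            if twos != 0:
                return False
    return True
```

On the path 0-1-2, labelling the middle vertex 2 and the endpoints 0 is a URRDF of weight 2; the all-zero labelling is not.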
PubDate: 2023-12-01

 Efficiently Approximating Vertex Cover on Scale-Free Networks with
Underlying Hyperbolic Geometry
Abstract: Finding a minimum vertex cover in a network is a fundamental NP-complete graph problem. One way to deal with its computational hardness is to trade the qualitative performance of an algorithm (allowing non-optimal outputs) for an improved running time. For the vertex cover problem, there is a gap between theory and practice when it comes to understanding this trade-off. On the one hand, it is known that it is NP-hard to approximate a minimum vertex cover within a factor of \(\sqrt{2}\). On the other hand, a simple greedy algorithm yields close to optimal approximations in practice. A promising approach towards understanding this discrepancy is to recognize the differences between theoretical worst-case instances and real-world networks. Following this direction, we narrow the gap between theory and practice by providing an algorithm that efficiently computes nearly optimal vertex cover approximations on hyperbolic random graphs, a network model that closely resembles real-world networks in terms of degree distribution, clustering, and the small-world property. More precisely, our algorithm computes a \((1 + o(1))\)-approximation, asymptotically almost surely, and has a running time of \({\mathcal {O}}(m \log (n))\). The proposed algorithm is an adaptation of the successful greedy approach, enhanced with a procedure that improves on parts of the graph where greedy is not optimal. This makes it possible to introduce a parameter that can be used to tune the trade-off between approximation performance and running time. Our empirical evaluation on real-world networks shows that this allows for improving over the near-optimal results of the greedy approach.
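The "simple greedy algorithm" mentioned above is the degree-greedy heuristic: repeatedly take the highest-degree remaining vertex. A compact sketch (the paper's algorithm augments this with an improvement procedure):

```python
def greedy_vertex_cover(adj):
    """Degree-greedy vertex cover: repeatedly add the vertex of highest
    remaining degree and delete its incident edges.

    adj: dict vertex -> set of neighbours (an undirected graph); works on a
    local copy so the caller's graph is untouched.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    cover = []
    while any(adj.values()):
        v = max(adj, key=lambda u: len(adj[u]))  # highest remaining degree
        cover.append(v)
        for u in adj[v]:
            adj[u].discard(v)  # remove v's edges from the graph
        adj[v] = set()
    return cover
```

On a star, greedy immediately picks the centre and returns an optimal cover of size 1.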
PubDate: 2023-12-01

 Probabilistic Analysis of Optimization Problems on Sparse Random Shortest
Path Metrics
Abstract: Simple heuristics for (combinatorial) optimization problems often show a remarkable performance in practice. Worst-case analysis often falls short of explaining this performance. Because of this, "beyond worst-case analysis" of algorithms has recently gained a lot of attention, including probabilistic analysis of algorithms. The instances of many (combinatorial) optimization problems are essentially a discrete metric space. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean. Little work has been done on metric instances drawn from other, more realistic, distributions. Some initial results have been obtained in recent years, where random shortest path metrics generated from dense graphs (either complete graphs or Erdős–Rényi random graphs) have been used so far. In this paper we extend these findings to sparse graphs, with a focus on sparse graphs with 'fast growing cut sizes', i.e. graphs for which \(|\delta (U)| =\Omega (|U|^\varepsilon )\) for some constant \(\varepsilon \in (0,1)\) for all subsets U of the vertices, where \(\delta (U)\) is the set of edges connecting U to the remaining vertices. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances generated from a sparse graph with fast growing cut sizes, we prove that the greedy heuristic for the minimum distance maximum matching problem, and the nearest neighbor and insertion heuristics for the traveling salesman problem, all achieve a constant expected approximation ratio. Additionally, for instances generated from an arbitrary sparse graph, we show that the 2-opt heuristic for the traveling salesman problem also achieves a constant expected approximation ratio.
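As an illustration of one of the analyzed heuristics, here is the nearest-neighbor tour construction on an arbitrary finite metric given as a distance matrix (the metric's random shortest-path construction is orthogonal to this sketch):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor TSP heuristic: from the current city, always move to
    the closest unvisited city, then return to the start.

    dist: symmetric distance matrix. Returns (tour, tour_length).
    """
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda v: dist[cur][v])  # greedy choice
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # close the tour
    return tour, sum(dist[tour[i]][tour[i + 1]] for i in range(n))
```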
PubDate: 2023-12-01

 Algebraic Restriction Codes and Their Applications

Abstract: Consider the following problem: You have a device that is supposed to compute a linear combination of its inputs, which are taken from some finite field. However, the device may be faulty and compute arbitrary functions of its inputs. Is it possible to encode the inputs in such a way that only linear functions can be evaluated over the encodings? That is, learning an arbitrary function of the encodings should not reveal more information about the inputs than a linear combination. In this work, we introduce the notion of algebraic restriction codes (AR codes), which constrain adversaries who might compute any function to computing a linear function. Our main result is an information-theoretic construction of AR codes that restricts any class of functions with a bounded number of output bits to linear functions. Our construction relies on a seed which is not provided to the adversary. While interesting and natural on its own, we show an application of this notion in cryptography. In particular, we show that AR codes lead to the first construction of rate-1 oblivious transfer with statistical sender security from the Decisional Diffie–Hellman assumption, and the first such construction that makes black-box use of cryptography. Previously, such protocols were known only from the LWE assumption, using non-black-box cryptographic techniques. We expect our new notion of AR codes to find further applications, e.g., in the context of non-malleability, in the future.
PubDate: 2023-12-01

 Even More Effort Towards Improved Bounds and Fixed-Parameter Tractability
for Multiwinner Rules
Abstract: Multiwinner elections have proven to be a fruitful research topic with many real-world applications. We contribute to this line of research by improving the state of the art regarding the computational complexity of computing good committees. More formally, given a set of candidates \(\mathcal{C}\), a set of voters \(\mathcal{V}\), each ranking the candidates according to their preferences, and an integer k, a multiwinner voting rule identifies a k-sized committee based on these given voter preferences. In this paper we consider several utilitarian and egalitarian ordered weighted average scoring rules, which are an extensively researched family of rules (and a subfamily of the family of committee scoring rules). First, we improve the result of Betzler et al. (JAIR 47:475–519, 2013), which gave an \({\mathcal {O}}(n^n)\)-time algorithm for computing a winner under the Chamberlin–Courant rule, where n is the number of voters, to a running time of \({\mathcal {O}}(2^n)\), which is optimal. Furthermore, we study the parameterized complexity of the Pessimist voting rule and describe a few tractable and intractable cases. Apart from such utilitarian voting rules, we extend our study and consider egalitarian median and egalitarian mean (both committee scoring rules), showing some tractable and intractable results, based on nontrivial structural observations.
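To make the Chamberlin–Courant objective concrete, here is a brute-force winner computation using Borda satisfaction scores (an exponential-in-candidates baseline; the abstract's contribution is improving the exponential-in-voters running time):

```python
from itertools import combinations

def chamberlin_courant(prefs, k):
    """Brute-force Chamberlin-Courant winner: the k-sized committee that
    maximizes total satisfaction, where a voter's satisfaction is the Borda
    score (m - 1 - rank) of her best-ranked committee member.

    prefs: list of rankings (best first) over candidates 0..m-1.
    """
    m = len(prefs[0])

    def score(committee):
        total = 0
        for ranking in prefs:
            best_rank = min(ranking.index(c) for c in committee)
            total += m - 1 - best_rank  # Borda satisfaction
        return total

    return max(combinations(range(m), k), key=score)
```

With two voters ranking 0 > 1 > 2 and one voter ranking 2 > 1 > 0, the best 2-committee is {0, 2}: every voter gets her favourite as a representative.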
PubDate: 2023-12-01

 Best-of-Both-Worlds Analysis of Online Search

Abstract: In search problems, a mobile searcher seeks to locate a target that hides in some unknown position of the environment. Such problems are typically considered to be of an online nature, in that the target's position is unknown to the searcher, and the performance of a search strategy is usually analyzed by means of the standard framework of the competitive ratio, which compares the cost incurred by the searcher to that of an optimal strategy that knows the location of the target. However, one can argue that even for simple search problems, competitive analysis fails to distinguish between strategies which, intuitively, should have different performance in practice. Motivated by the above observation, in this work we introduce and study measures supplementary to competitive analysis in the context of search problems. In particular, we focus on the well-known problem of linear search, informally known as the cow-path problem, for which there are infinitely many strategies that achieve an optimal competitive ratio equal to 9. We propose a measure that reflects the rate at which the line is being explored by the searcher, and which can be seen as an extension of the bijective ratio over an uncountable set of requests. Using this measure we show that a natural strategy that explores the line aggressively is optimal among all 9-competitive strategies. This provides, in particular, a strict separation from the competitively optimal doubling strategy, which is much more conservative in terms of exploration. We also provide evidence that this aggressiveness is requisite for optimality, by showing that any optimal strategy must mimic the aggressive strategy in its first few explorations.
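The doubling strategy mentioned above is easy to simulate; the sketch below computes its search cost for a target at distance at least 1 from the origin, and its cost never exceeds 9 times the target distance:

```python
def doubling_cost(target):
    """Total distance walked by the classic doubling strategy on the line
    before reaching a target at position `target` (with |target| >= 1):
    explore to +1, return to the origin, explore to -2, return, to +4, ...
    The worst-case ratio cost/|target| approaches the competitive ratio 9.
    """
    cost, step, sign = 0.0, 1.0, 1.0
    while True:
        if sign * target > 0 and abs(target) <= step:
            return cost + abs(target)  # reached during the current sweep
        cost += 2 * step  # out to the turn point and back to the origin
        step *= 2
        sign = -sign
```

A target at +1 is found at cost 1; a target at -2 costs 2 (first sweep) + 2 = 4.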
PubDate: 2023-12-01

 Comparison of Matrix Norm Sparsification

Abstract: A well-known approach in the design of efficient algorithms, called matrix sparsification, approximates a matrix A with a sparse matrix \(A'\). Achlioptas and McSherry (J ACM 54(2):9-es, 2007) initiated a long line of work on spectral-norm sparsification, which aims to guarantee that \(\Vert A'-A\Vert \le \epsilon \Vert A\Vert \) for error parameter \(\epsilon >0\). Various forms of matrix approximation motivate considering this problem with a guarantee according to the Schatten p-norm for general p, which includes the spectral norm as the special case \(p=\infty \). We investigate the relation between fixed but different \(p\ne q\), that is, whether sparsification in the Schatten p-norm implies (existentially and/or algorithmically) sparsification in the Schatten q-norm with similar sparsity. An affirmative answer could be tremendously useful, as it would identify which value of p to focus on. Our main finding is a surprising contrast between this question and the analogous case of \(\ell _p\)-norm sparsification for vectors: For vectors, the answer is affirmative for \(p<q\) and negative for \(p>q\), but for matrices we answer negatively for almost all sufficiently distinct \(p\ne q\). In addition, our explicit constructions may be of independent interest.
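A quick reminder of the central definition: the Schatten p-norm of a matrix is the l_p norm of its singular values, with p = ∞ recovering the spectral norm. A small NumPy sketch:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: the l_p norm of the singular values of A.
    p = inf gives the spectral norm; p = 1 the trace (nuclear) norm;
    p = 2 the Frobenius norm."""
    s = np.linalg.svd(A, compute_uv=False)
    if np.isinf(p):
        return float(s.max())
    return float((s ** p).sum() ** (1.0 / p))
```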
PubDate: 2023-12-01

 Deterministic Dynamic Matching in Worst-Case Update Time

Abstract: We present deterministic algorithms for maintaining a \((3/2 + \epsilon )\)- and a \((2 + \epsilon )\)-approximate maximum matching in a fully dynamic graph with worst-case update times \({\hat{O}}(\sqrt{n})\) and \({\tilde{O}}(1)\), respectively. The fastest known deterministic worst-case update time algorithms for achieving approximation ratio \((2 - \delta )\) (for any \(\delta > 0\)) and \((2 + \epsilon )\) were both shown by Roghani et al. (Beating the folklore algorithm for dynamic matching, 2021) with update times \(O(n^{3/4})\) and \(O_\epsilon (\sqrt{n})\), respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are \(O_\epsilon (\sqrt{n})\) and \({\tilde{O}}(1)\), which were shown in Bernstein and Stein (in: Proceedings of the twenty-seventh annual ACM-SIAM symposium on discrete algorithms, 2016) and Bhattacharya and Kiss (in: 48th international colloquium on automata, languages, and programming, ICALP 2021, 12–16 July, Glasgow, 2021), respectively. The algorithm achieving \((3/2 + \epsilon )\)-approximation builds on the EDCS concept introduced by the influential paper of Bernstein and Stein (in: International colloquium on automata, languages, and programming, Springer, Berlin, 2015). Say that H is an \((\alpha , \delta )\)-approximate matching sparsifier if at all times H satisfies \(\mu (H) \cdot \alpha + \delta \cdot n \ge \mu (G)\) (define \((\alpha , \delta )\)-approximation similarly for matchings). We show how to maintain a locally damaged version of the EDCS which is a \((3/2 + \epsilon , \delta )\)-approximate matching sparsifier. We further show how to reduce the maintenance of an \(\alpha \)-approximate maximum matching to the maintenance of an \((\alpha , \delta )\)-approximate maximum matching, based on an observation of Assadi et al. (in: Proceedings of the twenty-seventh annual ACM-SIAM symposium on discrete algorithms, SODA 2016, Arlington, VA, USA, January 10–12, 2016). Our reduction requires an update time blow-up of \({\hat{O}}(1)\) or \({\tilde{O}}(1)\) and is deterministic or randomized against an adaptive adversary, respectively. To achieve \((2 + \epsilon )\)-approximation we improve on the update time guarantee of an algorithm of Bhattacharya and Kiss (in: 48th International colloquium on automata, languages, and programming, ICALP 2021, 12–16 July, Glasgow, 2021). In order to achieve both results we explicitly state a method implicitly used in Nanongkai and Saranurak (in: Proceedings of the annual ACM symposium on theory of computing, 2017) and Bernstein et al. (Fully-dynamic graph sparsifiers against an adaptive adversary, 2020) which allows one to transform dynamic algorithms capable of proce...
PubDate: 2023-12-01
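The EDCS condition mentioned in the abstract can be stated concretely. The following is a minimal sketch (not the paper's maintenance algorithm) of the static definition from Bernstein and Stein: a subgraph H of G is a beta-EDCS if every edge of H has degree sum at most beta in H, while every edge of G missing from H has degree sum at least beta*(1 - lam) in H. The parameter names `beta` and `lam` and the edge-list representation are illustrative assumptions.

```python
from collections import Counter


def is_edcs(g_edges, h_edges, beta, lam):
    """Check whether h_edges forms a (beta, lam)-EDCS of g_edges.

    Edges are undirected, given as (u, v) pairs; this is a static
    checker only, not the dynamic maintenance procedure of the paper.
    """
    deg = Counter()
    for u, v in h_edges:
        deg[u] += 1
        deg[v] += 1
    # Bounded-degree condition on edges inside H.
    for u, v in h_edges:
        if deg[u] + deg[v] > beta:
            return False
    # Edge-degree lower bound on edges of G left out of H.
    h_set = {frozenset(e) for e in h_edges}
    for u, v in g_edges:
        if frozenset((u, v)) not in h_set:
            if deg[u] + deg[v] < beta * (1 - lam):
                return False
    return True
```

For a path 1-2-3-4 taken entirely into H, the degree sums are at most 4, so it is a (4, lam)-EDCS for any lam, but not a (3, lam)-EDCS since the middle edge has degree sum 4.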

 Finding Geometric Facilities with Location Privacy

Abstract: We examine the problem of discovering the set P of points in a given topology that constitutes a k-median set for that topology, while maintaining location privacy. That is, there exists a set U of points in a d-dimensional topology for which a k-median set must be found by some algorithm A, without disclosing the location of points in U to the executor of A. We define a privacy-preserving data model for a coordinate system we call a "Topology Descriptor Grid", and show how it can be used to find the rectilinear 1-median of the system and a constant-factor approximation for the Euclidean 1-median. We achieve a constant-factor approximation for the rectilinear 2-median of a grid topology. Additionally, we show upper and lower bounds for the k-center problem.
PubDate: 2023-12-01
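As background for the rectilinear 1-median the abstract refers to: without any privacy constraint, the L1 objective separates per coordinate, and the one-dimensional minimizer is any median, so the rectilinear 1-median can be read off coordinate-wise. This is a sketch of that non-private baseline only; the paper's contribution is computing it over the "Topology Descriptor Grid" without disclosing point locations.

```python
def rectilinear_1_median(points):
    """Coordinate-wise median of a list of d-dimensional tuples.

    Minimizes the sum of L1 (rectilinear) distances to the points;
    for even counts, any value between the two middle coordinates is
    optimal, and we pick the upper median.
    """
    d = len(points[0])
    med = []
    for i in range(d):
        coords = sorted(p[i] for p in points)
        med.append(coords[len(coords) // 2])
    return tuple(med)
```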

 Approximation Algorithms for the Min–Max Mixed Rural Postmen Cover
Problem and Its Variants
Abstract: In this work, we introduce a multi-vehicle (or multi-postman) extension of the classical Mixed Rural Postman Problem, which we call the Min–Max Mixed Rural Postmen Cover Problem (MRPCP). The MRPCP is defined on a mixed graph \(G=(V,E,A)\) , where V is the vertex set, E denotes the (undirected) edge set and A represents the (directed) arc set. Let \(F\subseteq E\) ( \(H\subseteq A\) ) be the set of required edges (required arcs). There is a nonnegative weight associated with each edge and arc. The objective is to determine no more than k closed walks to cover all the required edges in F and all the required arcs in H such that the weight of the maximum-weight closed walk is minimized. By replacing closed walks with (open) walks in the MRPCP, we obtain the Min–Max Mixed Rural Postmen Walk Cover Problem (MRPWCP). The Min–Max Mixed Chinese Postmen Cover Problem (MCPCP) is a special case of the MRPCP where \(F=E\) and \(H=A\) . The Min–Max Stacker Crane Cover Problem (SCCP) is another special case of the MRPCP where \(F=\emptyset \) and \(H=A\) . For the MRPCP with the input graph satisfying the weakly symmetric condition, i.e., for each arc there exists a parallel edge whose weight is not greater than that of the arc, we devise a \(\frac{27}{4}\) -approximation algorithm. This algorithm achieves an approximation ratio of \(\frac{33}{5}\) for the SCCP with the weakly symmetric condition. Moreover, we obtain the first 5-approximation algorithm (4-approximation algorithm) for the MRPWCP (MCPCP) with the weakly symmetric condition.
PubDate: 2023-11-23
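The weakly symmetric condition that all the approximation guarantees above assume is easy to state as code. The following is a small illustrative checker, not anything from the paper; the data layout (an undirected edge weight map keyed by endpoint pairs, a directed arc weight map) is an assumption made for the sketch.

```python
def weakly_symmetric(edges, arcs):
    """Check the weakly symmetric condition on a mixed graph.

    edges: {frozenset({u, v}): weight} for undirected edges.
    arcs:  {(u, v): weight} for directed arcs.
    Holds iff every arc has a parallel edge of weight at most the
    arc's weight.
    """
    return all(
        frozenset((u, v)) in edges and edges[frozenset((u, v))] <= w
        for (u, v), w in arcs.items()
    )
```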

 On Maximizing Sums of Nonmonotone Submodular and Linear Functions

Abstract: We study the problem of Regularized Unconstrained Submodular Maximization (RegularizedUSM) as defined by Bodek and Feldman (Maximizing sums of nonmonotone submodular and linear functions: understanding the unconstrained case, arXiv:2204.03412, 2022): given query access to a nonnegative submodular function \(f:2^{{\mathcal {N}}}\rightarrow {\mathbb {R}}_{\ge 0}\) and a linear function \(\ell :2^{{\mathcal {N}}}\rightarrow {\mathbb {R}}\) over the same ground set \({\mathcal {N}}\) , output a set \(T\subseteq {\mathcal {N}}\) approximately maximizing the sum \(f(T)+\ell (T)\) . An algorithm is said to provide an \((\alpha ,\beta )\)-approximation for RegularizedUSM if it outputs a set T such that \({\mathbb {E}}[f(T)+\ell (T)]\ge \max _{S\subseteq {\mathcal {N}}}[\alpha \cdot f(S)+\beta \cdot \ell (S)]\) . We also consider the setting where S and T are constrained to be independent in a given matroid, which we refer to as Regularized Constrained Submodular Maximization (RegularizedCSM). The special case of RegularizedCSM with monotone f has been extensively studied (Sviridenko et al. in Math Oper Res 42(4):1197–1218, 2017; Feldman in Algorithmica 83(3):853–878, 2021; Harshaw et al., in: International conference on machine learning, PMLR, 2634–2643, 2019), whereas we are aware of only one prior work that studies RegularizedCSM with nonmonotone f (Lu et al. in Optimization 1–27, 2023), and that work constrains \(\ell \) to be nonpositive. In this work, we provide improved \((\alpha ,\beta )\)-approximation algorithms for both RegularizedUSM and RegularizedCSM with nonmonotone f. 
Specifically, we are the first to provide nontrivial \((\alpha ,\beta )\)-approximations for RegularizedCSM where the sign of \(\ell \) is unconstrained, and the \(\alpha \) we obtain for RegularizedUSM improves over that of Bodek and Feldman (Maximizing sums of nonmonotone submodular and linear functions: understanding the unconstrained case, arXiv:2204.03412, 2022) for all \(\beta \in (0,1)\) . We also prove new inapproximability results for RegularizedUSM and RegularizedCSM, as well as 0.478-inapproximability for maximizing a submodular function where S and T are subject to a cardinality constraint, improving a 0.491-inapproximability result due to Oveis Gharan and Vondrak (in: Proceedings of the twenty-second annual ACM-SIAM symposium on discrete algorithms, SIAM, pp 1098–111...
PubDate: 2023-11-13
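To make the RegularizedUSM objective concrete on a toy instance, here is a brute-force sketch: it enumerates all subsets of a tiny ground set and returns the exact maximizer of f(T) + l(T). The instance below (a coverage function for f, a signed linear function for ell) is entirely made up, and a coverage function is monotone, so this only illustrates the objective, not the nonmonotone setting or any algorithm from the paper.

```python
from itertools import combinations


def regularized_usm_brute(ground, f, ell):
    """Exact max of f(T) + sum of ell over T by full enumeration.

    Exponential in |ground|; intended only for tiny illustrative
    instances, since the paper's algorithms use polynomially many
    value queries instead.
    """
    best, best_set = float("-inf"), frozenset()
    for r in range(len(ground) + 1):
        for subset in combinations(ground, r):
            t = frozenset(subset)
            val = f(t) + sum(ell[e] for e in t)
            if val > best:
                best, best_set = val, t
    return best, best_set
```

On the toy instance with coverage sets a -> {1, 2}, b -> {2, 3}, c -> {4} and ell = {a: -1, b: 0.5, c: -2}, the maximizer is {b} with value 2.5.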

 Near-Optimal Search Time in $$\delta $$-Optimal Space, and Vice Versa

Abstract: Two recent lower bounds on the compressibility of repetitive sequences, \(\delta \le \gamma \) , have received much attention. It has been shown that a length-n string S over an alphabet of size \(\sigma \) can be represented within the optimal \(O(\delta \log \tfrac{n\log \sigma }{\delta \log n})\) space, and further, that within that space one can find all the occ occurrences in S of any length-m pattern in time \(O(m\log n + occ \log ^\epsilon n)\) for any constant \(\epsilon >0\) . Instead, the near-optimal search time \(O(m+({occ+1})\log ^\epsilon n)\) has been achieved only within \(O(\gamma \log \frac{n}{\gamma })\) space. Both results are based on considerably different locally consistent parsing techniques. The question of whether the better search time could be supported within \(\delta \)-optimal space remained open. In this paper, we prove that both techniques can indeed be combined to obtain the best of both worlds: \(O(m+({occ+1})\log ^\epsilon n)\) search time within \(O(\delta \log \tfrac{n\log \sigma }{\delta \log n})\) space. Moreover, the number of occurrences can be computed in \(O(m+\log ^{2+\epsilon }n)\) time within \(O(\delta \log \tfrac{n\log \sigma }{\delta \log n})\) space. We also show that an extra sublogarithmic factor on top of this space enables optimal \(O(m+occ)\) search time, whereas an extra logarithmic factor enables optimal O(m) counting time.
PubDate: 2023-11-06
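The measure \(\delta \) appearing in the space bound above is the substring complexity: the maximum over k of the number of distinct length-k substrings of S divided by k. A direct quadratic-time sketch (illustrative only; practical computations use suffix-based structures) is:

```python
def delta_measure(s):
    """Substring complexity delta = max over k of d_k / k, where d_k
    is the number of distinct length-k substrings of s.

    Naive set-based enumeration; fine only for short strings.
    """
    n = len(s)
    return max(
        len({s[i:i + k] for i in range(n - k + 1)}) / k
        for k in range(1, n + 1)
    )
```

For instance, highly repetitive strings such as "aaaa" have delta equal to 1, while "abab" already has delta equal to 2 (two distinct characters at k = 1).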
