Abstract: Comparative genomics is a field of biology that compares the genomes of different species. One major question in this field is to find the evolutionary distance between two given genomes. One way to estimate this distance is the rearrangement distance, which consists in finding a minimum-cost sequence of rearrangements that transforms one genome into another. We use permutations to model the genomes being compared, so the problem becomes that of sorting a permutation with a minimum-cost sequence of rearrangements. Early works on rearrangement distance assumed that all rearrangements are equally likely to occur and, consequently, assigned a unitary cost to every rearrangement. Some variations of the problem were motivated by the observation that rearrangements involving large segments of a genome rarely occur. One of these variants also uses a unitary cost, but adds a constraint on the length of the operations allowed when estimating the distance. Another variant uses a cost function based on the rearrangement’s length. In this work, we study problems that combine both variants, that is, problems with a constraint on the rearrangement’s length and with a cost function based on that length. We present approximation algorithms for five such problems involving reversals and/or transpositions for sorting signed and unsigned permutations. We also analyze the problems for specific parameters of the length restriction and for the cost function \(\ell ^\alpha \) , where \(\ell \) is the rearrangement’s length and \(\alpha \ge 1\) is a real-valued parameter. PubDate: 2021-04-01
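The length-weighted cost model just described can be made concrete with a small sketch. This is a minimal illustration assuming unsigned permutations and the cost function \(\ell ^\alpha \); the helper names are ours, not the paper’s:

```python
def apply_reversal(perm, i, j):
    """Return perm with the segment perm[i..j] (0-indexed, inclusive) reversed."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def sequence_cost(reversals, alpha):
    """Cost of a reversal sequence when a reversal of length l costs l**alpha."""
    return sum((j - i + 1) ** alpha for i, j in reversals)

# Sorting (2, 1, 4, 3) with two length-2 reversals costs 2 * 2**alpha.
```

For \(\alpha = 2\), the two length-2 reversals above cost 8 in total, while a single length-4 operation would cost 16 under the same cost function, which illustrates why length-weighted costs favor short operations.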

Abstract: This paper considers two graph covering problems, the Minimum Constellation Cover (CC) and the Minimum k-Split Constellation Cover (k-SCC). The input of these problems consists of a graph \(G=\left( V,E\right) \) and a set \({\mathcal {C}}\) of stars of G, and the output is a minimum cardinality set of stars C, such that any two different stars of C are edge-disjoint and the union of the stars of C covers all edges of G. For CC, the set C must be composed of edges of G or stars of \({\mathcal {C}}\) while, for k-SCC, an integer k is given and the elements of C must be k-stars obtained by splitting stars of \({\mathcal {C}}\) . This work proves that unless \(P=NP\) , CC does not admit polynomial time \(\left| {\mathcal {C}}\right| ^{{\mathcal {O}}\left( 1\right) }\) -approximation algorithms and k-SCC cannot be \(\left( \left( 1-\epsilon \right) \ln \left| E\right| \right) \) -approximated in polynomial time, for any \(\epsilon >0\) . Additionally, approximation ratios are given for the worst feasible solutions of the problems and, for k-SCC, polynomial time approximation algorithms are proposed, achieving a \(\left( \ln \left| E\right| -\ln \ln \left| E\right| +\varTheta \left( 1\right) \right) \) approximation ratio. Also, polynomial time algorithms are presented for special classes of graphs that include bounded degree trees and cacti. PubDate: 2021-04-01

Abstract: The Delivery Man Problem with Time Windows (DMPTW) is an extension of the Delivery Man Problem. The objective of DMPTW is to minimize the sum of customers’ arrival times while the deliveries are made during specific time windows given by the customers. A closely related variant of the objective is travel duration; in that case, the problem minimizes the sum of travel durations between a depot and the customer locations. It has many practical applications to network problems, e.g., whenever servers have to accommodate a set of requests so as to minimize clients’ total (or average) waiting time. To solve medium to large-sized instances, a two-phase metaheuristic algorithm is proposed. A construction stage generates a feasible solution using Neighborhood Descent with Random neighborhood ordering (RVND), and the optimization stage improves the feasible solution with an Iterated Local Search. Moreover, Tabu Search (TS) is incorporated into the proposed algorithm to prevent it from cycling and thus from becoming stuck at local optima. The results of experimental simulations are compared with well-known and successful metaheuristic algorithms and show that the proposed algorithm reaches better solutions in many cases. PubDate: 2021-03-12
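The arrival-time objective described above can be evaluated for a fixed tour as follows. This is a sketch under our own assumptions (the vehicle waits at a window that has not yet opened; the matrix and window names are illustrative, not from the paper):

```python
def tour_cost(tour, travel, windows):
    """Sum of customer arrival times for a delivery tour starting at depot 0.

    travel[u][v] is the travel time between locations u and v;
    windows[c] = (open, close) is customer c's time window.
    Returns None if some deadline is violated (infeasible tour).
    """
    t, total, pos = 0, 0, 0
    for c in tour:
        t += travel[pos][c]
        open_, close = windows[c]
        t = max(t, open_)          # wait until the window opens (our assumption)
        if t > close:
            return None            # deadline missed: tour infeasible
        total += t                 # the DMPTW objective sums arrival times
        pos = c
    return total
```

For instance, with symmetric travel times a tour visiting a far customer first accumulates larger arrival times for everyone after it, which is exactly what the objective penalizes.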

Abstract: In recent years, as more and more researchers study the problem of maximizing monotone (non-submodular) objective functions, approximation algorithms for this problem have made considerable progress through some interesting techniques. In this paper, we develop approximation algorithms for maximizing a monotone function f with generic submodularity ratio \(\gamma \) subject to certain constraints. Our first result is a simple algorithm that gives a \((1-e^{-\gamma } -\epsilon )\) -approximation for a cardinality constraint using \(O(\frac{n}{\epsilon }\log \frac{n}{\epsilon })\) queries to the function value oracle. The second result is a new variant of the continuous greedy algorithm for a matroid constraint. We combine this variant of the continuous greedy method with contention resolution schemes to find a solution with approximation ratio \((\gamma ^2(1-\frac{1}{e})^2-O(\epsilon ))\) , and the algorithm makes \(O(rn\epsilon ^{-4}\log ^2\frac{n}{\epsilon })\) queries to the function value oracle. PubDate: 2021-03-11
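The \(1-e^{-\gamma }\)-style guarantees for a cardinality constraint are proved for greedy-type algorithms; the plain greedy below conveys the core idea (the paper’s actual algorithm is a faster thresholded variant, and the toy coverage function in the test is our own):

```python
def greedy_max(f, ground, k):
    """Plain greedy: k rounds, each adding the element of largest marginal
    gain f(S + e) - f(S).  For monotone f with submodularity ratio gamma,
    guarantees of the form 1 - e^{-gamma} refer to this style of algorithm."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S), default=None)
        if best is None:
            break
        S.add(best)
    return S
```

Each round makes at most n oracle queries, so the plain version uses O(nk) queries; the abstract’s \(O(\frac{n}{\epsilon }\log \frac{n}{\epsilon })\) bound comes from replacing the exact argmax with decreasing thresholds.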

Abstract: We consider the problem of scheduling a set of jobs with different processing times and sizes on a single bounded parallel-batch machine with periodic maintenance. Because the machine operates in batch-processing mode with a fixed capacity, several jobs can be processed simultaneously in a batch, provided that the total size of the jobs in the batch does not exceed the machine capacity. The processing time of a batch is the largest processing time of the jobs it contains. Meanwhile, the production of each batch is non-resumable, that is, if a batch cannot complete processing before some maintenance, it must be processed anew once the machine becomes available again. Our goal is to minimize the makespan. We first consider two special cases, where the jobs have the same sizes or the same processing times, both of which are strongly NP-hard. We present two different approximation algorithms for them and show that both algorithms have the same tight worst-case ratio of 2. We then consider the general case, where the jobs have arbitrary processing times and arbitrary sizes, for which we propose a 17/5-approximation algorithm. PubDate: 2021-03-10
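The non-resumable rule above determines the makespan once the batches and their order are fixed. A minimal simulator, assuming working windows of length T separated by maintenance periods of length M and every batch time at most T (parameter names are ours):

```python
def batch_makespan(batch_times, T, M):
    """Finish time of batches processed in order on a machine that works in
    windows of length T separated by maintenance of length M.  A batch that
    cannot finish inside the current window is restarted (non-resumable)
    after the next maintenance.  Assumes every batch time is at most T."""
    t = 0  # windows start at multiples of (T + M)
    for p in batch_times:
        window_end = (t // (T + M)) * (T + M) + T
        if t + p > window_end:                  # would spill into maintenance:
            t = (t // (T + M) + 1) * (T + M)    # restart at the next window
        t += p
    return t
```

For example, with T = 10 and M = 2, batches of times 6, 6, 5 finish at time 29: the second and third batches each fail to fit in the remaining window and restart after a maintenance.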

Abstract: We consider a single-machine scheduling problem in which a due date is assigned to each job depending on its order, and the lengths of the intervals between consecutive due dates are identical. The objective is to minimize the total penalty for the earliness and tardiness of each job. The early penalty increases proportionally to the earliness amount, while the tardy penalty increases according to a step function. We show that the problem is strongly NP-hard, but polynomially solvable if the jobs have only two distinct processing times. PubDate: 2021-03-09

Abstract: Let n, k, t be positive integers. What is the maximum number of arcs in a digraph on n vertices in which there are at most t distinct walks of length k with the same endpoints? Determine the extremal digraphs attaining this maximum. When \(t=1\) , the problem has been studied by Wu, by Huang and Zhan, by Huang, Lyu and Qiao, and by Lyu in four papers, which together solved all cases but \(k=3\) . For \(t\ge 2\) , Huang and Lyu proved that the maximum number is equal to \(n(n-1)/2\) and the extremal digraph is the transitive tournament when \(n\ge 6t+2\) and \(k\ge n-1\) . They also discussed the maximum number for the cases \(n=k+2,k+3,k+4\) . In this paper, we solve the problem for the case \(k\ge 6t+1\) and \(n\ge k+5\) , and we also characterize the structures of the extremal digraphs for \(n=k+2,k+3,k+4\) . PubDate: 2021-03-04
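The walk-counting condition has a standard matrix formulation: entry (u, v) of the k-th power of the adjacency matrix counts the distinct walks of length k from u to v. A small checker built on that fact (pure-Python matrix multiplication; function name is ours):

```python
def max_walks(adj, k):
    """Largest number of distinct length-k walks between any ordered pair of
    vertices, read off the k-th power of the 0/1 adjacency matrix."""
    n = len(adj)
    power = adj
    for _ in range(k - 1):  # compute adj**k by repeated multiplication
        power = [[sum(power[u][w] * adj[w][v] for w in range(n))
                  for v in range(n)] for u in range(n)]
    return max(max(row) for row in power)
```

For the transitive tournament on 3 vertices there is at most one walk of length 2 between any pair, consistent with the extremal role that transitive tournaments play above.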

Abstract: The discrete Wasserstein barycenter problem is a minimum-cost mass transport problem for a set of probability measures with finite support. In this paper, we show that finding a barycenter of sparse support is hard, even in dimension 2 and for only 3 measures. We prove this claim by showing that a special case of an intimately related decision problem SCMP (does there exist a measure with a non-mass-splitting transport cost and support size below prescribed bounds?) is NP-hard for all rational data. Our proof is based on a reduction from planar 3-dimensional matching and follows a strategy laid out by Spieksma and Woeginger (Eur J Oper Res 91:611–618, 1996) for a reduction to planar, minimum circumference 3-dimensional matching. While we closely mirror the actual steps of their proof, the arguments themselves differ fundamentally due to the complex nature of the discrete barycenter problem. Containment of SCMP in NP remains open. We prove that, for a given measure, sparsity and cost of an optimal transport to a set of measures can be verified in polynomial time in the size of a bit encoding of the measure. However, the encoding size of a barycenter may be exponential in the encoding size of the underlying measures. PubDate: 2021-03-03

Abstract: We study extensions of the classical single machine common due-date (CON) and common due-window (CONW) assignment problems to the setting of lot scheduling. In the CON problem, all the jobs share a common due-date, and jobs completed prior to or after the due-date are penalized according to their earliness/tardiness. In CONW, there exists a time interval, such that jobs completed within this interval are not penalized. In both cases the due-date/due-window are decision variables. In lot scheduling, a number of customer orders of different sizes may be processed in the same lot. We allow order splitting between consecutive lots. The objective is to find the order allocation to lots, such that the total cost of earliness, tardiness and due-date/due-window is minimized. Given \(n\) orders, and under the very realistic assumption that the lot capacity is of the order of \(n\) , we introduce polynomial time dynamic programming algorithms for both extensions. Our numerical tests indicate that both algorithms can easily solve medium-size problems. PubDate: 2021-03-01
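The CON cost structure can be written down directly. The sketch below uses the common linear form with unit earliness, tardiness and due-date penalties \(\alpha, \beta, \gamma\); the paper’s lot-scheduling extension and its exact weighting may differ, so treat this as an illustration of the objective only:

```python
def con_cost(completion_times, d, alpha, beta, gamma):
    """Total CON cost for a common due-date d: alpha and beta are the unit
    earliness/tardiness penalties and gamma penalizes the due-date itself
    (d is a decision variable in the CON problem)."""
    early = sum(max(d - c, 0) for c in completion_times)
    tardy = sum(max(c - d, 0) for c in completion_times)
    return alpha * early + beta * tardy + gamma * d
```

Since d is a decision variable, one would minimize this expression over d as well as over the schedule; with gamma = 0 the optimal d is a completion time of some job.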

Abstract: It is known that any chordal graph on n vertices can be represented as the intersection of n subtrees in a tree on n nodes (Gavril in J Comb Theory 16:47–56, 1974). This characterization has been recently used to generate random chordal graphs on n vertices by generating n subtrees of a tree on n nodes. The space (and thus time) complexity of an algorithm generating n subtrees of a tree on n nodes is at least the sum of the sizes of the generated subtrees. The determination of this sum was left as an open question in Şeker et al. (Generation of random chordal graphs using subtrees of a tree. arXiv preprint arXiv:1810.13326, 2018). In this paper, we show that the sum of the sizes of n subtrees in a tree on n nodes is \(\varTheta (m\sqrt{n})\) . We also show that we can confine ourselves to contraction-minimal subtree intersection representations because they are sufficient to generate every chordal graph with strictly positive probability. Moreover, the sum of the sizes of the subtrees in a contraction-minimal representation is at most \(2m+n\) . We use this result to derive the first linear-time random chordal graph generator. Based on contraction-minimal representations, we also derive connectivity-related structural properties of chordal graphs. Besides these theoretical results, we also conduct experiments to study the quality of the chordal graphs generated by our algorithm and compare them to those generated by existing methods from the literature. Our algorithm does not generate chordal graphs uniformly at random, which is a quite challenging open question, irrespective of the time complexity of the generator. However, our experimental study suggests that the generated graphs have a fairly varied structure as indicated by the sizes of maximal cliques. Furthermore, our algorithm is simple to implement and produces graphs with 10,000 vertices and \(4 \times 10^7\) edges in less than one second on a laptop computer. PubDate: 2021-02-26
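The subtree-intersection characterization used above is easy to demonstrate. The toy generator below builds a random tree on n nodes, grows n random subtrees, and returns their intersection graph, which is always chordal; it is only in the spirit of the approach and is not the paper’s contraction-minimal linear-time generator (growth probability and structure are our assumptions):

```python
import random

def random_chordal(n, seed=None):
    """Toy chordal-graph generator: intersection graph of n random subtrees
    of a random tree on n nodes.  Returns a set of edges (i, j), i < j."""
    rng = random.Random(seed)
    # Random tree: node i (i >= 1) attaches to a uniformly random earlier node.
    children = {i: [] for i in range(n)}
    for i in range(1, n):
        children[rng.randrange(i)].append(i)

    def random_subtree():
        # Grow downward from a random root; the result is always connected.
        root = rng.randrange(n)
        nodes, frontier = {root}, [root]
        while frontier and rng.random() < 0.7:   # keep growing with prob. 0.7
            u = frontier.pop()
            for c in children[u]:
                nodes.add(c)
                frontier.append(c)
        return nodes

    subtrees = [random_subtree() for _ in range(n)]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if subtrees[i] & subtrees[j]}
```

Like the paper’s algorithm, this does not sample chordal graphs uniformly; it merely guarantees chordality by construction, since every intersection graph of subtrees of a tree is chordal.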

Abstract: We study a fairness-based model for 2-facility location games on the real line where the social objective is to minimize the maximum envy over all agents. All the agents seek to minimize their personal costs, and the envy between any two of them is the difference in their personal costs. We consider two cases of personal costs, called min-dist cost and sum-dist cost. We are interested in pursuing strategyproof mechanisms for 2-facility location games in both cases. For the min-dist personal cost, we first show that a lower bound of the additive approximation for any deterministic strategyproof mechanism is 1/4, then devise a deterministic group strategyproof mechanism with additive approximation of 1/2 and two randomized strategyproof mechanisms with additive approximation of 1/4. For the sum-dist personal cost, we devise a group strategyproof deterministic mechanism which is also optimal. PubDate: 2021-02-23
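The maximum-envy objective under the min-dist personal cost follows directly from the definitions above: each agent’s cost is its distance to the nearest facility, and the social cost is the largest pairwise difference. A minimal evaluator (function and parameter names are ours):

```python
def max_envy(agents, f1, f2):
    """Maximum envy over all agents for two facilities at f1 and f2 on the
    line, under the min-dist personal cost (distance to nearest facility).
    The max pairwise cost difference equals max cost minus min cost."""
    costs = [min(abs(x - f1), abs(x - f2)) for x in agents]
    return max(costs) - min(costs)
```

A mechanism in this setting maps the reported agent locations to (f1, f2); the additive approximation figures quoted above bound the gap between the mechanism’s maximum envy and the optimum.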

Abstract: We study a scheduling problem where the jobs we have to perform are composed of one or more tasks. If two jobs sharing a non-empty subset of tasks are scheduled on the same machine, then these shared tasks have to be performed only once. This kind of problem is known in the literature under the names of VM-PACKING or PAGINATION. Our objective is to schedule a set of these jobs on two parallel identical machines, with the aim of minimizing the makespan. This problem is NP-complete as an extension of the PARTITION problem. In this paper we present three exact algorithms with worst-case time-complexity guarantees, obtained by exploring different branching techniques. Our first algorithm focuses on the relation between jobs sharing one or more symbols, whereas the other two algorithms branch on the shared symbols. PubDate: 2021-02-22
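The sharing rule above makes a machine’s load the size of the union of the task sets it hosts. A minimal makespan evaluator, assuming unit-length tasks (that assumption and the names are ours):

```python
def two_machine_makespan(assignment, jobs):
    """Makespan of a two-machine assignment when each job is a set of
    unit-length tasks and tasks shared by co-located jobs run only once.

    assignment[j] in {0, 1}; a machine's load is the size of the union of
    the task sets assigned to it."""
    loads = [set(), set()]
    for j, tasks in enumerate(jobs):
        loads[assignment[j]] |= tasks
    return max(len(loads[0]), len(loads[1]))
```

The contrast with PARTITION is visible already on tiny instances: placing two jobs that share all their tasks on the same machine halves their contribution, so the optimal assignment may co-locate "heavy" jobs on purpose.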

Abstract: One can obtain an integer solution for the one-dimensional cutting stock problem by means of constructive or residual heuristics. In this work we propose a change to the Constructive Greedy Heuristic that consists in building a cutting pattern by sorting in descending order the items of even or odd length, giving priority to those which appear more frequently in the problem, and cutting them from stock objects of even or odd length, respectively, with the aim of minimizing the number of cut objects. Computational tests, presented at the end of this work, show the efficiency of the proposed heuristic when compared with other methods in the literature that generate an integer solution for the problem. PubDate: 2021-02-18
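For reference, the plain greedy baseline that the proposed heuristic modifies can be sketched as first-fit decreasing; the even/odd separation and frequency priority of the paper’s variant are omitted here, so this is only the baseline (names are ours):

```python
def first_fit_decreasing(items, stock_length):
    """Baseline constructive heuristic: sort item lengths in descending
    order and place each into the first stock object with enough residual
    length, opening a new object when none fits."""
    residuals = []        # remaining length of each opened object
    patterns = []         # items cut from each object
    for length in sorted(items, reverse=True):
        for k, residual in enumerate(residuals):
            if length <= residual:
                residuals[k] -= length
                patterns[k].append(length)
                break
        else:                                   # no opened object fits
            residuals.append(stock_length - length)
            patterns.append([length])
    return patterns
```

Each returned pattern lists the items cut from one stock object, so the number of patterns is the quantity the heuristics above try to minimize.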

Abstract: Cops and Robbers is a classical pursuit and evasion game in graph theory, introduced by Nowakowski and Winkler and independently by Quilliot. In this paper, we study the zero-visibility cops and robber game, a variant of Cops and Robbers in which the robber is invisible to the cops throughout the game. We introduce a simple method for proving lower bounds on the zero-visibility cop number, based on a connection between the zero-visibility cop number and the matching number. Using this technique, we investigate graph joins, lexicographic products of graphs, complete multipartite graphs and split graphs. For each of these classes of graphs, we prove lower and upper bounds on the zero-visibility cop number. We also present a linear time approximation algorithm for computing the zero-visibility cop number of the lexicographic product of a tree and a graph G. The approximation ratio of this algorithm is bounded by \( |V(G)| / (\nu (G) + |V(G) {\setminus } V(\mathcal {M}(G))| )\) , where V(G) is the vertex set of G, \(\nu (G)\) is the matching number of G, \(\mathcal {M}(G)\) is a maximum matching of G and \(V(\mathcal {M}(G))\) is the vertex set of \(\mathcal {M}(G)\) . PubDate: 2021-02-17

Abstract: This work studies the problem of cross-docking truck scheduling introduced by Serrano et al. (Int J Prod Econ 194:102–112, 2017), where internal operations, including repacking and storing, are considered. The objective is to minimize the number of inbound trucks that are assigned beyond their contracted time windows. For the special case where only inbound truck arrivals are considered and the outbound truck departures are not constrained, we present some properties of the model and propose a heuristic algorithm called Light Period and Less Options (LPLO) to solve it. For the general case of the problem, due to its NP-hardness, we design a non-trivial Three-phase Greedy (TG) algorithm and a Genetic Algorithm (GA). Experimental comparisons between exact solutions obtained by the CPLEX solver and solutions obtained by the two heuristic algorithms demonstrate that both proposed heuristics are highly efficient at generating optimal or nearly optimal solutions for the considered problem. PubDate: 2021-02-17

Abstract: Let \(P_3\) denote the path on three vertices. A \(P_3\) -packing of a given graph G is a collection of vertex-disjoint subgraphs of G in which each subgraph is isomorphic to \(P_3\) . The maximum \(P_3\) -packing problem is to find a \(P_3\) -packing of a given graph G which contains the maximum number of vertices of G. The perfect \(P_3\) -packing problem is to decide whether a graph G has a \(P_3\) -packing that covers all vertices of G. Kelmans (Discrete Appl Math 159:112–127, 2011) proposed the following problem: Is the \(P_3\) -packing problem NP-hard in the class of claw-free graphs? In this paper, we solve the problem by proving that the perfect \(P_3\) -packing problem in claw-free cubic planar graphs is NP-complete. In addition, we show that for any connected claw-free cubic graph (resp. (2, 3)-regular graph, subcubic graph) G with \(v(G)\ge 14\) (resp. \(v(G)\ge 8\) , \(v(G)\ge 3\) ), the maximum \(P_3\) -packing of G covers at least \(\lceil \frac{9v(G)-6}{10} \rceil \) (resp. \(\lceil \frac{6v(G)-6}{7} \rceil \) , \(\lceil \frac{3v(G)-6}{4} \rceil \) ) vertices, where v(G) denotes the order of G, and the bound is sharp. The proofs imply polynomial-time algorithms. PubDate: 2021-02-09

Abstract: The H-rank of a mixed graph \(G^{\alpha }\) is defined to be the rank of its Hermitian adjacency matrix \(H(G^{\alpha })\) . If \( G^{\alpha } \) is switching equivalent to a mixed graph \((G^{\alpha })' \) , and two vertices u, v of \(G^{\alpha }\) have exactly the same neighborhood in \((G^{\alpha })'\) , then u and v are said to be twins. The twin reduction graph \(T_{G^{\alpha }}\) of \(G^{\alpha }\) is a mixed graph whose vertices are the equivalence classes of the twin relation, and \([u][v]\in E(T_{G^{\alpha }})\) if \(uv\in E((G^{\alpha })')\) , where [u] denotes the equivalence class containing the vertex u. In this paper, we give upper and lower bounds on the number of vertices of the twin reduction graphs of connected mixed bipartite graphs, and characterize all twin reduction graphs of the connected mixed bipartite graphs with H-rank 4 (resp., 6 or 8). Then, we characterize all connected mixed graphs with H-rank 4 (resp., 6 or 8) among all mixed graphs containing induced mixed odd cycles whose lengths are no less than 5 (resp., 7 or 9). PubDate: 2021-02-08

Abstract: This paper offers a new robust mathematical model for designing an efficient flood evacuation plan in disasters. The model takes a set of potential locations for establishing evacuation shelters with limited capacities. For the first time in this field, we suggest a model that uses helicopters to rescue people from the flood. Besides, the helicopters' capacity and maximum flying time are limited, to reflect real-world conditions. Our goal is to maximize the number of rescued people and to minimize the total cost. By solving the model, we determine the allocation of people to the shelters, the optimal location of shelters, the allocation of the helicopters to the evacuation shelters, and the flying paths and routes of the helicopters. Since the main parameters of the proposed model are subject to uncertainty in real-world situations, we implemented robust optimization to formulate the uncertainties. Due to the NP-hardness of the suggested formulation, we offer four algorithms to solve the mathematical model. We enhance the efficiency of the algorithms through a robust design of experiments and assess their performance on several measures via post hoc analysis. Finally, we apply the robust model to real-world data from the destructive 2011 tsunami in Ishinomaki city, Japan. The results reveal that the model provides promising solutions compared to the classical models and leads to higher rescue rates and lower cost. PubDate: 2021-02-06 DOI: 10.1007/s10878-021-00699-0

Abstract: The paper is concerned with the two-machine scheduling problem where each job is to be processed on the first-stage machine and after that on the second-stage machine. In order to be processed, each job requires storage space that it seizes at the start of its processing on the first-stage machine and releases only at the completion of processing on the second-stage machine. The storage space is limited and its consumption varies from job to job. The goal is to minimise the time needed for the completion of all jobs. All instances of the considered scheduling problem are classified by means of five parameters, which leads to sixty-four families of instances. For each family, the paper establishes its computational complexity and, in the case of polynomial-time solvability, presents a polynomial-time algorithm constructing an optimal schedule. PubDate: 2021-02-06 DOI: 10.1007/s10878-021-00706-4
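The storage rule above fully determines the makespan of a fixed processing order. A small simulator under our own assumptions (both machines process jobs in the same order, and each job's storage demand fits the capacity; names and tuple format are illustrative):

```python
def flow_shop_makespan(jobs, capacity):
    """Makespan of a permutation schedule on two machines when job j holds
    storage s from its start on machine 1 until its completion on machine 2.

    jobs: list of (p1, p2, s) tuples in processing order."""
    c1 = c2 = 0
    done2 = []                      # (machine-2 completion time, storage) pairs
    for p1, p2, s in jobs:
        # Earliest machine-1 start with enough free storage: advance to the
        # next machine-2 completion event until the job's storage fits.
        start = c1
        while sum(sz for t, sz in done2 if t > start) + s > capacity:
            start = min(t for t, sz in done2 if t > start)
        c1 = start + p1
        c2 = max(c1, c2) + p2
        done2.append((c2, s))
    return c2
```

For instance, two identical jobs (p1 = p2 = 2) each needing 3 units of a 3-unit store cannot overlap at all, giving makespan 8, while with 6 units of storage they pipeline normally and finish at time 6.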

Abstract: This paper considers an on-line exploration problem. We use a mobile robot to explore an unknown simple grid polygon. The robot is assumed to have limited sensing capability: it can only detect the four basic cells adjacent to it. The robot’s task is to explore each cell and to return to the start; to explore a cell, the robot has to enter it. We ask for a short exploration tour, that is, one that minimizes the number of multiply visited cells. The competitive ratio is used to measure the performance of our strategy: it is the ratio between the length of the on-line exploration path and the length of the shortest path achieved with full knowledge of the polygon. As our main result, we present a new on-line exploration strategy and prove that the robot’s path is at most 7/6 times longer than the shortest path. This matches the previously known lower bound. PubDate: 2021-02-03 DOI: 10.1007/s10878-021-00705-5