Abstract: This work deals with a project scheduling problem in which tasks consume resources in order to be activated, but begin to produce resources once completed. This problem is known as the Dynamic Resource-Constrained Project Scheduling Problem (DRCPSP). We propose three methods that divide the problem into smaller parts and solve them separately. Each partial solution is obtained with the CPLEX optimizer and is used to generate more complete partial solutions. The obtained results show that this hybrid method performs very well.
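The core feasibility question behind the DRCPSP can be sketched with a toy instance: tasks drain a resource stock when activated and replenish it afterwards, so the order of activation matters. The data and function names below are illustrative assumptions, not taken from the paper, and the brute-force search stands in for the decomposition-plus-CPLEX approach the abstract describes.

```python
from itertools import permutations

# Hypothetical toy DRCPSP instance (data invented for illustration):
# each task consumes resources on activation and produces resources
# only after it completes.
tasks = {
    "A": {"consume": 2, "produce": 5},
    "B": {"consume": 4, "produce": 3},
    "C": {"consume": 6, "produce": 8},
}

def feasible_order(order, initial):
    """Check whether activating tasks in the given order keeps the
    resource stock non-negative at every activation."""
    stock = initial
    for t in order:
        stock -= tasks[t]["consume"]      # activation drains the stock
        if stock < 0:
            return False
        stock += tasks[t]["produce"]      # production arrives afterwards
    return True

def find_schedule(initial):
    """Brute-force search over all activation orders; viable only for
    tiny instances, which is why larger ones call for decomposition."""
    for order in permutations(tasks):
        if feasible_order(order, initial):
            return order
    return None
```

With an initial stock of 3, task C (consuming 6) is only affordable after A has produced, so the search must interleave consumption and production correctly; with a stock of 1 no task can start at all.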

Abstract: Two graph classes are presented: the first (k-ribbon) generalizes path graphs and the second (k-fan) generalizes fan graphs. We prove that both are subclasses of chordal graphs and thus share the structural properties of that class. Solutions to two problems are presented: the determination of the subchromatic number and the determination of the toughness. It is shown that the elements of the new classes establish bounds for the toughness of k-path graphs.

Abstract: In this paper, we investigate the separation problem for some valid inequalities for the s - t elementary shortest path problem in digraphs containing negative directed cycles. As we will see, these inequalities depend on a given parameter k ∈ ℕ. To show the NP-hardness of the separation problem for these valid inequalities for a given parameter k ∈ ℕ, we establish a polynomial reduction from the problem of the existence of k + 2 vertex-disjoint paths between k + 2 pairs of vertices (s1, t1), (s2, t2), ..., (sk+2, tk+2) in a digraph to the decision problem associated with the separation of these valid inequalities. Through some illustrative instances, we exhibit this polynomial reduction for the cases k = 0 and k = 1.

Abstract: The 0-1 exact k-item quadratic knapsack problem (E-kQKP) consists of maximizing a quadratic function subject to two linear constraints: the first is the classical linear capacity constraint; the second is an equality cardinality constraint on the number of items in the knapsack. Most instances of this NP-hard problem with more than forty variables cannot be solved within one hour by commercial software such as CPLEX 12.1. We therefore propose a fast and efficient heuristic method which produces both good lower and upper bounds on the value of the problem in reasonable time. Specifically, it integrates a primal heuristic and a semidefinite programming reduction phase within a surrogate dual heuristic. Extensive computational experiments over randomly generated instances with up to 200 variables validate the relevance of the bounds produced by our hybrid dual heuristic, which yields known optima (and proves optimality) in 90% (resp. 76%) of the cases within 100 seconds on average.
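The problem the abstract defines can be made concrete with a minimal sketch: maximize a quadratic objective over binary variables subject to a capacity constraint and an exact cardinality constraint. The instance data below are invented for illustration, and the exhaustive enumeration is only meant to make the definition precise; its exponential cost is exactly why the paper develops heuristic bounds instead.

```python
from itertools import combinations

# Illustrative tiny E-kQKP instance (data invented for demonstration):
#   maximize  sum_{i,j} Q[i][j] * x_i * x_j
#   s.t.      sum_i w[i] * x_i <= c     (linear capacity constraint)
#             sum_i x_i == k            (equality cardinality constraint)
#             x_i in {0, 1}
Q = [[3, 1, 0, 2],
     [1, 4, 1, 0],
     [0, 1, 2, 1],
     [2, 0, 1, 5]]
w = [2, 3, 1, 4]

def solve_ekqkp(Q, w, c, k):
    """Enumerate all k-subsets of items; exact but exponential in k,
    hence usable only on tiny instances."""
    n = len(w)
    best_val, best_set = None, None
    for items in combinations(range(n), k):
        if sum(w[i] for i in items) > c:
            continue                      # violates the capacity constraint
        val = sum(Q[i][j] for i in items for j in items)
        if best_val is None or val > best_val:
            best_val, best_set = val, items
    return best_val, best_set
```

For instance, with capacity c = 7 and k = 2, the pair of items {0, 3} fits the capacity and collects the diagonal profits plus the pairwise interaction terms, which is what makes the objective quadratic rather than linear.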

Abstract: This paper presents a method for solving linear programming problems that include Interval Type-2 fuzzy constraints. The proposed method finds an optimal solution under these conditions using convex optimization techniques. Some feasibility conditions are presented, and some interpretation issues are discussed. An introductory example is solved using the proposed method, and its results are described and discussed.

Abstract: This paper describes an application in group decision making, aimed at developing a procedure to help define priorities among preventive maintenance activities. The method applied, called DRV Processes (Decision with Reduction of Variability), combines statistical techniques with multicriteria decision aid procedures. Among its advantages, we may highlight its ability to reduce the noise affecting information in group decision making and to reach a consensual decision. This approach generally improves the level of shared knowledge and helps to avoid conflict within the group. The application was carried out in a major pharmaceutical production plant. The experience showed an eighty per cent reduction in the original amount of process noise. Moreover, the paper describes evidence of improvement in interpersonal relationships.

Abstract: Different approaches for deploying resilient, low-cost optical networks constitute a traditional group of NP-hard problems that have been widely studied. Most of them are based on the construction of low-cost networks that fulfill connectivity constraints. However, recent trends toward virtualizing optical networks over the legacy fiber infrastructure have changed the nature of network design problems, rendering many of these models and algorithms inappropriate. In this paper we study a design problem arising from the deployment of an IP/MPLS network over an existing DWDM infrastructure. Besides cost and resiliency, this problem integrates traffic and capacity constraints. We present an integer programming formulation of the problem and theoretical results, and describe how several metaheuristics were applied to find good-quality solutions for a real application case of a telecommunications company.

Abstract: We present the transcript of the IFORS distinguished lecture delivered by the author at the invitation of SOBRAPO and IFORS. The lecture concerned the development of interdisciplinary research motivated by an application in mobile telecommunication systems, a project jointly developed by four research teams. The presentation follows the typical steps of a classical operations research study and aims at reviewing the main theoretical and practical results that were obtained.

Abstract: This work aims at complementing the development of the EFM (Ellipsoidal Frontier Model) proposed by Milioni et al. (2011a). EFM is a parametric constant-sum input allocation model that uses DEA (Data Envelopment Analysis) concepts and ensures a solution in which all DMUs (Decision Making Units) are strongly CCR (Constant Returns to Scale) efficient. The degrees of freedom obtained from the possibility of assigning different values to the ellipsoidal eccentricities bring flexibility to the model and raise interest in evaluating the best distribution among the many that can be generated. We propose two analyses, named local and global. In the first, we aim at finding a solution that assigns the smallest possible input value to a specified DMU. In the second, we look for a solution that ensures the lowest data variability.