
Algorithms for the Minmax Regret Path Problem with Interval Data

Francisco Pérez-Galarce¹, Alfredo Candia-Véjar²*, César Astudillo³, Matthew Bardeen³

¹ Computer Science Department, Pontificia Universidad Católica de Chile, Santiago, Chile
² Departamento de Ingeniería Industrial, Universidad de Talca, Camino Los Niches km. 1, Curicó, Chile
³ Departamento de Ciencias de la Computación, Universidad de Talca, Camino Los Niches km. 1, Curicó, Chile

Abstract

The shortest path problem in networks is an important problem in combinatorial optimization and has many applications in areas such as telecommunications and transportation. It is easy to solve in its classic deterministic version, but several of its generalizations are NP-hard. The Shortest Path Problem consists of finding a simple path connecting a source node and a terminal node in an arc-weighted directed network. In some real-world situations the weights are not completely known, and the problem then becomes an optimization problem under uncertainty. It is assumed that an interval estimate is given for each arc length and that no further information about the statistical distribution of the weights is known. Uncertainty has been modeled in different ways in optimization. Our aim in this paper is to study the Minmax Regret Path with Interval Data problem by presenting a new exact branch and cut algorithm and, additionally, new heuristics. A set of difficult, large instances is defined, and computational experiments are conducted to analyze the different approaches designed to solve the problem. The main contribution of this paper is an assessment of the performance of the proposed algorithms and empirical evidence of the superiority of a simulated annealing approach, based on a new neighborhood, over the other heuristics proposed.

Keywords: Minmax Regret Model with Interval Data; Simulated Annealing; Shortest Path Problem; Branch and Cut; Neighbourhoods for path problems

* Corresponding author. E-mail addresses: fjperez10@uc.cl (F. Pérez-Galarce), acandia@utalca.cl (A. Candia-Véjar), castudillo@utalca.cl (C. Astudillo), mbardeen@utalca.cl (M. Bardeen).

1 Introduction

We study a variant of the well-known Shortest Path (SP) problem called the Minmax Regret Path (MMR-P) problem. In the classic SP problem we are given a digraph G = (V, A), where V is the set of nodes and A is the set of arcs, with a non-negative length associated with each arc, and two special nodes s and t belonging to V. The SP problem consists of finding a path between s and t (an s-t path) with minimum total length. Efficient algorithms for the original SP problem have been known since [14], in which the authors proposed a polynomial time algorithm, and since that study multiple approaches have been proposed. Some SP variants, algorithms and applications are discussed in [2].

In this research the focus is on SP problems with uncertainty in the objective function parameters (the length function). In this SP variant, each arc has a closed interval that defines the possible values of its length. The uncertainty model used here is the minmax regret approach (MMR), sometimes named robust deviation. In this approach the aim is to make decisions that will have a good objective value under any likely input data scenario included in the decision model. Three criteria are known for selecting among robust decisions: absolute robustness, MMR and relative MMR [27]. We use MMR, where the regret associated with each combination of decision and input data scenario is defined as the difference between the resulting cost to the decision maker and the cost of the decision that would have been taken had it been known, prior to the time of the decision, which scenario of input data would occur. In the context of optimization under uncertainty, an important alternative is the fuzzy model, under which several papers have studied the SP problem; see [20, 36, 17].

The MMR model has been increasingly studied in combinatorial optimization; see the books by [27] and [23], as well as the reviews by [4] and [8]. Most research on Minmax Regret Combinatorial Optimization (MMR-CO) has focused on mono-objective problems; recently, a paper has proposed robust multiobjective CO problems [15] and, in the last few years, several papers have extended the concepts of robustness to multiobjective CO problems [45, 9]. Moreover, SP has been studied in the context of multi-objective uncertain problems [44].

It is known that MMR-CO problems with interval data are usually NP-hard, even when the underlying classic problem is easy to solve; this is the case for the minimum spanning tree problem, the SP problem, the assignment problem and others, see [4] and [23] for a detailed analysis. Several efforts have been made to obtain exact solutions using a broad set of exact methods, frequently formulating an MMR problem as a Mixed Integer Linear Programming (MILP) problem and then using a commercial code, or applying branch and bound, branch and cut or Benders decomposition approaches in a dedicated scheme. Some problems that have been studied are: MMR Spanning Tree [30, 42], MMR Path [22, 23, 31, 32], MMR Assignment [39], MMR Set Covering [40], and MMR Traveling Salesman [34].

In particular, for MMR-P, [47] proved that the problem is NP-hard even when the graph is restricted to be directed, acyclic, planar and regular of degree three, and [46] proved that the problem is NP-hard even for a restricted class of layered networks. Additional results about the complexity of MMR-P for some classes of networks are given in [23] and [5]. Exact algorithms for MMR-P have been proposed in [23, 31, 32], which show the application of several algorithmic approaches. However, most of these papers report computational experiments using small instances or instances with a special structure, such as real road networks. In fact, [32] compared several exact algorithms and concluded that no algorithm clearly outperforms the others; moreover, they established some recommendations depending on the type of instances to be solved. [16] presented results on classes of networks for which polynomial or pseudopolynomial approaches to MMR-P exist. The authors of [38] addressed MMR-P on a finite multi-scenario model and proposed three new algorithmic approaches; numerical experiments using randomly generated instances showed that some of the proposed algorithms were able to obtain solutions in reasonable times for network instances of up to 750 nodes. Very recently, [18] proposed a new procedure to obtain a lower bound for the optimal value of MMR-P instances. This bound is part of a branch and bound algorithm that outperforms existing exact algorithms in the literature when applied to some classes of MMR-P instances.

With respect to heuristic approaches, only a few methods are available. A basic heuristic based on the definition of a particular scenario (the midpoint of the intervals) was designed as an approximation algorithm for general MMR-CO problems [24, 23]. A newer basic heuristic, HMU, solves an MMR-CO problem for two scenarios, the midpoint scenario and the scenario in which all the weights are set to their upper bounds, and returns the better of the two solutions. HMU achieves a good performance for several MMR-CO problems [24, 23]. [21] proposed a heuristic for MMR-P, but only small instances were tested for comparison with other approaches. A new lower bound for the optimal value of MMR-CO problems was proposed in [10]. In particular, for MMR-P, [23] showed that for networks with fewer than 1 000 nodes, HMU obtained solutions with gaps under 6% (relative deviation from the reported optimum) for several classes of directed and undirected networks.

A problem related to MMR-P, the minmax relative regret robust shortest path problem (MMRRP), was studied in [11]. The authors proposed a mixed integer linear programming formulation and also developed several heuristics, based on the pilot method and on random-key genetic algorithms, with emphasis on providing efficient and scalable methods for solving large instances. The CPLEX branch-and-bound algorithm based on this formulation found optimal solutions for most of the small Layered and Grid instances with up to 200 nodes; however, gaps of 10% or higher were found for some instances. The Grid instances proposed in that paper were much harder to solve than the Layered instances found in the literature. Other heuristic approaches for MMR-CO problems are the simulated annealing approach for the MMR Spanning Tree by [35], the heuristic based on a bounding process for MMR Spanning Arborescences by [12], the metaheuristic approach for the MMR Assignment problem [39] and the tabu search for the MMR Spanning Tree by [25].

Our main contributions in this paper are: i) an efficient branch and cut algorithm that finds exact solutions for some classes of large instances and outperforms other exact algorithms on several of them; ii) a local search heuristic and a simulated annealing metaheuristic based on a novel neighborhood that find good solutions for large instances that the exact algorithms could not solve; and iii) an extensive experimental analysis using several classes of network instances, showing the performance of the different algorithms and highlighting the particular conditions under which they can be used.

The remainder of the paper is organized as follows. In Section 2 the problem is formally defined and known results about its computational complexity are presented; in Section 3 a new branch and cut exact algorithm for MMR-P is introduced; in Section 4 various heuristics are analyzed, including well-known basic heuristics, followed by local search and simulated annealing approaches based on a new neighborhood for the problem; in Section 5 the benchmark instances are presented and an implementation description is given. In Section 6 experiments are conducted with the exact approaches, determining the performance of the algorithms when applied to several types of instances. The computational results of the heuristics and their analysis for hard instances are presented in Section 7; finally, in Section 8 some conclusions are discussed.

2 Definition of MMR-P and Computational Complexity

First, in Section 2.1 the basic notation and the formal definition of MMR-P are presented. Then, in Section 2.2, important known results about the computational complexity of the problem are summarized.

2.1 Notation for MMR-P

We use standard notation for MMR-CO problems; in particular, we follow the notation used in [39]. We consider a digraph G = (V, A), where V is the set of nodes, $|V| = n$, and A is the set of arcs, $|A| = m$. For each arc $(i,j) \in A$, two non-negative numbers $c^-_{ij}$ and $c^+_{ij}$ with $c^-_{ij} \le c^+_{ij}$ are given. The length of arc $(i,j)$ can take any real value from its uncertainty interval $[c^-_{ij}, c^+_{ij}]$, regardless of the values taken by the costs of the other arcs. The Cartesian product of the uncertainty intervals $[c^-_{ij}, c^+_{ij}]$, $(i,j) \in A$, is denoted by S, and any element s of S is called a scenario; S is the set of all possible realizations of the arc costs. $c^s_{ij}$, $(i,j) \in A$, denotes the cost of arc $(i,j)$ in scenario s.

Let $\Phi$ be the set of all s-t paths in G. For each $X \in \Phi$ and $s \in S$, let $F(s, X)$ be the cost of the s-t path X in scenario s:

$$ F(s, X) = \sum_{(i,j) \in X} c^{s}_{ij} \qquad \text{(CP)} $$

The classical s-t SP problem for a fixed scenario $s \in S$ is

$$ \min \{ F(s, X) : X \in \Phi \} \qquad \text{(CSP)} $$

Let $F^*(s)$ be the optimal objective value of problem (CSP). For any $X \in \Phi$ and $s \in S$, the value $R(s, X) = F(s, X) - F^*(s)$ is called the regret of X under scenario s. For any $X \in \Phi$, the value $Z(X)$ is called the maximum (or worst-case) regret of X:

$$ Z(X) = \max_{s \in S} R(s, X) \qquad \text{(MR-Path)} $$

The MMR version of problem (CSP) is

$$ \min \{ Z(X) : X \in \Phi \} = \min_{X \in \Phi} \max_{s \in S} R(s, X) \qquad \text{(MMR-Path)} $$

Let $Z^*$ denote the optimal objective value of problem (MMR-Path). A scenario attaining the maximum in $Z(X)$ is called a worst-case scenario for X. For any $X \in \Phi$, the scenario $s(X)$ induced by X is defined for each $(i,j) \in A$ by

$$ c^{s(X)}_{ij} = \begin{cases} c^{+}_{ij}, & (i,j) \in X \\ c^{-}_{ij}, & \text{otherwise} \end{cases} \qquad (1) $$

Property 1: For each s-t path X in $\Phi$,

$$ Z(X) = F(s(X), X) - F^{*}(s(X)) \qquad \text{(P1)} $$

It is clear from the above definitions that the worst-case regret of a path can be computed by solving just two classic SP problems.

2.2 Computational Complexity of MMR-P

Several works analyzing the computational complexity of MMR-P have shown that the problem is NP-hard even for several special classes of networks. In the following, two classes of directed graphs (digraphs) are defined; more details about these classes and about the computational complexity results can be found in [23].

Layered digraphs: In a layered digraph G = (V, A), the set V can be partitioned into disjoint subsets $V_1, V_2, \ldots, V_k$, called layers, and arcs exist only between nodes of $V_i$ and $V_{i+1}$ for $i = 1, \ldots, k-1$. The maximal value of $|V_i|$ for $i = 1, \ldots, k$ is called the width of G. In every layered digraph, all paths between two specified nodes s and t have the same number of arcs.

Edge series-parallel multidigraphs: An edge series-parallel multidigraph (ESP) is defined recursively as follows. A digraph consisting of two nodes joined by a single arc is ESP. If $G_1$ and $G_2$ are ESP, so are the multidigraphs constructed by each of the following operations:

- Parallel composition $p(G_1, G_2)$: identify the source of $G_1$ with the source of $G_2$ and the sink of $G_1$ with the sink of $G_2$.
- Series composition $s(G_1, G_2)$: identify the sink of $G_1$ with the source of $G_2$.

In the following, some computational complexity results are summarized:

- MMR-P is strongly NP-hard for acyclic directed layered graphs, even if the bounds of the weight intervals are 0 or 1.
- MMR-P is strongly NP-hard for undirected graphs, even if the bounds of the weight intervals are 0 or 1.
- MMR-P is NP-hard for edge series-parallel digraphs with maximal node degree at most 3.
- MMR-P is NP-hard for layered digraphs of width 3 and for layered multidigraphs of width 2.
- MMR-P on ESP admits an FPTAS, that is, an algorithm that for a given ESP computes a path P such that $Z_G(P) \le (1+\epsilon)\,\mathrm{OPT}$ in time $O(|A|^3/\epsilon^2)$.

The above results show that MMR-P is a very difficult problem even for some special classes of graphs. From the algorithmic point of view, this represents a challenge when the objective is to develop efficient algorithms for its resolution.

3 Exact Algorithms for the MMR-P Problem

In this section the proposed branch and cut (B&C) algorithm and a known MILP formulation for MMR-P are presented.

3.1 A MILP Formulation for the MMR-P Problem

We consider a digraph G = (V, A) with two distinguished nodes s and t, where, as in the previous section, each arc $(i,j) \in A$ has an associated interval length $[c^-_{ij}, c^+_{ij}]$. We use Kasperski's MILP formulation of the MMR-P problem [23], which is obtained through linear programming duality. The problem is formulated from the general model (MMR-Path) of the previous section by introducing Property 1 together with the particular definitions of (CSP) and (CP) for SP. In this formulation, each arc $(i,j) \in A$ has an associated binary variable $x_{ij}$ indicating whether the arc is part of the solution $X \in \Phi$. For a fixed X, the inner problem (2)-(4) computes the shortest s-t path under the scenario induced by X. The constraints $y_{ij} \in \{0,1\}$ have been replaced by $y_{ij} \ge 0$ because the constraint matrix of the s-t path flow constraints is totally unimodular and $y_{ij} \le 1$ in every optimal solution of the relaxed formulation.

$$ \min \sum_{(i,j)\in A} \left( c^+_{ij} x_{ij} + c^-_{ij}(1 - x_{ij}) \right) y_{ij} \qquad (2) $$

$$ \sum_{\{i:(j,i)\in A\}} y_{ji} - \sum_{\{k:(k,j)\in A\}} y_{kj} = \begin{cases} 1, & j = s \\ 0, & j \in V \setminus \{s,t\} \\ -1, & j = t \end{cases} \qquad (3) $$

$$ y_{ij} \ge 0, \quad \forall (i,j) \in A \qquad (4) $$

The dual of problem (2)-(4) is presented in (5)-(6):

$$ \max \; \lambda_s - \lambda_t \qquad (5) $$

$$ \lambda_i \le \lambda_j + c^+_{ij} x_{ij} + c^-_{ij}(1 - x_{ij}), \quad (i,j) \in A \qquad (6) $$

We can then use these results and tackle the MMR-P problem with the integer programming formulation shown in (7)-(10). This formulation can be solved numerically by software such as CPLEX.

$$ \min \sum_{(i,j)\in A} c^+_{ij} x_{ij} - \lambda_s + \lambda_t \qquad (7) $$

$$ \lambda_i \le \lambda_j + c^+_{ij} x_{ij} + c^-_{ij}(1 - x_{ij}), \quad (i,j) \in A \qquad (8) $$

$$ \sum_{\{i:(j,i)\in A\}} x_{ji} - \sum_{\{k:(k,j)\in A\}} x_{kj} = \begin{cases} 1, & j = s \\ 0, & j \in V \setminus \{s,t\} \\ -1, & j = t \end{cases} \qquad (9) $$

$$ x_{ij} \in \{0,1\}, \quad \forall (i,j) \in A \qquad (10) $$

It is important to note that we use this formulation to evaluate the performance of both the B&C algorithm described next and the heuristics proposed in Section 4.
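For concreteness, the following is a minimal sketch of how formulation (7)-(10) could be assembled with CPLEX Concert Technology in C++ (the toolchain reported in Section 5). It is an illustrative sketch rather than the authors' implementation: the Arc record, the arc-list argument and the time limit handling are assumptions; only the variables, objective and constraints mirror (7)-(10).

```cpp
#include <ilcplex/ilocplex.h>
#include <vector>

// Hypothetical arc record: tail i, head j, interval [cmin, cmax].
struct Arc { int i, j; double cmin, cmax; };

// Builds and solves formulation (7)-(10) for a digraph with n nodes,
// source s and sink t.  Returns the optimal worst-case regret Z*.
double solveCompactMILP(int n, int s, int t, const std::vector<Arc>& arcs) {
    IloEnv env;
    double regret = -1.0;
    try {
        IloModel model(env);
        const int m = static_cast<int>(arcs.size());

        IloBoolVarArray x(env, m);                              // x_ij, one per arc
        IloNumVarArray  lam(env, n, -IloInfinity, IloInfinity); // node potentials lambda_i

        // Objective (7): sum c+_ij x_ij - lambda_s + lambda_t.
        IloExpr obj(env);
        for (int a = 0; a < m; ++a) obj += arcs[a].cmax * x[a];
        obj += -lam[s] + lam[t];
        model.add(IloMinimize(env, obj));
        obj.end();

        // Dual constraints (8): lambda_i <= lambda_j + c+_ij x_ij + c-_ij (1 - x_ij).
        for (int a = 0; a < m; ++a)
            model.add(lam[arcs[a].i] <= lam[arcs[a].j]
                      + arcs[a].cmax * x[a] + arcs[a].cmin * (1 - x[a]));

        // Flow conservation (9): the x variables encode a single s-t path.
        for (int v = 0; v < n; ++v) {
            IloExpr flow(env);
            for (int a = 0; a < m; ++a) {
                if (arcs[a].i == v) flow += x[a];
                if (arcs[a].j == v) flow -= x[a];
            }
            double rhs = (v == s) ? 1.0 : (v == t) ? -1.0 : 0.0;
            model.add(flow == rhs);
            flow.end();
        }

        IloCplex cplex(model);
        cplex.setParam(IloCplex::TiLim, 900);  // time limit used in the experiments
        if (cplex.solve()) regret = cplex.getObjValue();
    } catch (IloException& e) {
        env.out() << "Concert exception: " << e << "\n";
    }
    env.end();
    return regret;
}
```

Integrality of the x variables is enforced by the IloBoolVarArray, which corresponds to (10); the lambda variables remain continuous and free, as in the dual (5)-(6).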

3.2 Branch and Cut Approach

We implemented a B&C algorithm within the CPLEX framework using the formulation presented in (11)-(13), in which the constraints are separated into the robust constraints (12) and the topology constraints (13). This formulation has an exponential number of robust constraints (one per s-t path in $\Phi$) and is based on [42]. The topology constraints use the flow formulation of the shortest path problem (constraints (3)); they are represented for $X \in \Phi$ in (13) and are added at the beginning of the algorithm. The robust constraints are the cuts of our B&C; one is added whenever a new feasible solution is found at a node of the branching process.

$$ Z^*_{MMR} = \min \sum_{e \in E(X)} c^{+}_{e} - \theta \qquad (11) $$

$$ \text{s.t.} \quad \theta \le \sum_{e \in E(Y)} c^{-}_{e} + \sum_{e \in E(Y) \cap E(X)} (c^{+}_{e} - c^{-}_{e}), \quad \forall Y \in \Phi \qquad (12) $$

$$ \theta \in \mathbb{R}_{\ge 0} \ \text{ and } \ X \in \Phi \qquad (13) $$

Additionally, if a fractional solution $\tilde{X}$ is found, we obtain a valid cut by rounding this fractional solution to a feasible one: we find a near-integer vector $\tilde{X}_0$ by solving the SP problem on G with the arc costs defined in (14); using $\tilde{X}_0$, an induced solution $\tilde{Y}_0$ is calculated and the corresponding cut is added to the model if it is violated.

$$ \tilde{c}_e = (c^{-}_{e} + c^{+}_{e}) \min\{1 - \tilde{x}_{ij}, 1 - \tilde{x}_{ji}\}, \quad \forall e = \{i,j\} \in E \qquad (14) $$

Moreover, using $\tilde{X}$ (feasible or not) we apply a local search in order to find further violated robust constraints and add them to the model. We have also embedded into the B&C a primal heuristic which attempts to provide better upper bounds using the information of the fractional solution $\tilde{X}$; a feasible vector $\tilde{X}_0$ is calculated by solving the SP problem on G with the arc costs defined in (14).

4 Heuristics for MMR-P

In this section we present the proposed heuristic approaches for solving MMR-P. They comprise (i) two simple, known heuristics based on the definition of specific scenarios, (ii) a simulated annealing and a local search approach based on a novel definition of a neighborhood of feasible s-t paths, and (iii) a simulated annealing approach based on a traditional k-opt type neighborhood for combinatorial optimization problems.

4.1 Basic Heuristics for MMR-P

Two basic heuristics for MMR-P are known; in fact, these heuristics are applicable to any MMR-CO problem. They are based on the idea of specifying a particular scenario and then solving the classic problem under this scenario. The output of these heuristics is a feasible solution of the MMR-CO problem; for more details see [8, 12, 23], [34] and [40]. First we mention the midpoint scenario $s^M$, defined for each arc $e \in A$ as $s^M_e = (c^+_e + c^-_e)/2$; we refer to the heuristic based on the midpoint scenario as HM. The other heuristic, based on the upper limit scenario $s^U$, is denoted HU. Computing the output solution of each of these heuristics requires solving the corresponding classic problem only twice: the first problem computes the solution Y in the specific scenario, $s^M$ for HM or $s^U$ for HU, and the second computes Z(Y). These heuristics have been combined in the heuristic HMU, which computes the solutions given by HM and HU sequentially and keeps the better one, as sketched below. In the evaluation of heuristics for MMR problems, several experiments have shown that using these heuristics to provide an initial solution improves the performance of more sophisticated heuristics; for an in-depth discussion, please refer to [34, 39, 40] and [8].
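To illustrate how HM, HU and the regret evaluation of Property 1 fit together, the following is a minimal sketch, not the authors' code; the adjacency-list representation, the Dijkstra routine and the function names are hypothetical, and it assumes a simple digraph with at most one arc per ordered node pair.

```cpp
#include <vector>
#include <queue>
#include <set>
#include <limits>
#include <functional>

// Hypothetical graph representation: g[u] lists (v, cmin, cmax) for every arc (u,v).
struct OutArc { int v; double cmin, cmax; };
using Graph = std::vector<std::vector<OutArc>>;
using Path  = std::vector<std::pair<int,int>>;     // arcs (u,v) of an s-t path

// Dijkstra under an arbitrary per-arc cost rule; returns the s-t path (empty if unreachable).
static Path dijkstra(const Graph& g, int s, int t,
                     const std::function<double(int, const OutArc&)>& cost) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    std::vector<int> pred(g.size(), -1);
    using QItem = std::pair<double,int>;
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> pq;
    dist[s] = 0.0; pq.push({0.0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;
        for (const OutArc& a : g[u])
            if (d + cost(u, a) < dist[a.v]) {
                dist[a.v] = d + cost(u, a); pred[a.v] = u; pq.push({dist[a.v], a.v});
            }
    }
    Path p;
    if (dist[t] == INF) return p;
    for (int v = t; v != s; v = pred[v]) p.push_back({pred[v], v});
    return p;
}

// Property 1: Z(X) = F(s(X), X) - F*(s(X)), i.e. one extra shortest path computation.
static double worstCaseRegret(const Graph& g, int s, int t, const Path& X) {
    std::set<std::pair<int,int>> onPath(X.begin(), X.end());
    double fUpper = 0.0;                              // F(s(X), X): upper costs on X
    for (auto [u, v] : X)
        for (const OutArc& a : g[u]) if (a.v == v) { fUpper += a.cmax; break; }
    auto induced = [&](int u, const OutArc& a) {      // scenario s(X) of Eq. (1)
        return onPath.count({u, a.v}) ? a.cmax : a.cmin;
    };
    Path best = dijkstra(g, s, t, induced);
    double fStar = 0.0;
    for (auto [u, v] : best)
        for (const OutArc& a : g[u]) if (a.v == v) { fStar += induced(u, a); break; }
    return fUpper - fStar;
}

// HMU: solve SP under the midpoint and upper-limit scenarios, keep the path with smaller regret.
Path heuristicHMU(const Graph& g, int s, int t) {
    Path pm = dijkstra(g, s, t, [](int, const OutArc& a){ return 0.5 * (a.cmin + a.cmax); });
    Path pu = dijkstra(g, s, t, [](int, const OutArc& a){ return a.cmax; });
    return worstCaseRegret(g, s, t, pm) <= worstCaseRegret(g, s, t, pu) ? pm : pu;
}
```

As written, heuristicHMU performs four shortest path computations in total, two to build the candidate paths and two inside the regret evaluations, which is consistent with the later remark that HMU only needs to solve four classic problems.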
4.2 Local Search for MMR-P

Local Search (LS), described in Algorithm 1, is a traditional search method for a CO problem with feasible space S. The method starts from an initial solution and iteratively improves it by replacing the current solution with a new candidate that is only marginally different. During the initialization phase, the method selects an initial solution s from the search space S; this selection may be at random or may take advantage of some a priori knowledge about the problem. An essential step of the algorithm is the acceptance criterion: a neighbor becomes the new current solution only if its cost is strictly smaller than that of the current solution. The cost function is assumed to be known and depends on the particular problem. The algorithm terminates when no improvement is possible, which happens when all neighbors have a cost higher than (or equal to) that of the current solution; the method then outputs the current solution as the best candidate. Observe that, at every iteration, the current solution is the best solution found so far. LS is a sub-optimal mechanism, and it is not unusual for its output to be far from the optimum; the literature reports many algorithms that attempt to overcome the limitations of the basic LS strategy.

Algorithm 1 Local Search
Input: search space S, cost function f(·), neighborhood function N(·).
Output: best solution found Y and its cost f(Y).
  Y ← s                              // initial solution s ∈ S
  while Termination Criterion = TRUE do
      Y′ ← N(S, Y)
      if f(Y′) ≤ f(Y) then
          Y ← Y′
      end if
  end while

4.3 A Simulated Annealing Approach for the MMR-P Problem

Simulated Annealing (SA) is a well-known probabilistic metaheuristic proposed by Kirkpatrick et al. in the 1980s for solving hard combinatorial optimization problems [26, 6]. SA seeks to avoid becoming trapped in local optima, as would normally occur in algorithms using local search methods. A key characteristic of SA is that it may accept solutions worse than the current one during the exploration of the local neighborhood. In line with the physical analogy between SA and metallurgy, several parameters must be tuned in order to find good solutions. Typical parameters are associated with the neighborhood, the cooling schedule, the size of the internal loop and the termination criterion. These parameters are usually adjusted through experimentation and testing (see Algorithm 2).

Algorithm 2 Simulated Annealing (SA)
Input: search space S, cost function f(·), neighborhood function N(·), initial and final temperatures t_i and t_f, number of internal loops K, cooling factor β, acceptance function g(·).
Output: best solution found Y* and its cost f(Y*).
  t ← t_i
  Y ← s;  Y* ← Y                     // initial solution s ∈ S
  while t ≥ t_f do
      k ← 0
      while k ≤ K do
          Y′ ← N(S, Y)
          if f(Y′) ≤ f(Y) then
              Y ← Y′
              if f(Y′) ≤ f(Y*) then
                  Y* ← Y′
              end if
          else if g(Y, Y′) = TRUE then
              Y ← Y′
          end if
          k ← k + 1
      end while
      t ← βt
  end while

Within the context of the MMR-P problem, we now describe the main concepts and parameters generally used in SA.

Search space: a subgraph S of the original graph G is defined such that this subgraph contains an s-t path. In S a classical s-t shortest path subproblem is solved, where the arc lengths are set to the upper limit arc costs; the optimal solution of this subproblem is then evaluated for acceptance. The next subsection details this part.

Initial solution: the initial solution s is obtained by applying the heuristic HMU to the original network.

Cooling schedule: a geometric descent of the temperature is used, governed by the parameter β.

Internal loop: the next subsection describes this parameter in detail.

Neighborhood search moves: the next subsection describes in detail the structure of the neighborhood used.

Acceptance criterion: a standard probabilistic function is used to manage the acceptance of new solutions.

Termination criterion: a fixed temperature value (the final temperature t_f) is used as the termination criterion.
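The acceptance function g(·) in Algorithm 2 is only described as a standard probabilistic function. A common choice consistent with that description is the Metropolis criterion; the sketch below assumes this choice and is not necessarily the authors' exact rule.

```cpp
#include <cmath>
#include <random>

// Metropolis-style acceptance test: a worsening move from cost fCur to fNew is
// accepted with probability exp(-(fNew - fCur) / t) at temperature t.
// Assumed instantiation of g(.) in Algorithm 2 (minimization objective).
bool acceptMove(double fCur, double fNew, double t, std::mt19937& rng) {
    if (fNew <= fCur) return true;                 // improving moves are always accepted
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(-(fNew - fCur) / t);
}
```

With such a rule, large cost increases are rarely accepted once t becomes small, which is what the geometric cooling schedule above exploits.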

4.4 Neighborhood Structure for the MMR-P Problem

Two fundamental concepts in LS are the search space and the neighborhood structure. The search space, denoted S, is the set of all feasible solutions of the problem. At each iteration of LS, a slight modification of the current solution leads to a neighbor; on closer inspection, this modification can be seen as a function that applies a local transformation to the current solution. This function induces a set of possible neighbors of the current solution, known as the neighborhood set and denoted N(Y); in particular, N(Y) ⊆ S. Many different neighborhood structures can be defined for the same problem, which raises the challenge of selecting the most suitable one. It is important to note that, depending on the context, small modifications of the neighborhood structure may lead to strongly different costs for the best solution found by the algorithm.

In the classic SP problem the definition of a neighborhood is more involved than in other problems, such as the TSP [28]. In [37] an LS heuristic for the multicriteria SP problem is presented. The mechanism to obtain a new path p′ from an existing path p is as follows: first, a subpath starting from node s is obtained by cutting the path p at node i; next, an arc emanating from node i and entering a node j is attached to the new solution; finally, the algorithm searches for a path from j to the terminal node t. This entire process is repeated for every node in the original path and for every node j adjacent to node i, which, from our perspective, is prohibitive for many applications of the SP.

A traditional family of neighborhoods used in designing heuristics for CO problems is k-opt. The idea of this scheme is to eliminate k arcs (in the context of network problems) and add new arcs to complete a feasible solution. Typically, in problems where the number of arcs in a solution is fixed (like the TSP or the minimum spanning tree problem), the k eliminated arcs are replaced by k new arcs. In path optimization problems, if k arcs are eliminated from a feasible solution, a different number of added arcs may be needed to obtain a feasible solution. Some papers [19, 43, 29] have considered this strategy. For our problem, the k-opt strategy is used with the values k = 2 and k = 3.

Given the importance of the new neighborhood structure in our proposed method, we dedicate this section to explaining it in detail. We start by describing the LS mechanism, then detail the concepts of neighborhood structure and search space, and after that we explicitly describe an architectural model for obtaining a new candidate solution by restricting the original search space.
Typically, the neighborhood structures used in LS are analogous to the k-opt method explained above, in the sense that a candidate solution is obtained by applying a slight modification to the previous candidate; see [3] for an analysis of several types of large neighborhoods for combinatorial optimization problems. A fundamentally different philosophy is to use subspaces to induce candidate solutions. In this model, the new candidate is not obtained directly from a previous solution. Rather, the candidate is generated in an indirect step, which consists of perturbing a subspace, in an LS fashion, so as to obtain a new subspace that is marginally different from the former; the new subspace is then used to derive the new candidate solution. This concept adds an extra layer to the architectural model defining the neighborhood structure. The method is detailed in Algorithm 3, which generalizes the method presented in [35] for solving the minmax regret spanning tree problem. In [35], the first step applies local transformations to a connected graph (the subspace) to obtain a new graph that is also connected (the new subspace); in the second step, the difference in regret between the original and the modified candidate solutions is evaluated.

Algorithm 3 Neighbor induction (R)
Input: R, a subspace of the original search space S.
Output: Y′, the new candidate solution.
  1: R′ ← subspace-perturbation(R)
  2: Y′ ← generate-candidate(R′)

Our implementation of the MMR-P neighborhood retains the idea of using bitmap strings to represent (and restrict) the search space. We define a bitmap string of cardinality |A| such that π(j) = 1 if arc a_j belongs to the current subset and π(j) = 0 otherwise; π(j) denotes bit j of the bitmap vector. The full process for creating a new search space is detailed in Algorithm 4. At each iteration, a predetermined fraction of the arcs of the original subspace are modified, i.e., they are set to 1 (added) if they were not present in π or set to 0 (deleted) otherwise. This fraction is controlled by the parameter γ, which directly governs the balance between exploration and exploitation, as detailed next.

Small values of γ lead to slight perturbations of the current subspace, i.e., the resulting subspace will be only marginally different from the subspace currently being examined; this configuration favors the exploitation of the current solution. In contrast, large values of γ produce strong perturbations, yielding subspaces that are expected to be very different from the subspace currently being perturbed, which favors the exploration of unvisited regions of the original search space. Exploratory tests on a variety of datasets have shown that a suitable value of γ depends on the dataset being tested, and particularly on its size. Once the subspace is determined, the algorithm verifies that there exists a path between s and t; if so, π′ is accepted, otherwise it is rejected and a new version of π′ is randomly generated following the same scheme. The overall algorithm starts with the entire search space by setting all bits of the vector π to 1. Observe that, in our definition of the neighborhood, a subspace is not restricted to connected graphs, i.e., a subspace may (or may not) contain disconnected components; for this reason, we must check at every iteration that it contains at least one s-t path. Note that disconnected components may become connected in later iterations, depending on the stochastic nature of the perturbation.

Once the auxiliary graph is determined, we obtain a new candidate solution from it. When node t is reachable from node s, the new candidate solution is computed using Algorithm 5. In our proposal, the new candidate solution, i.e., a new s-t path, is obtained by a heuristic criterion: we apply the HMU method mentioned earlier. We then calculate the regret of this path with a classical SP algorithm over the original graph and use it to decide whether or not to accept the new subspace. With this method we can tailor the percentage of arcs flipped when generating a neighbor candidate, enabling us to find the right balance between exploration and exploitation. The consequence, however, is that we can no longer use the difference between regrets as the acceptance criterion; instead, we have to calculate the regret via a heuristic method. For MMR-P this compromise is acceptable, since linear time algorithms are known for the two SP computations required to evaluate the HU and HM heuristics.

Algorithm 4 MMR-P subspace perturbation (π, γ)
Input:
  - π, a bitmap string of cardinality |A| such that π(j) = 1 if arc e_j belongs to the current subset and π(j) = 0 otherwise.
  - γ, the fraction of arcs of the original subspace to be flipped (Γ = ⌊γ·n⌋, where n is the number of arcs).
Output:
  - π′, a bitmap string of cardinality |A| such that π′(j) = 1 if arc e_j belongs to the new subset and π′(j) = 0 otherwise.
  π′ ← π
  for k = 0 → Γ do
      j ← RANDOM(0, |π′|)
      if π′(j) = 0 then
          π′(j) ← 1
      else
          π′(j) ← 0
      end if
  end for

Algorithm 5 MMR-P candidate generation (π)
Input:
  - π, a bitmap string of cardinality |A| such that π(j) = 1 if arc e_j belongs to the current subset and π(j) = 0 otherwise.
  - f(·), a cost function.
Output:
  - Y′, a new candidate solution.
  1: Y_HU ← HU(π)
  2: Y_HM ← HM(π)
  3: if f(Y_HU) < f(Y_HM) then
  4:     Y′ ← Y_HU
  5: else
  6:     Y′ ← Y_HM
  7: end if
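A compact way to read Algorithm 4 together with the feasibility check described above is sketched below. This is an illustrative interpretation, not the authors' code: the ArcList container and the function names are hypothetical, and an accepted bitmap would then be handed to HMU restricted to the arcs with bit 1, as in Algorithm 5.

```cpp
#include <vector>
#include <queue>
#include <random>

// Hypothetical arc list for a digraph: arc k goes from tail[k] to head[k].
struct ArcList { std::vector<int> tail, head; int numNodes = 0; };

// Algorithm 4: flip a fraction gamma of the bitmap bits that encode the current subspace.
std::vector<char> perturbSubspace(const std::vector<char>& pi, double gamma, std::mt19937& rng) {
    std::vector<char> out = pi;
    std::uniform_int_distribution<std::size_t> pick(0, out.size() - 1);
    const std::size_t flips = static_cast<std::size_t>(gamma * out.size());
    for (std::size_t k = 0; k < flips; ++k) {
        std::size_t j = pick(rng);
        out[j] = out[j] ? 0 : 1;                   // add or delete arc j
    }
    return out;
}

// Feasibility test used before Algorithm 5: is t reachable from s using only arcs with bit 1?
bool hasPath(const ArcList& g, const std::vector<char>& pi, int s, int t) {
    std::vector<std::vector<int>> adj(g.numNodes);
    for (std::size_t k = 0; k < g.tail.size(); ++k)
        if (pi[k]) adj[g.tail[k]].push_back(g.head[k]);
    std::vector<char> seen(g.numNodes, 0);
    std::queue<int> q;
    q.push(s); seen[s] = 1;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (u == t) return true;
        for (int v : adj[u]) if (!seen[v]) { seen[v] = 1; q.push(v); }
    }
    return false;
}
```

A rejected bitmap (no s-t path) would simply be regenerated as described above, while an accepted one induces the subgraph on which HM and HU are run to produce the new candidate path.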
5 Benchmark Instances

In the literature, several classes of instances have been considered in computational experiments evaluating the performance of algorithms proposed for MMR-P. Among them we find the following: random networks [33, 31, 32] and [41], road networks located in some European cities [33, 31, 32], and Layered networks [33, 31, 32, 41].

Extensive experiments on random networks [41] showed that instances with 1 000 up to 20 000 nodes were solved, in short times, by an implementation in CPLEX; thus this class of instances is not considered in the present research. Road networks from European cities are not available, and therefore only Layered networks from this traditional group of instances are considered here. A new class of networks, Grid instances (which can be interpreted as a type of road network), was defined in [11] in the study of the relative robust version of MMR-P; this class of instances is considered in our experiments and is defined below.

Layered networks were introduced in [46] in the study of the computational complexity of the MMR-P problem. In [32] it is mentioned that Layered networks simulate some classes of telecommunication networks. Layered networks are named K-n-c-d-w, where n is the number of nodes and w is the number of layers; each cost interval has the form $[c^-_{ij}, c^+_{ij}]$, where a random number $c_{ij} \in [1, c]$ is generated, $c^-_{ij} \in [(1-d)c_{ij}, (1+d)c_{ij}]$ and $c^+_{ij} \in [c^-_{ij}+1, (1+d)c_{ij}]$, with 0 < d < 1 [31]. Figure 1 shows an example of a Layered instance (K-12-c-d-3).

Two groups of Layered instances were created. Group L1 contains eight subgroups of instances where, within each subgroup, only the width of the uncertainty interval varies; the number of nodes ranges from 1 000 in the first subgroup to 10 000 in the last, and the number of layers in each subgroup is fixed at 10% of n. The second group of Layered instances, L2, contains four subgroups of instances where both the width of the uncertainty interval and the number of layers vary; the number of nodes ranges from 250 in the first subgroup to 2 000 in the last. Both groups of instances are described in detail in Tables 1 and 4 for L1 and L2, respectively.

A Grid network is associated with a matrix of n rows and m columns. Each matrix cell corresponds to a node, and two arcs with opposite directions connect each pair of nodes whose matrix cells are adjacent. Therefore, the resulting directed graph has n·m nodes and 2(2mn − n − m) arcs. The node s is assumed to be located at position (1, 1) of the matrix and the node t at position (m, n); an example with n = 3 and m = 4 is given in Figure 2. The interval costs were generated in the same way as for the Layered instances. The instances are named G-n-m-c-d, where G identifies the instance type, n is the number of rows and m is the number of columns; we use c = 200 and d = 0.5 for all instances in this group. For the grid group G, instances of different sizes were considered: 2x{20, 40, 80, 160, 320} with {40, 80, 160, 320, 640} nodes and {116, 236, 476, 956, 1916} arcs respectively, 4x40 with 160 nodes and 552 arcs, 8x80 with 640 nodes and 2 384 arcs, 16x160 with 2 560 nodes and 9 888 arcs, and 32x320 with 10 240 nodes and 40 256 arcs.

[Figure 1: Example of a Layered instance K-12-c-d-3.]
[Figure 2: Example of a Grid instance G-3-4-c-d.]

Implementation of the algorithms: the exact approaches were implemented using CPLEX 12.5 and Concert Technology.
The heuristic approaches were implemented in C++. All CPLEX parameters were set to their default values, except in the B&C approach, where the following settings were used: (i) CPLEX cuts were turned off, (ii) CPLEX heuristics were turned off, and (iii) the time limit was set to 900 seconds. All experiments were performed on an Intel Core i7-3610QM machine with 16 GB of RAM, and each execution was run on a single processor core.

Instances and best known solutions can be found at https://github.com/frperezga/MinmaxRegretPath.
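As a worked example of the Grid instance scheme described above, the following sketch generates a G-n-m-c-d instance. The exact sampling of the interval bounds follows our reading of the (partly garbled) generation rule and should be treated as an assumption, as should the node numbering and the struct layout.

```cpp
#include <algorithm>
#include <random>
#include <vector>
#include <cstdio>

// One directed arc of a grid instance with its interval cost.
struct GridArc { int from, to; int cmin, cmax; };

// Generates a G-n-m-c-d instance: an n x m grid of nodes, two opposite arcs between
// orthogonally adjacent cells, and interval costs derived from a base cost c_ij in [1, c].
std::vector<GridArc> generateGridInstance(int n, int m, int c, double d, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> base(1, c);
    auto id = [m](int r, int col) { return r * m + col; };   // row-major node numbering

    std::vector<GridArc> arcs;
    auto addPair = [&](int u, int v) {
        for (int dir = 0; dir < 2; ++dir) {                  // one arc per direction
            int cij = base(rng);
            int lo  = static_cast<int>((1.0 - d) * cij);
            int hi  = static_cast<int>((1.0 + d) * cij);
            std::uniform_int_distribution<int> low(std::max(1, lo), hi);
            int cmin = low(rng);
            std::uniform_int_distribution<int> up(std::min(cmin + 1, hi), hi);
            int cmax = std::max(cmin + 1, up(rng));          // keep cmin < cmax
            arcs.push_back(dir == 0 ? GridArc{u, v, cmin, cmax} : GridArc{v, u, cmin, cmax});
        }
    };
    for (int r = 0; r < n; ++r)
        for (int col = 0; col < m; ++col) {
            if (col + 1 < m) addPair(id(r, col), id(r, col + 1));   // horizontal neighbors
            if (r + 1 < n)   addPair(id(r, col), id(r + 1, col));   // vertical neighbors
        }
    return arcs;                                             // s = node 0, t = node n*m - 1
}

int main() {
    auto arcs = generateGridInstance(3, 4, 200, 0.5, 42);
    std::printf("G-3-4 instance: %zu arcs\n", arcs.size());  // expect 2*(2*3*4 - 3 - 4) = 34
}
```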

6 Exact Results and Analysis

We know of four papers that propose exact algorithms and conduct experiments for MMR-P. The approach in [32], according to the authors, outperformed previous approaches by the same group of researchers [33, 31]; we therefore focus on that paper. Other experimental research appears in a chapter of the book [23]. A general drawback of the experiments conducted with these approaches is the size of the instances tested: only small instances were used, which makes it very difficult to draw conclusions. Even so, in [32] the performance of the algorithms was analyzed on random instances, Layered instances and three instances from real road networks, and the authors concluded that a Benders decomposition approach performed better than a branch and bound algorithm and than the MILP formulation given in [22] implemented in CPLEX. Very recently, [18] proposed a B&C procedure based on an improved lower bound for the problem; they consider several classes of graph instances, including two large real instances. Our effort in this paper is to gain more information about the performance of the algorithms when applied to instances of both greater size and different structure.

Group L1. For the group L1 of Layered instances, Table 2 shows the results of MILP with a time limit of 900 seconds. It is clear that from 4 000 nodes upward the algorithm's performance degrades dramatically, so that for 5 000 nodes no optimal solution was obtained and, worse yet, no feasible solutions were found. For the same group of instances, the B&C algorithm was always able to find optimal solutions in no more than 250 seconds on average over ten runs, except for n = 10 000, where the algorithm begins to be affected by the combinatorial explosion.

Group L2. Tables 3 and 4 illustrate the performance of the MILP and B&C algorithms on the second group of instances, L2. These instances contain 250, 500, 1 000 and 2 000 nodes, with two, four and six layers each. Table 3 shows that MILP is able to obtain optimal solutions for all numbers of nodes when the number of layers is six. However, its performance clearly diminishes when the number of nodes increases and the number of layers is two or four; for example, for 2 000 nodes and two layers, MILP achieved an 8% gap on average. Table 4 shows that the performance of B&C is clearly inferior to that of MILP, with large gaps (about 30%) already for 250 nodes and two layers. Clearly, MILP outperforms B&C for this class of instances. In conclusion, after the experiments with the exact algorithms MILP and B&C on Layered instances, the group L1 of large instances can be solved rapidly by B&C; with respect to group L2, MILP performs better than B&C but loses efficiency from 1 000 nodes and two layers. It is clear that heuristic approaches are necessary for solving the large L2 instances.

Group G. MILP provides better solutions than B&C. However, as the size of the instances increases, the gaps also increase (see Table 1). For two combinations of the parameters m and n, both exact algorithms produce high gaps. It is also noted that the time limit was exhausted for these instances.
Considering that the size of these instances is relatively small, it is clear that heuristics are necessary for solving large instances with this structure.

Table 1: Running times and gaps for B&C and MILP on G instances. n and m are the numbers of rows and columns in the grid.

B&C
 n   m    gap min  gap av  gap max   time min  time av  time max (sec.)
 2   20   0        0       0         0.02      0.03     0.05
 2   40   0        0       0         0.02      0.03     0.05
 2   80   0        0       0         0.19      0.52     0.77
 2   160  0        5.32    13.91     412.00    818.47   900.16
 2   320  26.32    32.13   36.89     900.05    900.12   900.20
 4   40   0        0       0         0.062     0.089    0.141
 8   80   0        0       0         1.16      2.33     4.25
 16  160  0        0       0         10.16     33.69    65.36
 32  320  3.80     7.00    14.50     900.20    900.90   900.90

MILP
 n   m    gap min  gap av  gap max   time min  time av  time max (sec.)
 2   20   0        0       0         0.03      0.04     0.06
 2   40   0        0       0         0.03      0.05     0.08
 2   80   0        0       0         0.16      2.31     5.00
 2   160  0        0       0         3.10      7.82     15.20
 2   320  5.49     9.19    13.04     900.14    900.15   900.16
 4   40   0        0       0         0.11      0.14     0.19
 8   80   0        0       0         1.13      2.28     5.66
 16  160  0        0       0         13.94     105.48   240.83
 32  320  1.60     3.10    5.10      900.10    900.60   900.90

Table 2: Running times and gaps for MILP and B&C on L1 instances. * very large gap (UB and/or LB of very low quality). n is the number of nodes, nk is the number of nodes in each layer, d controls the interval length and #optimum is the number of instances (out of 10) solved to optimality.

MILP
 n       nk     d     gap min  gap av  gap max   time min  time av  time max   #optimum
 1 000   100    0.15  0        0       0         20.94     25.34    27.80      10
 1 000   100    0.50  0        0       0         22.67     26.85    30.36      10
 1 000   100    0.85  0        0       0         23.64     28.05    32.53      10
 2 000   200    0.15  0        0       0         118.47    139.26   159.84     10
 2 000   200    0.50  0        0       0         137.30    158.06   176.95     10
 2 000   200    0.85  0        0       0         133.31    162.69   187.33     10
 3 000   300    0.15  0        0       0         358.11    407.35   485.86     10
 3 000   300    0.50  0        0       0         400.77    455.92   519.02     10
 3 000   300    0.85  0        0       0         408.13    464.93   522.63     10
 4 000   400    0.15  0        *(3)    *         675.55    845.94   900.00     7
 4 000   400    0.50  0        *(7)    *         864.91    900.00   900.00     3
 4 000   400    0.85  0        *(7)    *         818.83    900.00   900.00     3
 5 000   500    0.15  *        *(10)   *         900.00    900.00   900.00     0
 5 000   500    0.50  *        *(10)   *         900.00    900.00   900.00     0
 5 000   500    0.85  *        *(10)   *         900.00    900.00   900.00     0
 6 000   600    0.15  *        *(10)   *         900.00    900.00   900.00     0
 6 000   600    0.50  *        *(10)   *         900.00    900.00   900.00     0
 6 000   600    0.85  *        *(10)   *         900.00    900.00   900.00     0
 7 000   700    0.15  *        *(10)   *         900.00    900.00   900.00     0
 7 000   700    0.50  *        *(10)   *         900.00    900.00   900.00     0
 7 000   700    0.85  *        *(10)   *         900.00    900.00   900.00     0
 10 000  1 000  0.15  *        *(10)   *         900.00    900.00   900.00     0
 10 000  1 000  0.50  *        *(10)   *         900.00    900.00   900.00     0
 10 000  1 000  0.85  *        *(10)   *         900.00    900.00   900.00     0

B&C
 n       nk     d     gap min  gap av  gap max   time min  time av  time max   #optimum
 1 000   100    0.15  0        0       0         0.94      1.41     2.31       10
 1 000   100    0.50  0        0       0         1.08      1.75     2.13       10
 1 000   100    0.85  0        0       0         1.16      1.53     1.75       10
 2 000   200    0.15  0        0       0         5.47      5.93     6.30       10
 2 000   200    0.50  0        0       0         5.52      7.91     10.25      10
 2 000   200    0.85  0        0       0         4.50      7.18     8.63       10
 3 000   300    0.15  0        0       0         17.72     19.44    21.81      10
 3 000   300    0.50  0        0       0         19.50     23.02    27.49      10
 3 000   300    0.85  0        0       0         12.50     19.85    23.69      10
 4 000   400    0.15  0        0       0         38.31     41.20    45.00      10
 4 000   400    0.50  0        0       0         42.92     49.46    67.98      10
 4 000   400    0.85  0        0       0         31.27     49.74    95.67      10
 5 000   500    0.15  0        0       0         67.41     76.30    80.09      10
 5 000   500    0.50  0        0       0         67.28     84.86    120.67     10
 5 000   500    0.85  0        0       0         52.02     67.04    108.77     10
 6 000   600    0.15  0        0       0         126.95    137.67   152.44     10
 6 000   600    0.50  0        0       0         135.84    145.01   168.50     10
 6 000   600    0.85  0        0       0         95.67     126.02   247.25     10
 7 000   700    0.15  0        0       0         193.64    228.99   327.39     10
 7 000   700    0.50  0        0       0         207.67    241.70   341.64     10
 7 000   700    0.85  0        0       0         149.44    213.20   363.34     10
 10 000  1 000  0.15  0        **(4)   0         570.89    693.81   860.27     6
 10 000  1 000  0.50  0        **(3)   0         542.10    719.92   900.00     7
 10 000  1 000  0.85  0        **(2)   0         386.67    508.32   900.00     8

Table 3: Running times and gaps for MILP on L2 instances. n is the number of nodes, nk is the number of nodes in each layer, d controls the interval length and #optimum is the number of instances (out of 10) solved to optimality.

 n      nk  d     gap min  gap av  gap max   time min  time av  time max   #optimum
 250    2   0.15  0.01     0.01    0.01      1.95      11.95    34.48      10
 250    2   0.50  0.01     0.01    0.01      2.83      15.67    58.66      10
 250    2   0.85  0.00     0.01    0.01      2.28      30.65    126.11     10
 250    4   0.15  0.00     0.00    0.01      0.42      1.33     3.13       10
 250    4   0.50  0.00     0.00    0.01      0.45      1.56     3.20       10
 250    4   0.85  0.00     0.00    0.01      0.38      1.38     2.81       10
 250    6   0.15  0.00     0.00    0.00      0.27      0.43     0.81       10
 250    6   0.50  0.00     0.00    0.00      0.28      0.43     0.64       10
 250    6   0.85  0.00     0.00    0.00      0.30      0.43     0.64       10
 500    2   0.15  0.62     2.03    3.39      900.08    900.10   900.11     0
 500    2   0.50  0.56     2.38    3.26      900.08    900.10   900.25     0
 500    2   0.85  0.90     2.69    3.80      900.06    900.09   900.11     0
 500    4   0.15  0.00     0.00    0.01      4.91      6.52     9.58       10
 500    4   0.50  0.00     0.01    0.01      4.77      7.27     14.64      10
 500    4   0.85  0.00     0.01    0.01      5.27      6.88     12.56      10
 500    6   0.15  0.00     0.00    0.00      1.06      3.27     6.36       10
 500    6   0.50  0.00     0.00    0.00      1.14      3.25     6.44       10
 500    6   0.85  0.00     0.00    0.00      1.02      3.10     6.78       10
 1 000  2   0.15  4.30     5.26    6.40      900.23    900.25   900.30     0
 1 000  2   0.50  4.66     5.80    6.68      900.23    900.25   900.27     0
 1 000  2   0.85  5.23     6.05    7.38      900.23    900.25   900.27     0
 1 000  4   0.15  0.01     0.06    0.51      46.44     284.59   900.28     9
 1 000  4   0.50  0.01     0.03    0.26      40.03     372.91   900.28     9
 1 000  4   0.85  0.01     0.12    0.62      59.02     397.96   900.30     8
 1 000  6   0.15  0.00     0.00    0.00      13.64     18.85    23.81      10
 1 000  6   0.50  0.00     0.00    0.00      13.58     19.61    23.97      10
 1 000  6   0.85  0.00     0.00    0.00      17.19     19.73    24.73      10
 2 000  2   0.15  6.43     7.45    7.96      900.86    901.02   901.50     0
 2 000  2   0.50  7.24     7.98    8.85      900.83    900.88   900.98     0
 2 000  2   0.85  7.49     8.31    9.31      900.86    900.97   900.38     0
 2 000  4   0.15  0.62     1.55    2.18      900.86    901.19   902.61     0
 2 000  4   0.50  0.95     1.65    2.14      900.88    900.91   900.99     0
 2 000  4   0.85  0.90     1.56    1.96      900.88    900.97   901.33     0
 2 000  6   0.15  0.00     0.00    0.00      58.81     183.86   303.00     10
 2 000  6   0.50  0.00     0.00    0.00      56.20     357.61   901.00     7
 2 000  6   0.85  0.00     0.00    0.00      69.38     517.14   901.09     5

Table 4: Running times and gaps for B&C on L2 instances. n is the number of nodes, nk is the number of nodes in each layer, d controls the interval length and #optimum is the number of instances (out of 10) solved to optimality.

 n      nk  d     gap min  gap av  gap max   time min  time av  time max   #optimum
 250    2   0.15  24.36    27.66   31.49     900.02    900.07   900.16     0
 250    2   0.50  24.74    27.59   30.74     900.03    900.13   900.63     0
 250    2   0.85  24.55    27.83   31.70     900.03    900.06   900.11     0
 250    4   0.15  0.00     0.01    0.01      3.67      206.71   717.99     10
 250    4   0.50  0.00     0.15    1.40      5.17      275.76   900.06     10
 250    4   0.85  0.00     0.19    1.86      4.27      270.76   900.06     10
 250    6   0.15  0.00     0.00    0.01      0.36      1.34     4.16       10
 250    6   0.50  0.00     0.00    0.01      0.64      1.33     3.19       10
 250    6   0.85  0.00     0.00    0.00      0.55      1.34     2.45       10
 500    2   0.15  33.82    36.10   38.24     900.05    900.10   900.14     0
 500    2   0.50  33.97    35.60   37.25     900.03    900.07   900.14     0
 500    2   0.85  33.89    35.72   37.23     900.03    900.12   900.30     0
 500    4   0.15  7.06     9.88    12.57     900.03    900.07   900.14     0
 500    4   0.50  6.48     10.08   13.08     900.03    900.07   900.23     0
 500    4   0.85  7.05     10.71   14.13     900.05    900.11   900.33     0
 500    6   0.15  0.00     0.01    0.01      10.34     156.85   524.13     10
 500    6   0.50  0.01     0.01    0.01      8.36      151.48   522.14     10
 500    6   0.85  0.01     0.01    0.01      9.52      183.92   744.63     10
 1 000  2   0.15  35.50    36.85   37.77     900.06    900.10   900.16     0
 1 000  2   0.50  35.63    37.39   38.56     900.06    900.08   900.13     0
 1 000  2   0.85  35.03    37.18   37.12     900.00    900.00   900.00     0
 1 000  4   0.15  17.43    19.69   22.96     900.05    900.08   900.17     0
 1 000  4   0.50  24.36    27.66   31.49     900.02    900.07   900.16     0
 1 000  4   0.85  18.56    20.37   24.94     900.00    900.00   900.00     0
 1 000  6   0.15  3.81     5.57    7.37      900.05    900.08   900.16     0
 1 000  6   0.50  4.74     5.82    7.51      900.06    900.10   900.16     0
 1 000  6   0.85  4.06     6.84    8.73      900.00    900.00   900.00     0
 2 000  2   0.15  36.55    37.61   43.15     900.00    900.00   900.00     0
 2 000  2   0.50  36.46    38.87   42.94     900.00    900.00   900.00     0
 2 000  2   0.85  36.15    39.03   43.06     900.00    900.00   900.00     0
 2 000  4   0.15  22.21    24.72   28.30     900.00    900.00   900.00     0
 2 000  4   0.50  22.89    25.82   28.78     900.00    900.00   900.00     0
 2 000  4   0.85  22.22    25.32   28.38     900.00    900.00   900.00     0
 2 000  6   0.15  8.27     12.37   15.59     900.00    900.00   900.00     0
 2 000  6   0.50  9.27     11.59   13.42     900.00    900.00   900.00     0
 2 000  6   0.85  9.25     12.35   13.81     900.00    900.00   900.00     0

7 Performance of the Heuristic Approaches

7.1 Algorithm parameters and measure of performance

Taking into account the conclusions about hard instances in both topologies (Layered and Grid), we considered it appropriate to apply the heuristics only to hard instances. Specifically, we consider six groups of L2 instances and two groups of G instances (shown in bold in Tables 1, 3 and 4). Our heuristic approaches are based on the neighborhood Nγ defined in Subsection 4.4; Nγ is embedded in two SA settings and in a local search setting, and both metaheuristic frameworks were explained in Section 4. Additionally, as pointed out in Subsection 4.4, an SA approach using the neighborhood Nk-opt, based on the traditional k-opt heuristic, was implemented here with k = 2 and k = 3.

An important drawback of metaheuristic approaches is the step related to the selection of the best set of parameters. This task can be time-consuming, and it is always necessary to deal with the trade-off between time and solution quality; good discussions can be found in [13, 1] and [7]. The selected parameters were obtained through a mixed process based on a brute-force search over a grid and a trial-and-error procedure: the grid search allows a good exploration of the parameter space, and trial-and-error was used to intensify the search near good solutions. After the experiments, we defined the settings shown in Table 5.
Note that we chose one configuration for Nk-opt and three configurations for Nγ in order to represent the trade-off between computing time and solution quality in our neighborhood. In the case of SA using Nk-opt, more demanding parameter settings were also tested, but the results showed only a very marginal improvement.

(16) ACCEPTED MANUSCRIPT Table 5: Parameters selected for heuristic algorithms. ti is the initial temperature, tf is the final temperature and N is the neighborhood structure for each metaheuristic. Algorithm Simulated Annealing Simulated Annealing Simulated Annealing Local Search. 427 428 429 430. ti 50 5 5 -. tf 0.1 0.01 0.1 -. cooling factor 0.9 0.9 0.88 -. loops 800 800 500 20 000. N Nk-opt Nγ Nγ Nγ. The parameter γ must be regulated depending on the density, size and topology of the graph. The selection must consider the trade-off between exploration and the probability of obtaining a k disconnected graph. We have estimated γ according to γ ≈ |A| , where |A| is the total number of arcs in G and k ∈ [2, 10] is the number of modified edges in each iteration. Table 6 shows the final value of γ in each group of instances.. CR IP T. 426. id SA0 SA1 SA2 LS. Table 6: Selected values for the parameter γ, considering different groups of instances. Group L2 - 1 000 L2 - 2 000. 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473. AN US. 436. 7.2. Performance comparison of the algorithms. As we mentioned above, few papers have tackled the MMR-P problem using heuristics, therefore ad-hoc neighborhood structures that consider the nested structure in the problem formulation MMRPath defined in Subsection 2.1 do not exist. As a natural strategy we use the neighborhood (Nk-opt ) mentioned in Subsection 4.4 in a SA scheme (SA0 algorithm). This implementation had a better performance than another approach based on Ant Colony Optimization algorithm (ACO) that we designed for the problem. So, ACO was discarded and SA0 was compared with the heuristic HM U, since the literature has shown that it obtains moderate gaps for several classes of MMR-P instances and it is a fast algorithm that only needs to solve four classic problems [23, 41]. As detailed in Table 7, HM U achieved gaps between 2.37% and 4.33% for most L2 instances. However, in G instances its performance is irregular. In the G-2-320 instances, the gaps are 11.47% on average and in the other group of instances, G-32-320, they do not exceed 1.53%. To the best of our knowledge, the performance of HM U over the G-2-320 instances is its worst performance over all classes of instances reported in the literature. SA0+2-opt and SA0+3-opt outperform HM U in the majority of L2 instances and SA0+3-opt outperforms SA0+2-opt in most of the L2 instances (except the last) but it achieves worse gaps in G instances. Note that for instances with smaller interval (d = 0.15) the performance of SA0+2-opt is worse. For detailed results, see the Tables 9 10 12 11 in Appendix 10. In summary, k-opt neighborhood in SA framework obtained interesting results, it is able to improve the solutions reached by HM U heuristics in the majority of instances. Regarding run times, in Table 7, we highlight the difference observed between the two classes of G instances. Both variants of SA0 took much more run time in instances G-32-320 than the instances G-2-320. This is due to the difficulty in rebuilding a path in G-32-320 class using the k-opt framework. From the previous analysis it is clear that SA0 (using both variants) outperforms HM U but over most instances it does not reach the best known solutions BKS (they can be accessed in the link at the footnote of page 9). Therefore the task of the SA approach using the new neighborhood Nγ is to compete with the BKS values. 
In this context, the performance of LS and of SA with two different parameter settings (SA1 and SA2) is analyzed. The objective of including the performance of LS with the proposed neighborhood is to analyze to what extent the SA mechanism for escaping the local optima found by LS is effective. Table 8 shows the results of the LS and SA approaches using Nγ. LS clearly achieved better gaps than HMU and SA0 for all instances, running in times similar to those of SA0. From the same table it is clear that, with respect to the L2 instances, SA1 and SA2 outperform LS, and that SA2 is able to obtain better results than LS in less time. It can also be noted that the performance of SA1 is slightly better than that of SA2; this was expected, since the parameters used by SA1 are computationally more expensive than those used by SA2. These results are detailed in Tables 16, 17 and 18. For example, on the L2 instances with 2 000 nodes, the gap statistics (minimum, average and maximum) are 0.76, 1.06 and 1.39 for LS and 0.71, 0.93 and 1.22 for SA2. When the variant SA1 is applied, more run time is necessary, but the results are better than those obtained by SA2.

Table 7: Gaps (%) and running times obtained by SA0 and HMU for each class of instances. Each class contains 10 instances and we ran 50 experiments for each one with the SA0 approach.

gap (%)
Class                  SA0+2opt                 SA0+3opt                 HMU
                       min     av      max      min     av      max      min     av      max
L2 - 1 000 - 0.15      1.45    2.97    3.99     0.41    1.37    2.28     2.97    3.51    4.03
L2 - 1 000 - 0.50      0.54    1.52    2.80     0.35    1.17    2.13     2.70    3.19    3.83
L2 - 1 000 - 0.85      0.34    1.11    2.13     0.24    1.16    2.17     2.74    3.24    3.88
L2 - 2 000 - 0.15      3.06    3.53    4.33     0.86    1.65    2.32     3.06    3.54    4.33
L2 - 2 000 - 0.50      1.22    2.08    3.19     0.73    1.46    2.10     2.78    3.28    4.19
L2 - 2 000 - 0.85      0.50    1.30    2.16     0.66    1.29    1.93     2.37    3.19    4.00
G - 2 - 320            1.92    8.68    15.15    6.27    11.47   15.04    6.70    11.57   15.20
G - 32 - 320           0.00    0.53    1.53     -0.18   0.39    1.53     0.00    0.53    1.53

time (seconds)
Class                  SA0+2opt                    SA0+3opt
                       min      av       max       min      av       max
L2 - 1 000 - 0.15      36.63    37.81    40.08     36.61    37.65    39.62
L2 - 1 000 - 0.50      36.47    37.14    38.42     36.63    37.48    39.24
L2 - 1 000 - 0.85      36.36    36.82    37.56     36.64    39.69    39.58
L2 - 2 000 - 0.15      73.42    74.56    77.97     73.38    74.18    75.69
L2 - 2 000 - 0.50      73.58    75.60    78.89     73.30    74.75    77.50
L2 - 2 000 - 0.85      70.44    71.52    88.33     73.58    75.41    78.74
G - 2 - 320            30.31    32.04    34.47     30.22    31.28    33.08
G - 32 - 320           776.77   894.10   932.64    710.54   889.89   938.68

These results confirm the effectiveness of SA using Nγ when a group of difficult instances is investigated. The performance of the heuristics applied to the G instances is very different depending on the type of instance used, G-2-320 or G-32-320. LS, SA1 and SA2 are not able to improve the quality of the solutions provided by the exact algorithms, nor that of the solutions provided by HMU, for the G-32-320 instances. Considering that the best gap, 1.53%, comes from MILP, these instances can be considered well solved at this size. The situation for the G-2-320 instances is different: the heuristics are able to largely improve the gaps of HMU and SA0 and almost equal the best known values of the exact algorithms. In particular, SA1 is able, in one instance, to improve the solution given by the exact approaches. It is clear that HMU finds solutions with large gaps, over 15% in some instances. Considering that the best gap from MILP is 5%, these instances tend to become difficult to solve as their size increases.

As previously mentioned, two versions of the SA algorithm with different parameters were tested with our novel neighborhood. The degradation in the quality of the obtained solutions when the more relaxed parameters were used was small but significant. This allows either run time or solution quality to be prioritized. However, even the more relaxed version of the Simulated Annealing algorithm found better solutions than the implemented Local Search. For detailed results, see Tables 13, 14, 15, 16, 17 and 18 in Appendix 10.

Table 8: Gaps (%) and running times obtained by LS, SA1 and SA2 for each class of instances. Each class contains 10 instances and considers 50 runs.

gap (%)
Class                  LS                       SA1                      SA2
                       min     av      max      min     av      max      min     av      max
L2 - 1 000 - 0.15      0.07    1.70    3.56     0.00    1.05    1.53     0.00    1.35    3.56
L2 - 1 000 - 0.50      0.02    1.38    3.3      -0.04   0.93    3.31     -0.04   1.11    3.31
L2 - 1 000 - 0.85      0.00    1.44    3.54     0.00    1.11    3.54     0.00    1.24    3.54
L2 - 2 000 - 0.15      0.00    0.97    3.39     -0.07   0.64    3.39     0.01    0.83    3.39
L2 - 2 000 - 0.50      0.04    1.15    3.10     -0.11   0.88    3.10     -0.05   1.01    3.10
L2 - 2 000 - 0.85      -0.43   1.07    3.08     -0.45   0.83    3.08     -0.44   0.96    3.14
G - 2 - 320            -0.05   2.15    7.72     -0.12   1.62    6.97     0.00    2.23    8.18
G - 32 - 320           0.00    0.53    1.53     0.00    0.53    1.53     0.00    0.53    1.53

time (seconds)
Class                  LS                          SA1                         SA2
                       min      av       max       min       av       max      min      av       max
L2 - 1 000 - 0.15      33.05    36.05    41.56     81.34     85.82    97.66    26.39    27.43    29.33
L2 - 1 000 - 0.50      34.03    35.94    39.63     80.74     85.00    89.70    26.44    27.61    30.28
L2 - 1 000 - 0.85      34.11    35.59    38.20     81.03     84.20    89.39    26.53    27.59    29.20
L2 - 2 000 - 0.15      69.97    72.96    76.57     167.48    175.00   190.40   54.99    56.97    61.22
L2 - 2 000 - 0.50      71.33    72.82    76.08     162.89    173.68   182.99   54.88    56.55    58.34
L2 - 2 000 - 0.85      70.42    72.65    79.83     157.30    163.90   178.73   55.33    57.20    60.12
G - 2 - 320            19.97    36.93    39.52     85.47     89.24    93.52    27.17    19.11    30.83
G - 32 - 320           425.49   537.62   690.44    418.83    439.19   579.79   203.92   227.89   273.21

8. Conclusions and final comments

Both exact and heuristic algorithms were proposed for solving the MMR-P problem, an NP-Hard combinatorial optimization problem under uncertainty. The problem has been used as an effective way to formulate a version of the well-known shortest path problem in a network when the arc weights are not completely known.

A B&C exact algorithm was proposed here for solving MMR-P. A broad set of instances from telecommunication networks, the Layered instances, whose sizes range from 100 to 10 000 nodes, was analyzed. The algorithm proved to outperform a traditional exact approach based on a MILP formulation and implemented with the CPLEX solver when applied to the set of Layered instances. Additionally, a class of Layered networks with a special structure was investigated because exact algorithms have great difficulty finding their exact solutions. For these instances MILP outperformed the B&C approach; however, the MILP approach loses efficiency as the size of the instances grows. Another class of test instances, the Grid instances, which resemble road networks, was introduced for the problem in our research. For these networks, the MILP approach outperformed the B&C approach but was unable to solve instances with more than 5 000 nodes.

A new and sophisticated neighborhood was designed for MMR-P, and Local Search and Simulated Annealing algorithms based on this neighborhood were proposed. These heuristics were able to outperform a traditional basic heuristic, HMU, an ACO metaheuristic and another SA approach using the k-opt neighborhood when tested on the sets of instances considered. More importantly, the Simulated Annealing algorithm was able to obtain feasible solutions of a quality similar to that of the solutions found by the two developed exact algorithms for the Grid instances. For larger Grid instances, both exact algorithms produce large gaps or are unable to obtain feasible solutions in reasonable times. In this context, Simulated Annealing was able to find good feasible solutions in relatively short times.

Since the SP problem and its variants have many important applications in several fields, the study of new efficient heuristics for large instances is necessary. Future research should consider exploiting the novel neighborhood by applying it to other MMR problems.

9. Acknowledgements

Alfredo Candia-Véjar was supported by CONICYT, FONDECYT project N° 1121095.

References

[1] B. Adenso-Diaz and M. Laguna. Fine-tuning of algorithms using fractional experimental designs and local search. Operations Research, 54(1):99–114, 2006.

[2] R. Ahuja, T. Magnanti, and J. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Upper Saddle River, NJ, 1993.

[3] R. K. Ahuja, Ö. Ergun, J. Orlin, and A. Punnen. A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics, 123(1):75–102, 2002.

[4] H. Aissi, C. Bazgan, and D. Vanderpooten. Min-max and min-max regret versions of combinatorial optimization problems: A survey. European Journal of Operational Research, 197(2):427–438, 2009.

[5] I. Averbakh and V. Lebedev. Interval data minmax regret network optimization problems. Discrete Applied Mathematics, 138(3):289–301, 2004.

[6] D. Bertsimas and J. Tsitsiklis. Simulated annealing. Statistical Science, 8(1):10–15, 1993.

[7] M. Birattari and J. Kacprzyk. Tuning Metaheuristics: A Machine Learning Perspective, volume 197. Springer, 2009.

[8] A. Candia-Vejar, E. Alvarez-Miranda, and N. Maculan. Minmax regret combinatorial optimization problems: an algorithmic perspective. RAIRO Operations Research, 45(2):101–129, 2011.

[9] C. Ning and F. You. Adaptive robust optimization with minimax regret criterion: Multiobjective optimization framework and computational algorithm for planning and scheduling under uncertainty. Computers and Chemical Engineering, 108. doi: 10.1016/j.compchemeng.2017.09.026.

[10] A. Chassein and M. Goerigk. A new bound for the midpoint solution in minmax regret optimization with an application to the robust shortest path problem. European Journal of Operational Research, 244(3):739–747, 2015.

[11] A. Coco, J. Júnior, T. Noronha, and A. Santos. An integer linear programming formulation and heuristics for the minmax relative regret robust shortest path problem. Journal of Global Optimization, 60(2):265–287, 2014.

[12] E. Conde and A. Candia. Minimax regret spanning arborescences under uncertain costs. European Journal of Operational Research, 182(2):561–577, 2007.

[13] S. Coy, B. Golden, G. Runger, and E. Wasil. Using experimental design to find effective parameter settings for heuristics. Journal of Heuristics, 7(1):77–97, 2001.

[14] E. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269–271, 1959.

[15] M. Ehrgott, J. Ide, and A. Schöbel. Minmax robustness for multi-objective optimization problems. European Journal of Operational Research, 239(1):17–31, 2014.

[16] B. Escoffier, J. Monnot, and O. Spanjaard. Some tractable instances of interval data minmax regret problems: bounded distance from triviality (short version). In 34th International Conference on Current Trends in Theory and Practice of Computer Science, volume 4910 of Lecture Notes in Computer Science, pages 280–291, Nový Smokovec, Slovakia, 2008. Springer-Verlag.

[17] Y. Gao. Shortest path problem with uncertain arc lengths. Computers and Mathematics with Applications, 62(6):2591–2600, 2011.

[18] H. Gilbert and O. Spanjaard. A double oracle approach to minmax regret optimization problems with interval data. European Journal of Operational Research, 262:929–943, 2017.

[19] W. Guerrero, N. Velasco, C. Prodhon, and C. Amaya. On the generalized elementary shortest path problem: A heuristic approach. Electronic Notes in Discrete Mathematics, 41:503–510, 2013.

[20] T. Hasuike. Robust shortest path problem based on a confidence interval in fuzzy bicriteria decision making. Information Sciences, 221:520–533, 2013.

[21] J. Kang. The minmax regret shortest path problem with interval arc lengths. International Journal of Control and Automation, 6(5):171–180, 2013.

[22] O. Karasan, M. Pinar, and H. Yaman. The robust shortest path problem with interval data. Technical report, Bilkent University, 2001.

[23] A. Kasperski. Discrete Optimization with Interval Data, volume 228 of Studies in Fuzziness and Soft Computing. Springer, Berlin, Heidelberg, 2008.

[24] A. Kasperski and P. Zieliński. An approximation algorithm for interval data minmax regret combinatorial optimization problems. Information Processing Letters, 97(5):177–180, 2006.

[25] A. Kasperski, M. Makuchowski, and P. Zieliński. A tabu search algorithm for the minmax regret minimum spanning tree problem with interval data. Journal of Heuristics, 18(4):593–625, 2012.

[26] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.

[27] P. Kouvelis and G. Yu. Robust Discrete Optimization and its Applications. Kluwer Academic Publishers, 1997.

[28] L. Lin and M. Gen. Priority-based genetic algorithm for shortest path routing problem in OSPF. In M. Gen, D. Green, O. Katai, B. McKay, A. Namatame, R. Sarker, and B.-T. Zhang, editors, Intelligent and Evolutionary Systems, volume 187 of Studies in Computational Intelligence, pages 91–103. Springer, Berlin, Heidelberg, 2009.

[29] Y. Marinakis, A. Migdalas, and A. Sifaleras. A hybrid particle swarm optimization–variable neighborhood search algorithm for constrained shortest path problems. European Journal of Operational Research, 261(3):819–834, 2017.

[30] R. Montemanni. A Benders decomposition approach for the robust spanning tree problem with interval data. European Journal of Operational Research, 174(3):1479–1490, 2006.

[31] R. Montemanni and L. Gambardella. An exact algorithm for the robust shortest path problem with interval data. Computers & Operations Research, 31(10):1667–1680, 2004.

[32] R. Montemanni and L. Gambardella. The robust shortest path problem with interval data via Benders decomposition. 4OR, 3(4):315–328, 2005.

[33] R. Montemanni, L. Gambardella, and A. Donati. A branch and bound algorithm for the robust shortest path problem with interval data. Operations Research Letters, 32(3):225–232, 2004.

[34] R. Montemanni, J. Barta, M. Mastrolilli, and L. Gambardella. The robust traveling salesman problem with interval data. Transportation Science, 41(3):366–381, 2007.

[35] Y. Nikulin. Simulated annealing algorithm for the robust spanning tree problem. Journal of Heuristics, 14(4):391–402, 2008.

[36] S. Okada and M. Gen. Fuzzy shortest path problem. Computers and Industrial Engineering, 27(1-4):465–468, 1994.

[37] L. Paquete, J. Santos, and D. Vaz. Efficient paths by local search. In Proceedings of the 7th ALIO/EURO Workshop (ALIO-EURO 2011), Porto, 2011.

[38] M. Pascoal and M. Resende. The minmax regret robust shortest path problem in a finite multi-scenario model. Applied Mathematics and Computation, 241:88–111, 2014.

[39] J. Pereira and I. Averbakh. Exact and heuristic algorithms for the interval data robust assignment problem. Computers & Operations Research, 38(8):1153–1163, 2011.

[40] J. Pereira and I. Averbakh. The robust set covering problem with interval data. Annals of Operations Research, 207(1):217–235, 2013.

[41] F. Pérez, C. Astudillo, M. Bardeen, and A. Candia-Véjar. A simulated annealing approach for the minmax regret path problem. In Proceedings of the Congresso Latino Americano de Investigación Operativa (CLAIO) / Simpósio Brasileiro de Pesquisa Operacional (SBPO), 2012.

[42] F. Perez-Galarce, E. Álvarez-Miranda, A. Candia-Véjar, and P. Toth. On exact solutions for the minmax regret spanning tree problem. Computers & Operations Research, 47:114–122, 2014.

[43] T. Pinto, C. Alves, and J. de Carvalho. Variable neighborhood search for the elementary shortest path problem with loading constraints. In International Conference on Computational Science and Its Applications, pages 474–489. Springer, 2015.

[44] A. Raith, M. Schmidt, A. Schöbel, and L. Thom. Extensions of labeling algorithms for multi-objective uncertain shortest path problems. Networks, in press. doi: 10.1002/net.21815.

[45] A. Raith, M. Schmidt, A. Schöbel, and L. Thom. Multi-objective minmax robust combinatorial optimization with cardinality-constrained uncertainty. European Journal of Operational Research, 267(2):628–642, 2018.

[46] G. Yu and J. Yang. On the robust shortest path problem. Computers & Operations Research, 25(6):457–468, 1998.

[47] P. Zieliński. The computational complexity of the relative robust shortest path problem with interval data. European Journal of Operational Research, 158(3):570–576, 2004.

10. Appendix

Table 9: Running times and gaps for Simulated Annealing with the SA0 parameters and the 2-opt neighborhood on G instances.

                    gap (%)                  time (sec.)
n    m    #    min     av      max      min       av        max       HMU
2    320  0    3.13    10.07   15.04    30.68     32.09     32.92     15.04
2    320  1    6.37    10.31   13.08    31.42     32.04     32.86     13.50
2    320  2    4.20    7.30    10.29    30.31     31.46     31.86     10.29
2    320  3    3.24    7.33    8.16     31.31     31.60     32.20     8.16
2    320  4    6.17    9.81    10.58    31.52     32.02     33.31     10.58
2    320  5    3.73    10.70   13.74    31.44     31.76     32.22     13.74
2    320  6    1.92    5.74    6.72     31.48     31.81     32.34     6.72
2    320  7    2.20    6.85    9.74     31.58     32.22     33.53     9.74
2    320  8    2.23    8.20    15.15    31.52     32.68     34.24     15.15
2    320  9    7.49    10.52   12.82    31.70     32.68     34.47     12.82
          x̄    4.07    8.68    11.53    31.30     32.04     33.00     11.60
32   320  0    1.53    1.53    1.53     896.62    910.80    927.51    1.53
32   320  1    0.72    1.02    1.05     898.16    906.56    914.49    1.02
32   320  2    0.13    0.49    0.50     888.18    900.92    917.93    0.50
32   320  3    0.00    0.00    0.00     879.52    892.47    913.02    0.00
32   320  4    0.02    0.02    0.02     899.52    913.54    932.64    0.02
32   320  5    1.14    1.14    1.14     887.70    904.76    918.02    1.14
32   320  6    0.30    0.30    0.30     891.90    909.44    925.54    0.30
32   320  7    0.40    0.66    0.68     886.23    900.94    915.43    0.68
32   320  8    0.00    0.00    0.00     884.54    898.33    913.98    0.00
32   320  9    0.12    0.12    0.12     776.77    803.21    910.18    0.12
          x̄    0.44    0.53    0.54     878.91    894.10    918.87    0.50

Table 10: Running times and gaps for Simulated Annealing with the SA0 parameters and the 3-opt neighborhood on G instances.

                    gap (%)                  time (sec.)
n    m    #    min     av      max      min       av        max
2    320  0    15.04   15.04   15.04    31.50     32.10     32.91
2    320  1    13.08   13.08   13.08    30.26     31.79     32.72
2    320  2    10.29   10.29   10.29    30.25     30.31     30.42
2    320  3    8.16    8.16    8.16     30.25     30.61     31.55
2    320  4    10.58   10.58   10.58    30.28     30.63     31.56
2    320  5    13.74   13.74   13.74    30.22     30.96     31.84
2    320  6    6.27    6.40    6.40     31.38     31.67     32.24
2    320  7    9.74    9.74    9.74     31.42     31.95     33.08
2    320  8    14.86   14.86   14.86    31.33     31.71     32.16
2    320  9    12.82   12.82   12.82    30.30     31.09     32.12
          x̄    11.46   11.47   11.47    30.72     31.28     32.06
32   320  0    1.53    1.53    1.53     710.54    758.06    929.12
32   320  1    0.43    0.53    0.70     896.01    912.94    938.68
32   320  2    -0.12   0.07    0.13     888.93    909.13    919.40
32   320  3    0.00    0.00    0.00     877.52    894.36    911.28
32   320  4    -0.18   -0.13   0.02     881.04    897.88    917.06
32   320  5    1.07    1.14    1.14     894.51    911.62    929.20
32   320  6    0.30    0.30    0.30     890.59    907.34    924.43
32   320  7    0.40    0.40    0.40     889.29    908.33    928.76
32   320  8    0.00    0.00    0.00     885.76    898.22    919.31
32   320  9    -0.02   0.08    0.12     887.29    901.05    917.87
          x̄    0.34    0.39    0.43     870.15    889.89    923.51

