A User Manual for the jMarkov Package


A User Manual for the jMarkov Package. Undergraduate thesis presented to the Department of Industrial Engineering by Marco Sergio Cote Vicini. Advisors: Raha Akhavan-Tabatabaei, German Riaño. In partial fulfillment of the requirements for the degree of Industrial Engineer. Industrial Engineering, Universidad de Los Andes, June 2011.

To my father and my mother for their constant support; to my sister for her unconditional love.

Acknowledgments

I want to acknowledge the work done by German Riaño, Juan Fernando Perez, Andres Sarmiento and Julio Goez, who are the developers of the jMarkov software and were always there to support me and answer my infinite questions. I also want to acknowledge all the support and guidance given by German, Raha and Andres throughout this last year. Without their help and advice this document would never have been completed.

Table of Contents

Dedication
Acknowledgments
List of Figures
Abstract
I Introduction
II Background
  2.1 Literature Review
  2.2 Markov Chains
  2.3 Quasi Birth and Death Process (QBD)
  2.4 Phase Type Distributions (PH Distributions)
    2.4.1 Fitting Algorithms
  2.5 Markov Decision Process (MDP)
    2.5.1 Finite Horizon Problems
    2.5.2 Infinite Horizon Problems
    2.5.3 Continuous Time Markov Decision Processes
    2.5.4 Event Modeling
III Work Process
IV User Manual
  4.1 Programming Knowledge
  4.2 Structure Description
    4.2.1 jMarkov.basic
    4.2.2 jMarkov and jQBD
    4.2.3 jPhase
    4.2.4 jMDP
  4.3 Modeling Process
    4.3.1 jMarkov.basic
    4.3.2 jMarkov
    4.3.3 jQBD
    4.3.4 jPhase
    4.3.5 jMDP
V Applications in Real Life Problems
VI Conclusions

List of Figures

1. Taxonomy for MDP problems
2. User's Manual Architecture
3. The main classes of the basic package
4. The main classes of the jMarkov modeling package
5. BuildRS algorithm
6. The User Interface of jMarkov and jQBD
7. The Toolbar
8. The Interface Views
9. The jPhase Main Classes
10. The jPhase Interface
11. Creating a New PH Distribution
12. Choosing a Fitting Process
13. The jMDP Main Classes

Abstract

This document presents the user manual of the jMarkov software, an object-oriented modeling software package designed for the analysis of stochastic systems in steady state. It allows the user to create Markov models of arbitrary size with its component jMarkov. It also provides the tools to model quasi-birth-death processes with its jQBD component. Further development of the software has created two new modules: jPhase allows the representation of phase-type distributions, which are useful when representing real-life systems, and jMDP is a module for modeling and solving Markov decision processes and dynamic programming problems.

Chapter I

Introduction

When analyzing real-life stochastic systems, using analytical models is often easier, cheaper and more effective than studying the physical system or a simulation model of it. Stochastic modeling is a powerful tool that helps the analysis and optimization of stochastic systems. However, the use of stochastic modeling is not widespread in today's industries and among practitioners. This lack of acceptance has two main causes. The first is the curse of dimensionality, which is defined by the number of states required to describe a system; this number grows exponentially as the size of the system increases. The second is the lack of user-friendly and efficient software packages that allow the modeling of the problem without involving the user in the implementation of the solution algorithms necessary to solve it. The curse of dimensionality is a constant problem that has been addressed by different approaches over time, but it is outside the scope of this document. The focus of this document is the latter issue, the lack of user-friendly and efficient software packages. We propose a generic solver that enables the user to focus on modeling without becoming involved in the complexity required by the solution methods.

jMarkov is an object-oriented framework for stochastic modeling with four components: jMarkov, which models Markov chains; jQBD, which models quasi-birth-death processes; jPhase, which models phase-type distributions; and jMDP, which models Markov decision processes (MDPs). The main contribution of this framework is the separation of the modeling steps from the solution algorithms. The user can concentrate on modeling the problems and choose from a set of solvers at his or her convenience. The structure of the software package allows third-party developers to plug in their own solvers. The modeling software does not use external plain files, like ".txt" or ".dat" files, written with specific commands; rather, it is based on object-oriented programming (OOP) [1], which enables the encapsulation of entities in classes and exposes their functionality independently of their internal implementation. This has many benefits, including, first, the analogy between the mathematical elements and their computational representation; second, the exploration algorithm, which finds all the states in the chain, avoiding possible mistakes the user can make by manual definition in very large systems; and, third, that the software is based on Java, so the user need not deal with technical processes like memory allocation.

Although jMarkov was developed a few years ago, stochastic modeling is not widespread. Most people still prefer other system-analysis tools like simulation. jMarkov has not solved the initial problem of creating a user-friendly software package that motivates the use of stochastic modeling. The main reason for this is the lack of a simple, easy-to-read manual. The existing documentation has an academic and computer-science approach, confusing to people new to stochastic modeling and lacking extensive programming knowledge. Its examples require significant software understanding, making jMarkov look like a complex tool, when it is actually quite

easy to work with. The new user manual is presented in this document.

This document is organized as follows. Section 2 provides a brief mathematical background needed to understand the other sections, as well as descriptions of the notation used. Readers who feel comfortable with the topic can skip this section without any break in continuity. Section 3 describes the work done to create the new manual. Section 4 is the user manual itself. It explains the main computational elements that the user needs to know to build a model and describes the handling of different problems with an example. Section 5 presents references to real industry problems that have been and are being solved using the software. Section 6 presents general conclusions.

Chapter II

Background

In this section we give a short explanation of the main mathematical topics the reader has to be familiar with to facilitate understanding of the software. We do not intend to explain the topics in depth, just to present a quick review. We also provide references for readers who want to read more about these topics, and we present the literature review carried out to look for software packages similar to jMarkov. It is important to mention that, to fulfill the main purpose of this document (to create an effective user manual for jMarkov), this chapter has been taken from the theses written by the jMarkov developers. We want to acknowledge credit for the following pages to German Riaño [30], Juan Fernando Perez [47], Andres Sarmiento [48] and Julio Goez [45].

2.1 Literature Review

Several packages for solving stochastic processes, such as MAMSolver [2], MGMTool [3], SMAQ [4], Qnap2 [5], and XMarca [6], can be found, but they focus on the solution algorithm, whereas ours focuses on the modeling. They are tools that implement analytical algorithms to solve queuing problems. Additionally, there is the SIMULA language [7], which was the first modeling tool based on

OOP. Others, such as MARCA [8], SMART [9], PROBMELA [10] and Generalized Markovian Analysis of Timed Transitions Systems [11], allow building and analyzing Markov chains, but they are not based on OOP, so the modeling becomes harder for beginners to the subject. The program most similar to jMarkov is probably SHARPE [24], which models different types of systems, including combinatorial ones, such as fault trees and queuing networks, and state-space ones, such as Markov and semi-Markov reward models, as well as stochastic Petri nets. It also computes steady-state, transient and interval measures. The main difference between these software packages and ours is that jMarkov has a component for Markov decision processes (MDPs) that allows not only analysis, but also optimization, of the systems under study.

From another perspective, some stochastic linear programming languages have been developed, such as Xpress-SP [12], extensions for AMPL [13] and SAMPL/SPInE [14]. These are not fully comparable to the framework presented in this document, because they have some specific limitations and are not open to every type of stochastic modeling.

The literature review specific to MDP modeling software includes a program able to solve MDPs, winQSB [15], into which the user needs to input the transition matrices for each action. Other works include MDPLab [16], an educational tool developed by Lamond to test different algorithms; a toolbox for Matlab to handle MDPs [17]; and a variety of software made in academia, such as that of Pascal Poupart from the University of Waterloo [18], Trey Smith with his software ZMDP [19], Anthony R. Cassandra from Brown University [20], Matthijs Spaan from the Universiteit van Amsterdam with his software Perseus [21], Tarek Taha with a set of tools for solving POMDPs (partially observable MDPs) [22], and an open source project

called Caylus [23]. However, all of these software packages are primarily focused on the solution algorithms, and all of them use plain files as inputs for building the Markov chain. Our proposed software package centers the effort on facilitating the modeling aspect of the problem, providing pre-coded solution algorithms while also leaving open the possibility of coding new solvers, such as the ones mentioned above.

2.2 Markov Chains

Suppose we observe some characteristic of a system. Let $X(t)$ be the value of the system characteristic at time $t$; this value is not known with certainty before time $t$, so it can be viewed as a random variable. This characteristic that describes the system at a specific time is called the state. A state changes its value when an event occurs. The probability of passing from one state to another is called the transition probability. A stochastic process is simply a description of the relation between the random variables [25].

A Discrete Time Markov Chain (DTMC) is a special type of stochastic process that satisfies the Markovian property. The property defines a process where the conditional distribution of any future state $X(t+1)$, given the past states $X(0), \ldots, X(t-1)$ and the present state $X(t)$, is independent of the past states and depends only on the present state [25]. From now on we limit our description to Continuous Time Markov Chains (CTMCs), although jMarkov can handle Discrete Time Markov Chains (DTMCs) as well.

Let $\{X(t), t \geq 0\}$ be a CTMC with finite state space $S$ and generator matrix $Q$ with components

$$q_{ij} = \lim_{t \downarrow 0} \frac{P\{X(t) = j \mid X(0) = i\} - 1\{i = j\}}{t}, \qquad i, j \in S.$$

It is well known that this generator matrix, along with the initial conditions, completely determines the transient and stationary behavior of the Markov chain [26]. The diagonal components $q_{ii}$ are non-positive and represent the exponential holding rate for state $i$, whereas the off-diagonal elements $q_{ij}$ represent the transition rate from state $i$ to state $j$.

The transient behavior of the system is described by the matrix $\mathbf{P}(t)$ with components

$$p_{ij}(t) = P\{X(t+s) = j \mid X(s) = i\}, \qquad i, j \in S.$$

This matrix can be computed as

$$\mathbf{P}(t) = e^{Qt}, \qquad t > 0.$$

For an irreducible chain, the stationary distribution $\boldsymbol{\pi} = [\pi_1, \pi_2, \ldots]$ is determined as the solution to the following system of equations

$$\boldsymbol{\pi} Q = \mathbf{0}, \qquad \boldsymbol{\pi}\mathbf{1} = 1,$$

where $\mathbf{1}$ is a column vector of ones.
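As a small worked illustration (ours, not from the original jMarkov documentation), consider a machine that fails at rate $\lambda$ and is repaired at rate $\mu$. The generator of this two-state CTMC is

$$Q = \begin{bmatrix} -\lambda & \lambda \\ \mu & -\mu \end{bmatrix},$$

and solving $\boldsymbol{\pi} Q = \mathbf{0}$ together with $\pi_1 + \pi_2 = 1$ gives $\boldsymbol{\pi} = \left[\frac{\mu}{\lambda+\mu}, \frac{\lambda}{\lambda+\mu}\right]$: in the long run the machine is working a fraction $\mu/(\lambda+\mu)$ of the time.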

2.3 Quasi Birth and Death Process (QBD)

Consider a Markov process $\{X(t) : t \geq 0\}$ with a two-dimensional state space $S = \{(n, i) : n \geq 0, 1 \leq i \leq m\}$. The first coordinate $n$ is called the level of the process and the second coordinate $i$ is called the phase. We assume that the number of phases $m$ is finite. In applications, the level usually represents the number of items in the system, whereas the phase might represent different stages of a service process.

We will assume that, in a one-step transition, this process can go only to states in the same level or in adjacent levels. This characteristic is analogous to a Birth and Death Process, where the only allowed transitions are to the two adjacent states [26]. Transitions can be from state $(n, i)$ to state $(n', i')$ only if $n' = n$, $n' = n - 1$ or $n' = n + 1$, and, for $n \geq 1$, the transition rate is independent of the level $n$. Therefore, the generator matrix $Q$ has the following structure

$$Q = \begin{bmatrix} B_{00} & B_{01} & & \\ B_{10} & A_1 & A_0 & \\ & A_2 & A_1 & A_0 \\ & & \ddots & \ddots & \ddots \end{bmatrix},$$

where, as usual, the rows add up to 0. Note that the $A$'s and $B$'s are sub-matrices; do not confuse them with scalar entries. An infinite Markov process with the conditions described above is called a Quasi-Birth and Death Process (QBD).

In general, level zero might have a number of phases $m_0 \neq m$. We will call these first $m_0$ states the boundary states, and all other states will be called typical states. Note that matrix $B_{00}$ has size $(m_0 \times m_0)$, whereas $B_{01}$ and $B_{10}$ are matrices of sizes $(m_0 \times m)$ and $(m \times m_0)$, respectively. Also note that the $A_i$ have size $(m \times m)$.

Assume that the QBD is an ergodic Markov chain. As a result, there is a steady-state distribution $\boldsymbol{\pi}$ that is the unique solution to the system of equations $\boldsymbol{\pi} Q = \mathbf{0}$, $\boldsymbol{\pi}\mathbf{1} = 1$. Divide this $\boldsymbol{\pi}$ vector by levels, analogously to the way $Q$ was divided, as $\boldsymbol{\pi} = [\boldsymbol{\pi}_0, \boldsymbol{\pi}_1, \ldots]$. Then, it can be shown that a solution exists that satisfies

$$\boldsymbol{\pi}_{n+1} = \boldsymbol{\pi}_n R, \qquad n > 0,$$

where $R$ is a constant square matrix of order $m$ [28]. This $R$ is the solution to the matrix quadratic equation

$$A_0 + R A_1 + R^2 A_2 = 0.$$

There are various algorithms that can be used to compute the matrix $R$. For example, one can start with any initial guess $R_0$ and obtain a series of $R_k$ through iterations of the form

$$R_{k+1} = -(A_0 + R_k^2 A_2) A_1^{-1}.$$

This process is shown to converge (and $A_1$ does have an inverse). More elaborate algorithms are presented in Latouche and Ramaswami [27]. Once $R$ has been determined, $\boldsymbol{\pi}_0$ and $\boldsymbol{\pi}_1$ are determined by solving the following linear system of equations

$$\begin{bmatrix} \boldsymbol{\pi}_0 & \boldsymbol{\pi}_1 \end{bmatrix} \begin{bmatrix} B_{00} & B_{01} \\ B_{10} & A_1 + R A_2 \end{bmatrix} = \begin{bmatrix} \mathbf{0} & \mathbf{0} \end{bmatrix},$$

$$\boldsymbol{\pi}_0 \mathbf{1} + \boldsymbol{\pi}_1 (I - R)^{-1} \mathbf{1} = 1.$$
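As a quick sanity check (our addition, not from the original text), take the M/M/1 queue viewed as a QBD with a single phase, $m = 1$, so that $A_0 = \lambda$, $A_1 = -(\lambda + \mu)$ and $A_2 = \mu$. The quadratic becomes $\lambda - R(\lambda + \mu) + R^2\mu = 0$, whose roots are $R = 1$ and $R = \lambda/\mu$; the minimal non-negative solution $R = \rho = \lambda/\mu$ recovers the classical geometric relation $\pi_{n+1} = \rho\,\pi_n$.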

2.4 Phase Type Distributions (PH Distributions)

In this subsection we review the definition and some properties of PH distributions. We follow the treatment presented in [29] and [27]; therefore, the proofs in this section are not included, since the interested reader can find them in those books.

A continuous PH distribution is defined as the time until absorption in a CTMC with one absorbing state and all others transient. The generator matrix of such a process with $m + 1$ states can be written as

$$Q = \begin{bmatrix} 0 & \mathbf{0} \\ \mathbf{a} & A \end{bmatrix},$$

where $A$ is a square matrix of size $m$, $\mathbf{a}$ is a column vector of size $m$ and $\mathbf{0}$ is a row vector of zeros. Here, the first entry in the state space represents the absorbing state. As the sum of the elements in each row must be equal to zero, $\mathbf{a}$ is determined by

$$\mathbf{a} = -A\mathbf{1},$$

where $\mathbf{1}$ is a column vector of ones. In order to completely determine the process, the initial probability distribution is defined, and can be partitioned, in a similar way to the generator matrix, as $[\alpha_0 \;\; \boldsymbol{\alpha}]$, where $\alpha_0$ is the probability that the process starts in the absorbing state $0$. Since the sum of all the components of the initial conditions vector must be equal to 1, $\alpha_0$ is determined by $\alpha_0 = 1 - \boldsymbol{\alpha}\mathbf{1}$.

The distribution of a continuous PH variable $X$ is, therefore, completely determined by the parameters $\boldsymbol{\alpha}$ and $A$ given above. $X$ is said to have representation $(\boldsymbol{\alpha}, A)$. The cumulative distribution function (CDF) of $X$ is shown to be

$$F(t) = 1 - \boldsymbol{\alpha} e^{At}\mathbf{1}, \qquad t \geq 0.$$

Notice that this has a clear similarity to the well-known exponential distribution. In fact, if there is just one transient phase with associated rate $\lambda$ and it is selected at time $0$ with probability one, then the distribution is the exponential. From the previous expression, the probability density function (PDF) of the continuous part can be computed as

$$f(t) = \boldsymbol{\alpha} e^{At}\mathbf{a}, \qquad t > 0.$$

The Laplace-Stieltjes transform of $F(\cdot)$ is given by

$$E[e^{-sX}] = \alpha_0 + \boldsymbol{\alpha}(sI - A)^{-1}\mathbf{a}, \qquad \mathrm{Re}(s) \geq 0,$$

from which the non-centered moments can be calculated as

$$E[X^k] = k!\,\boldsymbol{\alpha}(-A^{-1})^k\mathbf{1}, \qquad k \geq 1.$$

A discrete PH distribution can be seen as the discrete analogue of the continuous PH distribution. In this case, the distribution is defined as the number of steps until absorption in a Discrete Time Markov Chain (DTMC) with one absorbing state and all others transient. The properties for this case can be found in [27].

As stated above, a relevant property of PH distributions is that they are closed under various operations, such as convolution, order statistics and convex mixtures, among others. For example, the mixture of two independent PH variables with representations $(\boldsymbol{\alpha}, A)$ and $(\boldsymbol{\beta}, B)$, chosen with probabilities $p$ and $(1 - p)$, respectively, has a PH representation $(\boldsymbol{\gamma}, C)$, where

$$\boldsymbol{\gamma} = [p\boldsymbol{\alpha}, (1 - p)\boldsymbol{\beta}] \qquad \text{and} \qquad C = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}.$$

Note that this is analogous to the construction of a hyper-exponential distribution. These closure properties can be exploited in modeling some systems, as done, for example, in [30]. Continuous PH distributions have some extra closure properties, such as the distribution of the waiting time in an $M/PH/1$ queue, the residual time, the equilibrium residual time, and the termination time of a PH process with PH failures [31].
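To make the mixture formula concrete (our example), take two exponentials with rates $\lambda_1$ and $\lambda_2$, i.e. $\boldsymbol{\alpha} = \boldsymbol{\beta} = [1]$, $A = [-\lambda_1]$ and $B = [-\lambda_2]$. The mixture with probability $p$ then has representation

$$\boldsymbol{\gamma} = [p, \; 1 - p], \qquad C = \begin{bmatrix} -\lambda_1 & 0 \\ 0 & -\lambda_2 \end{bmatrix},$$

which is exactly the two-phase hyper-exponential distribution with density $f(t) = p\lambda_1 e^{-\lambda_1 t} + (1 - p)\lambda_2 e^{-\lambda_2 t}$.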

2.4.1 Fitting Algorithms

In the last twenty years, the problem of fitting the parameters of a PH distribution has received great attention from the applied probability community. There are different approaches that, as noted in [32], can be classified into two major groups: maximum likelihood methods and moment matching techniques. Nevertheless, almost all the algorithms designed for this task have an important characteristic in common: they reduce the set of distributions to be fitted from the whole PH set to a special subset. The maximum likelihood algorithms we review are by Asmussen et al. [33], Khayari et al. [34], and Thümmler et al. [35]; the moment matching algorithms are by Telek and Heindl [36], Osogami and Harchol [37], and Bobbio et al. [38].

2.5 Markov Decision Process (MDP)

The problems found in this topic can be divided into finite and infinite horizon problems, and they can also be divided into deterministic and stochastic problems. Beyond that, we can also consider a more general kind of problem, called MDPs with events. A suggested taxonomy is shown in Figure 1.

Figure 1: Taxonomy for MDP problems. Deterministic problems (DP) split into finite horizon (total cost) and infinite horizon (discounted or total cost); stochastic problems (MDP) split into finite horizon (total cost) and infinite horizon, the latter in discrete or continuous time, each with discounted, total and average cost criteria.

2.5.1 Finite Horizon Problems

We will show how a finite horizon Markov decision process is built (for a complete reference see Puterman [39] and Bertsekas [40]). Consider a discrete state space, discrete time epochs or stages, and a bivariate random process $\{(X_t, A_t), t = 0, 1, \ldots, T - 1\}$. Each of the variables $X_t \in S_t$ represents the state of the system at stage $t$, and each $A_t \in \mathcal{A}_t$ is the action taken at that stage. The quantity $T < \infty$ is called the horizon of the problem. The sets $S_t$ and $\mathcal{A}_t$ represent the feasible states and actions at stage $t$, and we assume that both are finite. Let $H_t$ be the history of the process up to time $t$, i.e. $H_t = (X_0, A_0, X_1, A_1, \ldots, X_{t-1}, A_{t-1}, X_t)$. The dynamics of the system are governed by a state in which an action is taken, leading the system to another state according to a probability distribution. In general, the resulting state after a transition from one state to another, when the process moves to the next stage, depends on the history and the action taken in the previous stage. A system satisfies the Markov property when

$$P\{X_{t+1} = j \mid H_t = h, A_t = a\} = P\{X_{t+1} = j \mid X_t = i, A_t = a\} = p_{ijt}(a).$$

A decision rule is a function $\pi_t$ that, given a history realization, assigns a probability distribution over the set $\mathcal{A}$. A sequence of decision rules $\pi = (\pi_0, \pi_1, \ldots, \pi_T)$ is called a policy. A policy $\pi$ is called Markovian if, given $X_t$, all previous history becomes irrelevant, that is,

$$P_\pi\{A_t = a \mid H_t = h\} = P_\pi\{A_t = a \mid X_t = i\},$$

where $P_\pi$ denotes the transition probability distribution following policy $\pi$. A Markovian policy $\pi$ is called deterministic if there is a function $f_t(i) \in \mathcal{A}$ such

that

$$P_\pi\{A_t = a \mid X_t = i\} = \begin{cases} 1 & \text{if } a = f_t(i) \\ 0 & \text{otherwise.} \end{cases}$$

For each action $a$ taken at state $i$ and stage $t$, a finite cost $c_t(i, a)$ is incurred. Consequently, it is possible to define a total expected cost incurred from time $t$ to the final stage $T$ following policy $\pi$; this is called the value function

$$v_t^\pi(i) = E_\pi\left[\sum_{s=t}^{T} c_s(X_s, A_s) \,\Big|\, X_t = i\right], \qquad i \in S_0, \tag{1}$$

where $E_\pi$ is the expectation operator following the probability distribution associated with policy $\pi$.

2.5.2 Infinite Horizon Problems

Consider a discrete space, discrete time bivariate random process $\{(X_t, A_t), t \in \mathbb{N}\}$. In particular, we will assume that the system is time homogeneous; this means that at every stage the state space and action space remain constant and the transition probabilities are independent of the stage, i.e. $p_{ijt}(a) = p_{ij}(a) = P\{X_{t+1} = j \mid X_t = i, A_t = a\}$ for all stages. Consequently, a policy $\pi = (\pi_0, \pi_0, \ldots)$ must also be time homogeneous. Costs are also time homogeneous, and $c_t(i, a) = c(i, a)$ stands for the cost incurred when action $a$ is taken in state $i$ at any stage. Besides the total cost objective function presented in the finite horizon problem, it is customary to define two other objective functions, namely the discounted cost and the average cost. The respective value functions under a policy $\pi$ are

$$v_\alpha^\pi(i) = E_\pi\left[\sum_{t=0}^{\infty} \alpha^t c(X_t, A_t) \,\Big|\, X_0 = i\right], \qquad i \in S,$$

for the discounted cost, where $0 < \alpha < 1$,

$$v^\pi(i) = E_\pi\left[\sum_{t=0}^{\infty} c(X_t, A_t) \,\Big|\, X_0 = i\right], \qquad i \in S,$$

for the total cost, and

$$v^\pi(i) = \lim_{T \to \infty} \frac{1}{T} E_\pi\left[\sum_{t=0}^{T} c(X_t, A_t) \,\Big|\, X_0 = i\right], \qquad i \in S,$$

for the average cost.
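To make the discounted-cost criterion concrete, here is a self-contained value iteration sketch (our illustration, not jMDP code; the two-state instance and all its numbers are invented). It approximates the optimal discounted cost, which satisfies $v(i) = \min_a \{c(i, a) + \alpha \sum_j p_{ij}(a) v(j)\}$:

    // Our toy example: value iteration for the discounted-cost criterion.
    public class ValueIterationSketch {
        public static void main(String[] args) {
            int nStates = 2, nActions = 2;
            double alpha = 0.9; // discount factor, 0 < alpha < 1
            // c[i][a]: cost of taking action a in state i (invented numbers)
            double[][] c = { { 1.0, 4.0 }, { 3.0, 2.0 } };
            // p[i][a][j]: probability of moving from i to j under action a
            double[][][] p = {
                { { 0.8, 0.2 }, { 0.1, 0.9 } },
                { { 0.5, 0.5 }, { 0.6, 0.4 } }
            };
            double[] v = new double[nStates]; // current value estimates
            for (int iter = 0; iter < 1000; iter++) {
                double[] vNew = new double[nStates];
                for (int i = 0; i < nStates; i++) {
                    double best = Double.POSITIVE_INFINITY;
                    for (int a = 0; a < nActions; a++) {
                        double q = c[i][a];
                        for (int j = 0; j < nStates; j++)
                            q += alpha * p[i][a][j] * v[j];
                        best = Math.min(best, q); // minimize expected cost
                    }
                    vNew[i] = best;
                }
                v = vNew;
            }
            System.out.printf("v = [%.4f, %.4f]%n", v[0], v[1]);
        }
    }

Solvers of this kind are what jMDP provides pre-coded, so in practice the user only supplies the model.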

2.5.3 Continuous Time Markov Decision Processes

Consider an infinite horizon problem with time homogeneity, where $\{(X(t), A(t)), t \geq 0\}$ is the bivariate process that describes the state of the system and the action taken at time $t$, on a continuous time space. The continuous time infinite horizon problem is denoted CTMDP. Time-homogeneous transitions between states are described by a transition rate

$$\lambda_{ij}(a) = \lim_{h \to 0} \frac{P\{X(t+h) = j \mid X(t) = i, A(t) = a\}}{h}.$$

The Markovian property implies that the time between transitions from one state to another has an exponential distribution with parameter $\lambda_i(a)$ equal to the sum of the exit rates from the former state. This implies that a transition occurs only when the state of the system $X(t)$ changes; no self-transitions are allowed. We define a rate $\lambda$ such that $\lambda \geq \lambda(i, a)$ for all states and actions. Costs can be lump costs $\tilde{c}(i, a)$, incurred at the instant when an action is taken, and can also be continuously incurred at rate $\gamma(i, a)$ while remaining in state $i$.

2.5.4 Event Modeling

In this subsection we introduce MDPs with events (for a complete reference see the work of Becker [41], Mahadevan [42], and Feinberg [43]) in order to show that a process with events is equivalent to another process without events, such as the ones described earlier in this section. The mathematical model presented is an extension of the mathematical models explained before, in the sense that the same problems can be represented in both. The difference lies in that, in the problems with events, we need to condition each transition on the event triggering it. The advantage of such a presentation is that conditioning reduces the reachable set for each state and permits an easier characterization of the system dynamics.

For the mathematical model with events, consider the discrete time random process $\{(X_t, A_t, E_t), t = 0, 1, \ldots, T - 1\}$, where $X_t$ represents the state of the system, $A_t$ represents the action taken, and $E_t$ is the event that occurs at stage $t$ (as a consequence of the current state and the action taken) and triggers the transition to $X_{t+1}$. We call $E_t(i, a)$ the set of events that can occur at stage $t$. The history of the process up to stage $t$ is defined as $H_t = (X_0, A_0, E_0, \ldots, X_{t-1}, A_{t-1}, E_{t-1}, X_t)$. The Markovian behavior of the system implies that

$$P\{X_{t+1} = j \mid H_t = h, A_t = a, E_t = e\} = P\{X_{t+1} = j \mid X_t = i, A_t = a, E_t = e\}.$$

Consequently, the dynamics can be described by transition probabilities defined as $p_{ijt}(a, e) = P\{X_{t+1} = j \mid X_t = i, A_t = a, E_t = e\}$, the conditional probability of reaching state $j$ in the set of reachable states $S_t(i, a, e)$, given that the current state is $i$, action $a$ is chosen and event $e$ occurs. The actions also present a Markovian property, in the sense that $P\{A_t = a \mid H_t = h\} = P\{A_t = a \mid X_t = i\}$. Finally, we assume that the occurrence of events follows the Markov property, i.e.

$$P\{E_t = e \mid H_t = h, A_t = a\} = P\{E_t = e \mid X_t = i, A_t = a\}, \qquad e \in E_t(i, a).$$

Let $c_t(i, a, e)$ be the cost incurred by taking action $a$ at stage $t$ from state $i$ when event $e$ occurs, and let $p_t(e \mid i, a) = P\{E_t = e \mid X_t = i, A_t = a\}$ be the conditional probability of occurrence of event $e$ at stage $t$.
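The equivalence claimed above can be made explicit (a step left implicit in the text): the eventless transition probabilities are recovered by conditioning on the event,

$$p_{ijt}(a) = \sum_{e \in E_t(i,a)} p_t(e \mid i, a)\, p_{ijt}(a, e),$$

so every MDP with events corresponds to an ordinary MDP of the kind described in the previous subsections.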

In the finite horizon problem, the value function is defined as

$$v_t^\pi(i) = E_\pi\left[\sum_{s=t}^{T} c_s(X_s, A_s, E_s) \,\Big|\, X_t = i\right], \qquad i \in S_0, \; t = 0, 1, \ldots, T - 1.$$

Chapter III

Work Process

As stated before, the jMarkov project lacks a user manual from which any person can learn how to model with the program. When the software was coded, each of the designers wrote a document for each component that tried to explain the modeling process. But as user manuals, those documents have some problems.

They mostly focus on the package structure, that is, the software architecture. In them, the user can find detailed explanations of all the classes' code, as well as the relationships between the classes. Although this is important for a designer, it is unnecessary information for the user, who does not need to know the internal processes of the software. The documents also explain the modeling and solving algorithms used by the software. We believe this is an excessively academic approach. It is very important to understand which algorithms the software uses, but new users and beginners to the subject will find this unnecessary information. Finally, the examples provided in the documents consist of just the code. They need to be explained so users can fully understand what they have to do to work with jMarkov. The examples must be written for beginners, not knowledgeable users.

The purpose of the software is to help stochastic modeling become widespread, and 'widespread' does not mean within computer science or academic communities, but in industry. We need to make the manuals as simple as they can be, so that people with a minimum knowledge of programming and the basic concepts of stochastic modeling can use them to analyze small systems. We hope that this way, in time, more and more people will become interested in the software and will use it for more complex problems.

Thus we decided to create a user manual that helps users understand jMarkov and realize how easy it is to work with. This manual gives the steps for modeling with the software. It does not explain the internal workings; it only provides the information a user really needs to solve a problem. The explanation consists of a description of the classes the user will work with, as well as one example explaining the steps for working with the tool. Throughout the document we provide references to other documents written for users who want to learn about jMarkov's internal functions. We used the following process to write the manual:

1. Read the existing documents to see what had already been done toward the goal of writing a user manual. From those documents we adapted some information as mathematical background.

2. Read the code and the existing examples to understand the internal workings of the software and to determine which parts were important to explain and which were not.

3. Held meetings with the designers of the software to answer questions about it. During those meetings we confirmed the need for a user manual, since the only way to completely understand the software was by talking to its authors.

4. Solved small problems in each component to determine the best way to explain the modeling process with jMarkov.

5. Translated the modeling steps into simple, less technical words so that anyone would be able to understand them.

As a result, we realized that to understand how jMarkov works, people would need some essential programming knowledge, knowledge of the basic concepts of the structure, and the ability to learn the modeling process by following a series of steps. We wrote the user manual taking the architecture shown in Figure 2 into account.

Figure 2: User's Manual Architecture. Programming knowledge: object-oriented programming, abstract functions, extending classes. Structure knowledge: the classes and methods the final user needs. Modeling process: when and how to use each class and method.

Beyond knowledge of the basic concepts of Java, like conditionals and loops, a user must know what an abstract function is and how to extend a class. To understand the structure well enough, the user must know which classes and methods he or she must use and which ones are just for the software's internal use. The user only needs to learn about the classes he will be working with. For that reason we present the names of those classes with their respective descriptions. Finally, we present the modeling process the user must follow. It consists of a guide in which we explain when to use each class or method and how the user

should use them. This is followed by an example to better illustrate the point for the reader.

Chapter IV

User Manual

4.1 Programming Knowledge

Java is a high-level programming language created by Sun Microsystems [1]. It implements an object-oriented approach, the main idea of which can be explained as "divide and conquer." It assumes that every software program can be divided into elements or objects, which can be coded separately and then connected to the others to create a program. This approach is very useful when working with massive problems. Object-oriented programming is based on four key principles: abstraction, encapsulation, inheritance and polymorphism. An excellent explanation of OOP and the Java programming language can be found in [1].

The abstraction capability is the one we take the most advantage of in programming jMarkov. It consists of the creation of abstract functions and the extension of abstract classes. An abstract function is one in which only the input parameters and the output type are declared, without coding the function implementation. The modeler can use an abstract function to program the different algorithms, which work the same regardless of the different implementations the abstract elements can have.

Extending abstract classes consists of creating general classes that will later help code particular ones. In a general class we code the algorithms and the abstract functions. Later, when the user wants to model something, he just needs to extend an abstract class. By doing so, he must code the abstract functions in the extending class, so the model can compile and run. Thanks to this, the user does not need to worry about the code of the extended class; he only needs to be concerned with coding the few abstract functions.
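As a minimal illustration of this mechanism (our example, not a jMarkov class), consider an abstract class whose concrete method relies on an abstract one that each subclass must supply:

    // Our toy example of the abstraction mechanism described above.
    public abstract class Shape {
        // Abstract function: only the signature is declared here.
        public abstract double area();

        // Concrete algorithm that works for any subclass,
        // whatever its implementation of area() is.
        public String describe() {
            return "Shape with area " + area();
        }
    }

    class Circle extends Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        @Override
        public double area() { return Math.PI * radius * radius; }
    }

This is exactly the pattern jMarkov follows: its solvers are written against abstract functions, and the user's extending class fills them in.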

4.2 Structure Description

In this section we discuss the classes that are part of the modeling process and that have a direct relationship with the user. The focus of this document is to give new users the key elements necessary to model with jMarkov and to give an idea of the software's capabilities. If the reader wants details about the solving packages or the architecture of the code in the software, please refer to the reference provided in each subsection. The following subsections describe each of the important packages, limiting the explanation to the definition of the classes the user needs in order to create a model.

4.2.1 jMarkov.basic

This package is a collection of classes designed to represent the most basic elements that interact in any kind of Markov chain or MDP. Figure 3 shows the names of the main classes in the package that the user should extend to model a problem.

The State class represents a state in a Markov chain or MDP. The user of the class should establish the coding convention and code the compareTo method.

Figure 3: The main classes of the basic package (jmarkov.basic): State, Event and Action, together with PropertiesState, PropertiesEvent and PropertiesAction.

In the compareTo method the user should give a rule for how the computer must order the states; in other words, when comparing two states, the system must be able to determine which is larger. This is done to create an organized data structure that facilitates internal searching. The coding convention means that the user should code the data structure where the states are going to be saved while coding the construction method in the extending class. That data structure, together with compareTo, must be established; a complete element order is needed since, for efficiency, jMarkov works with ordered sets.

The package also provides a class called PropertiesState, with a default order, to do general-problem modeling. It has all the methods implemented in the State class, and it orders the states by an array of integer-valued properties, so the user does not have to be concerned with the structure or the comparison. In conclusion, if the state can be represented with a vector of integers describing its properties, then it might be easier to extend the PropertiesState class rather than the State class.

The same situation holds for the Event class versus the PropertiesEvent class and the Action class versus the PropertiesAction class. The Event class allows the user to define the implementation of the events that can alter the states of the Markov chain. The Action class represents a single action in an MDP.
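The following sketch (ours; a real model would extend jmarkov.basic.State, whose remaining abstract methods are omitted here) shows the kind of total order compareTo must define, for a state described by a single integer property:

    // Our illustrative state with one property: the number of jobs present.
    public class SimpleState implements Comparable<SimpleState> {
        private final int jobs;

        public SimpleState(int jobs) {
            this.jobs = jobs;
        }

        // Total order required by jMarkov's ordered sets:
        // a state with fewer jobs is "smaller".
        @Override
        public int compareTo(SimpleState other) {
            return Integer.compare(this.jobs, other.jobs);
        }
    }

With PropertiesState this order comes for free, since states are compared by their integer property arrays.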

4.2.2 jMarkov and jQBD

In this package the user can find all the classes needed to model a Markov chain or a QBD. Figure 4 shows the two main classes: SimpleMarkovProcess and GeomProcess. For a more detailed explanation, see [45].

Figure 4: The main classes of the jMarkov modeling package

The SimpleMarkovProcess class is the more general one, and it is the class used to create a CTMC or DTMC with a finite state space. The class generates the model through the BuildRS algorithm (see Figure 5). This enables it to generate all states and the transition matrix from the behavior rules specified by the user. These rules are determined by implementing the methods active, dests and rate.

The GeomProcess class represents a continuous or discrete QBD process. This class extends the class SimpleMarkovProcess. The building algorithm uses the information stored about the dynamics of the process to explore the graph and build only the first three levels of the system. From this, extracting the matrices $B_{00}$, $B_{01}$, $B_{10}$, $A_0$, $A_1$ and $A_2$ is straightforward. Once these matrices are obtained, the stability condition is checked. If the system is found to be stable, the matrices $A_0$, $A_1$ and $A_2$ are passed to the solver, which takes care of computing the matrix $R$ and the steady-state probability vectors $\boldsymbol{\pi}_0$ and $\boldsymbol{\pi}_1$ using the formulas described above. The implemented solver uses the logarithmic reduction algorithm [27].

4.2.2.1 State space building algorithm

Transitions in a CTMC are triggered by the occurrence of events, such as arrivals and departures. The matrix $Q$ can be decomposed as $Q = \sum_{e \in \mathcal{E}} Q^{(e)}$, where $Q^{(e)}$

contains the transition rates associated with event $e$, and $\mathcal{E}$ is the set of all possible events that may occur. In large systems, it is not easy to know in advance how many states there are in the model. However, it is possible to determine which events can occur in every state, and the destination states produced by each transition when it occurs. jMarkov works based on this observation, using the BuildRS algorithm (shown in Figure 5) presented by Ciardo [46].

Figure 5: BuildRS algorithm

    S := ∅, U := {i0}, E given
    while U ≠ ∅ do
        remove a state i from U and add it to S
        for all e ∈ E do
            if active(i, e) then
                D := dests(i, e)
                for all j ∈ D do
                    if j ∉ S ∪ U then
                        U := U ∪ {j}
                    end if
                    Rij := Rij + rate(i, j, e)
                end for
            end if
        end for
    end while

The algorithm builds the state space and the transition rates by a deep exploration of the graph. It starts with an initial state $i_0$ and searches for all other states. At every instant, it keeps a set of unchecked states $U$ and the set of states $S$ that have already been checked. For every unchecked state the algorithm finds the possible destinations and, if they have not been found previously, they are added to the $U$ set. To do this, it first calls the function active, which determines whether an event can occur. If it does, the possible destination states are found by calling the function dests. The transition rate is determined by calling the function rate.
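The sketch below (our illustration with generic state and event types; the real jMarkov implementation uses its own data structures) shows the same exploration loop in Java:

    import java.util.ArrayDeque;
    import java.util.Collection;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Our sketch of BuildRS; active/dests/rate play the same roles as the
    // three methods the user codes when extending SimpleMarkovProcess.
    public class BuildRSSketch<S, E> {
        public interface Model<S, E> {
            boolean active(S i, E e);
            Set<S> dests(S i, E e);
            double rate(S i, S j, E e);
        }

        // Returns the rate matrix as a nested map: rates.get(i).get(j).
        public Map<S, Map<S, Double>> explore(S i0, Collection<E> events, Model<S, E> m) {
            Set<S> checked = new HashSet<>();        // S: already explored
            Deque<S> unchecked = new ArrayDeque<>(); // U: waiting to be explored
            unchecked.push(i0);
            Map<S, Map<S, Double>> rates = new HashMap<>();
            while (!unchecked.isEmpty()) {
                S i = unchecked.pop();
                if (!checked.add(i)) continue;       // skip states seen before
                for (E e : events) {
                    if (!m.active(i, e)) continue;
                    for (S j : m.dests(i, e)) {
                        if (!checked.contains(j) && !unchecked.contains(j))
                            unchecked.push(j);
                        rates.computeIfAbsent(i, k -> new HashMap<>())
                             .merge(j, m.rate(i, j, e), Double::sum);
                    }
                }
            }
            return rates;
        }
    }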

From this algorithm, we can see that a system is fully described once the states and events are defined and the functions active, dests and rate have been specified. As we will see, modeling a problem with jMarkov entails coding these three functions.

4.2.2.2 User Interface

The jMarkov and jQBD packages have a graphical interface where the results of the model are shown. Figure 6 shows the first view of the interface. Figure 7 shows the toolbar found in the interface. Finally, Figure 8 shows a close-up of the different views of the program.

Figure 6: The User Interface of jMarkov and jQBD

Figure 7: The Toolbar

Explanation of each view:

Figure 8: The Interface Views

• Browse: Provides a graphical visualization of the entire system. It shows the states, events and transition rates in tree form.

• States: Shows all the states of the system with their respective equilibrium probabilities.

• Rates: Shows the transition rates.

• MOPs: Shows the mean and the standard deviation of the system's measures of performance (MOPs). These measures are calculated by default in some cases, but in most cases the user has to code them.

• Events: Shows the system's events with the event occurrence rate. This rate indicates the expected number of occurrences of each event in a specific period.

• Output: Shows a summary of all the other views, making it easy to copy and paste the results into a text file. Any flexible output mechanism can be programmed from the code, bypassing the graphical interface, as in the sketch below.
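For instance, a model can be solved and its results printed without the GUI roughly as follows (a hedged sketch: QueueMM2dN is the example class built in Section 4.3, and the method names showGUI and printAll follow the jMarkov example programs; treat them as assumptions and check the API):

    public class RunModel {
        public static void main(String[] args) {
            // Parameters: lambda, mu1, mu2, alpha, N (illustrative values).
            QueueMM2dN model = new QueueMM2dN(5.0, 4.0, 3.0, 0.5, 10);
            model.showGUI();  // open the graphical interface (assumed name)
            model.printAll(); // print states, MOPs and rates (assumed name)
        }
    }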

4.2.3 jPhase

As explained in section 2.4, to completely represent a continuous phase-type (PH) distribution, a user only needs the generator matrix $Q$ and the vector of initial probabilities; to represent a discrete one, the transition probability matrix $P$ and the vector of initial probabilities. In jPhase, the user can represent the matrix and the vector in dense or sparse form. The dense form stores the entire matrix with all its zeros. This is useful for many applications where the number of phases is not large and memory is not a problem. The sparse form stores only the non-zero entries, using a compressed sparse row representation. These two types of representation are the reason why the structure of the package, shown in Figure 9, is made up of four modeling classes, the ones the user will usually manipulate. For a further explanation of the structure of the package, refer to [47].

Figure 9: The jPhase Main Classes: jphase contains DenseContPhaseVar, DenseDiscPhaseVar, SparseContPhaseVar and SparseDiscPhaseVar.

DenseContPhaseVar and DenseDiscPhaseVar are classes that represent continuous and discrete PH distributions with a dense representation. They have constructors for many simple distributions, such as the exponential or Erlang in the continuous case and the geometric or negative binomial in the discrete case. The SparseContPhaseVar and SparseDiscPhaseVar classes represent continuous and discrete PH distributions with a sparse representation.
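As a hedged sketch of how these classes are typically used (the factory and method names expo, Erlang, sum and expectedValue, and the import paths, are assumptions for illustration; consult [47] for the exact jPhase API):

    import jphase.ContPhaseVar;
    import jphase.DenseContPhaseVar;

    public class PhaseVarSketch {
        public static void main(String[] args) {
            // An exponential with rate 3.0, seen as a 1-phase PH variable
            // (assumed factory name).
            ContPhaseVar service = DenseContPhaseVar.expo(3.0);
            // An Erlang with 4 phases of rate 2.0 each (assumed factory name).
            ContPhaseVar setup = DenseContPhaseVar.Erlang(2.0, 4);
            // Closure under convolution: total time = setup + service
            // (assumed method name).
            ContPhaseVar total = setup.sum(service);
            System.out.println("Mean total time: " + total.expectedValue());
        }
    }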

4.2.3.1 jFitting

The jFitting module is a complement to jPhase that allows fitting a data set to a PH distribution. The different fitting algorithms can be classified into two major groups: maximum-likelihood methods and moment-matching techniques. Some algorithms, believed to be representative of each group, are coded in different classes in the package and described below. For a further explanation, refer to [47].

1. Moment Matching

• MomentsACPH2Fit: Implements acyclic continuous PH distributions of second order. This is for the continuous case [36].

• MomentsADPH2Fit: Implements acyclic discrete PH distributions of second order. This is for the discrete case [36].

• MomentsECCompleteFit: Erlang-Coxian distributions. The method matches the first three moments of any distribution to a subclass of phase-type distributions known as Erlang-Coxian distributions. This class implements the complete solution [37].

• MomentsECPositiveFit: Erlang-Coxian distributions. This class implements the positive solution [37].

• MomentsACPHFit: Acyclic PH distributions. The method matches the first three moments of any distribution to a subclass of phase-type distributions known as acyclic phase-type distributions [38].

2. Maximum Likelihood Estimation (MLE)

• EMPhaseFit: EM for general PH distributions. The method fits any distribution to the entire class of phase-type distributions [33].

• EMHyperExpoFit: EM for hyper-exponential distributions. The method fits heavy-tailed distributions to the class of hyper-exponential distributions [34].

• EMHyperErlangFit: EM for hyper-Erlang distributions. The method fits any distribution to a subclass of phase-type distributions known as hyper-Erlang distributions [35]. A usage sketch follows this list.
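As a hedged example of invoking one of these fitters (the constructor, the fit method and the import path are assumptions for illustration; see [47] for the exact jFitting API):

    import jphase.ContPhaseVar;
    import jphase.fit.EMHyperErlangFit;

    public class FitSketch {
        public static void main(String[] args) {
            // Illustrative data, e.g. observed service times.
            double[] data = {0.8, 1.2, 0.5, 2.3, 1.1, 0.9, 3.0, 0.7};
            // Assumed constructor: the fitter receives the data set.
            EMHyperErlangFit fitter = new EMHyperErlangFit(data);
            // Assumed method: fit a hyper-Erlang variable, here with up to
            // 4 phases.
            ContPhaseVar fitted = fitter.fit(4);
            if (fitted != null)
                System.out.println("Fitted mean: " + fitted.expectedValue());
        }
    }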

4.2.3.2 User Interface

Figure 10 shows the user interface of the jPhase package. The interface has three important views: the first shows the parameters of the PH distribution (the matrix and the initial vector); the other two show plots of the PDF and the CDF, respectively.

Figure 10: The jPhase Interface

Figure 11 shows the interface for creating a new PH distribution, with examples of the most common distributions already coded in the package. Figure 12 shows the options for choosing the fitting algorithm the user wants to use.

Figure 11: Creating a New PH Distribution

Figure 12: Choosing a Fitting Process

4.2.4 jMDP

This package contains the classes that represent all the different types of problems described before and represented in Figure 1. The names of those classes are shown in Figure 13. For a further explanation of the structure of the package, refer to [48]. As explained in section 2.5.4, to model an MDP with the jMDP package, the user can choose to use the model with or without events. For that reason there are two classes for each type of problem; one class implements events, and the other

class does not. Next we present a list of the names of the classes the user must use to solve each type of problem.

Figure 13: The jMDP Main Classes: the jmarkov.mdp package, split into deterministic (FiniteDP) and stochastic classes, the latter covering finite horizon problems (FiniteMDP, FiniteMDPev) and infinite horizon problems in discrete time (DTMDP, DTMDPev, DTMDPevA) and continuous time (CTMDP, CTMDPev, CTMDPevA).

• FiniteDP: Deterministic Finite Horizon Problems

• FiniteMDP: Finite Horizon Problems without Events

• FiniteMDPev: Finite Horizon Problems with Events

• DTMDP: Infinite Horizon Discrete Time Problems without Events

• DTMDPev: Infinite Horizon Discrete Time Problems with Events

• DTMDPevA: Infinite Horizon Discrete Time Problems with Events where Actions Depend on the Event

• CTMDP: Infinite Horizon Continuous Time Problems without Events

• CTMDPev: Infinite Horizon Continuous Time Problems with Events

• CTMDPevA: Infinite Horizon Continuous Time Problems with Events where Actions Depend on the Event

4.3 Modeling Process

In this section we explain the jMarkov package modeling process. We use a simple example for educational reasons, not because the program cannot handle complex examples (it was actually built for that), but because we think it is easier to understand the modeling process with an example for which the reader does not need to spend too great an effort to understand what the software is doing. Throughout the section we will add some features to the initial problem so we can show how the entire software works with the same example. Properly following the steps described in this section will allow the problem to be solved. In every step we also include the code used to solve the problem.

Consider a queuing system comprised of two parallel servers, each one with a service time that follows an exponential distribution, with rate $\mu_1$ for server 1 and $\mu_2$ for server 2. The entities arrive at a single queue and, as both servers can attend all clients, an arriving entity goes to the idle server. If both servers are idle, there is a probability $\alpha$ that the entity will go to server 1 instead of server 2. The time between entity arrivals at the queue follows an exponential distribution with rate $\lambda$. The queue has a finite capacity $N$. The problem represents a very common queuing model, more specifically an $M/M/2/N$ system.

4.3.1 jMarkov.basic

First we need to identify the basic components of the model. As this is a Markov chain, those are states and events.

1. Define the states of the Markov chain. We decided that the states in the

problem would have three dimensions. We identified each state with the triplet $(X, Y, Z)$, where $X$ and $Y$ are the status of the servers; they take the value 0 if the server is idle or 1 if it is busy. $Z$ is the number in queue, which can take the values $(0, 1, \ldots, N - 2)$.

• We extend the class PropertiesState. The explanation of this class can be found in section 4.2.1. Note that MM2dNState is the name of the class and is chosen by the user.

    class MM2dNState extends PropertiesState {
        ...
    }

• Code the construction method; here the user must define the state space, in our case the triplet $(X, Y, Z)$.

    MM2dNState(int x, int y, int z) {
        super(3);
        this.prop[0] = x;
        this.prop[1] = y;
        this.prop[2] = z;
    }

From the last method we can make some remarks. The super(3) call is normally used when extending classes. Our class extends PropertiesState, which has its own constructor that determines the coding convention (see 4.2.1). The super(3) tells the computer to use the constructor of the extended or super class (PropertiesState), and the number 3 conveys that the state space has 3 dimensions or properties. PropertiesState organizes the states in a vector of integers, so we need to clarify in which position of the vector the value of each property is going to be saved. The line this.prop[0] = x does that.

• Code the measures of performance (MOPs) we want to calculate for each state. For our example, we find the utilization of each server, the average queue length and the average number of entities in the system. Note that we separate the method into several sub-methods. This is not necessary, but it is good programming practice to create cleaner code.

    public void computeMOPs(MarkovProcess mp) {
        setMOP(mp, "Utilization Server A", getStatus1());
        setMOP(mp, "Utilization Server B", getStatus2());
        setMOP(mp, "Queue Length", getQSize());
        setMOP(mp, "Number in System", getStatus1() + getStatus2() + getQSize());
    }

    public int getStatus1() {
        return prop[0];
    }

    public int getStatus2() {
        return prop[1];
    }

    public int getQSize() {
        return prop[2];
    }

Note that the method receives a MarkovProcess as a parameter. This is the class that represents the Markov chain, which will be explained later. The line setMOP(...) is used to set any MOP the user needs to find. It receives as parameters the process (mp), a string with the label of the MOP ("Utilization Server A") and the value of the MOP in the state (getStatus1()).

• Finally, for the interface, the user can program the methods label() and description(), where he or she can enter a description of the state space and the labels for every state, which will be shown on the interface.

    public String label() {
        String stg = "Empty system";
        if ((getStatus1() == 1) && (getStatus2() == 0))
            stg = "Server 1 busy / Server 2 idle / Number in line 0";
        if ((getStatus2() == 1) && (getStatus1() == 0))
            stg = "Server 1 idle / Server 2 busy / Number in line 0";
        if ((getStatus2() == 1) && (getStatus1() == 1))
            stg = "Server 1 busy / Server 2 busy / Number in line: " + (getQSize());
        return stg;
    }

    public String description() {
        String stg = "";
        stg += "Server 1 is " + ((getStatus1() == 1) ? "busy" : "idle");
        stg += ". Server 2 is " + ((getStatus2() == 1) ? "busy" : "idle");
        stg += ". There are " + getQSize() + " customers waiting in queue.";
        return stg;
    }

2. Define the events. Each event represents a happening that can change the state of the system. In our example the events are the arrival of a new entity to the system and the departure of an entity from the system.

• We extend the class Event. The explanation of this class can be found in section 4.2.1. Note that QMM2dNEvent is the name of the class, and it is chosen by the user.

    class QMM2dNEvent extends Event {
        ...
    }

• Create constants that represent each event with an integer. This is done so the computer can distinguish between the events. Note that we separate the two events into more specific ones. This is done to facilitate the

coding of the model.

    public enum Type {
        ARRIVAL, ARRIVAL1, ARRIVAL2, DEPARTURE1, DEPARTURE2;
    }

The event ARRIVAL represents a general arrival to the system. ARRIVAL1 represents an arrival to server 1 and is only used when the system is idle. ARRIVAL2 represents an arrival to server 2 and is only used when the system is idle. DEPARTURE1 represents a departure from server 1 and DEPARTURE2 a departure from server 2. This enum can be replaced by declaring a constant for each event, as shown below.

    final static int ARRIVAL = 0;
    final static int ARRIVAL1 = 1;
    final static int ARRIVAL2 = 2;
    final static int DEPARTURE1 = 3;
    final static int DEPARTURE2 = 4;

• Code the constructor of the class. Here we also need to declare an attribute that represents the type of the event.

    private Type type;

    public QMM2dNEvent(Type nType) {
        super();
        type = nType;
    }

Note that in the declaration of the attribute, the type of the attribute is Type, the name of the enum already shown. If we were using the constant

form, the type of the attribute would be an integer and would look like this: private int type;.

• Now we need to code a method that returns a list of all events. This will be necessary when coding the actual Markov process. Note that EventsSet is a data structure also provided with the packages, a special structure that works like a list.

    public static EventsSet<QMM2dNEvent> getAllEvents() {
        EventsSet<QMM2dNEvent> evSet = new EventsSet<QMM2dNEvent>();
        for (Type type : Type.values())
            evSet.add(new QMM2dNEvent(type));
        return evSet;
    }

• Finally, as with the state, we can choose to code the label method for the events. It is a requirement for the graphic interface.

    public String label() {
        String stg = "";
        switch (type) {
        case ARRIVAL:
            stg = "Arrival to the system";
            break;
        case ARRIVAL1:
            stg = "Arrival to server 1 (only when both idle)";
            break;
        case ARRIVAL2:
            stg = "Arrival to server 2 (only when both idle)";
            break;
        case DEPARTURE1:
            stg = "Departure from server 1";
            break;
        case DEPARTURE2:

            stg = "Departure from server 2";
            break;
        }
        return stg;
    }

4.3.2 jMarkov

Now that we have coded the basic components, we are going to program the actual process.

1. We need to extend the class that represents a finite state Markov chain. As shown in 4.2.2, this class is SimpleMarkovProcess. Note that, as before, the name of our class, QueueMM2dN, is defined by the user. After extending the class we need to state the names of the classes that represent the state and the event in the code. In our example the state is MM2dNState and the event is QMM2dNEvent.

    public class QueueMM2dN extends SimpleMarkovProcess<MM2dNState, QMM2dNEvent> {
        ...
    }

2. We need to code the constructor of the process. We need to create an attribute for each initial parameter, and we must initialize them in the method.

    private double lambda;
    private double mu1, mu2, alpha;
    private int N;

    public QueueMM2dN(double nLambda, double nMu1, double nMu2, double nAlpha, int nN) {
        super(new MM2dNState(0, 0, 0), QMM2dNEvent.getAllEvents());
        lambda = nLambda;
        mu1 = nMu1;
        mu2 = nMu2;
        alpha = nAlpha;

3. We need to code the three methods required for the creation of the process, which are also explained in the algorithm of 4.2.2.1. They are the active method, the dests method and the rate method.

•   Code the active method. Here we must define the set of feasible events that can occur when the process is in state i. To do so, we specify which conditions the state i has to meet for the event e to occur. For example, if the system is full there cannot be more arrivals, and a server cannot attend two entities at the same time.

    public boolean active(MM2dNState i, QMM2dNEvent e) {
        boolean result = false;
        switch (e.getType()) {
        case ARRIVAL:
            result = ((i.getQSize() + i.getStatus1() + i.getStatus2() != 0)
                    && (i.getQSize() < N - 2));
            break;
        case ARRIVAL1:
            result = i.isEmpty();
            break;
        case ARRIVAL2:
            result = i.isEmpty();
            break;
        case DEPARTURE1:
            result = (i.getStatus1() > 0);
            break;
        case DEPARTURE2:
            result = (i.getStatus2() > 0);
            break;
        }
        return result;
    }

Note that ARRIVAL is active only when the system is not idle (otherwise ARRIVAL1 or ARRIVAL2 applies) and the queue, whose capacity is N - 2, is not full.

•   Code the dests method. Here the user must define the set of reachable states from state i, given that event e occurs. For our example, we need to specify that when an arrival occurs it is attended by an idle server or it joins the queue, and that when a departure occurs, the next entity in line (if any) is attended or else the server becomes idle. In other words, we need to define the new state j to which the system will go if it is in state i and the event e happens.

    public States<MM2dNState> dests(MM2dNState i, QMM2dNEvent e) {
        int newx = i.getStatus1();
        int newy = i.getStatus2();
        int newz = i.getQSize();
        switch (e.getType()) {
        case ARRIVAL:
            if (i.getStatus1() == 0) {
                newx = 1;
            } else if (i.getStatus2() == 0) {
                newy = 1;
            } else {
                newz = i.getQSize() + 1;
            }
            break;
        case ARRIVAL1:
            newx = 1;
            break;
        case ARRIVAL2:
            newy = 1;
            break;
        case DEPARTURE1:
            if (i.getQSize() != 0) {
                newx = 1;
                newz = i.getQSize() - 1;
            } else {
                newx = 0;
            }
            break;
        case DEPARTURE2:
            if (i.getQSize() != 0) {
                newy = 1;
                newz = i.getQSize() - 1;
            } else {
                newy = 0;
            }
            break;
        }
        return new StatesSet<MM2dNState>(new MM2dNState(newx, newy, newz));
    }

Note that in the code we first initialize each property of the new state with its value in state i. Then comes a conditional block in which we code how each property changes when the event e occurs. Finally, we return the state j with the new values of the properties.

•   Code the rate method. Here we define the transition rates between states.

    public double rate(MM2dNState i, MM2dNState j, QMM2dNEvent e) {
        double res = 0;
        switch (e.getType()) {
        case ARRIVAL:
            res = lambda;
            break;
        case ARRIVAL1:
            res = lambda * alpha;
            break;
        case ARRIVAL2:
            res = lambda * (1 - alpha);
            break;
        case DEPARTURE1:
            res = mu1;
            break;
        case DEPARTURE2:
            res = mu2;
            break;
        }
        return res;
    }

4. The user can choose to code the description method, which describes the system; it is used only for the graphic interface.

    public String description() {
        return "M/M/2/N SYSTEM: Queueing system with two servers (1, 2), "
                + "with rates " + mu1 + " and " + mu2
                + ". Arrivals are Poisson with rate " + lambda
                + " and the maximum number in the system (N) is " + N;
    }

5. Finally, we need to code the main method of the problem, which is the one the computer runs when executing the package. In this method we initialize the problem; this is where the user must define the values of the initial parameters (µ1, µ2, λ, N and α).

    public static void main(String[] a) {
        double lda = 5;
        double mu1 = 3;
        double mu2 = 2;
        double alpha = 0.6;
        int N = 10;
        QueueMM2dN theQueue = new QueueMM2dN(lda, mu1, mu2, alpha, N);
        theQueue.showGUI();
        theQueue.printAll();
    }

4.3.2.1 Results

In this part we show the results the software reports for the problem. We will skip them in the following sections, since our intention in showing them is to demonstrate the utility of the software, and showing the same result format twice would be redundant.

    M/M/2/N SYSTEM: Queueing system with two servers (1, 2), with rates 3.0 and 2.0.
    Arrivals are Poisson with rate 5.0 and the maximum number in the system (N) is 10.
    System has 12 states.

    EQUILIBRIUM PROBABILITIES
    STATE                                         PROBAB
    Empty                                         0.04639
    Server 1 idle / Server 2 busy / 0 in line     0.05412
    Server 1 busy / Server 2 idle / 0 in line     0.04124
    Server 1 busy / Server 2 busy / 0 in line     0.09536
    Server 1 busy / Server 2 busy / 1 in line     0.09536
    Server 1 busy / Server 2 busy / 2 in line     0.09536
    Server 1 busy / Server 2 busy / 3 in line     0.09536
    Server 1 busy / Server 2 busy / 4 in line     0.09536
    Server 1 busy / Server 2 busy / 5 in line     0.09536
    Server 1 busy / Server 2 busy / 6 in line     0.09536
    Server 1 busy / Server 2 busy / 7 in line     0.09536
    Server 1 busy / Server 2 busy / 8 in line     0.09536

    MEASURES OF PERFORMANCE
    NAME                        MEAN       SDEV
    Utilization Server 1        0.89948    0.30069
    Utilization Server 2        0.91237    0.28275
    Queue Length                3.43299    2.76915
    Number in System            5.24485    3.03406

    EVENTS OCCURRENCE RATES
    NAME                                        MEAN RATE
    Arrival to the system                       4.29124
    Arrival to server 1, (Only both Idle)       0.13918
    Arrival to server 2, (Only both Idle)       0.09278
    Departure from server 1                     2.69845
    Departure from server 2                     1.82474

4.3.3 jQBD

This package models infinite Markov chains, so it is similar in many ways to the one explained before. Accordingly, here we only explain the differences between the two coding procedures.

Consider the last example but with the following changes. The queue is now infinite, and the servers are not in parallel but in series; both cannot be busy at the same time. In other words, until an entity has finished its process with server 1 and then with server 2, the next entity cannot start its own process.

The basic components are almost the same. The state class becomes the duplet (X, Y): we have to delete the property Z, because the number of entities in the queue is now unbounded, so keeping track of it within a finite state description is impossible. The change in the events is the deletion of ARRIVAL2, because the servers are now in series, so every entity arrives first at server 1 and only then moves to server 2. The rest of the coding stays the same; a sketch of the resulting state class is shown below.
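For concreteness, here is a minimal sketch of the modified state class. It assumes the same PropertiesState-based pattern used for MM2dNState in section 4.3.1, so the prop array and the call super(2) follow that pattern rather than anything shown in this section; only the accessor names getStatus1 and getStatus2 are taken from the code below.

    // Sketch only: the state is now the duplet (X, Y), with no
    // queue-length property Z.
    public class MM2State extends PropertiesState {

        public MM2State(int x, int y) {
            super(2);        // two properties: the status of each server
            prop[0] = x;     // X: status of server 1 (0 idle, 1 busy)
            prop[1] = y;     // Y: status of server 2 (0 idle, 1 busy)
        }

        public int getStatus1() {
            return prop[0];
        }

        public int getStatus2() {
            return prop[1];
        }
    }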

The differences in the process of modeling a QBD are not great. The methods are the same; the differences lie only in the extended class. As the queue is infinite, we have to work with what we call the relative level. Recall from section 2.3 that the level is unbounded, so the relative level only indicates whether the system goes one level up, goes one level down, or stays at the same level; this last case means a change in the phase of the system. Next we show the problem's code and explain it in detail.

•   The class that should be extended is called GeomProcess. This class represents a continuous-time or discrete-time quasi-birth-death process; it extends the class SimpleMarkovProcess and generates the G matrix through the logarithmic reduction algorithm.

    public class QueueMM2 extends GeomProcess<MM2State, QMM2Event> {...}

•   We present the active method next. Here we must define the set of feasible events that can occur while the process is in state i and absolute level l. The absolute level is the actual level the system is in, and the relative level can be understood as the change in the absolute level.

    public boolean active(MM2State i, int l, QMM2Event e) {
        boolean result = false;
        switch (e.getType()) {
        case ARRIVAL:
            result = (l != 0);
            break;
        case ARRIVAL1:
            result = (l == 0);
            break;
        case DEPARTURE1:
            result = (i.getStatus1() == 1);
            break;
        case DEPARTURE2:
            result = (i.getStatus2() == 1);
            break;
        }
        return result;
    }

Note that we now use the absolute level to define which events are active in every case. For example, for the event ARRIVAL to occur, the system cannot be empty, so the level has to be greater than 0; however, for the event ARRIVAL1 (the arrival when the system is idle) to occur, the system has to be empty, so the absolute level has to be 0.

•   We present the dests method next. Here we must define the set of reachable states from state i, given that event e occurs while we are in absolute level l.

    public GeomRelState<MM2State>[] dests(MM2State i, int l, QMM2Event e) {
        int newx = i.getStatus1();
        int newy = i.getStatus2();
        int rLevel = 0;
        switch (e.getType()) {
        case ARRIVAL:
            rLevel = 1;
            break;
        case ARRIVAL1:
            newx = 1;
            rLevel = 1;
            break;
        case DEPARTURE1:
            newy = 1;
            newx = 0;
            break;
        case DEPARTURE2:
            if (l == 1) {
                newy = 0;
                newx = 0;
            } else {
                rLevel = -1;
                newy = 0;
                newx = 1;
            }
            break;
        }
        MM2State newSubState = new MM2State(newx, newy);
        GeomRelState<MM2State> s;
        if (e.getType() == Type.DEPARTURE2 && l == 1) {
            s = new GeomRelState<MM2State>(newSubState);
        } else {
            s = new GeomRelState<MM2State>(newSubState, rLevel);
        }
        StatesSet<GeomRelState<MM2State>> statesSet =
                new StatesSet<GeomRelState<MM2State>>(s);
        return statesSet.toStateArray();
    }

Note that in the method we use the relative level (rLevel) to define the next state j. Also note that, as with the dests method coded in jMarkov, the method has to return the new state, but in this case the process is longer. It goes as follows. First, we create the new state j as a GeomRelState, which receives as parameters the new sub-state and, when the level changes, the relative level. Then we use a data structure provided in the package, called StatesSet, whose function is to store the states. Finally, we call the method toStateArray on that data structure. This process can appear difficult, but it is actually quite mechanical: whatever the problem, it is always done the same way.

We do not show the other methods here, because the coding is the same as in the last section.
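Because that wrap-and-return step recurs unchanged in every dests method, it can be isolated in a small helper. The sketch below is illustrative only: the helper name wrapDest and the flag atBoundary are ours, not part of the package; the calls themselves are exactly those used above.

    // Hypothetical helper capturing the recurring pattern: wrap a sub-state,
    // with or without a level change, and return it as an array.
    static GeomRelState<MM2State>[] wrapDest(MM2State subState, int rLevel,
            boolean atBoundary) {
        GeomRelState<MM2State> s = atBoundary
                ? new GeomRelState<MM2State>(subState)          // no level change
                : new GeomRelState<MM2State>(subState, rLevel); // move rLevel levels
        StatesSet<GeomRelState<MM2State>> set =
                new StatesSet<GeomRelState<MM2State>>(s);
        return set.toStateArray();
    }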

4.3.4 jPhase

This subsection has two parts: first, an explanation of modeling with the jPhase package, and second, a description of how to connect this package to the previous two, especially jQBD. The subsection is divided this way because it is important to show the entire functionality of this package and how useful it becomes when used together with the other packages.

4.3.4.1 Modeling with jPhase

The package's main purpose is the generation of PH distributions, meaning that the package returns the matrix and the vector needed to define any kind of PH distribution. The package also calculates the cumulative distribution function (CDF) and the probability density function (PDF). This package works in four ways:

1. The most used PH distributions are already coded in the package, so a user only needs to call them as if calling a normal method. The continuous PH distributions that are coded are the Erlang, exponential, hyper-exponential, hyper-Erlang, Coxian and Erlang-Coxian distributions; the discrete PH distributions that are coded are the geometric and negative binomial distributions.

For example, suppose we want to find the probability that a random variable that follows an Erlang distribution, with λ = 5 and 2 phases, takes a value less than 2. Note that in this case we print the result to the console, but it can also be shown in the interface; we avoid the interface here because it would keep us from seeing the code.

    private void example0() {
        System.out.println("EXAMPLE 0");
        ContPhaseVar v1 = DenseContPhaseVar.Erlang(5, 2);
        System.out.println("P( v1 <= 2.0 ):\t" + v1.cdf(2.0));
    }
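As a small companion sketch to this example, the moments and the density can be queried in the same style. The call moment(k) is used in Example 4 further below; the pdf method is assumed here to mirror cdf, so treat this as a sketch rather than package documentation.

    private void example0b() {
        ContPhaseVar v1 = DenseContPhaseVar.Erlang(5, 2);
        // First moment (the mean), queried as in Example 4 below.
        System.out.println("E[ v1 ]:\t" + v1.moment(1));
        // Density at 2.0; a pdf method mirroring cdf is assumed here.
        System.out.println("f( 2.0 ):\t" + v1.pdf(2.0));
    }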

2. Another way of generating a PH distribution is by using the closure properties. These properties guarantee that the result of operating on two PH distributions is also a PH distribution. The closure properties coded in the package are listed below, each preceded by its name in the code.

•   sum(X): convolution of the original distribution and X.

•   sumGeom(P): convolution of a geometric number (with parameter P) of i.i.d. PH distributions, each distributed as the original one.

•   sumPH(P): convolution of a discrete PH(P) number of i.i.d. PH distributions.

•   mix(P, X): convex mixture of the original distribution (with weight P) and X.

•   min(X): minimum of the original variable and X.

•   max(X): maximum of the original variable and X.

For example, suppose we have two PH distributions, the same Erlang as before and an exponential distribution with λ = 3, and we want to find the PH distribution generated by the sum of the two. Note that we use the method toString, which returns the matrix and the vector that define the new PH distribution.

    private void example1() {
        System.out.println("EXAMPLE 1");
        ContPhaseVar v1 = DenseContPhaseVar.expo(3);
        ContPhaseVar v2 = DenseContPhaseVar.Erlang(5, 2);
        ContPhaseVar v3 = v1.sum(v2);
        System.out.println("v3:\n" + v3.toString());
    }
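The remaining operations are called in the same way. A sketch for the minimum and the mixture follows, using the signatures exactly as listed above (min(X) and mix(P, X)); the argument order of mix is taken from that list, so treat it as an assumption rather than confirmed API.

    private void example1b() {
        ContPhaseVar v1 = DenseContPhaseVar.expo(3);
        ContPhaseVar v2 = DenseContPhaseVar.Erlang(5, 2);
        // Minimum of the two variables, as listed: min(X).
        ContPhaseVar vMin = v1.min(v2);
        // Convex mixture, as listed: mix(P, X); v1 is chosen with weight 0.3.
        ContPhaseVar vMix = v1.mix(0.3, v2);
        System.out.println("vMin:\n" + vMin.toString());
        System.out.println("vMix:\n" + vMix.toString());
    }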

3. The third way applies when the user knows the matrix and the vector that completely define a PH distribution. In that case, the user chooses the class depending on whether the PH distribution is continuous or discrete, and then plugs in the data. Note that we are using the classes DenseMatrix and DenseVector, and that the data can also be supplied in sparse form, using the SparseMatrix and SparseVector classes. The method waitingQ used below returns a new PH variable; here it builds, from v1, the waiting-time distribution of a queue with utilization rho.

    private void example2() {
        System.out.println("EXAMPLE 2");
        DenseMatrix A = new DenseMatrix(new double[][] {
                { -4, 2, 1 }, { 1, -3, 1 }, { 2, 1, -5 } });
        DenseVector alpha = new DenseVector(new double[] { 0.1, 0.2, 0.2 });
        PhaseVar v1 = new DenseContPhaseVar(alpha, A);
        double rho = 0.5;
        PhaseVar v2 = v1.waitingQ(rho);
        System.out.println("v2:\n" + v2.toString());
    }

4. Finally, we can generate the PH distribution by using the jFitting module. As previously explained, its algorithms fit a set of data or a distribution to a PH distribution. The moment-matching algorithms receive the moments of the distribution or data set to be fitted, while the maximum-likelihood algorithms receive only a set of data. If the user wants to fit a distribution with a maximum-likelihood algorithm, he or she needs to generate random numbers that follow the distribution and use them as the data set.

The next example fits a set of data, read from a plain file, to a hyper-Erlang distribution using the EM algorithm. The output of the fitting is shown under the code.

    private void example3() {
        System.out.println("EXAMPLE 3");
        double[] data = readTextFile("examples/jphase/W2.txt");
        EMHyperErlangFit fitter = new EMHyperErlangFit(data);
        ContPhaseVar v1 = fitter.fit(4);
        if (v1 != null) {
            System.out.println("v1:\n" + v1.toString());
            System.out.println("logLH:\t" + fitter.getLogLikelihood());
        }
    }

    Phase-Type Distribution
    Number of Phases: 4
    Vector:
    0.2203    0.4917    0.2054    0.0836
    Matrix:
    -0.1522     0.0000     0.0000       0.0000
     0.0000    -0.9164     0.0000       0.0000
     0.0000     0.0000    -9.1779       0.0000
     0.0000     0.0000     0.0000    -233.1610

This example fits an acyclic continuous PH distribution to the first three moments of a data set; here the three moments 2, 6 and 25 are passed directly to the constructor. The output of the fitting is shown under the code.

    private void example4() {
        System.out.println("EXAMPLE 4");
        MomentsACPHFit fitter = new MomentsACPHFit(2, 6, 25);
        ContPhaseVar v1 = fitter.fit();
        System.out.println("v1:\n" + v1.toString());
        System.out.println("m1:\t" + v1.moment(1));
        System.out.println("m2:\t" + v1.moment(2));
        System.out.println("m3:\t" + v1.moment(3));
    }

    EXAMPLE 4
    Second moment 1.5 representable with 2 phases
    Third moment 2.0833333333333335 can not be representable with 2 phases

4.3.4.2 Connecting jPhase with jQBD

The ability to connect the jPhase module with the jQBD module is very helpful when working with real-life problems: random variables in real problems do not always follow exponential distributions, and PH distributions let a Markovian model capture this more general behavior. A minimal sketch of the resulting workflow closes this part.
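The sketch below only assembles calls that already appear in the examples above (EMHyperErlangFit, fit and moment); the data file name is hypothetical. The idea is to fit measured service times to a PH variable and sanity-check its first moment before plugging the variable into a QBD model.

    private void fitForQBD() {
        // Sketch only: the file name is a placeholder for real measurements.
        double[] data = readTextFile("examples/jphase/serviceTimes.txt");
        EMHyperErlangFit fitter = new EMHyperErlangFit(data);
        ContPhaseVar serv = fitter.fit(4);   // fit with 4 phases, as in Example 3
        if (serv != null) {
            // Compare this mean against the sample mean of the data.
            System.out.println("Fitted mean:\t" + serv.moment(1));
            // serv now carries the matrix and vector a QBD model needs.
        }
    }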
