An adaptive model for specification of distributed systems

Figure 7 shows the new behavior WelcomeScene after running the "before" adaptive actions. The action Shw_Button changed so as to present the last scene that a student used and the last class that the student was taking. The action Shw_Trafficlht changed to show which modules the student has finished, has not started, or is currently studying. The lst_ variables represent this situation: the value 1 (green) indicates that the student finished the module, the value 2 (yellow) indicates that the student is working on the module, and the value 0 (red) indicates that the student has not started it. It can be observed in this behavior that the "before" adaptive actions were used to represent the changes in the behavior of this scene.
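A minimal sketch of how the module-status codes described above could be mapped to the traffic-light colours; the function and variable names (traffic_light, module names) are illustrative assumptions, not taken from the original model.

```python
# Hypothetical mapping of module-status codes to traffic-light colours,
# following the convention described above: 1 = finished (green),
# 2 = in progress (yellow), 0 = not started (red).
STATUS_COLOURS = {1: "green", 2: "yellow", 0: "red"}

def traffic_light(module_status: dict) -> dict:
    """Return the colour to display for each module of a student."""
    return {module: STATUS_COLOURS[code] for module, code in module_status.items()}

# Example: a student who finished module 1, is working on module 2
# and has not started module 3.
print(traffic_light({"module1": 1, "module2": 2, "module3": 0}))
# {'module1': 'green', 'module2': 'yellow', 'module3': 'red'}
```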

Towards Low Latency Model Oriented Distributed Systems Management

WS use XML dialects for both interface definition (WSDL) and transport (SOAP). It may seem that using XML dialects would promote synergy, but the use of XML as the "envelope" of the message and as its representation is orthogonal at best. In practice it is worse, since the message must either be sent as an attachment (which implies Base64 transformation) or have its XML special characters encoded as character entities to avoid being parsed along with the XML elements of the envelope. Additionally, two XML parsings (and the corresponding encodings) must be done, adding to the message-passing latency. In the foreseeable future, there is no support in view for a platform-independent parsed XML representation in WS.
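A small sketch of the two workarounds mentioned above for embedding an XML payload inside a SOAP-style XML envelope; the payload and envelope contents are made up for illustration.

```python
import base64
from xml.sax.saxutils import escape

payload = '<order id="42"><item qty="3">bolts</item></order>'

# Option 1: escape the XML special characters so the payload travels as text
# inside the envelope (it must be unescaped and parsed again on arrival).
escaped = escape(payload)

# Option 2: ship the payload as a Base64 "attachment" (about 33% larger,
# plus an extra encode/decode step on each side).
attached = base64.b64encode(payload.encode("utf-8")).decode("ascii")

envelope = f"""<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><msg>{escaped}</msg></soap:Body>
</soap:Envelope>"""

print(envelope)
print("base64 size:", len(attached), "vs raw:", len(payload))
```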

ArchiMeDeS: A Service-Oriented Framework for Model-Driven Development of Software Architectures

The Service-Oriented Computing paradigm represents a step forward in comparison to other widely accepted paradigms such as object-orientation, aspect-orientation or component-based software engineering. Similarly to agent-based approaches, its principles refer to entities understood from a higher abstraction level, not constrained to the programming level but to computing entities with an existence that is independent from the platform or implementation technology. Since its origins it has demonstrated its capability for quick evolution and for spreading to other scopes. It is based on entities that work, in essence, in highly dynamic distributed environments in which the critical aspect does not lie in the concrete programming of the computing elements participating in the system. On the contrary, the SOC paradigm prioritizes their interrelation, the means of communication established, the message exchange patterns performed, the availability of the resources associated with each service, and the policies and restrictions applied in every task execution. Due to all these features, the importance of service-oriented systems relies greatly on their topological structure, built upon individually identifiable services, and on their behaviour and evolution during their entire lifecycle. This means that the architecture of service-oriented systems and applications represents one of the core aspects to be taken into account when trying to develop solutions based on the SOC paradigm.

Towards a Distributed Systems Model based on Multi-Agent Systems for Reproducing Self-properties

Autonomic Computing faces complexity with the idea of a computer system that adapts to changes without human intervention (Lalanda, Mccann, & Diaconescu, 2013). Autonomic Computing defines an autonomic system as a set of autonomic elements which are responsible for managing a particular element (White, Hanson, Whalley, Chess, & Kephart, 2004). An autonomic element manages its own state and its interactions with an environment (White et al., 2004). The environment consists of signals and messages from other elements and the external world (Kephart, Chess, Jeffrey, & David, 2003). From the Self-management goal, some self-* properties emerge: adapting to the addition or deletion of components (Self-configuration) (Kephart et al., 2003), detecting and recovering from failures without disruption to the system operation (Self-healing) (Nami & Bertels, 2007), finding improvements in the efficiency of a system (Self-optimization) (Lalanda et al., 2013), and anticipating and preventing threats (Self-protection) (Lalanda et al., 2013). Distributed systems and autonomic elements can be modelled as multi-agent systems (Lalanda et al., 2013). Agents are designed as autonomous adaptive entities that observe their internal and external states, act based on their local perceptions, and can model multiple feedback loops (Jun et al., 2004). Additionally, cooperative agents are able to control a computer network, just based on the agent's (local) interactions, or are able to model some autonomic elements. In this way, an agent has the ability to manage resources in a network by analysing the capabilities of each element and by defining it as a managed element (Guo, Gao, Zhu, & Zhang, 2006).
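A minimal sketch of an autonomic element as an agent with a monitor-analyse-act feedback loop, in the spirit of the description above; the class, method names and the load threshold are illustrative assumptions rather than part of the cited models.

```python
# Illustrative autonomic element: it observes its environment (signals from
# other elements), analyses its local state, and acts to keep a target
# property (here, a simple load threshold) -- one feedback-loop iteration.
class AutonomicElement:
    def __init__(self, name: str, max_load: float = 0.8):
        self.name = name
        self.max_load = max_load
        self.load = 0.0

    def monitor(self, signals: dict) -> None:
        # Perception of the environment: signals/messages from other elements.
        self.load = signals.get("load", self.load)

    def analyse(self) -> bool:
        # Local decision: does the managed element violate its goal?
        return self.load > self.max_load

    def act(self) -> str:
        # Self-optimization step: shed load (illustrative action).
        self.load *= 0.5
        return f"{self.name}: shed load, now {self.load:.2f}"

element = AutonomicElement("storage-manager")
for signals in ({"load": 0.6}, {"load": 0.95}):
    element.monitor(signals)
    if element.analyse():
        print(element.act())
```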

Integrated environment of systems automated engineering

The purpose of this series of papers is to identify, design, develop and integrate the components of an integrated environment for automated system development, starting from high-level-abstraction formal specifications. The aim is to achieve the generation of systems starting from only two models: the static or data-structure model, and the dynamic or functional model. The former is based on an adaptation of the conceptual pattern of entities and relationships, and the latter on the formal specification of operations in object relational algebra and on finite automaton theory. The maintenance of the systems generated by the tool would be carried out by operating directly on the static and dynamic models, with no need for either re-coding or reverse engineering.
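A minimal sketch, under assumed representations, of the two models the environment starts from: a static entity-relationship description and a dynamic model given as a finite automaton over allowed operations. All names and states are illustrative, not taken from the papers.

```python
# Static model: entities, attributes and relationships (illustrative schema).
static_model = {
    "entities": {"Customer": ["id", "name"], "Order": ["id", "total"]},
    "relationships": [("Customer", "places", "Order")],
}

# Dynamic model: a finite automaton whose transitions are operations
# on the entities (illustrative states and operations).
transitions = {
    ("empty", "create_order"): "open",
    ("open", "add_item"): "open",
    ("open", "confirm"): "confirmed",
}

def run(ops, state="empty"):
    """Replay a sequence of operations against the dynamic model."""
    for op in ops:
        state = transitions[(state, op)]
    return state

print(run(["create_order", "add_item", "confirm"]))  # confirmed
```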

Timed consistency: unifying model of consistency protocols in distributed systems

Neither SC nor CC captures real time explicitly, i.e., in the serializations of H or H_{i+w}, operations may appear out of order with respect to their effective times. For instance, in Figure 1 the serialization in part b) shows event r1(B)5 occurring before w2(A)8, but the latter event occurred at time 523, while the former occurred at time 680. In CC, each site can see concurrent write operations in different orders. On the other hand, LIN requires that the operations be observed in their real-time ordering. Ordering and time are two different aspects of consistency: one avoids conflicts between operations, the other addresses how quickly the effects of an operation are perceived by the rest of the system.
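A small sketch of the distinction drawn above: the same serialization can be acceptable for ordering-based criteria yet violate real-time (LIN-style) ordering. The operation records and timestamps are illustrative, loosely based on the r1(B)5 / w2(A)8 example.

```python
# Each operation carries the real time at which it took effect.
# LIN requires that the chosen serialization never place an operation
# before another one that took effect earlier in real time.
ops = [
    ("w2(A)8", 523),   # write of A took effect at t=523
    ("r1(B)5", 680),   # read of B took effect at t=680
]

def respects_real_time(serialization, effect_times):
    times = [effect_times[op] for op in serialization]
    return all(t1 <= t2 for t1, t2 in zip(times, times[1:]))

effect_times = dict(ops)
print(respects_real_time(["r1(B)5", "w2(A)8"], effect_times))  # False: out of real-time order
print(respects_real_time(["w2(A)8", "r1(B)5"], effect_times))  # True
```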

Application feature model for geometrical specification of assemblies

Another crucial element for inter-application integration arises from the need to establish common semantics for all the activities integrated in the domain. In particular, for the purpose of this work and with the aim of finding a common intention, it will be important to revise both the semantics and the relations that support some of the generic product models present in the literature (STEP (Standard for the Exchange of Product model data), MOKA (Methodology and tools Oriented to Knowledge-based engineering Applications), CPM (Core Product Model) and PPO (Product-Process-Organisation)), as well as other models that specialize the previous ones. This is the case of OAM (Open Assembly Model), an extension of CPM for mechanical assemblies, or the model proposed by Zha and Sriram [12] for embedded systems. These concepts and semantics have been considered when developing a model proposal valid for the purpose of this work: product specification and verification. This domain integrates both product specification and inspection process planning, specification and validation activities. These activities involve reasoning about specification chains established on product assembly architectures during product specification activities, and on inspection process assembly architectures during inspection activities.

One Tier Dataflow Programming Model for Hybrid Distributed and Shared Memory Systems

Modes are used to define mutually exclusive activities inside the transitions that dynamically reconfigure the network. A mode enables a subset of connections to input places or output places. For each mode, the user defines a function to process inputs, the associated places, and the default next mode that will be executed when the current one finishes. A transition with several modes changes its mode when all the tokens from the active mode have been processed. To detect that there are no more tokens remaining or pending to arrive at the input places, special signal tokens are used to inform of a mode change (mode-change signal). The change of mode in a transition automatically sends mode-change signals to all its output places. Thus, signals are propagated automatically across the network, flushing tokens produced in the previous mode, before changing each transition to the new mode. When a transition changes its mode, input and output places are reconfigured according to the new mode specification. An example of a network with modes can be seen in Fig. 1. The network has a transition (A) with two modes. In each mode, the transition will send tokens to a different destination, B or C.
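A condensed sketch of a transition with two modes and mode-change signals, following the mechanism described above; the token representation, the sentinel signal and the method names are assumptions for illustration, not the actual programming model API.

```python
# Illustrative transition with two modes. Regular tokens are processed by the
# active mode's function; a mode-change signal is propagated downstream and
# switches the transition to the declared next mode.
MODE_CHANGE = "mode-change"   # special signal token (illustrative sentinel)

class Transition:
    def __init__(self, modes, start):
        self.modes = modes            # mode name -> (process_fn, next_mode)
        self.active = start
        self.outputs = []             # tokens sent to output places

    def consume(self, token):
        if token == MODE_CHANGE:
            self.outputs.append(MODE_CHANGE)          # propagate the signal
            self.active = self.modes[self.active][1]  # switch to the next mode
        else:
            process, _ = self.modes[self.active]
            self.outputs.append(process(token))

t = Transition({"to_B": (lambda x: ("B", x), "to_C"),
                "to_C": (lambda x: ("C", x), "to_B")}, start="to_B")
for tok in [1, 2, MODE_CHANGE, 3]:
    t.consume(tok)
print(t.outputs)   # [('B', 1), ('B', 2), 'mode-change', ('C', 3)]
```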

A semantic description model for the development and evaluation of personalized learning environments based on distributed systems

commitment, cooperation and satisfaction with the group effort. On the other hand, collaborative technologies have been criticized for providing a reduced-cues environment ill-suited to emotional, expressive, or complex communications, and for providing an environment with longer decision times, anti-social flaming behaviors, and decreased social involvement. The critical literature acknowledges that collaborative technologies foster interactions among participants, but there are questions about whether the increased or enhanced interactions promote knowledge exchange and/or learning because of the absence of non-verbal cues, which limits the modes of communication among participants. Early work in assessing technology impacts in distributed learning environments measured student performance outcomes, such as grades on tests or grade point averages (GPAs). In other studies, a comparison was made between learning outcomes (students' grades and/or perceived learning) in a distributed environment and learning outcomes obtained in a traditional face-to-face environment. More recently, collaborative technology has been seen to impact both cognitive and perceived learning outcomes in distributed settings. Cognitive learning involves changes in an individual's mental models; that is, internal representations of knowledge elements comprising a domain as well as interrelationships among those knowledge elements. Perceived learning involves changes in a learner's perceptions of skill and knowledge levels before and after the learning experience. Approaches to the measurement of these two variables differ: cognitive learning measures are often outcome or performance related, while perceived learning measures are often process-related.

Towards a model of software development process for a physically distributed environment

In this context, a new class of problems arises in the software development process, involving the cultural differences and the physical distances between the participants in the process. In this way, the traditional problems related to the development process, strongly centered in the requirement specification and system analysis phases, become more critical. The way to solve these problems is centered on the adoption of more formal and well-defined specification languages and development processes. Verification and certification models of the maturity level of the software development process, such as CMM (Capability Maturity Model), have become more and more useful and important so that contracting organizations can have a minimal guarantee about the quality of the process used by the partner system development organizations or laboratories. The era of monolithic and informal development approaches is ending. System developers are aware of the existence of multiple ways of specifying and developing systems [17]. New technologies and types of information systems, such as expert systems, inference and rule engines, neural networks and genetic algorithms, require different development approaches.

Subjective quality assessment of an adaptive video streaming model

performance of video streaming systems, such as start-up latency, coding bit rate, frequency and severity of buffering events, and rendering quality [4]. Another example can be found in the work of Singh et al., where a no-reference metric, based on neural networks and considering video freezes and the quantization parameters, estimates the QoE of the users of HTTP adaptive video streaming [5]. These proposals try to evaluate the effects of these systems, taking into account the possible impact on the end user's perceived quality, in the simplest way: using quality metrics and not directly involving people in the evaluation. However, it is well known that the most reliable way to evaluate the QoE of the users of a certain application is by means of subjective tests [6]. In addition, some works have been proposed regarding the control of the adaptive streaming system considering QoE issues. For instance, the decision strategy of the client for requesting chunks according to the network conditions could be optimized using an estimation of the QoE, based on the impact of video freezes, the effects of frequency and amplitude of the quality changes, and spatial and temporal information of the video [7]. Considering the adaptation capability of Scalable Video Coding (SVC), Sieber et al. [8] presented a user-centric DASH/SVC streaming algorithm for mobile platforms that reduces the number of quality switches by aiming for a stable buffer level before increasing the SVC layers. Their approach outperforms the other available algorithms in terms of switching frequency and usage of the available resources; however, it does not take the amplitude of quality switches into account.
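A rough sketch of the buffer-driven idea attributed to Sieber et al. above: only add an SVC enhancement layer once the playback buffer is comfortably above a target level, and drop layers when it runs low. The thresholds and function names are illustrative assumptions, not the published algorithm.

```python
# Illustrative buffer-based layer selection: keep the buffer stable before
# switching up, to reduce the number of quality switches.
def next_layer(current_layer: int, buffer_s: float, max_layer: int,
               low: float = 5.0, high: float = 15.0) -> int:
    if buffer_s < low and current_layer > 0:
        return current_layer - 1        # protect playback: drop a layer
    if buffer_s > high and current_layer < max_layer:
        return current_layer + 1        # buffer is stable: add one layer
    return current_layer                # otherwise avoid a quality switch

print(next_layer(current_layer=1, buffer_s=3.2, max_layer=3))   # 0
print(next_layer(current_layer=1, buffer_s=20.0, max_layer=3))  # 2
print(next_layer(current_layer=1, buffer_s=10.0, max_layer=3))  # 1
```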

Modeling and specification of distributed timed systems

In Ortiz et al. (2010) and Ortiz et al. (2011), we proposed formal methods for the modeling and specification of RTS and DRTS based on RECA with distributed (a.k.a. independent) clocks and memory clocks, yielding DECA and RMECA. We showed that DECA and RMECA are determinizable, and thus closed under complementation, and that their respective language-inclusion problems are decidable (more exactly, PSPACE-complete). Additionally, in Ortiz et al. (2010) and Ortiz et al. (2011), we proposed extensions of the existing EventClockTL with distributed clocks and memory clocks to allow the specification of distributed and timed properties. RMECTL is PSPACE-complete for the satisfiability and validity problems if the indices of the clocks are encoded in unary, and EXPSPACE-complete in the binary case. DECTL is PSPACE-complete for the satisfiability and validity problems. DECA (DECTL) and RMECA (RMECTL) can be used to specify and model systems such as the Controller Area Network (CAN) (Monot et al., 2011), WirelessHART networks (De Biasi et al., 2008), and the ARINC-659 protocol (Gwaltney & Briscoe, 2006). This paper deals with formal methods that can be used to automate the analysis of complex RTS and DRTS, and in particular the analysis of the correctness of the system's behavior. Our contribution is to show the applicability of DECA, RMECA, RMECTL and DECTL over RTS and DRTS.

An adaptive robust optimization model for power systems planning with operational uncertainty

The need for sustainable power systems is driving the adoption of large shares of variable renewable energy. Due to this, there is an increasing necessity for new long-term planning models that can correctly assess the reserve capacity and flexibility requirements to manage significant levels of short-term operational uncertainty. Motivated by this key challenge, this work proposes an adaptive robust optimization model for the Generation and Transmission Expansion Planning Problem. The proposed model has a two-stage structure that separates investment and operational decisions, over a planning horizon with multiple periods. The key attribute of this model is the representation of daily operational uncertainty through the concept of representative days and an uncertainty set for demand and the availability of wind and solar power built over such days. Also, the model employs a DC-power flow representation for the transmission network. This modelling setup allows an effective representation of the reserve capacity and flexibility requirements of a system with large shares of renewable energy. To efficiently solve the problem, the column and constraint generation method is employed. Extensive computational experiments on a 20-bus representation of the Chilean power system over a 20-year horizon show the advantages of the proposed robust expansion planning model, compared to an approach based on deterministic representative days, due to an effective spatial placement of both variable resources and flexible resources.
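A schematic statement of the two-stage structure described above, in generic notation (not the paper's exact formulation): investment decisions are chosen before the operational uncertainty is revealed, and operational decisions adapt to it.

```latex
% Generic two-stage adaptive robust expansion model (illustrative notation):
% x = investment decisions, u = uncertain demand/renewable availability over
% the representative days, y = operational (dispatch) recourse decisions.
\begin{equation*}
\min_{x \in X} \; c^{\top}x \;+\; \max_{u \in \mathcal{U}} \; \min_{y \in Y(x,u)} d^{\top}y
\end{equation*}
% The column-and-constraint generation method solves such problems by iterating
% between a master problem in x and a subproblem that finds the worst-case u.
```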

Distributed resource allocation for contributory systems

A similar approach has been proposed by the Condor team under the term "flock of Condor" [14]. OurGrid has been in production since December 2004 and today aggregates computing resources from about 180 nodes shared by 12 peers. The platform has been limited to supporting bag-of-tasks applications. Local users always have priority for their tasks on their local resources; only the unused local resources are shared with other peers. Local jobs kill remote jobs if needed. To promote cooperation amongst peers and avoid free-riders [2], OurGrid uses a network of favours. Each peer maintains a matrix of the computing time that it has been granted by other peers. Then, if a processor is requested by more than one peer, it is allocated to the peer with the greatest favour. The favour computation is protected against malicious peers that, for example, would reset their state in order to gain more computing time. Other peers are discovered through a centralized discovery system. The network is a free-to-join Grid, where remote peers are not trusted. To address this issue, a sandbox mechanism is proposed (Sand-boxing Without A Name). Several resource allocation policies have been experimented with. The first one, Workqueue with Replication (WQR), simply sent a random task to the first free processor found. In version 2.0 of OurGrid, a new scheduler tries to avoid communication cost by introducing storage affinity: tasks are sent to the computing nodes that are closest to the data used. This algorithm tries to avoid the need for redundant information about tasks such as the expected completion time. The first algorithm was found to still be more efficient on some CPU-intensive workloads.
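A toy sketch of the network-of-favours rule described above: each peer tracks how much computing time it has been granted by others and, when a free processor is contended, grants it to the requester with the greatest favour balance. The names and values are illustrative; this is not OurGrid code.

```python
# Illustrative network of favours: favours[peer] is the computing time this
# site has received from that peer in the past (in seconds).
favours = {"peerA": 120.0, "peerB": 45.0, "peerC": 0.0}

def allocate(requesters):
    """Grant the free processor to the requester with the greatest favour."""
    return max(requesters, key=lambda p: favours.get(p, 0.0))

def record_donation(peer, seconds):
    """Update the favour balance after that peer donates computing time to us."""
    favours[peer] = favours.get(peer, 0.0) + seconds

print(allocate(["peerB", "peerC"]))  # peerB
record_donation("peerC", 300.0)
print(allocate(["peerB", "peerC"]))  # peerC
```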

Specification of a model for the study of management culture

The organizational culture is understood as a process of dependency relationships between variables external to the organization and variables internal to it. This is a scheme in which technology, structure, values, norms and needs determine the motivational variables (affiliation, power, utility), and these in turn affect the resulting variables (leadership, management, entrepreneurship, innovation, productivity, satisfaction, turnover, absenteeism, accidents, adaptation, reputation).

Peer-to-Peer Systems: The Present and the Future

The first generation of peer-to-peer file sharing used an unstructured approach. Napster [11] was one of these systems, with a strategy based on a metaserver and servers for looking up the location of data items; after that, the data was transferred directly between peers. Gnutella uses a flooding technique: a query is sent to all the peers in the system until a peer with the required data is found. Peer-to-peer networks do not rely on a specific infrastructure offering transport services. Based on TCP or HTTP connections, a peer-to-peer system forms an overlay structure focusing on content allocation and distribution. In standard client-server systems content is stored and provided by a central server. Peer-to-peer systems are highly decentralized: they locate a desired content at some peer and provide the corresponding IP address of that peer to the searching peer. The download of that content is initiated using a separate connection. In client-server …
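A small sketch of the flooding lookup mentioned above: a query spreads to neighbours hop by hop (bounded here by a TTL) until a peer holding the item answers. The overlay graph, item names and TTL handling are illustrative, not the Gnutella wire protocol.

```python
from collections import deque

# Illustrative unstructured overlay: peer -> neighbours, peer -> stored items.
neighbours = {"p1": ["p2", "p3"], "p2": ["p4"], "p3": ["p4"], "p4": []}
stored = {"p4": {"song.mp3"}}

def flood_query(start, item, ttl=3):
    """Breadth-first flooding with a hop limit; returns the peer holding the item."""
    queue, seen = deque([(start, ttl)]), {start}
    while queue:
        peer, hops = queue.popleft()
        if item in stored.get(peer, set()):
            return peer                 # the requester then downloads directly
        if hops == 0:
            continue
        for n in neighbours.get(peer, []):
            if n not in seen:
                seen.add(n)
                queue.append((n, hops - 1))
    return None

print(flood_query("p1", "song.mp3"))  # p4
```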

Grandes desviaciones para V-estadísticos (Large deviations for V-statistics)

7. A.M. Mesón and F. Vericat, On the topological entropy of the irregular part of V-statistics multifractal spectra, Journal of Dynamical Systems and Geometric Theories, 11 (2013) 1-12.
8. A.M. Mesón and F. Vericat, On the irregular part of V-statistics multifractal spectra for systems with non-uniform specification, Journal of Dynamical Systems and Geometric Theories (in press).

Caracterización de Sistemas intensivos en Software desde un punto de vista de innovación=Characterizing Software-intensive Systems from the innovation point of view

Summarizing, firms need knowledge about the factors affecting innovation in order to increase the probability of success in their product development. The assessment of innovation in software products is a powerful mechanism to obtain this knowledge. Many authors analysed product innovation assessment from different perspectives and application domains [50, 9, 58, 34, 96, 95, 22, 55, 8, 41, 49, 54, 103]. However, in most cases the product innovation assessment was merely based on factors grouped into dimensions, and each author proposes their own interpretation of the innovation factors and assessment, though based on the same concepts. Additionally, no one provides guidelines about how to perform the assessment process. This unveils a lack of consensus on product innovation assessment: none of the authors provided guidelines on how to perform the assessment using factors. Therefore, the literature has addressed the need for product innovation assessment, but no general models are yet available. This thesis aims to identify and present the elements of a framework to assess software products from the innovation perspective. To that end, the concepts needed to represent the assessment of product innovation are combined to build a reference model. The reference model is a composition of the list of factors to model innovation, questionnaires for data gathering, and processes to perform the assessment. Additionally, a tool to perform the assessment of software product innovation based on the reference model components has been implemented.

Munk2005--IntroductiontoCGEbasedpoli.pdf

The trade-off between equity and efficiency considerations implies that the rules for dealing with externalities, public goods and increasing returns to scale become more complex than under first-best assumptions. For example, in a sector which is subsidised to support the households employed in that sector, it may not be optimal to apply "the polluter-pays principle" when the distributional consequences of taxing the polluter are taken into account. On the other hand, the changes in tax rates to internalise an externality may at the same time improve the efficiency of the tax system, giving rise to a double dividend, i.e. an increase in social welfare which exceeds the social value of the reduction of the externality. Under first-best conditions, the effect on other markets may be ignored when correcting market failures due to externalities or public goods; this is not the case under second-best assumptions. Likewise, under first-best assumptions an increase in the supply of a publicly produced good may be evaluated based on its market price, neglecting the distributional effects; again, this is not the case under second-best assumptions.

Train scheduling and rolling stock assignment in high speed trains

Demand. In this paragraph the time dependencies of the demand are modelled. We assume that users of an origin-destination pair who want to travel during a given time period are prepared to use any adequate railway service which stops during a corresponding time interval. This assumption is modelled by considering that, for each origin-destination pair, there is a function which provides the total number of users willing to make their trip before a given time. This function may be nonlinear. It is also worth noting that the number of intermediate stops may affect the scheduling process, and therefore the captured demand.
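One possible way to write the demand assumption above, in hypothetical notation (the paper's own symbols are not shown in this excerpt): for each origin-destination pair a cumulative, possibly nonlinear function gives the number of users willing to travel before a given time.

```latex
% Hypothetical notation: D_w(t) = cumulative number of users of
% origin-destination pair w willing to start their trip before time t.
% The demand a service can capture between two instants t_1 < t_2 is then
\begin{equation*}
\Delta D_w(t_1, t_2) \;=\; D_w(t_2) - D_w(t_1), \qquad D_w \ \text{possibly nonlinear.}
\end{equation*}
```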
