in universities and educational environments offering ICT and providing virtual computer labs on demand. In one work, a management model of trade relations between the university and students, based in the Cloud, is proposed, where the central subject is the student. The student can adapt and customize the system according to their preferences. The proposed model is based on an educational website, which integrates the components of e-learning systems, such as: the learning management system, information management, communication and collaboration tools, reporting systems, and social networking services. This system uses both Moodle as a learning manager and SugarCRM as a manager of relations among students. The work also examines the teaching/learning process implemented in the proposed system and includes metrics for evaluating system performance and usage. In another work, a runtime environment for scientific applications on a Cloud is proposed from two perspectives: deploying a Private Cloud and configuring a cluster using the virtual resources that the Cloud offers. Other authors cover mobile devices as a means to access the Cloud, here and now, through the Cloud. They define Mobile Cloud Computing as the "availability of Cloud computing services in a mobile ecosystem". It deals with outsourcing computation and data storage outside of mobile devices, using the devices only to access or view the results of the computation, or the data obtained, as on-demand services from any place where the device is. Finally, some authors speak of sets of small clouds named cloudlets, which are closer to the points of use or location and are wirelessly accessible via mobile devices. They propose to use them in the educational process and in a model of a connected school. The proposed cloudlet characteristics are: a cheap, small device with an Internet connection and a wireless interface.
Abstract: Nowadays the most valued asset of organizations is their knowledge, which is embodied in routines, products, services and employees. Knowledge Management arises as a set of strategies, supported by Information Technologies (IT), that tries to leverage knowledge resources to the maximum in order to obtain competitive advantages: through the creation of new services and products, as well as by improving existing ones, optimizing customer relationships, streamlining the time of routines and delivering information and knowledge to employees on time. The Cloud Computing paradigm, defined by Gartner as "a computing style where the IT capacities, scalable and elastic, are provided as a service to customers using internet technologies", offers a set of technological advantages to the organizations that want to incorporate it in their IT projects. Companies that start knowledge management initiatives can leverage the Cloud Computing features to maximize the scope of their projects and thereby obtain advantages over the competition. This paper presents several ways in which organizations can upgrade their knowledge management strategies through Cloud Computing features.
2. LARGE AMOUNT OF INFORMATION

To understand how easy it is to create information, it is necessary to analyse social media: for example Twitter, which has more than 90 million tweets per day, representing a total of 8 terabytes of information every day. The information from web transactions has also increased significantly: nowadays Wal-Mart, the largest retailer in the world, handles one million transactions per hour, feeding a database estimated at 2.5 petabytes. An example in the scientific area is the CERN particle collider, which can even create 40 terabytes of information per second during experiments. In networking, the information registered by a network research and management system can reach terabytes within two days.
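The scale of the figures quoted above can be checked with simple arithmetic; the following sketch derives a few illustrative quantities (the per-tweet average is our own derivation, not a figure from the text):

```python
# Back-of-the-envelope check of the data volumes quoted above.
TB = 10**12  # decimal terabyte, in bytes

# Twitter: 90 million tweets/day producing 8 TB/day.
tweets_per_day = 90_000_000
twitter_bytes_per_day = 8 * TB
avg_bytes_per_tweet = twitter_bytes_per_day / tweets_per_day  # ~89 KB/tweet

# Wal-Mart: one million transactions per hour.
transactions_per_day = 1_000_000 * 24  # 24 million/day

# CERN collider: up to 40 TB/second during experiments.
cern_bytes_per_minute = 40 * TB * 60  # 2.4 petabytes per minute
```

Even at these rates, a single day of Wal-Mart transactions (24 million) is dwarfed by what the collider produces in a fraction of a second, which illustrates how widely data-generation rates vary across domains.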
The arrival of Ultra-High Energy Cosmic Rays (UHECRs) in the Earth's atmosphere causes Extensive Air Showers (EAS) that produce ultraviolet radiation, which is detected and measured by the telescope of the EUSO programme. However, atmospheric conditions, and especially the presence of clouds, are known to introduce high rates of uncertainty into UV radiation measurements. Accuracy in determining EAS parameters, such as the energy of the primary particle or the shower maximum, is strongly dependent on the atmospheric conditions (such as temperature, pressure or humidity) at the moment when the events take place. These parameters may alter the development and detection of EAS. Unlike ground-based telescopes, JEM-EUSO will be able to observe the majority of the shower development even in the presence of certain types of clouds (especially when the clouds are optically thin or when their Cloud Top Height (CTH) is located below the shower maximum). As the telescope will monitor different atmospheric conditions at the same time, precise knowledge of the spatial atmospheric properties (mainly cloud coverage and cloud top height) inside the telescope Field of View (FoV) is mandatory in order to correctly reconstruct the cosmic ray particle properties. In order to know the atmospheric conditions and the properties of the clouds in the FoV of the telescope, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include a bi-spectral IR camera and a LIDAR. When retrieving CTH using remote sensors, it must be recalled that the thermal emission of the cloud comes from its uppermost layer. Thus, the retrieved CTH is not the physical cloud boundary but the radiatively effective one. In addition, when the temperature profile includes thermal inversions, the conversion between brightness temperature and CTH results in greater uncertainties, which produce large errors.
These limitations are especially relevant for optically thin clouds, which produce strong uncertainties in the extraction of cloud properties. Advances in computing technology have enabled a substantial capacity for Numerical Weather Prediction (NWP) models to assimilate observations, thereby improving the skill of meteorological predictions. Currently, the improved vertical and horizontal resolutions of such models permit cloud features to be simulated with great precision. Thus, NWP models have been evaluated for calculating the CTH of low clouds, because of their independence from disturbances caused by higher clouds. As mentioned above, algorithms based on remote sensing have greater uncertainties for multilayer cloud events.
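The brightness-temperature-to-CTH conversion mentioned above can be illustrated with a deliberately simplified first-order sketch (our own simplification under a constant-lapse-rate assumption, not the retrieval algorithm used by JEM-EUSO): with a monotonic temperature profile, a colder cloud top maps to a higher altitude, whereas a thermal inversion breaks this one-to-one relation and makes the retrieval ambiguous.

```python
# Illustrative first-order CTH estimate (our simplification): assume the
# temperature decreases linearly with height at a constant lapse rate.
LAPSE_RATE = 6.5  # K per km, standard-atmosphere mean value

def cth_from_brightness_temp(t_surface_k: float, t_cloud_k: float) -> float:
    """Estimate Cloud Top Height (km) from the cloud brightness temperature.

    Valid only when temperature decreases monotonically with height; a
    thermal inversion makes one temperature correspond to several heights.
    """
    return (t_surface_k - t_cloud_k) / LAPSE_RATE

# A cloud top 32.5 K colder than the 288 K surface sits near 5 km.
print(cth_from_brightness_temp(288.0, 255.5))  # -> 5.0
```

A 1 K error in the measured brightness temperature already shifts this estimate by roughly 150 m, which gives a feel for why inversions and thin clouds degrade the retrieval so strongly.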
Some of these alternative mechanisms are Paxos, developed by Lamport and Microsoft and based on state machine replication, and Chubby, based on the former and developed by Google, which is defined as a distributed locking service. These approaches have the advantage that they are adaptations of formal algorithms, and therefore their features have been formally proved. RAFT separates key elements of consensus, such as leader election, log replication and safety, and forces a greater degree of consistency to reduce the number of states that need to be considered. The Practical Byzantine Fault Tolerance (PBFT) algorithm, which is based on state machine replication and replica voting for consensus on state changes, is used in Hyperledger and Multichain. SIEVE treats the blockchain as a black box, executing operations and comparing the output of each replica; if there are divergences between the replicas, the operation is not validated. Another variant of PBFT is the Federated Byzantine Agreement (FBA). In FBA, each participant maintains a list of trusted participants and waits for these participants to agree on a transaction before considering it settled. It is used in Ripple. Stellar is another variant that employs the concepts of quorum and partial quorum: a quorum is a set of nodes sufficient to reach an agreement, while a partial quorum is a subset of a quorum with the ability to convince another given node of the agreement. HDAC is a system, currently being implemented, which proposes an IoT Contract & M2M Transaction Platform based on Multichain. HDAC is specially tailored to IoT environments. It uses the ePow consensus algorithm, whose main goals are to motivate the participation of multiple mining nodes and to prevent excessive energy waste.
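The quorum idea shared by these protocols can be sketched in a few lines. The following is our own minimal illustration of majority-quorum voting (not the actual code of any of the systems above; FBA and Stellar additionally let each node choose *which* participants count toward its quorum):

```python
# Minimal majority-quorum sketch (illustrative, not any project's code):
# a value is agreed once it gathers votes from a quorum of the cluster.

def quorum_size(n_nodes: int) -> int:
    """Smallest number of matching votes that forms a majority quorum."""
    return n_nodes // 2 + 1

def reaches_consensus(votes: dict, n_nodes: int):
    """Return the agreed value if some value gathered a quorum, else None."""
    threshold = quorum_size(n_nodes)
    tally = {}
    for value in votes.values():
        tally[value] = tally.get(value, 0) + 1
        if tally[value] >= threshold:
            return value
    return None

# 5-node cluster: 3 matching votes are enough, 2 conflicting ones are not.
print(reaches_consensus({"a": "tx1", "b": "tx1", "c": "tx1"}, 5))  # tx1
print(reaches_consensus({"a": "tx1", "b": "tx2"}, 5))              # None
```

Byzantine variants such as PBFT raise the threshold (roughly two thirds of the nodes) so that agreement survives not only crashed replicas but also replicas that vote maliciously.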
Nevertheless, non-response bias is a potential source of error if prospective respondents that do not answer the study differ from those that do in characteristics that are germane to the research (cf. Dillman, 2000). To assess the seriousness of this problem, country and industry distributions were compared between firms that responded to the online survey and those that abstained from participating. Response rates by country were 16.7 % for the USA and 21.83 % for Canada, which implies that conclusions from this report might be slightly biased toward relationships that can be found more easily in Canadian than in US firms. Of the 400 randomly selected firms in the database of prospective respondents whose industry was identifiable, manufacturing firms accounted for 16.25 %, whereas non-manufacturing ones comprised 83.75 %. Comparing these percentages with those in Table IV.1 reveals that a larger percentage of manufacturing firms answered the survey relative to those that were originally contacted. The implication is that results from this report might slightly overstate relationships that are idiosyncratic to manufacturing companies. A more precise calculation of the non-response bias was not viable because of the way the sample was composed (i.e., mailing lists from two sources did not include the industry of their firms), in addition to the fact that most of these firms do not have their demographic descriptors (e.g., size, characteristics of their HR units, etc.) available in an economically feasible manner. The following pages describe the group of respondents that recorded their answers in the web-based survey.
Mario Carpo, historian of architectural theory, points out an interesting affinity between architectural postmodernism and digital design, in spite of the fact that virtually all of the first stars of the digital vanguard emerged from the angular fractures of deconstructivism. The appearance of a new digital tectonics in the nineties became possible in parallel with the development of a new generation of modelling software, which allowed the direct manipulation of curves on the screen through graphical interfaces (vectors and control points). Two mathematical aspects of this environment have had lasting consequences for digital design approaches: the continuity of splines and the variability of curves within certain limits or parameters. These remain characteristic reference points within digital architecture. The idea of an open and parametric generic notation implies the possibility of an authorship that can be shared by multiple agents, from designers to final users. This is a phenomenon characterized by the absence of -isms or styles, given that computers are neutral machines without aesthetic preferences, but which facilitate the construction of certain types of forms that until now were impossible to represent and materialize with conventional tools.
Regarding the latter challenge, trust management for modelling the behaviour of the IDSs is claimed to be an essential tool for assessing alerts. Furthermore, the collaboration of both the IDSs, distributed across different security or administrative domains, and the “small” detection units provided by mobile users faces two well-known challenges in trust management systems. In particular, two problems appear that are related to the assignment of initial trust scores to new entities (newcomers) that want to join a collaborative system. These two problems are known in the current literature as cold-start and bootstrapping, differing in whether it is the first time that the entity participates in the system or it has already done so earlier in other parts of the system. Both problems also extend to any collaborative environment where entities need to join each other, at least once, for cooperation purposes. Computing the initial trust in the cold-start problem is a common issue for all entities in a collaborative system, whereas the bootstrapping issue specifically affects highly dynamic scenarios, where mobile entities cooperate with each other, or with other system infrastructure entities, along their path of travel. Because of the great interest in this new paradigm of collaboration, both problems have also been incorporated as distinct examples in the last challenge of managing trust in distributed environments.
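The cold-start/bootstrapping distinction can be made concrete with a small sketch. The strategy and names below are our own illustrative assumptions (one simple policy among many proposed in the trust-management literature), not a scheme from the text:

```python
# Illustrative initial-trust assignment (our assumed policy, not a
# published scheme): a first-time newcomer has no history anywhere, so it
# gets a neutral default (cold-start); an entity that already interacted
# with other parts of the system can bootstrap from their reported scores.

NEUTRAL_TRUST = 0.5  # assumed default for a first-time participant

def initial_trust(recommendations: list) -> float:
    """Initial trust score for an entity joining a collaborative system.

    Cold-start: no recommendations exist, fall back to the neutral
    default. Bootstrapping: average the scores reported by domains that
    have already cooperated with the entity.
    """
    if not recommendations:  # cold-start: no prior history at all
        return NEUTRAL_TRUST
    return sum(recommendations) / len(recommendations)  # bootstrapping

print(initial_trust([]))           # 0.5  (cold-start newcomer)
print(initial_trust([0.5, 1.0]))   # 0.75 (bootstrapped from two domains)
```

Real systems refine this in many ways, e.g. weighting recommenders by their own trustworthiness or decaying stale scores, but the core asymmetry between "no history anywhere" and "history elsewhere" is the one described above.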
in [Trajkovska 2010a, Dagher 2008, Carlini 2010]. In the first one we presented a P2P-Cloud architecture for multimedia streaming, including APIs for QoS cost functions, which is one of the contributions that will be described in the thesis. The second work handles P2P concepts combined with cloud servers in order to build a social multimedia application on the cloud. It has proven to adjust the traffic generated by the users and to reduce the provisioning cost for large servers. Carlini et al. [Carlini 2010] showed that a combination of P2P and cloud computing is successful in providing good performance, such as scalability, elasticity, load balancing and resource provisioning, to massive multi-user virtual environments. This includes cost optimization methods, for either QoS cost or provisioning cost, that are similar to our model. The authors claimed that the cloud, when mixed with other techniques, can bring benefits related to cost. A similar research study was presented with the CLive system [Payberah 2012]. The authors used cloud resources for P2P live streaming to complement the lack of peer resources in order to guarantee a predefined QoS. The cloud was furthermore recognized as a suitable platform to leverage improved video transcoding, for example in solutions based on Scalable Video Coding (SVC). In this respect, Chang et al. proposed CloudPP [Chang 2012], a Cloud-based P2P streaming platform that relies on public cloud servers to construct an efficient and scalable video delivery platform with SVC, putting the accent on saving cloud resources. CloudStream [Huang 2011] offered a multi-level transcoding parallelization framework with two mapping options to optimize transcoding speed and reduce transcoding jitter while preserving the encoded video quality, with the objective of delivering high-quality streaming video through adaptation to network dynamics. Studies related to combined approaches with the cloud also involved Video on Demand (VoD) services, such as [Wu 2011].
In the storage systems domain, Stormy [Chang 2012] used cloud storage for efficient data processing, providing optimized resource utilization and increased cost efficiency.
The Government commissioned a study on climate-resilient livelihoods and sustainable natural resources management in the Elephant Marsh, one of the important wetlands, with support from GEF under the Shire River Basin Management Program, whose overall objective is to increase sustainable social, economic and environmental benefits by effectively and collaboratively planning, developing and managing the Shire River Basin's natural resources. The three key objectives of the study in the Elephant Marsh were to improve understanding of the functional ecology of the Elephant Marsh, incorporating hydromorphology, ecosystem services,
Accounting narratives have grown in importance over the past 20 years (Jones & Smith, 2014). They appear in the different sections of annual reports and represent opportunities for management to describe, discuss and evaluate the financial and non-financial performance of the company, setting the context for the financial statements. The section of the annual report that, depending on the country concerned, is known as the Management Report (MR), Management Discussion and Analysis (MD&A), Operational and Financial Review (OFR), or Management Commentary (MC), as designated by the International Accounting Standards Board (IASB), provides information to place accounting results within a wider explanatory context that serves to communicate strategy and thereby attract investors (Ghani & Haverty, 1998). However, the way that companies prepare such reports does not always meet user expectations, particularly in relation to company risks and uncertainties, the criticism being that there is a lack of useful disclosures (Kravet & Muslu, 2013). Several authors argue that a change must be brought about, so that the disclosures offer a better explanation of the situation and performance of the business, but it is not clear how to get companies to disclose information that is relevant for users or to determine the corresponding regulatory approach that should be adopted (Springer, 1992; Hooks, Coy & Davey, 2002; Stock, 2003; Linsley & Shrives, 2006).
Among global initiatives and technical partnerships, the GAVI Alliance Transparency and Accountability Policy (TAP) is an integral part of its monitoring of country performance. In the case of a grant extension, members of the Inter-agency Coordination Committee have to confirm that the funds received from the Alliance have been used for the purpose stated in the approved application and managed in a transparent manner, in accordance with government rules and regulations for financial management. In the case of 3DF, within the three-year period between 2006 and 2009, US$ 91 million of ODA grants were effectively delivered, improving tens of thousands of lives. For those living with HIV, resources are more widespread and easily accessed for supportive care and treatment; for people living in malaria-endemic regions and for migrant workers, there is improved access to ITNs, diagnosis and care; and the free provision of TB diagnosis and treatment led to an 85% treatment success rate and more than 41 000 smear-positive cases detected in 2009. By the end of June 2010, 3DF was supporting 27 HIV projects, 8 TB projects, 9 malaria projects and 4 integrated projects. The MOH and local administrations are involved in the 3DF effort through dialogue at the central level and support for services provided at the township level.
It is no longer thought that processes can be designed with an ideal structure that will remain unchanged over the years. On the contrary, processes are constantly subject to revision. From an internal point of view, every process can always be improved: some detail or some sequence can always be found in it that increases its performance, whether in terms of the productivity of operations or the reduction of defects. In order to implement process management successfully, an organization must invest time and effort, and provide for participation and training. It should be noted that any activity or sequence of activities carried out in the different units is a process and, as such, it must be managed.
EAD was found to be second in the number of errors produced, with 16 errors of omission of place and date in conferences involving corporate authority. In the transformation to EAD we found 12 records that did not include the terms of relation or designators specifying “issuing body” or “funding body” in the access points of the main entry. A total of 13 EAD records were incomplete and generated confusion, as they did not mention in the description whether an online resource and its printed counterpart were catalogued. These errors can be attributed to the number of fields used by the archive rules and the heterogeneity of cataloguing procedures. The rest of the formats – MARC 21, Dublin Core, FRAD, RDF, OWL, XBRL and FOAF – showed fewer than 20 errors in total, indicating that their transfers are carried out with quality and that they are suitable for use with Linked Data (Figure 8).
All the cited ATMS can support daily traffic management, but the decisions on the traffic actions to execute are performed by a single agent, usually a manager agent that stores a general overview of the traffic status. However, if there is a communication breakdown between the TCC and the other elements of the traffic system, there is no opportunity for isolated components of the traffic system to run up-to-date actions to deal with local problems (usually, these isolated traffic systems, in the presence of a communication breakdown, run pre-fixed control actions).
In this paper we have presented an HMI framework for improving the design of in-vehicle speech interactive applications using multimodal and context-aware information. Trying to anticipate and extend current W3C initiatives towards advanced voice and multimodal interactive systems, we have proposed the use of a simple and flexible design framework based on the W3C SCXML language. We have discussed how SCXML provides a general process control mechanism suitable for combining basic speech interaction (TTS/ASR) with other modalities, as well as with other sources of information from sensors available in the vehicle. An automotive platform on top of the OSGi framework has also been presented as a suitable technological platform for enabling event management and data exchange, both of which are required by our HMI framework. The resulting architecture is currently being used and tested for the design of several in-vehicle interactive applications. Future research will therefore address the particular needs of specific applications, as well as the limitations imposed by state-chart schemes, i.e. the difficulty of both design and management, particularly since no graphical development environment is yet available for SCXML.
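The kind of SCXML-based control described above can be sketched as follows; the state and event names here are purely illustrative assumptions of ours, not identifiers from the presented framework:

```xml
<!-- Minimal illustrative SCXML sketch (state and event names are our
     assumptions): a speech dialogue that is interrupted when a vehicle
     sensor reports an event requiring the driver's attention. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="user.speech.start" target="dialogue"/>
  </state>
  <state id="dialogue">
    <!-- TTS prompting and ASR listening would be invoked here -->
    <transition event="sensor.warning" target="alert"/>
    <transition event="user.speech.done" target="idle"/>
  </state>
  <state id="alert">
    <!-- Context-aware interruption: warn the driver, then resume -->
    <transition event="alert.acknowledged" target="idle"/>
  </state>
</scxml>
```

The key point the sketch illustrates is that modality events (speech) and context events (sensors) share one event queue, so a single state chart can arbitrate between them.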
The field of Information Science is constantly changing. Therefore, information scientists are required to regularly review – and, if necessary, redefine – its fundamental building blocks. This article is one of a group of four articles which resulted from a Critical Delphi study conducted in 2003-2005 (Zins, 2007a, 2007b, 2007c). The study, Knowledge Map of Information Science, was aimed at exploring the foundations of information science. The international panel was composed of 57 leading scholars from 16 countries who represent nearly all the major subfields and