Another area in which geographic information systems are used daily by large numbers of people is navigation devices. Such devices emerged thanks to the development of the Global Positioning System (GPS), initially restricted to military use in the 1960s but extended to civilian use since the 1990s. This system allows an object to be located by means of a series of satellites. A GPS receiver is incorporated into a mobile device, which is thereby capable of accurately determining its location. In addition, the road network is represented by a graph in which the nodes correspond to road intersections and are labelled with the permitted turns, and the edges correspond to road segments and are labelled with the allowed direction, the length, and the type. With this information the system can compute the optimal route between two points, taking into account factors such as distance or travel time. The system can also show the user a map of the area with the location, direction, and speed of the vehicle. These devices also include functionalities for the calculation and communication of routes, which can
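The route computation described above is classically a shortest-path search over the weighted road graph. A minimal sketch, assuming a simple adjacency-list representation whose edge weights stand for segment length or travel time (names and structure are illustrative, not taken from any particular navigation system):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a road graph: nodes are intersections,
    weighted edges are road segments (weight = length or travel time)."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None, float("inf")
    # Reconstruct the path by walking the predecessor links backwards.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

Switching the edge weights from segment length to expected travel time is all it takes to optimise for time instead of distance, which matches the "distance or time" trade-off mentioned above.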
In this sense, shared information, in terms of the extent to which channel members proactively exchange information with each other, has been found to be critical for the effective functioning of distribution channels (Li and Dant, 1997). Thus, Information Systems (ISs) could enhance these inter-firm relationships (Ritter and Gemünden, 2003), facilitating an efficient and effective informational flow. Integration and coordination via IS has become a key to improved distribution channel performance (Barut et al., 2002). However, although scholars and practitioners in various fields have turned their attention to sharing information, manufacturers appear disinclined to reveal more than minimal information, since such disclosure could be perceived as a loss of control. Owing to the adversarial nature of business, managers tend to overestimate the possible risks without seeing the potential benefits and thus are reluctant to share information with their partners (Huang and Gangopadhyay, 2004). Previous research suggests that shared ISs could create perceived risks stemming from privacy concerns, in turn making the system vulnerable to fraud (Papazoglous, 2000; McKnight et al., 2002).
Web applications are characterized by the presentation to a wide audience of a large amount of data, the quality of which can be very heterogeneous. There are several reasons for this variety, but a significant one is the conflict between two needs. On the one hand, information systems on the web need to publish information in the shortest possible time after it becomes available from information sources. On the other hand, accurate design of data structures and, in the case of web sites, of good navigational paths between pages, as well as certification of data to verify its correctness, are costly and lengthy activities, while publication of data on web sites must meet stringent deadlines. As for quality dimensions, accuracy, currency, and completeness remain the most relevant, as in the monolithic setting; in addition, a new dimension arises, namely the trustworthiness of the sources. With the advent of internet-based systems, web information systems, and peer-to-peer information systems, sources of data have increased dramatically, and the provenance of available data is difficult to evaluate in the majority of cases. This is a radical change with respect to old centralized systems (still widespread in some organizations, such as banks), where data sources and data flows are accurately controlled and monitored. Evaluating trustworthiness thus becomes crucial in web information systems; several papers deal with this issue, see e.g. [20] and [21]. Web information systems present three peculiar aspects with respect to traditional information sources: first, a web site is a continuously evolving source of information and is not linked to a fixed release time of information; second, the process of producing information changes, and additional information can be produced in different phases; third, corrections to previously published information are possible.
Such features lead to a different type of information with respect to traditional media.
As a result of the free-standing TC8 National Representatives meeting in Zurich in 1983, the Australian National Representative, Cyril Brookes, raised the question of whether it was meaningful to travel "half way round the world" to attend a two-day business meeting. He offered that, in April 1984, the Australian Computer Society would organize an open conference on information systems at which selected TC8 National Representatives would give presentations [12]. The TC8 National Representatives meeting would then be held in conjunction with this conference. (This formula was successful and was repeated in Australia in 1988 and in 1993.)
Normally, many types of models are available in water resources organizations, but current computational models of hydrologic systems are, for the most part, isolated from each other. This is because simulation model software has not been developed in the integrated context of information systems, taking advantage of the information, tools, and interactions existing in them. There are still no conceptual models that allow the integration of simulation models with information systems.
The authors of [30] argue that some methodologies assume that understanding can be built into the method process. They call this 'method-ism' and believe it is misplaced. Insufficient focus on social and contextual issues: the growth of scientifically based, highly functional methodologies has led some commentators to suggest that we are now suffering from an overemphasis on narrow, technical development issues and that not enough emphasis is given to the social and organizational aspects of systems development [31]. Difficulties in adopting a methodology: some organizations have found it hard to adopt methodologies in practice, partly due to the resistance of users to change. No improvements: finally in this list, and perhaps the acid test, is the conclusion of some that the use of methodologies has not resulted in better systems, for whatever reasons. This is obviously difficult to prove, but nevertheless the perception of some is that 'we have tried it and it didn't help and it may have actively hindered'. The work of IFIP WG 8.6 on the diffusion of technology has much to teach us here.
In this research the subjects were first-semester students, but there are several other important stakeholders to consider when evaluating CWIS. Included among these are sophomore students, senior students, virtual students, faculty members, teaching assistants, educational administrators, financial administrators, library staff, system developers, system operators, system administrators, scholarship administrators, parents, and student candidates. All of these CWIS stakeholders interact with one another and with the CWIS, and these interactions will likely result in non-linear links leading to both expected and unexpected behaviors. This suggests research questions such as, "Do virtual students have the same rate of CWIS adoption as in-classroom students?", "Is the rate of CWIS adoption different for students of real-time systems than for students of asynchronous systems?", "What kind of material should the library have available to support various stakeholders?", "What kinds of evidence about academic work should a student collect in his/her electronic
For instance, an application that only uses RDF(S) ontologies may not need any lifecycle services at all. Imagine a web application which simply presents FOAF profiles manually imported from external sources. Then only core ontology services are needed to import, store, and retrieve information from the profiles. A more sophisticated version may employ agents to crawl profiles from the web. Even then, only population and basic cleansing are needed, because due to the use of RDF(S), no inconsistencies can arise that would require engineering services. Now, imagine an application using OWL ontologies to manage the resources of a digital library. Resources are annotated with ontology concepts that can be defined by the user. Most annotations are extracted automatically, and even new concept descriptions are suggested by the system to capture the knowledge contained in new library resources. Clearly, this application would need a wide range of usage and engineering services, and hence integrated lifecycle support.
Many companies have automated their inventory management processes and now rely on information systems when making critical decisions. However, if the information is inaccurate, the ability of the system to provide high product availability at minimal operating cost can be compromised. In this paper, analytical and simulation modelling demonstrate that even a small rate of stock loss undetected by the information system can lead to inventory inaccuracy that disrupts the replenishment process and creates severe out-of-stock situations. In fact, revenue losses due to out-of-stock situations can far outweigh the stock losses themselves. This sensitivity of performance to inventory inaccuracy becomes even greater in systems operating in lean environments. Motivated by an automatic product identification technology under development at the Auto-ID Center at MIT, various methods of compensating for inventory inaccuracy are presented and evaluated. Comparisons of the methods reveal that the inventory inaccuracy problem can, in some situations, be effectively treated even without automatic product identification technologies.
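The disruption mechanism described here can be illustrated with a toy simulation, a deliberately simplified sketch rather than the paper's actual model: the information system records sales and orders correctly, but undetected shrinkage makes the recorded stock drift above the physical stock, so reorders are triggered too late or not at all.

```python
import random

def simulate(days=200, shrink_prob=0.3, demand=5,
             reorder_point=20, order_qty=50, seed=1):
    """Toy periodic-review model: replenishment decisions are driven by
    the system record, but sales are limited by the physical stock."""
    random.seed(seed)
    physical = system = 50
    lost_sales = 0
    for _ in range(days):
        sold = min(demand, physical)        # sales limited by real stock
        lost_sales += demand - sold         # unmet demand is lost
        physical -= sold
        system -= sold                      # system records sales correctly
        if random.random() < shrink_prob:   # undetected stock loss:
            physical = max(physical - 1, 0) # shelf drops, record does not
        if system <= reorder_point:         # reorder driven by the record
            physical += order_qty
            system += order_qty
    return lost_sales
```

With `shrink_prob=0` the record matches the shelf and no sales are lost; with shrinkage, the record eventually stays above the reorder point while the shelf is empty, so replenishment stops, the "freezing" behaviour behind the severe out-of-stock situations mentioned above.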
This research aims to provide a tool for small and medium businesses that need to collect and exchange field data, to be processed and stored in information systems. Small and medium enterprises lack tools for managing the business efficiently. The lack of information available to the owners of such businesses has a direct impact on business costs. Several case studies argue that technological modernization in enterprises can increase productivity and reduce costs by up to 50 percent. The Medway Plastics case study is presented in [1]; it shows that the implementation of an Information Technology (IT) solution simplifies management, saving money and increasing productivity by 50 percent. Another case study is the company Peaks [2]; it demonstrates that online collaboration enabled the company to reduce the costs of consultants, also by 50 percent. Figure 1 shows the three main stages of the business process for small and medium enterprises:
In the second stage, a selection of articles related to the subject is carried out, using keywords to search the titles, abstracts, and keywords of articles, also taking into account academic reputation (based on the number of citations). Finally, the third stage involves a systematic bibliometric analysis of the article portfolio. Based on this structure, articles aligned with the subject of information systems and Business Intelligence were gathered sequentially from the Scientific Periodicals Electronic Library (SPELL) database. At the international level, the Coordination for the Improvement of Higher Education Personnel (CAPES) Journal Database was searched for national and international journals. The keywords searched were "information system," "health sector," and "business intelligence," in the title, abstract, and keyword fields.
Caballé Llobet, S., Juan Perez, A. A., & Xhafa, F. (2008). Supporting effective monitoring and knowledge building in online collaborative learning systems. In Emerging Technologies and Information Systems for the Knowledge Society: Proceedings. Lecture Notes in Artificial Intelligence, Vol. 5288, pp. 205–214. (5 to 6 citations)
In SOMAS, middle agents provide different kinds of matchmaking functionality. If no adequate services are available for a specific request, a planning functionality can be used to build composite services. This problem differs subtly from classical AI planning problems, as service composition plans need not be very deep but can be built from the vast number of services (operators) usually registered in the directory. In order to take advantage of recent advances in the field of AI planning for this purpose, we propose exploiting the organisational information available in SOMAS to heuristically filter out those services that are probably irrelevant to the planning process. In this section, we first present an abstract framework for service-class-based filtering. We then show how it can be instantiated in a particular MAS domain based on role and interaction ontologies, and finally present a quantitative evaluation of this approach.
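A minimal sketch of this kind of pre-planning heuristic, under the assumption that each registered service advertises a set of ontology concepts and that the organisational ontology supplies a relevance closure for the goal's concepts (all names and structures here are illustrative, not the SOMAS API):

```python
def filter_services(services, goal_concepts, concept_closure):
    """Keep only services whose advertised concepts overlap the
    closure of the goal's concepts; the rest are deemed probably
    irrelevant and are hidden from the planner."""
    relevant = set()
    for concept in goal_concepts:
        # Closure: the concept itself plus the concepts the
        # organisational ontology relates to it.
        relevant |= concept_closure.get(concept, {concept})
    return [s for s in services if s["concepts"] & relevant]
```

The planner then searches over the (much smaller) filtered set of operators instead of the full directory; since the filter is heuristic, a relevant service can in principle be dropped, which is the usual completeness trade-off of such pruning.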
economic environments. It should also be remembered that, although North American academic libraries are the driving force behind much innovation in the LIS field and are the source of much new thinking in the discipline, librarians in other countries sometimes have to deal with certain issues before they become critical in the United States or Canada; hence there will be times when the flow of information travels in the other direction (Cullen & Calvert, 2001:394).
Abstract: Membrane computing is a recent area within natural computing. This field develops computational models that process information based on the behavior of nature. Recently, numerous models have been developed and implemented for this purpose. P-systems are the structures that have been defined, developed, and implemented to simulate the behavior and evolution of the membrane systems found in nature. What we show in this paper is a new model that deals with encrypted information and provides security for communication between membrane systems. Moreover, we find nondeterministic and random applications in nature that are suitable for MEIA systems. Their inherent parallelism and nondeterminism make these applications ideal candidates for implementing MEIA systems.
The third case can be exemplified by NGOs that are involved in facilitating the collaborative management of protected areas by convening the different stakeholders, providing mediation structures between government agencies and local people, and enabling a sharing of information that allows all stakeholders to negotiate use of and access to resources within existing legal frameworks. In this case the basis of the service is not just information provision, but facilitation of the coming together of different parties to negotiate under structured conditions (Fisher, 1995). Table 4 describes several ICT demand and supply issues that require clarification. We are referring here to demand and supply of ICT infrastructure (the hardware), though the drivers for these are the services (applications). As mentioned before, communication services are often a major driver for infrastructure. On the demand side, those farmers with market access will 'go it alone' and buy the information and communication equipment and services that they can afford, initially cell phones. For the telecommunication carriers these customers are the easiest to win, what is known in the industry as "cherry picking", though in rural areas they are often only a fraction of the public with a measurable willingness to pay. The other user groups will have strength in numbers, with an individual spending capacity limited to about 3% of their total monthly expenditures (Kayani and Dymond, 1997; Song and Bertolini, 2002). When aggregated, this population of users can become a substantial driver for rural phone expansion, especially in high-density areas, Bangladesh being one well-documented example (Richardson et al., 2000). The business case to attract infrastructure
In this paper we present a framework for the deployment and use of dialogue-based applications that require Web technologies such as REST services. In a dialogue, a service may ask the user for additional information, and the next steps depend on the nature of the information supplied. For example, a medical diagnosis service typically requires different information (e.g. analytic measures) depending on the values of other parameters (symptoms) already analysed. We cover the main issue of such applications: services do not necessarily need to be used in a single one-shot workflow; rather, they can be part of an ongoing workflow. This is the case, for example, of a medical diagnosis service, where it is not necessary to send the whole patient health record but just the requested measure.
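One way to picture such a dialogue-based service, as a hypothetical sketch rather than the paper's actual framework, is a function that, given the information supplied so far, either asks for one more datum or returns a result, so the client never has to send the whole record. All field names and the fever threshold below are illustrative assumptions:

```python
def diagnose(session):
    """One dialogue step: either request a missing datum or conclude.
    Field names and the 39.0 threshold are illustrative only."""
    if "symptom" not in session:
        return {"ask": "symptom"}
    if session["symptom"] == "fever" and "temperature" not in session:
        return {"ask": "temperature"}
    if session.get("temperature", 0) > 39.0:
        return {"diagnosis": "high fever, further tests advised"}
    return {"diagnosis": "no acute findings"}

def run_dialogue(answers):
    """Drive the dialogue, supplying answers one at a time, much as a
    REST client would across successive requests."""
    session = {}
    while True:
        step = diagnose(session)
        if "ask" in step:
            session[step["ask"]] = answers[step["ask"]]
        else:
            return step
```

In a REST deployment each `diagnose` call would be one request/response exchange, with the session state held by the client or referenced by a resource URL; only the requested measure travels in each message.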
To homogenize common information across different clinical settings, such as clinical trial management systems (CTMS), electronic health records (EHR), laboratory information management systems (LIMS), and others, in this work we propose a standards-based SIL comprising one common information model (CIM) and a set of services acting as homogeneous endpoints for data access. As shown in Fig. 1, the proposed SIL is defined by the interaction between the CIM and the services for data access. The CIM is composed of three main components: (i) the common data model (CDM), (ii) the core dataset (terminologies), and (iii) the linking between them (terminology binding). The SIL was designed as the basis for software services and tools developed within the project, which are focused on enhancing clinical research with genetic information.
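The terminology binding component can be pictured as a lookup from site-local codes to the core-dataset codes, so that every endpoint exposes data in the same vocabulary. This is a deliberately simplified sketch; the site names, local codes, and the binding table are illustrative assumptions, not taken from the project:

```python
# Hypothetical terminology binding: map (site, local code) pairs to a
# shared core-dataset code so all endpoints expose homogeneous data.
# LOINC 2345-7 (serum/plasma glucose) is used only as an example.
BINDING = {
    ("site_a", "GLU"): "LOINC:2345-7",
    ("site_b", "glucose_mgdl"): "LOINC:2345-7",
}

def to_cim(site, record):
    """Rewrite a site-local record into common-information-model terms;
    fields without a binding are dropped as not part of the core dataset."""
    bound = {}
    for local_code, value in record.items():
        std = BINDING.get((site, local_code))
        if std is not None:
            bound[std] = value
    return bound
```

Two records that look different at their sources then compare and aggregate directly once rewritten, which is exactly what the homogeneous data-access services rely on.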
On the other hand, students will be faced with the study of intelligent techniques for the design of systems for parameter estimation and pattern recognition, for those areas where less advanced techniques have proved inefficient. Dimensionality reduction, machine learning, and computational intelligence, with special emphasis on neural networks, decision trees, and evolutionary algorithms, are examples of such techniques.
The VLIR-UOS Network Cuba is developing an information network for research and education that integrates different platforms for libraries, education, and research. To integrate the different platforms, standards have to be developed. Following the ideas of the web of data, the network will identify data elements in a standard way across the different platforms: people, organizations, and content will get standardized unique identifiers. To support the standards, a central authority system is being developed: eSFàcil Authority. This approach is now integrated into the different platforms used: Moodle, ABCD, DSpace, and VIVO. For DSpace, a specific submission module, eSFàcil, is under development that uses metadata auto-extraction tools and the eSFàcil Authority system to move the standard repository metadata from plain text to uniquely identified objects.