To summarise, we found that the main processes underlying quality management are planning, improvement and control processes. Theories such as Juran's trilogy, Deming's 14 points, Kaizen theory, Six Sigma, quality circles, TQM, TOC and so on share the objective of improving quality based on compliance with customer requirements and needs. Each method differs from the others, and companies can combine several of them or use just one. In the food industry, given the existence of mandatory quality management systems (GMP, GHP, HACCP), a combination of three theories can be recommended: Juran's trilogy, because of its relation to HACCP; Kaizen theory, because it is important to improve quality in every process through small but incremental steps; and Ishikawa's theories on supplier quality management, because, as we described before, the quality of a final product has to be managed across the supply chain.
Today, many organizations are implementing MSs not just to fulfill the requirements of individual MSSs, but to operate in a more combined, efficient and effective way (Asif et al., 2010). In doing so, organizations can achieve significant internal benefits as well as meet external demands (Asif et al., 2010). Thus, there has been growing recognition of the value that IMSs can bring to the business (Karapetrovic and Willborn, 1998; Wilkinson and Dale, 1999b; Douglas and Glen, 2000; Renzi and Cappelli, 2000; Casadesus and Karapetrovic, 2005; Zutshi and Sohal, 2005; Zeng et al., 2007; Salomone, 2008; Asif et al., 2009; Khanna, 2010; Asif et al., 2010). The major improvements these authors attribute to an integrated system include cost savings, operational benefits, a better external image, improved customer satisfaction and enhanced employee motivation. However, it is important that firms manage the difficulties associated with the implementation and maintenance of an IMS in order to avoid its failure (López-Fresno, 2010). These challenges are numerous and include the lack of human resources, the lack of government support, the departmentalization of functions and the individual concerns of the people involved (Karapetrovic and Willborn, 1998a; Karapetrovic, 2003; Zutshi and Sohal, 2005; Karapetrovic et al., 2006; Zeng et al., 2007; Salomone, 2008; Asif et al., 2009; Karapetrovic and Casadesus, 2009).
Remote sensing and GIS analysis showed a dramatic decrease in the area of grassland from 1975 to 2014, accompanied by an increase in the area of croplands and settlements over the same period. Rapid population growth, the associated demand for diverse products and the deterioration of soil fertility over time forced farming families to convert part of their land to other forms of land use/cover. Declining soil fertility due to soil erosion, combined with a lack of financial capacity for its restoration, leaves the majority of households food insecure, and poor households in particular. Therefore, future attempts at soil fertility management in the region should not only entail the application of technologies that add nutrients to the soil, but should also be complemented by measures that reduce nutrient losses through runoff and soil erosion. Given the proximity of the study area to a major market, i.e. Addis Ababa, building public-private partnerships around market-oriented barley production can be an entry point for encouraging investment in external nutrient inputs to improve soil fertility and boost agricultural productivity. The large number of religious holidays in the study area also contributes, directly or indirectly, to the community's current seasonal food shortages. Government and local officials should intervene by discussing the issue with religious leaders and community elders to reduce the number of religious holidays in the area. In general, the results of our study provide compelling evidence that the local community in the study area is beset by a host of social, economic and institutional challenges which need to be properly addressed to come to grips with food insecurity. We therefore recommend the involvement of interdisciplinary stakeholders and a policy framework, addressing both biophysical and social perspectives, to curb this dire situation.
In particular, enabling local people and building their capacity with different agricultural technologies would not only help them become food secure but also contribute greatly to environmental protection in the future.
Cabanero-Pisa, C.; Serradell-Lopez, E. (2010). Computer Sciences Applied to Management at Open University of Catalonia: Development of Competences of Teamworks. In 1st International Conference on Reforming Education, Quality of Teaching and Technology-Enhanced Learning: Learning Technologies, Quality of Education, Educational Systems, Evaluation, Pedagogies, May 19-21, 2010, Athens, Greece; Technology Enhanced Learning: Quality of Teaching and Education Reform, Book Series: Communications in Computer and Information Science, 73, 237-243.
Differences in TTR and in the quality of anticoagulation control between patients managed by GPs and those managed by haematology clinics have been reported previously. In a meta-analysis by Baker et al.,25 eight studies with more than 22 000 patients demonstrated a mean TTR of 55%, which was significantly lower when control was carried out by GPs rather than by haematology (63 vs 51%). In our study, the mean TTR estimated by the Rosendaal method was higher than in the American study. In contrast, the mean TTR in our analysis was significantly higher in patients controlled by GPs than in those controlled by haematologists. In addition, the higher propensity towards poorer control associated with the haematologist being responsible for anticoagulation control was not modified when the analyses were adjusted for variables associated with poor anticoagulation control, such as a high HAS-BLED score or a history of bleeding. It should be noted that we are not comparing professionals (haematologists vs primary care physicians) but two differing anticoagulation management systems (including the number of visits, patient accessibility and the availability of important clinical information). However, there are other models to assess the quality of anticoagulation. Hou et al26 performed a systematic review of 8 clinical trials and 9
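The Rosendaal method referred to above linearly interpolates INR values between consecutive measurements and counts the interpolated person-time that falls inside the therapeutic range. A minimal sketch, assuming daily interpolation granularity and an INR target range of 2.0–3.0; the function name and data layout are illustrative, not taken from the studies cited:

```python
from datetime import date

def ttr_rosendaal(readings, low=2.0, high=3.0):
    """Time in therapeutic range (TTR) via Rosendaal linear interpolation.

    readings: chronologically sorted list of (measurement_date, INR) pairs.
    Returns the fraction of follow-up days whose interpolated INR lies
    within [low, high].
    """
    in_range_days = 0.0
    total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(readings, readings[1:]):
        span = (d1 - d0).days
        if span <= 0:
            continue  # skip same-day or out-of-order measurements
        total_days += span
        # Linearly interpolate one INR value per day of the interval.
        for day in range(span):
            inr = inr0 + (inr1 - inr0) * day / span
            if low <= inr <= high:
                in_range_days += 1
    return in_range_days / total_days if total_days else 0.0

# Example: ten days climbing from a sub-therapeutic INR of 1.75 to 2.75.
readings = [(date(2020, 1, 1), 1.75), (date(2020, 1, 11), 2.75)]
print(ttr_rosendaal(readings))  # 7 of 10 interpolated days in range -> 0.7
```

Because TTR is computed over person-time rather than over visits, two anticoagulation management systems with the same number of in-range visits can still differ in TTR when their visit intervals differ.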
In total, one hundred publications were analysed; the list of publications is given in Appendix A. Tables 2.11, 2.12, 2.13, 2.14 and 2.15 show the results of the analysis for ontology engineering tools, ontology matching tools, reasoning systems, semantic search tools and semantic web service tools, respectively. Each table shows the number of papers in which a specific type of semantic technology is evaluated, together with the quality characteristics (shown in bold), extracted from the SQuaRE standard, that are evaluated for the related technology type. For every characteristic found in the literature for a particular semantic technology type, a set of measures that can be used to measure that characteristic is presented. In those cases where different terminology was used, measures are referred to by the names used for them in the literature and are grouped under an arbitrary common name (shown in italics) that semantically denotes the measure. For example, classification correctness for reasoning systems is found in one publication as “number of successes” and in another as “number of solved tasks”; these two measures are therefore grouped together under “classification correctness”. In some cases, however, it was not possible to determine the exact classification (e.g., in the case of the time behavior of reasoning systems). The number of papers in which a specific measure was found and the number of papers that evaluated a specific characteristic are shown in brackets. Numbers are omitted when there is only one occurrence.
Web applications are characterized by the presentation to a wide audience of a large amount of data, the quality of which can be very heterogeneous. There are several reasons for this variety, but a significant one is the conflict between two needs. On the one hand, information systems on the web need to publish information in the shortest possible time after it becomes available from information sources. On the other hand, data quality must still be ensured: the most relevant dimensions are, on one side, accuracy, currency and completeness, which are relevant also in the monolithic setting; on the other side, a new dimension arises, namely the trustworthiness of the sources. With the advent of internet-based systems, web information systems and peer-to-peer information systems, the number of data sources increases dramatically, and the provenance of the available data is difficult to evaluate in the majority of cases. This is a radical change with respect to old centralized systems (still widespread in some organizations, such as banks), where data sources and data flows are accurately controlled and monitored. Evaluating trustworthiness therefore becomes crucial in web information systems, and several papers deal with this issue. These two requirements are in many aspects contradictory: the accurate design of data structures, and in the case of web sites of good navigational paths between pages, together with the certification of data to verify its correctness, are costly and lengthy activities, while the publication of data on web sites is subject to stringent deadlines. Web information systems also present peculiar aspects with respect to traditional information sources: first, a web site is a continuously evolving source of information and is not linked to a fixed release time of information; second, the process of producing information changes, additional information can be produced in different phases, and corrections to previously published information are possible.
Such features lead to a different type of information with respect to traditional media.
Possible weaknesses in the security of a wireless system should be recognized so that the right measures can be taken to improve users' confidence. AmI systems require data privacy, data security and physical security. Even a small lapse in AmI security can have a major impact on everyone involved. Issues such as authorization, authentication and accounting are important considerations for data security. The different devices and communication standards should be studied properly. Security requirements in AmI systems are studied from the points of view of data integrity, authentication and confidentiality.
Received: 18 December 2019; Accepted: 13 February 2020; Published: 24 February 2020 Abstract: In recent years, software engineering researchers have defined sustainability as a quality requirement of software, but not enough effort has been devoted to developing new methods and techniques to support the analysis and assessment of software sustainability. In this study, we present the Sustainability Assessment Framework (SAF), which consists of two instruments: the software sustainability–quality model and the architectural decision map. We then use participatory and technical action research in close collaboration with the software industry to validate the SAF with regard to its applicability in specific cases. The unit of analysis of our study is a family of software products (Geographic Information System- and Mobile-based Workforce Management Systems) that aim to address sustainability goals (e.g., the efficient collection of dead animals to mitigate social and environmental sustainability risks). The results show that the sustainability–quality model integrated with the architectural decision maps can be used to identify sustainability–quality requirements as design concerns, because most of its quality attributes (QAs) have been either addressed in the software project or acknowledged as relevant (i.e., creating awareness of the multidimensional sustainability nature of certain QAs). Moreover, the action–research method has been helpful in enriching the sustainability–quality model by identifying missing QAs (e.g., regulation compliance, data privacy). Finally, the architectural decision maps have proven useful in guiding software architects and designers in their decision-making process.
The design of the research entailed analysing the types of flexibility through managerial perceptions. This particular assessment method should be borne in mind when considering the results. Here we analyse managerial perceptions of QM and the consequences it has on flexibility. The point of view will therefore differ from an employee perspective, and this could have a bearing on some differences in the research results. For example, Chow (1998) found differences between employer and employee perceptions of human resource practices: HR managers claimed that empowerment was being exercised (using one's own judgement, making one's own rules), while employees did not see it the same way. The use of employee representatives as respondents in previous studies could be a factor explaining the differences in the results. From the cognitive bases of strategy, according to Jackson and Schuler (1995, p. 253), human resource managers, acting individually or as a group, interpret their environments, and this process impacts the actions they put into practice. In this vein, this research considers managers' underlying values when they take decisions on human resource systems and practices, and thus provides a better understanding of the decisions human resource managers take. Future research could include other information sources in order to incorporate employees' points of view into the study. Similarly, we did not analyse the implications for organisational results that considering QM as an antecedent of labour flexibility might have. Furthermore, our study is limited to service companies in Spain. As pointed out by the results of other studies, such as that of Bacon and Blyton (2001), there is a danger in generalising single-country findings across national boundaries. The results should therefore be interpreted in the context in which they were obtained.
Continuing in a different vein, a considerable amount of work has been done on usability measurement based on a wide range of measures such as task completion, accuracy, error rates, precision and understanding, among many others [34,24]. For example, the international standard ISO/IEC 25010:2011 Systems and software Quality Requirements and Evaluation (SQuaRE) defines a quality in use model as a composition of five characteristics: effectiveness, efficiency, satisfaction, freedom from risk and context coverage. This standard is applicable to the quality evaluation of both systems and software products. Closely related to this quality standard is ISO/IEC 25060:2010, which provides a general overview of the Common Industry Format (CIF) framework. ISO/IEC 25062:2006 defines the structure of a usability test report and standardizes the information that must be reported when performing usability tests with users, replacing the industry's proprietary formats and providing several advantages, such as the reduction of staff training times and the standardization of test result documentation. From another point of view, and related to the usability evaluation of model-driven development (MDD) tools, Condori-Fernández et al. presented a framework for the usability evaluation of MDD tools under similar conditions. In this framework, the authors propose a combination of methods to measure whether the test participants perform the tasks used to evaluate the usability of the tool in an optimal way, as well as an example of how user satisfaction can be measured by observing users' facial expressions. A different framework has been proposed for the evaluation of CASE tools, focusing on the evaluation of learnability, considered one of the usability sub-characteristics. This framework performs the evaluation by means of a questionnaire on the learnability of CASE tools based on several criteria, such as ease of learning, familiarity, consistency and predictability.
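Measures such as task completion and error rates are simple ratios that can be computed directly from test-session logs. A minimal sketch using common formulations (completion rate, mean errors per task, time-based efficiency), not the exact definitions of any one standard; the function name and data layout are illustrative:

```python
def usability_metrics(sessions):
    """Aggregate common usability measures over a set of test sessions.

    sessions: list of dicts with keys 'completed' (bool), 'errors' (int)
    and 'time_s' (task duration in seconds).
    """
    n = len(sessions)
    completed = sum(1 for s in sessions if s["completed"])
    # Effectiveness as task completion rate (completed tasks / attempted tasks).
    effectiveness = completed / n
    # Mean number of errors per task attempt.
    error_rate = sum(s["errors"] for s in sessions) / n
    # Time-based efficiency: goals achieved per second, averaged over sessions.
    efficiency = sum((1 if s["completed"] else 0) / s["time_s"] for s in sessions) / n
    return {
        "effectiveness": effectiveness,
        "error_rate": error_rate,
        "efficiency_goals_per_s": efficiency,
    }

# Three hypothetical test sessions: two successes, one failure.
sessions = [
    {"completed": True, "errors": 0, "time_s": 50.0},
    {"completed": True, "errors": 2, "time_s": 100.0},
    {"completed": False, "errors": 3, "time_s": 80.0},
]
print(usability_metrics(sessions)["effectiveness"])  # 2 of 3 tasks completed
```

Reporting these ratios alongside the raw session counts is what CIF-style test reports require, since the ratios alone do not convey sample size.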
Firstly, we present some of the known metadata management issues in the context of the myGrid project, which provides a rich service-based middleware infrastructure for the bioinformatics domain (Section II). Secondly, we analyze some of the existing approaches and technologies that can be used to address these issues (Section IV). Our main contribution is a novel proposal for managing metadata as first-class resources in distributed systems, known as S-OGSA (Section V). Designed as a non-disruptive extension to the OGSA architecture, S-OGSA provides a service-oriented approach to large-scale, uniform metadata management on the Grid. After presenting the core S-OGSA service, called the “Semantic Binding Service” (SBS), we conclude by arguing that the proposed architecture addresses the management issues listed above. A prototype version of the SBS has been implemented and is deployed as a Grid service within the Globus Toolkit v.4 service container.
Self-adaptive systems have attracted growing interest in academia and industry. As a result, there currently exist different proposals for the specification of requirements for self-adaptive systems. Despite the momentum this area has gained in recent years, we have identified shortcomings in the works reported in the literature. We propose a new framework to represent the requirements of self-adaptive systems. This framework seeks to manage uncertainty and to be sufficiently expressive for self-adaptive systems, including the representation of all the relevant concepts. The concepts are represented in different views to be used in a five-stage process. Specifically, we present: (i) a discussion of the challenges and problems encountered in the literature; (ii) our proposal to address these challenges; and (iii) a case study of the problem and its application.
"Rationale. The scheme represents the conception of information science as the science of the information society (focusing on information systems); it studies information and its five basic sub-processes – generation, processing, communication, storage and use – in order to optimize them. These processes are related to information as an immaterial product and represent the information cycle (within a research system). It is similar to the well-known product cycle (within an economic system) with its three basic processes: production, distribution, and consumption. This is a managerial and pragmatic approach (Dragulanescu, 1999)" (Nicolae Dragulanescu)
A final point worth mentioning is that the entire field of revision is changing, in ways that may affect the agencies and revisers working in the public-service/community sector and bring new working methods. Two changes in particular are worth noting. First, to some extent, revision has been "brought in from the cold" and is now formally recognized in translation quality processes through the adoption of quality standards such as EN 15038, the European standard for translation quality, now widely subscribed to by many agencies working in EU-related areas. In the United States, the ASTM standard covers roughly the same ground, and work is now under way towards an ISO standard. Here, revision is a mandatory part of the translation quality process. Some of these same agencies will work in the public-service/community sector and may apply the same standards there. Even in countries outside the EU or the United States, where public-service translation is still carried out by relatively small independent bodies outside formal quality frameworks, or by individual professionals, the need to specify processes and to recognize quality standards more formally may become more evident, as public-sector clients seek greater consistency in translation output, in the same way that they have driven the demand for certification in public-service/community interpreting.
In addition, the students will acquire enabling competences for the systematic development of the control logic of industrial processes by applying normalised techniques expressed in formal languages and implementing them using standard design patterns or architectures. The students will also acquire competences to implement supervision and inspection systems by using industrial perception technologies and by applying techniques for parameter estimation and pattern recognition for advanced signal processing.