Though accepting that the translation of specialized documents requires sufficient specialized knowledge of the domain, specific subject fields can always be approached from different perspectives, or they change and develop in new directions, so translators need to train constantly in new subjects. This requires great effort and a considerable amount of time, especially if we consider that the training must be repeated in all their working languages. As Wilss states (1996: 58): Whether translators (…) understand an LSP text depends, apart from familiarity with the respective terminology, upon their knowledge of the respective domain. This may be a simple truth, but simple truths may imply consequences which are far from being simple or trivial. This is why translators, apart from having an excellent command of the terminology of a domain, need to be in a continuous process of training in new specific topics. However, existing lexical resources are of little use in helping translators to obtain the required multilingual knowledge.
Nobody can deny that school plays a crucial role in L2 learning; however, our analysis has clearly highlighted the role of what happens outside the school walls. The promotion of intergroup contact is crucial both for fostering better understanding and an increased mutual feeling of openness and friendliness between the groups and for experiencing the L2 in a real context. The tendency to consider school as the only setting where the L2 can be acquired, most typical of the Italian-speaking community (cf. Giudiceandrea 2006: 23-37), needs to be changed. As a matter of fact, placing excessive emphasis on the role of the school in L2 learning without considering the influence of extra-linguistic factors puts L2 teachers in a difficult position and stirs up the ever-recurrent debate about L2 teaching methods, always in search of the best approach. The attempt by the Italian-speaking schools to introduce the Content and Language Integrated Learning method, which dates back to the 1990s and which, after decades of polemics and legal struggles, has only recently been officially ratified for both the Italian- and the German-speaking schools (cf. BU n. 29/I-II of 16/07/2013, BU n. 27/I-II of 08/07/2014 and BU n. 5/I-II of 02/02/2016), is the clearest demonstration of the generalized approach to the topic in South Tyrol.
over the world to make their data publicly available. We will have to assume that when using meaningful local names, these agents will use their own languages. For certain domains, this is also natural, since conceptualizations differ from culture to culture, and this is reflected in the language used to describe them; for some thoughts about the reuse of available conceptualizations in different cultural and language settings we refer the interested reader to Cimiano et al. (2009). Therefore, leaving technical problems aside, we believe that the use of opaque local names avoids any bias towards English (or any other language) and is a better option for ontologies that might support natural language descriptions in several languages. Indeed, this solution has been adopted by multilingual semantic resources such as EuroWordNet (Vossen, 1998) and the Agrovoc Thesaurus of the FAO (Liang et al., 2008), and was also the solution provided in the transformation of the FRBR models and the ISBD standard into RDF, as reported in section 3.2.2. In this case, labels are to be used to document the ontology in natural languages, as also recently suggested by Tim Berners-Lee.
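The opaque-naming pattern described above can be sketched in a few lines of plain Python (no RDF library; the base URI, the concept identifier `c0042` and the labels are all invented for illustration):

```python
# Minimal sketch: an opaque local name carries no language bias; the
# natural-language documentation lives in language-tagged labels.
BASE = "http://example.org/onto#"   # assumed namespace, for illustration only

triples = [
    (BASE + "c0042", "rdfs:label", ("river", "en")),
    (BASE + "c0042", "rdfs:label", ("río", "es")),
    (BASE + "c0042", "rdfs:label", ("Fluss", "de")),
]

def labels_for(subject, lang):
    """Return all labels of a subject in the requested language."""
    return [text for s, p, (text, l) in triples
            if s == subject and p == "rdfs:label" and l == lang]

print(labels_for(BASE + "c0042", "es"))  # ['río']
```

The same concept URI thus serves users of any language; adding a new language means adding labels, not renaming resources.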
This brings us to another concern, situated not at the level of the individual SP but at the organisational and institutional level. Against the backdrop of rapidly growing multi-ethnic and multilingual realities, public service organisations and governments in Flanders (Geldof, Connerty and Phillimore, 2016; Noppe et al., 2018) are faced with the limits of traditional ways of language bridging and a pressing need for additional measures to ensure mutual understanding. Technological resources addressing these challenges are slowly gaining ground. The initiatives, some of which were mentioned in the introduction, show that there is increasing recognition of the needs faced by public SPs. Yet what is often missing is an awareness that digital innovation is always a struggle and that novelties come with a host of fears and insecurities. To facilitate the implementation of digital language support tools in an age of superdiversity (Vertovec 2007), “institutions need to adopt a bottom-up, empowering culture and a trial-and-error mind-set, embracing failure as part of the process and understanding that capabilities are built up through experience over time” (De Wilde, Van Praet and Van Vaerenbergh, 2019: 35). What is more, in the process of change, SPs need to be given the opportunity to take a step back for reflection (see also Iedema and Carrol, 2013). To exploit the full potential of technological mediation in language-discordant health care encounters, it is highly recommended that public institutions invest in additional training sessions or facilitate peer learning network meetings to encourage reflection.
As stated in reference , ”Data integrity, consistency, redundancy, connectivity, updatability, expandability and complex and ‘fuzzy’ queries are the problems associated with data integration, which arise from the nature of heterogeneous data and the lack of unified ontology”. Therefore, there is a need for integration systems that are able to recognize different ontologies and the semantics of the data. Yet integration systems should also provide an environment that allows users to integrate their own data and customize the system. After analyzing the existing solutions, we noted a clear trend towards the use of semantic technologies. The process of semantic annotation, mapping and querying is increasingly being used and offers a very suitable approach to data integration. Ontologies and other semantic technologies are a great tool; however, their quality should be improved, as should their correct use in the data source annotation process. Goble’s work , in defending the idea of light integration using mashups, argues that, for better or worse, application development in bioinformatics is based on the “just in time, just enough” mantra. Moreover, it argues that what really matters is the biology, not the engineering, because engineering tends to produce overly complex solutions that take too long to develop and do not match users’ needs. After analyzing the existing approaches to biomedical data integration, we conclude that, rather than basing the development of integration systems on a specific architecture or model, it would be more adequate to develop a methodology for biomedical data integration that provides dimensioned, correct solutions adapted to the problem’s needs.
Enrique Torrejón presents a comprehensive view of the translation tools ecosystem, one that integrates technologies, standards, workflows and processes. He writes from the perspective of an experienced consultant in translation technology and describes, in the manner of a case study, the truth behind the integration of tools: a blessing? A curse? Torrejón acknowledges the benefits of automating clerical work in project management via translation management systems (TMS), but integrating them with other systems (translation memory, machine translation, quality assurance) can turn into a tremendous effort. Yes, APIs are there, but interoperability levels among tools vary. Sometimes translators need to move work to the cloud after integration has taken place, and they may find that their translation environment has completely changed. In this respect, Torrejón offers valuable advice by illustrating the ins and outs of integrating XTRF with memoQ server in the setting of a small translation vendor. A word of warning here: look carefully into what you already have in place before engaging in any new implementation. Otherwise you might find that the struggle does not pay off in the end.
Due to space limitations we cannot explain in detail the personal and professional profiles of each target group. We will, however, highlight some points. Most of the healthcare professionals work in Spain (84.21%), although we also received some responses from British, German, Argentinian and Canadian doctors. Their native languages coincide with the language of the country in which they work, although some of them are bilingual and almost all of them speak English fluently as a second language, as well as some other European languages. Participation of female doctors is slightly higher (53.12%) than that of male doctors. Most have over 15 years' work experience (40.62%), while others have fewer than five years (31.25%) or between five and 15 years (28.13%). All are graduates and have degrees in medicine (except one student), and the most common medical speciality is family medicine, which is in accordance with the high rate of participation from members of the MEDFAM-APS list. Nevertheless, the variability of responses was wider than we had foreseen. In fact, 11 respondents chose the option “Others” and indicated specialities like intensive medicine, podiatry or palliative medicine.
work using GUM has shown that it can provide a solid basis for natural language generation capabilities where domain organization is insulated from the details of its linguistic realization . Using GUM as an interface level therefore ensures that we do not have to import linguistically motivated distinctions into our domain ontology in order to support natural language generation. Doing so would compromise the domain model considerably and is generally recognized as a violation of the desirable modularity of a complete system (cf., for example, the critique of such a violation in the LILOG project). The second reason is that previous work on developing multilingual linguistic resources for natural language generation has shown that such work can be significantly sped up if the linking mappings that are necessary between semantic representations and grammatical form can be largely reused. GUM allows this by providing a fixed anchor that is sufficiently general as to require only minor variations across languages. It is not necessary to adopt an interlingual position, but it is still possible to minimize the language-specific, idiosyncratic aspects of the semantic description.
Ontologies play a decisive role in the development of the Semantic Web, since they are able to model the knowledge of a specific domain in a machine-readable way. However, the need to provide multilinguality to ontologies poses new challenges in Ontology Engineering research. In this paper we attempt to offer an overview of the available strategies for the localization of lexical resources and ontologies. The detailed steps in the localization of the multilingual lexicon EuroWordNet, the multilingual ontology GENOMA-KB, and the ontology translation software LabelTranslator are presented with the aim of illustrating three different localization approaches, their main characteristics and their limitations.
In Table 3 we show the results achieved by the prototype in each experiment. The values are organized by target language. All the percentages of adequacy and fluency shown in this table correspond to those translations rated with a value greater than 4. The experimental results show that our system is a good approach to enhancing the linguistic expressivity of existing ontologies. For example, on average our system suggests the correct translation 72% of the time. Also, the recall values suggest that a high percentage of correct translations are part of the final translations shown to the user in the semi-automatic operation mode. Moreover, the results obtained for each metric help us to analyze which components need improvement. The main limitations discovered are:
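The rating criterion above (counting only translations scored above 4) can be sketched as follows; the ratings below are made-up toy data, not the paper's results:

```python
# Toy illustration: fraction of translations whose adequacy/fluency rating
# exceeds 4 on a 5-point scale (i.e., translations rated 5).
ratings = [5, 4, 5, 3, 5, 4, 5, 5, 2, 5]   # invented example ratings

def share_above(scores, threshold=4):
    """Proportion of scores strictly greater than the threshold."""
    return sum(s > threshold for s in scores) / len(scores)

print(share_above(ratings))  # 0.6
```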
Abstract. This demo proposal aims at providing support for the localization of ontologies and, as a result, at obtaining multilingual ontologies. We briefly present an advanced version of LabelTranslator, our system to localize ontology terms in different natural languages. The current version of the system differs from previous works reported in [1, 2] in that it relies on a modular approach to store the linguistic information associated to ontology terms. Additionally, it uses a synchronization method to keep the conceptual and linguistic information updated.
In the environmental domain, the well-known AGROVOC thesaurus is used to develop the Agricultural Ontology Service (AOS) project (AGROVOC). AGROVOC is a multilingual thesaurus designed to cover the terminology of all subject fields in agriculture, forestry, fisheries, food and several other environmental domains (environmental quality, pollution, etc.). As presented in (AGROVOC), “it consists of words or expressions (terms), in different languages and organized in relationships (e.g. ‘broader’, ‘narrower’, and ‘related’), used to identify or search resources”. AGROVOC was developed by the FAO and the Commission of the European Communities in the early 1980s. It is an excellent example of a linguistic ontology resulting from a terminological agreement within a community. The terms of AGROVOC can be used to reference document contents (Wildemann et al. 2004) or to find the degree of similarity between several words corresponding to the same idea. AGROVOC is available in the following languages: English, French, Spanish, Arabic, Chinese, Portuguese, Czech, Thai, Japanese, Lao, Hungarian, Slovak, German, Italian, Polish, Farsi (Persian), Hindi, Telugu, Moldavian [see http://www.fao.org/aims/tools_thes.jsp for more detail].
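The 'broader'/'narrower' navigation that the quotation describes can be sketched with a toy term hierarchy (the terms below are illustrative, not actual AGROVOC data):

```python
# Toy thesaurus navigation in the style described for AGROVOC:
# 'broader' links point upward; 'narrower' is derived as its inverse.
broader = {
    "maize": "cereals",
    "cereals": "plant products",
}

narrower = {}
for term, b in broader.items():
    narrower.setdefault(b, []).append(term)

def ancestors(term):
    """Follow 'broader' links up to the top of the hierarchy."""
    chain = []
    while term in broader:
        term = broader[term]
        chain.append(term)
    return chain

print(ancestors("maize"))   # ['cereals', 'plant products']
print(narrower["cereals"])  # ['maize']
```

A search system can use such chains to broaden a query ("maize" → also match "cereals") or narrow it via the inverse links.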
Rural tourism, whilst it has the potential to provide significant benefits to rural communities, can, if managed poorly, negatively impact the socio-economic sustainability of townships. Some of the most commonly reported negative aspects of rural tourism include traffic congestion, parking problems, rising house prices, disturbance and litter (Page and Connell, 2006). In order for rural tourism to be beneficial it needs to be managed appropriately, balancing the economic benefits with the conservation of the environment and the needs of the community (Philips, 2003). The small settlements of Val d’Orcia and San Gimignano in the Siena province of Italy have leveraged locality very effectively to develop a thriving economy based on tourism; however, both need to ensure that the tourism industry is sustainable and does not lead to the social and ecological degradation of the local area. Daylesford and Castlemaine in Victoria, Australia, are also thriving tourist destinations. In each of the four settlements, vibrant and successful industries were created using the strengths of the local area. It was shown that the creation of successful industries, often in conjunction with other unique characteristics or assets of an area, is a major drawcard for tourists. Subsequently, the benefit to the rural settlement is twofold, with both the industry and the tourism generated as a result of the industry contributing to the socio-economic sustainability of the area (Horan et al., 2013). Other key factors for a successful tourist industry, which each of the case studies possessed, included a unique identity and renown for it, the development of robust industries and services often unique to the area, and innovative community and government promotion of the area.
ontologies. Interest in multilinguality issues is growing within the scientific community from various perspectives: multilingual information retrieval, query answering systems, machine translation, etc. OntoSelect , an online ontology library that registers ontologies published on the web in RDF(S), DAML and OWL formats, reports the existence of 36 multilingual ontologies out of the total of 1420 ontologies that it contains, i.e., 2.5%. Nevertheless, and although this number is expected to rise in the immediate future, multilinguality in ontologies has not been deeply analyzed from a conceptual perspective, and current solutions to localize ontologies have been applied ad hoc in each specific case. Moreover, we have been able to observe that most of the ontologies which contain multilingual labels lack consistency in the languages other than the original language of the resource, which is English in most cases; i.e., not all concepts in multilingual ontologies have lexicalizations in all the languages the ontology lexicalizations cover. The Ontology Engineering Group (OEG) at the Universidad Politécnica de
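The consistency gap just described (concepts without lexicalizations in every covered language) can be checked mechanically; the following sketch uses invented concept ids and labels:

```python
# Toy consistency check: which concepts lack a label in each language
# the ontology claims to cover? All data below is illustrative.
labels = {
    "c1": {"en": "river", "es": "río", "de": "Fluss"},
    "c2": {"en": "basin", "es": "cuenca"},   # missing the German label
    "c3": {"en": "flood"},                   # only the original English label
}
languages = {"en", "es", "de"}               # languages the ontology covers

missing = {c: sorted(languages - set(ls))
           for c, ls in labels.items()
           if languages - set(ls)}

print(missing)  # {'c2': ['de'], 'c3': ['de', 'es']}
```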
This paper has presented a text normalization module to be integrated in a fully trainable text-to-speech conversion system, and its application to number transcription. The proposed text normalization module is based on statistical machine translation techniques. This module is composed of a tokenizer for splitting the text input into a token graph, a phrase-based translation module and a post-processing module for removing some tokens. This architecture has been evaluated for number transcription in English, Spanish and Romanian. For all the languages, the performance achieved has been very good, especially for numbers not including decimals. When increasing the amount of data used for training the system, the results improve. Finally, it is necessary to comment that system tuning, such as the alignment of the token translator, must be adapted to the language in order to get the best results. Compared to previous works, for example in , where the authors compare the language-dependent (language-specific) rule-based approach with SMT and suggest post-correcting the results of the LS rule-based system by applying SMT, this paper uses SMT directly, without any rules or language-specific interventions. The system, at the end, only performs minor post-corrections on a very small amount of data (e.g. 0.1%). In , for a larger training dataset (e.g. 3000 sentences) they obtain a BLEU of 94.4, while in these experiments, for the smallest training set of 200 sentences, the BLEU is 95.2 (RO), 97.3 (ES) and 98.3 (EN). In , for SMS and Twitter messages, the BLEU is 99.2 for a larger training set of 90,000 sentences.
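The three-stage architecture summarized above (tokenizer, phrase-based translation, post-processing) can be sketched as follows; the tiny lookup table stands in for the trained phrase-based model and is purely illustrative:

```python
# Architectural sketch of the pipeline: tokenize -> translate tokens ->
# post-process (drop deleted tokens). The table below is a toy stand-in
# for a learned phrase table, not the paper's trained model.
import re

TOKEN_TABLE = {"25": "twenty five", "3": "three", ",": ""}

def tokenize(text):
    """Split input into digit runs, words and punctuation tokens."""
    return re.findall(r"\d+|\w+|[^\w\s]", text)

def translate_tokens(tokens):
    """Map each token through the (toy) phrase table; pass others through."""
    return [TOKEN_TABLE.get(t, t) for t in tokens]

def post_process(tokens):
    """Remove tokens deleted by the translation step and rejoin."""
    return " ".join(t for t in tokens if t)

print(post_process(translate_tokens(tokenize("room 25, floor 3"))))
# room twenty five floor three
```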
The most important finding reported in the literature referred to the dramatic changes in lifestyle that women, and those around them, must endure. As mentioned previously, women with BC are often hindered from performing their normal activities, particularly those related to their family dynamics. As a consequence, roles within the household must be redistributed (e.g., childcare for small and adolescent children and caregiving for family members who are ill or disabled). BC survivors, therefore, lose their autonomy and become dependent on the decisions of others. Erci and Karabulut observed that patients perceived a need for physical and practical support to perform a number of activities after leaving the hospital and during treatment [41]. Furthermore, according to Karaöz et al.,
The challenge is formidable for language teachers and schools (Vez, 2008: 2-3). First of all, they are faced with young people whose learning experiences succeed one another without ever coalescing to form a whole, and who play several roles and live in several time frames. Secondly, schools are faced with accelerating loss of community, which is weakening reference points both spatial and temporal (spatial, because the new communications media are bringing the distant close; temporal, because the collective memory based on the things that people have shaped and lived through together is being lost, scattered and fragmented into individual or group memories). This loss of community also leads to a break with the reality principle, as people surrender to the wish to follow their own urges and instincts. Thirdly, schools are faced with ‘virtualisation’, as the information networks detach themselves from human experience, with multimedia manufacturing an alternative reality, and the illustrated press increasingly relying on computer-generated images, rather than straight photographs. Lastly, schools are faced with the new emphasis on self-image, self-development and freedom of the individual, which disconnects people from group projects.
Several key indicators are measured in the BECA example: cold water, hot water, and heating. In order to capture these indicators and energy consumption in tenancies, we have reused the Semantic Sensor Network (SSN) ontology. The key class in this ontology is the ssn:Observation class. Time periods for the observation are represented with the dul:TimeInterval class from the DUL ontology, while the observed value of the consumption is modeled with the ssn:SensorOutput and ssn:ObservationValue classes. To capture the specific indicator to which the consumption relates, the ssn:Property class from the SSN ontology and the ero:UsefulEnergy class from the Energy Resource Ontology are used, and several instances have been introduced (one for each indicator). For each indicator and measured value, the measurement unit is captured with the mo:Unit_of_measure class from the Units of Measure ontology.
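As a rough illustration, one consumption observation wired to the classes named above might be assembled as follows (plain Python; the property names, instance identifiers, value and dates are assumptions for the sketch, not taken from the BECA data):

```python
# Hedged sketch of a single hot-water consumption observation using the
# class names from the text; property names and ids are illustrative.
observation = {
    "@type": "ssn:Observation",
    "observedProperty": "ero:UsefulEnergy_hotWater",   # assumed indicator instance
    "result": {
        "@type": "ssn:SensorOutput",
        "value": {
            "@type": "ssn:ObservationValue",
            "amount": 12.4,                            # invented reading
            "unit": "mo:cubic_metre",                  # assumed unit instance
        },
    },
    "interval": {
        "@type": "dul:TimeInterval",
        "begins": "2015-01-01",
        "ends": "2015-01-31",
    },
}

print(observation["@type"])  # ssn:Observation
```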
This lack of privacy results in a loss of user confidence: users stop accessing certain services for fear that personal information, which has considerable value for them, will be disclosed or used maliciously. This is confirmed by Teltzrow , who reports that 64% of web users have at some time declined to access a web site, or to buy something from it, because they did not know how their information would be used. Likewise, 53% of users do not trust commercial web sites that gather data, 66% of them do not register on online sites for fear that their information may be used inappropriately, and 40% of them falsify data when registering online .
A convenience sample was obtained from 80 family members of patients diagnosed with BPD, all literate and over the age of 18, who considered themselves the IPC and agreed to participate in the study. According to Nunnally’s (1991) recommendations, the minimal sample size needed to evaluate the internal consistency of each subscale of the instrument was calculated by multiplying the number of items in the area of Knowledge/information, the subscale with the most items (nine), by five participants. It can therefore be assumed that the sample size was suitable for psychometric purposes, since it included at least 45 participants.
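The sample-size calculation above is a simple product and can be checked directly:

```python
# Nunnally's rule as applied in the text: five participants per item,
# using the largest subscale (nine items), against the sample of 80.
items_largest_subscale = 9
participants_per_item = 5

minimum_sample = items_largest_subscale * participants_per_item
print(minimum_sample)          # 45
print(80 >= minimum_sample)    # True
```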