Let us briefly discuss the role of these properties. Restricted inclusion ensures that non-defeasible facts can be ontologically understood as empty arguments. Cumulativity allows us to keep any argument obtained from a theory Γ as an ‘intermediate proof’ (lemma) to be used in building more complex arguments. Horn supraclassicality indicates that every conclusion that follows via SLD can be considered a special form of argument (namely, an empty argument), whereas Horn right weakening ensures that strong rules preserve the intuitive semantics of a Horn rule (a strong rule y ← x makes every argument A for x also an argument for y). Finally, subclassical cumulativity indicates that two theories Γ and Γ′
The Semantic Web is a project intended to create a universal medium for information exchange by giving semantics to the content of documents on the Web through the use of ontology definitions. Problems in modelling common-sense reasoning (such as reasoning with uncertainty or with incomplete and potentially inconsistent information) are also present when defining ontologies. In recent years, defeasible argumentation has succeeded as an approach to formalizing such common-sense reasoning. Agents operating in multi-agent systems in the context of the Semantic Web need to interact with each other in order to achieve the goals stated by their users. In this paper we propose an XML-based language named XDeLP for ontology interchange among agents on the Web.
The field of machine learning (ML) is concerned with the question of how to construct algorithms that automatically improve with experience. In recent years many successful ML applications have been developed, such as data mining programs, information-filtering systems, etc. Although ML algorithms allow the detection and extraction of interesting patterns in data for several kinds of problems, most of these algorithms are based on quantitative reasoning, as they rely on training data in order to infer so-called target functions.
This paper is motivated by extending the original notion of label in order to incorporate probabilistic reasoning in the LDS AR framework. The success of argumentation-based approaches is partly due to the sound setting they provide for qualitative reasoning. Numeric attributes, on the other hand, offer a useful source of information for quantitative reasoning in several knowledge domains. We think that combining both kinds of reasoning into a single argumentation framework would be highly desirable.
The Semantic Web (SW) is a vision of the Web where resources have precise meaning defined in terms of ontologies. The Web Ontology Language (OWL), whose semantics is based on Description Logics, is the de facto standard for the SW. Agents in the SW are supposed to reason over web resources by using standard reasoning systems, thus being able to compute an implicit hierarchy of the concepts defined in an ontology and then check the membership of individuals in those concepts. Over the last few years an alternative approach to reasoning with ontologies, called Description Logic Programming (DLP), has gained interest. The DLP approach relies on translating DL ontologies into the language of logic programming, so that standard Prolog environments can be used to reason over them.
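As a rough illustration of the DLP idea, the following sketch translates a few simple DL axiom shapes into Prolog-style rules. The axiom forms and predicate names here are our own illustrative assumptions; the actual DLP fragment is defined precisely in the literature and covers considerably more.

```python
# Illustrative sketch only: a few DL axiom shapes mapped to
# logic-programming rules, as in the DLP approach.

def translate(axiom):
    """Translate a simple DL axiom (given as a tuple) into a
    Prolog-style rule string."""
    kind = axiom[0]
    if kind == "subclass":          # C ⊑ D   →   d(X) :- c(X).
        _, c, d = axiom
        return f"{d}(X) :- {c}(X)."
    if kind == "domain":            # ∃R.⊤ ⊑ C  →  c(X) :- r(X, Y).
        _, r, c = axiom
        return f"{c}(X) :- {r}(X, Y)."
    if kind == "range":             # range(R) = C  →  c(Y) :- r(X, Y).
        _, r, c = axiom
        return f"{c}(Y) :- {r}(X, Y)."
    raise ValueError(f"axiom form outside this sketch: {kind}")
```

For instance, `translate(("subclass", "student", "person"))` yields the Horn rule `person(X) :- student(X).`, which any Prolog environment can use directly.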
Most applications for rational agents involve interacting with a dynamic world. To properly achieve this interaction, the agent must continuously adapt to the changes in its environment. In this context, perception is a mandatory issue. We have tailored the DeLP system, incorporating perception abilities, into a new formalism called Observation-based DeLP (ODeLP). The language of ODeLP is composed of a set of observations Ψ, encoding the knowledge the agent has about the world, and a set of defeasible rules ∆, representing ways of extending the observations with tentative information (i.e., information that can be used if nothing is posed against it). The ODeLP program P structuring the knowledge of the agent is able to express the following doxastic attitudes with respect to a query q:
• Analysis addresses the identification of essential features and the systemic description of interrelationships among them: how things work. In terms of stated objectives, analysis may also be employed to address why a system is not working or how it might be made to work “better.”
The third phase appears to be more challenging than initially expected. Note that most of the standards being sanctioned by the W3C closely match the point of view of the Description Logic community when it comes to representing knowledge and reasoning about it. In fact, one of the OWL dialects, OWL-DL, is directly equivalent to a well-known description logic. DeLP, in contrast, follows a more classical approach, unfortunately not entirely compatible. This accounts for the difficulties faced when trying to incorporate case-based reasoning into the Semantic Web (or any rule-based knowledge representation, for that matter). As has been discussed elsewhere, some aspects impossible to capture under one approach are easily modelled under the other, and vice versa. We believe, in turn, that both approaches can coexist in harmony: let us keep the stack of layers we already have (and their corresponding languages), and use those modelling tools to express DeLP programs, argument structures, dialectical trees, etc. The ontology under development constitutes the first step in this direction.
The study and development of argumentative frameworks has received special attention in this regard, since argumentation constitutes a confluence point for characterizing traditional approaches to non-monotonic reasoning systems, such as Gelfond's extended logic programming and Reiter's default logic [BDKT97]. In that context, Labeled Deductive Systems (LDS) [Gab96] emerged as an interesting alternative that provides a flexible methodology to formalize complex logical systems.
As we reviewed in previous papers (2006b and 2014), we have found that terminological variation is also rather frequent in this field. It should not be forgotten that the current methodologies within the scope of this paper are centered on a textual product, the translated text. Yet many authors find common ground in setting up a dichotomy in TQA methods. On the one hand, we have those methods which analyse microlinguistic features at sentence level. They are grounded in the notion of error and aim at pinpointing errors by comparison against a preset typology. Errors included in the typologies have an allotted number of discount points, according to their relevance, that will be deducted from the initial bonus points from which every translation departs. Williams (1989) refers to these as quantitative methods; Waddington (2000) calls them analytic, and Colina (2008, 2009) refers to them as anecdotal or experimental. Generally speaking, these types of methods include SICAL, SAE, and LISA, amongst others.
A production system program consists of a collection of If-Then statements called productions. The data operated on by the productions are held in a global database called working memory. By convention, the If part of a production is called its left-hand side (LHS), and the Then part its right-hand side (RHS). The LHS of a production is composed of a sequence of patterns, that is, a sequence of partial descriptions of working memory elements. When a pattern P describes an element E, P is said to match E. The RHS of a production consists of an unconditional sequence of actions, and some of these actions may change the contents of the working memory. The interpreter evaluates the LHSs of the productions to determine which are satisfied given the current contents of the working memory, and performs the actions of the selected productions.
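The recognize-act cycle just described can be sketched as follows. This is a minimal toy under our own assumptions (working memory as a set of tuples, `None` as a wildcard in patterns, and a trivial conflict-resolution strategy that fires the first satisfied production that actually changes working memory), not any particular production-system engine:

```python
def matches(pattern, element):
    """A pattern matches an element of the same arity if every
    non-None field of the pattern equals the corresponding field."""
    return len(pattern) == len(element) and all(
        p is None or p == e for p, e in zip(pattern, element))

def run(productions, wm):
    """Recognize-act cycle: repeatedly fire the first production whose
    LHS patterns all match some working-memory element and whose RHS
    changes working memory; stop when no production changes anything."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in productions:
            if all(any(matches(p, e) for e in wm) for p in lhs):
                before = set(wm)
                rhs(wm)                     # RHS may modify working memory
                if set(wm) != before:
                    changed = True
                    break
    return wm
```

For example, two productions ("feathers implies bird", "bird implies flies") run to quiescence from the single observation `("has", "feathers")`, leaving `("is", "bird")` and `("can", "fly")` in working memory.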
Non-relevant arguments for A are those contextual arguments that are unable to prevent the inclusion of A in the grounded extension of the context. This is important in several scenarios. Following the introductory analogy of justice trials, non-relevant arguments are the main target of lawyers. Such an argument may be viewed as a useless argument put forward by a member of the jury: it is useless because, even when it defeats an argument in the case, it is itself already defeated by an argument in that case. These arguments are important in different ways. For example, a defense lawyer may want to introduce enough arguments to defeat any contextual argument that defeats an argument he has put forward; in that sense, he is trying to maximize the number of non-relevant contextual arguments. On the other hand, he also wants to avoid the defeat of jurors' arguments that defeat arguments put forward by the district attorney; in this sense, he is trying to minimize the number of non-relevant arguments. Of course, lawyers do not know any of the contextual arguments a priori. All they can do is produce a set of arguments good enough to face any court.
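The notion can be made concrete with a small sketch: compute the grounded extension as the least fixed point of the characteristic function, then test whether a contextual argument is non-relevant for A, i.e., whether A remains in the grounded extension once the contextual argument and its attacks are added. The argument names below are illustrative only.

```python
def grounded(args, attacks):
    """Grounded extension: least fixed point of F(S) = {a : every
    attacker of a is attacked by some member of S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    S = set()
    while True:
        new = {a for a in args
               if all(any((s, b) in attacks for s in S)
                      for b in attackers[a])}
        if new == S:
            return S
        S = new

def non_relevant(x, x_attacks, args, attacks, a):
    """x is non-relevant for a if adding x (with its attacks) to the
    context leaves a inside the grounded extension."""
    return a in grounded(args | {x}, attacks | x_attacks)
```

For instance, with arguments A, B, C and attacks B→A, C→B, the grounded extension is {A, C}; a contextual argument D attacking A is non-relevant when C also attacks D, since D is already defeated within the case.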
The increasing growth of documents available on the World Wide Web has resulted in a difficult situation for end-users who search for a particular piece of information. A common approach to facilitating search is to perform document classification first, learning the topology of a document base as a set of clusters. Clusters are labeled as relevant or irrelevant, and determining whether a new document belongs to a given cluster can help determine whether that document corresponds to the user's information needs.
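A minimal sketch of this cluster-based filtering, under our own simplifying assumptions (bag-of-words vectors, cosine similarity, and a nearest-centroid rule deciding cluster membership; the function names are illustrative):

```python
from collections import Counter
import math

def vec(text):
    """Bag-of-words vector of a document."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(docs):
    c = Counter()
    for d in docs:
        c.update(vec(d))
    return c

def relevant(new_doc, clusters):
    """clusters: {label (True=relevant): list of docs}. Return the
    label of the cluster whose centroid is nearest to the new doc."""
    best = max(clusters.items(),
               key=lambda kv: cosine(vec(new_doc), centroid(kv[1])))
    return best[0]
```

A new document is then routed to its nearest cluster, and the cluster's relevant/irrelevant label is returned as the verdict on the user's information needs.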
Example Dialogue 3 is about deciding on the content of a self-assessment activity that forms part of the course. It starts with the presentation of a simple argument by A (lines 1-4), followed by a simple counter-argument by J (lines 5-7). F goes on with a new, explanatory presentation (lines 8-13), on which J comments and agrees. After this short “grounding break”, A gets into the game again with a long contribution (lines 20-38) which lasts until the end of the sequence. His contribution is not by nature an argument move, in the way we defined it, but it serves to support A's initial presentation, and moreover with “success”, as it receives agreement from both J and F (line 39). However, A's “success” is something different from persuasion; as can be seen in lines 34 and 36, J, who had initially expressed an antithesis, is now adding to A's proposal. This change might be due to F's intervention (lines 8-13), which was close to J's viewpoint but at the same time helped her reflect on it in a different way (lines 15 and 17). This is an example of argumentation as consensus or co-construction.
On the other hand, other studies have advocated the development of reading strategies as the most suitable way to improve language grammar and vocabulary. In terms of vocabulary, for example, Pulido (2009), in How involved are American L2 learners of Spanish in lexical input processing tasks during reading?, examined how learners' reading proficiency and background knowledge affected their L2 lexical input processing and retention, using a questionnaire on self-reported strategies. Results revealed that greater general reading skills and familiarity with a passage topic led to more successful lexical inference. Certainly, reading is a valid source of language input that enhances retention and transfer; the task teachers have is to make it motivating and attractive enough for students.
The book under review has now reached its fourth edition. In its first edition (1994), it was titled simply Research design: Qualitative and quantitative approaches. From the second edition (2003) onward, it adopted the definitive title Research design: Qualitative, quantitative, and mixed methods approaches, which was kept in the third (2009) and in the present edition (2014). Regarding the chapters, we see that there are no changes between the third and fourth editions, although there were between the second and third. Likewise, from the second edition onward, considerations related to research ethics were introduced, as well as the novelty of mixed methods. The decade between the first two editions makes it easier to identify the period in which mixed methods emerged.
If we analyze the problem from the other perspective, we can think of this line of investigation as a proposal to change the reasoning mechanism of the event and/or situation calculus: replacing the current one, based on circumscription, with a more open mechanism such as argumentation. Circumscription is appreciated for being close to logic programming, but it forces us to consider every abnormal situation. Any time we want to infer something we must ask about its normality, whereas default reasoning relieves us of that. Of course the final result is the same, but in the version we are working on we will give more freedom to the representation, and we will also offer the possibility of discussing the possible results. This last point is important when we deal with agents and negotiation.
⟨A, H⟩ is a minimal non-contradictory set of ground defeasible clauses A of ∆ that allows a ground literal H to be derived, possibly using ground rules of Π. Since arguments may be in conflict (a concept captured in terms of a logical contradiction), an attack relationship between arguments can be defined. A criterion is usually defined to decide which of two conflicting arguments is preferred. In order to determine whether a given argument A is ultimately undefeated (or warranted), a dialectical process is recursively carried out. Given a DeLP program P and a query H, the final answer to H with respect to P takes such dialectical analysis into account. The answer to a query can be one of: yes, no, undecided, or unknown.
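The dialectical analysis can be sketched at a very high level: an argument is warranted when every one of its defeaters is itself non-warranted, and the four answers follow from the warrant status of arguments for H and for its complement. This is a simplified toy (it replaces DeLP's acceptability conditions on argumentation lines with simple non-repetition), and the argument names are illustrative:

```python
def warranted(arg, defeaters, line=()):
    """arg is warranted iff every defeater of arg is itself defeated
    (non-warranted). `line` records arguments already used, to avoid
    circular argumentation lines -- a crude stand-in for DeLP's
    acceptability conditions."""
    return all(not warranted(d, defeaters, line + (arg,))
               for d in defeaters.get(arg, ()) if d not in line)

def answer(query_arg, neg_arg, defeaters):
    """Map the warrant status of an argument for H (query_arg) and one
    for its complement (neg_arg) to a DeLP-style answer; None means no
    argument could be built."""
    if query_arg and warranted(query_arg, defeaters):
        return "yes"
    if neg_arg and warranted(neg_arg, defeaters):
        return "no"
    if query_arg or neg_arg:
        return "undecided"
    return "unknown"
```

For instance, if A1 supports H, B1 defeats A1, and C1 defeats B1, then C1 reinstates A1: B1 is defeated, A1 is warranted, and the answer to H is yes.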