In asymmetric information games, the referent is unobservable from the receiver’s vantage point, so the receiver uses the literal meaning conveyed by the message to ascertain the intended action. The sequence is as follows. First, the priors P about the possible moves W are given by an equilibrium of the game without communication, which in most examples below is the most diffuse mixed-strategy Nash equilibrium. Second, the sender sends a message m ∈ M about W if unilateral communication is possible. Third, the receiver updates its priors through the decoding and inferential steps described shortly. Fourth, the receiver picks A and the sender picks W. Finally, u_i : W × A → ℝ is the utility function of player i. If W and A are finite sets, a finite set of messages M suffices to communicate. Strategies and beliefs are given by a tuple (σ, ρ, µ), where:
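The receiver's decoding and inferential steps described above amount to Bayesian updating: the posterior belief over moves is proportional to the prior times the probability that each move produces the observed message. A minimal sketch, in which the prior, the sender rule sigma and the message itself are illustrative assumptions rather than objects defined in the text:

```python
# Bayesian updating of the receiver's prior P over moves W,
# given a message m and a (hypothetical) sender message rule sigma(m | w).

def posterior(prior, sigma, m):
    """Return mu(w | m), proportional to prior[w] * sigma[w][m]."""
    unnorm = {w: prior[w] * sigma[w].get(m, 0.0) for w in prior}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

# Illustrative two-move example: the sender says "high" more often when w = "H".
prior = {"H": 0.5, "L": 0.5}
sigma = {"H": {"high": 0.9, "low": 0.1},
         "L": {"high": 0.2, "low": 0.8}}

mu = posterior(prior, sigma, "high")
```

After observing "high", the receiver shifts probability mass toward the move that makes that message more likely.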
Investment decisions can be associated with options, and standard models for their valuation assume that exercise is simultaneous and uninformative, and that agents know the true values of the parameters entering the formula and can therefore decide whether or not to invest 5 . However, in many situations involving real assets, agents hold imperfect or private knowledge of the relevant information, so their estimates of value may differ. In this context of imperfect information, investors can react by calibrating their expectations in light of what they see other agents do. This happens whenever decisions must be taken under uncertainty: agents tend to watch what others do, so the decisions made by participants release private information to the market and may change expectations. Herd behavior is usually associated with such environments, where informational cascades may obtain. In investment and abandonment decisions regarding real assets (real options), participants must take into account the decisions made by others, given the uncertainty about the true values of the parameters (as opposed to financial options, whose parameters may be well known, in real options the parameters have to be estimated), and this can give rise to deviations from the optimal conditions of exercise.
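When the option analogy is taken literally, the "standard model" alluded to above is typically a Black-Scholes-style valuation in which all parameters, volatility in particular, are assumed to be known exactly. A minimal sketch with illustrative numbers:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes value of a European call: the textbook benchmark
    in which sigma and the other parameters are known with certainty."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: project value 100, investment cost 100,
# 5% rate, 20% volatility, one year until the decision.
value = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

Under imperfect information, agents would disagree precisely about inputs such as `sigma`, which is what opens the door to the herding effects discussed above.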
The first scenario considers the minimization of the transmit power in a BC where the BS is able to allocate several streams to each user. This is an extension of the model considered in Chapter 5, where the Multiple-Input Multiple-Output (MIMO) BC was considered with only one stream per user. Such a scenario is interesting if the objective of the MIMO feature is to increase reliability, since the probability of being affected by fading in all the independent paths at the same time is low. Compare this situation with that of multiplexing several streams for every user, thereby taking advantage of MIMO spatial multiplexing to increase the speed of the communication link. Allowing multiple streams implies an important change in the system model, since the dimensions of both the transmit and receive filters have to be adapted accordingly. This extension therefore has a significant impact on the problem formulation, given that we can choose different per-stream target rates without changing the per-user target rate. Thus, we end up with a nested optimization problem in which we have to find not only the optimal precoders but also the optimal per-stream rate constraints.
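The nested structure described above can be illustrated with a toy version of the inner/outer split: the outer problem searches over per-stream rate allocations that add up to the per-user target rate, and the inner problem computes the transmit power each allocation requires. This sketch uses a scalar Shannon-rate power model per stream and illustrative gains, not the precoder design of the chapter:

```python
from itertools import product

def power_for_rate(r, g):
    """Inner quantity: power needed for rate r (bits/channel use) on a
    scalar stream with gain g, from r = log2(1 + g*p)."""
    return (2.0**r - 1.0) / g

def best_split(R_user, gains, step=0.25):
    """Outer problem: enumerate per-stream rate splits summing to the
    per-user target R_user; keep the split with the smallest total power."""
    grid = [i * step for i in range(int(R_user / step) + 1)]
    best_power, best_rates = float("inf"), None
    for split in product(grid, repeat=len(gains)):
        if abs(sum(split) - R_user) > 1e-9:
            continue
        p = sum(power_for_rate(r, g) for r, g in zip(split, gains))
        if p < best_power:
            best_power, best_rates = p, split
    return best_power, best_rates

# Two streams for one user with illustrative gains; per-user target of 4 bits.
total_power, rates = best_split(4.0, gains=[2.0, 0.5])
```

As expected, the minimum-power split loads more rate onto the stronger stream while the per-user target stays fixed.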
Social and economic networks play an important role in many situations. Contributions to microeconomic theory have used network structures to formalize such diverse issues as the internal organization of firms, employment search and the structure of airline routes 1 . Most of the models of network formation in these scenarios share a common characteristic: perfect information. However, many real-world situations suggest that agents who interact in a network may ignore relevant characteristics that affect the final outcome of their interaction. For example, consider a buyer-seller network 2 in which agents lack important information needed to estimate the value of their commercial relationships, for example, the quality of the products exchanged. One can also think of a network of contacts in a job search 3 . In this case, agents may not be able to observe to whom the links of their neighbors lead, and may therefore be unable to estimate the value of a personal contact as a means of finding a job. In general, social networks are a clear example of the relevance of imperfect information in these frameworks; all of us are involved in a set of interpersonal connections in which we are unaware not only of relevant features of our neighbors but also of the links of a large part of them. These examples suggest a need to extend the theoretical framework of network formation to allow for limited information.
There is a growing literature addressing the impacts of the massive penetration of intermittent power generation on prices and investment in restructured electricity markets (see (6), (20), (13), and (2)). Of these references, (6) is perhaps the closest to our work. By extending the theory of peak-load pricing subject to (supply/demand) uncertainty, it is shown (Proposition 5) that equilibrium investment is socially optimal provided electricity prices equal marginal outage costs during events of scarcity. The market fails to deliver appropriate investment levels when price caps are set at values below marginal outage costs. A simulation model of renewable energy investments is used as evidence for the claim that the incorporation of renewable capacity (along with real-time pricing) reduces the average cost of electricity. This is due to energy consumption shifting from peak to off-peak hours. Renewable capacity should therefore be seen as a substitute for baseload technologies and a complement to peak generation technologies. The interaction of policies directed at fostering renewables in power markets with capacity adequacy markets is also studied in (2). The equilibrium investment mix in “peak” versus “baseload” conventional technologies is computed assuming investment in wind power is exogenous. Using data from the western U.S., the author concludes that as the level of wind penetration increases, the equilibrium investment mix of conventional capacity shifts towards less “baseload” and more “peaking” capacity. The simulation-based conclusions reported in (20) for the German electricity market are similar. Finally, in (13), a numerical testbed of a two-stage model of Cournot competition (with no entry) is used to analyze the effects of increased renewable capacity in the Israeli electricity market. The authors report that in certain cases, average market prices may increase with increased renewable capacity.
ABSTRACT: The behavior during a basketball game, as in other team sports, apparently shows tremendous variability, manifested in both individual and collective ways. However, when a significant number of games are studied, regularities emerge beneath the unpredictability that characterizes the game. The degree of complexity of the game is not stable: patterns change throughout the game, but the last minute is a completely different reality. Our aim was to test and evaluate the existence of these patterns and their apparent complexity by analyzing scoring and substitution dynamics in NBA games. To this end, we examined the difference between the last minute and the rest of the game using the collected scores (1, 2 and 3 points), substitutions and timeouts. The underlying chaotic behavior of non-linear interactions is inherent in complex systems. The data showed the existence of symmetries and repeated patterns of play during NBA basketball games, except for the last minute, which can be considered a completely different game.
In passing, we note that most work on games with incomplete preferences focuses only on the problem of the existence of equilibrium (cf. Ding (2000), Shafer and Sonnenschein (1975), Yu and Yuan (1998), and the references cited therein). By contrast, our objective here is to obtain operational characterizations of Nash equilibrium sets of such games. In this sense, our paper is closer in spirit to that of Shapley (1959), who characterizes the set of all mixed strategy Nash equilibria in vector-valued two-player zero-sum games. This characterization has been extended by Aumann (1962) to a larger class of matrix games. In particular, we show here that the set of Nash equilibria of any game with incomplete preferences can be characterized in terms of certain derived games with complete preferences. Provided that all players' preferences can be represented by concave functions, we can sharpen this result further; in this case it suffices for the characterization of the equilibrium set to look at games with complete preferences that are derived from the original game by a simple linear procedure. We conclude with a discussion of trembling hand perfect equilibria in games with incomplete preferences.
Non-myopic spatial algorithms Previous work in the class of infinite horizon spatial algorithms is based on the assumption that the environment is static over time. Under this assumption, it suffices to traverse the environment once, while ensuring that the informativeness of the observations made along the path is maximised. Since visiting the same location twice does not result in new information, these algorithms will attempt to avoid this. This is in contrast with our assumption that the environment varies in time as well as space, in which case revisiting locations is a necessary requirement for optimality. Algorithms found in this non-myopic spatial class consist primarily of approximation algorithms for the single-sensor non-adaptive and multi-sensor adaptive settings with energy constraints (e.g. finite battery or mission time). Both works exploit an intuitive property of diminishing returns that is formalised in the notion of submodularity: making an observation leads to a bigger improvement in performance if the sensors have made few observations so far than if they have made many observations. This property holds in a wide range of real-life sensor applications, and is an assumption that our work shares with that of Singh et al. However, apart from solving a different problem (i.e. single traversal vs. continuous patrolling), the solution proposed by Singh et al. also differs algorithmically from ours. While they define a two-step algorithm for computing high quality single traversals through the environment, our solution is a full divide and conquer algorithm. In more detail, in the first step the algorithm of Singh et al. divides the environment into clusters, and computes high-quality paths through these clusters. In the second step, these paths are concatenated to yield the desired traversal. The two steps bear similarity to the first two operations used in our algorithm (Divide and Conquer).
However, our algorithm uses completely different techniques (sequential decision making) for concatenating paths within a single cluster into inﬁnite-length patrols, which, unlike their solution, are recursively applied to increasingly smaller subdivisions of the environment, until the patrolling problem within these subdivisions becomes eﬃciently solvable.
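The diminishing-returns property formalised by submodularity can be illustrated with the classic greedy selection heuristic: each newly added observation location improves the objective by no more than the previous one did. The grid, sensing radius and candidate locations below are illustrative, not taken from the works discussed:

```python
def coverage(chosen, radius=1.5):
    """Submodular objective: number of cells of a 5x5 grid that lie
    within `radius` of at least one chosen observation location."""
    cells = [(x, y) for x in range(5) for y in range(5)]
    covered = {c for c in cells
               for (sx, sy) in chosen
               if (c[0] - sx)**2 + (c[1] - sy)**2 <= radius**2}
    return len(covered)

def greedy(candidates, k):
    """Pick k locations, each maximising the marginal coverage gain."""
    chosen, gains = [], []
    for _ in range(k):
        best = max(candidates, key=lambda c: coverage(chosen + [c]))
        gains.append(coverage(chosen + [best]) - coverage(chosen))
        chosen.append(best)
    return chosen, gains

locations, gains = greedy([(1, 1), (1, 3), (3, 1), (3, 3), (2, 2)], k=3)
# Submodularity guarantees the sequence of marginal gains is non-increasing.
```

The non-increasing gains are exactly the "few observations so far vs. many" effect described above, and they underpin the approximation guarantees of the greedy approach.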
females to use computers. However, the current generation of girls raised with computers as an ambient presence may not consider computers, and games, to be naturally the province of boys. A related problem is that games environments are peopled with violent and stereotyped characters, and the roles game players adopt may require or satisfy the need for aggression and extreme control seeking. Because there is no opportunity for reflection on this ‘behaviour’ during or after the game, aggression and violence are implicitly condoned and indeed seem essential. In a games environment this is not a problem: it is in the transfer of attitudes or beliefs about acceptable behaviour to reality that the difficulty lies. Fortunately, most games players do not transfer their game skills in shooting, physical violence and vicious destruction to their everyday lives. In education, it may be critical to acknowledge the place of games and to discuss these difficult issues without trying to transgress too far into the privacy of the games culture.
considering the surroundings of atoms on the surface and in the interior of a solid. To bring an atom from the interior to the surface, we must either break or distort some bonds, thereby increasing the energy. The surface energy is defined as the increase in energy per unit area of new surface formed. In crystalline solids, the surface energy depends on the crystallographic orientation of the surface: those surfaces that are planes of densest atomic packing are also the planes of lowest surface energy. This is because atoms on these surfaces have fewer of their bonds broken or, equivalently, have a larger number of nearest neighbors within the plane of the surface. Typical values of surface energies of solids range from about 10⁻¹ to 1 J/m². Generally, the stronger the bonding in the crystal, the higher the surface energy.
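The broken-bond picture above can be turned into a back-of-the-envelope estimate. A sketch for a close-packed FCC (111) surface using approximate values for copper; the nearest-neighbor model and all numbers are illustrative, and because it ignores surface relaxation, this kind of estimate typically comes out above the measured range quoted above:

```python
from math import sqrt

# Broken-bond estimate of surface energy for an FCC (111) surface.
E_coh = 3.49       # cohesive energy per atom, eV (approximate, Cu)
Z = 12             # bulk coordination number in FCC
a = 3.615e-10      # lattice parameter, m (Cu)

eps = 2 * E_coh / Z                 # energy per bond, eV (each bond shared by 2 atoms)
n_broken = 3                        # bonds broken per atom on the close-packed (111) face
area_per_atom = sqrt(3) / 4 * a**2  # surface area per atom on (111), m^2

EV = 1.602e-19                      # J per eV
# eps/2 per broken bond because cleaving creates two new surfaces.
gamma = n_broken * (eps / 2) * EV / area_per_atom   # J/m^2
```

The estimate lands at a few J/m², above the typical measured range, which is the expected behavior of the unrelaxed broken-bond model; it nevertheless reproduces the trend that stronger bonding and less densely packed faces give higher surface energy.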
The purpose of financial accounting is to satisfy users’ needs for financial information that is helpful in decision making. Therefore, managers prepare and present financial statements, which represent the main source of information. According to the IASB (1989), the objective of financial statements is to provide useful information about the financial position, performance and changes in financial position of a firm. The usefulness of accounting information has been consistently expressed in the literature by the term “value relevance”, which measures the utility of accounting figures from the perspective of equity valuation (Beisland, 2009). Watts and Zimmerman (1990) described this concept as the “information perspective”, which views financial statements as a provider of information for valuation models. Value relevance reflects the main function of accounting, which is to supply useful information that enables investors to value securities and make rational decisions (Dumontier & Labelle, 1998). The objective of value relevance research is to relate financial statement figures to a measure of the firm’s value and to assess the relation of such information to the determination of value (Dahmash & Qabajeh, 2012). Value relevance measures the ability of financial statements to capture and summarize information that is reflected in firm value (Francis & Schipper, 1999). Under this concept, to be value relevant, accounting information must be associated with the current company value.
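Empirically, value relevance is usually assessed by regressing a measure of firm value on accounting figures, in the spirit of price-level models that relate share price to book value and earnings per share. A minimal sketch on synthetic data; the coefficients and data are fabricated for illustration only, not estimated from any real sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic accounting figures: book value per share and earnings per share.
bvps = rng.uniform(5, 50, n)
eps = rng.uniform(0.5, 5, n)

# Assumed "true" pricing relation plus noise (illustrative only):
# price = 2 + 0.8*BVPS + 4*EPS + error.
price = 2.0 + 0.8 * bvps + 4.0 * eps + rng.normal(0, 2, n)

# OLS fit: how much of the variation in price do accounting numbers explain?
X = np.column_stack([np.ones(n), bvps, eps])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((price - fitted)**2) / np.sum((price - price.mean())**2)
```

In value relevance studies, the R² of such a regression (or changes in it across samples or regimes) is the usual summary of how well accounting information captures firm value.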
From the early stages of this inquiry, the students’ role was different from that of traditional English classes, where their voice and opinion are seldom heard, especially when it comes to deciding which program or approach to take on. Given that the basis of this inquiry is a students’ needs-analysis survey, the subsequent steps of the pedagogical implementation had to be learner-centered, especially under the CLT umbrella, in which students are active participants in their own learning process. “Learners now had to participate in classroom activities that were based on a cooperative rather than individualistic approach to learning. Students had to become comfortable with listening to their peers in group work or pair work tasks, rather than relying on the teacher for a model. They were expected to take on a greater degree of responsibility for their own learning” (Richards, 2006, p. 9).
In the area of Intellectual Capital (IC), when the objective was to measure the difference between market value and theoretical value (Edvinsson & Malone, 1997; Sveiby, 1997), several proposals were formulated for drawing up IC reports (Meritum, 1998, 2002), among others. There is, however, no consensus regarding a standard model of measurement and information for IC (Bronzetti & Veltri, 2013). Studying disclosure on innovation capital by 51 European companies, Bellora and Günther (2012) find that the information is mainly qualitative, non-financial and historical, observing that the phenomenon appears to be European rather than local. In this sense, we should anticipate that this may also be the case in Spain. With regard to information relating specifically to personnel, Kent and Zunker (2013) question the quality of employee information in a sample of Australian listed companies, having found evidence that even when companies have faced previous adverse publicity, they do not include voluntary negative information in the annual report.
There are ways in which traditional educational software is constrained by its purpose and ethos. The aim of such software is to provide information about curriculum subjects, and it may be difficult to engage the user beyond this educationally specific purpose. Entertainment software does not have this constraint; indeed, one of its aims is to expand the experience of the user in new and innovative ways if the software is to be commercially successful. Therefore, entertainment software has had the space and the impetus to experiment with and test new areas of the user experience. Combined with developments in the sophistication and data-handling capabilities of hardware, entertainment software can often offer the user near-immersive experiences which are not time or subject bound:
A possible interpretation of the debates surrounding the well-publicized series of accounting scandals of recent years is that the increased complexity of accounting numbers and the prescriptive rules that have attempted to keep pace with rapid changes in business practices have made accounting numbers more remote from the common knowledge benchmark, depriving them of wider meaning. When meaning is fragmented for want of a common understanding, the bare numbers themselves take on added significance for no other reason than that such numbers are observed by others. When the bare numbers take on such significance, there is potential for abuse. The potential (and temptation) for manipulation and abuse is symptomatic of the erosion of a common understanding of the accounting numbers themselves. In an ideal world, accounting numbers are just a veil and would not matter. That they matter so much is indicative of the imperfections pervading financial markets.
for i ≠ j (an assumption introduced by Kingman in his “house of cards” model), a global Lyapunov function can be found which excludes cyclic behavior and guarantees that all orbits converge to the set of fixed points [1,9]. However, this “nice” behavior is not the general case [1,3,14]. A well-known particular instance of the Replicator-Mutator Dynamics is the widely-used replicator dynamics. The Replicator Dynamics (RD) appears in this framework as the extreme case where replication is perfectly precise, i.e., where i-strategists can only give birth to i-strategists, leaving no room for mutation, error or experimentation. A crucial point in this regard is to note the difference between an evolutionary process without mutation (i.e., the RD or, equivalently, the RMD with µ_ij = 0
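As a concrete illustration of the mutation-free extreme, the replicator dynamics for a two-strategy game can be integrated numerically. The Hawk-Dove payoffs below are a standard textbook choice, not taken from the surrounding text:

```python
# Replicator dynamics x_i' = x_i * ((A x)_i - x.A x) for a 2-strategy game,
# i.e. the RMD with all mutation rates mu_ij (i != j) set to zero.

def replicator(A, x0, dt=0.01, steps=20000):
    """Forward-Euler integration of the replicator equation."""
    x = list(x0)
    for _ in range(steps):
        f = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]  # fitnesses
        phi = sum(x[i] * f[i] for i in range(2))                       # mean fitness
        x = [x[i] + dt * x[i] * (f[i] - phi) for i in range(2)]
    return x

# Hawk-Dove with V = 2, C = 4: interior equilibrium at x_Hawk = V/C = 0.5.
A = [[-1.0, 2.0],
     [0.0, 1.0]]
x = replicator(A, x0=[0.2, 0.8])
```

For this payoff matrix all interior orbits converge to the mixed equilibrium, consistent with the convergence results cited above for the mutation-free case.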
In [Ortner & Graf, 2013], a deterministic model is presented to minimise the hourly costs of the hydro-thermal and open-loop PSHP energy schedules for meeting the residual demand (after subtracting must-run generation), and the reserve and energy schedules for balancing services. The model formulation is based on linear programming in order to obtain, as an endogenous result of the model via the dual variables of the demand constraints, the prices of the reserve capacity (power), of the energy due to the real-time use of the reserves, and of the system demand energy. The balancing services comprise the secondary and tertiary regulation services, considered only as upward regulation reserve (downward regulation reserve is omitted). The case study minimises the costs of the hourly generation portfolio scheduling for system demand and secondary and tertiary balancing services of 2012 in Germany, Austria and in a merged zone of Germany and Austria. In Germany, while hydropower plants provide reserves irrespective of their storage capacity, PSHPs are more committed to participating in balancing services the more storage capacity they have. A similar analysis is presented for Austria. It is important to remark that the PSHPs considered in this work are able to provide neither secondary nor tertiary regulation reserves in pumping mode. The solution of the merged zone does not bring any further important conclusion. In addition, results show a certain correlation between water inflows and the secondary regulation marginal costs in Germany and Austria. In general, when water inflows in both systems are high, secondary regulation marginal costs are low. As the Austrian power system presents a higher relative installed capacity of hydropower technology in its generation portfolio than the German one, the studied correlation is stronger in Austria than in Germany.
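The pricing mechanism described above, energy prices obtained as dual variables of the demand constraints, can be illustrated with a toy merit-order dispatch: in a simple linear program of this kind, the dual of the demand balance reduces to the marginal cost of the last dispatched unit. The unit names and numbers below are illustrative, not taken from the cited model:

```python
def dispatch_and_price(units, demand):
    """Merit-order dispatch: fill demand from the cheapest units first.
    In the LP formulation, the dual variable of the demand-balance
    constraint equals the cost of the marginal (price-setting) unit."""
    schedule, price = {}, None
    for name, cost, cap in sorted(units, key=lambda u: u[1]):
        q = min(cap, demand)
        schedule[name] = q
        demand -= q
        if q > 0:
            price = cost        # last unit actually dispatched sets the price
        if demand <= 0:
            break
    return schedule, price

# Illustrative units: (name, marginal cost in EUR/MWh, capacity in MW).
units = [("hydro", 5.0, 20.0), ("coal", 30.0, 60.0), ("gas", 60.0, 40.0)]
schedule, price = dispatch_and_price(units, demand=70.0)
```

This also makes the inflow correlation above intuitive: abundant cheap hydro pushes the price-setting unit down the merit order, lowering marginal costs.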