R(σ) := ( D(σ; K_1, T_1) − C_1^obs , … , D(σ; K_M, T_M) − C_M^obs )^T ,
where D(σ; K, T) is the solution of Dupire's equation (21). In (Coleman et al., 1999), two possibilities to efficiently compute the Jacobian of R are explored: the use of automatic differentiation (AD) (see (Griewank and Walther, 2008; Coleman and Verma, 1996)) and an approximation of the Jacobian via a secant update formula. He et al. (2006) extended the calibration method of Coleman et al. (1999) to a jump-diffusion model coupled with local volatilities. When applying a classical steepest descent algorithm or quasi-Newton approach, the use of adjoint equations to compute the gradient of f at low cost is proposed in (Achdou and Pironneau, 2005; Egger and Engl, 2005; Loerx et al., 2010, unpublished data). An optimal control framework also using adjoints is applied by Jiang et al. (2003) to recover the local volatility surface. Turinici (2008) chose an SQP method to solve optimization problem (21). Note that the SQP method already needs some second-order information about f, which comes at the cost of solving M + 1 Black-Scholes PDEs. According to Coleman et al. (1999), optimization approaches which do not require the calculation of second-order information typically converge very slowly, so that the additional computational effort to obtain at least some second-order information can pay off in the overall computation time of the optimization routine (see also (Loerx, 2011)). Schulze (2002) developed an inexact Gauss-Newton method to recover a non-parametric local volatility function. Since, in general, the Jacobian of the residual function cannot be stored in the non-parametric setting, the Gauss-Newton subproblems are solved with an iterative method (the CG method). The matrix-vector products needed within the CG framework can be provided via sensitivity and adjoint equations. This approach was further improved in terms of computational efficiency by Loerx et al. (2011, unpublished data).
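The inexact Gauss-Newton idea can be sketched in a few lines: the squared residual norm is minimized, and each Gauss-Newton step solves the normal equations J^T J p = −J^T R with the CG method, which touches the Jacobian only through matrix-vector products. The sketch below substitutes a toy exponential-fit residual for the Dupire solver (the exponential model, its parameters and the analytic Jacobian are our illustrative choices, not from the text); in the non-parametric setting the products J v and J^T v would instead come from sensitivity and adjoint equations.

```python
import numpy as np

# Toy stand-in for the calibration residual R: fit y = a*exp(b*t) to
# synthetic observations. In the actual setting each residual component
# would be a Dupire-PDE price minus a market quote.
t = np.linspace(0.0, 1.0, 8)
a_true, b_true = 2.0, -1.5
y_obs = a_true * np.exp(b_true * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y_obs

def jacobian(x):
    a, b = x
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])   # dR/da, dR/db

def cg(matvec, rhs, iters=50, tol=1e-14):
    # Conjugate gradients on the normal equations; only matrix-vector
    # products are needed, as in the matrix-free (adjoint) setting.
    x = np.zeros_like(rhs)
    r = rhs.copy()
    rs = r @ r
    if rs < tol:
        return x
    p = r.copy()
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = np.array([1.0, 0.0])
for _ in range(15):                          # inexact Gauss-Newton loop
    J, r = jacobian(x), residual(x)
    x = x + cg(lambda v: J.T @ (J @ v), -(J.T @ r))
```

Since the toy problem has a zero residual at the optimum, the full-step Gauss-Newton iteration converges rapidly to the true parameters.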
A reduced-order model technique known from fluid dynamics is used in (Pironneau, 2009) and applied to the local volatility framework. In (Sachs and Schu, 2008, 2010), reduced-order models using proper orthogonal decomposition (POD) are used to solve the PIDE problem of jump-diffusion models including local volatility.


The profound financial crisis generated by the collapse of Lehman Brothers and the European sovereign debt crisis in 2011 caused negative government bond yields both in the U.S.A. and in the euro area. This paper investigates whether the use of models which allow for negative interest rates can improve option pricing and implied volatility forecasting, with special attention to foreign exchange and index options. To this end, we carried out an empirical analysis on the prices of call and put options on the U.S. S&P 500 index and Eurodollar futures using a generalization of the Heston model in the stochastic interest rate framework. Specifically, the dynamics of the option's underlying asset are described by two factors: a stochastic variance and a stochastic interest rate. The volatility is not allowed to be negative, but the interest rate is. Explicit formulas for the transition probability density function and the moments are derived; these formulas are used to estimate the model parameters efficiently. Three empirical analyses are illustrated. The first two show that models which allow for negative interest rates can efficiently reproduce implied volatility and forecast option prices (i.e., S&P index and foreign exchange options). The third studies how the U.S. three-month government bond yield affects the U.S. S&P 500 index.
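As a rough illustration of such a two-factor dynamic, here is a minimal Euler Monte Carlo sketch of a hybrid model with a Heston-type variance and a Vasicek-type rate that is free to go negative. The Vasicek choice, the zero correlations and all parameter values are our illustrative assumptions; the paper itself works with explicit transition-density formulas rather than simulation.

```python
import numpy as np

def simulate_hybrid(s0=100.0, v0=0.04, r0=0.01, T=1.0, n_steps=252, n_paths=10000,
                    kappa_v=2.0, theta_v=0.04, xi=0.3,      # variance (Heston-type)
                    kappa_r=0.5, theta_r=-0.005, eta=0.01,  # rate (Vasicek-type)
                    seed=0):
    # Euler scheme with full truncation for the variance; the Vasicek
    # rate has no positivity constraint and may become negative.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    r = np.full(n_paths, r0)
    for _ in range(n_steps):
        z1, z2, z3 = rng.standard_normal((3, n_paths))  # correlations omitted
        vp = np.maximum(v, 0.0)                          # full truncation
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa_v * (theta_v - vp) * dt + xi * np.sqrt(vp * dt) * z2
        r += kappa_r * (theta_r - r) * dt + eta * np.sqrt(dt) * z3
    return s, v, r

s, v, r = simulate_hybrid()
```

With a slightly negative long-run rate level, a noticeable share of the simulated paths ends with a negative short rate, which is exactly the behaviour the paper's framework allows.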


This insurance approach means that, from an actuarial viewpoint, the premiums collected on options, adjusted by the time value of money, must compensate for the claims paid. To clarify, if we were to sell put options on an index, the options at expiration may end in the money, or at/out of the money. In the first case, the seller (labelled the insurance company) must pay, at the request of the buyer, the difference between the strike or exercise price and the spot price at expiration; on the other hand, should the put option end up at/out of the money, the seller or insurance company pays nothing and keeps any premiums collected. Given that the possibility of short selling is restricted, we seek to calculate prices resorting to self-insurance, i.e. a firm that invests its own capital to afford the payment of claims. Accordingly, we seek the minimum price such a firm would charge to provide downside-risk hedging in the domestic capital market. The price thus calculated will not include an extra charge for operating expenditure, transaction costs, or a risk premium on the capital needed; however, it may be useful as a starting point for offering such contracts in the domestic market. As we said before, prices are a private matter between buyers and sellers, and at an appropriate price we can find both parties, so we take the side of the seller in the real domestic capital market.
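The actuarial break-even logic above (discounted premiums must compensate claims) can be sketched with a small Monte Carlo estimate of the discounted expected put claim. The lognormal real-world dynamics and every parameter value are illustrative assumptions of ours; the paper's self-insurance pricing under restricted short selling is more involved than this.

```python
import numpy as np

def breakeven_put_premium(s0, k, mu, sigma, T, r, n_paths=200_000, seed=1):
    # Actuarial break-even: the discounted average claim max(K - S_T, 0)
    # under real-world (not risk-neutral) lognormal dynamics.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_T = s0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    claims = np.maximum(k - s_T, 0.0)          # put claim at expiration
    return np.exp(-r * T) * claims.mean()      # time value of money

premium = breakeven_put_premium(s0=100.0, k=95.0, mu=0.08, sigma=0.25, T=1.0, r=0.05)
```

Any surcharge for operating costs, transaction costs or capital would be added on top of this break-even figure, as the text notes.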


The main subject of this thesis is to give a modern and systematic treatment of option pricing models driven by jump-telegraph processes, which are Markov-dependent models. The telegraph process is a stochastic process which describes the position of a particle moving on the real line with constant speed, whose direction is reversed at the random epochs of a Poisson process. The model was introduced by Taylor [T 22] in 1922 (in discrete form). Later on, it was studied in detail by Goldstein [G 51] using a certain hyperbolic partial differential equation, called the telegraph equation or damped wave equation, which describes the space-time dynamics of the potential in a transmission cable with no leaks [W 55]. In 1956, Kac [K 74] introduced the continuous version of the telegraph model. Since then, the telegraph process and many of its generalizations have been studied in great detail, see for example [G 51, O 90, O 95, R 99], with numerous applications in physics [W 02], biology [H 99, HH 05], ecology [OL 01] and, more recently, finance: see [MR 04] for loss models and [R 07a, LR 12b] for option pricing.
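A sample path of the telegraph process is easy to simulate directly from its definition: move at speed c and flip direction at the epochs of a Poisson process with rate λ. A minimal sketch (parameter values are illustrative, not from the thesis):

```python
import numpy as np

def telegraph_path(c=1.0, lam=2.0, T=5.0, x0=0.0, d0=1, seed=0):
    # The particle moves at constant speed c on the real line; its
    # direction is reversed at the epochs of a Poisson(lam) process.
    rng = np.random.default_rng(seed)
    t, x, d = 0.0, x0, d0
    times, positions = [0.0], [x0]
    while True:
        tau = rng.exponential(1.0 / lam)      # waiting time to the next reversal
        if t + tau >= T:
            positions.append(x + d * c * (T - t))
            times.append(T)
            return np.array(times), np.array(positions)
        t += tau
        x += d * c * tau
        d = -d                                # reverse direction
        times.append(t)
        positions.append(x)

times, positions = telegraph_path()
```

Between reversals the path is linear with slope ±c, which is what distinguishes the telegraph process from the nowhere-differentiable Brownian paths of diffusion models.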


Comparing these results with the values in Table 1, we can observe that for the frequencies of 500 Hz, 2 kHz, 3 kHz and 4 kHz the results do not comply with the uncertainty values specified in the standard. As shown in Figure 3, for these frequencies the contributions caused by the dependency of the response on the placement of the devices (u(x22)) and on the temperature (u(x23)) represent nearly 50% of the whole uncertainty value. This is especially pronounced for the u(x23) contribution at high frequencies. Therefore, and since this contribution has a Type B evaluation, a good option is to account for this dependency through a temperature variation coefficient, with which the measurements could be rectified. Moreover, as stated before, the contributions with a Type A evaluation (the case of the u(x22) contribution) can be reduced by increasing the number of measurements performed.
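The two improvements suggested above can be illustrated numerically: the Type A term shrinks as s/√n when more repeated measurements are taken, and the Type B temperature term can largely be removed by rectifying readings with a temperature variation coefficient. All numbers below are hypothetical stand-ins, not the values from Table 1:

```python
import numpy as np

def combined_u(contributions):
    # Standard combination in quadrature of uncorrelated contributions.
    return float(np.sqrt(sum(u ** 2 for u in contributions)))

# Hypothetical contributions in dB (illustrative only).
u_rest = 0.30    # all remaining contributions, combined
u_x22 = 0.25     # placement dependency, Type A evaluation
u_x23 = 0.28     # temperature dependency, Type B evaluation
u_total = combined_u([u_rest, u_x22, u_x23])

# Type A terms scale as s/sqrt(n): quadrupling the repeats halves u_x22.
u_x22_improved = u_x22 / np.sqrt(4)
# Rectifying readings with a temperature variation coefficient would
# leave only the (assumed smaller) uncertainty of the coefficient itself.
u_x23_improved = 0.05
u_improved = combined_u([u_rest, u_x22_improved, u_x23_improved])
```

With these stand-in numbers the combined uncertainty drops noticeably, which is the mechanism by which compliance with the standard could be recovered.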


In their review article covering 13 publications, Aggarwal, Saini, and Kumar (2009b) also compare 'time series' and 'neural network' papers. They classify EPF models as falling into one of three categories (although differently from Aggarwal et al., 2009a): heuristics (naïve, moving average), simulations (production cost and game-theoretical) and statistical models, where the last category – somewhat surprisingly – includes both time series (regression) and artificial intelligence models. They expand the analysis to include quantitative comparisons of (i) the forecasting accuracy and (ii) the computational speed of different forecasting techniques. In our opinion, the value of (i) is disputable. Even if the forecasting accuracy is reported for the same market and the same out-of-sample (forecasting) test period, the errors of the individual methods are not truly comparable if different in-sample (calibration) periods are used. Moreover, the implementation of the algorithms differs between software packages, and is generally very sensitive to the initial conditions in the case of nonlinear or multi-parameter models. It may be impossible to replicate the results, even given the exact model structure, as was reported by Weron (2006) for the case of the multi-parameter transfer function (ARMAX) model of Nogales, Contreras, Conejo, and Espinola (2002). On the other hand, a table with the computation speeds of different forecasting techniques is interesting. Unfortunately, though, it cannot be used to draw quantitative conclusions, due to the differences in processors used, software implementations, calibration periods, etc. Finally, Aggarwal et al. (2009b) conclude that "there is no hard evidence of out-performance of one model over all other models on a consistent basis" and that longer "test periods of one to two years should be used". We cannot argue with these conclusions.


As with other numerical methods, the prices from our trinomial CEV model converge rapidly to the closed-form solution, for all values of the parameter a, as the number of steps n increases. When n = 25, the value from our trinomial option pricing approach is, on average, closer to the closed-form solution than the approximations obtained with the other numerical methods developed by Boyle, Tian and Bin. For n = 100, the difference between the trinomial method and the closed-form solution is less than or equal to 0.01, and with 250 time steps the result is almost identical to the closed-form solution. The accuracy is thus similar to that of other numerical methods, while the recursive algorithm in the trinomial CEV model is more efficient than the other numerical algorithms.
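For readers unfamiliar with trinomial lattices, here is a minimal sketch of backward induction on a Boyle-style trinomial tree for a European call. It uses constant volatility as a stand-in (under CEV the local volatility would depend on the asset level at each node), and all parameter values are illustrative:

```python
import numpy as np

def trinomial_call(s0, k, r, sigma, T, n):
    # Boyle-style trinomial lattice with constant volatility; under CEV
    # sigma would instead be a function of the asset level at each node.
    dt = T / n
    u = np.exp(sigma * np.sqrt(2.0 * dt))          # up factor (down = 1/u)
    a = np.exp(r * dt / 2.0)
    b = np.exp(sigma * np.sqrt(dt / 2.0))
    pu = ((a - 1.0 / b) / (b - 1.0 / b)) ** 2      # up probability
    pd = ((b - a) / (b - 1.0 / b)) ** 2            # down probability
    pm = 1.0 - pu - pd                             # middle probability
    disc = np.exp(-r * dt)
    j = np.arange(-n, n + 1)                       # 2n+1 terminal nodes
    values = np.maximum(s0 * u ** j - k, 0.0)      # call payoff at expiry
    for _ in range(n):                             # backward induction
        values = disc * (pd * values[:-2] + pm * values[1:-1] + pu * values[2:])
    return values[0]

price = trinomial_call(100.0, 100.0, 0.05, 0.2, 1.0, 250)
```

With 250 steps the lattice price agrees with the Black-Scholes closed form (about 10.45 for these illustrative parameters) to well within a cent, mirroring the convergence behaviour reported in the text.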


This thesis has only scratched the surface of the vast field of numerical option pricing. An introduction to the field has been made through a comparison of the fundamental methods for the valuation of the most popular derivatives. The binomial model is very important because it shows how to get around the reliance on closed-form solutions in a simple and accurate manner. The greatest advantage of the binomial model is that it can easily deal with early exercise. The code for calculating the price of an American put option was made as fast as possible in MATLAB. The binomial model proved to be the fastest and most accurate of all the numerical methods presented in this thesis. However, this is true only for basic American options; there are far better choices if one wants to price exotic options, for example barrier or look-back options. Also, in the author's opinion, the model of stock price behaviour is poor, since the assumption that the asset price can only go up or down by a known amount is clearly unrealistic. Still, the intuition that one gets from the binomial method is useful.
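The early-exercise feature mentioned above fits naturally into backward induction: at each node the option value is the maximum of the immediate exercise value and the discounted expected continuation value. A minimal CRR-style sketch for an American put (in Python rather than the thesis's MATLAB, with illustrative parameters):

```python
import numpy as np

def american_put_binomial(s0, k, r, sigma, T, n):
    # Cox-Ross-Rubinstein tree with an early-exercise check at every node.
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)             # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    values = np.maximum(k - s0 * u ** (2 * j - n), 0.0)   # payoff at expiry
    for step in range(n - 1, -1, -1):              # roll back through the tree
        cont = disc * (p * values[1:] + (1 - p) * values[:-1])
        exercise = k - s0 * u ** (2 * np.arange(step + 1) - step)
        values = np.maximum(cont, exercise)        # early-exercise check
    return values[0]

price = american_put_binomial(100.0, 100.0, 0.05, 0.2, 1.0, 500)
```

The vectorized roll-back is the same trick that makes the MATLAB implementation fast: one array operation per time step instead of a loop over nodes.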


J. C. Cox, S. Ross [4]-[5] and R. C. Merton [11] initiated the research on option pricing models with jump-diffusion processes, but these models are usually motivated by empirical adequacy. In addition, most of these models are incomplete market models, and there is no perfect hedging in this case. In this paper, the basic idea behind the use of jump processes is that the jumps eliminate arbitrage possibilities: the market model is complete and hedging is perfect.


for the Reliability Charge, re-sampling the historical data year by year using a block bootstrap method. For a single year, it can be seen as the money that should have been paid to acquire the option, given the performance of the price during that year, to be exercised at a specific Scarcity Price. This curve will serve as a benchmark for the models. It is also important to consider that the Scarcity Price has been in the 200 to 300 COP/kWh range since its establishment. The behaviour among models 1 to 5, and between models 5 and 6, is practically indistinguishable. Even though models 1 to 4 presented lower magnitudes in every error measure presented in Section 5, they under-perform compared to the real curve. This could happen because the high volatility of the prices is not fully captured by a simple mean-reverting process, which suggests that the innovations in each period may exhibit fat-tailed, non-Gaussian behaviour. On the other hand, models 5 and 6 give higher estimates than the mean-reverting models: for low scarcity prices the estimate lies under the real curve, while for high scarcity prices it is the other way around. In any case, in the region of interest the estimates are very close to the real curve, which validates the models. Based on the results obtained, it appears better to model the electricity price with mean-reverting jump models when estimating the Reliability Charge.
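The year-by-year block bootstrap described above can be sketched as follows: whole years are resampled with replacement, preserving the dependence of prices within each year, and each resample yields a benchmark premium at a given Scarcity Price. The synthetic data, the 250 COP/kWh Scarcity Price and the function names are illustrative assumptions of ours:

```python
import numpy as np

def yearly_block_bootstrap(prices_by_year, n_resamples, seed=0):
    # Resample whole years (blocks) with replacement, preserving the
    # dependence structure of prices within each year.
    rng = np.random.default_rng(seed)
    years = list(prices_by_year)
    samples = []
    for _ in range(n_resamples):
        picks = rng.choice(len(years), size=len(years), replace=True)
        samples.append(np.concatenate([prices_by_year[years[i]] for i in picks]))
    return samples

# Hypothetical daily spot prices (COP/kWh) for three years.
data = {year: 100.0 + np.cumsum(np.random.default_rng(year).normal(0.0, 5.0, 365))
        for year in (2016, 2017, 2018)}
samples = yearly_block_bootstrap(data, n_resamples=200)

# Benchmark premium per resample at a hypothetical Scarcity Price of 250 COP/kWh:
# the average amount the option would have paid out over the resampled history.
premiums = [np.maximum(s - 250.0, 0.0).mean() for s in samples]
```

The distribution of `premiums` across resamples is what serves as the benchmark curve against which the mean-reverting and jump models are compared.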


ϕ represents the amount of the risky asset held over time and ψ the same for the bond. We suppose the processes ϕ and ψ to be adapted to the driving Poisson process. To take the jumps into account, we constrain the processes ϕ and ψ to be left-continuous.


In the remaining Altiplano regions, quinoa production accompanied by higher prices and productivity has a modest impact on incomes and poverty, given that the types d[r]


Given this conjecture, the uninformed have to solve a Kalman filtering problem in order to compute their estimates of g. As the risk they face is given by the conditional variance of the next-period price, and the next-period price will depend on the value of the fundamental process, the precision of their estimates regarding the process g directly affects the perceived risk of holding the stock. In turn, this determines the risk premium and liquidity in the market. Clearly, the insider takes all this into account when optimizing over her strategy. The same incentives to restrict her trading for lower values of µ, and the feedback effects via information leakage onto conditional risk described in Model 1, apply to this context as well. Here we provide a numerical example to illustrate that the qualitative results described by means of closed-form solutions in Model 1 are robust to this dynamic environment. The particular parameter values chosen are meant to be illustrative and should not be considered an attempt to calibrate the model to real-world data. Following our interpretation, define w = 1/(1 + µ) as the degree of asymmetric information. Let the risk premium be the unconditional expectation of the excess return per share, RP = E[P_{t+1} + D_{t+1} − R P_t], and Depth be the inverse of the price impact
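The uninformed agents' filtering step can be illustrated with a scalar Kalman filter for a latent AR(1) fundamental. The state-space form, noise levels and parameter values below are our illustrative choices, not the paper's specification; the conditional variance p plays the role of the perceived risk discussed above.

```python
import numpy as np

def kalman_ar1(obs, rho, q, c, r_noise, g0=0.0, p0=1.0):
    # Scalar Kalman filter for a latent AR(1) fundamental:
    #   g_t = rho * g_{t-1} + w_t,   w_t ~ N(0, q)        (state)
    #   y_t = c * g_t + v_t,         v_t ~ N(0, r_noise)  (observation)
    g, p = g0, p0
    means, variances = [], []
    for y in obs:
        g, p = rho * g, rho ** 2 * p + q              # predict
        gain = c * p / (c ** 2 * p + r_noise)         # Kalman gain
        g, p = g + gain * (y - c * g), (1.0 - gain * c) * p   # update
        means.append(g)
        variances.append(p)   # conditional variance: the perceived risk
    return np.array(means), np.array(variances)

# Simulate a path of the fundamental and noisy observations, then filter.
rng = np.random.default_rng(3)
n = 300
g_true = np.zeros(n)
for i in range(1, n):
    g_true[i] = 0.9 * g_true[i - 1] + rng.normal(0.0, np.sqrt(0.1))
y = g_true + rng.normal(0.0, np.sqrt(0.5), n)
means, variances = kalman_ar1(y, rho=0.9, q=0.1, c=1.0, r_noise=0.5)
```

The conditional variance converges to a steady state (the Riccati fixed point), which is why, in the stationary equilibrium, the perceived risk and hence the risk premium and depth are constants.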


There are several studies that show the empirical evidence available concerning the distribution of the variable θ0^real / θ0^expected, including those of Flyvbjerg et al. (2005) and Næss et al. (2006). The latter reports a high dispersion of initial real traffic relative to the expected initial traffic, though it refers to a sample that includes both toll and toll-free motorways. More recently, Bain (2009) provided results for a large sample of more than 100 toll-motorway projects in various countries. According to the evidence provided by this study, the ratio of initial traffic to expected traffic on toll motorways follows a normal distribution with an average of 0.81 and a standard deviation of 0.24, for countries with a certain tradition in the construction of toll roads. This distribution has been adopted in this work. This means there is a bias in concessionaires' estimates of initial traffic, which tend to overestimate such traffic to some extent. Furthermore, the dispersion of the distribution, given by the standard deviation, is high, which shows the difficulty of estimating traffic. Note that, according to the work cited above, the optimistic bias in traffic predictions is considerably higher in countries with little experience in toll-motorway concessions.


Using a literature review, four methods are applied for calculating betas in a sample of eleven companies that were listed on the Stock Exchange of Argentina between 2010 and 2012. Applying each method, it was identified which one can be relied upon to determine the betas of small businesses not quoted on the Stock Exchange. It is concluded that, to calculate beta values and interpret the risk of each company, it is technically necessary to analyze the method used and the variability of the time series employed, as well as to have knowledge of the future prospects of both the company analyzed and the industry to which it belongs.
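The most basic of such methods, the market-model beta, can be sketched in a few lines: beta = cov(r_i, r_m) / var(r_m), estimated from the return series of the stock and the market. The synthetic monthly returns below are illustrative, not data from the Argentine sample:

```python
import numpy as np

def beta(stock_returns, market_returns):
    # Market-model beta: cov(r_i, r_m) / var(r_m).
    cov = np.cov(stock_returns, market_returns, ddof=1)
    return cov[0, 1] / cov[1, 1]

# Hypothetical monthly returns: the stock moves about 1.5x the market.
rng = np.random.default_rng(7)
r_m = rng.normal(0.01, 0.05, 36)                  # market returns, 3 years
r_i = 1.5 * r_m + rng.normal(0.0, 0.02, 36)       # stock returns plus noise
b = beta(r_i, r_m)
```

As the text stresses, the estimate depends heavily on the variability of the series used: with only 36 observations, the sampling error of the beta is substantial.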


To see the problem, suppose that Mexico were an isolated economy, neither importing nor exporting gas. Assume that, as a result of some intertemporal maximization, there was a well-defined price that correctly measured the opportunity cost of discovering and producing gas. In a dynamic programming problem, this would be the costate variable associated with non-associated gas reserves. This costate variable is the value to Mexico of adding or subtracting one unit of gas at that time. It is easy to show that the correct price of gas is that intertemporal price: at the margin, Mexico should be indifferent between consuming a unit of gas and adding it to its non-associated gas reserves. Now suppose that Mexico is linked to an external market where there is a different price of gas, and assume further that this different price reflects quasi-rents caused by some temporary bottlenecks. Using the netback rule to set the price of gas in Mexico would mean that Mexico stops using the price of gas that correctly measures the tradeoff between consuming gas now and consuming it in the future. The surprising result we obtain in this paper is that, even in this case, the netback rule is optimal. The simple intuitive argument is that Mexico could capture some of the quasi-rents by reducing its consumption and exporting gas.


Dynamic Strategic Planning. Richard de Neufville, Joel Clark, and Frank R. Field, Massachusetts Institute of Technology. The CAPM Model, slide 1 of 35.[r]


The Region of Madrid has been one of the most dynamic areas in Spain, and more generally in Europe, in terms of urban growth over the last two decades (Hewitt and Escobar, 2011; Plata Rocha et al., 2009). The Metropolitan Area of Madrid has grown beyond the limits of its own boundaries, merging with other cities or towns without any vacant space in between. This may indicate the need for urban plans involving several municipalities in a common planning process. In order to test the model presented here, three municipalities were selected (Meco, Los Santos de la Humosa and Azuqueca de Henares) in the east of the current functional Region of Madrid, which exceeds the official Region of Madrid and extends into the province of Guadalajara (Figure 1).


Madrid, 27 January 2015: Control, Supervision and Maintenance of Electrical Substations. Madrid, 27-28 January 2015: Technical and Economic Management of Wind Farms. Madrid, 29 [r]
