overall impact of real rigidities on inflation dynamics. This issue is particularly important, as real rigidities have become popular among New Keynesian theorists precisely because they provide a mechanism to amplify the effect of nominal disturbances and, all else being equal, to reduce the size of the Phillips curve's slope. In light of these features, real rigidities in price-setting, also referred to as strategic complementarities, are now recognized as important theoretical ingredients of modern-day New Keynesian models. For instance, Eichenbaum and Fisher (2007) have shown that extending the canonical Calvo model by assuming firm-specific capital and demand functions with a non-constant elasticity of demand (quasi-kinked demand) allows one to recover estimates of the Phillips curve's slope with a realistic degree of nominal rigidities. Smets and Wouters (2007) have used quasi-kinked demand functions in an estimated monetary DSGE model. Sbordone (2008) extends the Kimball model to study the effect of globalization on inflation dynamics. Our analysis casts some doubt on the robustness of such conclusions, showing that abstracting from non-price competition, as canonical models do, may overstate the overall impact of strategic complementarities on inflation dynamics. This suggests that enriching the New Keynesian framework to include non-price competition among firms may be a promising avenue for improving our understanding of the key determinants of inflation dynamics. This should be particularly true in economies, such as the US, in which non-price competition appears to be an important dimension of inter-firm rivalry.
In addition, we perform three robustness tests using different subsamples. Results are presented in Appendix 2.8, for both the linear and non-linear specifications. First, we eliminate products for which the matching across stores is not perfect. In particular, we exclude meat and bread, among other categories. Quantile regressions yield identical patterns as when using the complete dataset. Second, we use all products but eliminate the outliers, defined here as observations whose price is above three times (or below a third of) the median price. This approach is more conservative than the one typically used in the literature. For example, Gopinath and Rigobon (2008) and Klenow and Kryvtsov (2008) eliminate prices that are more than 10 times higher or less than a tenth of the median price. Still, our rule only excludes 11.2 thousand out of 32.8 million, or just 0.034% of the observations. Once again, the patterns are almost identical to the ones obtained using the complete set of observations. The only minor difference is that, for a given percentile, the border effects are smaller in absolute terms. In other words, the estimated implied distances are smaller than those in Panel (a) in Figure 2.8.
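As a sketch of how such an outlier rule can be implemented, the following Python snippet filters prices against a per-product median. The column names (`product_id`, `price`) and the toy data are hypothetical illustrations, not the actual dataset or code used here.

```python
import pandas as pd

def drop_price_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Keep observations whose price lies within [median/3, 3*median]
    of their product-level median price."""
    med = df.groupby("product_id")["price"].transform("median")
    keep = (df["price"] <= 3 * med) & (df["price"] >= med / 3)
    return df[keep]

# Toy example: the 100.0 observation exceeds three times the
# product median (10.5) and is dropped; the other four survive.
data = pd.DataFrame({
    "product_id": [1, 1, 1, 1, 1],
    "price": [9.0, 10.0, 11.0, 100.0, 10.5],
})
filtered = drop_price_outliers(data)
```

The rule is applied within product, so that price-level differences across products are not mistaken for outliers.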
Another argument for cross-country heterogeneity in the marginal effect of foreign aid on economic growth, which has been popular in both academic and policy circles, is financing constraints (see e.g. Sachs, 2005). Domestic and, in particular, rural financial markets are often ill-functioning (or simply non-existent) in many of the LDCs, so that high-return projects go unrealized because (rural) investors fail to obtain finance for their projects. An aid inflow may have a high return if it successfully targets high-return projects and eases financing constraints in the (rural) financial markets. Figure 4, Panels A-C explore the role of such financing constraints by plotting the country-specific slope estimates against various indicators proxying the severity of financial market imperfections. Panel A plots the relationship between the country-specific slope estimates and the World Bank credit information index, which captures the availability of credit information from either a public registry or a private bureau to facilitate lending decisions. Panel B plots the relationship between the country-specific slope estimates and the percentage share of individuals and firms listed in a public or private credit registry with current information on repayment history, unpaid debts, or credit outstanding. And, to capture that credit market
The empirical evidence for Europe is relatively sparse. Studies by Lefranc (2003) for France, Carneiro and Portugal (2006) for Portugal, and Eliason and Storrie (2006) for Sweden find the long-term losses to be large and concordant with the earlier studies for the US. Other results for Germany confirm these findings: Burda and Mertens (2001) and Schmieder et al. (2010) find wage losses of around 4% and 14%, respectively. For the British economy, Arulampalam (2001) reaches similar conclusions. The author also stresses the importance of the source of unemployment and reports significant scarring not only after dismissals and layoffs, but also after the non-renewal of temporary contracts and among workers from declining industries. More recently, Garcia Perez and Rebollo Sanz (2005) and Arranz et al. (2010), using the European Community Household Panel (ECHP) data, analyze the effects of job mobility on wages, and particularly the effects of a spell of unemployment or inactivity on reemployment wages. Their results confirm that workers experience important changes in their real wages as a consequence of involuntary job mobility. According to Garcia Perez and Rebollo Sanz (2005), German workers tend to experience larger wage losses than workers in the other countries studied (Spain, France and Portugal). When compared to stayers, German workers have much larger wage penalties, around 22%, followed by French, Spanish and Portuguese workers, who suffer wage losses of 10%, 9% and 8% relative to stayers, respectively. At the same time, Arranz et al. (2010) find that spells of both unemployment and inactivity scar future wages. These scars are deeper in France if individuals move between jobs due to inactivity. Unemployment (but not inactivity) also brings about wage losses in Germany, Italy, Spain and Portugal.
This study focuses on wage losses after a mass layoff using a unique dataset from social security records that distinguishes between workers holding permanent and fixed-term contracts.
To the best of our knowledge, this paper is the first in the international real-business-cycle literature to consider the role of trade finance. We take a step forward in understanding international trade performance in a two-country, three-sector, micro-founded model by introducing a simple representation of the financial sector. Our model is able to shed light on many persistent contradictions between theoretical business-cycle volatilities and their empirical counterparts. First of all, we find that imports are twice as volatile as output in our simulations. Though this is still low compared to US data, it represents an important improvement, as previous models generally generate import volatility lower than GDP volatility. Terms-of-trade volatility in the model is larger than that of GDP and closer to the actual value than in the existing literature. Our model is capable of generating consumption that is as volatile as in the data without the need to resort to non-standard preferences, thereby correcting the excess consumption smoothing found in past literature. Furthermore, we overcome the “consumption/output anomaly” by producing cross-country correlations in consumption smaller than in output, as in the data. It turns out, however, that these improvements are independent of the presence of a credit constraint, and are rather associated with modifications in the structure of the model economy, such as the separation between importing and exporting firms in combination with the monopolistic competition setting in which importers carry out their activities.
The main result is that the combination of a high degree of price stickiness with a large share of rule-of-thumb agents generates indeterminacy. The intuition behind this result can be illustrated with the following example. Consider a transitory but persistent increase in region H's production due to a non-fundamental shock. Sluggish price adjustment induces a decline in markups, which allows real wages to go up - even if labor productivity declines given the higher employment. Higher real wages generate a boom in rule-of-thumb consumption. Hence, when the share of those agents is high enough, their increase in consumption more than offsets the decrease in Ricardian consumption and investment (the latter is generated by a monetary rule that reacts to inflation with a coefficient bigger than one). On the other hand, exports to the foreign country are very sensitive to changes in relative prices. Under the baseline calibration, the increase in region H's output generates a positive spillover on the neighboring country, stimulating its output, employment, and hence foreign rule-of-thumb consumption. When the terms of trade do not appreciate enough, foreign imports (home exports) barely decrease on impact and then increase. A higher share of foreign rule-of-thumb agents further mitigates this effect. This means that aggregate demand for output produced in region H increases, making it possible to sustain the persistent boom in output that was originally anticipated by agents. This result is similar to that found by Gali et al. (2005) for a closed economy.
Besides problems with output prices, missing information on sectoral factor inputs is another serious issue for the standard approach, especially for developing countries. This problem can be overcome to some extent by assuming Cobb-Douglas production functions, mobility of factors across sectors, and perfect competition in goods and factor markets. These assumptions allow one to back out sectoral factor inputs using aggregate factor endowments and sectoral factor income shares. Nonetheless, in addition to the distortions that arise from missing data on output prices, this adds another potential margin of error, the size of which is unknown because the model is exactly identified. 2 While we need to make similar assumptions on factor mobility and competition in factor markets, our methodology allows us to evaluate the statistical fit of our model. Since we use cost functions to measure input costs, our approach is also related to dual growth accounting, a method originally developed by Jorgenson and Griliches (1967) and applied to aggregate TFP accounting in levels for a cross-section of OECD economies by Aiyar and Dalgaard (2004). Their procedure requires assuming constant returns to scale and perfect competition in goods and factor markets and needs information on sectoral input and output prices, as well as sectoral factor income shares. The main obstacle to applying this method at the sector level is, again, missing data on sectoral price indices.
This thesis consists of three self-contained essays on non-stationary panel data. We propose novel approaches to both cointegration and unit root analysis in panel data models. The main contribution of this thesis is allowing for the presence of cross-section dependence through the specification of an approximate common factor model. Early studies assumed that the time series in the panel were either independent or that cross-section dependence could be controlled for by including time effects. In macroeconomic, microeconomic and financial applications, cross-section dependence is more a recurrent than a rare characteristic, and it is usually caused by the presence of common shocks (oil price shocks or financial crises) or the existence of local productivity spillover effects. Ignoring these factors can lead to spurious statistical inference. More precisely, in the case of unit root testing, unaccounted-for cross-section dependence might lead one to conclude that the panel data are I(0) stationary when in fact they might be I(1) non-stationary. Similarly, panel data cointegration test statistics might indicate that there are more cointegrating relations than actually exist. Thus, recent studies have proposed several alternatives to overcome this limitation. One popular approach is the factor structure applied to the error process, an approach that we employ throughout this thesis.
If the increase in the overall risk of the economy due to noise traders is big enough, risk-averse traders will demand less of the risky asset for any given dividend than they would if the economy were fully rational. This implies that the riskless asset is overdemanded with respect to the rational benchmark. This is a stylized way of understanding the equity premium puzzle. Since Mehra and Prescott (1985), the literature has tried to understand why risky assets are less demanded than rational models would imply. All the different explanations that have been proposed to solve the puzzle have had to depart from the standard rational framework, for example by assuming habits in consumption as in Constantinides (1990). What we propose here is that the inherent risk that noise traders represent for the economy might be enough to offset the potential gains rational traders can obtain from the erroneous beliefs noise traders hold. Noise traders make the demand for the risky asset vary too much, because their proportion varies with the economy, thus amplifying the inherent risk of the market and making it unprofitable for rational traders to bet too much on the risky asset. The animal spirits present in financial markets make them too risky, thus depressing demand. Hence, a higher variability of the price implies two things. First, it can make returns higher, as explained in the analysis of result 4; second, it depresses the demand of risk-averse traders, thus offering an alternative way of understanding the equity premium puzzle.
One main advantage of our approach is that we do not require information on inputs at the sectoral level to compute productivities, but just need data on aggregate factor prices. Another point is that our model generates predictions on differences in sectoral prices, so that we do not depend on information on sectoral price indices. Finally, we estimate sectoral productivities, which allows us to evaluate their reliability. Our results provide evidence that cross-country TFP differences in manufacturing sectors are large, on average of about the same order of magnitude as the substantial variation across countries at the aggregate economy level that has been found in the development accounting literature (for example, Hall and Jones (1999) and Caselli (2005)). In addition, we show that productivity differences between rich and poor countries are systematically larger in skilled-labor- and R&D-intensive sectors. Productivity gaps are far more pronounced in sectors such as Scientific Instruments, Electrical and Non-electrical Machinery, and Printing and Publishing than in sectors such as Apparel, Textiles, or Furniture.
The perspective of our paper is related to, but different in several respects from, the view taken in Shepard (1987) and Farrell and Gallini (1988), who show that a monopolist may have an incentive to create competition in order to be able to commit to a low price or a high quality. First, their models are essentially static in the sense that they describe transactions in commodity goods whose value to the buyer is constant over time. This stands in sharp contrast to the service-centered perspective of our model, where customers need to decide strategically which service provider they choose and how much effort they exert in the joint co-creation of value, because their future payoffs are directly affected by these choices. Second, unlike Shepard (1987), we assume an environment in which knowledge is tacit and not transferable to rival firms through licensing, for instance due to limited enforceability of intellectual property rights. As a consequence, in our model, the incumbent cannot use a fixed fee to extract the benefit that accrues to a rival firm from an increase in the latter's stock of knowledge. This situation is quite coherent with what happens when an incumbent makes software available as open source.
government. 7 Although over recent years there has been a convergence in the regional funding mechanisms, differences persist. 8 However, two common types of funding can be identified across regions: 1) basic funding, which considers variables related to both demand and the costs of production factors, and 2) non-recurrent funding, which supports program-contracts tied to output performance, e.g. in terms of research outputs or graduation rates (Consejo de Coordinación Universitaria, 2007). To illustrate the main regional features of budget allocation, we describe two examples - Madrid and Catalonia - which together represent about 34.6% of Spanish university expenditure and account for 31% of its university students. The Autonomous Community of Madrid allocates resources to universities according to the following model: 59.5% to teaching, 9 25.5% to research, 10% to enhance aspects such as teaching performance, undergraduate job placement, and quality of services, and finally, 5% to accomplish other objectives. In turn, Catalonia's funding model comprises five elements: 1) Fixed - a lump-sum payment for each university; 2) Basic - linked to the scale of university activity; 3) Derivative - a policy promoting academic personnel; 4) Strategic - linked to objectives; and 5) Competition - through official announcements.
Natural capital can have two components: on the one hand, its living component (biomass, biodiversity, dynamics) and, on the other hand, its non-renewable component, including the biophysical ability of a particular place to accommodate important ‘living’ natural capital (both included in our natural capital variable). Hence, marine substrate complexity can buffer anthropogenic effects on reef systems (Cinner et al., 2009a) and thus allow a better transition to economic development while preserving reef resilience, opening the way to potential virtuous circles. The Maldives, The Bahamas and some archipelagos in the Pacific may be examples of such nature-rich, sustainably developing areas. In contrast, in Florida, in the Antilles and in certain African coastal countries, coral reefs are more fragile and hence less resilient to external pressures, so economic development induces a loss in natural capital. These elements help explain the complex relationship between economic growth and coral reef health, often described as U-shaped (Cinner et al., 2009a). While resilient marine natural capital can hence allow a better transition to economic development, we find that a strong dependence on natural capital wealth can hinder economic growth in countries that are less developed, as commonly highlighted in the empirical literature (van der Ploeg, 2010; Ross, 2015). In particular, a strong dependence on the export of non-renewable natural resources can hinder economic growth.
The remainder of the chapter proceeds as follows. Section 1.2 presents the model. Section 1.3 derives an analytical expression for the welfare losses and shows some important analytical results. Section 1.4 introduces the standard Calvo price-setting in order to compute the welfare losses when the pricing decisions are time-dependent. In this case, I show analytically that the dispersion of output gaps across goods depends, in the long run, on aggregate inflation and on the variance of the idiosyncratic productivity shock. The part of the dispersion that is due to idiosyncratic shocks is independent of aggregate macroeconomic variables and, consequently, independent of monetary policy. Therefore, it is concluded that no monetary policy can attain the flexible-price allocation when some firms cannot adjust prices to their idiosyncratic shocks. The welfare losses are computed under different plausible calibration exercises and under the assumption that a price-stability policy is followed. Section 1.5 presents a modified version of the Generalized Ss model developed by Caballero and Engel (2007). This model is used to compute the welfare losses when the pricing decisions are state-dependent. Section 1.6 concludes.
There is also some evidence of counter-cyclical lending standards, as can be seen in the Federal Reserve's Loan Officer Survey or the recent work by Dell'Ariccia et al. (2012), but these do not directly imply the same for private information production. 8 Largely due to data constraints, contributions to the literature which try to measure the extent to which private (or soft) information is produced by banks typically focus on the cross-section rather than the time dimension. For example, Agarwal and Hauswald (2010) use loan data over a 16-month period starting in January of 2002 from a large U.S. bank to examine the relationship between distance and soft information production. They regress an internal credit score rating by the bank on publicly available estimates of creditworthiness (credit bureaus) and use the residual variation as a measure of private information. Degryse and Ongena (2005) conduct a similar exercise, although they focus on the effect of distance on price discrimination, using data from a large Belgian bank on loans initiated since 1995 and still on the bank's loan portfolio as of August 1997. Although the authors do not highlight the result, in a regression of loan rates on loan, borrower, and relationship characteristics as well as distance, they find that the unexplained variation in loan rates is decreasing in the size of the loan and in whether the loan is collateralized, and larger for sole proprietorships. More recently, Cerqueiro et al. (2014) use loan data from a large bank in Sweden around a change in Swedish law that reduced the value of collateral for some loans to determine the effects of collateralization on loan rates and monitoring. They use loan data of about one year before
The academic literature (among others, Afonso et al., 2010; Doerrenberg and Peichl, 2014; Wolff and Zacharias, 2007) generally views fiscal policy as a measure to address growing income inequality, which is a widespread concern nowadays (e.g., discussed in the popular book by Piketty (2014)). Although the income distribution could also be affected by monetary policy, the distributive effects of monetary policy have not been broadly discussed in the literature (Coibion et al., 2012; Saiki and Frost, 2014; Villarreal, 2014). Taking this into account, the objective of Chapter 3 is to contribute to the discussion in this research area by evaluating the effect of monetary policy on income inequality. The distributional effect of monetary policy is estimated for the USA, where the dynamics of income inequality have mainly been driven by variation in the upper end of the distribution since the early 1980s (Congressional Budget Office, 2011). The chapter uses an inequality measure that represents the whole distribution of income, and in this respect it complements the work by Coibion et al. (2012), who use economic inequality measures that do not cover the top one percent. To identify a monetary policy shock, the chapter employs contemporaneous identification with ex-ante identified monetary policy shocks as well as long-run identification. In particular, a cointegration relation has been determined among the considered variables, and the vector error correction methodology has been applied for the identification of the monetary policy shock. The obtained results indicate that contractionary monetary policy decreases overall income inequality in the country. These results could have important implications for the design of policies to reduce income inequality by giving more weight to monetary policy.
Rich parents are likely to prefer private early education, given that a large share of public education expenditures would have to be financed out of their pockets. Relatively high voter turnout among the educated, as in the US, might bias policies in their favor. In contrast, relatively high voter turnout among the less educated could increase public expenditures on early education due to its redistributive nature. In the model economy, public education expenditures are endogenous and households vote via probabilistic voting. This allows me to exploit the skewness of voter turnout by age and level of education across countries to explain variations in education expenditures and the effects on inequality and mobility. The weights of individuals in the voting process are assigned according to voter turnout by age group and level of education, using the voting supplement of the Current Population Survey (CPS) of 2006 for the US, and the European Social Survey 2010 (ESS) and the Canadian Election Study of 2008 for the experiments. I find that observed patterns of public and private education expenditures, inequality, and intergenerational mobility can be reconciled by voter turnout. On average, 23% of differences in intergenerational mobility and 21% of differences in the Gini index compared to the US can be explained by voter turnout. As a robustness check, I repeat the analysis while weighting voters by the fraction of party members per age group and education level. The data are obtained from the World Values Survey 1981-2007 (WVS), and the results exhibit similar patterns. This indicates that the political participation of a society, whether through voting or through party membership, shapes public policy, and thereby influences inequality and intergenerational mobility.
Patrinos and Vadwa (1995) analyse the production costs of introducing local-language material in the context of Guatemala and Senegal. The estimates for Guatemala are based on 500,000 textbooks developed by the Direccion General de Educacion Bilingue Intercultural (DIGBI) for the four majority Mayan languages. The authors estimate that the introduction of the Mayan curriculum increased the unit cost of primary education by 9 percent over the cost of the Spanish-only curriculum. This, however, overestimates costs for future years, as it includes the curriculum development costs, which account for 37% of the total cost and would not have to be borne in later years. In the case of Senegal, the estimates suggest that, whereas the cost of producing a French textbook is US$ 0.35, this increases to US$ 0.84 for textbooks in Wolof. An important point to note is that the per-textbook cost estimate for French is based on producing around 150,000 books, whereas only 4,140 Wolof books were produced. The authors point out that the per-unit cost would decrease significantly as the number of books produced increases, because the associated fixed cost per unit would decrease. They estimate that economies of scale in production can be achieved by printing around 10,000 books, in which case there would be no difference in the cost of a French or a Wolof textbook. Using the above estimates, we assume that in the first year there is an increase of 10% in per-pupil spending and that from the second year onwards there is no difference in the cost of provision of local- or foreign-language instruction.
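The economies-of-scale argument amounts to amortizing a one-off fixed cost over the print run. The short Python sketch below illustrates the mechanics; the fixed and variable cost figures are illustrative assumptions, not the authors' estimates.

```python
def unit_cost(fixed_cost: float, variable_cost: float, n_books: int) -> float:
    """Per-book cost when a one-off fixed cost (curriculum development,
    typesetting) is spread over a run of n_books, on top of a constant
    per-copy printing cost."""
    return fixed_cost / n_books + variable_cost

# Illustrative numbers (assumed): identical cost structure for both
# languages; only the length of the print run differs.
FIXED = 2500.0      # one-off setup cost, US$ (hypothetical)
VARIABLE = 0.30     # printing cost per copy, US$ (hypothetical)

small_run = unit_cost(FIXED, VARIABLE, 4_140)    # short run (Wolof-sized)
large_run = unit_cost(FIXED, VARIABLE, 150_000)  # long run (French-sized)
```

Because the fixed component shrinks in proportion to 1/n, the per-unit cost of the short run is roughly triple that of the long run under these assumed numbers, mirroring the pattern in the Senegal figures.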
This paper also sheds light on the relation between marital sorting, inequality, and economic growth. Although inequality is widely recognized as an important economic outcome, marital sorting has not received much attention as one of its potential determinants. Kremer (1997), Fernandez and Rogerson (2001), Fernandez et al. (2005), and Greenwood et al. (2014) establish a theoretical and empirical correlation between the degree to which spouses sort in the marriage market, economic inequality, and per capita incomes. 10 Therefore, any process that increases inequality (e.g., skill-biased technological change) or reduces search costs for partners (e.g., Internet dating) could well lead to greater sorting and hence greater inequality. Because my paper considers a historical setting, I am able to analyze this relation over the very long run. Understanding the long-run trend in inequality is important given the enormous concern over inequality as a policy issue. Piketty and Saez (2006) use historical tax statistics to construct long-run series for income and wealth concentration. For most Western democracies, they find a trend of increasing inequality over the last 25 years. High inequality, in turn, may have dramatic effects on important economic outcomes such as taxation (Persson and Tabellini 1994) or the provision of public education (Sokoloff and Engerman 2000), ultimately affecting the growth process.
Our work is similar in objectives to these two papers, but it differs mainly in two features. First, our model does not incorporate peer-group effects in the production of human capital or as a determinant of school quality. Instead, school quality is measured by per-student expenditures. School quality and a child's own ability are the sole determinants of the student's earnings. We have chosen this framework to focus the analysis on the role of vouchers in allowing poor families to invest an efficient amount in their children's education. The complementarities existing between a student's own ability and school quality allow us to characterize the efficient allocation of students to schools. Second, we characterize the equilibrium level of vouchers chosen by majority vote for a given level of public school quality, whereas these papers abstract from the political economy issues raised by the introduction of vouchers.