Dynamic principal component analysis

Fault Detection and Diagnosis in a Heat Exchanger Using Dynamic Principal Component Analysis and Diagnostic Observers-Edición Única

• Physically, the outlet temperature of the water depends on its inlet temperature, the water feed and the steam feed. Thus, the value of the outlet temperature is directly correlated with all of the measurements used. Vijaysai et al. [58] and Luo et al. [60] point out that the main limitation of PCA is that it assumes normality and independence of the samples. Ku et al. [43] propose the dynamic PCA algorithm to overcome the shortcomings of conventional PCA.
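
A minimal sketch of the lag-augmentation idea behind dynamic PCA (in the spirit of Ku et al. [43]): the data matrix is extended with time-lagged copies of each variable before ordinary PCA is applied, so serial correlation between samples is captured by the model. The lag order, the random data and the variable names are illustrative assumptions, not values from the thesis.

```python
# Dynamic PCA sketch: augment the data matrix with lagged copies, then apply PCA.
# Lag order l=2 and the toy data are assumptions used only for illustration.
import numpy as np
from sklearn.decomposition import PCA

def lag_augment(X, lags):
    """Stack X(t), X(t-1), ..., X(t-lags) column-wise (rows without full history are dropped)."""
    n, _ = X.shape
    return np.hstack([X[lags - k:n - k] for k in range(lags + 1)])

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))              # e.g. inlet temp., water feed, steam feed, outlet temp.
Xd = lag_augment(X, lags=2)                    # (500-2) rows, 4*(2+1) columns
Xd = (Xd - Xd.mean(axis=0)) / Xd.std(axis=0)   # standardize each (lagged) variable

pca = PCA(n_components=0.95).fit(Xd)           # keep enough components for 95% of the variance
print(pca.n_components_, pca.explained_variance_ratio_.round(3))
```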

Principal component analysis applied to continuation power flow

The authors in [1] and [2] present approaches for real-time dynamic vulnerability assessment of power systems and the detection of islanding conditions. In [3] and [4], PCA is used to analyze steady-state operational power grid data and expose some correlations. In [5], the researchers describe an algorithm for transformer differential protection based on pattern recognition of the differential current. In summary, the potential of PCA for data reduction and feature extraction in power systems is high.

Feature selection in pathological voice classification using dynamics of component analysis

This paper presents a methodology for the reduction of the training space based on the analysis of the variation of the linear components of the acoustic features. The methodology is applied to the automatic detection of voice disorders by means of stochastic dynamic models. The acoustic features used to model the speech are MFCC, HNR, GNE, NNE and the energy envelopes. The feature extraction is carried out by means of PCA, and classification is done using discrete and continuous HMMs. The results showed a direct relationship between the principal directions (feature weights) and the classification performance. The dynamic feature analysis by means of PCA reduces the dimension of the original feature space while the topological complexity of the dynamic classifier remains unchanged. The experiments were carried out with the Kay Elemetrics (DB1) and UPM (DB2) databases. Results showed 91% accuracy with a 30% reduction in computational cost for DB1.

Principal Component Analysis to study spatial variability of errors in the INSAT derived quantitative precipitation estimates over Indian monsoon region

Principal Components Analysis is a multivariate procedure which rotates the data such that the maximum variabilities are projected onto the axes. Essentially, a set of correlated variables is transformed into a set of uncorrelated variables that are ordered by decreasing variability. The uncorrelated variables are linear combinations of the original variables, and the last of these variables can be removed with minimum loss of real data. The main use of PCA is to reduce the dimensionality of a data set while retaining as much information as possible. It computes a compact and optimal description of the data set. The first principal component is the combination of variables that explains the greatest amount of variation. The second principal component defines the next largest amount of variation and is independent of the first principal component. There can be as many principal components as there are variables. PCA can be viewed as a rotation of the existing axes to new positions in the space defined by the original variables. In this new rotation, there is no correlation between the new variables defined by the rotation. The first new variable contains the maximum amount of variation; the second new variable contains the maximum amount of variation unexplained by, and orthogonal to, the first; and so on.
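
The properties listed above can be checked numerically. The sketch below, on assumed toy data, rotates correlated variables with the eigenvectors of their covariance matrix and verifies that the resulting components are uncorrelated and ordered by decreasing variance.

```python
# PCA as a rotation: eigenvectors of the covariance matrix define new, uncorrelated
# axes whose variances appear in decreasing order. The toy data are an assumption.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 3)) @ np.array([[1.0, 0.8, 0.3],
                                               [0.0, 0.6, 0.5],
                                               [0.0, 0.0, 0.2]])   # correlated toy variables
A -= A.mean(axis=0)

eigval, eigvec = np.linalg.eigh(np.cov(A, rowvar=False))
order = np.argsort(eigval)[::-1]                 # sort axes by decreasing variance
scores = A @ eigvec[:, order]                    # rotate the data onto the new axes

print(np.round(scores.var(axis=0, ddof=1), 3))          # variances in decreasing order
print(np.round(np.corrcoef(scores, rowvar=False), 3))   # ~identity matrix: uncorrelated
```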

Dynamic scheduling in heterogeneous multiprocessor architectures. Efficiency analysis

Usually, scheduling/mapping algorithms are used to obtain the best assignment of the processes that make up an application to the processors of the architecture on which it will run. In this paper, the DCS_AMTHA (Dynamic Concurrent Scheduling) algorithm, which carries out the scheduling of multiple parallel applications on heterogeneous distributed architectures (clusters), is defined. This algorithm is based on AMTHA, and its goal is to optimize the efficiency achieved by the whole system.

Dynamic analysis of runout correction in milling

In order to determine milling parameters, a procedure similar to that presented in [33] was followed. Kt and kr were considered to be potential functions of chip thickness along the cu[r]

ECG signal analysis using temporary dynamic sequence alignment

This paper presents a feature extraction method for electrocardiographic (ECG) signals based on dynamic programming algorithms. Specifically, we applied a local alignment technique for the recognition of a template in continuous ECG signals. First, we encoded the signal into characters based on the sign and magnitude of the first derivative; then we applied a local alignment algorithm to search for a PQRST complex template in the target continuous ECG signal. Finally, we arranged the data for direct measurement of morphological features in all detected PQRST segments. To validate these algorithms, we contrasted them with conventional analysis by measuring QT segments in the Massachusetts Institute of Technology (MIT) database. We obtained processing times at least 100 times lower than those obtained via conventional manual analysis, and error rates in QT measurement below 5%. The automated massive analysis of ECG presented in this work is suitable for post-processing methods like data mining, classification, and assisted diagnosis of cardiac pathologies.
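
A rough sketch of the two steps described above, with assumed details: the signal is symbolized from the sign and magnitude of its first derivative, and a Smith-Waterman-style local alignment locates a template in the symbol stream. The alphabet, thresholds and scoring values are illustrative choices, not the authors' exact coding.

```python
# Symbolize an ECG-like signal from its first derivative, then locally align a template.
import numpy as np

def symbolize(x, thr):
    """Map each first-difference sample to a symbol by sign and magnitude."""
    d = np.diff(x)
    sym = np.where(d > thr, 'R', np.where(d < -thr, 'F',        # steep rise / steep fall
          np.where(d > 0, 'r', np.where(d < 0, 'f', 'z'))))     # gentle rise / fall / flat
    return ''.join(sym)

def local_align(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment; returns the best score and end position in b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0, H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)
    i, j = np.unravel_index(np.argmax(H), H.shape)
    return H[i, j], j            # an occurrence of the template ends near index j of b

# Toy usage: find a synthetic PQRST-like pattern inside a longer coded signal.
rng = np.random.default_rng(3)
template = symbolize(np.array([0, .1, .05, -.1, 1.0, -1.0, .1, .3, .1, 0]), thr=0.5)
signal = symbolize(rng.standard_normal(50) * 0.05, thr=0.5) + template + 'z' * 20
print(local_align(template, signal))
```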

Analysis of basic features in dynamic network models

A large variety of complex systems can be analyzed by constructing a model that relies on some network structure [1–4]. The model may be dynamical, meaning that the values of some (state) variables do change with time and, depending on the nature of such variables, we can have different types of network models. The first type corresponds to dynamic graphs that follow evolution laws defined explicitly on the network [5–8]; the second type gathers dynamical systems where the state variables are defined on a network [9,10]; finally, the third type refers to co-evolution models that combine evolving networks and dynamical systems. In the first and third type, the underlying network structure changes with time, defining a time-varying or evolving network [11,12]. In the present work, we first characterize the basic features of some simple models of evolving networks whose evolution does not depend on network structure; the time evolution of these features serves as a reference baseline signature of the behavior of simple models. Then, a model that makes use of network structure is proposed to reflect some real network characteristics. The analysis of this model shows several regimes that indicate a sophisticated behavior; for some regime, the network reaches a high clustering coefficient/link density ratio [13] (when compared to the ratio values of baseline signatures), a common feature in many real networks.
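
As a side illustration of the clustering-coefficient/link-density ratio [13] mentioned above, the following sketch computes it with networkx for a small-world graph; the graph model and its parameters are assumptions chosen only to show the calculation, not the models studied in the paper.

```python
# Clustering coefficient / link density ratio for an assumed Watts-Strogatz graph.
import networkx as nx

G = nx.watts_strogatz_graph(n=500, k=6, p=0.05, seed=1)
clustering = nx.average_clustering(G)
density = nx.density(G)            # existing links / possible links
print(clustering, density, clustering / density)
```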

Dynamic model of a ball bearing, vibration analysis

When spike energy increases, it usually means that bearing, gear, or other component problems are developing. It also means that acceleration and velocity trends should be more closely observed for changes; if acceleration readings exceed their allowable vibration limits but velocity readings are still acceptable, vibration spectrum analysis should be performed to confirm the problem. Repairs should be scheduled for a convenient future time.

Principal component analysis for automatic tag suggestion / Enrique Estellés Arolas, Fernando González Ladrón de Guevara and Antonio Falcó Montesinos

In an attempt to solve these problems, some SBS have reached an agreement on the use of a limited vocabulary. One way to use this limited vocabulary is applied in the SBS Delicious: when a user starts typing a tag, the system shows her those tags that start the same way and that have been introduced previously, thereby allowing direct selection. However, the use of a limited vocabulary, as well as the suggestion of previously annotated tags to maintain uniformity, also has its drawbacks: occasionally the same tag is used with different meanings, and the use of synonyms and acronyms makes matters even less clear.

Regionalization and classification of bioclimatic zones in the central-northeastern region of México using principal component analysis (PCA)

In Figure 3 we show the results for the covariance matrix of monthly means of the accumulated precipitation and temperature of 173 stations. We consider only the first two principal components because they explain 85 per cent of the variance. The analysis indicates an interesting distribution for the first principal component (PC1), showing an extraordinary precipitation period from June to September (Fig. 3c). Further, an inspection of the variability of the monthly accumulated precipitation, observed in PC1 axis, reveals that two principal seasons can be distinguished: a relatively dry period from October to May and a rainy season from June to September. The precipitation variability included in PC2 shows a dry period in early summer and a light to moderate rainfall in the rest of the year (Fig. 3d). This mid-year drought in early summer is associated with the known phenomenon called canícula/(dog days) (Magaña et al., 1999; Vázquez, 2000; Cavazos et al., 2002). In late fall and early winter, PC2 reflects the influence of hurricanes and events of El Niño Southern Oscillation (ENSO) (Cavazos and Hastenrath, 1990). The PC1 for temperature shows high variability from August to November with a maximum in September (Fig. 3a). It can also be concluded that, climatically, the most stable months are March and April. The variability observed in PC2 for temperature can be explained by cold fronts moving into the area between late summer and early winter (Fig. 3b).

Molecular Categorization of Yams by Principal Component and Cluster Analyses

by inspection patterns, trends, clusters, etc. in the objects. Principal components analysis (PCA) is a technique that is extremely useful for summarizing all the information contained in the X-matrix and putting it in a form understandable by humans. PCA works by decomposing the X-matrix as the product of two smaller matrices, P and T. The loading matrix P, with information about the variables, contains a few vectors, the so-called principal components (PCs), which are obtained as linear combinations of the original X-variables. The score matrix T, with information about the objects, is such that every object is described in terms of its projections onto the PCs instead of the original variables: X = TP' + E, where ' denotes the transposed matrix. The information not contained in these matrices remains as unexplained X-variance in a residual matrix E. Every PC_i is a new coordinate expressed as a linear combination of the old features x_j: PC_i = Σ_j b_ij x_j. The new coordinates PC_i are called scores or factors, while the coefficients b_ij are called loadings. The scores are ordered according to their information content with regard to the total variance among all objects. Score-score plots show the positions of the compounds in the new coordinate system, while loading-loading plots show the positions of the features that represent the compounds in the new coordinates. The PCs present two interesting properties. (1) They are extracted in decaying order of importance: the first PC, F_1, always contains more information than the second, F_2; F_2 more than the third, F_3; etc. (2) Every PC is orthogonal to the others: there is no correlation between the information contained in different PCs. A PCA was performed for the yams. The importance of the PCA factors F_1-14 for the properties is collected in Table 2. In particular, the use of only the first factor F_1 explains 36% of the variance (64% of the error), the combined application of the first two factors F_1/2 accounts for 64% of the variance (36% of error), the utilization of the first three factors F_1-3 rationalizes 80% of the variance (20% of error), etc.
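
The decomposition X = TP' + E can be written out in a few lines. The sketch below, on assumed random data, extracts loadings P and scores T from a mean-centered matrix and shows the residual E together with the decaying explained-variance ordering; the data shape and the choice of three components are illustrative assumptions.

```python
# Scores T, loadings P, and residual E for X = TP' + E, via singular value decomposition.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 14))                 # 20 objects x 14 features (assumed sizes)
Xc = X - X.mean(axis=0)                           # PCA works on mean-centered data

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
P = Vt[:k].T                                      # loadings (14 x k): coefficients b_ij
T = Xc @ P                                        # scores   (20 x k): projections onto the PCs
E = Xc - T @ P.T                                  # residual: the unexplained X-variance

explained = (s ** 2) / np.sum(s ** 2)
print(explained[:k].cumsum())                     # cumulative variance fraction, decaying order
```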

Classification of Food Spices by Proximate Content: Principal Component, Cluster, Meta-Analyses

The proximate composition of six food spices commonly used in South-East Nigeria is classified by principal component analyses (PCAs) of the constituents and cluster analyses (CAs) of the spices. The samples are grouped into two classes. The compositional PCA and the spice CA permit classifying them and grouping the similar ones. The first PCA axis explains 61% of the variance; the first two, 93%; the first three, 99%; etc. The different behaviour of the spices depends on ash, fibre, fat, moisture, etc. The macronutrient (protein, carbohydrate, fat) contents are adequate. Carbohydrate amounts are high. Fat quantities are moderate. Fat is closer to protein than to carbohydrate.

Analysis of the static and dynamic behaviour of hydraulic fills

In summary, most of the assumptions adopted to solve the consolidation problem by different authors are not representative of what is observed in the field. Although some assumptions, such as a variable hydraulic conductivity and compressibility and the consideration of large strains and self-weight forces, have been included throughout the evolution of the consolidation formulation, other assumptions have not been accurately taken into account. The complexity is still manifold; for instance, constitutive equations relating effective stress and void ratio through a constant compressibility coefficient are not representative of the real behaviour of hydraulic fills. This constitutive behaviour is sensitive to several phenomena, such as multidimensional consolidation (lateral deformation and dissipation of pore pressure), pore pressure increments due to dilatancy, dynamic effects, or time-dependent effects (i.e., secondary consolidation due to rheological phenomena). Accordingly, the newest and most advanced constitutive models require these points to be introduced accurately. This will be one of the main lines of investigation developed in this study.

The Spanish technical change: a regional and a dynamic analysis (1994-2007)

We can see in Table 1 that technology differences across the Spanish regions are skilled-labor augmenting, in both a relative and an absolute sense. It is relative because richer regions use skilled workers more efficiently than poorer ones (the regression coefficients are positive for skilled workers), and it is absolute because they use unskilled workers less efficiently than poorer zones (the regression coefficients are negative for unskilled workers). Our reference case (a skilled worker is one who has completed high school studies, and the elasticity of substitution between skilled and unskilled labor is 1.7) is statistically significant. From now on, we use this reference case in the subsequent analysis.

Analysis of short-term dynamic behavior of an electricity market

Abstract: This paper presents the conceptual approach of a model for the short-term dynamic analysis of agents' behavior in the electricity market. The main purpose is to explore the interaction among generation companies through the offers systematically submitted to a day-ahead, single-node, uniform-price market. The conceptual framework aims to shed light on two main questions: i) how the results of medium-term models (i.e., market share objectives, hydro scheduling, system marginal price) are internalized into daily offers or, alternatively, how medium-term objectives can be reached by means of short-term offers, and ii) how to analyze in detail the market dynamics in the case of severe perturbations of agents' behavior (i.e., price wars).

Dynamic analysis of payloads and structures with intermediate modal density

system are partitioned into those that exhibit long wavelength behaviour and those that exhibit short wavelength local behaviour (and possess significant dynamic uncertainty). Recently, Shorter and Langley [2005a] developed a general method for coupling FE and SEA based on wave concepts, which is implemented in the commercial software VAOne [ESI, 2011]. Zhao and Vlahopoulos [2000] propose another hybrid approach by using a deterministic FE model for the global long wavelength components and an energy finite element model for the local short wavelength components. The global FEM and local EFEM equations, together with the coupling interface equations, are solved simultaneously through an iterative but computationally efficient process. So far, this approach has only been validated for co-linear beam networks. Other authors [Grice and Pinnington, 2002; Hong et al., 2006; Ji et al., 2006] have also developed hybrid methods in a series of papers in which the short wavelength components are described statistically by effective impedances applied to the long wavelength components. The approaches differ in the way the effective impedance is computed and in the way the response of the short wavelength components is recovered.

Analysis through dynamic temporal sequence alignment in SpO2 signals

The present work uses dynamic temporal sequence alignment to adjust or contract the different recorded segments of the PPG signal in order to determine the maximum value of each of the waves that make up the recorded signal, and thereby obtain the HRV. The method performs a temporal (local) alignment of the PPG signal in order to fit the signal data into an array corresponding to the waveform of each recorded signal [34] [35]. In this way, the timing of the maximum amplitude values of the pulse waves can be recovered without using other reference signals.
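
A compact dynamic time warping (DTW) sketch illustrating this kind of temporal alignment between segments of different lengths; the cost measure and the synthetic pulse-like segments are assumptions for illustration, not the authors' implementation.

```python
# Classic DTW: cumulative cost matrix and optimal alignment cost between two segments.
import numpy as np

def dtw(a, b):
    """Return the DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two pulse-like segments of different lengths: the alignment cost stays small.
t1, t2 = np.linspace(0, 1, 80), np.linspace(0, 1, 100)
seg1 = np.exp(-((t1 - 0.30) ** 2) / 0.010)
seg2 = np.exp(-((t2 - 0.35) ** 2) / 0.012)
print(dtw(seg1, seg2))
```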

A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

combination of PCs, we have chosen to normalize each light curve independently to have a mean of 0 and scatter of 1. These aligned, normalized input light curves can be seen in Figure 3. We carry out PCA by utilizing singular value decomposition, adopted from the PCA module of scikit-learn (Pedregosa et al. 2011). Figure 4 shows the first six PCs, according to our decomposition. As we have chosen not to normalize in each phase point, the first PC contains the average light-curve shape of the normalized light curves. Including further components to describe the light curves modifies this average shape, and this can be easily understood in the context of the individual light curves, for PCs of low order: the second component can make the bump at the end of the rising branch (around phase 0.15) more or less pronounced, while the third component is important to reproduce the double-peaked light curves displayed by some of the variables.
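
A short sketch of the preprocessing and decomposition described above: each light curve is normalized independently to zero mean and unit scatter, and scikit-learn's PCA (which uses singular value decomposition) extracts the leading components. The synthetic light curves below are an illustrative assumption standing in for the observed ones.

```python
# Normalize each light curve to mean 0 / scatter 1, then extract the leading PCs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
phase = np.linspace(0.0, 1.0, 100)
# Toy "light curves": a common base shape plus a bump of varying amplitude near phase 0.15.
base = np.exp(-phase * 3) + 0.2 * np.sin(2 * np.pi * phase)
curves = np.array([base + a * np.exp(-((phase - 0.15) ** 2) / 0.002)
                   for a in rng.normal(0.1, 0.05, size=200)])

# Normalize each light curve independently to mean 0 and scatter 1.
curves = (curves - curves.mean(axis=1, keepdims=True)) / curves.std(axis=1, keepdims=True)

pca = PCA(n_components=6).fit(curves)       # first PC ~ the average normalized shape
print(pca.explained_variance_ratio_.round(3))
```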

Classification of Fruits Proximate and Mineral Content: Principal Component, Cluster, Meta‑Analyses

Fruits from Nigeria are classified by principal component analyses (PCAs) of proximates and minerals, and by cluster analyses (CAs) of the plants, which agree. The samples group into three classes. The compositional PCA and the fruit CA allow classification and concur. The first axis explains 39% of the variance; the first two, 59%; the first three, 73%. Moisture and K contents are high; ash and carbohydrate, low. Fruit behaviour depends on ash, fibre and K. Most nutritional constituents are grouped into the same class.
