Abstract—Nonlinear system identification based on support vector machines (SVM) has usually been addressed by means of standard SVM regression (SVR), which can be seen as an implicit nonlinear autoregressive and moving average (ARMA) model in some reproducing kernel Hilbert space (RKHS). The proposal of this letter is twofold. First, the explicit consideration of an ARMA model in an RKHS (SVM-ARMA) is proposed. We show that stating the ARMA equations in an RKHS leads to solving the regularized normal equations in that RKHS, in terms of the autocorrelation and cross-correlation of the (nonlinearly) transformed input and output discrete-time processes. Second, a general class of SVM-based system identification nonlinear models is presented, based on the use of composite Mercer's kernels. This general class can improve model flexibility by emphasizing the input–output cross information (SVM-ARMA), which leads to straightforward and natural combinations of implicit and explicit ARMA models (SVR-ARMA and SVR-ARMA). Capabilities of these different SVM-based system identification schemes are illustrated with two benchmark problems.
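The implicit (SVR) view described in the abstract can be sketched by stacking lagged outputs and inputs as the regressor vector and letting an RBF kernel supply the RKHS. The lag orders, the simulated system, and the hyperparameters below are illustrative assumptions, not taken from the letter:

```python
import numpy as np
from sklearn.svm import SVR

def build_regressors(u, y, p=2, q=2):
    """Stack lagged outputs y(n-1..n-p) and inputs u(n-1..n-q) as features."""
    start = max(p, q)
    X = [np.r_[y[n - p:n][::-1], u[n - q:n][::-1]] for n in range(start, len(y))]
    return np.array(X), y[start:]

# Simulated data from a mildly nonlinear system (illustrative only)
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 400)
y = np.zeros(400)
for n in range(2, 400):
    y[n] = 0.5 * y[n - 1] - 0.2 * y[n - 2] + np.tanh(u[n - 1]) + 0.1 * u[n - 2]

# SVR on the lagged regressors: an implicit nonlinear ARMA model in the
# RKHS induced by the RBF kernel.
X, t = build_regressors(u, y)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, t)
```

With a composite kernel (e.g., a weighted sum of separate kernels on the output-lag and input-lag parts of the regressor), the same scaffolding would express the cross-information variants the abstract mentions.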
A highly effective compromise between stability and simplicity of adaptation can be provided by the g-filter, which was first proposed in . The g-filter can be regarded as a particular case of the generalized feedforward filter, an infinite impulse response (IIR) digital filter with a restricted feedback architecture. The g-structure results in a more parsimonious filter, and has been used for echo cancellation , time-series prediction , and system identification . Two main advantages of the g-filter are claimed: it provides stable models and it permits the study of the memory depth of a model.
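Assuming the g-filter here is the standard gamma filter (a cascade of identical first-order leaky stages governed by a single feedback parameter mu), a minimal NumPy sketch is:

```python
import numpy as np

def gamma_filter(u, w, mu):
    """Gamma filter: cascade of identical leaky first-order stages.

    Tap signals follow x_0(n) = u(n) and, for k >= 1,
        x_k(n) = (1 - mu) * x_k(n-1) + mu * x_{k-1}(n-1),
    with output y(n) = sum_k w_k * x_k(n).
    Each stage pole sits at 1 - mu, so the filter is stable for
    0 < mu < 2; memory depth is roughly K / mu for K stages.
    """
    K = len(w) - 1          # number of recursive stages
    x = np.zeros(K + 1)     # tap states x_0 .. x_K
    y = np.empty(len(u))
    for n, un in enumerate(u):
        x_prev = x.copy()   # states at time n-1
        x[0] = un
        for k in range(1, K + 1):
            x[k] = (1 - mu) * x_prev[k] + mu * x_prev[k - 1]
        y[n] = w @ x
    return y
```

With mu = 1 every tap reduces to a pure delay and the structure collapses to an ordinary FIR filter; for other values of mu the single feedback coefficient trades memory depth against resolution, which is the parsimony and stability advantage claimed above.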
scrutinize the statistical properties of the data in the original domain, and then to decide which is the most suitable transformation. This will be especially appropriate in time-series problems, where knowledge of the statistical properties (autocorrelation, ergodicity, or nature of the interfering noise) is fundamental for their processing. In this setting, we have recently formulated the autoregressive moving average (ARMA) system identification , and the non-parametric spectral analysis , according to SVM principles. This paper concentrates on generalizing this previous work to propose an SVM framework for linear signal processing (LSP) problems. Extensions of SVM LSP formulations to the non-linear case can be easily treated by using Mercer's kernels, as usual in the SVM literature.
Nevertheless, relevant information could be masked by the long-term averaging in this calculation procedure, both from a clinical and from a signal-analysis point of view. First, relevant short-time fluctuations in the TS along the day  could be hidden by the 24-hour template averaging. Second, several influences of the physiological state can affect the HRT, such as the described effect of the HR level that precedes the PVC on the HRT oscillation amplitude , . More specifically, the vegetative tone probably controls both the HR level and the HRT oscillation amplitude; nevertheless, averaging across the different states during the day could reduce the true magnitude of the HRT fluctuation and smooth not only the noise level but also the signal level , . And third, averaging precludes the comparison of the HRT at a given moment to other fluctuating physiological variables. For instance, comparison of long-term heart rate variability (HRV) to long-term HRT has been reported , but the short-term regulation of the autonomous nervous system on HR cannot be studied jointly with the HRT.
In this paper, we assess the effect of different environmental and individual factors capable of triggering the physiological stress response of wild wood mice. Our goal is to identify which of these factors could most significantly affect the GC level, and hence be the principal stressors in A. sylvaticus. Taking into account the above-mentioned premises and the properties of the study area, we analyze the relationship between FCM concentrations and the following factors: year, month, season, rainfall, temperature, relative air humidity, habitat, moon phase, sex, breeding condition and body weight. We consider year, season and month to be time periods grouping a set of factors that could together produce a characteristic effect on the stress response (e.g., favorable/unfavorable weather conditions or reproductive period). We hypothesize that abundant rainfall, high humidity and warm temperatures would favor an increase in food availability and, therefore, a smaller physiological stress response. We argue that a higher perception of predation risk would elevate the animals' stress response during adverse climatic periods and in habitats with reduced vegetation cover. Similarly, a full moon would elevate GC levels due to the higher exposure of the animals during bright nights. We also hypothesize increased GC levels in breeders, due to a greater energy demand, and in juveniles, which are frequently displaced to poorer-quality home ranges with less vegetation cover and higher predation risk. Finally, taking into account the strong correlation between weight and body condition, we expect higher GC levels in the smallest and weakest animals due to intraspecific competition and a poorer defense against predators.
Many classification problems associated with real-world systems vary over time. For example, a system may change for physical reasons, such as the season of the year, or because there is a change in the expectations or interests of its users. In most cases the cause and characteristics of the change are not obviously present in the data under analysis. In these situations, the classifier needs to learn not only the correct input–output function but also to detect the change in the concept and to adapt to it.
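This excerpt does not describe a concrete detection mechanism; one common approach is to monitor the classifier's recent error rate and flag drift when it rises well above the best rate seen so far. A minimal sketch, in which the window size and threshold are illustrative assumptions rather than values from the text:

```python
from collections import deque

class DriftMonitor:
    """Flag concept drift when the recent error rate rises well above
    the best (lowest) windowed error rate observed so far.

    `window` and `factor` are illustrative choices, not from the text.
    """
    def __init__(self, window=50, factor=2.0):
        self.errors = deque(maxlen=window)  # 1 = misclassified, 0 = correct
        self.best = None                    # lowest windowed error rate seen
        self.factor = factor

    def update(self, y_true, y_pred):
        """Record one prediction; return True if drift is suspected."""
        self.errors.append(int(y_true != y_pred))
        if len(self.errors) < self.errors.maxlen:
            return False                    # not enough evidence yet
        rate = sum(self.errors) / len(self.errors)
        if self.best is None or rate < self.best:
            self.best = rate
        # small additive floor so a perfect early window does not
        # make the threshold trivially zero
        return rate > self.factor * self.best + 0.05
```

When `update` returns True, the natural reaction is to retrain (or heavily reweight) the classifier on recent examples only, which is exactly the "adapt to it" step the paragraph calls for.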
Present automated systems range from the simplest to the most complex. Some of them are built from single-process automated manufacturing cells as a basis. A set of these cells is grouped into automated production lines, which in turn are grouped into production systems. These systems can be composed solely of machines or may include one or more industrial robots, requiring minimal or no human interaction when working. They are controlled by autonomous programmable controllers that handle electrical process inputs and outputs and implement the logic and calculations necessary to control them. Across the variety of automated industrial processes, even single automated manufacturing cells require automation and the implementation of a control strategy. The complexity of process automation depends on the process. For example, automation complexity increases in automated production lines and is even greater in automated production systems, which commonly require a complete network architecture of control devices. In these networks, control devices communicate with each other, receiving process signals that come in from machine sensors and sending out signals to machine actuators that are part of the process. In addition, communication signals to and from higher- or lower-level controllers inside the network are also received and sent. Even so, automated systems require a minimum of human interaction when working. During initial commissioning, the sending and receiving of communication data, the recognition of sensor signals and the sending of signals to actuators have to be programmed into the control devices when the process is being automated. In addition, when a failure or a process variation occurs – such as a product change, the addition or replacement of machines, changes in automation hardware, software upgrades, etc. – it is also necessary to change the programmed automation logic.
Human interaction is thus needed for these tasks, since the programming is done by control and automation engineers or by staff with special training who are knowledgeable about the process, the control hardware and the programming software.
Abstract. This paper focuses on prediction and prevention of seismic risk through a system for decision making. Data warehousing and OLAP operations are applied, together with data mining tools such as association rules, decision trees and clustering, to predict aspects such as location, time of year and/or earthquake magnitude, among others. The results of the data mining and data warehouse application help to reduce uncertainty about the behavior of the problem in decision making related to the prevention of seismic hazards.
Since the EMA data used in this work is sampled at 100 Hz, the speech parametrization is done at a rate of 10 ms. Parameters are calculated on 16 ms windows. The articulatory configuration at the current time depends on the context, thus it is desirable to use additional frames so that the regression system takes the adjacent information into account.
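The frame-context idea above can be sketched as a simple stacking step: each 10 ms feature frame is augmented with its neighbors before being fed to the regression system. The context width below is an illustrative assumption, not a value from the text:

```python
import numpy as np

def stack_context(frames, left=2, right=2):
    """Augment each feature frame with `left` preceding and `right`
    following frames (edge frames are repeated), so the regression
    input covers the surrounding articulatory context.

    frames: array of shape (n_frames, n_features)
    returns: array of shape (n_frames, (left + 1 + right) * n_features)
    """
    padded = np.concatenate([
        np.repeat(frames[:1], left, axis=0),   # repeat first frame
        frames,
        np.repeat(frames[-1:], right, axis=0), # repeat last frame
    ])
    n = len(frames)
    width = left + 1 + right
    return np.stack([padded[i:i + width].ravel() for i in range(n)])
```

With `left=right=2` and 10 ms frames, each regression input spans roughly 50 ms of context around the current analysis instant.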
In this paper, we have presented an algorithm that iteratively grows a support vector classifier (GSVC) in a problem-oriented form. The GSVC algorithm is simple and efficient, and allows control of the trade-off between machine complexity and performance in terms of classification error. Experimental results on several benchmark problems indicate that GSVC generalizes better than standard SVC, mostly due to its problem-oriented growing criterion, stopped by a cross-validation procedure. These experiments also show significant reductions in the final machine size built by GSVC with respect to the original SVC.
Under-sampling the majority class, e.g. random undersampling, has its drawbacks and results in information loss. A support vector machine selects a subset of instances close to the hyperplane, the so-called support vectors, and uses them as the set of x_i within the decision function (1). These support vectors lie on or within the margin, and their α_i are non-zero, 0 < α_i ≤ C. That is, as the hyperplane is completely determined by the instances closest to it, the solution should not depend on the other examples .
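The claim that the solution depends only on the support vectors can be checked directly: refitting on the support vectors alone should reproduce the same decision boundary. A minimal sketch (data and hyperparameters are arbitrary illustrations):

```python
import numpy as np
from sklearn.svm import SVC

# Two Gaussian blobs as toy classes (arbitrary illustrative data).
rng = np.random.default_rng(0)
X = np.r_[rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))]
y = np.r_[np.zeros(40), np.ones(40)]

clf = SVC(kernel="linear", C=1.0).fit(X, y)
sv = clf.support_                      # indices of the support vectors

# Refit using only the support vectors: the non-SV examples satisfy
# their constraints strictly, so the optimum is unchanged.
clf_sv = SVC(kernel="linear", C=1.0).fit(X[sv], y[sv])

grid = rng.normal(0, 1, (200, 2))      # random probe points
agree = np.mean(clf.predict(grid) == clf_sv.predict(grid))
```

This is also why SVMs tolerate discarding distant majority-class examples better than classifiers whose solution averages over all training points.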
Data warehouses and OLAP systems help to interactively analyze huge amounts of data. These data, extracted from transactional databases, frequently contain spatial information that is useful for the decision-making process . In this process, the information collected is used to create a data warehouse using SQL Server technology. The data warehouse then gives us the ability to aggregate data according to defined hierarchies, in order to apply data mining techniques. Figure 4 shows the decision support system proposed in this work for seismic risks.
Robust HRV analysis. The natural oscillations of the time between consecutive heart beats are known as HRV, and they are related to the modulation of the sympathetic and the vagal nervous systems on the heart rhythm . In healthy conditions, the power of the oscillations observed in the LF band (from 0.04 to 0.15 Hz) is balanced with the power in the HF band (from 0.15 to 0.4 Hz). Assigning the time between two consecutive beats to the time at which the first beat of the pair occurs leads to a non-uniformly sampled series. Besides, ectopic beats frequently appear, but they are not related to the modulation of the autonomic system and should be excluded from the analysis.
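Because the RR series is non-uniformly sampled, one standard way to obtain the LF and HF band powers without resampling is the Lomb-Scargle periodogram. A minimal sketch (the frequency grid is an illustrative choice; the band edges are the standard ones quoted above):

```python
import numpy as np
from scipy.signal import lombscargle

def lf_hf_power(beat_times, rr):
    """LF and HF band powers of an unevenly sampled RR series via the
    Lomb-Scargle periodogram. beat_times in seconds, rr in seconds."""
    rr = rr - rr.mean()                     # remove DC before the periodogram
    freqs = np.linspace(0.01, 0.5, 500)     # Hz; illustrative grid
    pgram = lombscargle(beat_times, rr, 2 * np.pi * freqs)  # angular freqs
    df = freqs[1] - freqs[0]
    lf = pgram[(freqs >= 0.04) & (freqs < 0.15)].sum() * df
    hf = pgram[(freqs >= 0.15) & (freqs <= 0.4)].sum() * df
    return lf, hf
```

Ectopic beats would be removed from `beat_times`/`rr` before this step, since (as noted above) they do not reflect autonomic modulation.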
Pitale et al.  describe two steps for the implementation of classification algorithms: the definition of the model and the selection and application of a method to classify it. For our study, the first step included processing the information given by the IoT system to obtain the inputs of the classification algorithm, which were diverse variables from the time domain, the frequency domain and non-linear methods. Among the time-domain variables were nnxx, the number of successive R-R intervals that differ by more than xx milliseconds, and pnnxx, its percentage counterpart . In the frequency domain, HF and LF were taken, due to their direct relationship with the activity of the sympathetic and parasympathetic systems of the organism . Finally, variables from non-linear methods such as SD1 and SD2 were analyzed, which are the standard deviations of the Poincaré plot perpendicular to and along the identity line, respectively . In addition, alpha1 and alpha2, the short- and long-term fluctuation exponents of detrended fluctuation analysis, were obtained . The expected result was a reduced or increased HRV, as explained by Task Force et al. .
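The time-domain and Poincaré variables named above are straightforward to compute from an RR series. A minimal sketch using the standard relations between SD1/SD2 and the variances of the series and its successive differences (the function name is ours):

```python
import numpy as np

def time_and_poincare_features(rr_ms, xx=50):
    """nnxx / pnnxx and Poincare SD1/SD2 from an RR series in ms.

    Uses the standard identities SD1^2 = var(d)/2 and
    SD2^2 = 2*var(rr) - SD1^2, where d are successive RR differences.
    """
    d = np.diff(rr_ms)
    nnxx = int(np.sum(np.abs(d) > xx))       # pairs differing by > xx ms
    pnnxx = 100.0 * nnxx / len(d)            # same count as a percentage
    sd1 = np.sqrt(np.var(d) / 2.0)           # spread perpendicular to identity line
    sd2 = np.sqrt(max(2.0 * np.var(rr_ms) - np.var(d) / 2.0, 0.0))
    return nnxx, pnnxx, sd1, sd2
```

With xx = 50 this yields the familiar nn50/pnn50 pair; LF/HF and the DFA exponents alpha1/alpha2 mentioned above require spectral and scaling analyses beyond this sketch.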
Automation simulation gives production operations engineers the capability to build virtual production systems based on real automation events [2]. It also makes it feasible for engineers to virtually model conveyors, workstations, and controls, as well as the right physical and logical interfaces and the material handling operations that can occur between the components of work cells and production lines. An important feature is that it permits the development of control strategies or the construction of production scenarios for experimentation that would otherwise be expensive and/or time-consuming. This empowers engineers to try ideas in a dynamic, synthetic environment while collecting virtual response data to determine the physical responses of the control system. This feature, in addition to validation, provides a collaborative workspace for mechanical design, manufacturing, and control engineers to share knowledge, exchange system features and attributes, integrate process information, and react to engineering changes and version updates. The collaborative work around a virtual model shortens the ramp-up of production lines during commissioning and product launch, as well as the designing/building process, cost, time, design changes, and risk of errors. All these aspects represent critical factors in product delivery and, ultimately, a company's profit or loss. These capabilities have made automation simulation a key piece of the manufacturing industry, since manufacturers have validated a plant's control systems before production starts [4].
Another set of techniques estimates solar radiation with soft-computing techniques. These techniques fall within the framework of artificial intelligence, which has received much attention for dealing with practical problems (Gopalakrishnan et al., 2011). Soft computing includes artificial neural networks (ANN), genetic algorithms (GAs), fuzzy logic (FL), adaptive neuro-fuzzy inference systems (ANFIS), support vector machines (SVM) and data mining (DM). These methods offer advantages over conventional modeling, including the ability to handle large amounts of noisy data from dynamic and nonlinear systems, especially when the underlying physical processes are not fully understood (He et al., 2014).
José Luis Rojo-Álvarez received the bachelor's degree in telecommunication engineering from the Universidad de Vigo, Vigo, Spain, in 1996, and the Ph.D. in telecommunication from the Universidad Politécnica de Madrid, Madrid, Spain, in 2000. He is an assistant professor at the Departamento de Teoría de la Señal y Comunicaciones, Universidad Carlos III de Madrid, Spain. His research interests include statistical learning theory, digital signal processing, and complex system modeling, focusing on EKG and intracardiac EGM signal processing, arrhythmia-genesis mechanisms, robust analysis of HRV, echocardiographic imaging, and hemodynamic function evaluation.
Received: 21 April 2018; Accepted: 31 May 2018; Published: 4 June 2018 Abstract: Land use and cover changes (LUCC) have been identified as one of the main causes of biodiversity loss and deforestation in the world. Fundamentally, urban land use has replaced agricultural and forest cover, causing a loss of environmental services. Monitoring and quantifying LUCC are essential to achieve proper land management. The objective of this study was to analyze the LUCC in the metropolitan area of Tepic-Xalisco during the period 1973–2015. To find the best fit and obtain the different land use classes, supervised classification techniques were applied using Maximum Likelihood Classification (MLC), Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs). The results were validated with control points (ground truth) through cross tabulation. The best results were obtained with the SVMs method, with kappa indices above 85%. The transition analysis infers that urban land has grown significantly over 42 years, increasing by 62 km² and replacing agricultural areas at a rate of 1.48 km²/year. A forest loss of 5.78 km² annually was also identified. The results show the distribution of the different land uses and the dynamics developed in the past. This information may be used to simulate future LUCC and to model different scenarios. Keywords: Maximum Likelihood Classification; Support Vector Machines; Artificial Neural Networks; significant transitions; urban growth; Nayarit (Mexico)