In this paper, we propose to combine two strategies for addressing class overlap and class imbalance in the classification of remote sensing data. The problem is highly relevant, since very few approaches deal with both challenges jointly. To face it, this work focuses on the joint use of editing techniques and a modification of the mean square error (MSE) cost function of a multi-layer Perceptron (MLP). This approach can be considered a two-stage method. First, we remove noisy and borderline samples of the majority classes by applying editing techniques. Second, the edited data set is used to train an MLP with a modified MSE cost function, which overcomes the class imbalance problem.
For the separation of the signals in the vector Broadcast Channel (BC), some information about the channel state is necessary at the transmitter. In many cases, this Channel State Information (CSI) must be fed back from the receivers to the transmitter. We jointly design the channel estimators and the quantizers at the receivers together with the precoder at the transmitter based on a precoder-centric criterion, i.e., the minimization of a Mean Square Error (MSE) metric appropriate for the precoder design. This is in contrast to our previous works, where the quantizer design was based on a CSI MSE metric, i.e., on the minimization of the MSE between the true channel and the channel recovered by the transmitter via a feedback channel. Interestingly, the estimators resulting from this joint formulation are independent of the codebook used. The codebook entries are the employed precoders; therefore, each receiver feeds back the index of a set of precoders, and the intersection of those sets yields the appropriate precoder. Since the quantizers of the different receivers have to work separately, the metric for the computation of the partition cells cannot be expressed as a simple squared error depending on the quantizer output. The proposed system, based on joint optimization, clearly outperforms previous designs with separate optimization of feedback and precoding.
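As an illustration of the index-set feedback idea, the following toy sketch shows receivers feeding back sets of precoder indices whose intersection the transmitter uses to pick a precoder. The random codebook, the channels, and the threshold metric are all invented for illustration and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared codebook of candidate precoders (random unit-norm
# vectors), known to transmitter and receivers alike.
codebook = rng.standard_normal((8, 4))          # 8 precoders, 4 tx antennas
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def acceptable_set(channel, codebook, threshold):
    """Each receiver feeds back the indices of precoders whose
    (illustrative) MSE-like metric falls below a threshold."""
    # Toy metric: 1 - |p.h|^2 / |h|^2, smaller is better.
    gains = np.abs(codebook @ channel) ** 2 / np.dot(channel, channel)
    metric = 1.0 - gains
    return set(int(i) for i in np.flatnonzero(metric < threshold))

# Two receivers with their own channel realizations.
h1 = rng.standard_normal(4)
h2 = rng.standard_normal(4)

s1 = acceptable_set(h1, codebook, threshold=0.95)
s2 = acceptable_set(h2, codebook, threshold=0.95)

# The transmitter intersects the fed-back index sets and picks a precoder.
common = s1 & s2
chosen = min(common) if common else None
print(sorted(common), chosen)
```

The point of the sketch is only the protocol shape: per-receiver index sets, then a set intersection at the transmitter; the actual partition-cell metric in the paper is more involved.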
Abstract. The voice is the main form of human communication; this work seeks to help people with hearing problems through speech enhancement. To this end, beamforming and Direction of Arrival (DOA) techniques are applied to a semispherical microphone array. The adaptive Linearly Constrained Minimum Variance (LCMV) algorithms are analysed in their three families: Constrained, Generalized Sidelobe Canceler (GSC), and Householder, using the unconstrained algorithms Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS), Recursive Least-Squares (RLS), and Conjugate Gradient (CG). The aim is to verify the strengths and drawbacks of each algorithm, tuning their respective adaptation parameters to optimize performance and obtain fast convergence without compromising the Mean Square Error (MSE) or increasing the computational cost. The adaptive algorithms are compared and, based on the results obtained, the most suitable one is selected, taking into account convergence speed and computational cost; all results are analysed in order to draw conclusions and recommendations.
The four models used showed a good fit (coefficient of determination, root mean square error, and Pearson correlation coefficient) for the oak and pine-oak forests; the opposite was found for the pine forest, in which the coefficient of determination was lower than for the other two forest types. In this sense, the Negative Simple Exponential Model had the best statistical fit and a strong Pearson correlation.
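For reference, the three fit statistics named above can be computed as follows. The observed/predicted values here are made up for illustration, not the forest data:

```python
import numpy as np

# Made-up observed and model-predicted values.
obs  = np.array([2.0, 3.5, 4.1, 5.0, 6.3])
pred = np.array([2.2, 3.4, 4.0, 5.3, 6.0])

# Root mean square error.
rmse = np.sqrt(np.mean((obs - pred) ** 2))

# Coefficient of determination (R^2).
ss_res = np.sum((obs - pred) ** 2)
ss_tot = np.sum((obs - obs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Pearson correlation coefficient.
pearson = np.corrcoef(obs, pred)[0, 1]

print(rmse, r2, pearson)
```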
The training of neural networks is carried out in a way similar to the FXLMS algorithm. The main difference is that, instead of adapting the weights each time a new sample arrives, a significant number of samples is used to calculate the mean square error (FXLMS uses the instantaneous squared error). The main advantage of using the mean square error is that faster minimum-search methods, such as quasi-Newton methods, can be used (figure 4). These
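The contrast between the two update styles can be sketched for a toy linear model. Step sizes and data are illustrative, and a plain batch gradient step stands in for the quasi-Newton search:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "network": y = w @ x, fitted to a noiseless desired signal.
true_w = np.array([0.5, -1.2, 0.8])
X = rng.standard_normal((200, 3))
d = X @ true_w

# LMS-style adaptation: one sample at a time, instantaneous squared error.
w_lms, mu = np.zeros(3), 0.05
for x, dk in zip(X, d):
    e = dk - w_lms @ x
    w_lms += mu * e * x

# Batch adaptation: gradient of the mean square error over all samples
# at once, which is what makes faster search methods applicable.
w_batch, lr = np.zeros(3), 0.1
for _ in range(300):
    e = d - X @ w_batch
    grad = -2.0 * X.T @ e / len(X)
    w_batch -= lr * grad

print(np.round(w_lms, 3), np.round(w_batch, 3))
```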
The correlation technique for measuring the scattering coefficient in a diffuse field is already well developed, and an ISO standard is about to be published. However, further investigations remain to be done, related to the geometrical aspects of a sample. Some of these aspects were investigated, such as the number of periods (for periodic structures) and an option for measuring square samples; in this case, a discussion of edge effects is presented. Actions towards establishing a numerical reference for a sample will also be reported.
Approximation    Absolute error
π ≈ 3.1          0.041592653… < 0.1
π ≈ 3.14         0.001592653… < 0.01
π ≈ 3.141        0.000592653… < 0.001
π ≈ 3.1416       0.00000734… < 0.000008

This error is smaller than 8 millionths, which gives a good approximation to 4 decimal places.
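These absolute errors can be checked directly:

```python
import math

# Absolute error of successive decimal approximations of pi,
# together with the bound stated for each one.
for approx, bound in [(3.1, 0.1), (3.14, 0.01), (3.141, 0.001), (3.1416, 0.000008)]:
    error = abs(math.pi - approx)
    print(f"pi ~ {approx}: error = {error:.9f} < {bound}")
    assert error < bound
```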
Low seedling emergence (1-20%) of Agave macroacantha (Arizaga and Ezcurra, 2002) and Yucca brevifolia (Reynolds et al., 2012) has been found under and outside nurse plants after two years; moreover, this low seedling emergence is also consistent for Banksia species from Mediterranean ecosystems under soil temperature increases and rainfall variations using OTCs (Cochrane et al., 2015). However, Pérez-Sánchez et al. (2011) found high seed germination (75%) in Agave lechuguilla after exposure for 2 h at 70 ºC and then at room temperature every day for 14 d. These results are contrary to the germination recorded in A. striata; thus, it is possible that future temperature increases could put the sexual reproduction of this species at risk. Furthermore, we found a high tolerance to physical stress in Y. filifera even inside OTCs, reflected in high germinability; however, this result does not agree with the low germinability (48%) reported by Pérez-Sánchez et al. (2011) for Y. decipiens after exposure for 2 h at 70 ºC and then at room temperature every day for 14 d. Our findings indicate that Y. filifera and A. striata do not form SSB. Therefore, our hypothesis that increases in mean soil temperature would shift the seasonal dynamics and persistence of SSB, as well as germinability over time, was only confirmed for A. striata.
Finally, an ANN model was built to categorize the wines according to their oxygen consumption rate. This model was trained by correlating the chemical data with the oxygen consumption curves of the 108 model wines, and tested using the 32 real white and red wines, as further explained below. Owing to the complexity of the wine matrix and the multiplicity of chemical compounds linked to the wine's antioxidant capacity (Zúñiga et al., 2014), several ANNs were proposed in order to categorize the oxygen consumption rate of a wine on the basis of its basic chemical composition. The multilayer perceptron ANN used in this work has 7 input variables (input layer, Figure 1 supplementary) for each network: a binary variable indicating whether the wine is red or white, and the six chosen chemical parameters of the wines (i.e., A%, TA, SO2T, Fe, Cu, and TP). The output variable (output layer, Figure 1 supplementary) of each ANN was a numeric variable representing the wine's oxygen consumption rate, used to compare wines with different rates. Both the input and output variables were normalized so that the mean of each variable is zero and its standard deviation is one.
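The normalization step described above (zero mean, unit standard deviation per variable) can be sketched as follows; the data here are synthetic stand-ins for the 7 input variables and the output, not the actual wine measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 108 samples with 7 inputs (red/white flag plus
# six chemical parameters) and one output (oxygen consumption rate).
X = rng.standard_normal((108, 7)) * 5.0 + 10.0
y = X @ rng.standard_normal(7) + rng.standard_normal(108)

# Z-score normalization: zero mean, unit standard deviation per variable.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
y_norm = (y - y.mean()) / y.std()

print(np.allclose(X_norm.mean(axis=0), 0.0), np.allclose(X_norm.std(axis=0), 1.0))
```

The stored means and standard deviations would then be reused to normalize the test wines before feeding them to the trained network.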
It follows necessarily from this that the factual circumstances that constitute the object of intent in the sense of § 16 can only be identified by means of the elements of the offence definition. In the bodily-injury case proposed in the example, A knows the conditions of application of the expression "corporal maltreatment", so that he himself could readily determine that, by resoundingly slapping V on the ear, he maltreats him corporally. However, very few offence definitions of the StGB have conceptual components that are evident (to the layperson). Rather, if the classification of the factual circumstances relevant to the realization of the offence were left to the perpetrator's own conceptions, the resulting errors would be very serious, since those conceptions could diverge considerably from the meaning of the elements of the offence and thereby modify the norm of conduct binding on the perpetrator. The consequences would be chaotic. If, for example, A's belief that only officially certified pieces of paper count as documents were decisive for the classification of the case, then not only would the prohibition of suppressing documents have a different content, but the destruction of a ticket would also not constitute suppression of a document. In other words: every error of prohibition would lead to an incorrect determination of the factual circumstances relevant to intent.
independent realizations falling within a very narrow range of φ values for any given Γ. In the past, this led to the conclusion that this was a truly reversible process, where lowering or raising Γ would lead to the same steady-state φ. In the inset of Fig. 1(a), two of the independent realizations are shown for the low-tap-intensity region. From this picture, it is clear that steady states corresponding to a given Γ can differ from one realization to another. Notice that in the inset the error bars for the two isolated realizations correspond to the standard error of the mean (SEM), which gives an estimate of the uncertainty of the reported mean value rather than the size of the φ fluctuations. For these two realizations, although the mean φ seems to agree within the estimated error for intensities Γ > 1.5, it is clear that they are different for
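For reference, the SEM mentioned above is the sample standard deviation divided by the square root of the number of samples; a minimal sketch with made-up φ values:

```python
import numpy as np

# Made-up packing-fraction (phi) measurements from one realization.
phi = np.array([0.612, 0.608, 0.615, 0.610, 0.611, 0.609])

# SEM quantifies the uncertainty of the reported mean,
# not the spread (fluctuation size) of phi itself.
sem = phi.std(ddof=1) / np.sqrt(len(phi))
print(phi.mean(), sem)
```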
A primary control on the geodynamics of rifting is the thermal regime. To better understand the thermal regime of the northern Gulf of California, we systematically measured heat flow across the Wagner Basin, a tectonically active basin that lies at the southern terminus of the Cerro Prieto fault. Seismic reflection profiles show sediment in excess of 5 s two-way travel time, implying a sediment thickness > 5 km. The heat flow profile is 40 km long, has a nominal measurement spacing of ∼ 1 km, and is collocated with a seismic reflection profile. Heat flow measurements were made with a 6.5 m Fielax violin-bow probe. Most measurements are of good quality in that the probe fully penetrated the sediments and measurements were stable enough to perform reliable inversion for heat flow and thermal properties. We have estimated corrections for environmental perturbations due to sedimentation and changes in bottom water temperature. The mean and standard deviation of heat flow across the western, central, and eastern parts of the basin are 220 ± 60, 99 ± 14, and 889 ± 419 mW m⁻², respectively. Corrections for sedimentation would increase heat flow across the central part of the basin by 40 to 60%. We interpret the relatively high heat flow and large variability on the western and eastern flanks in terms of fluid discharge, whereas the more consistent values across the central part of the basin are suggestive of conductive heat transfer. This interpretation is consistent with the seismically imaged pattern of faulting, which shows faults near the seafloor across the western and eastern flanks of the basin. Based on an observed fault depth of 1.75 km, we estimate the Darcy velocities through the western and eastern flanks at 2 and 8 cm/yr, respectively.
Prediction of the evolution of GHGs in the atmosphere requires an understanding of their sources and sinks. Therefore, inverse modelling techniques applying atmospheric concentration measurements monitored at global surface networks are used (Bousquet et al., 2011). The in-situ surface measurements show very high precision and absolute accuracy (approx. 0.1 %), but they are strongly affected by local processes like small-scale turbulence or nearby sources or sinks. It is very difficult for the inverse models to capture these small-scale processes. In this context, vertically averaging the concentrations can be helpful. For instance, Olsen and Randerson (2004) document that total column-averaged observations of GHGs are significantly less affected by small-scale processes, but still conserve valuable GHG source/sink information. However, total column-averaged data are affected by the stratospheric contribution, the correct modelling of which is a significant error source when investigating the GHG cycling between the atmosphere, the biosphere, and the ocean. Ground-based high spectral resolution FTIR measurements allow a precise determination of the atmospheric abundances (total column amounts and vertical profiles) of many constituents, including GHGs. The ground-based FTIR total column data are essential for the validation of GHGs measured from space by current and future satellite sensors (e.g.
which it will have to face in the next economic cycle. Even companies that work just-in-time need a sales forecast in order to draw up production plans, even though subsequent scheduling is governed by the kanban system. For the functional areas of finance and/or accounting, forecasts provide the basis for budget planning and cost control, while for marketing, the forecast serves to plan the development of new products, compensate the sales force, and make other decisions. In short, since all of a company's activity is ultimately oriented towards, and conditioned by, the sale of its products, the sales forecast is a fundamental element of its management. Forecasting is the basis of medium- and long-term planning and therefore constitutes a strategic decision (Heizer and Render, 2001). A forecasting error can cause either an excess of labour, equipment, and material capacity or, on the contrary, a lack of capacity to meet demand, generating in the first case excess costs and in the second case demand that cannot be served, or is served only with a low quality of service. On the other hand, no forecasting method is perfect, since it is not possible to know exactly what will happen in the future; therefore, having forecasts neither replaces decision-making nor eliminates risk.
These solutions attempt to model the effect of SA by increasing the number of states in the dynamic model of the INS/GPS. The reason behind this approach is that, for a given autocorrelation function, there always exists a Gaussian random process with the same autocorrelation function. Thus, if the noise input is modelled as a zero-mean white stochastic process, then the modification of the spectrum is left to a shaping filter, making it necessary to augment the dynamic model of the plant. Our approach, on the contrary, keeps the number of states constant by considering that the modification may be incorporated into our assumptions about the noises, in this case, the measurement noise.
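The shaping-filter idea can be sketched as a first-order Gauss-Markov process driven by zero-mean white noise; the correlation time and variance below are illustrative values, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Colored noise with an exponential autocorrelation, produced by
# driving a first-order shaping filter with zero-mean white noise.
beta, sigma, dt, n = 1.0 / 100.0, 1.0, 1.0, 200_000
a = np.exp(-beta * dt)               # discrete-time AR(1) coefficient
q = sigma * np.sqrt(1.0 - a * a)     # keeps the process variance at sigma^2

x = np.empty(n)
x[0] = sigma * rng.standard_normal()
white = rng.standard_normal(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + q * white[k]

# The one-step autocorrelation should be close to a = exp(-beta*dt).
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(rho1, 3), round(a, 3))
```

In the state-augmentation approach, x would become an extra state of the plant; the alternative described above instead folds the correlation into the assumed measurement-noise statistics.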
One of the earliest video quality metrics based on a vision model was developed by Lukas and Budrikis. In this quality prediction, the first stage of the model constitutes a nonlinear spatio-temporal model of a visual filter describing threshold characteristics on uniform background fields. The second stage incorporates a masking function in the form of a point-by-point weighting of the filtered error, based on the spatial and temporal activity in the immediate surroundings, in order to account for non-uniform background fields. The processed error, averaged over the picture, is then used as a prediction of the picture quality. The model attempted to predict the subjective quality of moving monochrome television pictures containing arbitrary impairments. The MPQM by van den Branden Lambrecht and Verscheure simulates the spatio-temporal model of the human visual system with a filter-bank approach. The perceptual decomposition of the filter accounts for the key aspects of contrast sensitivity and masking. Since the eye's sensitivity varies as a function of spatial frequency, orientation, and temporal frequency, and the perception of a stimulus is a function of its background, the authors jointly modelled the contrast sensitivity function and the masking function to explain visual detection. The metric also accounts for the normalization of cortical receptive-field responses and intra-channel masking. Pooling of the prediction data from the original and coded sequences in the multi-channel model supports higher levels of perception. The authors present a global quality measure as well as metrics for the performance of basic features, such as uniform areas, contours, and textures in a video sequence. The metrics were tested for applications of high-bitrate broadcasting using the MPEG-2 coder and low-bitrate communication using H.263. The sequences used were Mobile, Calendar, Flower Garden, and Basket Ball for MPEG-2, and Carphone and LTS Sequence for H.263. In encoding experiments, the metric's saturation effect was compared with PSNR and found to correlate with aspects of human vision.
The loss function (2), in addition to providing a natural measure of estimation quality, namely the generalized mean symmetric ratio, can be representative of the incurred cost in specific applications. In spite of this, it has not been used previously in the context of estimation problems, to the author's knowledge. As an example of application, consider the production of a certain device which is subject to manufacturing defects, such as image sensors for digital cameras. Several factors in the production process (such as the presence of dust particles) may result in a sensor with specific pixels systematically showing incorrect information. Since it would be too expensive to discard all sensors that have some defect, the commonly adopted solution is as follows. Each produced sensor is tested, and if the number of defective pixels is not too large, it is accepted. The locations of such pixels are permanently recorded in the camera, so that they can be corrected as part of the processing applied by the camera to generate the image.
A first step in the design process is to choose the base shape of the reflector, which will be square in our case. Design involves the selection of an optimal set of parameters that characterise the reflector. Fig. 3 shows an example of parametric modelling for a square serrated-edge reflector. In this scheme, 2·r is the inner side length, h is the serration depth and Nver is the number of serrations per side.
Abstract. This paper presents results from comparing different Wi-Fi fingerprinting algorithms on the same private dataset. The algorithms were realized by independent teams in the frame of the off-site track of the EvAAL-ETRI Indoor Localization Competition, which was part of the Sixth International Conference on Indoor Positioning and Indoor Navigation (IPIN 2015). Competitors designed and validated their algorithms against the publicly available UJIIndoorLoc database, which contains a large reference and validation data set. All competing systems were evaluated using the mean positioning error, with penalties, on a private test dataset. The authors believe that this is the first work in which Wi-Fi fingerprinting results delivered by several independent, competing teams are fairly compared under the same evaluation conditions. The analysis also comprises a combined approach: results indicate that the competing systems were complementary, since an ensemble combining three competing methods reported the overall best results.
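A minimal sketch of such an ensemble follows. The method outputs, the ground truth, and the floor penalty value are all invented for illustration; the actual competition metric and combination rule differ:

```python
import numpy as np

# Hypothetical (x, y) estimates from three fingerprinting methods
# for one test fingerprint, plus their floor estimates.
estimates = {
    "method_a": np.array([12.0, 34.5]),
    "method_b": np.array([11.2, 35.1]),
    "method_c": np.array([12.6, 33.9]),
}
floors = [2, 2, 3]
truth, true_floor = np.array([11.8, 34.7]), 2

# Ensemble: average the coordinates, majority-vote the floor.
combined = np.mean(list(estimates.values()), axis=0)
floor = max(set(floors), key=floors.count)

# Positioning error plus an invented penalty for a wrong floor.
error = float(np.linalg.norm(combined - truth))
penalty = 4.0 * (floor != true_floor)
print(round(error + penalty, 2))
```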