Acousto-optics is a branch of physics that studies the interaction between acoustic and optical waves. Optics and acoustics have histories of nearly equal duration. Nevertheless, the acousto-optic effect has had a relatively short history, beginning in 1922 with Brillouin's prediction of the diffraction of light by an acoustic wave propagating in an interaction medium. In the past decades great progress has been made in acousto-optics, and it is now a widely used technique in the field of signal processing [2,3]. Acousto-optics has progressed thanks to several technological developments in different areas. The first important development was the laser, which made available sources of intense, monochromatic, coherent light and thus made the acousto-optic effect easier to observe and measure. A second area of development has been the search for new materials for fabricating acoustic wave devices. Research on transducer design has allowed the development of large-bandwidth, large-aperture delay lines with good light-diffraction efficiency. Moreover, a notable portion of modern technical achievements in high-bit-rate optical data processing is directly connected with the use of such nonlinear phenomena as wave mixing and various cross- and self-actions [5, 6]. Recently, two-cascade processing based on a three-wave interaction between coherent waves of different natures (optical and non-optical) has been successfully realized. Then, in parallel, potential performances connected with using collinear wave mixing in the specific case of
decades great progress has been made in acousto-optics, and it is now a widely used technique in the field of data processing [2.11, 2.12]. Nevertheless, a new branch in the study and application of collinear acousto-optical interaction has recently emerged, associated with acousto-optical nonlinearity, for example in the form of three-wave coupled states [2.13-2.15]. That is why we believe these investigations are worth developing: the objects under consideration here are closely connected with the above-mentioned nonlinearity in the regime of weak coupling. Within this consideration, we develop an exact and closed analytical model of collinear light scattering by continuous traveling acoustic waves of finite amplitude in a birefringent material with moderate linear acoustic losses. The main attention is paid to the distribution of the scattered light intensity, which, in the context of our analysis, can be considered the transmission function of this process. In turn, the width of the transmission function can be directly associated with the frequency resolution of the equivalent collinear acousto-optical filter. In so doing, we analyze the peculiarities of the effect conditioned by the acousto-optical nonlinearity, which leads to a measurable dependence of the transmission function on both the applied power density of acoustic waves of finite amplitude and the linear acoustic losses in the crystalline material. The theoretically predicted novel properties of collinear acousto-optical interaction accompanied by nonlinearity and moderate linear acoustic losses are investigated experimentally with an advanced acousto-optical cell made of calcium molybdate (CaMoO4).
Structure inspection is a tedious process that can take a long time when performed by humans. The use of computer programs is crucial to speed up the inspection without compromising accuracy. There are examples in which vision-based models are used to inspect linear elements and structures such as pipelines based on their features [Rathinam et al., 2008]. Image processing helps to identify characteristics of a structure. However, the visible spectrum is not enough to determine failures in the material. Although this kind of data cannot be acquired by common means, analysis over a wider region of the spectrum helps to improve failure detection. Multispectral sensors allow identifying flaws in structures through the collection of pictures corresponding to different wavelengths and the analysis of each image to detect changes in its response relative to normal behavior. This process allows determining whether a specific problem exists in the structure depending on the composition of the images collected [Henrickson et al., 2016]; for example, by using both infrared and RGB cameras to analyze and estimate deterioration on bridges and detect subsurface delamination [Khan et al., 2014].
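As a rough illustration of the per-band idea (not the cited authors' method), deviation from a baseline response can be flagged with a simple threshold; the band name, baseline values and threshold below are all hypothetical:

```python
import numpy as np

def flag_anomalies(bands, baseline, n_sigma=3.0):
    """Flag pixels whose multispectral response deviates from normal behavior.

    bands    -- dict mapping band name -> 2D image (all the same shape)
    baseline -- dict mapping band name -> (mean, std) of the normal response
    Returns a boolean mask that is True wherever any band deviates by more
    than n_sigma standard deviations from its baseline mean.
    """
    mask = None
    for name, img in bands.items():
        mean, std = baseline[name]
        deviant = np.abs(img - mean) > n_sigma * std
        mask = deviant if mask is None else (mask | deviant)
    return mask

# Toy example: a 4x4 "infrared" band with one hot pixel.
ir = np.full((4, 4), 20.0)
ir[2, 3] = 90.0  # a possible subsurface-delamination signature
mask = flag_anomalies({"ir": ir}, {"ir": (20.0, 2.0)})
print(mask.sum())  # number of flagged pixels
```

A real system would combine several co-registered bands (e.g. infrared and RGB) in the `bands` dict, so one anomalous response in any wavelength is enough to mark the region for closer inspection.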
Object recognition algorithms have explored various ways of handling the available data. For example, in [TSSF12] a pixel-wise approach is proposed. That is to say, every pixel is processed independently, obtaining the probability of that pixel belonging to a given part of a human body. Once all pixels are classified, they are matched to a canonical body pose called the Vitruvian manifold. On the other hand, the NNGM approach proposed in Chapter 8 handles the whole body of information at once (the depth point cloud of the hand) and exploits it to obtain the class (gesture) and some of its parts (fingertips). Between these two extremes, many approaches have been proposed, with the granularity of information being a major difference. For example, meaningful body parts are used in [FGMR10]. Bigger parts called poselets are used in [BM09]; these poselets are body parts tightly clustered in both appearance and configuration space. We propose a discriminative approach that builds on the voting idea of Hough Forests [GYR+11], and also on the idea, proposed in [FGMR10], of describing object parts instead of the whole object. The objective of both methods is to detect an object given its parts. We propose to invert the formulation of the problem, trying to detect object parts given an object. For this purpose, we propose to describe object parts with the Oriented
Provenance is information about the entities, activities and people involved in producing a piece of data or a thing, which can be used to form assessments about its quality, reliability and trustworthiness. The main concepts of PROV are entities, activities and agents. Entities are physical or digital assets, such as web pages, spell checkers or, in our case, dictionaries or analysis services. Provenance records describe the provenance of entities, and an entity's provenance can refer to other entities. For example, a dictionary is an entity whose provenance refers to other entities such as lexical entries. Activities are how entities come into existence. For example, starting from a web page, a sentiment analysis activity creates an opinion entity describing the opinions extracted from that web page. Finally, agents are responsible for the activities and can be a person, a piece of software, an organisation or other entities. The Marl ontology has been aligned with the PROV ontology so that the provenance of language resources can be tracked and shared.
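A minimal sketch of the three core PROV concepts in plain Python (this is not the PROV-O vocabulary itself; all identifiers are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A physical or digital asset, e.g. a web page or a dictionary."""
    id: str
    derived_from: list = field(default_factory=list)  # other entities

@dataclass
class Agent:
    """Who is responsible: a person, software, or an organisation."""
    id: str

@dataclass
class Activity:
    """How entities come into existence."""
    id: str
    used: list = field(default_factory=list)             # input entities
    generated: list = field(default_factory=list)        # output entities
    associated_with: list = field(default_factory=list)  # responsible agents

# Example: a sentiment-analysis activity turns a web page into an opinion.
page = Entity("ex:webpage")
opinion = Entity("ex:opinion", derived_from=[page])
analyser = Agent("ex:sentiment-service")
run = Activity("ex:analysis-run", used=[page],
               generated=[opinion], associated_with=[analyser])
print(run.generated[0].id)  # -> ex:opinion
```

In PROV terms, the example records that the opinion entity `wasGeneratedBy` the analysis activity, which `used` the web page and `wasAssociatedWith` the sentiment-analysis agent.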
Traditionally, in order to carry out a statistical study, a statistician would have to collect data relevant to that study. In twenty-first-century society, statisticians no longer need to do this data collection; it is done automatically by computers when we do anything. From Google searches to clicking the "Like" button on your best friend's Facebook profile, these actions are recorded electronically. Furthermore, traditional data sets used to be an ensemble of variables, either numerical or categorical. Today, data science analyses a much wider variety of data, such as images, geolocation data or raw text (e.g. social networks' profile pictures, or tweets and their locations).
There have been previous attempts at novel data processing languages, such as DISPEL, an imperative workflow-based scripting language aimed at supporting the analysis of streamed data. The programmer describes workflows for data-intensive applications, a workflow being a knowledge discovery activity where data is streamed through and transformed into higher-level knowledge. Apart from requiring the data scientist to learn a variety of new concepts, this language is heavily integrated with the ADMIRE platform and requires the features and workflow provided by that platform. The use of DISPEL is supported in the seismology e-science data analysis environment of the VERCE project. DISPEL was ported from the ADMIRE platform to Python, creating dispel4py, which shares the same concepts as DISPEL but is integrated with a platform familiar to seismologists. This approach of building abstractions on top of an already familiar programming language has many advantages, such as easy integration with existing tools and wider acceptance by the community. The R language is designed for statistical computing and graphics, one of its strengths being the integrated support for data manipulation and calculation; for instance, effective data storage and calculations on matrices. As a functional language, R might require, to some extent, the data community to learn new concepts, which may impact its acceptability. The SPRINT project has developed an easy-to-use parallel version of R aimed at the bioinformatics community. This technology allows the addition of parallel functions without requiring in-depth parallel programming knowledge. SPRINT has been very successful in this domain, but it is not designed for exascale: although it works well on a thousand cores, it is not designed to scale up to hundreds of thousands or millions of cores.
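The workflow idea, data streamed through a pipeline of transformations into higher-level results, can be sketched with plain Python generators (this is only an illustration of the concept, not DISPEL or dispel4py syntax):

```python
def source(records):
    """Stream raw data records into the workflow."""
    for r in records:
        yield r

def transform(stream):
    """A processing element: turn raw records into higher-level values."""
    for r in stream:
        yield r * r  # placeholder transformation

def sink(stream):
    """Collect the refined result at the end of the workflow."""
    return list(stream)

# Compose the workflow: data flows source -> transform -> sink,
# one record at a time, without materialising the whole stream.
result = sink(transform(source([1, 2, 3])))
print(result)  # -> [1, 4, 9]
```

Systems like dispel4py generalise this pattern by naming each processing element and letting a runtime map the graph onto parallel workers.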
is loaded with the databases of that server. Click on the option "Use Windows NT Integrated Security". Select the Data Warehouse database from the lower drop-down menu, in our case "TDC DW". Click on Test Connection. If the previous steps were correct, the connection will be successful. Click on OK. Click on OK.
Traffic signs often appear occluded by other objects such as trees, vehicles or other road signs. This traffic sign analysis system gives an estimation of partial occlusions, and having this information for each sign is crucial for a complete inventory system that measures the level of maintenance of a road, including its safety. To estimate this parameter, we use the information provided by the tracking subsystem: if the same road sign is detected in non-consecutive frames, we can determine that a partial occlusion has occurred.
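The frame-gap criterion can be sketched as follows (a hypothetical helper, not the authors' implementation):

```python
def has_partial_occlusion(detected_frames):
    """Return True if a tracked sign was detected in non-consecutive frames.

    detected_frames -- sorted frame indices in which the tracking subsystem
    matched the same road sign. A gap between successive detections suggests
    the sign was partially occluded in the missing frames.
    """
    return any(b - a > 1 for a, b in zip(detected_frames, detected_frames[1:]))

print(has_partial_occlusion([10, 11, 12]))      # -> False
print(has_partial_occlusion([10, 11, 14, 15]))  # -> True (frames 12-13 missed)
```

The size and number of such gaps could further be used to grade how severe the occlusion is, which is the kind of per-sign information a maintenance inventory needs.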
The RAW architecture is the basis of the recently commercialized TILE64 processor, which statically partitions code onto multiple tiles and statically orders communications between these tiles so as to lower communication delay, enabling fine-grained parallelism. This architecture is suitable for stream processing by exploiting low-overhead communications in the high-performance 2D-mesh network. Other proposed architectures, such as TRIPS and Smart Memories, can be dynamically configured to suit and exploit different application classes and granularities. However, they are not designed to detect and exploit granularities among different packets. We add some functionality to a similar multicore architecture, but we address parallelism exploitation analysis for stateful DPI properties, which show different dependency features than general applications. If a particular processing layer is more stressed than others due to variation in network traffic, the OS can monitor statistics and reassign cores to processing layers by updating its own lookup tables.
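A toy sketch of that monitor-and-reassign policy follows; the load metric and the proportional rule are our own illustrative assumptions, not the actual OS mechanism:

```python
def reassign_cores(layer_load, total_cores):
    """Distribute cores across processing layers in proportion to load.

    layer_load  -- dict mapping layer name -> observed load (e.g. queued packets)
    total_cores -- number of cores available for reassignment
    Returns a lookup table: layer name -> number of cores. Every layer keeps
    at least one core; leftover cores go to the most loaded layers first.
    """
    layers = sorted(layer_load, key=layer_load.get, reverse=True)
    total = sum(layer_load.values()) or 1
    table = {l: max(1, int(total_cores * layer_load[l] / total)) for l in layers}
    spare = total_cores - sum(table.values())
    for l in layers:  # hand remaining cores to the busiest layers
        if spare <= 0:
            break
        table[l] += 1
        spare -= 1
    return table

print(reassign_cores({"l2": 10, "l7": 70, "l4": 20}, 8))
```

In the architecture described above, the resulting table would correspond to the OS updating its core-to-layer lookup tables as traffic shifts between layers.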
One of the limitations of these econometric models is that one can fall into misspecification of the model (linear, non-linear, logarithmic, etc.). In addition, they make inferences about trends; that is, the efficiency obtained by means of these methods becomes the efficiency of a unit with respect to the mean of all the DMUs in the sample. As an alternative that avoids the drawbacks of the econometric methods cited above, as well as the estimation of parameters, non-parametric methods arise. Data Envelopment Analysis is, as mentioned earlier, a non-parametric method.
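As an illustration of the idea, the input-oriented CCR efficiency of one DMU can be computed as a small linear program. This is the generic textbook formulation, not any particular study's model, and it requires SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (DEA) efficiency score of DMU o.

    X -- inputs, shape (n_dmus, n_inputs); Y -- outputs, shape (n_dmus, n_outputs).
    Solves: min theta  s.t.  sum_j lam_j X_j <= theta * X_o,
                             sum_j lam_j Y_j >= Y_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n].
    """
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    # Input rows:  -theta*X_oi + sum_j lam_j X_ji <= 0
    A_in = np.hstack([-X[[o]].T, X.T])
    # Output rows: -sum_j lam_j Y_jr <= -Y_or
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Two DMUs, one input and one output each: the second uses twice the
# input for the same output, so it should score 0.5.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(round(ccr_efficiency(X, Y, 0), 3))  # DMU on the efficient frontier
print(round(ccr_efficiency(X, Y, 1), 3))
```

Unlike the econometric approaches above, no functional form is specified: the frontier is built from the observed DMUs themselves.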
2. Design and analysis of interactive multi-objective algorithms. We propose in this thesis two interactive multi-objective algorithms. On the one hand, we present, for the first time in the state of the art, a dynamic interactive multi-objective algorithm called InDM2, an interactive multi-objective optimization metaheuristic for solving dynamic multi-objective optimization problems (DMOPs). This proposal enables the DM to interactively change the region of interest that (s)he desires to approximate by giving and updating a reference point containing her/his preferences. InDM2 incorporates a reference-point-based evolutionary algorithm as its base optimizer, currently including the WASF-GA and R-NSGA-II algorithms, which indeed allows the reference points to be changed during the optimization process. To assist the DM, the approximations obtained by the algorithm are shown in a graphical window. A key component of InDM2 is that its internal design allows different restarting strategies to be specified and applied when changes in the reference point and/or in the problem configuration are detected, making it more versatile. We analyzed InDM2 when solving DMOPs (the FDA family of benchmark problems) and a dynamic version of the combinatorial bi-objective Traveling Salesman Problem, built using real-world streaming traffic data from New York City. InDM2 is able to react and adapt when the problem and the reference points change. Furthermore, it can handle preferences interactively, and it has been able to generate approximations adjusted to the given preferences (i.e., the region of interest) in real time while the problem changes. On the other hand, we introduced SMPSO/RP, an extension of the SMPSO metaheuristic that incorporates a preference articulation mechanism based on reference points. Our approach allows changing the reference points interactively and evaluating the particles of the swarm in parallel.
We compared SMPSO/RP against other interactive metaheuristics such as gSMS-EMOA, gNSGA-II and WASF-GA. The results showed that SMPSO/RP achieves the best overall performance when indicating both achievable and unachievable reference points. We have also measured the time reductions achieved when running the algorithm on a multi-core processor platform. 3. Design and analysis of an artificial decision maker for testing interactive meta-
There are two commonly used classes of methods for smoothing sparse functional data. The first class of methods assumes that individual curves in a given population share the same covariance function. The problem of smoothing n individual univariate functions then becomes equivalent to the problem of smoothing a single bivariate function, and individual curves can be predicted from this covariance function. Functional principal component analysis is among the first-line approaches in this class of methods (Besse & Ramsay 1986, Yao et al. 2005, Peng & Paul 2009), while other examples of this class include Fan & Gijbels (1996), Xiao et al. (2017) and Cai & Yuan (2010). The second class of methods, called functional mixed effects models, assumes mixed effects models that allow strength to be borrowed among individuals. Brumback & Rice (1998) first proposed a penalized smoothing spline mixed effects model. Later developments include mixed effects smoothing splines (Berk 2012), semiparametric mixed effects models (Durban et al. 2005), and various methods employing B-splines (James et al. 2000, Thompson & Rosen 2008, Wu & Zhang 2006).
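The shared-covariance idea can be illustrated by pooling cross-products of centered observations from all curves onto a common grid, a crude sketch of the first step of PACE-style estimation (the grid, centering and toy data are our own simplifications; real implementations smooth a mean curve first and then smooth the covariance surface):

```python
import numpy as np

def pooled_covariance(curves, grid):
    """Estimate a common covariance surface from sparse curves.

    curves -- list of (t, y) pairs of 1D arrays: each subject's sparse
              observation times and values.
    grid   -- 1D array of grid points; observations snap to the nearest point.
    Returns the grid x grid matrix of averaged raw cross-products of
    mean-centered observations (off-diagonal raw covariances).
    """
    m = len(grid)
    sums = np.zeros((m, m))
    counts = np.zeros((m, m))
    # Center with an overall mean (a real method smooths a mean function).
    ybar = np.mean(np.concatenate([y for _, y in curves]))
    for t, y in curves:
        idx = np.abs(grid[:, None] - t[None, :]).argmin(axis=0)
        r = y - ybar
        for j, gj in enumerate(idx):
            for k, gk in enumerate(idx):
                if j != k:  # skip the diagonal, contaminated by noise variance
                    sums[gj, gk] += r[j] * r[k]
                    counts[gj, gk] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# Toy example: two subjects observed only at the grid endpoints.
grid = np.array([0.0, 0.5, 1.0])
curves = [(np.array([0.0, 1.0]), np.array([1.0, 1.0])),
          (np.array([0.0, 1.0]), np.array([-1.0, -1.0]))]
C = pooled_covariance(curves, grid)
print(C[0, 2])  # pooled cross-covariance between the two endpoints
```

Because every subject contributes pairs to the same surface, cells never observed jointly by any subject remain NaN; smoothing this surface is what turns n sparse univariate problems into one bivariate one.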
In this first part of the theoretical framework, two books written by Markowitz are used. The first, Portfolio Selection (Markowitz, 1959), has been used to explain the main concepts relevant to the analysis of securities portfolios and the effects of diversification on the different types of portfolios. After this first approach to portfolio selection, several models regarded as the most relevant for portfolio selection are presented, drawn from the book Mean-Variance Analysis in Portfolio Choice and Capital Markets (Markowitz, 2000). This first part concludes with an exposition of the concept of an investment fund and its possible typologies. The underlying idea of this first part is to introduce the world of portfolio selection from a historical and practical perspective.
OLAP is the acronym for On-Line Analytical Processing. It is a solution used in the field of so-called Business Intelligence whose goal is to speed up the querying of large amounts of data. To this end it uses multidimensional structures (OLAP cubes) that contain data summarized from large databases or transactional systems (OLTP). It is used in sales and marketing business reports, management reports, data mining and similar areas.
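The pre-summarization idea behind an OLAP cube can be illustrated with a tiny pure-Python aggregation over two dimensions (the sales records and dimension names are invented for the example):

```python
from collections import defaultdict
from itertools import combinations

def build_cube(rows, dims, measure):
    """Pre-aggregate a measure over every subset of the given dimensions.

    rows    -- list of dicts (the transactional/OLTP records)
    dims    -- tuple of dimension column names
    measure -- numeric column to sum
    Returns {dimension subset -> {coordinate tuple -> total}}, so that a
    query at any level of the cube becomes a dictionary lookup instead of
    a scan over the raw records.
    """
    cube = {}
    for r in range(len(dims) + 1):
        for subset in combinations(dims, r):
            totals = defaultdict(float)
            for row in rows:
                key = tuple(row[d] for d in subset)
                totals[key] += row[measure]
            cube[subset] = dict(totals)
    return cube

sales = [
    {"region": "north", "product": "A", "amount": 100.0},
    {"region": "north", "product": "B", "amount": 50.0},
    {"region": "south", "product": "A", "amount": 70.0},
]
cube = build_cube(sales, ("region", "product"), "amount")
print(cube[("region",)][("north",)])  # -> 150.0
print(cube[()][()])                   # grand total -> 220.0
```

Real OLAP engines add hierarchies, incremental refresh and storage strategies, but the speed-up comes from exactly this trade: summaries are computed once so that analytical queries avoid touching the transactional data.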
products he or she consumes in the greatest quantity, and to create a database or "big data". Thanks to this, we can focus on improving the application according to the data the statistics give us. In addition, the registration requirement can help tourism statistics with regard to age, sex, origin and products consumed. That is, thanks to registration we will know what kind of public is actually consuming the product and how to improve it in order to keep that segment and attract others. To register, the user will have several options: creating a profile specific to the application, or complementing his or her data with other social networks such as Facebook, Google+, Instagram or Twitter.
Once the data-mining part of the list of data sets was completed, we implemented a series of functions based on the interpretation of the result parameters. Since we focus on the scagnostics parameters to let the classification algorithm distinguish those with an L-pattern from those without an L-shape, we will only assess the results of this package; the other methods will not be analyzed in depth. The first step is the implementation of a function, shape.param(), which, given a series of values per gene (in fact it compares two different sets), returns a data frame with the different associated correlation measures computed by several techniques. This function will later be used in the shiny application for the interpretation of results. Although intended for the analysis rather than for the application, this function has several associated auxiliary functions that allow the values of the different data sets to be integrated per gene, such as merge.gene.expMeth(), which performs the task just described. We use these auxiliary functions to integrate, evaluate and filter our data sets (10 data sets), producing a table with all the values per gene for the different methods. Having integrated all the data sets, we obtain as a result a table with 22,000 human genes from assembly version hg19 with the different correlation methods.
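A language-agnostic sketch of what a shape.param()-style function computes, two value vectors per gene in, a row of correlation measures out, could look like this (plain Python for illustration; the actual R implementation and its measures differ):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def ranks(x):
    """Rank transform (1-based, ties broken by order), for Spearman."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def shape_param(expression, methylation):
    """Return the correlation measures associated with one gene."""
    return {
        "pearson": pearson(expression, methylation),
        "spearman": pearson(ranks(expression), ranks(methylation)),
    }

row = shape_param([1.0, 2.0, 3.0, 4.0], [8.0, 6.0, 4.0, 2.0])
print(row)  # a perfectly monotone decreasing pair
```

Applying such a function to every gene and binding the rows together yields the per-gene table of correlation measures described above.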
Other pipelines have been developed for the analysis of ChIP-seq data. Some of them have automated processes from peak annotation to downstream functional assessments, such as motif discovery [33-35]. Others have automated ChIP-seq analysis from read pre-processing to peak annotation but lack GO term enrichment analysis and motif discovery. Our pipeline includes tools that allow the more complete analysis needed for FAIRE-seq data and organizes the results in a relational database that allows extracting specific information related to a biological question in which users may be interested. Also, the incorporation of three different peak callers and the analysis of the shared identified regions provide users with more useful information, especially given the challenges inherent in the characteristics of the FAIRE-seq signal. Finally, our pipeline generates different exploratory graphics that allow users to check, e.g., whether the assay successfully represents the enrichment of specific genome locations.
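The shared-regions step can be illustrated by intersecting peak intervals from multiple callers (a simplified single-chromosome sketch; real pipelines typically delegate this to tools such as BEDTools):

```python
def shared_regions(peak_sets):
    """Intervals covered by every peak caller.

    peak_sets -- list of lists of (start, end) peaks, one list per caller.
    Returns the intervals contained in at least one peak of each set,
    computed by repeated pairwise intersection of sorted interval lists.
    """
    def intersect(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            start = max(a[i][0], b[j][0])
            end = min(a[i][1], b[j][1])
            if start < end:
                out.append((start, end))
            # Advance whichever interval ends first.
            if a[i][1] < b[j][1]:
                i += 1
            else:
                j += 1
        return out

    result = sorted(peak_sets[0])
    for peaks in peak_sets[1:]:
        result = intersect(result, sorted(peaks))
    return result

caller1 = [(100, 200), (300, 400)]
caller2 = [(150, 250), (390, 500)]
caller3 = [(100, 500)]
print(shared_regions([caller1, caller2, caller3]))  # -> [(150, 200), (390, 400)]
```

Regions supported by all three callers are the most reliable candidates given the noisy FAIRE-seq signal, which is why the consensus set is reported alongside each caller's own peaks.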