The complex information encoded in the element connectivity of a system opens the possibility of graphical processing of divisible systems using graph theory. One application in this sense is the quantitative characterization of the molecular topologies of drugs, proteins and nucleic acids, in order to build mathematical models such as Quantitative Structure-Activity Relationships between the molecules and a specific biological activity. These types of models can predict new drugs, molecular targets and molecular properties of new molecular structures, with an important impact on Drug Discovery, Medicinal Chemistry, Molecular Diagnosis, and Treatment. The current review is focused on the mathematical methods used to encode connectivity information in three types of graphs (star graphs, spiral graphs and contact networks) and on three in-house scientific applications dedicated to the calculation of molecular graph topological indices (S2SNet, CULSPIN and MInD-Prot). In addition, examples of the results of this methodology on drugs, proteins and nucleic acids are presented, including the Web implementation of the best graph-based molecular prediction models.
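As a minimal illustration of what a topological index is (not the actual index families computed by S2SNet, CULSPIN or MInD-Prot), the classical Wiener index of a toy molecular graph can be computed with networkx:

# Minimal sketch: the Wiener index, one classical topological index,
# computed on a toy molecular graph with networkx. This only shows how
# connectivity is turned into a number; the tools named above compute
# far richer index families.
import networkx as nx

# Toy graph: heavy-atom skeleton of n-butane (a 4-node path).
G = nx.path_graph(4)

# Wiener index = sum of shortest-path distances over all atom pairs.
print(nx.wiener_index(G))  # -> 10 for the 4-node path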
LPC has been used to perform formant analysis [Snell and Milinazzo, 1993], music/speech/noise segmentation [Muñoz-Expósito et al., 2005], speaker modification using pole warping [Slifka and Anderson, 1995], etc. Other systems based on LPC make use of a different representation of the filter coefficients, called Line Spectral Frequencies (LSF) or Line Spectral Pairs (LSP) [McLoughlin, 2008], which is more appropriate for performing spectral interpolations. However, LPC-based approaches have a number of drawbacks that motivate research on alternatives. Specifically, the optimal order p of the LPC filter is hard to obtain, and it directly affects the usefulness of the estimated spectral envelope. If the order p is too low, the resulting envelope may fit the spectrum of the signal poorly. In contrast, if p is too high, there may be a problem of overfitting. Moreover, even if the optimal order were known, the estimate would contain systematic errors due to the fact that the harmonic spectrum sub-samples the spectral envelope. These problems are especially manifest in voiced and high-pitched signals.
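As a sketch of the order-selection problem (the test signal, sample rate and orders below are illustrative choices, not values from any of the cited systems), the autocorrelation-method LPC envelope can be computed and compared across orders p:

# Sketch of LPC spectral-envelope estimation (autocorrelation method),
# illustrating how the choice of order p shapes the fit.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(x, p):
    """LPC polynomial A(z) = 1 - sum_k a_k z^-k via the normal equations."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation
    a = solve_toeplitz((r[:p], r[:p]), r[1:p + 1])    # Levinson-style solve
    return np.concatenate(([1.0], -a))

fs = 8000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
x += 1e-3 * np.random.randn(t.size)   # noise floor for numerical stability
x *= np.hamming(t.size)

freqs = np.fft.rfftfreq(4096, 1 / fs)
for p in (2, 8, 64):                  # too low / adequate / overfitting
    envelope = 1.0 / np.abs(np.fft.rfft(lpc(x, p), 4096))
    print(p, freqs[np.argmax(envelope)])  # dominant resonance per order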
These devices, connected to the cloud, will create a new relationship between users and their devices. To improve the quality of that interaction, information about the context and preferences of the user will be used, such as location services, augmented reality on mobile devices and mobile e-commerce. Companies will be able to anticipate the needs of the user and offer more appropriate and personalized services and products (for instance, marketing and commercialization based on geolocation to find offers in real time, or the use of promotional codes, discounts, etc.).
The goal is thus to combine the output of the different classifiers and to generate a single scalar score. This will be used to make the final decision and also gives information on the confidence in the decision. Our ensemble consists of three classifiers based on implicit features, that is, PCA, V-HOG (i.e., the best cost-effective variation of HOG), and log-Gabor, and two classifiers using explicit features, namely, gradient- and symmetry-based classifiers. In addition, a classifier ensemble will be designed for each image region, according to the region-wise performance of the above-mentioned classifiers. However, it must be taken into account that the nature of the output delivered by the classifiers differs. On the one hand, the gradient- and symmetry-based classifiers output likelihoods of the input samples given the vehicle and the nonvehicle classes, as the distributions of the data have been modeled by known functions (bivariate Gaussian for the gradient-based descriptor; Rayleigh and Student's t for symmetry). Since there is no prior information on the classes, the a priori probabilities are equal and the posterior probability of each class is just the normalized likelihood. In contrast, the other three classifiers, based on PCA, HOG, and log-Gabor, are built upon support vector machines and therefore do not provide probabilistic outputs. Instead, they output a soft value 𝑦 that measures the distance to the decision surface 𝑦 = 0: if 𝑦 ≤ 0, the sample is classified as vehicle; if 𝑦 > 0, as nonvehicle. Hence, a normalization scheme is necessary that transforms these values to a common range [0, 1] indicating the support for the hypothesis that the input vector submitted for classification comes from the vehicle class. In Section 3.1, the normalization schemes used are described. Once the classifier outputs are in the same domain, the normalized scores are combined through a combination rule, as discussed in Section 3.2.
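The actual schemes are those described in Section 3.1; purely as an illustrative sketch (with a hypothetical slope parameter A, and a Platt-style sigmoid standing in for whatever mapping is actually used), SVM margins and class likelihoods can be brought to the common [0, 1] support range as follows:

# Illustrative sketch only; the actual normalization schemes are those
# of Section 3.1. The sigmoid slope A is a hypothetical, tunable value.
import numpy as np

def svm_margin_to_support(y, A=2.0):
    """Map the signed distance to the decision surface to a support
    value in [0, 1]; y <= 0 (vehicle side) yields support >= 0.5."""
    return 1.0 / (1.0 + np.exp(A * y))

def likelihoods_to_posteriors(l_vehicle, l_nonvehicle):
    """Equal priors: posteriors are just the normalized likelihoods."""
    total = l_vehicle + l_nonvehicle
    return l_vehicle / total, l_nonvehicle / total

margins = np.array([-1.5, -0.1, 0.4])      # SVM-based classifiers
print(svm_margin_to_support(margins))      # high, ~0.5, low support
print(likelihoods_to_posteriors(0.8, 0.2)) # -> (0.8, 0.2)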
optimization of a ‘bank’ of three spatial filters, each of which cancels one of the most probable jammer signals. The short-time direction probabilities should then be used to switch between the spatial filters, so as to cancel, within each time frame, the jammer with the highest activity. This approach combines spatial and envelope filtering. The disadvantages of unconstrained optimization of spatial filters are avoided. Furthermore, the artifacts of envelope processing are reduced, as this information is only used to switch between spatial filters, each of which preserves the target well. Interestingly, Peissig and Kollmeier reported results from speech intelligibility measurements in spatial configurations with one target and two or three spatially separated noise sources, which are compatible with the findings of this paper. It was found that humans can separate no more than two spatially separated sound sources, i.e., only one noise source can be suppressed at a time. Two noise sources can only be suppressed if the signals are modulated, i.e., the subjects seem to switch between two spatial filters depending on the activity of the sources. A similar behaviour would be expected from the combination of spatial filtering and envelope filtering outlined in this section.
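As a schematic sketch of this switching logic only (the filters and direction probabilities below are random placeholders, not the actual beamformer design or probability estimator):

# Per time frame, apply the pre-optimized spatial filter aimed at the
# jammer with the highest short-time direction probability.
import numpy as np

def filter_bank_output(frames, filters, direction_probs):
    """frames: (T, mics, n) signal blocks; filters: (J, mics), one
    null-steering beamformer per probable jammer; direction_probs: (T, J)."""
    out = np.empty((frames.shape[0], frames.shape[2]))
    for t, frame in enumerate(frames):
        j = np.argmax(direction_probs[t])  # most active jammer this frame
        out[t] = filters[j] @ frame        # its filter preserves the target
    return out

T, mics, n, J = 100, 2, 256, 3
frames = np.random.randn(T, mics, n)         # placeholder microphone data
filters = np.random.randn(J, mics)           # placeholder spatial filters
probs = np.random.dirichlet(np.ones(J), T)   # placeholder direction probs
enhanced = filter_bank_output(frames, filters, probs)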
In recent years, the size of digital image collections has increased rapidly. Every day, gigabytes of images and sequences are generated. This information has to be organized so as to allow efficient browsing, searching and retrieval. Increasing interest has therefore been paid to the study of image retrieval. For that purpose, two different strategies have been used to retrieve data: one based on manual annotations and one based on visual features. Even though many advances have been made in the field of text-based retrieval, major difficulties remain, especially when the image collection is large. One difficulty is the vast amount of labor required to manually annotate images; a second is the subjectivity of human perception. Perceptual subjectivity and annotation impreciseness may cause unrecoverable mismatches in the later retrieval process. Thus, with the emergence of large image collections, the manual annotation strategy has become an acute problem.
groups to explore, investigate and analyze authentic problems (Area, 2005). According to the Buck Institute for Education (BIE), Project-Based Learning (PBL) leads students to carry out a search process whose main aim is to answer a question, problem or challenge. Through this approach, students not only learn the content, but also bring into play a range of skills related to information (such as searching, processing or dissemination), collaboration, communication, critical thinking and organization, amongst others.
and memory, and evaluated the performance of the system on a prediction task. Memory and consistency properties were understood from a dynamical point of view and linked to the injection locking properties. Good prediction was observed where the system exhibited a balance between consistency and memory properties, indicating that both of them are necessary for success on a prediction task. These results were consistently obtained as we evaluated the impact of other critical parameters in the photonic system. The parameter evaluation also showed an acceptable parameter tolerance for optimized operation. Subsequently, we extended the capabilities of the system. We demonstrated that this photonic reservoir computer is capable of processing information at 20 GSa/s, in our case limited by the instrumentation, with similar performance compared to slower speeds. Maintained performance at faster modulation indicates that information processing devices based on semiconductor lasers might operate at even higher speeds. Fundamental properties were also evaluated in a much wider optical frequency span, showing that suitable properties and good prediction performance can be observed when the injection is hundreds of GHz detuned from the solitary emission of the semiconductor laser inside the reservoir. This opens the door to injection at multiple wavelengths, where the reservoir acts on a single or multiple injected channels simultaneously. Last, we demonstrated how one can create a delay-based reservoir with nodes with different sets of properties, i.e. heterogeneous reservoirs. In this case, we accomplished it by additionally modulating the injection current of the semiconductor laser using a square waveform. The resulting reservoir exhibited an improved prediction performance and parameter tolerance compared to the same reservoir without the bias current modulation.
We performed experimental tests using a Pelican quadrotor and a moving colored target. The testbed, shown in Figure 3, has a low-level stability controller based on PID that uses information from GPS, IMU, pressure altimeter and magnetometer, fused using a Kalman filter. This controller is embedded and closed (unmodifiable), but its gains are tunable. On-board vision processing is achieved using a dual-core Atom 1.6 GHz processor with 1 GB RAM, a wireless interface and support for several types of USB cameras (mono or stereo). This computer runs Linux, working in a multi-client wireless 802.11(a/b/g) ad-hoc network that allows it to communicate with a ground station PC used for monitoring and supervision.
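Since the on-board controller is closed, the following is only a generic sketch of the PID law it is described as implementing; the gains are illustrative (on the real testbed, the gains are exactly the part that is tunable):

# Generic PID sketch; gains and set-point values are illustrative only.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. altitude hold: thrust correction from a 0.3 m altitude error
altitude_pid = PID(kp=1.2, ki=0.05, kd=0.4)
thrust_correction = altitude_pid.update(error=0.3, dt=0.02)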
Combining image (field) capture and/or generation with specific optical signal processing operations on the field, prior to capture or post-generation, remains a very useful optical system approach. This approach is quite conservative and mimics the development of microscopy. Zernike phase contrast microscopy is successful because it combines an understanding of the optical system physics with specific application (a priori) knowledge, for example, that one is dealing with a weakly scattering phase or low-contrast object. Suitable optical-hardware-based optical information processing, combined with enhanced modern technology, i.e. CCD cameras and image processing software, can produce results of sufficient quality (and at a reasonable price) for many current biomedical laboratory applications.
should contain exemplar data describing the classes under study. This is often composed of corpora of speakers portraying the investigated pathologies and normophonic cases. The data acquisition process should follow certain guidelines so as not to introduce unexpected variability, including the avoidance of external sources of noise and the maintenance of the same acoustic and instrumental conditions during the recording process. Moreover, the corpus should be large enough to contain all possible variations within pathological or normophonic conditions, while trying to maintain a balance among classes in terms of age, sex, dialect, etc. The idea in doing so is to design generalist models that disregard the particularities of the speakers themselves, and thus identify the actual characteristics related to normophonic or dysphonic conditions. The management of the corpus for the storage and accessibility of recordings from a medical perspective should also be taken into account, as this permits the creation of synergies towards certain labours such as the diagnosis of the pathologies, or the assessment from a perceptual point of view as given by different evaluators. Some considerations regarding the management of a large corpus of dysphonic and dysarthric speakers in a clinical setting are discussed in .
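As a small illustrative check of the class balance recommended above (the metadata table and column names are hypothetical placeholders, not an actual corpus):

# Cross-tabulate condition against demographic strata to spot imbalance.
import pandas as pd

# Hypothetical speaker metadata; in practice this would come from the
# corpus management system discussed above.
meta = pd.DataFrame({
    "condition": ["normophonic", "dysphonic", "dysphonic", "normophonic"],
    "sex": ["F", "M", "F", "M"],
    "age_group": ["18-40", "40-65", "18-40", "40-65"],
})

for stratum in ("sex", "age_group"):
    print(pd.crosstab(meta["condition"], meta[stratum], normalize="index"))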
An interesting aspect of this scenario is that in region IV one can have excitable behavior through two different mechanisms. On the one hand, and similarly to the behavior analyzed in Chapter 5 and Refs. [182, 183], close enough to the SL line one has excitability if the fundamental solution is appropriately excited, such that the oscillatory behavior existing beyond the SL is transiently recreated. The second mechanism takes place close to the SNIC line, where the oscillatory behavior that is transiently recreated is that of the oscillations in region V. Both excitable behaviors exhibit a response starting at zero frequency (or infinite period), as both bifurcations are mediated by a saddle, whose stable manifold is the threshold beyond which perturbations must be applied to excite the system. In neuroscience terminology, both excitable behaviors are class (or type) I [107, 108], although there are important differences between them. The SNIC-mediated excitability is easier to observe than the one associated with a saddle-loop bifurcation for two reasons. First, it occurs in a broader parameter range, due to its square-root scaling law (6.3), with respect to the SL excitability.
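The scaling law labelled (6.3) is not reproduced here; as a reminder, the standard normal-form results (textbook forms, not this chapter's own derivation) give for the oscillation period near the two bifurcations

$$ T_{\mathrm{SNIC}} \sim \frac{C}{\sqrt{\mu - \mu_c}}, \qquad T_{\mathrm{SL}} \sim \frac{1}{\lambda_u}\,\ln\!\frac{1}{\mu - \mu_c}, $$

where $\mu_c$ is the bifurcation point and $\lambda_u$ the unstable eigenvalue of the saddle; both periods diverge at onset, consistent with the zero-frequency (class I) response described above.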
With the goal of allowing for the construction of reading charts and to follow the design philosophy of optotypes, we created the font Optotipica, a TrueType font suitable for any software system supporting OpenType fonts. It contains the Latin Basic encoding with lining figures and all the diacritics used in Western Europe. It includes upper- and lower-case letters and numerals (Fig. 6), thus allowing for the construction of any type of near vision reading chart, as well as distance visual acuity testing letter charts. Upper-case letters are contained in a 5s×4s grid pattern with a stroke-width s. The em square dimensions are 3s×3s except in the letters f, i, j, l, m, t and w. The dimensions of ascenders and descenders are 2s. Numbers are contained in the same grid as upper-case letters.
Finally, our paper is also related to the literature on information theory and to a growing and diverse economic literature that uses entropy-based measures to describe levels of informativeness. The concept of the power of a signal that we use in this paper was originally proposed by Shannon (1948) in his seminal paper on communication. Subsequently, entropy-based measures have been broadly used by applied mathematicians to model a number of aspects of communication, ranging from data compression and coding to channel capacity and distortion theory. Nevertheless, such measures remained seldom used by economists for decades. Recently, a number of papers have incorporated entropy-based measures to model communication and levels of informativeness in several economic phenomena. For example, Sciubba (2005) uses the power of a signal to rank information in her work on the survival of traders in financial markets under asymmetric information. Cabrales, Gossner, and Serrano (2013) propose, for a class of no-arbitrage investment problems under ruin-averse preferences, an entropy-based measure which they call entropy informativeness. Using entropy informativeness, they obtain the interesting result that one information structure dominates another if and only if, whenever the investment project associated with the first one is rejected at some price, so is the project associated with the second. Nevertheless, entropy informativeness is not a novel concept in information theory, since it coincides with the power measure of the signal associated with the corresponding information structure.
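For reference (these are the standard information-theoretic definitions, not the paper's own notation), entropy-based measures of this kind build on the Shannon entropy of a signal $S$ taking value $s$ with probability $p(s)$,

$$ H(S) = -\sum_{s} p(s)\log p(s), $$

and informativeness about a state $\Omega$ is then typically captured by the expected entropy reduction (mutual information) $I(\Omega;S) = H(\Omega) - \mathbb{E}_{S}\!\left[H(\Omega \mid S)\right]$.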
This will have as a fundamental consequence the dissemination of this important historical heritage. In many cases, these Cultural Assets are difficult to access or are in an excessively poor state of conservation, which makes it impossible to visit them safely. In addition, in the most relevant cases, the provision of autonomous devices for the issuance of information via Bluetooth (beacons) can be planned for on-site access to the available information, independently of the internet network, a fact that would reinforce these itineraries and Cultural activities. The new possibilities of reference and location that current technology offers, together with precise knowledge of each fortification and its territory, make it possible for citizens to know these heritage assets in an integral way, facilitating a Cultural journey characterized by a greater immersion in the landscape and territorial values of the area, with quality information (texts, maps, ancient and current images, cartographic representations at different scales...) available through a mobile device.
The EBC method is the conclusion of a project aimed at the recognition and classification of the elements present in a liver tissue sample. In that problem, images were captured with a resolution of 8 bits per pixel in order to ensure that the pattern space had fewer than 256 elements. Parallelizing the algorithm allows its application to images of 24 bits per pixel while meeting response-time restrictions.
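The EBC internals are not detailed here; the following generic sketch (with a hypothetical per-pixel rule standing in for the actual classifier) only illustrates the kind of data-parallel split that makes per-pixel processing of 24-bit images tractable:

# Process horizontal row bands of the image in a process pool.
import numpy as np
from multiprocessing import Pool

def classify_band(band):
    """Placeholder per-pixel classifier applied to one row band."""
    return band.sum(axis=-1) > 384  # hypothetical rule on 24-bit pixels

if __name__ == "__main__":
    image = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
    bands = np.array_split(image, 8)   # 8 horizontal bands
    with Pool(processes=8) as pool:
        labels = np.concatenate(pool.map(classify_band, bands))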
The main external variables or factors (these terms are used interchangeably in TAM; Davis, 1989) relate to individuals, to design and to contextual variables. They include: objective design characteristics, training, computer self-efficacy, user involvement in design, and the nature of the implementation process (Davis & Venkatesh, 1996); and the system's technical design characteristics, user involvement in system development, the type of system development process used, cognitive style, training, documentation, user support consultants, system features, user characteristics, and ultimate behaviour (Davis et al., 1989). A further analysis, based on a review of the published articles, notes that there is no clear pattern with respect to the choice of the external variables considered (Legris, Ingham, & Collerette, 2003). The authors also remarked on the 39 factors affecting information system satisfaction (Bailey & Pearson, 1983) and on the classification of factors proposed by Cheney, Mann, and Amoroso (1986).
In particular, if these conditions refer to peak activity that exceeds system limitations, we are doing stress testing. Stress testing evaluates the behavior of systems that are pushed beyond their specified operational limits [Ngu01]. Stress testing requires an extensive planning effort for the definition of the workload, and this involves the analysis of the different components that affect system behavior (e.g. memory resources, network bandwidth, software failures, database deadlocks, operational profiles). However, this analysis is usually performed ad hoc. We propose to use Fault Tree Analysis to help define workload scenarios. The results of this analysis are composed with the specification statecharts, and we obtain a model that describes how a given workload can be reproduced.
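As a minimal illustration of the fault-tree structure such an analysis manipulates (the events, gates and workload conditions below are hypothetical, not the paper's actual scenarios):

# Evaluate a top event over AND/OR gates of basic workload conditions.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Gate:
    kind: str                           # "AND" or "OR"
    children: List[Union["Gate", str]]  # sub-gates or basic-event names

def occurs(node, active_events):
    """Whether a (sub)tree's event occurs for the given active events."""
    if isinstance(node, str):
        return node in active_events
    results = (occurs(c, active_events) for c in node.children)
    return all(results) if node.kind == "AND" else any(results)

# Hypothetical top event "system overload" traced to workload conditions
overload = Gate("OR", [
    Gate("AND", ["memory_exhausted", "db_deadlock"]),
    "network_saturated",
])
print(occurs(overload, {"memory_exhausted", "db_deadlock"}))  # True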