to the reference system of the first image. This procedure is performed step by step. Every new image is assembled to the preceding one by determining the position of its coordinate origin with respect to that of the preceding image. This procedure allows the series of images to be integrated into a single image of the complete histological preparation. It also integrates the labels by means of a coordinate transformation that maps them onto the reference system of the biological specimen. Afterwards, the coordinates of the specific labels within the unified reference system are stored as a single text file, and these data are used to create a new image. This image displays the spatial distribution pattern of the whole label population of the biological specimen. The result of this procedure is illustrated in figure 3. Note that an overlapping border area between adjacent images is needed in order to perform an appropriate assembly of the images, since it allows cross-correlation to be applied between the overlapped areas. Optimal automatic assembly was obtained by cross-correlating the borders of the negatives of the original, non-processed images. Repeated labels are also deleted during this process.
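The border alignment step can be sketched as a search for the integer shift that maximizes the cross-correlation between the overlapping strips of two adjacent images. This is a minimal illustration under our own assumptions (function name, search window size, mean-centred correlation score); the original IDL-based assembly would differ in detail:

```python
import numpy as np

def best_shift(strip_a, strip_b, max_shift=5):
    """Estimate the integer (row, col) shift that aligns strip_b to strip_a
    by maximizing the mean-centred cross-correlation over a small window."""
    best, best_score = (0, 0), -np.inf
    h, w = strip_a.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Crop both strips to their common overlap for this candidate shift
            a = strip_a[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = strip_b[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            score = np.sum((a - a.mean()) * (b - b.mean()))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Once the shift is known, the second image's label coordinates can be translated into the first image's reference system by adding the same offset.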
Digital image processing helps to enhance the visualization of important details in thermal images. Correct treatment of a thermogram can reveal a clear asymmetry pattern, but quantifying it requires translating the image data into numerical data, and this is where the challenge appears. Many studies have tried to translate that information into automatic thermographic classification; algorithms such as decision trees, support vector machines, and others have been used (Mookiah, 2012), but with low resulting sensitivity and specificity. Artificial neural networks have also been applied to classify normal and abnormal thermograms (Koay, Herry, & Frize, 2004), but with a small number of image samples, which makes the obtained results hard to trust and suggests that a larger number of images is needed to achieve better classification performance.
In recent years, the size of digital image collections has increased rapidly. Every day, gigabytes of images and sequences are generated. This information has to be organized so as to allow efficient browsing, searching, and retrieval. Accordingly, increasing interest has been paid to the study of image retrieval. For that purpose, two different strategies have been used to retrieve data: one based on manual annotations and one based on visual features. Even though many advances have been made in the field of text-based retrieval, major difficulties remain, especially when the image collection is large. One difficulty is the vast amount of labor required to manually annotate images; a second is the subjectivity of human perception. Perception subjectivity and annotation imprecision may cause unrecoverable mismatches in the later retrieval process. Thus, with the emergence of large image collections, the manual annotation strategy has become an acute problem.
Different studies on the use of data mining in medical image processing have rendered very good results using neural networks for classification and grouping. In recent years, different computerized systems have been developed to support the diagnostic work of radiologists in mammography. The goal of these systems is to focus the radiologist's attention on suspicious areas. They work in three steps: (i) analog mammograms are digitized; (ii) images are segmented and preprocessed; (iii) regions of interest (ROIs) are found and classified by neural networks [Lauria et al., 2003].
By applying digital image processing techniques to satellite spectroscopy it is possible to “differentiate diverse covers present on the Earth’s surface such as glaciers, volcanoes, vegetation, soils, water, types of rock outcrops, etc.” (LoVecchio, Lenzano, Richiano, & Lenzano, 2016). Based on those techniques, a spectral response of maize crops is generated, making it possible to calculate areas of the same variety and period in an automated process (Galindo et al., 2014). According to Figure 6, a lower spectral slope in the red and near-infrared bands is observed. This may indicate that, by implementing methods for extracting the REP parameter (Red Edge Position), which reflects physiological and phenological changes of any plant species, together with satellite image spectroscopy (Ángel, 2012), the phytosanitary status of different crops can be validated, since the spectral response of the cover would be dramatically affected by this parameter.
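As an illustration of REP extraction, the classic four-point linear interpolation (Guyot & Baret) estimates the red edge wavelength as the point where reflectance crosses the midpoint between the red minimum and the NIR plateau. The band reflectance values used below are hypothetical, not from the study:

```python
def red_edge_position(r670, r700, r740, r780):
    """Red Edge Position (nm) by four-point linear interpolation:
    find where reflectance reaches the midpoint between the red
    minimum (670 nm) and the NIR plateau (780 nm)."""
    r_edge = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

# Hypothetical reflectances for a healthy canopy
rep = red_edge_position(0.05, 0.08, 0.30, 0.45)
```

A shift of the REP toward shorter wavelengths is the kind of spectral change that stress or phytosanitary problems would produce.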
One of the most interesting fields in digital image processing is the segmentation of an image into its different objects (Gonzalez and Woods, 1993). Segmentation plays a vital role in numerous biomedical imaging applications, such as the quantification of tissue volumes, diagnosis, localization of pathologies, and the study of anatomical structures (Glasbey, 1995). Segmentation techniques can be divided into two groups: techniques based on contour detection, which search for local grey-level discontinuities in the image, and those involving region growing, which seek homogeneous image parts according to statistical measurements such as grey level and texture. The segmentation of medical images remains a difficult task in digital image processing (Chalama, 1997).
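The region-growing family can be illustrated with a minimal sketch: starting from a seed pixel, 4-connected neighbours are added while their grey level stays within a tolerance of the seed value. The tolerance, connectivity, and homogeneity criterion here are illustrative choices, not those of the cited works:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` (row, col), adding 4-connected pixels whose
    grey level differs from the seed by at most `tol`. Returns a boolean mask."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Contour-based techniques would instead look for the complementary evidence: large local grey-level gradients marking the region boundary.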
Digital information technology is constantly being developed using electronic devices. Three-dimensional (3D) image processing is also supported by electronic devices that record and display signals. Computer-generated holograms (CGH) and integral imaging (II) use liquid-crystal spatial light modulators (SLMs). This doctoral dissertation studies and develops the application of a commercial twisted nematic liquid crystal display (TNLCD) in computer-generated holography and integral imaging. The goal is to encode and reconstruct complex wavefronts with computer-generated holograms, and 3D images using integral imaging systems. Light modulation curves are presented: amplitude and phase-mostly modulation. Holographic codes are designed and implemented experimentally with optimum reconstruction efficiency, maximum signal bandwidth, and high signal-to-noise ratio (SNR). The study of the TNLCD in II is presented as a review of the basic display techniques. A digital magnification of 3D images is proposed and implemented. Digitally magnified 3D images have the same quality as optically magnified images, but the magnifying system is less complex. The recognition of partially occluded objects is solved using 3D II volumetric reconstruction; this 3D recognition solution performs better than conventional 2D image systems. The importance of holography and 3D II is supported by applications such as optical tweezers, dynamic trapping light configurations, invariant beams, and 3D medical images.
The program was coded in the image processing language IDL 5.3 and could be applied to a series of n images of 512 × 512 pixels obtained from the same scanning line. The program consisted of four modules. (i) The automatic spark detection module, whose goals were to calculate the fluorescence background (without events) and detect all the events exceeding a threshold value. This module carried out conventional filtering to increase the signal-to-noise ratio and normalization based on the image sequence standard deviation. (ii) The user intervention module allowed the user to interact directly with signal selection, selecting or deleting events manually. (iii) The measurement and analysis module extracted position and morphological parameters for each selected area using binary masks. (iv) The results storage module stored the parameters and classified them according to their distribution patterns.
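Module (i) can be sketched as a simple threshold detector: background statistics are estimated from the line-scan image and pixels exceeding mean + k·std are flagged. The factor k and the global background estimate are our simplifying assumptions; the original IDL program also filtered and normalized the sequence before thresholding:

```python
import numpy as np

def detect_events(line_scan, k=3.8):
    """Flag pixels that exceed the estimated fluorescence background by more
    than k standard deviations. Returns (event mask, mean, std).
    `k` is a hypothetical threshold factor, not the program's value."""
    mu, sigma = line_scan.mean(), line_scan.std()
    mask = line_scan > mu + k * sigma
    return mask, mu, sigma
```

The resulting binary mask is exactly the kind of object that module (iii) would use to measure the position and morphology of each detected event.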
Quantitative morphological studies are currently being carried out in the biological sciences. However, they are time-consuming and require a fair level of expertise. Digital image processing techniques have been employed when analysing biomedical samples in an attempt to solve this kind of problem; they also make it possible to improve characteristics of images of interest, so that more information can be obtained from them. The difficulty of obtaining morphometric data from tissue samples at different focal planes appears when analysing enzymatic histochemistry multifocal images. Immunohistochemical and cytochemical techniques are particularly employed in neuron network studies. Samples are
The work presented in this paper proposes a simple pulsed interferometric technique of recording and digital processing. With only three exposures (one with a phase object and two reference records), this technique allows obtaining simultaneous interferometric records of refractive index variations at large and small scales, similar to the optical method proposed in . Each record is made with a parallel-fringe interference pattern of high frequency (microinterferogram), whose spacing is between 15 and 20 lines/mm, or higher, depending on the CCD resolution. With digital processing (as in ) of the three saved images, we generate interference patterns with fringes of finite and infinite width, similar to optical holographic interferometry, but instead of each exposure occurring on the same holographic plate, the record is made in consecutive captures of a digital acquisition system. It should be noted that a submillimetric inspection of the microinterferometric record (on an enlarged image) would allow one to measure small-scale changes in the refractive index inside a macroscopic phase object. On the other hand, the direct visualization of the plasma microinterferogram possesses the
In fact, digital techniques using appropriate filters make it feasible to observe key elements of the cervical tissue (Lee & Park, 1990; Shamir et al., 2008) by modulating technical details such as brightness and contrast. Likewise, segmentation techniques have allowed differentiating vascular anomalies of the cervix, including vascular loops and webs (Srinivasan et al., 2009; Xue et al., 2010; Mehlhorn et al., 2012; Lee & Park, 1990; Shamir et al., 2008; Srinivasan et al., 2007; Dvir et al., 2006; Lehmann & Palm, 2006; Zimmerman-Moreno & Greenspan, 2006). However, at present, none of the above-mentioned biotechnological improvements has made it possible to properly highlight the vascular pattern of the uterine cervix. This research aims to fill this gap. Thus, we developed a novel method using digital image processing. This method seeks to enhance the vascular patterns of uterine cervical tissue in images obtained by digital colposcopy; the results of this investigation will help to detect lesions of the human cervix earlier than is currently possible.
12. Elaboration of a protocol and methodological criteria of intervention for the different typologies of fortifications and their environments. These will refer both to the necessary previous studies, with a view to an adequate and rigorous knowledge of these defensive elements and their landscape, as well as to the definition of procedures and actions that try to preserve and evaluate this Heritage from the point of view of its integral consideration as a Cultural Landscape. Regarding the intervention criteria, the aim will be to facilitate verification of compliance with the regulatory requirements established by the Spanish Technical Building Code (CTE), both by the agents involved and by the respective competent administration.
For example, Ohman, Flykt, and Esteves (2001) presented participants with 3 × 3 visual arrays with images representing four categories (snakes, spiders, flowers, mushrooms). In half the arrays, all nine images were from the same category, whereas in the remaining half of the arrays, eight images were from one category and one image was from a different category (e.g., eight flowers and one snake). Participants were asked to indicate whether the matrix included a discrepant stimulus. Results indicated that fear-relevant images were more quickly detected than fear-irrelevant items, and larger search facilitation effects were observed for participants who were fearful of the stimuli. A similar pattern of results has been observed when examining the attention-grabbing nature of negative facial expressions, with threatening faces (including those not attended to) identified more quickly than positive or neutral faces (Eastwood, Smilek, & Merikle, 2001; Hansen & Hansen, 1988). The enhanced detection of emotional information is not limited to threatening stimuli; there is evidence that any high-arousing stimulus can be detected rapidly, regardless of whether it is positively or negatively valenced (Anderson, 2005;
Kernel methods are a family of machine learning algorithms widely used in digital signal processing (DSP). Their popularity can be attributed to their solid mathematical foundation in reproducing kernel Hilbert spaces and to their demonstrated good performance in solving nonlinear problems [39, 54]. Owing to these properties, kernel methods represent an alternative to traditional nonlinear methods such as artificial neural networks, linear support vector machines, and polynomial filters. However, in some areas, such as statistical tests for probability distributions, estimation of distances between probability distributions, dependence measures between random variables, and dependence measures between random processes, these methods have not yet been consolidated.
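A minimal example of the reproducing-kernel machinery is the Gaussian (RBF) Gram matrix, which is symmetric and positive semidefinite for any finite input set; the bandwidth `gamma` below is an arbitrary illustrative value:

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) for the Gaussian
    (RBF) kernel, the prototypical reproducing kernel on R^d."""
    sq = np.sum(X ** 2, axis=1)
    # Squared pairwise distances via the expansion ||a-b||^2 = a.a + b.b - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))
```

The positive semidefiniteness of this matrix is exactly what guarantees the existence of the underlying Hilbert space mentioned above.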
Digital hearing aids exhibit two key advantages over analog hearing aids. The first is that the input sound signal is “separated” into a number of frequency bands, so that some bands, such as the high-frequency bands or the bands that contain the speech information, can be amplified more than others, such as low-frequency bands or bands that contain a high level of noise. This technique solves one of the problems associated with analog hearing aids: providing the same gain value regardless of frequency. The second benefit is dynamic range compression, which plays the key role of estimating a desirable gain to map the wide range of an input sound signal (e.g., speech) into the reduced dynamic range of a hearing-impaired listener. Basically, this strategy is an automatic gain control in which the gain is adjusted based on the intensity level of the input signal. Frames with a high intensity level (loud sounds) are amplified less than frames with a low intensity level (soft sounds), since a comfortable listening level for loud sounds would make soft sounds inaudible. This also ensures that loud sounds do not become uncomfortably loud: apart from improving speech intelligibility, the strategy is designed to avoid discomfort, distortion, and damage. With this in mind, the gain function in each frequency band may be based on a curvilinear or piecewise linear function, such as the 3-piece linear approximation illustrated in Figure 2.12 [Hersh and Johnson, 2003]. In this example, low-input-level sound signals are expanded and high-input-level signals are compressed into the impaired dynamic range.
Combining the two advantages yields the so-called “multichannel compression”, also known as “multiband compression”, which applies a separate compression function in each frequency band, since the subject’s dynamic range often differs across frequency bands.
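A 3-piece linear input/output curve of the kind described above can be sketched as follows. The knee points, ratios, and fixed linear gain are illustrative values of our own, not those of the cited figure; in a multiband compressor one such function would run per frequency band:

```python
def compressor_gain_db(level_db, t_low=30.0, t_high=80.0,
                       exp_ratio=0.5, comp_ratio=3.0, linear_gain=25.0):
    """Gain (dB) of a 3-piece linear input/output curve: expansion below
    t_low, fixed linear gain between the knees, compression above t_high.
    All parameters are hypothetical dB values for illustration."""
    if level_db < t_low:
        # Expansion: output falls faster than input (slope 1/exp_ratio > 1)
        out = (t_low + linear_gain) - (t_low - level_db) / exp_ratio
    elif level_db <= t_high:
        out = level_db + linear_gain            # linear region, slope 1
    else:
        # Compression: output rises slower than input (slope 1/comp_ratio < 1)
        out = (t_high + linear_gain) + (level_db - t_high) / comp_ratio
    return out - level_db                       # gain = output - input
```

With these values a 50 dB input receives the full 25 dB gain, while a 90 dB input receives less, keeping loud sounds from becoming uncomfortably loud.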
Although we consider this method useful for our purposes, since it provides an overall view of the respondents’ opinions about the role of images in patient-addressed texts, it allows us only to analyse responses from our sample, which makes it difficult to discuss statistics and to generalize conclusions to our study population: patients reading patient information guides. However, the visual nature of the images and texts made it possible to use both the questionnaire and the focus group, which were easy to carry out and non-invasive for patients, even though these methods are not specific to multimodality. As a consequence, readability and comprehensibility were explored only through subjective self-reporting, so our research can be regarded only as an exploratory pilot study, to be extended and enhanced by means of other methods used in multimodality studies, such as comprehension tests.
Localization is the process by which robots determine their location and orientation in the working environment. More precisely, it uses input information, including prior environment map information, real-time pose estimates, and observation values from sensors, to generate a more accurate estimate of the robot’s current pose. The localization method depends on which sensors are used. Localization sensors for mobile robots include odometry, cameras, lidar, ultrasound, infrared, microwave radar, gyroscopes, compasses, accelerometers, etc. The corresponding localization technology can be divided into two categories: absolute localization and relative localization. Absolute localization uses technologies such as navigation marks, active/passive identification, map matching, and GPS to achieve self-localization, and its result is relatively accurate. Relative localization, also called dead reckoning, infers the robot’s current pose by computing its direction and distance relative to an initial pose.
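Dead reckoning can be sketched as the incremental integration of odometry readings (heading change, travelled distance) onto an initial pose; the interface below is our own simplification, ignoring wheel slip and the drift that makes pure dead reckoning inaccurate over time:

```python
import math

def dead_reckon(pose, steps):
    """Relative localization sketch: integrate (dtheta, distance) odometry
    increments onto an initial pose (x, y, theta in radians)."""
    x, y, theta = pose
    for dtheta, dist in steps:
        theta += dtheta                 # update heading first
        x += dist * math.cos(theta)     # then advance along the new heading
        y += dist * math.sin(theta)
    return x, y, theta
```

Absolute localization (e.g. GPS or map matching) would periodically correct the accumulated error of this integration.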
In a small article entitled “Research: The Young” (Jeunes chercheurs), Roland Barthes expresses his contempt for ‘academic prose’, as commonly understood in past practice. His text, intended for post-graduate students in arts and humanities, is a true declaration against what the author called the separation of discourses: that of scientificity and the discourse of desire, that is to say, writing: “the task (of research) must be perceived in desire (…) to cast the subject across the blank page, not to ‘express’ it (nothing to do with ‘subjectivity’) but to disperse it: to overflow the regular discourse of research” (Barthes, 1987, pp. 69, 71). His Text, in italics, is a new object that belongs to nobody and that is created by means of interdisciplinarity (very incipient at the time). Could we – many years later – think of it, as he proposes, as a Tissue? Perhaps we could think from the work to the Text and from the Text to the device; a device in which images and words are entangled: images of art, but also of writing itself. Images – to go beyond Barthes – of which we emphasize their role as acts (Bredekamp) or events (Belting); images that neither represent nor illustrate, but intervene in the reconfiguration of the sensible (Rancière). A device-text where all of this inevitably interweaves with the logics of distribution.
The voluntary or involuntary movement of the patient (due to respiration, coughing, shaking, etc.) provokes changes in the electrode/electrolyte interface and therefore generates movement artefacts. The effects of respiration can manifest as baseline wandering (Figure 1.6 a) and/or as amplitude modulation (Figure 1.6 b). Through the cables, undesirable signals can be electrostatically coupled or electromagnetically induced; an example is the 60/50 Hz powerline interference (Figure 1.6 c). The most appropriate room for VLP studies would be a Faraday cage, which provides electrostatic screening. Screening from magnetic influences is more difficult. As a practical solution, the magnetic induction can be minimised by keeping the patient cables close to each other (so the pick-up area is small) and locating the room as far away as possible from electromagnetic interference sources (e.g. motors, diathermy equipment, etc.).
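Beyond shielding, powerline interference at 50/60 Hz is commonly suppressed with a narrow digital notch filter. A minimal biquad sketch follows, using the standard RBJ cookbook coefficient formulas; the quality factor and sampling rate are illustrative choices, not values from this text:

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Biquad notch coefficients (RBJ audio-EQ-cookbook form) that place a
    zero pair at f0 Hz for sampling rate fs. Returns normalized (b, a)."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filt(b, a, x):
    """Direct-form I filtering: y[n] = sum(b*x[n-k]) - sum(a[1:]*y[n-k])."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y
```

After the initial transient, a pure 50 Hz sine fed through this filter decays to essentially zero, while frequencies away from the notch pass almost unchanged.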
The majority of the currently available techniques to perform remote sensed image fusion are based on multiresolution analysis. This kind of image analysis requires the decomposition of the image at different scales or levels, with the fusion results depending on this level. The two main objectives of this work are therefore: to investigate the influence of the source images' spatial characteristics on the decomposition level at which the fusion process should be performed; and to show how the spatial-spectral quality of the fused images depends on this decomposition level. To carry out this study, the applied image fusion methodology is based on the wavelet transform, calculated by the à trous algorithm. The quality of the fused images has been evaluated by the ERGAS index, as well as the spectral correlation, the spatial correlation (Zhou's index), and a global index (Q4). This methodology has been applied to fuse several multispectral and panchromatic images registered by the corresponding sensors on board the Landsat, Ikonos, and Quickbird satellites. It has been demonstrated that, in the majority of cases, a low number of decompositions provides fused images with a good spatial-spectral quality trade-off. Additionally, the results indicate that the decomposition level providing the best spatial-spectral quality trade-off depends on the
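The ERGAS index mentioned above combines the per-band relative RMSE into a single global score scaled by the ratio of pixel sizes h/l between the panchromatic and multispectral images; lower values mean better spectral fidelity. A minimal sketch with a hypothetical list-based interface:

```python
import math

def ergas(fused_bands, ref_bands, ratio=0.25):
    """ERGAS = 100 * (h/l) * sqrt(mean_k(RMSE_k^2 / mu_k^2)), where mu_k is
    the mean of reference band k and ratio = h/l (e.g. 0.25 for a 1:4
    pan/multispectral resolution ratio, as in Ikonos)."""
    acc = 0.0
    for f, r in zip(fused_bands, ref_bands):
        rmse2 = sum((fv - rv) ** 2 for fv, rv in zip(f, r)) / len(f)
        mu = sum(r) / len(r)
        acc += rmse2 / mu ** 2
    return 100.0 * ratio * math.sqrt(acc / len(ref_bands))
```

An ERGAS of zero indicates a fused image spectrally identical to the reference, which is why it is useful for comparing decomposition levels.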