Images and Processing

Diagnosis of breast cancer through the processing of thermographic images and neural networks

In this thesis, one of the biggest challenges was to extract relevant information from the thermograms that could indicate the thermal classification to which each patient belongs. To accomplish this, several digital image processing techniques were used in the analysis of the thermograms. Digital image processing deals with the use of computer algorithms to obtain certain information from an image, improve its visualization, transform it, and so on: a manipulation that yields a desired result from the image. An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point (Mandelblatt, et al., 2009). A digital image is composed of a finite number of elements (pixels), and it can be a monochrome image (gray level) or a combination of individual images; for example, an RGB image consists of three individual monochrome images: red (R), green (G), and blue (B) (Jahne, 1991).
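As a minimal illustration of this definition (not taken from the thesis), the following NumPy sketch treats a synthetic RGB array as three stacked monochrome functions f(x, y) and combines them into a single gray-level image; the luma weights are one common convention, not necessarily the one used in the thesis.

```python
import numpy as np

# Hypothetical 4x4 RGB image: three stacked monochrome images R, G, B.
rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]  # each is a 2-D f(x, y)

# Intensity at a single pair of coordinates (x, y).
x, y = 2, 1
print("f(2, 1) per channel:", r[x, y], g[x, y], b[x, y])

# One common gray-level combination (ITU-R BT.601 luma weights).
gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
print(gray.shape)  # (4, 4): a single monochrome image
```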

A ROS node for processing images and providing robots with location capabilities

There are many sources of uncertainty in localization: the uncertainty of the robot itself, the accumulation of odometry errors, sensor noise, and the complexity and unknowns of the robot's environment. In short, these uncertain factors make localization more complicated. In recent years, many researchers have applied probability theory to robot localization. The core idea is to use the data collected so far as the known conditions and then recursively estimate the posterior probability density over the state space. Among these methods, probability estimation based on particle filtering is one of the most promising to implement. Particle filtering, also known as sequential Monte Carlo, is a filtering algorithm developed in the mid-to-late 1990s. Its core idea is to represent probability distributions with random samples. Dellaert combined the particle filter algorithm with the robot motion and perception probability models to propose Monte Carlo localization for robots. The idea is to use a set of weighted samples to estimate the possible locations of the robot, that is, the probability of being at each location. Each sample corresponds to a position and is weighted using observations, which in turn increases the probability of the most likely location. The pose of a mobile robot is usually represented by the triple $(t_x, t_y, \theta)$, where $(t_x, t_y)$ is the location of the robot in the world coordinate frame (translational component) and $\theta$ is the heading of the robot (rotational component). Pose estimation mainly involves three analysis methods under different assumptions.
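As a rough sketch of the weighted-sample idea described above (not the paper's code), the following Python fragment implements one toy Monte Carlo localization step: a noisy motion update, a reweighting against a single made-up range measurement to a hypothetical landmark, and importance resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each particle is a pose hypothesis (t_x, t_y, theta) with a weight.
N = 500
particles = rng.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(N, 3))
weights = np.full(N, 1.0 / N)

def motion_update(p, v=0.5, w=0.1, dt=1.0, noise=0.05):
    """Propagate each pose with a noisy velocity model."""
    theta = p[:, 2]
    p[:, 0] += v * dt * np.cos(theta) + rng.normal(0, noise, len(p))
    p[:, 1] += v * dt * np.sin(theta) + rng.normal(0, noise, len(p))
    p[:, 2] += w * dt + rng.normal(0, noise, len(p))
    return p

def measurement_update(p, weights, z, landmark=(5.0, 5.0), sigma=0.3):
    """Reweight particles by how well they explain a range measurement z."""
    d = np.hypot(p[:, 0] - landmark[0], p[:, 1] - landmark[1])
    weights = weights * np.exp(-0.5 * ((d - z) / sigma) ** 2)
    weights += 1e-300            # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(p, weights):
    """Draw particles proportionally to weight (importance resampling)."""
    idx = rng.choice(len(p), size=len(p), p=weights)
    return p[idx], np.full(len(p), 1.0 / len(p))

particles = motion_update(particles)
weights = measurement_update(particles, weights, z=2.0)
particles, weights = resample(particles, weights)
print("pose estimate:", particles.mean(axis=0))
```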

Digital processing of in situ hybridization images: identification and spatial allocation of specific labels

to the reference system of the first image. This procedure is performed step by step. Every new image is assembled to the preceding one by determining the position of its coordinate origin with respect to the coordinate origin of the preceding image. This procedure allows the series of images to be integrated into a single image of the complete histological preparation. It also integrates the labels by means of a coordinate transformation, submitting them to the reference system of the biological specimen. Afterwards, the coordinates of the specific labels within this unique reference system are stored as a single text file, and these data are used to create a new image displaying the spatial distribution pattern of the whole label population of the biological specimen. The result of this procedure is illustrated in figure 3. Note that an area of border overlap between adjacent images is needed in order to perform an appropriate assembly of the images, since it allows cross-correlation to be applied between the overlapped areas. An optimal automatic assembly was obtained by cross-correlating the borders of the negatives of the original non-processed images. Repeated labels are also deleted during this process.
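A minimal sketch of the border cross-correlation step, assuming two synthetic images that share a known overlap strip; the paper cross-correlates the negatives of the unprocessed images, while plain mean-subtracted intensities suffice for this toy case.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)

# Two synthetic images sharing an overlapping border strip.
base = rng.random((60, 60))
img_a = base[:, :40]          # left image
img_b = base[:, 30:]          # right image; columns 30-39 overlap img_a

# Cross-correlate the candidate overlap strips.
strip_a = img_a[:, -10:] - img_a[:, -10:].mean()
strip_b = img_b[:, :10] - img_b[:, :10].mean()
corr = correlate2d(strip_a, strip_b, mode="full")

# The peak location gives the relative shift between the two borders.
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy = peak[0] - (strip_a.shape[0] - 1)
dx = peak[1] - (strip_a.shape[1] - 1)
print("estimated vertical/horizontal shift:", dy, dx)  # expect (0, 0)
```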

Hierarchical region based processing of images and video sequences: application to filtering, segmentation and information retrieval

Increasing attention is being paid to graph-based structures. The region adjacency graph (RAG) is the best-known region-oriented structure: a graph constituted by a set of nodes representing regions of the space and a set of links connecting spatially neighboring nodes. The region adjacency graph is usually used to represent a partition of the image. Note that a node of the graph can represent a region, a flat zone (a connected component of the space where the signal is constant; see Sec. 2.1 for a formal definition) or even a single pixel. Processing techniques relying on region adjacency graphs have mainly focused on merging techniques. The graph is constructed from an initial partition: each region of the partition image is associated with a node in the graph, and two nodes are connected if their associated regions are neighbors in the partition image. A merging algorithm on such a graph is simply an iterative process that removes some of the links and merges the corresponding nodes. The merging order, that is, the order in which the links are processed, is usually based on a similarity criterion between regions; this homogeneity criterion may be based on color, motion, depth, etc. Each time a link is processed, its associated nodes (i.e., regions) are merged. After each merging, the algorithm has to look for the links whose distance has to be recomputed. The merging ends once a termination criterion is reached; the most commonly used criterion is the number of nodes remaining in the graph.
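The merging loop described above can be sketched in a few lines. The following toy example (not from the thesis) uses mean gray level as the similarity criterion and the number of remaining nodes as the termination criterion.

```python
import numpy as np

# Toy initial partition: a region label per pixel, plus mean gray levels.
labels = np.array([[0, 0, 1],
                   [2, 2, 1],
                   [2, 3, 3]])
mean = {0: 10.0, 1: 12.0, 2: 50.0, 3: 52.0}

# Build the region adjacency graph: one link per neighboring region pair.
links = set()
for dy, dx in ((0, 1), (1, 0)):
    a = labels[:labels.shape[0] - dy, :labels.shape[1] - dx]
    b = labels[dy:, dx:]
    for u, v in zip(a.ravel(), b.ravel()):
        if u != v:
            links.add((min(u, v), max(u, v)))

sizes = {r: int((labels == r).sum()) for r in mean}

# Iteratively merge the most similar pair until 2 regions remain.
while len(mean) > 2:
    u, v = min(links, key=lambda e: abs(mean[e[0]] - mean[e[1]]))
    total = sizes[u] + sizes[v]
    mean[u] = (mean[u] * sizes[u] + mean[v] * sizes[v]) / total
    sizes[u] = total
    del mean[v], sizes[v]
    # Re-route v's links to u and drop the processed link.
    links = {(min(u if x == v else x, u if y == v else y),
              max(u if x == v else x, u if y == v else y))
             for x, y in links if {x, y} != {u, v}}
    links = {(x, y) for x, y in links if x != y}

print("surviving regions:", mean)  # expect {0,1} merged and {2,3} merged
```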

Efficient processing of raster and vector data

When dealing with spatial data, depending on the particular characteristics of the type of information, it may be more appropriate to represent that information (at the logical level) using either a raster or a vector data model [1]. The advance of the digital society is providing a continuous growth of the amount of available vector data, and the appearance of cheap devices equipped with GPS, like smartphones, is responsible for a big data explosion, mainly of trajectories of moving objects. The same phenomenon can be found in raster datasets, where advances in hardware are responsible for an important increase in the size and amount of available data. Considering only the images acquired by satellites, several terabytes of data are generated each day [2], and it has been estimated that the archived amount of raster data will soon reach the zettabyte scale [3].

Electromagnetic models for ultrasound image processing

To solve these problems, Frery et al., 1997 deduced a new statistical model, the GA model, based on the product model, assuming a Gamma distribution for the speckle component of multi-look SAR images and a generalized inverse Gaussian (GIG) law for the signal component. It was Frery who, when deducing the GA model, first proposed classifying a region as homogeneous, heterogeneous or extremely heterogeneous according to its degree of homogeneity. The K and G0 (also called B) distributions are two special forms of the G model; the former is appropriate for heterogeneous regions and the latter for extremely heterogeneous regions. The G0 distribution reduces to the Beta-Prime distribution under the single-look condition. Although the G0 distribution is a special case of the G model, it has a more compact form than the G model and consequently admits a simpler parameter estimation method. The relationship between the G0 distribution and the K distribution cannot be deduced theoretically, but it has been evaluated via Monte Carlo simulation (Mejail et al., 2001). The parameters of the G0 distribution are sensitive to the degree of homogeneity of a region, which makes the G0 model appropriate for modeling either heterogeneous or extremely heterogeneous regions. Moreover, the method of moments can be easily and successfully applied to parameter estimation for the G0 distribution, and the log-compressed G0 distribution, namely the HG0 distribution, has an analytical expression. Also, Frery et al., 1997 and Muller and Pac, 1999 carried out experiments on many SAR images of different kinds of terrain with various bands, polarizations, resolutions and numbers of looks, covering different urban areas, homogeneous and heterogeneous regions, etc. Their results testified to the good characteristics of the G0 distribution. In the next sections the GA and GA0 models will be presented and adapted to the modeling of ultrasound B-scan images.
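As an aside, the product model behind the intensity G0 distribution is easy to simulate. The sketch below uses made-up parameter values and assumes that SciPy's invgamma parameterization (shape −α, scale γ) matches the reciprocal-gamma signal component; it draws G0-distributed intensities and checks the first moment γ/(−α − 1), valid for α < −1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Product model for the (intensity) G0 distribution: Z = X * Y, where
# Y ~ Gamma(L, 1/L) is the multi-look speckle component and
# X ~ reciprocal Gamma (inverse-gamma with shape -alpha, scale gamma)
# is the signal component.  alpha < 0 controls the heterogeneity.
L, alpha, gam = 3, -4.0, 6.0

n = 200_000
Y = rng.gamma(shape=L, scale=1.0 / L, size=n)
X = stats.invgamma.rvs(a=-alpha, scale=gam, size=n, random_state=rng)
Z = X * Y

# First moment of the intensity G0 law is gamma / (-alpha - 1).
print("sample mean:", Z.mean())
print("theoretical:", gam / (-alpha - 1))  # 6 / 3 = 2
```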

Semiautomated segmentation of bone marrow biopsies images based on texture features and Generalized Regression Neural Networks

One of the most interesting fields in digital image processing is the segmentation of an image into its different objects (Gonzalez and Woods, 1993). Segmentation plays a vital role in numerous biomedical imaging applications, such as the quantification of tissue volumes, diagnosis, localization of pathologies, and the study of anatomical structures (Glasbey, 1995). Segmentation techniques can be divided into two groups: techniques based on contour detection, which search for local grey-level discontinuities in the image, and those involving region growing, which seek homogeneous image parts according to statistical measurements such as grey level and texture. The segmentation of medical images remains a difficult task in digital image processing (Chalama, 1997).
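As an illustration of the second group, here is a minimal region-growing sketch (not from the paper) that accepts 4-neighbours whose grey level stays close to the running region mean.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, adding 4-neighbours whose grey
    level stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

img = np.array([[10, 12, 90],
                [11, 13, 92],
                [80, 85, 95]], dtype=float)
print(region_grow(img, seed=(0, 0)).astype(int))  # top-left block only
```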

Applying parallelism in image mining

Image mining deals with the study and development of new technologies that make this possible. A common mistake about image mining is misjudging its scope and limitations: it is clearly different from the computer vision and image processing areas. Image mining deals with the extraction of image patterns from a large collection of images, whereas the focus of computer vision and image processing is understanding and/or extracting specific features from a single image. On the other hand, it might be thought to be closely related to content-based retrieval, since both deal with large image collections. Nevertheless, image mining goes beyond simply recovering relevant images; the goal is the discovery of image patterns that are significant in a given collection. As a result, an image mining system implies many tasks to be done in a reasonable time. Images provide a natural source of parallelism, so the use of parallelism in some or all mining tasks might be a good option to reduce the cost and overhead of the whole image mining process, as in the sketch below.
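A minimal sketch of that option, assuming an embarrassingly parallel per-image task and Python's multiprocessing; the paper itself does not prescribe any particular framework.

```python
from multiprocessing import Pool

import numpy as np

def extract_features(image):
    """Stand-in for a per-image mining task (here, a tiny feature vector)."""
    return float(image.mean()), float(image.std())

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    collection = [rng.random((256, 256)) for _ in range(32)]

    # Each image is independent, so the collection maps naturally onto
    # a pool of worker processes.
    with Pool(processes=4) as pool:
        features = pool.map(extract_features, collection)

    print(len(features), "feature vectors extracted in parallel")
```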

On-line MRI sequences for the evaluation of apple internal quality.

Both static and dynamic images were submitted to a similar analysis, using dedicated procedures based on the Matlab 7.0 image processing and PLS toolboxes. First, automated background segmentation was performed, which is straightforward for the static images due to their inherent quality (Melado et al., 2012). For the dynamic images, two different masks were defined and tested in order to segment the fruit from the background. The first method consisted of applying a softened logarithmic filter; then an automated segmentation level based on Otsu's method was defined for each of the images and the largest object was selected. For the second method, a threshold was first defined by selecting corner areas of the MRI, which always correspond to the background, the image of the fruit always having higher gray-level values than the background; in this latter case a further Gaussian filter was also applied. Then, in both segmentation procedures, a mask was built to ensure that the whole fruit was taken into account in the segmentation while motion artifacts were minimized. For this purpose, the polar coordinates of a circular object with features extracted from the main object were represented on the image, and thus a final fruit mask was obtained for removing the background. This procedure was repeated for the three repetitions, and then the histogram of the fruit area of each repetition was obtained.
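The first masking method maps naturally onto standard tools. The following sketch (in Python with scikit-image, rather than the Matlab 7.0 toolboxes actually used) applies a Gaussian filter, an automated Otsu level and largest-object selection to a synthetic slice.

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(3)

# Synthetic stand-in for one MRI slice: a bright disc (the fruit) on a
# darker, noisy background.
img = rng.normal(0.2, 0.05, (128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] += 0.6

smoothed = gaussian(img, sigma=2)            # optional Gaussian filter
mask = smoothed > threshold_otsu(smoothed)   # automated Otsu level

# Keep only the largest connected object, as described above.
lab = label(mask)
largest = max(regionprops(lab), key=lambda r: r.area)
fruit_mask = lab == largest.label
print("fruit pixels:", int(fruit_mask.sum()))
```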

Optic disc segmentation in retinal images

In this work we have discussed two different approaches to OD segmentation. The analysis of the algorithm in [5,6] revealed the need for a more general and robust approach that would enable the segmentation of OD boundaries differing considerably from a circular shape. As regards compression effects in segmentation of the optic nerve head, we determined that the degradation introduced by lossy compression plays an important role and cannot be neglected when processing compressed images. Nonetheless, our results showed that JPEG2000 compression might provide a safer ground for retinal image segmentation than classical JPEG. A different strategy for OD localization, based on active contours, was also developed. The pre-processing stage consisted of performing color mathematical morphology, which provided a vessel-free OD region with a uniform color distribution while preserving sharp edge positions. The active contours algorithm for OD segmentation yielded a fair approximation to the actual hand-labeled OD. Our method was able to achieve an average accuracy rate in pixel classification of 85.67% (σ = 7.82).

Digital interferometry applied to transient dense plasmas

The work presented in this paper proposes a simple pulsed interferometric technique of recording and digital processing. With only three exposures (one with a phase object and two reference records), this technique allows obtaining simultaneous interferometric records of refractive-index variations at large and small scale, similar to the optical method proposed in [7]. Each record is made with a parallel-fringe interference pattern of high frequency (microinterferogram), whose spacing is between 15 and 20 lines/mm, or higher, depending on the CCD resolution. With digital processing (as in [6]) of the three saved images, we generate interference patterns with fringes of finite and infinite width, similar to optical holographic interferometry, but instead of each exposure occurring on the same holographic plate, the record is made in consecutive captures of a digital acquisition system. It should be noted that a submillimetric inspection of the microinterferometric record (on an enlarged image) would allow one to measure small-scale changes in the refractive index inside a macroscopic phase object. On the other hand, the direct visualization of the plasma microinterferogram possesses the

Examples of APA usage

To maintain positive affect in the face of negative age-related change (e.g., limited time remaining, physical and cognitive decline), older adults may adopt new cognitive strategies. One such strategy, discussed recently, is the positivity effect (Carstensen & Mikels, 2005), in which older adults spend proportionately more time processing positive emotional material and less time processing negative emotional material. Studies examining the influence of emotion on memory (Charles, Mather, & Carstensen, 2003; Kennedy, Mather, & Carstensen, 2004) have found that compared with younger adults, older adults recall proportionally more positive information and proportionally less negative information. Similar results have been found when examining eye-tracking patterns: Older adults looked at positive images longer than younger adults did, even when no age differences were observed in looking time for negative stimuli (Isaacowitz, Wadlinger, Goren, & Wilson, 2006). However, this positivity effect has not gone uncontested; some researchers have found evidence inconsistent with the positivity effect (e.g., Grühn, Smith, & Baltes, 2005; Kensinger, Brierley, Medford, Growdon, & Corkin, 2002).

Detection of breast lesions in medical digital imaging using neural networks

Different studies on the use of data mining in the processing of medical images have reported very good results using neural networks for classification and grouping. In recent years, different computerized systems have been developed to support the diagnostic work of radiologists in mammography. The goal of these systems is to focus the radiologist's attention on suspicious areas. They work in three steps: (i) analog mammograms are digitized; (ii) images are segmented and preprocessed; (iii) regions of interest (ROIs) are found and classified by neural networks [Lauria et al, 2003]. The sketch below illustrates the last step.
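A toy version of step (iii), with entirely synthetic ROI features and labels, shown only to make the pipeline concrete; the cited systems use their own feature sets and network architectures.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each ROI has been reduced to a small feature vector
# (e.g., mean intensity, contrast, size), labelled suspicious (1)
# or normal (0).  These numbers are synthetic placeholders.
X_train = rng.random((200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

roi_features = rng.random((5, 3))           # ROIs from a new mammogram
print("suspicious?", clf.predict(roi_features))
```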

Computer Vision and Medical Image Processing: a brief survey of application areas

applications, among which are tomodensitometry (X-rays, the CT scanner), magnetic resonance, tomography, echocardiography and angiography. Although these images provide information on the morphology and function of the organs, their objective and quantitative interpretation is still difficult to perform, as it requires extensive knowledge of the subject and the ability to handle a vast wealth of images and associated information [5]. Hence, this study aims to review the main techniques, algorithms and methods of medical image processing in different application areas, in order to facilitate the task of clinical interpretation and to give future researchers an insight into the state of the art in this area of computer science.

A Markov random field image segmentation model for lizard spots

In this paper, a segmentation model for Diploglossus millepunctatus lizards based on MRFs is proposed. Extensive experiments were carried out using Eqs. (2), (3) and (4) as cost functions and loopy belief propagation, dual decomposition and graph cuts as inference methods. A preprocessing stage dealing with color spaces, global and local enhancement, and segmentation methods is performed. Results show the best performance with the Potts function as the smoothness term and the intensity-based data term (2) on preprocessed images, reaching 84.87%, 71.49% and 67.70% confidence on ideal, standard and contaminated images, respectively. On raw images, the color-based data term (4) reaches 69.7%, 64.16% and 58.98% confidence on ideal, standard and contaminated images, respectively. The model shows promising performance for automating segmentation processes in PMR and for reducing processing time and subjectivity.
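For illustration only: the paper infers with loopy belief propagation, dual decomposition and graph cuts, but the structure of a Potts-smoothed energy (data term plus pairwise smoothness) is already visible with the much simpler iterated conditional modes (ICM), sketched below on a synthetic two-label spot image.

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy two-class image: a "spot" (label 1) on background (label 0).
truth = np.zeros((40, 40), dtype=int)
truth[10:20, 10:20] = 1
obs = truth + rng.normal(0, 0.6, truth.shape)

def icm(obs, beta=1.5, iters=10):
    """Iterated conditional modes on a two-label Potts model:
    data term = squared distance to the class mean,
    smoothness term = Potts penalty on disagreeing 4-neighbours."""
    labels = (obs > 0.5).astype(int)          # initial guess
    means = (0.0, 1.0)
    h, w = obs.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for k in (0, 1):
                    data = (obs[y, x] - means[k]) ** 2
                    smooth = sum(
                        labels[ny, nx] != k
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < h and 0 <= nx < w
                    )
                    e = data + beta * smooth
                    if e < best_e:
                        best, best_e = k, e
                labels[y, x] = best
    return labels

seg = icm(obs)
print("pixel agreement with ground truth:", (seg == truth).mean())
```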

A quantitative method for analysing neural networks marked by acetylcholinesterase histochemistry

Quantitative morphological studies are currently being carried out in the biological sciences. However, they are time-consuming and demand a fair level of expertise. Digital image processing techniques have been employed in analysing biomedical samples in an attempt to solve this kind of problem; they also make it possible to enhance characteristics of the images of interest, so that more information can be obtained from them. The difficulty of obtaining morphometric data from tissue samples lying in different focal planes appears when analysing multi-focal enzymatic histochemistry images. Immunohistochemical and cytochemical techniques are particularly employed in neuron network studies. Samples are

Encouraging comprehensibility through multimodal patient information guides

In the case of the multimodal translation of medical texts, medical illustrators should work hand in hand with medical translators to meet the recipients' needs, whether they are patients lacking background knowledge or experts with a great amount of it. The representation of specialised concepts by means of images has already been explored from an intermodal translation perspective (Faber, León Araúz, Prieto-Velasco, & Reimerink, 2007; Prieto-Velasco & López-Rodríguez, 2009; Prieto-Velasco & Tercedor-Sánchez, 2014) for the sake of managing visual information in terminological databases. Intermodal meaning-making is a task current translators are expected to perform, since textual meaning is increasingly built upon the interaction between words and visuals. In this respect, Kress and van Leeuwen (1996, 2001) have long advocated a visual grammar providing guidelines on the construction of meaning through images in multimodal texts.

Evaluation of shear wave speed measurements using crawling waves sonoelastography and single tracking location acoustic radiation force impulse imaging

Many pathological conditions are closely related to an increase in tissue stiffness [5]. For hundreds of years, experts have performed manual palpation in order to assess elasticity changes. However, this method can only be applied to superficial areas of the human body and provides a crude estimation of tissue stiffness. Elastography is a technique that attempts to characterize the elastic properties of tissue in order to provide additional, useful information for clinical diagnosis [5]. For more than twenty years, different research groups have developed qualitative and quantitative elastography modalities. As a result, several techniques, mostly based on ultrasound but also on magnetic resonance imaging and optical coherence tomography, have been proposed and applied to a number of clinical applications such as cancer diagnosis (prostate, breast, liver), hepatic fibrosis staging, early detection of renal pathology, characterization of focal thyroid lesions, arterial plaque evaluation, wall stiffness measurement in arteries, thrombosis evaluation in veins, and many others [3]. Recently, various groups have performed comparative studies among different elastographic techniques in order to characterize biomaterials [6, 7], to cross-validate several shear wave elastographic modalities [7, 8] and to study the factors that influence their precision and accuracy [8]. These comparisons evaluated the shear wave speed (SWS) generated in the medium, the shear modulus, or the Young's modulus. Some of these comparisons validated their work using mechanical testing to evaluate elastic properties, or a rheometer to measure the linear viscoelasticity of the materials [6, 9]. Gennisson et al. [9] aimed to show that supersonic shear imaging (SSI) has better potential than transient elastography (TE) for material characterization and highlighted the need to extend SSI to the estimation of viscoelastic properties. TE and SSI have also been applied to the shear modulus assessment of thin-layered phantoms [7], since thin-layered phantoms can simulate arteries, skin or corneal tissues; both techniques presented similar shear wave speed estimates even though they use different vibration sources. Fromageau et al. [6] characterized polyvinyl alcohol cryogel (PVA-C) phantoms using mechanical tests and two different elastographic modalities, quasistatic elastography and TE; both modalities showed good correspondence with the mechanical tests. Latorre-Ossa et al. [10] used static elastography and shear wave elastography (SWE) for nonlinear shear modulus estimation in gelatin-agar phantoms and beef liver samples. Static elastography and SWE measure the local strain and the SWS value, respectively. With this information, it was possible to recover the local Landau co-
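For orientation, the quantities these comparisons evaluate are linked by a standard relation for a locally homogeneous, isotropic and nearly incompressible elastic medium (the usual soft-tissue assumption, stated here as a reminder rather than as part of the cited studies):

```latex
\mu = \rho\, c_s^{2}, \qquad E = 2\mu\,(1 + \nu) \approx 3\mu \quad (\nu \approx 0.5)
```

where c_s is the shear wave speed, ρ the tissue density, μ the shear modulus, E the Young's modulus and ν Poisson's ratio.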

Singing information processing: techniques and applications

Many different frequency-domain algorithms for F0 estimation have been proposed over the decades. In the late 60s, Noll proposed several algorithms based on this approach: the use of the cepstrum for pitch estimation, since it peaks at the period of the signal under certain circumstances [Noll, 1967], and a method based on the harmonic product spectrum, which computes the common divisor of the harmonic sequence [Noll, 1969]. Some years later, in 1987, Lahat et al. proposed a method based on spectrum autocorrelation, derived from the observation that a periodic but non-sinusoidal signal has a periodic magnitude spectrum, the period of which is the fundamental frequency [Lahat et al., 1987]. On the other hand, some other successful frequency-domain approaches are based on the idea of harmonic matching: comparing the harmonic positions predicted by a candidate F0 with the actual positions of the harmonics in the signal. One of the most successful implementations is the two-way mismatch (TWM) algorithm presented by [Maher and Beauchamp, 1994]. In the TWM algorithm, for each fundamental frequency candidate, mismatches between the generated harmonics and the measured partial frequencies are averaged over a fixed subset of the available partials. A weighting scheme is used to make the procedure robust to the presence of noise or the absence of certain partials in the spectral data. The discrepancy between the measured and predicted sequences of harmonic partials is referred to as the mismatch error.
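Noll's harmonic product spectrum is compact enough to sketch directly. The following minimal implementation (a generic textbook version, not tied to any of the cited papers) multiplies the magnitude spectrum by its downsampled copies so that the common divisor of the harmonic sequence stands out.

```python
import numpy as np

def hps_pitch(signal, sr, max_downsample=4):
    """Estimate F0 with the harmonic product spectrum: multiply the
    magnitude spectrum by its downsampled copies so that the common
    divisor of the harmonic sequence dominates."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    hps = spectrum.copy()
    for k in range(2, max_downsample + 1):
        hps[: len(spectrum) // k] *= spectrum[::k][: len(spectrum) // k]
    peak = np.argmax(hps[1:]) + 1            # skip the DC bin
    return peak * sr / len(signal)

sr = 16_000
t = np.arange(sr) / sr
# Harmonic test tone at 220 Hz with a few partials.
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3, 4))
print("estimated F0: %.1f Hz" % hps_pitch(tone, sr))  # ~220.0 Hz
```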

A methodology to develop computer vision systems in civil engineering: applications in material testing and fish tracking

The answer to this question is a key issue in Computer Vision; the problem is related to the image data and to the fact that Computer Vision is a backward process that tries to describe the world from one or more images and to reconstruct its properties, such as shape, illumination, and color distributions. Computer Vision can therefore be understood as the opposite of the forward models developed in physics and in computer graphics. These fields model how objects move and animate; how light reflects off their surfaces, is scattered by the atmosphere, refracted through camera lenses, and finally projected onto the image plane. These models are highly evolved and can currently provide an almost perfect illusion of reality [13].
