Image Processing

Real-time speckle image processing

Abstract. Laser dynamic speckle is an optical phenomenon produced when laser light is reflected from an illuminated surface undergoing some kind of activity. It allows a non-destructive process for the detection of activities that are not easily observable, such as seed viability, paint drying, bacterial activity, corrosion processes, food decomposition, fruit bruising, etc. Analyzing these processes in real time makes it possible to develop important practical applications of commercial, biological and technological interest. This paper presents a new digital system based on granular computing algorithms to characterize speckle dynamics within the time domain. The platform selected to evaluate the system is Field Programmable Gate Array (FPGA) technology. The minimum clock periods and latencies obtained enable speckle image processing under real-time constraints, with a maximum throughput of about one thousand 512 × 512 fps.
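The abstract does not disclose the granular-computing algorithm itself, but a widely used time-domain descriptor for dynamic speckle is the generalized differences (GD) map, which accumulates absolute intensity differences between every pair of frames. A minimal numpy sketch (the function name and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def generalized_differences(stack):
    """Temporal activity map from a stack of speckle frames.

    stack: array of shape (T, H, W).  GD(x, y) = sum over i < j of
    |I_i(x, y) - I_j(x, y)|; bright pixels indicate high activity.
    """
    stack = stack.astype(np.float64)
    T = stack.shape[0]
    gd = np.zeros(stack.shape[1:])
    for i in range(T):
        for j in range(i + 1, T):
            gd += np.abs(stack[i] - stack[j])
    return gd

# Synthetic demo: left half of the frame static, right half fluctuating.
rng = np.random.default_rng(0)
frames = np.zeros((8, 4, 4))
frames[:, :, 2:] = rng.uniform(0, 255, size=(8, 4, 2))
activity = generalized_differences(frames)  # zero where nothing changed
```

The O(T²) pairwise loop is what makes real-time FPGA pipelines attractive for this kind of descriptor.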

Cell Microscopy Imaging: a review on digital image processing applications

A first example of the application of level sets in cellular image processing can be found in [51], where the interpretation and measurement of the architectural organization of mitochondria in electron microscopy images is performed. The images of mitochondria were first pre-processed by denoising and closing interrupted structures using an approach based on edge and coherence enhancing diffusion. This allows noise removal and structure closure at certain scales, while preserving both the orientation and magnitude of discontinuities. A variational level sets method was then used for contour extraction. In [52] the level sets method was applied to multi-cell segmentation and tracking in time-lapse fluorescence microscopy, contributing an algorithm that improves robustness against noise as well as segmentation and tracking accuracy while reducing the computational cost compared with previous similar algorithms. The algorithm's performance was evaluated on real fluorescence microscopy images from different biological studies. In this work, watershed segmentation was also used as a complementary algorithm to separate touching cells. In [53] the problem of segmenting stem cells was addressed through multi-level sets segmentation. Given that stem cells have complicated morphologies composed of blobs (the cellular body) and curvilinear structures, the authors introduced the use of multi-scale curvilinear structure detectors for these structural components, and used the detected structures as initial cell contours for multi-level sets. Results were validated on embryonic and neural stem cells, with more than 90% of cell blobs detected correctly.
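Level-set evolution is too involved for a short sketch, but the final step of any of these pipelines, turning a segmented binary mask into individual cell objects, reduces to labelling connected regions. A minimal pure-Python/numpy illustration of that labelling step (this is not the watershed or level-set machinery of the cited works, just the bookkeeping that follows it):

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labelling of a boolean mask via BFS flood fill.

    Returns (labels, count): labels is an int array where 0 = background
    and 1..count identify each separate object.
    """
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for x in range(mask.shape[0]):
        for y in range(mask.shape[1]):
            if mask[x, y] and labels[x, y] == 0:
                count += 1
                labels[x, y] = count
                q = deque([(x, y)])
                while q:
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                and mask[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = count
                            q.append((ni, nj))
    return labels, count

# Two separate "cells" in a tiny synthetic mask.
img = np.array([[0, 1, 1, 0, 0],
                [0, 1, 1, 0, 1],
                [0, 0, 0, 0, 1]], dtype=bool)
labels, n_cells = label_components(img)
```

In practice watershed with markers is used precisely because plain connectivity, as here, cannot split two cells that touch.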

Structure analysis through multispectral image processing from a UAV

For industry, the inspection of structures has become a tedious task that requires the investment of a significant amount of time and resources, yet it is necessary to guarantee the safety measures demanded by law. With each new improvement in structural analysis, companies keep trying to find faster and more reliable ways to check their buildings without risking their employees or having to rely on expensive inspection methods for accurate results. This graduate thesis focuses on the development of a platform for aerial inspection based on multispectral image processing. An unmanned aerial vehicle performs both the navigation and vision processes through an embedded microcomputer and a multispectral sensor that captures images of the structure. The drone is positioned over the target through semi-autonomous commands before starting the analysis. Different spectral images of the structure are collected to extract features, which are used along with the spectral signatures of the possible failures to classify each flaw on the inspected section. The results of the failure detection are sent to a ground control station, where the operator can determine whether there are problems in the structure, hence reducing the time, resources and workforce required for the inspection.

Tag detection for preventing unauthorized face image processing

In this paper we present a technology that allows individuals who are subject to a face processing method to express their privacy preferences through a visual code in which the information is embedded. This flag-based system, worn by individuals who want to protect their privacy under certain situations, is detected with a code detection algorithm, so that the information contained can be extracted and applied to the image. These privacy preferences are set according to a policy framework presented by Adrian Dabrowski called the Picture Privacy Policy Framework (P3F), which consists of a central database of privacy policies using a flag-based system [9]. In that work, the author developed a policy framework providing a solution to the problem of unauthorized face image processing. It covered all the aspects of how the desired system should work from the moment the policy is obtained from a code worn by the individual. That is the main difference with the work presented in this paper, which presents a novel technology capable of encoding the privacy preferences of the user, thereby making such a system feasible. An example of use of the P3F policy framework can be seen in Figure 1.

Retinal Image Analysis: Image Processing and Feature Extraction Oriented to the Clinical Task

As we have seen, the application of digital image processing techniques for medical image analysis, in this case retinal images, is not only extremely beneficial but can also prove to be effective and cost-efficient for disease management, diagnosis, screening, etc. The increasing need for early detection and screening, along with the ever-increasing costs of health care, are likely to be the driving force for the rapid adoption and translation of research findings into clinical practice. Our contributions are a step toward that goal. However, many obstacles remain, and there is an implicit need to develop more robust techniques and to test many more patients, different illnesses, etc., before this technology is ready for everyday use. In our particular case, we need to further develop the PSF estimation and selection technique before we can have a fully automated fundus image enhancement algorithm. We believe that mobile computing devices will pave the way in the upcoming years for health-oriented applications with the intent of increasing global health-care access.

Improvement of a Parallel System for Image Processing

Parallel processing has become an important topic when the objective is to increase the computational speed of a task. Images provide a natural source of parallelism, so low-level image processing has several characteristics that make it suitable for implementation on parallel computers [5][15]. Recently there has been an increasing effort to develop platforms that make parallel programming an easier and more practical job; this is the case with PVM. PVM enables a heterogeneous network of computers to be enrolled on a single problem by means of message passing [6][16].

Electromagnetic models for ultrasound image processing

A Central Limit Theorem generalization due to Gnedenko and Kolmogorov (Hoeffding et al., 1955) states that the sum of a number of random variables with a power-law tail (Paretian tail with power α) distribution (and therefore having infinite variance) will tend to an Alpha-Stable distribution as the number of summands grows. If the exponent α > 2, then the sum converges to a stable distribution with stability parameter equal to 2, i.e. a Gaussian distribution. As a result, the Rayleigh distribution is a special case of the square-root-symmetric stable distribution. In (Pereyra and Batatia, 2012), Alpha-stable distributions have been applied to statistical image processing of high-frequency ultrasound imaging, in order to perform tissue segmentation in ultrasound images of skin. It was established that ultrasound signals backscattered from skin tissues converge to a complex Lévy flight random process with non-Gaussian stable statistics. Based on these results, it was proposed to model the distribution of multiple-tissue ultrasound images as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions, i.e. alpha-stable distributions.
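The α = 2 (Gaussian) special case can be checked numerically: the amplitude of a complex circular Gaussian field (fully developed speckle) follows a Rayleigh law, whose mean and variance for scale σ are σ√(π/2) and (2 − π/2)σ². A small seeded simulation (illustrative only, not from the thesis):

```python
import numpy as np

# For alpha = 2 the stable law is Gaussian, so the amplitude of a complex
# circular Gaussian backscatter field is Rayleigh distributed.
rng = np.random.default_rng(42)
sigma, n = 1.0, 200_000
field = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
amplitude = np.abs(field)

mean_theory = sigma * np.sqrt(np.pi / 2)    # Rayleigh mean
var_theory = (2 - np.pi / 2) * sigma ** 2   # Rayleigh variance
```

With 200 000 samples the empirical mean and variance land within a few thousandths of the theoretical Rayleigh values; heavy-tailed (α < 2) mixtures would show systematic departures from them.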

Image processing and computing for digital holography with ImageJ

The development of a platform within the framework of ImageJ to process digitally recorded holograms is presented in this work. ImageJ, an open-source software for processing digital images, provides the architecture needed to develop customized and specialized image processing tools. In this paper, we show the use of that architecture to develop the tools needed to numerically reconstruct digitally recorded holograms. The main advantage of this development is the possibility of using the built-in functions of ImageJ to pre-process the recorded holograms as well as to visualize and manage the reconstructed images. The use of the developed tool is illustrated by means of a step-by-step reconstruction of a digital hologram of a regular die.
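The paper's ImageJ plugin code is not reproduced here; as a hedged illustration of the kind of FFT-based numerical reconstruction such a tool performs, the following numpy sketch implements the angular spectrum propagation method (the function name, sampling pitch and wavelength are assumptions, not values from the paper):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (angular spectrum method).

    field: square 2-D complex array sampled with pitch dx (metres);
    evanescent spatial frequencies are dropped.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Illustrative parameters: 64x64 grid, 5 um pitch, He-Ne wavelength.
n, dx, wl = 64, 5e-6, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X ** 2 + Y ** 2) / (50e-6) ** 2).astype(complex)
prop = angular_spectrum(field, wl, dx, 1e-3)    # reconstruct at z = 1 mm
back = angular_spectrum(prop, wl, dx, -1e-3)    # propagate back
```

Because the transfer function has unit modulus for propagating waves, back-propagating by −z recovers the original field, a useful sanity check for any hologram reconstruction code.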

Image processing methods for computer-aided screening for disease

Next, we present the results of the quantitative evaluation of the datasets described, after applying the selected methods to the complete stack of frames (256 frames for each dataset). We used the same ROIs for all the frames in the same dataset in order to maintain a fixed reference in the evaluation. The initial values of the metrics for all the datasets are included in table 4.4. From table 4.5 we can conclude that all the denoising techniques improve the image quality metrics (SNR, CNR and ENL). The best results are achieved by the methods using the wavelet compounding strategies, the WCAN and WVMF algorithms, compared with the application of mean compounding plus digital filtering (HMF, OBNLM). With this strategy the enhancement metrics increase for all the techniques and the speckle noise is reduced, improving the possible study of details in the image (figure 4.7). The beta parameter determines the degree of alteration of the image with respect to the original image; as expected, the methods with greater noise reduction (WCAN and WVMF) presented a lower value of the beta parameter. The proposed method presents the best performance in CNR and ENL and the second best in SNR (with WVMF being the first). The potential limitation in edge preservation due to the lower value of the beta parameter with respect to the rest of the algorithms did not compromise edge preservation according to the results of the qualitative evaluation presented in section 4.3.2.
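The quality metrics quoted (SNR, CNR, ENL) have standard definitions for speckled images, although the thesis's exact formulas may differ slightly. A small numpy sketch using the common conventions:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a homogeneous ROI: mean over std."""
    return roi.mean() / roi.std()

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between two ROIs."""
    return (abs(roi_signal.mean() - roi_background.mean())
            / np.sqrt(roi_signal.var() + roi_background.var()))

def enl(roi):
    """Equivalent number of looks: mean^2 / variance (higher = less speckle)."""
    return roi.mean() ** 2 / roi.var()

# Tiny deterministic demo ROIs.
roi = np.array([[4.0, 6.0], [4.0, 6.0]])  # mean 5, std 1
bg = np.full((2, 2), 2.0)                 # mean 2, var 0
```

A denoiser raises SNR/ENL inside homogeneous ROIs; the beta parameter mentioned in the text is then needed as a counterweight, since over-smoothing also inflates these metrics.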

Comprehensive retinal image analysis: image processing and feature extraction techniques oriented to the clinical task

We assume that in small regions of the image the SV blur can be approximated by a spatially invariant PSF. In other words, in a small region the wavefront aberrations remain relatively constant; the so-called isoplanatic patch (Bedggood et al., 2008, Goodman, 1968). An important aspect of our approach is that instead of deblurring each patch with its corresponding space-invariant PSF, and later stitching together the results, we sew the individual PSFs together by interpolation and restore the image globally. The estimation of local space-invariant PSFs, however, may fail in patches with no structural information. Unlike other methods, we incorporate prior knowledge of the blur that originates in the optics of the eye to address this limitation. To this end we propose a strategy based on eye-domain knowledge for identifying non-valid local PSFs and replacing them with appropriate ones. Even though methods for processing retinal images in a space-dependent way (like locally adaptive filtering techniques (Salem & Nandi, 2007, Marrugo & Millán, 2011)) have been proposed in the literature, to the best of our knowledge this is the first time a method for SV deblurring of retinal images has been proposed.
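The interpolation idea can be sketched as follows: inside one patch cell, the local PSF is taken as a bilinear blend of the four surrounding patch PSFs, which keeps the kernel normalized. This is an illustrative reconstruction of the stated strategy, not the authors' code (the function name and grid layout are assumptions):

```python
import numpy as np

def blend_psfs(psf_grid, u, v):
    """Bilinearly interpolate the four corner PSFs of one patch cell.

    psf_grid: dict mapping (0,0), (0,1), (1,0), (1,1) to 2-D kernels of
    equal shape, each normalized to sum to 1; (u, v) in [0,1]^2 is the
    position inside the cell.  A convex blend of unit-sum kernels again
    sums to 1, so the interpolated PSF stays a valid blur kernel.
    """
    w = {(0, 0): (1 - u) * (1 - v), (0, 1): (1 - u) * v,
         (1, 0): u * (1 - v),       (1, 1): u * v}
    return sum(w[k] * psf_grid[k] for k in w)

# Demo: a delta kernel at one corner, uniform blur at the others.
k = np.zeros((3, 3)); k[1, 1] = 1.0
g = np.ones((3, 3)) / 9.0
grid = {(0, 0): k, (0, 1): g, (1, 0): g, (1, 1): g}
p = blend_psfs(grid, 0.5, 0.5)   # mid-cell blend, still sums to 1
```

Replacing a non-valid corner PSF with an eye-model prior, as the text proposes, slots naturally into this scheme: only the dictionary entry changes.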

Automatic identification of weed seeds by color image processing

The analysis and classification of seeds are essential activities contributing to the final added value in crop production. These studies are performed at different stages of the global process, including seed production, cereal grading for industrialization or commercialization purposes, scientific research for the improvement of species, etc. For all these purposes, different procedures based on the manual abilities and judgment of specialized technicians are employed. In most cases these methods are slow, have low reproducibility, and possess a degree of subjectivity hard to quantify, in both their commercial and technological implications. It is therefore of major technical and economic importance to implement new methods for reliable and fast identification and classification of seeds. Like the manual identification work, automatic classification should be based on knowledge of seed size, shape, color and texture (i.e., grey-tone variations on the surface). Numerous image analysis algorithms are available for such descriptions, which makes machine vision a suitable candidate for the task.
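The size/shape/colour/texture descriptors mentioned can be illustrated with a minimal numpy feature extractor for one segmented seed (the feature choices here are generic examples, not the paper's actual descriptors):

```python
import numpy as np

def seed_features(rgb, mask):
    """Simple size/shape/colour/texture descriptors for one segmented seed.

    rgb: (H, W, 3) float image; mask: boolean (H, W) seed region.
    """
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())                                   # size
    aspect = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)  # shape
    mean_rgb = rgb[mask].mean(axis=0)                        # colour
    gray = rgb.mean(axis=2)
    texture = float(gray[mask].var())   # grey-tone variation on the surface
    return {"area": area, "aspect": aspect,
            "mean_rgb": mean_rgb, "texture": texture}

# Demo: a uniform 3x2 "seed" in a 5x6 image.
rgb = np.zeros((5, 6, 3))
mask = np.zeros((5, 6), dtype=bool)
mask[1:4, 2:4] = True
rgb[mask] = [0.8, 0.6, 0.2]
f = seed_features(rgb, mask)
```

Vectors like this one are what a classifier (nearest-neighbour, SVM, etc.) would consume to assign each seed to a species.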

Automatic grading of ocular hyperaemia using image processing techniques

In order to evaluate hyperaemia, several parameters can be taken into account. Some examples are the general colouration of the conjunctiva, the number of visible blood vessels, and their widths [36]. Several works propose features that need to be calculated in order to evaluate the symptom. Papas [37] proposed several image features, including vessel quantity and hue characteristics. In that work, the correlation between the measures and the gradings was analysed, the highest value being obtained by a vessel quantity-related measure. Wolffsohn and Purslow [2] proposed features based on colour or edge detection. The validation was performed with the BHVI scale, and was more focused on the repeatability of the analysed characteristics than on performing the grading automatically. Park et al. [38] also proposed four image features related to red hue, vessel quantity and the area occupied by vessels. The authors validated their methods with two grading scales (consisting of 4 and 10 values, respectively). Results showed that one of the methods, measuring the area occupied by blood vessels, had the highest correlation with the expert gradings. The aim of these works was either to implement several image features or to analyse the relation between single features and experts' gradings. Thus, the literature lacks research comparing features and analysing their interactions and relevance.

Flood analysis in Peru using satellite image: The Summer 2017 case

After data collection, the next step is to apply different image processing techniques to identify the affected areas. The pre-processing and processing techniques implemented in this work consist of five stages. In Fig. 6, one can observe a block diagram for this image processing step.

Towards a parallel image mining system

A set of standard image processing tasks is commonly used at the processing stage, such as image smoothing, histogramming, 2-D FFT calculation, local area histogram equalization, local area brightness and gain control, feature extraction, maximum likelihood classification, contextual statistical classification, image correlation (convolution, filtering), scene segmentation, clustering, feature enhancement, rendering, etc. [6]. Many existing algorithmic implementations [3][5][7][9][11][12] could be realized through parallel solutions. Moreover, different techniques at different grain scales could be applied depending on the particular task; some of them are described in [4][22][23]. At this level, any proposed parallel model does not depend directly on the mining model itself; rather, it depends directly on the image processing tasks involved in the processing phase. As a consequence, any possible parallel model will be closely related to the specific image processing task to be done [10]; that is the reason why we do not suggest a single model. The best solution could be to build a standard parallel image processing library that enables parallel processing in different combinations.
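The tile-level data parallelism described, where each worker runs the same low-level operation on its own piece of the image, can be sketched with a thread pool (threads stand in here for PVM-style message passing between machines; the smoothing task is an arbitrary stand-in for the listed operations):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def smooth_rows(block):
    """3-tap moving average along rows (an arbitrary low-level task)."""
    padded = np.pad(block, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def parallel_smooth(image, workers=3):
    """Split the image into horizontal strips and process them concurrently.

    A row-wise filter has no inter-strip dependency, so the strips can be
    distributed freely and simply stacked back together.
    """
    strips = np.array_split(image, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(smooth_rows, strips)))

img = np.arange(48, dtype=float).reshape(6, 8)
out = parallel_smooth(img)
```

Operations with spatial support crossing strip boundaries (convolution with tall kernels, segmentation) would instead need halo exchange between workers, which is exactly where the choice of parallel model starts to depend on the task, as the text argues.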

Complex Modulation Code for Low Resolution Modulation Devices

Digital information technology is constantly developed using electronic devices. Three-dimensional (3D) image processing is also supported by electronic devices to record and display signals. Computer generated holograms (CGH) and integral imaging (II) use liquid-crystal spatial light modulators (SLMs). This doctoral dissertation studies and develops the application of a commercial twisted nematic liquid crystal display (TNLCD) in computer generated holography and integral imaging. The goal is to encode and reconstruct complex wave fronts with computer generated holograms, and 3D images using integral imaging systems. Light modulation curves are presented: amplitude and phase-mostly modulation. Holographic codes are designed and implemented experimentally with optimum reconstruction efficiency, maximum signal bandwidth, and high signal-to-noise ratio (SNR). The study of the TNLCD in II is presented as a review of the basic display techniques. A digital magnification of 3D images is proposed and implemented. Digitally magnified 3D images have the same quality as optically magnified images, but the magnifying system is less complex. Recognition of partially occluded objects is solved using 3D II volumetric reconstruction; the 3D recognition solution performs better than conventional 2D image systems. The importance of holography and 3D II is supported by applications such as optical tweezers with dynamic trapping light configurations, invariant beams, and 3D medical images.

Identification of the area affected by Lemna in the Bay of Puno using digital image processing

The identification of the area affected by Lemna in the inner Bay of Puno is necessary, and is also a great source of information for the decontamination and treatment works in the waters of Lake Titicaca. At present there are many digital image processing techniques; a widely used tool is the Matlab Image Processing Toolbox, chosen precisely for its precision in statistical data and in the recognition of areas and contours within an image, thanks to the image processing library found in Matlab and the use of a GUI (Graphic User Interface). Samples were taken from Google Earth satellite imagery of August 2013, and the inner bay of Puno was divided into 8 zones for recognition. The program reported the following results for each zone: Zone 1 with a Lemna area of 25406.3 m², Zone 2 with a Lemna area of 53196.3 m², Zone 3 with a Lemna area

Hyperspectral image representation and processing with binary partition trees

In this work, Binary Partition Trees have been proposed as a new representation for hyperspectral images. Obtained through a recursive region-merging algorithm, they can be interpreted as a new region-based and hierarchical representation of the hyperspectral data. The main advantage of the BPT is that it can be considered a generic representation. Hence, it can be constructed once and used for many applications such as segmentation, classification, filtering, object detection, etc. Many tree processing techniques can be formulated as pruning strategies. Concerning the BPT construction, two concepts have been highlighted to define the recursive merging algorithm. The first is the use of nonparametric statistical region models, which efficiently deal with the problems of spectral variability and textures when clustering hyperspectral data. The second is the use of a new similarity measure called Multi-Dimensional Scaling (MDS), depending on canonical correlations relating principal coordinates. Note that in this approach, as in many hyperspectral image processing algorithms, there is a dimension reduction step represented by the number of principal components. However, in contrast to classical approaches, the dimension reduction is not defined and applied globally on the entire image but locally between each pair of regions. It has been demonstrated that the BPT enables the extraction of a hierarchically structured set of regions that represents the image well. As a first example of BPT processing, we have proposed and illustrated a pruning strategy to classify hyperspectral data. Experimental results obtained on different data sets have shown that the proposed method improves the classification accuracy of a classical SVM, providing classification maps with a reduced amount of noise. Future work will be conducted on the pruning strategy. New global techniques are currently being studied to improve the accuracy and robustness of the results.
We will also develop pruning strategies for different types of applications including object detection and segmentation.
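The recursive region-merging that builds a BPT can be sketched in a few lines: start from one region per leaf, repeatedly merge the most similar pair, and record the merge order. The sketch below uses plain Euclidean distance between mean spectra as a stand-in for the MDS-based measure of the paper, and area-weighted means as a stand-in for its nonparametric region models:

```python
import numpy as np

def build_bpt(spectra):
    """Greedy binary-partition-tree construction over region spectra.

    spectra: list of 1-D mean spectra, one per initial region.  At each
    step the two most similar regions are merged; the merged region gets
    the area-weighted mean spectrum.  Returns the merge list of
    (left_id, right_id, new_id) triples; ids >= len(spectra) are internal
    nodes, so a tree over N leaves has 2N - 1 nodes in total.
    """
    regions = {i: (np.asarray(s, float), 1) for i, s in enumerate(spectra)}
    next_id, merges = len(spectra), []
    while len(regions) > 1:
        ids = list(regions)
        best = min(((a, b) for i, a in enumerate(ids) for b in ids[i + 1:]),
                   key=lambda p: np.linalg.norm(regions[p[0]][0]
                                                - regions[p[1]][0]))
        (sa, na), (sb, nb) = regions.pop(best[0]), regions.pop(best[1])
        regions[next_id] = ((na * sa + nb * sb) / (na + nb), na + nb)
        merges.append((best[0], best[1], next_id))
        next_id += 1
    return merges

# Three toy regions: 0 and 2 are spectrally close and merge first.
merges = build_bpt([[0, 0], [10, 10], [0.1, 0.0]])
```

A pruning strategy, as used for classification in the text, then amounts to cutting this merge sequence at chosen internal nodes.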

Applying parallelism in image mining

Image mining deals with the study and development of new technologies that allow accomplishing this subject. A common mistake about image mining concerns identifying its scope and limitations. Clearly, it is different from the computer vision and image processing areas: image mining deals with the extraction of image patterns from a large collection of images, whereas the focus of computer vision and image processing is on understanding and/or extracting specific features from a single image. On the other hand, it might be thought that it is closely related to the content-based retrieval area, since both deal with large image collections. Nevertheless, image mining goes beyond the simple fact of recovering relevant images; the goal is the discovery of image patterns that are significant in a given collection of images. As a result, an image mining system implies many tasks to be done within a regular time. Images provide a natural source of parallelism, so the use of parallelism in every mining task, or in some of them, might be a good option to reduce the cost and overhead of the whole image mining process.

Automatic stereoscopic video object-based watermarking using qualified significant wavelet trees

A very interesting and common category of attacks combines mixed image processing operations together with JPEG compression. Mixed image processing operations can enhance the overall quality of video objects, while JPEG compression decreases the data size of the final video objects. In our experiments, sharpening and blurring operations are performed on the watermarked video objects of Figures 6(d) and 7(d) and then JPEG compression is applied. Tables VI and VII show the watermark extraction results for the two video objects. The video object of Figure 6(d) is enhanced and afterwards compressed with ratio 15.7, providing a PSNR value of 27.4 dB. Similar image processing operations are performed on the video object of Figure 7(d), where now the compression ratio is 16.3, providing a PSNR equal to 29.6 dB. Again, in all cases the extracted watermark patterns are highly correlated with the original watermarks, while the characters contained in each pattern are in most cases easily recognizable.
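The PSNR figures quoted follow the standard definition, 10·log10(peak²/MSE). A minimal numpy version:

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    diff = original.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Demo: every pixel off by exactly one grey level -> MSE = 1,
# so PSNR = 10*log10(255^2) ~ 48.1 dB.
a = np.zeros((8, 8))
b = np.ones((8, 8))
val = psnr(a, b)
```

Values around 27-30 dB, as reported after compression, correspond to a visibly degraded but still recognizable image, which is the regime where watermark robustness matters.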

Stream processing to solve image search by similarity

Stream processing is about the "analysis and subsequent reaction with data in motion". In this paper we used the stream processing platform S4 to accelerate an image processing system when a continuous flow of objects is expected. The design of an application architecture capable of analyzing and acting on a flow of images in real time is presented here. Finally, an experimental study of the performance of each step involved in the process is carried out. As to the construction of the feature vector, the impact of the SSS technique used for the search is visible, making this one of the points to develop further in order to optimize the proposed architecture. Regarding the construction of the index, the technique used maintains the utilization of each PE at 50%, showing its positive impact on the indexing time of the image objects. Finally, this paper analyzes the performance of the whole image similarity search process, showing how the process is favoured when it rests on a solid and dynamic architecture that enables it to rule out candidates during the search process.