Flow-mediated dilation (FMD) offers a mechanism to characterize endothelial function and therefore may play a role in the diagnosis of cardiovascular diseases. Computerized analysis techniques are very desirable to give accuracy and objectivity to the measurements. Virtually all methods proposed to date to measure FMD rely on accurate edge detection of the arterial wall, and they are not always robust in the presence of poor image quality or image artifacts. A novel method for automatic dilation assessment based on a global image analysis strategy is presented. We model interframe arterial dilation as a superposition of a rigid motion model and a scaling factor perpendicular to the artery. Rigid motion can be interpreted as a global compensation for patient and probe movements, an aspect that has not been sufficiently studied before. The scaling factor explains arterial dilation. The ultrasound (US) sequence is analyzed in two phases using image registration to recover both transformation models. Temporal continuity in the registration parameters along the sequence is enforced with a Kalman filter, since the dilation process is known to be a gradual physiological phenomenon. Comparing automated and gold-standard measurements, we found a negligible bias (0.04%) and a small standard deviation of the differences (1.14%). These values are better than those obtained from manual measurements (bias = 0.47%, SD = 1.28%). The proposed method also offers better reproducibility (CV = 0.46%) than the manual measurements (CV = 1.40%).
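The temporal-continuity idea can be sketched with a minimal one-dimensional Kalman filter applied to a per-frame scale estimate. This is an illustrative numpy sketch, not the paper's implementation; the noise variances `q` and `r` and the synthetic measurement values are assumptions chosen for demonstration.

```python
import numpy as np

def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter enforcing temporal continuity on a
    per-frame registration parameter (e.g. an arterial scaling
    factor). q is the process noise variance (how fast the true
    dilation may drift); r is the measurement noise variance
    (frame-to-frame registration jitter)."""
    x, p = measurements[0], 1.0      # initial state and its variance
    smoothed = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: state is locally constant
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # correct with the new estimate
        p = (1.0 - k) * p
        smoothed.append(x)
    return np.array(smoothed)

# Noisy per-frame scale estimates around a dilated steady state of 1.05
rng = np.random.default_rng(0)
noisy = 1.05 + 0.01 * rng.standard_normal(60)
smooth = kalman_smooth(noisy)
```

Because the dilation is modeled as a gradual process, the filter suppresses frame-to-frame jitter in the recovered scale while tracking its slow drift.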
Regarding neural networks, the best approach for convolutional and feedforward neural networks is to train a single neural network per expression. The image is then classified by running it through all networks and choosing the expression whose network outputs the highest probability. The most challenging part of training the neural networks is avoiding overfitting; a combination of dropout, normalization, and augmentation helps to avoid this problem. Convolutional neural networks are expected to perform better than feedforward neural networks, because convolutional neural networks preserve information about the location of each pixel with respect to its neighbours, while feedforward networks do not. Results show that convolutional neural networks indeed perform better and are proven to be better suited for expression detection through image analysis.
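The one-network-per-expression scheme can be sketched as follows. The tiny linear scorers with random weights below are hypothetical stand-ins for the trained networks; the expression names and image size are illustrative assumptions.

```python
import numpy as np

# Stand-ins for trained per-expression networks: each maps a flattened
# 64x64 face image to the probability of a single expression.
rng = np.random.default_rng(42)
EXPRESSIONS = ["happy", "sad", "angry", "surprised"]
weights = {e: 0.01 * rng.standard_normal(64 * 64) for e in EXPRESSIONS}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(image):
    """Run the image through every per-expression network and return
    the expression whose network outputs the highest probability."""
    flat = image.ravel()
    probs = {e: float(sigmoid(w @ flat)) for e, w in weights.items()}
    return max(probs, key=probs.get), probs

label, probs = classify(rng.standard_normal((64, 64)))
```

In a real system each scorer would be a trained convolutional or feedforward network; only the argmax-over-networks decision rule is the point of the sketch.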
since we apply the process to new data. Without reproducibility, researchers cannot obtain new results. Therefore, the main objective of this project is to offer other researchers the possibility of executing any image analysis algorithm on their own dataset without access to the original code. They just need to upload their dataset and, after execution, they will get the results. This way we cover one of the main principles of reproducibility in computer science, since researchers will be able to rerun the experiment with new datasets. As we said in the previous paragraph, that possibility of rerunning the experiment with new data, in this case with a new set of images, is very important in today's approaches to computer science. The second objective, which is of great importance, is to collect new images from the researchers. With those images we will be able to build new datasets for future research and improve already existing experiments. The main objective is mainly intended for researchers to easily test algorithms with their datasets; the second objective is intended for the laboratory that owns the project.
Retinal image analysis is a constantly growing applied and interdisciplinary field in which any recent development in optics, medical imaging, image and signal processing, statistical analysis, etc. is likely to find its place to further augment the capabilities of the modern clinician. Its primary goal is to make automated image analysis and computer-aided diagnosis ubiquitous and effective enough to ultimately improve the health-care system, thus enhancing the health and quality of life of the general population. When we started this thesis the aforementioned goal seemed rather distant; today, many strong initiatives are being developed to solidify the use of computer-aided diagnosis in many different medical fields. Ophthalmology is a field that is heavily dependent on the analysis of digital images because they can aid in establishing an early diagnosis even before the first symptoms appear. In this way it is easier to stop the development of many ocular diseases while they are in the early stages. As such, we have made our best effort to contribute and find solutions to different problems along the imaging pipeline. We have dealt with problems arising in image acquisition (Chapters 3 and 4), poor image quality (Chapters 4, 5, and 6), and extraction of features of medical relevance (Chapters 2 and 5). By improving the acquisition procedure, sorting the images by quality, or enhancing the images, among the different topics we have covered herein, we leverage the potential use of the images in the clinical setting. This has been our motivation throughout this thesis and the true common denominator of all chapters.
As we have seen, the application of digital image processing techniques to medical image analysis, in this case retinal images, is not only extremely beneficial but can also prove to be effective and cost-efficient for disease management, diagnosis, screening, etc. The increasing need for early detection and screening, along with the ever-increasing costs of health care, is likely to be the driving force for the rapid adoption and translation of research findings into clinical practice. Our contributions are a step toward that goal. However, there are many remaining obstacles, and there is an implicit need to develop and test more robust techniques, on many more patients, different illnesses, etc., before this technology is ready for everyday use. In our particular case, we need to further develop the PSF estimation and selection technique before we can have a fully automated fundus image enhancement algorithm. We believe that mobile computing devices will pave the way in the upcoming years for health-oriented applications with the intent of increasing global health-care access.
In this research, twelve soil pits were excavated on a bare Mazic Pellic Vertisol, six of them on May 13, 2011 and the rest on May 19, 2011 after a moderate rainfall event. Digital RGB images of 1600 × 945 pixels were taken of each vertisol pit using a Kodak™ camera. Each soil image was processed to homogenize brightness, and then a spatial filter with several window sizes was applied in order to select the optimum one. The resulting RGB images were split into their color channels, and minimum and maximum thresholds were selected for each channel to obtain a digital binary pattern. This pattern was analyzed by estimating two fractal scaling exponents: the box-counting dimension (D_BC) and the interface fractal dimension (D_i). In addition, three pre-fractal scaling coefficients were determined at maximum resolution: the total number of boxes intercepting the foreground pattern (A), fractal lacunarity (λ1), and Shannon entropy (S1).
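The box-counting dimension D_BC can be sketched as follows. This is a minimal numpy sketch on synthetic binary patterns; the box sizes are illustrative, not those used in the study.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate D_BC: count the boxes of side s that intersect the
    foreground pattern, then fit log N(s) = -D * log s + c."""
    n = min(binary.shape)
    counts = []
    for s in sizes:
        m = n - n % s                          # crop to a multiple of s
        blocks = binary[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

filled = np.ones((64, 64), dtype=bool)         # a plane: expect D ≈ 2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                             # a line: expect D ≈ 1
```

The coefficient A reported above is simply the box count N(s) at the finest resolution (s = 1 here).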
The main aim of dynamic imaging is to study the physiology (function) of the organ in vivo. Typically the image sequence has constant morphologic structures of the imaged organs, but the regional voxel intensity varies from one frame to another, depending on the local tissue response to the administered contrast agent or radiopharmaceutical. In the past, analysis of such dynamic images involved only visual analysis of differences between the early and delayed images, from which qualitative information about the organ, for instance regional myocardial blood flow and distribution volume, was obtained. However, the sequence of dynamic images also contains spatially varying quantitative information about the organ, which is difficult to extract based solely on visual analysis. This led to the method of parametric imaging, where dynamic curves in the image sequence are fitted to a mathematical model on a pixel-wise basis. Parametric images, whose pixels define individual kinetic parameters or physiologic parameters describing the complex biochemical pathways and physiologic/pharmacokinetic processes occurring within the tissue/organ, can then be constructed. This approach is categorized as a model-led technique that utilizes knowledge and a priori assumptions about the processes under investigation, and represents the kinetics of the measured data by an analytical (or parametric) model.
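Pixel-wise parametric imaging can be sketched with a mono-exponential washout model fitted by linear regression on the log intensities. The model C(t) = A·exp(-k·t) is a hypothetical choice for illustration; real tracer-kinetic models are typically fitted with nonlinear least squares.

```python
import numpy as np

def parametric_image(frames, times):
    """Fit C(t) = A * exp(-k * t) at every pixel by linear regression
    on log C, and return the parametric image of the rate constant k."""
    t, h, w = frames.shape
    log_c = np.log(frames.reshape(t, -1))      # (frames, pixels)
    slopes = np.polyfit(times, log_c, 1)[0]    # slope of log C vs t is -k
    return -slopes.reshape(h, w)

# Synthetic phantom: fast washout (k = 0.5) in the left half of the
# image, slow washout (k = 0.1) in the right half.
times = np.linspace(0.0, 10.0, 20)
k_true = np.where(np.arange(8) < 4, 0.5, 0.1)             # per column
frames = 3.0 * np.exp(-times[:, None, None] * k_true)      # (20, 1, 8)
frames = np.broadcast_to(frames, (20, 8, 8)).copy()
k_map = parametric_image(frames, times)
```

Every pixel of `k_map` holds one kinetic parameter, which is exactly the structure of a parametric image.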
The level set deformation method, which is well documented in the literature, creates a new volume from the input data by solving an initial-value partial differential equation (PDE) with user-defined feature-extracting terms. Given the local/global nature of these terms, proper initialization of the level set algorithm is extremely important. Thus, level set deformations alone are not sufficient; they must be combined with powerful preprocessing and data analysis techniques in order to produce successful segmentations. In this chapter the authors describe the preprocessing and data analysis techniques that have been developed for a number of segmentation applications, as well as the general structure of our framework. Several standard volume processing algorithms have been incorporated into the framework in order to segment datasets generated from MRI, CT, and TEM scans. A technique based on moving least-squares has been developed for segmenting multiple nonuniform scans of a single object. New scalar measures have been defined for extracting structures from diffusion tensor MRI scans. Finally, a direct approach to the segmentation of incomplete tomographic data using density parameter estimation is presented. These techniques, combined with level set surface deformations, allow us to segment many different types of biological volume datasets.
With regard to the synthetic addition of noise and outliers, the combination of geometric and structural constraints proposed by our method has resulted in performance superior to that of either of its parts separately, as well as to the rest of the methods. In the presence of nonrigid deformations, both the full version of our method and the purely structural one share the best performance. In the image matching experiments, the methods with outlier rejection capabilities have performed best, due to the usual presence of clutter in these types of experiments. There are no significant differences between the performance of the full and the purely geometric versions of our method in these experiments. This is because the similarity transformation model adjusts fairly well to the underlying geometry of the problem. The Dual-Step method by Cross & Hancock (1998) presents roughly similar performance to ours but takes considerably more time.
Then, a factor analysis of these components (cognitive and affective) was conducted in order to identify possible underlying dimensions of perception in the set of attributes. This factor analysis of the components is used to reduce the large amount of data by grouping together, under the same dimension, those attributes related to each other. For this purpose, the VARIMAX method of rotation with Kaiser normalization was used. Once the rotation was completed, the significant factors which explain at least one variable were selected. Thus, among the 24 displayed attributes (Table 1), we obtained five different factors which explain 53.42% of the variance using factor analysis.
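The extraction-plus-rotation step can be sketched in numpy. This sketch assumes principal-axis extraction with the Kaiser eigenvalue-greater-than-one criterion; the synthetic six-attribute data driven by two latent dimensions is an illustrative stand-in for the survey responses, not the study's data.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """VARIMAX rotation of a factor-loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p))
        R = u @ vt
        if s.sum() < var * (1.0 + tol):
            break
        var = s.sum()
    return loadings @ R

# Six attributes driven by two latent dimensions plus noise
rng = np.random.default_rng(1)
z = rng.standard_normal((300, 2))
W = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
X = z @ W.T + 0.3 * rng.standard_normal((300, 6))

corr = np.corrcoef(X, rowvar=False)
vals, vecs = np.linalg.eigh(corr)
vals, vecs = vals[::-1], vecs[:, ::-1]          # descending order
n_keep = int((vals > 1.0).sum())                # Kaiser criterion
loadings = vecs[:, :n_keep] * np.sqrt(vals[:n_keep])
explained = vals[:n_keep].sum() / vals.sum()    # share of total variance
rotated = varimax(loadings)
```

After rotation each attribute loads mainly on one factor, which is what makes the dimensions interpretable as in the study.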
Abstract—This paper presents a novel and general framework for histopathology image analysis using nonnegative matrix factorization. The proposed method uses a collection-based image representation called Bag of Features (BOF) to represent the visual information of a histopathology image collection. Convex Nonnegative Matrix Factorization (CNMF) is applied to a training set of images to find a compact representation in a latent topic space. The latent representation has two important characteristics: first, CNMF is able to find representative clusters of images in the collection; second, clusters are represented by convex linear combinations of images in the training set. This latent representation is exploited in different ways by the proposed framework: concept labels can be assigned to clusters using the labels of the constituting images, representative images and visual words can be identified for each cluster, and new unlabeled images can be labeled by mapping them to the latent space. The proposed annotation model has an interesting property: it is easily interpretable, since it is possible to trace the visual words present in the image which contribute the most to a given annotation. This implies that annotations in an image may be explained by identifying the regions that contributed to them. An exploratory experiment was performed on a histopathology dataset used to diagnose a type of skin cancer called basal cell carcinoma. The preliminary results show that the combination of BOF and NMF is an interesting alternative for biomedical image collection analysis with a high level of interpretability.
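The factorization step can be sketched with plain NMF multiplicative updates on a toy bag-of-features matrix. This is a simplified stand-in for the convex variant (CNMF constrains the basis to convex combinations of the data, which plain NMF does not); all sizes and the two-cluster structure are illustrative assumptions.

```python
import numpy as np

def nmf(V, k, n_iter=300, seed=0):
    """Lee–Seung multiplicative updates for V ≈ W @ H with W, H >= 0.
    Rows of V are per-image visual-word histograms; rows of H are the
    latent topics; row i of W gives image i's mixture over topics."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy collection: 10 images, 8 visual words, two obvious clusters
V = np.zeros((10, 8))
V[:5, :4] = 1.0        # first cluster uses words 0-3
V[5:, 4:] = 1.0        # second cluster uses words 4-7
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Interpretability comes from reading H directly: the largest entries of each topic row identify the visual words that drive an annotation, which is the tracing property described above.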
One of the most direct methods of characterizing soil structure is the analysis of the spatial arrangement of pore and solid spaces on images of sections of resin-impregnated soil. Recent technological advances in digital imagery and computers have greatly facilitated the application of image analysis techniques in soil science. Thick sections are analysed by reflected light, and thin sections are analysed by transmitted light, to obtain images in which pores and solid spaces can be separated using image analysis techniques. Direct measurements on images, together with applications of set theory, are used to quantify the connectivity, size, and shape of pores. However, the image resolution and the threshold value used to discriminate between pore and solid space can introduce errors into the method.
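The pore/solid separation can be sketched as simple grayscale thresholding. This is a minimal sketch on a synthetic section; real images need noise filtering and careful threshold selection, which is exactly the error source noted above.

```python
import numpy as np

def segment_pores(gray, threshold):
    """Binary pore mask: pixels darker than the threshold are pores."""
    return gray < threshold

def porosity(pore_mask):
    """Areal porosity: fraction of the section occupied by pores."""
    return float(pore_mask.mean())

# Synthetic section: bright solid (~0.8) with one dark 20x20 pore (~0.2)
rng = np.random.default_rng(3)
gray = np.full((100, 100), 0.8)
gray[40:60, 40:60] = 0.2
gray += rng.uniform(-0.1, 0.1, gray.shape)      # acquisition noise
phi = porosity(segment_pores(gray, threshold=0.5))
```

Here the two intensity classes are well separated, so the threshold at 0.5 recovers the true 4% porosity; with overlapping intensity distributions the measured porosity would depend on the threshold, illustrating the sensitivity mentioned above.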
This task is clearly aimed at image analysis research groups, and the areas of expertise of the MIRACLE group do not include image analysis research. However, as our group does have strong expertise in automatic learning algorithms applied to different projects, mainly in the fields of data, text, and web mining, we decided to make the effort and participate in this task to promote and encourage multidisciplinary participation in all aspects of information retrieval, whether text- or content-based.
In the literature, as described in subsection 2.3, there are many studies related to pornography detection in video, using different computational methods. All of them try to achieve as high a detection rate as possible and require considerable processing time. So, although they are excellent techniques for identifying pornography in videos, they were not specifically developed to be used at crime scenes. Also, they are not specific to child pornography; rather, they try to identify videos of nudity or adult pornography. NuDetective Video Analysis uses the algorithms provided by its ImageAnalysis. However, all the empirical experiments to determine the parameters of the algorithms were conducted with real child-pornographic videos. This means that the method described in this subsection is optimized to detect child-pornographic videos and may not work properly to identify adult pornography.
Image analysis is fundamental to any solution designed for outdoor use. The infinite variety of natural environments and conditions presents a formidable challenge to any detection or recognition system. The first task for these systems is to filter the images and highlight or extract the relevant information. Some of the principal difficulties are moving vegetation, rain, changes in light, reflections, low camera quality, etc. The recognition algorithms also allow objects and/or people to be identified and classified. Experience in, and knowledge of, these algorithms are the keys to adapting them and finding the optimal solutions. Vaelsys, being a specialised company, can combine different factors such as recognition rates, failure rates, performance, etc., to provide a realistic and effective solution.
destination, as they create underlying structures of representation and interpretation of places, cultures, and peoples. Stereotypes create expectations as to what a certain culture should look like or how locals should behave. Indeed, “images incorporated in marketing destinations set up a genre of myths and expectations that influence how cultures are perceived and interpreted” (Selwyn, 1996, as cited in Andsager & Drzewiecka, 2002). “What tourists see, experience, and learn about the cultures they visit is often conditioned by existing structures of image representation and interpretation of cultural others, which can re-affirm stereotypes rather than break them down” (Andsager & Drzewiecka, 2002). As the authors’ results suggest, “pre-existing stereotypes are not dismantled by actual experiences, but instead serve as standards against which the visited culture is evaluated”. “Stereotyping can be so strong that it can lead a tourist to see something that is not there” (Laxson, 1991, as cited in Andsager & Drzewiecka, 2002). However, Andsager and Drzewiecka (2002) acknowledge that although stereotypes have implications for the perception of cultural identity differences, it is inevitable for humans to classify information into types. “Destination images might be products of typing or stereotyping” (Andsager & Drzewiecka, 2002). When dealing with the issue of stereotypes in perceived image formation, the authors emphasize the effect that the desire for a cultural essential or inherent difference may have on how tourists perceive and interpret the destination and its culture. As found in their first study, “respondents generalized descriptions of the people who lived in the destinations pictured more from their own preconceived ideas than what they saw in the photos”.
“This apparently strong influence of stereotypes, even when potential tourists are wrong about the destination they believe they are viewing, suggests that stereotypes confound perceptions of familiarity—something that tourist images could easily exploit but may find difficult to combat” (Andsager & Drzewiecka, 2002).
point and several radial points, with the condition that the central point must have higher intensity than the radial points. The morphological filter was optimized by empirically analyzing two critical parameters: the number of radial points equidistantly distributed along an external radius, and the value of this external radius. The analysis was performed by considering that the intensity peaks always have a smaller radius than the grapes (a preliminary study showed that the most frequent radius for grapes was 75 pixels). Therefore, the external radius was varied from 10 to 50 pixels in steps of 10 pixels. Similarly, the number of radial points was varied from 5 to 35 points in steps of 5 points. Finally, the parameter combination selected for the morphological filter was a radius of 30 pixels and 15 radial points equidistantly distributed along the external radius. Moreover, the morphological detector was applied to the R color layer of the segmented and filtered images, due to the reddish nature of the grapes. The results showed that this configuration provided a good balance between time performance and the success achieved; the morphological detector was very fast, achieving a success rate of 85% when counting clusters of grapes and identifying even highly occluded grapes, with a low percentage of false positives (10%).
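The center-versus-ring criterion with the selected parameters (a 30-pixel radius and 15 radial points) can be sketched as follows. This is a simplified numpy sketch; the original filter is applied to the segmented and filtered R channel and likely includes further preprocessing.

```python
import numpy as np

def radial_peak_detector(img, radius=30, n_points=15):
    """Mark pixels whose intensity exceeds that of n_points samples
    placed equidistantly on a circle of the given radius around them."""
    h, w = img.shape
    angles = 2.0 * np.pi * np.arange(n_points) / n_points
    dy = np.round(radius * np.sin(angles)).astype(int)
    dx = np.round(radius * np.cos(angles)).astype(int)
    peaks = np.zeros(img.shape, dtype=bool)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            ring = img[y + dy, x + dx]          # intensities on the ring
            peaks[y, x] = bool((img[y, x] > ring).all())
    return peaks

# One bright grape-like blob centered at (50, 50)
yy, xx = np.mgrid[:100, :100]
blob = np.exp(-((yy - 50.0) ** 2 + (xx - 50.0) ** 2) / (2.0 * 10.0 ** 2))
peaks = radial_peak_detector(blob)
```

Because the ring radius is smaller than a typical grape, the center of each grape is brighter than its ring samples even under partial occlusion, which is why the detector still finds heavily occluded grapes.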
3) Terrain correction: Another essential step is to apply a geometric correction. In this work, a terrain correction is applied using the SRTM 3-arc-second DEM. For both the DEM and the image, the resampling method is set to bilinear interpolation. The pixel spacing in metres should be set to 10 m. In this case, “WGS84 (DD)” geographical coordinates were used.
Calcium waves were generated with a mechanical stimulus, which consisted of a borosilicate glass patch-clamp micropipette (Harvard Apparatus Ltd, UK) mounted on a micromanipulator. The micropipette descended gradually to touch the surface of a cell, generating an increase in calcium which spread to neighboring cells. Video microscopy images were obtained with a fluorescence phase microscope (Olympus BX51WI, Olympus Optical, Tokyo, Japan) with a water immersion lens coupled to an imaging detection system (QImaging Retiga 1300 CCD, cooled monochromatic 12-bit digital camera, QImaging, Burnaby, Canada). Images were captured with MetaFluor software (Universal Imaging Corp., PA, USA). To provide a sufficient number of frames for proper estimation of basal conditions, at least 10 seconds were captured before the mechanical stimulus in each experiment. Experiments were stopped when the operator could not see any significant intensity change in the ratio image displayed by the MetaFluor software.
allows the study of fractal dimension; X-ray, Scanning Electron Microscopy (SEM), and Transmission Electron Microscopy (TEM) analyses were used to examine changes on the surface of the samples. From the current results, the distinctive characteristics of the surfaces of each sample may be obtained, making it possible to predict the future behavior of the samples. The FRACLAB 2.03 MATLAB toolbox, developed by INRIA, was used as a tool.