Patent application number | Description | Published |
--- | --- | --- |
20080292153 | Generating an anatomical model using a rule-based segmentation and classification process - A system for computer-aided detection uses a computer-implemented network structure to analyze patterns present in digital image slices of a human body and to generate a three-dimensional anatomical model of a patient. The anatomical model is generated by detecting easily identifiable organs first and then using those organs as context objects to detect other organs. A user specifies membership functions that define which objects of the network structure belong to the various classes of human organs specified in a class hierarchy. A membership function of a potentially matching class determines whether a candidate object of the network structure belongs to the potential class based on the relation between a property of the voxels linked to the candidate object and a property of the context object. Some voxel properties used to classify an object are location, brightness and volume. The human organs are then measured to assist in the patient's diagnosis. | 11-27-2008 |
20100265267 | Analyzing pixel data by imprinting objects of a computer-implemented network structure into other objects - An analysis system analyzes digital images using a computer-implemented network structure that includes a process hierarchy, a class network and a data network. The data network includes image layers and object networks. Objects in a first object network are segmented into a first class, and objects in a second object network are segmented into a second class. One process step of the process hierarchy involves generating a third object network by imprinting objects of the first object network into the objects of the second object network such that pixel locations are unlinked from objects of the second object network to the extent that the pixel locations were also linked to objects of the first object network. The imprinting step allows object-oriented processing of digital images to be performed with fewer computations and less memory. Characteristics of an object of the third object network are then determined by measuring the object. | 10-21-2010 |
20110122138 | Context driven image mining to generate image-based biomarkers - An image-based biomarker is generated using image features obtained through object-oriented image analysis of medical images. The values of a first subset of image features are measured and weighted. The weighted values of the image features are summed to calculate the magnitude of a first image-based biomarker. The magnitude of the biomarker for each patient is correlated with a clinical endpoint, such as a survival time, that was observed for the patient whose medical images were analyzed. The correlation is displayed on a graphical user interface as a scatter plot. A second subset of image features is selected that belong to a second image-based biomarker such that the magnitudes of the second image-based biomarker for the patients better correlate with the clinical endpoints observed for those patients. The second biomarker can then be used to predict the clinical endpoint of other patients whose clinical endpoints have not yet been observed. | 05-26-2011 |
20120147010 | Graphical User Interface For Interpreting The Results Of Image Analysis - A method of intuitively displaying values obtained from analyzing bio-medical images includes displaying a table of the values in a first pane of a graphical user interface. The table contains a user selectable row that includes a reference value and two numerical values. The reference value refers to an image of a tissue slice. The first numerical value is generated by performing image analysis on the image, and the second numerical value indicates a health state of the tissue. The image is displayed in a second pane of the graphical user interface in response to the user selecting the user selectable row. A graphical plot with a selectable symbol associated with the image is displayed in a third pane. The symbol has a position in the plot defined by the values. Alternatively, in response to the user selecting the selectable symbol, the image is displayed in the second pane. | 06-14-2012 |
20120232930 | Clinical Decision Support System - A clinical decision support system performs a similarity search to determine the probable outcome of applying to a current patient those clinical actions that were performed on similar patients. The system analyzes stored electronic health records of similar patients so as to recommend diagnostic and therapeutic steps for the current patient. The system receives the health record of the patient, determines which clinical actions were already applied to the patient, generates classifiers associated with potential future clinical actions, generates a success value for each health record of another patient using the classifiers, displays the health record of the other patient having the greatest success value, and indicates a proposed clinical action that is to be applied to the patient. The system also calculates a quality value indicating the probability that a sequence of clinical actions that were applied to a similar patient will be successful if applied to the current patient. | 09-13-2012 |
20120237106 | Automatic image analysis and quantification for fluorescence in situ hybridization - An analysis system automatically analyzes and counts fluorescence signals present in biopsy tissue marked using Fluorescence in situ Hybridization (FISH). The user of the system specifies classes of a class network and process steps of a process hierarchy. Then pixel values in image slices of biopsy tissue are acquired in three dimensions. A computer-implemented network structure is generated by linking pixel values to objects of a data network according to the class network and process hierarchy. Objects associated with pixel values at different depths of the biopsy tissue are used to determine the number, volume and distance between cell components. In one application, fluorescence signals that mark Her2/neu genes and centromeres of chromosome seventeen are counted to diagnose breast cancer. Her2/neu genes that overlap one another or that are covered by centromeres can be accurately counted. Signal artifacts that do not mark genes can be identified by their excessive volume. | 09-20-2012 |
20130016886 | Generating Artificial Hyperspectral Images Using Correlated Analysis of Co-Registered Images - High-resolution digital images of adjacent slices of a tissue sample are acquired, and tiles are defined in the images. Values associated with image objects detected in each tile are calculated. The tiles in adjacent images are co-registered. A first hyperspectral image is generated using a first image, and a second hyperspectral image is generated using a second image. A first pixel of the first hyperspectral image has a first pixel value corresponding to a local value obtained using image analysis on a tile in the first image. A second pixel of the second hyperspectral image has a second pixel value corresponding to a local value calculated from a tile in the second image. A third hyperspectral image is generated by combining the first and second hyperspectral images. The third hyperspectral image is then displayed on a computer monitor using a false-color encoding generated using the first and second pixel values. | 01-17-2013 |
20130108139 | Biomarker Evaluation Through Image Analysis | 05-02-2013 |
20130156279 | Evaluation of Co-Registered Images of Differently Stained Tissue Slices - A method for co-registering images of tissue slices stained with different biomarkers displays a first digital image of a first tissue slice on a graphical user interface such that an area of the first image is enclosed by a frame. Then a portion of a second image of a second tissue slice is displayed such that the area of the first image enclosed by the frame is co-registered with the displayed portion of the second image. The displayed portion of the second image has the shape of the frame. The tissue slices are both z slices of a tissue sample taken at corresponding positions in the x and y dimensions. The displayed portion of the second image is shifted in the x and y dimensions to coincide with the area of the first image that is enclosed by the frame as the user shifts the first image under the frame. | 06-20-2013 |
20140050384 | Context Driven Image Mining to Generate Image-Based Biomarkers - An image-based biomarker is generated using image features obtained through object-oriented image analysis of medical images. The values of a first subset of image features are measured and weighted. The weighted values of the image features are summed to calculate the magnitude of a first image-based biomarker. The magnitude of the biomarker for each patient is correlated with a clinical endpoint, such as a survival time, that was observed for the patient whose medical images were analyzed. The correlation is displayed on a graphical user interface as a scatter plot. A second subset of image features is selected that belong to a second image-based biomarker such that the magnitudes of the second image-based biomarker for the patients better correlate with the clinical endpoints observed for those patients. The second biomarker can then be used to predict the clinical endpoint of other patients whose clinical endpoints have not yet been observed. | 02-20-2014 |
20140169654 | Gleason Grading by Segmenting and Combining Co-Registered Images of Differently Stained Tissue Slices - An improved histopathological score is obtained by generating image objects from images of tissue containing stained epithelial cells. First objects are generated that correspond to basal cells stained with a first stain, such as p63. Second objects are generated that correspond to luminal cells stained with a second stain, such as CK18. If the same tissue is not stained with both stains, then the images of differently stained tissue are co-registered. Third objects are defined to include only those second objects that have more than a minimum separation from any first object. A scoring region includes the third objects, and the histopathological score is determined based on tissue that falls within the scoring region. For example, a Gleason score of prostate tissue is determined by classifying tissue patterns in the scoring region. Alternatively, a Gleason pattern is assigned by counting the number of third objects that possess a predetermined form. | 06-19-2014 |
20140185891 | Generating Image-Based Diagnostic Tests By Optimizing Image Analysis and Data Mining Of Co-Registered Images - A method for generating an image-based test improves diagnostic accuracy by iteratively modifying rule sets governing image and data analysis of co-registered image tiles. Digital images of stained tissue slices are divided into tiles, and tiles from different images are co-registered. First image objects are linked to selected pixels of the tiles. First numerical data is generated by measuring the first objects. Each pixel of a heat map aggregates first numerical data from co-registered tiles. Second objects are linked to selected pixels of the heat map. Measuring the second objects generates second numerical data. The method improves how well the second numerical data correlates with clinical data of the patient whose tissue is analyzed by modifying the rule sets used to generate the first and second objects and the first and second numerical data. The test is defined by those rule sets that produce the best correlation with the clinical data. | 07-03-2014 |
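The rule-based classification of application 20080292153 can be pictured as a fuzzy membership function that scores a candidate object against an already-detected context object. The sketch below is purely illustrative, not the patented implementation: the organ names, the volume-ratio rule, and all numeric thresholds are invented for the example.

```python
# Illustrative sketch (assumed rule, not from the patent): a membership
# function returns a value in [0, 1] expressing how well a candidate
# object's voxel property matches an expected relation to the property
# of a context object (e.g. an easily identified organ detected first).

def membership(candidate_volume, context_volume, expected_ratio, tolerance):
    """1.0 when candidate/context volume ratio equals expected_ratio,
    falling off linearly to 0 outside +/- tolerance."""
    ratio = candidate_volume / context_volume
    deviation = abs(ratio - expected_ratio)
    return max(0.0, 1.0 - deviation / tolerance)

def classify(candidate_volume, context_volume, threshold=0.5):
    """Assign the candidate to the class if membership exceeds threshold.
    The expected_ratio/tolerance numbers below are invented examples."""
    m = membership(candidate_volume, context_volume,
                   expected_ratio=0.1, tolerance=0.05)
    return m > threshold, m
```

In the patented system the user specifies such membership functions per class of the class hierarchy; here a single hard-coded rule stands in for that machinery.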
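The "imprinting" step of application 20100265267 can be sketched with plain set operations, modeling each image object as a set of pixel locations. This is a minimal sketch under that modeling assumption; the object names are invented.

```python
# Illustrative sketch of imprinting: pixel locations are unlinked from
# objects of the second object network to the extent that they are also
# linked to objects of the first object network, producing a third
# object network. Objects are modeled as sets of (x, y) locations.

def imprint(first_network, second_network):
    """Return a third object network: first-network objects are carried
    over, and each second-network object keeps only the pixel locations
    not claimed by any first-network object."""
    claimed = set().union(*first_network.values()) if first_network else set()
    third = {name: set(pixels) for name, pixels in first_network.items()}
    for name, pixels in second_network.items():
        remaining = set(pixels) - claimed  # unlink shared locations
        if remaining:
            third[name] = remaining
    return third
```

Because the shared locations are removed rather than duplicated, later object-oriented processing touches each pixel through at most one object per region, which is the source of the computation and memory savings the abstract mentions.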
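The image-based biomarker of applications 20110122138 and 20140050384 reduces to two steps that are easy to sketch: a weighted sum of measured image-feature values per patient, and a correlation of the resulting magnitudes with an observed clinical endpoint. The feature names and weights below are invented for the example; Pearson's r stands in for whatever correlation measure the system actually uses.

```python
# Illustrative sketch: biomarker magnitude as a weighted feature sum,
# correlated against observed clinical endpoints (e.g. survival times).
from math import sqrt

def biomarker_magnitude(features, weights):
    """Weighted sum of measured image-feature values for one patient."""
    return sum(weights[name] * value for name, value in features.items())

def pearson_r(xs, ys):
    """Pearson correlation between magnitudes and clinical endpoints."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Selecting the "second subset" of features described in the abstracts then amounts to searching for a feature subset and weights that increase the absolute correlation before using the biomarker predictively.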
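The heat-map step of application 20140185891 can likewise be sketched: images are divided into tiles, and each heat-map pixel aggregates a per-tile measurement across co-registered images. In this sketch the per-tile measurement is mean pixel intensity and images are assumed already co-registered with identical dimensions; both are simplifying assumptions, not the patented rule sets.

```python
# Illustrative sketch: one heat-map pixel per tile position, averaging a
# per-tile value (here mean intensity) over co-registered images.

def tile_means(image, tile):
    """Mean intensity of each tile x tile block of a 2D image (list of rows)."""
    h, w = len(image), len(image[0])
    means = []
    for r in range(0, h, tile):
        row = []
        for c in range(0, w, tile):
            vals = [image[i][j] for i in range(r, min(r + tile, h))
                                for j in range(c, min(c + tile, w))]
            row.append(sum(vals) / len(vals))
        means.append(row)
    return means

def heat_map(images, tile):
    """Aggregate per-tile values across co-registered images of equal size."""
    grids = [tile_means(img, tile) for img in images]
    return [[sum(g[r][c] for g in grids) / len(grids)
             for c in range(len(grids[0][0]))]
            for r in range(len(grids[0]))]
```

In the patented method the aggregated values come from measuring image objects under modifiable rule sets rather than raw intensities, but the tile-to-heat-map-pixel structure is the same.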