Patent application number | Description | Published |
20130163859 | REGRESSION TREE FIELDS - A new tractable model solves labeling problems using regression tree fields, which represent non-parametric Gaussian conditional random fields. Regression tree fields are parameterized by non-parametric regression trees, allowing universal specification of interactions between image observations and variables. The new model uses regression trees corresponding to various factors to map dataset content (e.g., image content) to a set of parameters used to define the potential functions in the model. Some factors define relationships among multiple variable nodes. Further, regression tree training is scalable, both in training set size and through parallelization. In one implementation, maximum pseudolikelihood learning provides for joint training of various aspects of the model, including feature test selection and ordering (i.e., the structure of the regression trees), the parameters of each factor in the graph, and the scope of the interacting variable nodes used in the graph. | 06-27-2013 |
20130166481 | DISCRIMINATIVE DECISION TREE FIELDS - A tractable model solves certain labeling problems by providing potential functions having arbitrary dependencies upon an observed dataset (e.g., image data). The model uses decision trees corresponding to various factors to map dataset content to a set of parameters used to define the potential functions in the model. Some factors define relationships among multiple variable nodes. When making label predictions on a new dataset, the leaf nodes of the decision tree determine the effective weightings for such potential functions. In this manner, decision trees define non-parametric dependencies and can represent rich, arbitrary functional relationships if sufficient training data is available. Decision tree training is scalable, both in training set size and through parallelization. Maximum pseudolikelihood learning can provide for joint training of aspects of the model, including feature test selection and ordering, factor weights, and the scope of the interacting variable nodes used in the graph. | 06-27-2013 |
20140122381 | DECISION TREE TRAINING IN MACHINE LEARNING - Improved decision tree training in machine learning is described, for example, for automated classification of body organs in medical images or for detection of body joint positions in depth images. In various embodiments, improved estimates of uncertainty are used when training random decision forests for machine learning tasks, giving improved accuracy of predictions and fewer errors. In examples, bias-corrected estimates of entropy or of the Gini index are used, or non-parametric estimates of differential entropy. In examples, the resulting trained random decision forests are better able to perform classification or regression tasks for a variety of applications without undue increase in computational load. | 05-01-2014 |
20140172753 | RESOURCE ALLOCATION FOR MACHINE LEARNING - Resource allocation for machine learning is described for selecting between many possible options: for example, as part of an efficient training process for random decision trees, for selecting which of many families of models best describes the data, or for selecting which of many features best classifies items. In various examples, samples of information about uncertain options are used to score the options. In various examples, confidence intervals are calculated for the scores and used to select one or more of the options. In examples, the scores of the options may be bounded-difference statistics, which change little as any sample is omitted from the calculation of the score. In an example, random decision tree training is made more efficient whilst retaining accuracy for applications including, but not limited to, human body pose detection from depth images. | 06-19-2014 |
20140307950 | IMAGE DEBLURRING - Image deblurring is described, for example, to remove blur from digital photographs captured with a handheld camera phone and blurred by camera shake. In various embodiments, an estimate of blur in an image is available from a blur estimator, and a trained machine learning system is available to compute parameter values of a blur function from the blurred image. In various examples, the blur function is obtained from a probability distribution relating a sharp image, a blurred image, and a fixed blur estimate. For example, the machine learning system is a regression tree field trained using pairs of empirical sharp images and blurred images calculated from the empirical images using artificially generated blur kernels. | 10-16-2014 |
20150030237 | IMAGE RESTORATION CASCADE - Image restoration cascades are described, for example, where digital photographs containing noise are restored using a cascade formed from a plurality of layers of trained machine learning predictors connected in series. The noise may arise from sensor noise, motion blur, dust, optical low-pass filtering, chromatic aberration, compression and quantization artifacts, down-sampling, or other sources. Given a noisy image, each trained machine learning predictor produces an output image which is a restored version of the noisy input image; each trained machine learning predictor in a given internal layer of the cascade also takes input from the previous layer in the cascade. In various examples, a loss function expressing dissimilarity between the input and output images of each trained machine learning predictor is directly minimized during training. In various examples, data partitioning is used to partition a training data set to facilitate generalization. | 01-29-2015 |
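The bias-corrected entropy estimates mentioned in the DECISION TREE TRAINING entry (20140122381) can be illustrated with a minimal sketch. The abstract does not name a specific correction; the Miller-Madow adjustment below, which adds (K - 1) / (2N) to the plug-in estimate for K classes and N samples, is one standard choice and is an assumption here, as are the function names.

```python
import math
from collections import Counter

def plugin_entropy(labels):
    """Plug-in (maximum-likelihood) estimate of Shannon entropy in nats.

    This estimator is biased low for small samples, which distorts the
    information-gain criterion used to choose decision-tree splits.
    """
    n = len(labels)
    counts = Counter(labels)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def miller_madow_entropy(labels, num_classes):
    """Bias-corrected entropy: plug-in estimate plus (K - 1) / (2N)."""
    n = len(labels)
    return plugin_entropy(labels) + (num_classes - 1) / (2 * n)
```

A split criterion would then compare parent and child entropies computed with the corrected estimator rather than the raw plug-in one.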
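The confidence-interval-based option selection in the RESOURCE ALLOCATION entry (20140172753) can be sketched as follows: score each option from samples, attach an interval to each score, and commit to an option only when its interval separates from all others. The abstract does not specify the interval; a Hoeffding bound for samples in a bounded range is assumed here, and all names are illustrative.

```python
import math

def hoeffding_radius(n, delta=0.05, value_range=1.0):
    """Half-width of a (1 - delta) confidence interval for the mean of
    n i.i.d. samples lying in an interval of width value_range."""
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def select_confident_best(option_samples, delta=0.05):
    """Return the best-scoring option once its confidence interval is
    disjoint from every other option's interval, else None (meaning:
    allocate more samples before deciding)."""
    stats = {}
    for name, samples in option_samples.items():
        mean = sum(samples) / len(samples)
        stats[name] = (mean, hoeffding_radius(len(samples), delta))
    best = max(stats, key=lambda k: stats[k][0])
    best_mean, best_rad = stats[best]
    for name, (mean, rad) in stats.items():
        if name != best and best_mean - best_rad <= mean + rad:
            return None  # intervals overlap: keep sampling
    return best
```

In a decision-tree training loop, the "options" would be candidate feature tests, and sampling stops as soon as one test is confidently best.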
20120225719 | Gesture Detection and Recognition - A gesture detection and recognition technique is described. In one example, a sequence of data items relating to the motion of a gesturing user is received. A selected set of data items from the sequence are tested against pre-learned threshold values, to determine a probability of the sequence representing a certain gesture. If the probability is greater than a predetermined value, then the gesture is detected, and an action taken. In examples, the tests are performed by a trained decision tree classifier. In another example, the sequence of data items can be compared to pre-learned templates, and the similarity between them determined. If the similarity for a template exceeds a threshold, a likelihood value associated with a future time for a gesture associated with that template is updated. Then, when the future time is reached, the gesture is detected if the likelihood value is greater than a predefined value. | 09-06-2012 |
20120251008 | Classification Algorithm Optimization - Classification algorithm optimization is described. In an example, a classification algorithm is optimized by calculating an evaluation sequence for a set of weighted feature functions that orders the feature functions in accordance with a measure of influence on the classification algorithm. Classification thresholds are determined for each step of the evaluation sequence, which indicate whether a classification decision can be made early and the classification algorithm terminated without evaluating further feature functions. In another example, a classifier applies the weighted feature functions to previously unseen data in the order of the evaluation sequence and determines a cumulative value at each step. The cumulative value is compared to the classification thresholds at each step to determine whether a classification decision can be made early without evaluating further feature functions. | 10-04-2012 |
20130156297 | Learning Image Processing Tasks from Scene Reconstructions - Learning image processing tasks from scene reconstructions is described, where the tasks may include, but are not limited to: image de-noising, image in-painting, optical flow detection, and interest point detection. In various embodiments, training data is generated from a two- or higher-dimensional reconstruction of a scene and from empirical images of the same scene. In an example, a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment where the image capture apparatus has an associated dense reconstruction and camera tracking system. | 06-20-2013 |
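The early-termination idea in the Classification Algorithm Optimization entry (20120251008) can be sketched as follows: evaluate weighted feature functions in order of influence and stop as soon as the features not yet evaluated cannot change the decision. The patent learns per-step thresholds; the conservative worst-case bound used below, and the assumption that every feature function returns a value in [-1, 1], are simplifications for illustration.

```python
def order_by_influence(weighted_features):
    """Order (weight, feature_fn) pairs by decreasing |weight|,
    a simple proxy for each feature's influence on the score."""
    return sorted(weighted_features, key=lambda wf: -abs(wf[0]))

def classify_early_exit(x, ordered, threshold=0.0):
    """Accumulate weight * feature(x) in influence order.

    After each step the remaining features can shift the score by at most
    the sum of their absolute weights (features assumed to lie in [-1, 1]);
    if even that shift cannot cross the threshold, decide early.  Returns
    the decision and the number of features actually evaluated.
    """
    score = 0.0
    slack = sum(abs(w) for w, _ in ordered)
    for evaluated, (w, f) in enumerate(ordered, start=1):
        score += w * f(x)
        slack -= abs(w)
        if score - slack > threshold:
            return True, evaluated   # cannot drop back below threshold
        if score + slack <= threshold:
            return False, evaluated  # cannot rise above threshold
    return score > threshold, len(ordered)
```

With one dominant feature, the decision falls out after a single evaluation, which is exactly the saving the evaluation-sequence ordering is meant to expose.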