Patent application number | Description | Published
--- | --- | ---
20110231786 | Medical Information Generation and Recordation Methods and Apparatus - Computer-implemented medical information recording methods are described. According to one aspect, a computer-implemented medical information recording method includes displaying a graphical user interface including a graphical representation of the human anatomy, accessing user inputs interacting with the graphical representation of the human anatomy, and generating an electronic record comprising data pertaining to the health of a patient using the user inputs interacting with the graphical representation of the human anatomy. | 09-22-2011
20130246097 | Medical Information Systems and Medical Data Processing Methods - Medical information systems and medical data processing methods are described. According to one aspect, a medical information system includes a communications interface configured to receive, from a plurality of medical providers, patient treatment data regarding medical treatment provided by the medical providers with respect to a plurality of patients; and storage circuitry storing the patient treatment data for the plurality of patients of the plurality of medical providers in a database. | 09-19-2013
20140310016 | Medical Treatment Methods - Medical treatment methods are described. According to one aspect, a medical treatment method includes obtaining data values for a plurality of patient characteristics of a subject patient to be treated for a medical condition; using the data values of the patient characteristics of the subject patient, searching treatment results of a plurality of previous patients who were treated for the medical condition using a plurality of different treatment options; and, using the searching, providing information to medical personnel regarding the treatment results of the previous patients for each of the treatment options, the information being usable to assist the medical personnel with treatment of the subject patient for the medical condition. (A sketch of this search step follows this table.) | 10-16-2014
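Application 20140310016 does not specify a matching algorithm, so the following is only a minimal sketch of the kind of search it describes. Everything here is hypothetical: the record fields (`characteristics`, `treatment`, `outcome`), the exact-match similarity metric, and the `min_match` threshold.

```python
from collections import defaultdict

def similarity(subject, previous):
    """Count how many characteristic values match exactly (hypothetical metric)."""
    return sum(1 for k, v in subject.items() if previous.get(k) == v)

def summarize_treatments(subject, prior_patients, min_match=2):
    """Group sufficiently similar prior patients by treatment option and
    report their outcomes, mirroring the per-option summary the abstract
    says is provided to medical personnel."""
    outcomes = defaultdict(list)
    for p in prior_patients:
        if similarity(subject, p["characteristics"]) >= min_match:
            outcomes[p["treatment"]].append(p["outcome"])
    return {t: sum(o) / len(o) for t, o in outcomes.items()}  # mean outcome score

# Hypothetical example data: characteristics and a 0..1 outcome score.
prior = [
    {"characteristics": {"age_band": "60s", "sex": "F", "stage": 2},
     "treatment": "drug_a", "outcome": 0.7},
    {"characteristics": {"age_band": "60s", "sex": "F", "stage": 3},
     "treatment": "drug_b", "outcome": 0.5},
    {"characteristics": {"age_band": "60s", "sex": "M", "stage": 2},
     "treatment": "drug_a", "outcome": 0.8},
]
subject = {"age_band": "60s", "sex": "F", "stage": 2}
print(summarize_treatments(subject, prior))  # {'drug_a': 0.75, 'drug_b': 0.5}
```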
Patent application number | Description | Published
--- | --- | ---
20080309660 | THREE DIMENSIONAL RENDERING OF DISPLAY INFORMATION - Game data is rendered in three dimensions in the GPU of a game console. A left camera view and a right camera view are generated from a single camera view. The left and right camera positions are derived as an offset from a default camera. The focal distance of the left and right cameras is infinity. A game developer does not have to encode dual images into a specific hardware format. When a viewer sees the two slightly offset images, the viewer's brain combines them into a single 3D image, giving the illusion that objects either pop out from or recede into the display screen. In another embodiment, individual, private video is rendered on a single display screen for different viewers: rather than rendering two similar offset images, two completely different images are rendered, allowing each player to view only one of the images. (A sketch of the offset-camera derivation follows this table.) | 12-18-2008
20100253766 | Stereoscopic Device - Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts. | 10-07-2010
20100287485 | Systems and Methods for Unifying Coordinate Systems in Augmented Reality Applications - Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene, and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data such as transformation matrices to the user devices. Images presented to users may be adjusted based on user device locations relative to the coordinate system. | 11-11-2010
20120223967 | Dynamic Perspective Video Window - Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. | 09-06-2012 |
20130057543 | SYSTEMS AND METHODS FOR GENERATING STEREOSCOPIC IMAGES - Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts. | 03-07-2013
20150086108 | IDENTIFICATION USING DEPTH-BASED HEAD-DETECTION DATA - A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer. (A sketch of the depth-to-color resolution step follows this table.) | 03-26-2015
20150235432 | AUGMENTED REALITY COMPUTING WITH INERTIAL SENSORS - Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (i.e., accelerometers and angular rate gyros) attached to a device with a camera or other environment-capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms operating on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment. (A sketch of one possible fusion step follows this table.) | 08-20-2015
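Application 20080309660 derives left and right cameras as lateral offsets from a single default camera, with focal distance at infinity (parallel view axes, no toe-in). A minimal sketch of that derivation, assuming a hypothetical half-interocular distance `half_ipd` and a standard right-handed look-at convention; the abstract does not specify the console's actual matrix conventions.

```python
import numpy as np

def look_at(eye, forward, up):
    """Right-handed view matrix from a camera position and viewing direction."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye   # translate world into camera space
    return view

def stereo_views(eye, forward, up, half_ipd=0.032):
    """Derive left/right cameras as lateral offsets from the default camera.
    Both share the default forward axis, so the view directions stay
    parallel (the 'focal distance at infinity' behavior in the abstract)."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    left = look_at(eye - half_ipd * r, forward, up)
    right = look_at(eye + half_ipd * r, forward, up)
    return left, right

left_v, right_v = stereo_views(np.array([0., 1.6, 5.]),
                               np.array([0., 0., -1.]),
                               np.array([0., 1., 0.]))
```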
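Application 20150086108 resolves a head region of the intensity video from the head's three-dimensional location in depth video. Under a pinhole-camera assumption, that resolution step is a projection plus a perspective-scaled crop. The intrinsics (`fx`, `fy`, `cx`, `cy`) and the fixed head diameter below are hypothetical, and the sketch ignores the depth-to-color extrinsic calibration a real system would need.

```python
def head_region(head_xyz, fx=525.0, fy=525.0, cx=320.0, cy=240.0,
                head_diameter_m=0.22):
    """Project a 3D head center (meters, camera frame) into pixel
    coordinates and return a square crop sized by perspective scaling."""
    x, y, z = head_xyz
    u, v = fx * x / z + cx, fy * y / z + cy   # pinhole projection
    half = 0.5 * head_diameter_m * fx / z     # apparent radius in pixels
    return (int(u - half), int(v - half), int(u + half), int(v + half))

# Head centered 1.5 m in front of the camera, slightly left and up.
print(head_region((-0.1, -0.2, 1.5)))  # crop box to hand to a face recognizer
```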
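Application 20150235432 combines inertial measurements with vision-derived pose estimates but does not name a fusion algorithm. A complementary filter is one plausible shape for it; this is a deliberately simplified single-axis sketch, and the blend factor `alpha` and update rates are assumptions.

```python
def fuse_orientation(theta, gyro_rate, dt, vision_theta=None, alpha=0.02):
    """Integrate the gyro for a fast, drifting estimate, then nudge it
    toward the slower, drift-free vision estimate when one arrives."""
    theta = theta + gyro_rate * dt          # dead-reckon from the gyro
    if vision_theta is not None:            # a vision frame was processed
        theta = (1 - alpha) * theta + alpha * vision_theta
    return theta

theta = 0.0
for step in range(5):
    vision = 0.05 if step % 2 == 0 else None   # vision at half the gyro rate
    theta = fuse_orientation(theta, gyro_rate=0.1, dt=0.01, vision_theta=vision)
```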
Patent application number | Description | Published
--- | --- | ---
20100197390 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source an observed depth image of a scene including the target. Each pixel of the observed depth image is labeled as either a foreground pixel belonging to the target or a background pixel not belonging to the target. Each foreground pixel is labeled with body part information indicating a likelihood that that foreground pixel belongs to one or more body parts of the target. The target is modeled with a skeleton including a plurality of skeletal points, each skeletal point including a three-dimensional position derived from body part information of one or more foreground pixels. (A sketch of the pipeline's front end follows this table.) | 08-05-2010
20100313133 | AUDIO AND POSITION CONTROL OF USER INTERFACE - A method is provided for using a wireless controller to interact with a user interface presented on a display. The method includes receiving an audio signal and a position signal from the wireless controller. The audio signal is based on an audio input applied to the wireless controller, while the position signal is based on a position input applied to the wireless controller. The method includes selecting a user interface item displayed on the display, based on the audio signal and the position signal. One or more position signals from the wireless controller may also be received and processed to cause navigation of the user interface to highlight a user interface item for selection. | 12-09-2010 |
20120157207 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 06-21-2012
20130028476 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 01-31-2013
20130241833 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 09-19-2013
20140078141 | POSE TRACKING PIPELINE - A method of tracking a subject includes receiving from a source a depth image of a scene including the subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that image the subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the subject as a model including a plurality of shapes. | 03-20-2014 |
20140119640 | SCENARIO-SPECIFIC BODY-PART TRACKING - A human subject is tracked within a scene of an observed depth image supplied to a general-purpose body-part tracker. The general-purpose body-part tracker is retrained for a specific scenario. The general-purpose body-part tracker was previously trained using supervised machine learning to identify one or more general-purpose parameters to be used by the general-purpose body-part tracker to track a human subject. During a retraining phase, scenario data is received that represents a human training-subject performing an action specific to a particular scenario. One or more special-purpose parameters are identified from the scenario data. The special-purpose parameters selectively augment or replace one or more general-purpose parameters when the general-purpose body-part tracker is used to track a human subject performing the action specific to the particular scenario. (A sketch of the parameter selection step follows this table.) | 05-01-2014
20150029097 | SCENARIO-SPECIFIC BODY-PART TRACKING - A human subject is tracked within a scene of an observed depth image supplied to a general-purpose body-part tracker. The general-purpose body-part tracker is retrained for a specific scenario. The general-purpose body-part tracker was previously trained using supervised machine learning to identify one or more general-purpose parameters to be used by the general-purpose body-part tracker to track a human subject. During a retraining phase, scenario data is received that represents a human training-subject performing an action specific to a particular scenario. One or more special-purpose parameters are identified from the scenario data. The special-purpose parameters selectively augment or replace one or more general-purpose parameters when the general-purpose body-part tracker is used to track a human subject performing the action specific to the particular scenario. | 01-29-2015
20150145860 | POSE TRACKING PIPELINE - A method of tracking a subject includes receiving from a source a depth image of a scene including the subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that image the subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the subject as a model including a plurality of shapes. | 05-28-2015 |
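The pose-tracking family above (20100197390 through 20150145860) begins by identifying which depth pixels image the subject, then derives a model from those pixels. A minimal sketch of that front end, assuming a hypothetical fixed depth band in place of the learned per-pixel classification the abstracts imply, and collapsing the skeletal-modeling stage to a single back-projected centroid.

```python
import numpy as np

def segment_subject(depth, near=0.8, far=2.5):
    """Label each pixel foreground (subject) or background using a depth band.
    Real pipelines use learned per-pixel classification; this is a stand-in."""
    return (depth > near) & (depth < far)

def centroid_3d(depth, mask, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Back-project the identified foreground pixels and average them: the
    crudest possible 'skeletal point' derived from identified pixels."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

depth = np.full((480, 640), 4.0)        # background at 4 m
depth[100:400, 250:390] = 1.5           # subject-shaped block at 1.5 m
mask = segment_subject(depth)
print(centroid_3d(depth, mask))         # approximate torso position in meters
```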
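Applications 20140119640 and 20150029097 selectively augment or replace general-purpose tracker parameters with special-purpose ones. The selection step alone can be sketched directly; the parameter names below are hypothetical, and the retraining that produces the special-purpose values is out of scope.

```python
def apply_scenario(general, special, replace=True):
    """Selectively merge special-purpose parameters into the general-purpose
    set: new keys always augment; existing keys are replaced only if allowed."""
    merged = dict(general)
    for key, value in special.items():
        if key not in merged or replace:
            merged[key] = value
    return merged

# Hypothetical parameters for a seated golf-swing scenario.
general = {"joint_priors": "upright", "min_confidence": 0.6}
special = {"joint_priors": "seated", "swing_template": "golf_v2"}
print(apply_scenario(general, special))
# {'joint_priors': 'seated', 'min_confidence': 0.6, 'swing_template': 'golf_v2'}
```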