Molyneaux
Bradley J. Molyneaux, Cambridge, MA US
Patent application number | Description | Published |
---|---|---|
20120020929 | METHODS AND COMPOSITIONS RELATING TO NEURONAL CELL AND TISSUE DIFFERENTIATION - The invention relates to methods for isolating and purifying specific types of neurons, such as cortical or other projection neurons including corticospinal motor neurons, subcerebral projection neurons, and callosal projection neurons. The invention also relates to genes that are specific for particular neuronal subtypes, and the use of such genes in genetic/molecular control of cell development. The isolated cells and subtype-specific genes also have uses in diagnostics, therapeutics, and screening assays for pharmaceutical molecules. | 01-26-2012 |
Bradley J. Molyneaux, Boston, MA US
Patent application number | Description | Published |
---|---|---|
20120251506 | METHODS AND COMPOSITIONS RELATING TO NEURONAL CELL AND TISSUE DIFFERENTIATION - The invention relates to methods for isolating and purifying specific types of neurons, such as cortical or other projection neurons including corticospinal motor neurons, subcerebral projection neurons, and callosal projection neurons. The invention also relates to genes that are specific for particular neuronal subtypes, and the use of such genes in genetic/molecular control of cell development. The isolated cells and subtype-specific genes also have uses in diagnostics, therapeutics, and screening assays for pharmaceutical molecules. | 10-04-2012 |
Dave Molyneaux, Kirkland, WA US
Patent application number | Description | Published |
---|---|---|
20150123965 | CONSTRUCTION OF SYNTHETIC AUGMENTED REALITY ENVIRONMENT - Embodiments are disclosed that relate to producing a synthetic environmental model derived from a three dimensional representation of an environment, and rendering images from the model. For example, one disclosed embodiment provides a method including detecting a trigger to build the synthetic environmental model utilizing the three dimensional representation of the environment, and, in response to the trigger, obtaining a set of synthetic image elements for use in constructing the synthetic environmental model. The method further includes fitting one or more elements from the set of synthetic image elements to the three dimensional representation of the environment according to a set of rules to produce the synthetic environmental model, and rendering an image from the synthetic environmental model for display, the image showing the one or more elements from the set of synthetic image elements replacing real-world topography in the environment. | 05-07-2015 |
David Molyneaux, Cambridge GB
Patent application number | Description | Published |
---|---|---|
20110085705 | DETECTION OF BODY AND PROPS - A system and method for detecting and tracking targets including body parts and props is described. In one aspect, the disclosed technology acquires one or more depth images, generates one or more classification maps associated with one or more body parts and one or more props, tracks the one or more body parts using a skeletal tracking system, tracks the one or more props using a prop tracking system, and reports metrics regarding the one or more body parts and the one or more props. In some embodiments, feedback may occur between the skeletal tracking system and the prop tracking system. | 04-14-2011 |
20130156297 | Learning Image Processing Tasks from Scene Reconstructions - Learning image processing tasks from scene reconstructions is described where the tasks may include but are not limited to: image de-noising, image in-painting, optical flow detection, and interest point detection. In various embodiments training data is generated from a two- or higher-dimensional reconstruction of a scene and from empirical images of the same scene. In an example a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment where the image capture apparatus has an associated dense reconstruction and camera tracking system. | 06-20-2013 |
20130244782 | REAL-TIME CAMERA TRACKING USING DEPTH MAPS - Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used. | 09-19-2013 |
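Application 20130244782 above relies on an iterative closest point (ICP) process with projective data association and a point-to-plane error metric. As a purely illustrative sketch, not code from the application, the linearized point-to-plane step can be written in a few lines of NumPy; the small-angle approximation and the dense CPU least-squares solve are assumptions of this sketch, standing in for the GPU optimization the abstract mentions.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One linearized point-to-plane ICP update.

    Minimizes sum_i (((R @ s_i + t) - d_i) . n_i)^2 under the small-angle
    approximation R @ s ~ s + r x s, returning the 6-vector
    (rx, ry, rz, tx, ty, tz) of rotation and translation parameters.
    """
    A = np.hstack([np.cross(src, normals), normals])   # N x 6 Jacobian rows
    b = np.einsum('ij,ij->i', normals, dst - src)      # per-point residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

In a full tracker this step would run inside the ICP loop after projective data association has paired each source point with a destination point and surface normal.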
David Molyneaux, Oldham GB
Patent application number | Description | Published |
---|---|---|
20110210917 | User Interface Control Using a Keyboard - User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates. | 09-01-2011 |
20120113140 | Augmented Reality with Direct User Interaction - Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects. | 05-10-2012 |
20120113223 | User Interaction in Augmented Reality - Techniques for user-interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and real first and second objects controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a real user's hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees-of-freedom, and enables interaction between the hand and the virtual object. | 05-10-2012 |
20120117514 | Three-Dimensional User Interaction - Three-dimensional user interaction is described. In one example, a virtual environment having virtual objects and a virtual representation of a user's hand with digits formed from jointed portions is generated, a point on each digit of the user's hand is tracked, and the virtual representation's digits are controlled to correspond to those of the user. An algorithm is used to calculate positions for the jointed portions, and the physical forces acting between the virtual representation and objects are simulated. In another example, an interactive computer graphics system comprises a processor that generates the virtual environment, a display device that displays the virtual objects, and a camera that captures images of the user's hand. The processor uses the images to track the user's digits, applies the algorithm, and controls the display device to update the virtual objects on the display device by simulating the physical forces. | 05-10-2012 |
20120139897 | Tabletop Display Providing Multiple Views to Users - A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state when images are displayed and a transparent state when imaging beyond the surface can be performed. The device may form part of a tabletop comprising a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed. | 06-07-2012 |
20120194516 | Three-Dimensional Environment Reconstruction - Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value. | 08-02-2012 |
20120194517 | Using a Three-Dimensional Environment Model in Gameplay - Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates. | 08-02-2012 |
20120194644 | Mobile Camera Localization Using Depth Maps - Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment. | 08-02-2012 |
20120194650 | Reducing Interference Between Multiple Infra-Red Depth Cameras - Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment. | 08-02-2012 |
20120195471 | Moving Object Segmentation Using Depth Images - Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity. | 08-02-2012 |
20120196679 | Real-Time Camera Tracking Using Depth Maps - Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used. | 08-02-2012 |
20120306850 | DISTRIBUTED ASYNCHRONOUS LOCALIZATION AND MAPPING FOR AUGMENTED REALITY - A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations. | 12-06-2012 |
20120306876 | GENERATING COMPUTER MODELS OF 3D OBJECTS - Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image is tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user before the depth camera. Hands that occlude the object are integrated out of the model as they do not move in sync with the object due to re-gripping. | 12-06-2012 |
20130169626 | DISTRIBUTED ASYNCHRONOUS LOCALIZATION AND MAPPING FOR AUGMENTED REALITY - A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations. | 07-04-2013 |
20130290910 | USER INTERFACE CONTROL USING A KEYBOARD - User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates. | 10-31-2013 |
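Application 20120194516 in the table above updates a stored value per voxel from a depth image, given the camera location and orientation. The following is an illustrative sketch only, not code from the application: it replaces the per-voxel GPU thread scheme with vectorized NumPy, uses a truncated signed distance as one common choice of "stored value", and all parameter names are invented.

```python
import numpy as np

def integrate_depth(tsdf, weights, centers, depth, K, world_to_cam, trunc=0.1):
    """Fuse one depth frame into a flat TSDF volume by a running average.

    centers: (N, 3) voxel centers in world space; tsdf, weights: (N,)
    arrays updated in place; K: 3x3 pinhole intrinsics; world_to_cam:
    4x4 camera pose (world-to-camera transform).
    """
    pts = centers @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    x, y, z = pts.T
    h, w = depth.shape
    front = z > 0                                  # voxels in front of the camera
    u = np.zeros_like(z, dtype=int)
    v = np.zeros_like(z, dtype=int)
    u[front] = np.round(K[0, 0] * x[front] / z[front] + K[0, 2])
    v[front] = np.round(K[1, 1] * y[front] / z[front] + K[1, 2])
    valid = front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sdf = np.zeros_like(z)
    sdf[valid] = depth[v[valid], u[valid]] - z[valid]   # distance along the ray
    # Update only voxels in front of, or just behind, the observed surface.
    upd = valid & (sdf > -trunc)
    sdf = np.clip(sdf, -trunc, trunc)
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / (weights[upd] + 1.0)
    weights[upd] += 1.0
```

Each call plays the role of one iteration of the per-frame update loop; the abstract's design assigns one execution thread per voxel column, which the vectorized form here only approximates.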
David Molyneaux, Kirkland, WA US
Patent application number | Description | Published |
---|---|---|
20130342527 | AVATAR CONSTRUCTION USING DEPTH CAMERA - A method for constructing an avatar of a human subject includes acquiring a depth map of the subject, obtaining a virtual skeleton of the subject based on the depth map, and harvesting from the virtual skeleton a set of characteristic metrics. Such metrics correspond to distances between predetermined points of the virtual skeleton. In this example method, the characteristic metrics are provided as input to an algorithm trained using machine learning. The algorithm may be trained using a human model in a range of poses, and a range of human models in a single pose, to output a virtual body mesh as a function of the characteristic metrics. The method also includes constructing a virtual head mesh distinct from the virtual body mesh, with facial features resembling those of the subject, and connecting the virtual body mesh to the virtual head mesh. | 12-26-2013 |
20140045593 | VIRTUAL JOINT ORIENTATION IN VIRTUAL SKELETON - A method of modeling a human subject includes receiving from a depth camera a depth map of a scene including the human subject. The human subject is modeled with a virtual skeleton including a plurality of virtual joints. Each virtual joint is defined with a three-dimensional position. Furthermore, each of the plurality of virtual joints is further defined with three orthonormal vectors. The three orthonormal vectors for each virtual joint provide an orientation of that virtual joint at the three-dimensional position defined for that virtual joint. | 02-13-2014 |
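Application 20140045593 above defines each virtual joint by a 3D position plus three orthonormal vectors giving its orientation. As a hedged illustration of how such a frame can be built (this is not the application's method, and the choice of bone direction plus reference "up" vector is an assumption of the sketch):

```python
import numpy as np

def joint_basis(joint_pos, child_pos, up=np.array([0.0, 1.0, 0.0])):
    """Three orthonormal vectors orienting a joint: the first axis points
    along the bone toward the child joint; the other two complete a
    right-handed frame using a reference 'up' direction."""
    x = child_pos - joint_pos
    x = x / np.linalg.norm(x)
    z = np.cross(x, up)
    if np.linalg.norm(z) < 1e-8:          # bone parallel to 'up': use another reference
        z = np.cross(x, np.array([1.0, 0.0, 0.0]))
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z])            # rows are the three orthonormal vectors
```

A skeleton tracker would evaluate this once per joint, giving every joint both a position and an orientation as the abstract describes.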
David Molyneaux, San Jose, CA US
Patent application number | Description | Published |
---|---|---|
20150243013 | TRACKING OBJECTS DURING PROCESSES - Embodiments are disclosed that relate to tracking one or more objects during a process that utilizes the objects. For example, one embodiment provides a method for monitoring performance of a process involving one or more objects, wherein the method includes receiving a set of rules defining one or more portions of the process and receiving object identification information regarding the one or more objects. The method further includes, for a selected portion of the process, receiving image information of a physical scene, identifying from the image information and the object identification information an operation performed with an identified object in the physical scene, and taking an action based upon whether the operation satisfies a rule of the set of rules associated with the selected portion of the process. | 08-27-2015 |
David A. Molyneaux, Gainesville, FL US
Patent application number | Description | Published |
---|---|---|
20090266887 | Method and Apparatus for Ferrous Object and/or Magnetic Field Detection for MRI Safety - A method and apparatus for ferrous object and/or magnetic field detection are provided. Embodiments can improve magnetic resonance imaging (MRI) safety and increase the safety of MRI facilities. Embodiments can detect a given magnetic field strength around an MRI machine and alert users to the field's presence. In an embodiment, the magnetic field warning system can rely on a single badge that warns its user. In another embodiment, the badge can utilize an RFID system. The RFID system can turn the badge on when it enters the MRI room and off when it leaves the MRI room. In another embodiment, a badge with a rechargeable battery and charger can be utilized with or without an RFID tag. The subject badges or other detection devices can be worn by a person, located on or near a ferrous object, embedded in clothing, or located in other positions convenient to a user. | 10-29-2009 |
David G. Molyneaux, Cambridge GB
Patent application number | Description | Published |
---|---|---|
20120306734 | Gesture Recognition Techniques - In one or more implementations, a static geometry model is generated, from one or more images of a physical environment captured using a camera, using one or more static objects to model corresponding one or more objects in the physical environment. Interaction of a dynamic object with at least one of the static objects is identified by analyzing at least one image and a gesture is recognized from the identified interaction of the dynamic object with the at least one of the static objects to initiate an operation of the computing device. | 12-06-2012 |
20140247212 | Gesture Recognition Techniques - In one or more implementations, a static geometry model is generated, from one or more images of a physical environment captured using a camera, using one or more static objects to model corresponding one or more objects in the physical environment. Interaction of a dynamic object with at least one of the static objects is identified by analyzing at least one image and a gesture is recognized from the identified interaction of the dynamic object with the at least one of the static objects to initiate an operation of the computing device. | 09-04-2014 |
David Geoffrey Molyneaux, Kirkland, WA US
Patent application number | Description | Published |
---|---|---|
20130249811 | CONTROLLING A DEVICE WITH VISIBLE LIGHT - A user may control a user device utilizing visible light transmitted from a handheld control device. Upon a user actuation associated with the control device, the control device may transmit control information to one or more sensors associated with the user device. The control device may project a visual user interface on a display surface, whereby the visual user interface may represent commands and/or operations that may be performed by the user device. The user may also overlay the visual user interface, or components within the visual user interface, on the one or more sensors of the user device for transmitting a particular command to the user device. By virtue of the handheld device and the projected user interface, the user may both view the operations that can be performed with respect to the user device and cause the user device to perform those operations. | 09-26-2013 |
David Geoffrey Molyneaux, Oldham GB
Patent application number | Description | Published |
---|---|---|
20110210915 | Human Body Pose Estimation - Techniques for human body pose estimation are disclosed herein. Images such as depth images, silhouette images, or volumetric images may be generated and pixels or voxels of the images may be identified. The techniques may process the pixels or voxels to determine a probability that each pixel or voxel is associated with a segment of a body captured in the image or to determine a three-dimensional representation for each pixel or voxel that is associated with a location on a canonical body. These probabilities or three-dimensional representations may then be utilized along with the images to construct a posed model of the body captured in the image. | 09-01-2011 |
James Michael Molyneaux, Camheen IE
Patent application number | Description | Published |
---|---|---|
20120098593 | METHOD OF TRIMMING A THIN FILM RESISTOR, AND AN INTEGRATED CIRCUIT INCLUDING TRIMMABLE THIN FILM RESISTORS - Apparatus and methods of trimming resistors are disclosed. In one embodiment, a method of controlling the PCR of a thin film resistor is provided. The method includes applying a first current to a resistor so as to alter a property of the resistor, and measuring the property of the resistor. Applying the first current and measuring the property of the resistor can be repeated until the PCR of the resistor is within an acceptable tolerance of a desired value for the property of the resistor. | 04-26-2012 |
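Application 20120098593 above describes trimming by repeatedly applying a current and re-measuring until the property is within tolerance. A minimal sketch of that feedback loop follows; the `SimResistor` stand-in, its 1%-per-pulse behaviour, and all values are invented for demonstration and are not from the application.

```python
def trim_to_tolerance(measure, apply_pulse, target, tol, max_pulses=100):
    """Repeatedly apply a trim current and re-measure until the measured
    property is within 'tol' of 'target', as in the abstract's trim loop."""
    for pulses in range(1, max_pulses + 1):
        apply_pulse()
        value = measure()
        if abs(value - target) <= tol:
            return value, pulses
    raise RuntimeError("property did not reach tolerance within max_pulses")


class SimResistor:
    """Toy stand-in for a real thin film resistor: each current pulse
    lowers the measured property by about 1% (invented behaviour)."""
    def __init__(self, value):
        self.value = value

    def pulse(self):
        self.value *= 0.99
```

In hardware, `apply_pulse` and `measure` would drive the trim current source and the measurement instrument; the loop structure is the same.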
Justin Molyneaux, San Francisco, CA US
Patent application number | Description | Published |
---|---|---|
20120172132 | Content Synchronization - Described are methods, systems, and computer program products for synchronizing game content access to the broadcast of particular television content. A game, wherein a portion of the game is locked and inaccessible to a player of the game, is provided via a gaming network comprising one of an Internet network, a cable television network, or an interactive television network. An identifier is provided to a television program for broadcast, which is then viewed by the player. The player inputs the identifier, the identifier is validated, and, if the identifier is valid, the locked portion of the game is unlocked and made accessible to the player. | 07-05-2012 |
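Application 20120172132 above validates a broadcast identifier and, if valid, unlocks the locked portion of the game. A minimal sketch of that validate-then-unlock step follows; the dictionary/set data model and all names and codes here are invented for illustration.

```python
def unlock_content(game_state, identifier, valid_identifiers):
    """If the player-entered identifier validates, move the associated
    locked portion of the game into the unlocked set."""
    if identifier in valid_identifiers:
        portion = valid_identifiers[identifier]
        game_state["locked_levels"] -= portion
        game_state["unlocked_levels"] |= portion
        return True
    return False
```

A deployment would also need to tie identifiers to broadcast schedules and expire them, which this sketch omits.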
Kevin L. Molyneaux, Summerside CA
Patent application number | Description | Published |
---|---|---|
20090132277 | SYSTEM AND METHOD FOR MEDICAL PROCEDURE CODE SCHEDULING - A method for scheduling procedure codes for a healthcare provider, the method including tracking over time prioritization data for each procedure code of a plurality of procedure codes, selecting a part of a body for which one or more procedure codes of the plurality of procedure codes are desired to be scheduled, providing a prioritized list of procedure codes comprising a group of procedure codes from the plurality of procedure codes which are associated with the selected part of the body, wherein the list of procedure codes is ranked based on statistical analysis of the corresponding prioritization data for each of the procedure codes so that the list of procedure codes is arranged in descending order beginning with a procedure code deemed most likely to be selected, and selecting one or more procedure codes from the list of procedure codes for scheduling. | 05-21-2009 |
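Application 20090132277 above ranks a body part's procedure codes by statistical analysis of tracked prioritization data, most likely selection first. A minimal sketch, assuming the simplest such statistic (selection frequency); the procedure codes shown are invented examples, not from the application.

```python
from collections import Counter

def prioritized_codes(history, body_part):
    """Return the procedure codes recorded for 'body_part', most
    frequently selected first (ties broken lexicographically for
    determinism).

    history: iterable of (body_part, procedure_code) selections
    tracked over time.
    """
    counts = Counter(code for part, code in history if part == body_part)
    return sorted(counts, key=lambda code: (-counts[code], code))
```

A scheduler would present this descending list so the most likely code is offered first, as the abstract describes.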
Robert F. Molyneaux, Austin, TX US
Patent application number | Description | Published |
---|---|---|
20090013224 | INTEGRATED CIRCUIT WITH BLOCKING PIN TO COORDINATE ENTRY INTO TEST MODE - An integrated circuit (IC) including a blocking pin. An IC may include state logic, a test control unit configured to coordinate access by external circuitry to operating state of the state logic during a test mode, and interface pins configured to couple the integrated circuit to the external circuitry. Shared interface pins may provide input signals to the test control unit during the test mode of operation and may perform distinct I/O functions during normal mode operation. A blocking interface pin, when asserted by external circuitry during normal mode operation, may force test signals derived from at least a portion of the shared interface pins by the test control unit into respective quiescent states, such that subsequent to assertion of the blocking pin, the integrated circuit is operable to enter the test mode of operation from the normal mode of operation without resetting operating state of the state logic. | 01-08-2009 |
20100037111 | METHOD AND APPARATUS FOR TESTING DELAY FAULTS - An apparatus or method for testing an SOC processor device may minimize interference that is caused by interfacing a comparatively low-speed testing device with the high-speed processor during testing. Implementations may gate the input clock signal at the clock input to each domain of the SOC processor device rather than at the output of the PLL clock. The gating of the clock signal to each domain may then be controlled by clock stop signals generated by the testing device and sent to the individual domains of the processor device. Gating the clock signal at the domain may provide a more natural state for the circuit during testing as well as allow the test control unit to test the different domains of the SOC device individually. | 02-11-2010 |
Theodore Molyneaux, Stroudsburg, PA US
Patent application number | Description | Published |
---|---|---|
20100006615 | Body bow rest and carrier - A bow holder and carrier provides an adjustable holster coupled to a support member that partially encircles and lies against the user's waist and can be aligned symmetrically with the center line of the body. The adjustable holster is configured to accommodate differently sized compound bow types by reconfiguring the adjustable rests, stops, supports, and guides mounted within the holster. A slide track assembly provides a means to frictionally insert the adjustable holster onto, and remove it from, the support member for ease of storage. | 01-14-2010 |