Patent application number | Description | Published |
20080288666 | Embedded System Development Platform - A modular development platform is described which enables creation of reliable, compact, physically robust and power efficient embedded device prototypes. The platform consists of a base module, which holds a processor, and one or more peripheral modules, each having a peripheral device and an interface element. The modules can be electrically and physically connected together. The base module communicates with the peripheral modules using packets of data with an addressing portion which identifies the peripheral module that is the intended recipient of the data packet (a sketch of such framing follows this table). | 11-20-2008 |
20080288919 | Encoding of Symbol Table in an Executable - A method of compiling source code is described in which symbol information is retained in the optimized object code and the executable file. This symbol information is retained in the form of function calls which return memory locations and enable an application to query where variable or function data is stored and then access that variable or function data. | 11-20-2008 |
20090139778 | User Input Using Proximity Sensing - A device is described which enables users to interact with software running on the device through gestures made in an area adjacent to the device. In an embodiment, a portable computing device has proximity sensors arranged on an area of its surface which is not a display, such as on the sides of the device. These proximity sensors define an area of interaction adjacent to the device. User gestures in this area of interaction are detected by creating sensing images from the data received from each sensor and then analyzing sequences of these images (a sketch of this analysis follows the table). The detected gestures may be mapped to particular inputs to a software program running on the device, and therefore a user can control the operation of the program through gestures. | 06-04-2009 |
20090195402 | Unique Identification of Devices Using Color Detection - Methods and apparatus for uniquely identifying wireless devices in close physical proximity are described. When two wireless devices are brought into close proximity, one of the devices displays an optical indicator, such as a light pattern. This device then sends messages to other devices which are within wireless range to cause them to use any light sensor to detect a signal. In an embodiment, the light sensor is a camera and the detected signal is an image captured by the camera. Each device then sends data identifying what was detected back to the device displaying the pattern. By analyzing this data, the first device can determine which other device detected the indicator that it displayed and therefore determine that this device is in close physical proximity to it. In an example, the first device is an interactive surface arranged to identify the wireless addresses of devices which are placed on the surface. | 08-06-2009 |
20090219253 | Interactive Surface Computer with Switchable Diffuser - An interactive surface computer with a switchable diffuser layer is described. The switchable layer has two states: a transparent state and a diffusing state. When it is in its diffusing state, a digital image is displayed and when the layer is in its transparent state, an image can be captured through the layer. In an embodiment, a projector is used to project the digital image onto the layer in its diffusing state and optical sensors are used for touch detection. | 09-03-2009 |
20090276734 | Projection of Images onto Tangible User Interfaces - A surface computing device is described which has a surface which can be switched between transparent and diffuse states. When the surface is in its diffuse state, an image can be projected onto the surface and when the surface is in its transparent state, an image can be projected through the surface and onto an object. In an embodiment, the image projected onto the object is redirected onto a different face of the object, so as to provide an additional display surface or to augment the appearance of the object. In another embodiment, the image may be redirected onto another object. | 11-05-2009 |
20100145920 | Digital Media Retrieval and Display - Retrieval and display of digital media items is described. For example, the digital media items may be photographs, videos, audio files, emails, text documents or parts of these. In an embodiment a dedicated apparatus having a touch display screen is provided in a form designed to look like a domestic fish tank. In an embodiment graphical animated agents are depicted on the display as fish whose motion varies according to at least one behavior parameter which is pseudo random. In embodiments, the agents have associated search criteria and when a user selects one or more agents the associated search criteria are used in a retrieval operation to retrieve digital media items from a store. In some embodiments media items are communicated between the apparatus and a portable communications device using a communications link established by tapping the portable device against the media retrieval and display apparatus. | 06-10-2010 |
20100149090 | GESTURES, INTERACTIONS, AND COMMON GROUND IN A SURFACE COMPUTING ENVIRONMENT - Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty about the intention of gestures for a single device or across multiple devices, the system can communicate that uncertainty, independently or in coordination, or engage users through signaling and/or information gathering. | 06-17-2010 |
20100149182 | Volumetric Display System Enabling User Interaction - A volumetric display system which enables user interaction is described. In an embodiment, the system consists of a volumetric display and an optical system. The volumetric display creates a 3D light field of an object to be displayed and the optical system creates a copy of the 3D light field in a position away from the volumetric display and where a user can interact with the image of the object displayed. In an embodiment, the optical system involves a pair of parabolic mirror portions. | 06-17-2010 |
20100182220 | SURFACE PUCK - An image orientation system is provided wherein images (rays of light) are projected to a user based on the user's field of view or viewing angle. As the rays of light are projected, streams of air can be produced that bend or focus the rays of light toward the user's field of view. The streams of air can be cold air, hot air, or combinations thereof. Further, an image receiver can be utilized to receive the produced image/rays of light directly in line with the user's field of view. The image receiver can be a wearable device, such as a head mounted display. | 07-22-2010 |
20100302199 | FERROMAGNETIC USER INTERFACES - Ferromagnetic user interfaces are described. In embodiments, user interface devices are described that can detect the location of movement on a user-touchable portion by sensing movement of a ferromagnetic material. In some embodiments sensors are arranged in a two dimensional array, and the user interface device can determine the location of the movement in a plane substantially parallel to the two-dimensional array and the acceleration of movement substantially perpendicular to the two-dimensional array. In other embodiments, user interface devices are described that can cause a raised surface region to be formed on a ferrofluid layer of a user-touchable portion, which is detectable by the touch of a user. Embodiments describe how the raised surface region can be moved on the ferrofluid layer. Embodiments also describe how the raised surface region can be caused to vibrate. | 12-02-2010 |
20100315335 | Pointing Device with Independently Movable Portions - A pointing device with independently movable portions is described. In an embodiment, a pointing device comprises a base unit and a satellite portion. The base unit is arranged to be located under a palm of a user's hand and be movable over a supporting surface. The satellite portion is arranged to be located under a digit of the user's hand and be independently movable over the supporting surface relative to the base unit. In embodiments, data from at least one sensing device is read, and movement of both the base unit and the independently movable satellite portion of the pointing device is calculated from the data. The movement of the base unit and the satellite portion is analyzed to detect a user gesture. | 12-16-2010 |
20100315336 | Pointing Device Using Proximity Sensing - A pointing device using proximity sensing is described. In an embodiment, a pointing device comprises a movement sensor and a proximity sensor. The movement sensor generates a first data sequence relating to sensed movement of the pointing device relative to a surface. The proximity sensor generates a second data sequence relating to sensed movement relative to the pointing device of one or more objects in proximity to the pointing device. In embodiments, data from the movement sensor of the pointing device is read and the movement of the pointing device relative to the surface is determined. Data from the proximity sensor is also read, and a sequence of sensor images of one or more objects in proximity to the pointing device are generated. The sensor images are analyzed to determine the movement of the one or more objects relative to the pointing device. | 12-16-2010 |
20110080341 | Indirect Multi-Touch Interaction - Indirect multi-touch interaction is described. In an embodiment, a user interface is controlled using a cursor and a touch region comprising a representation of one or more digits of a user. The cursor and the touch region are moved together in the user interface in accordance with data received from a cursor control device, such that the relative location of the touch region and the cursor is maintained. The representations of the digits of the user are moved in the touch region in accordance with data describing movement of the user's digits. In another embodiment, a user interface is controlled in a first mode of operation using an aggregate cursor, and switched to a second mode of operation in which the aggregate cursor is divided into separate portions, each of which can be independently controlled by the user. | 04-07-2011 |
20110121950 | UNIQUE IDENTIFICATION OF DEVICES USING COLOR DETECTION - Methods and apparatus for uniquely identifying wireless devices in close physical proximity are described. When two wireless devices are brought into close proximity, one of the devices displays an optical indicator, such as a light pattern. This device then sends messages to other devices which are within wireless range to cause them to use any light sensor to detect a signal. In an embodiment, the light sensor is a camera and the detected signal is an image captured by the camera. Each device then sends data identifying what was detected back to the device displaying the pattern. By analyzing this data, the first device can determine which other device detected the indicator that it displayed and therefore determine that this device is within close physical proximity. In an example, the first device is an interactive surface arranged to identify the wireless addresses of devices which are placed on the surface. | 05-26-2011 |
20110210917 | User Interface Control Using a Keyboard - User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface (a sketch of this mapping follows the table), and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates. | 09-01-2011 |
20110252163 | Integrated Development Environment for Rapid Device Development - An integrated development environment for rapid device development is described. In an embodiment the integrated development environment provides a number of different views to a user which each relate to a different aspect of device design, such as hardware configuration, software development and physical design. The device, which may be a prototype device, is formed from a number of objects which are selected from a database and the database stores multiple data types for each object, such as a 3D model, software libraries and code-stubs for the object and hardware parameters. A user can design the device by selecting different views in any order and can switch between views as they choose. Changes which are made in one view, such as the selection of a new object, are fed into the other views. | 10-13-2011 |
20120038891 | Projection of Images onto Tangible User Interfaces - The techniques described herein provide a surface computing device that includes a surface layer configured to be in a transparent state and a diffuse state. In the diffuse state, an image can be projected onto the surface. In the transparent state, an image can be projected through the surface. | 02-16-2012 |
20120075256 | Touch Sensing Using Shadow and Reflective Modes - A touch panel is described which uses at least one infrared source and an array of infrared sensors to detect objects which are in contact with, or close to, the touchable surface of the panel. The panel may be operated in both reflective and shadow modes, in arbitrary per-pixel combinations which change over time. For example, if the detected level of ambient infrared exceeds a threshold, shadow mode is used for detection of touch events over some or all of the display; if the threshold is not exceeded, reflective mode is used (a sketch of this per-pixel selection follows the table). | 03-29-2012 |
20120113140 | Augmented Reality with Direct User Interaction - Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects. | 05-10-2012 |
20120113223 | User Interaction in Augmented Reality - Techniques for user interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and real first and second objects controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a real user's hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees of freedom, and enables interaction between the hand and the virtual object. | 05-10-2012 |
20120139841 | User Interface Device With Actuated Buttons - A user interface device with actuated buttons is described. In an embodiment, the user interface device comprises two or more buttons and the motion of the buttons is controlled by actuators under software control such that their motion is inter-related. The position or motion of the buttons may provide a user with feedback about the current state of a software program they are using or provide them with enhanced user input functionality. In another embodiment, the ability to move the buttons is used to reconfigure the user interface buttons and this may be performed dynamically, based on the current state of the software program, or may be performed dependent upon the software program being used. The user interface device may be a peripheral device, such as a mouse or keyboard, or may be integrated within a computing device such as a games device. | 06-07-2012 |
20120139897 | Tabletop Display Providing Multiple Views to Users - A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state when images are displayed and a transparent state when imaging beyond the surface can be performed. The device may form part of a tabletop with a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed. | 06-07-2012 |
20120194516 | Three-Dimensional Environment Reconstruction - Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value (a sketch of this per-voxel update follows the table). | 08-02-2012 |
20120194517 | Using a Three-Dimensional Environment Model in Gameplay - Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates. | 08-02-2012 |
20120194644 | Mobile Camera Localization Using Depth Maps - Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment. | 08-02-2012 |
20120194650 | Reducing Interference Between Multiple Infra-Red Depth Cameras - Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment. | 08-02-2012 |
20120195471 | Moving Object Segmentation Using Depth Images - Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During this determination, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity (a sketch of the outlier test follows the table). | 08-02-2012 |
20120196679 | Real-Time Camera Tracking Using Depth Maps - Real-time camera tracking using depth maps is described. In an embodiment, depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update, in real time, a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric to compute the updated registration parameters (a sketch of this error metric follows the table). In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real time. In some embodiments, a dense 3D model of the mobile camera environment is used. | 08-02-2012 |
20120198103 | EMBEDDED SYSTEM DEVELOPMENT PLATFORM - A modular development platform is described which enables creation of reliable, compact, physically robust and power efficient embedded device prototypes. The platform consists of a base module, which holds a processor, and one or more peripheral modules, each having an interface element. The base module and the peripheral modules may be electrically and/or physically connected together. The base module communicates with the peripheral modules using packets of data with an addressing portion which identifies the peripheral module that is the intended recipient of the data packet. | 08-02-2012 |
20130082818 | Motion Triggered Data Transfer - Methods of controlling the transfer of data between devices are described in which the manner of control is determined by a movement experienced by at least one of the devices. The method involves detecting a triggering movement and determining a characteristic of this movement. The transfer of data is then controlled based on the characteristic which has been identified. | 04-04-2013 |
20130241806 | Surface Puck - An image orientation system is provided wherein images (rays of light) are projected to a user based on the user's field of view or viewing angle. As the rays of light are projected, streams of air can be produced that bend or focus the rays of light toward the user's field of view. The streams of air can be cold air, hot air, or combinations thereof. Further, an image receiver can be utilized to receive the produced image/rays of light directly in line with the user's field of view. The image receiver can be a wearable device, such as a head mounted display. | 09-19-2013 |
20130244782 | REAL-TIME CAMERA TRACKING USING DEPTH MAPS - Real-time camera tracking using depth maps is described. In an embodiment, depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update, in real time, a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real time. In some embodiments, a dense 3D model of the mobile camera environment is used. | 09-19-2013 |
20130290910 | USER INTERFACE CONTROL USING A KEYBOARD - User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates. | 10-31-2013 |
20140098018 | WEARABLE SENSOR FOR TRACKING ARTICULATED BODY-PARTS - A wearable sensor for tracking articulated body parts is described, such as a wrist-worn device which enables 3D tracking of fingers and optionally also the arm and hand without the need to wear a glove or markers on the hand. In an embodiment a camera captures images of an articulated part of a body of a wearer of the device and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device. In examples the device has a structured illumination source and a diffuse illumination source for illuminating the articulated body part. In some examples an inertial measurement unit is also included in the sensor to enable tracking of the arm and hand. | 04-10-2014 |
20140208274 | CONTROLLING A COMPUTING-BASED DEVICE USING HAND GESTURES - Methods and systems are described for controlling a computing-based device using both input received from a traditional input device (e.g. a keyboard) and hand gestures made on or near a reference object (e.g. the keyboard). In some examples, the hand gestures may comprise one or more hand touch gestures and/or one or more hand air gestures. | 07-24-2014 |
20150057784 | OPTIMIZING 3D PRINTING USING SEGMENTATION OR AGGREGATION - 3D printing may be optimized by segmenting input jobs and/or combining parts of input jobs together. In an embodiment, a user-defined metric is received with each input job and is used, along with the printing envelope and other characteristics of the available 3D printers, in scheduling input jobs to optimize the latency and/or throughput of the 3D printing process. In various embodiments, the scheduling may comprise dividing a 3D object into a number of parts and scheduling these parts separately, and/or combining 3D objects, or parts of 3D objects, from different input jobs to be printed at the same time on the same 3D printer (a scheduling sketch follows this table). In various embodiments, the scheduling is repeated when a new input job is received, and changes are made during printing. In various embodiments, a user may submit an updated version of an input job which is already in the process of being printed. | 02-26-2015 |
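The abstracts above describe mechanisms only at a high level; the sketches below illustrate a few of them. All are hedged: identifiers, field sizes, thresholds, and constants are illustrative assumptions, not details taken from the applications. For 20080288666 and 20120198103, the packet format is specified only as having an addressing portion that identifies the recipient module; a minimal Python sketch of such framing, with the field layout and CRC choice assumed for illustration:

```python
import struct
import zlib

# Hypothetical frame: 1-byte module address, 1-byte command,
# 2-byte payload length, payload, 4-byte CRC32. The applications do
# not specify these fields; only the addressing portion is described.
HEADER = struct.Struct(">BBH")

def encode_packet(address: int, command: int, payload: bytes) -> bytes:
    """Frame a packet addressed to a single peripheral module."""
    body = HEADER.pack(address, command, len(payload)) + payload
    return body + struct.pack(">I", zlib.crc32(body))

def decode_packet(frame: bytes) -> tuple[int, int, bytes]:
    """Check the CRC and split a frame back into its fields."""
    body, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("corrupt frame")
    address, command, length = HEADER.unpack(body[:HEADER.size])
    return address, command, body[HEADER.size:HEADER.size + length]

# The base module dispatches on the address so that only the intended
# recipient module handles the packet.
handlers = {0x01: lambda cmd, data: print("sensor module got", cmd, data)}
addr, cmd, data = decode_packet(encode_packet(0x01, 0x10, b"\x2a"))
handlers[addr](cmd, data)
```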
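20090139778 detects gestures by forming "sensing images" from the proximity sensors and analyzing sequences of them. A sketch of one plausible analysis, tracking the centroid of the sensed blob across frames to classify a horizontal swipe; the image shapes and travel threshold are invented for illustration:

```python
import numpy as np

def centroid(image: np.ndarray):
    """Intensity-weighted centroid of one proximity sensing image."""
    total = image.sum()
    if total == 0:
        return None  # nothing in the interaction area this frame
    ys, xs = np.indices(image.shape)
    return (xs * image).sum() / total, (ys * image).sum() / total

def detect_swipe(frames, min_travel=2.0):
    """Classify a left/right swipe from a sequence of sensing images."""
    points = [c for c in map(centroid, frames) if c is not None]
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    if abs(dx) < min_travel:
        return None
    return "swipe-right" if dx > 0 else "swipe-left"

# A blob moving rightward across three 4x8 sensing images.
frames = [np.zeros((4, 8)) for _ in range(3)]
for i, frame in enumerate(frames):
    frame[1:3, 2 * i:2 * i + 2] = 1.0
print(detect_swipe(frames))  # -> swipe-right
```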
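20110210917 and 20130290910 convert key-presses into a movement path by looking up each key's physical location. A sketch with the key pitch, row stagger, and UI scale all assumed; a real implementation would use the keyboard's actual geometry:

```python
# Assumed physical key centres in centimetres for a row-staggered
# QWERTY layout: 1.9 cm key pitch, half-key stagger per row.
KEY_POSITIONS = {}
for keys, x_offset, y in [
    ("qwertyuiop", 0.0, 0.0),
    ("asdfghjkl", 0.5, 1.9),
    ("zxcvbnm", 1.0, 3.8),
]:
    for col, key in enumerate(keys):
        KEY_POSITIONS[key] = (x_offset + 1.9 * col, y)

def movement_path(key_presses: str, ui_scale: float = 40.0):
    """Map a digit sliding across keys to user-interface coordinates."""
    return [(x * ui_scale, y * ui_scale)
            for x, y in (KEY_POSITIONS[k] for k in key_presses)]

# Sliding along the home row yields a horizontal path in the UI.
print(movement_path("asdf"))
```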
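20120075256 chooses between shadow and reflective detection per pixel by thresholding the ambient infrared level. A NumPy sketch of that selection; the threshold and the two detection rules are assumed stand-ins for whatever the panel actually implements:

```python
import numpy as np

AMBIENT_THRESHOLD = 0.6  # assumed normalized ambient IR level

def detect_touches(ambient, lit, unlit):
    """Per-pixel touch detection combining shadow and reflective modes.

    ambient -- IR image with the panel's own source off and no object
    lit     -- sensor image with the panel's IR source on
    unlit   -- sensor image with the panel's IR source off
    """
    use_shadow = ambient > AMBIENT_THRESHOLD
    shadow_hits = unlit < 0.5 * ambient    # object blocks ambient IR
    reflect_hits = (lit - unlit) > 0.2     # object reflects panel IR
    return np.where(use_shadow, shadow_hits, reflect_hits)

rng = np.random.default_rng(0)
shape = (4, 4)
print(detect_touches(rng.uniform(0, 1, shape),
                     rng.uniform(0, 1, shape),
                     rng.uniform(0, 1, shape)))
```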
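The per-voxel update in 20120194516 is, in standard terms, truncated signed distance function (TSDF) fusion: each thread projects its voxel into the depth image and folds the signed distance into a weighted running average. A serial, single-voxel sketch; the truncation band, intrinsics, and weighting scheme are conventional choices rather than details from the application:

```python
import numpy as np

TRUNCATION = 0.05  # metres; assumed truncation band

def update_voxel(voxel_xyz, tsdf, weight, world_to_cam, K, depth):
    """One fusion step for a single voxel.

    voxel_xyz    -- voxel centre in world coordinates (3-vector)
    tsdf, weight -- stored value and its accumulated weight
    world_to_cam -- 4x4 world-to-camera transform (camera pose)
    K            -- 3x3 camera intrinsics
    depth        -- depth image in metres, indexed [row, col]
    """
    p = world_to_cam @ np.append(voxel_xyz, 1.0)
    if p[2] <= 0:
        return tsdf, weight              # voxel is behind the camera
    u = K @ (p[:3] / p[2])               # project into the image
    col, row = int(round(u[0])), int(round(u[1]))
    if not (0 <= row < depth.shape[0] and 0 <= col < depth.shape[1]):
        return tsdf, weight              # projects outside the image
    sdf = depth[row, col] - p[2]         # measured surface minus voxel depth
    if sdf < -TRUNCATION:
        return tsdf, weight              # far behind the surface: no update
    d = min(1.0, sdf / TRUNCATION)       # truncate in front of the surface
    return (tsdf * weight + d) / (weight + 1), weight + 1

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1.0]])
depth = np.full((480, 640), 2.0)
print(update_voxel(np.array([0.0, 0.0, 2.0]), 0.0, 0.0, np.eye(4), K, depth))
```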
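20120195471, 20120196679, and 20130244782 share one computational core: after projective data association, a point-to-plane error metric drives the pose update, and correspondences with large residuals are treated as outliers belonging to moving objects. A sketch of the residual and the outlier test; the rejection threshold is an assumed value:

```python
import numpy as np

OUTLIER_THRESHOLD = 0.02  # metres; assumed rejection threshold

def point_to_plane_residuals(src, dst, dst_normals):
    """Signed distance from each source point to the tangent plane
    at its associated destination point."""
    return np.einsum("ij,ij->i", src - dst, dst_normals)

def error_and_moving_mask(src, dst, dst_normals):
    """Error metric minimized by the tracker, plus the outlier mask
    that the segmentation embodiment labels as moving-object points."""
    r = point_to_plane_residuals(src, dst, dst_normals)
    moving = np.abs(r) > OUTLIER_THRESHOLD
    return np.sum(r[~moving] ** 2), moving

src = np.array([[0.0, 0.0, 1.01], [0.0, 0.0, 1.5]])
dst = np.zeros((2, 3)) + [0.0, 0.0, 1.0]
normals = np.zeros((2, 3)) + [0.0, 0.0, 1.0]
print(error_and_moving_mask(src, dst, normals))  # second point flagged
```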
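20150057784 aggregates parts from different jobs into shared print runs, ordered by a user-defined metric and constrained by the printing envelope. A deliberately simplified greedy sketch that packs by volume alone; the envelope size, the priority field, and the batching rule are all assumptions:

```python
from dataclasses import dataclass

ENVELOPE_VOLUME = 8000.0  # cm^3; assumed build volume of one printer

@dataclass
class Part:
    job_id: str
    volume: float    # cm^3
    priority: float  # stand-in for the user-defined metric

def schedule(parts):
    """Greedily aggregate parts from different jobs into batches,
    highest priority first, without overfilling the envelope."""
    batches, current, used = [], [], 0.0
    for part in sorted(parts, key=lambda p: -p.priority):
        if current and used + part.volume > ENVELOPE_VOLUME:
            batches.append(current)
            current, used = [], 0.0
        current.append(part)
        used += part.volume
    if current:
        batches.append(current)
    return batches

parts = [Part("job-a", 5000, 2.0), Part("job-b", 4000, 1.0),
         Part("job-b", 2000, 3.0)]
for i, batch in enumerate(schedule(parts)):
    print(f"batch {i}: {[p.job_id for p in batch]}")
# -> batch 0: ['job-b', 'job-a']; batch 1: ['job-b']
```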