Patent application number | Description | Published |
--- | --- | --- |
20110310010 | GESTURE BASED USER INTERFACE - A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand. A processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor, and to actuate different commands responsive to the signal from the movement monitor and the location of the cursor in the single dimension region. | 12-22-2011 |
20120078614 | Virtual keyboard for a non-tactile three dimensional user interface - A method, including presenting, by a computer system executing a non-tactile three dimensional user interface, a virtual keyboard on a display, the virtual keyboard including multiple virtual keys, and capturing a sequence of depth maps over time of a body part of a human subject. On the display, a cursor is presented at positions indicated by the body part in the captured sequence of depth maps, and one of the multiple virtual keys is selected in response to an interruption of a motion of the presented cursor in proximity to the one of the multiple virtual keys. | 03-29-2012 |
20120223882 | Three Dimensional User Interface Cursor Control - A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a first set of multiple 3D coordinates representing a gesture performed by a user positioned within a field of view of a sensing device coupled to the computer, the first set of 3D coordinates comprising multiple points in a fixed 3D coordinate system local to the sensing device. The first set of multiple 3D coordinates are transformed to a second set of corresponding multiple 3D coordinates in a subjective 3D coordinate system local to the user. | 09-06-2012 |
20120313848 | Three Dimensional User Interface Session Control - A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis. Upon detecting completion of the gesture, the non-tactile 3D user interface is transitioned from a first state to a second state. | 12-13-2012 |
20130014052 | ZOOM-BASED GESTURE USER INTERFACE - A user interface method, including presenting by a computer executing a user interface, multiple interactive items on a display. A first sequence of images is captured indicating a position in space of a hand of a user in proximity to the display, and responsively to the position, one of the interactive items is associated with the hand. After associating the item, a second sequence of images is captured indicating a movement of the hand, and responsively to the movement, a size of the one of the items is changed on the display. | 01-10-2013 |
20130044053 | Combining Explicit Select Gestures And Timeclick In A Non-Tactile Three Dimensional User Interface - A method including presenting, by a computer, multiple interactive items on a display coupled to the computer, and receiving, from a depth sensor, a sequence of three-dimensional (3D) maps containing at least a hand of a user of the computer. An explicit select gesture performed by the user toward one of the interactive items is detected in the maps, and the one of the interactive items is selected responsively to the explicit select gesture. Subsequent to selecting the one of the interactive items, a TimeClick functionality is actuated for subsequent interactive item selections to be made by the user. | 02-21-2013 |
20130055120 | SESSIONLESS POINTING USER INTERFACE - A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture. | 02-28-2013 |
20130222239 | ASYMMETRIC MAPPING FOR TACTILE AND NON-TACTILE USER INTERFACES - A method, including receiving, by a computer, a sequence of signals indicating a motion of a hand of a user within a predefined area, and segmenting the area into multiple regions. Responsively to the signals, a region is identified in which the hand is located, and a mapping ratio is assigned to the motion of the hand based on a direction of the motion and the region in which the hand is located. Using the assigned mapping ratio, a cursor on a display is presented responsively to the indicated motion of the hand. | 08-29-2013 |
20130263036 | Gesture-based interface with enhanced features - A method includes presenting, on a display coupled to a computer, an image of a keyboard comprising multiple keys, and receiving a sequence of three-dimensional (3D) maps including a hand of a user positioned in proximity to the display. An initial portion of the sequence of 3D maps is processed to detect a transverse gesture performed by the hand, and a cursor is presented on the display at a position indicated by the transverse gesture. While the cursor is presented in proximity to one of the multiple keys, that key is selected upon detecting a grab gesture followed by a pull gesture followed by a release gesture in a subsequent portion of the sequence of 3D maps. | 10-03-2013 |
20130265222 | Zoom-based gesture user interface - A method includes arranging, by a computer, multiple interactive objects as a hierarchical data structure, each node of the hierarchical data structure associated with a respective one of the multiple interactive objects, and presenting, on a display coupled to the computer, a first subset of the multiple interactive objects that are associated with one or more child nodes of one of the multiple interactive objects. A sequence of three-dimensional (3D) maps including at least part of a hand of a user positioned in proximity to the display is received. In the sequence of 3D maps, the hand is identified performing, in order, a transverse gesture, a grab gesture, a longitudinal gesture, and an execute gesture, and an operation associated with the object selected by the gestures is accordingly performed. | 10-10-2013 |
20130283208 | GAZE-ENHANCED VIRTUAL TOUCHSCREEN - A method, including presenting, by a computer, multiple interactive items on a display coupled to the computer, and receiving an input indicating a direction of a gaze of a user of the computer. In response to the gaze direction, one of the multiple interactive items is selected, and subsequent to the one of the interactive items being selected, a sequence of three-dimensional (3D) maps is received containing at least a hand of the user. The 3D maps are analyzed to detect a gesture performed by the user, and an operation is performed on the selected interactive item in response to the gesture. | 10-24-2013 |
20130321265 | Gaze-Based Display Control - A method includes receiving an image including an eye of a user of a computerized system and identifying, based on the image of the eye, a direction of a gaze performed by the user. Based on the direction of the gaze, a region on a display coupled to the computerized system is identified, and an operation is performed on content presented in the region. | 12-05-2013 |
20130321271 | POINTING-BASED DISPLAY INTERACTION - A method includes receiving and segmenting a first sequence of three-dimensional (3D) maps over time of at least a part of a body of a user of a computerized system in order to extract 3D coordinates of a first point and a second point of the user, the 3D maps indicating a motion of the second point with respect to a display coupled to the computerized system. A line segment that intersects the first point and the second point is calculated, and a target point is identified where the line segment intersects the display. An interactive item presented on the display in proximity to the target point is engaged. | 12-05-2013 |
20140028548 | GAZE DETECTION IN A 3D MAPPING ENVIRONMENT - A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user (…) | 01-30-2014 |
20140043230 | Three-Dimensional User Interface Session Control - A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis. Upon detecting completion of the gesture, the non-tactile 3D user interface is transitioned from a first state to a second state. | 02-13-2014 |
20140152777 | CAMERA HAVING ADDITIONAL FUNCTIONALITY BASED ON CONNECTIVITY WITH A HOST DEVICE - Embodiments may be directed to lens cameras, which may be cameras arranged as a sensor in a lens cap. A lens camera may comprise a printed circuit board with a digital image sensor and associated components enclosed in a cylindrical body that may be constructed of metal, plastic, or the like, or a combination thereof. Lens cameras may be fitted with lens mounts for attaching host devices, cameras, interchangeable lenses, or the like. Lens mounts on a lens camera may be arranged to be compatible with one or more standard lens mounts. Accordingly, a lens camera may be attached to cameras that have compatible lens mounts. Also, interchangeable lenses having lens mounts compatible with the lens camera may be attached to the lens camera. Further, lens cameras may communicate with host devices using wired or wireless communication facilities. | 06-05-2014 |
20140160304 | CAMERA HAVING ADDITIONAL FUNCTIONALITY BASED ON CONNECTIVITY WITH A HOST DEVICE - Embodiments may be directed to lens cameras, which may be cameras arranged as a sensor in a lens cap. A lens camera may comprise a printed circuit board with a digital image sensor and associated components enclosed in a cylindrical body that may be constructed of metal, plastic, or the like, or a combination thereof. Lens cameras may be fitted with lens mounts for attaching host devices, cameras, interchangeable lenses, or the like. Lens mounts on a lens camera may be arranged to be compatible with one or more standard lens mounts. Accordingly, a lens camera may be attached to cameras that have compatible lens mounts. Also, interchangeable lenses having lens mounts compatible with the lens camera may be attached to the lens camera. Further, lens cameras may communicate with host devices using wired or wireless communication facilities. | 06-12-2014 |
20140380241 | ZOOM-BASED GESTURE USER INTERFACE - A user interface method, including presenting by a computer executing a user interface, multiple interactive items on a display. A first sequence of images is captured indicating a position in space of a hand of a user in proximity to the display, and responsively to the position, one of the interactive items is associated with the hand. After associating the item, a second sequence of images is captured indicating a movement of the hand, and responsively to the movement, a size of the one of the items is changed on the display. | 12-25-2014 |
20150022687 | SYSTEM AND METHOD FOR AUTOMATIC EXPOSURE AND DYNAMIC RANGE COMPRESSION - A method of capturing a digital image with a digital camera includes determining a first exposure level for capturing an image based on a first luminance level of the image, determining a second exposure level for capturing the image based on a threshold exposure level of the image, configuring an exposure level of a sensor of the digital camera based on the second exposure level, capturing the image as a digital image, and adding a non-linear digital gain to the digital image based on a difference between the first exposure level and the second exposure level. | 01-22-2015 |
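The pointing-based interaction in application 20130321271 above reduces to a simple geometric computation: extend the line through two tracked body points (for example, an eye and a fingertip) and find where it crosses the display plane. The sketch below is a minimal illustration of that idea, not the patented implementation; the function name and the choice of z = 0 as the display plane are assumptions for the example.

```python
def target_point(p_first, p_second):
    """Return the (x, y) point where the line through p_first and
    p_second (3D points, e.g. eye and fingertip) intersects the
    display plane z = 0, or None if the line is parallel to it."""
    ex, ey, ez = p_first
    fx, fy, fz = p_second
    dz = fz - ez
    if dz == 0:  # line never reaches the display plane
        return None
    t = -ez / dz  # parameter where z(t) = ez + t * dz = 0
    return (ex + t * (fx - ex), ey + t * (fy - ey))
```

For instance, an eye at (10, 20, 100) and a fingertip at (10, 10, 50) yield a target point of (10, 0) on the display, which the interface would then use to engage the nearest interactive item.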