Patent application number | Description | Published |
20110063404 | REMOTE COMMUNICATION SYSTEM AND METHOD - A method comprises determining a status of an object on a first device and sending an indicator of the status of the object to a remote device, the indicator being configured to allow the remote device to present the status of the object. The method may further comprise establishing audio and video communication with the remote device. The audio and video communication with the remote device may be established over a network. The object may be a book, and the status may be associated with a page number of the book. The method may further comprise displaying animated content based on the determined status of the object. The displaying of animated content may include displaying an animated character providing commentary or asking questions related to content associated with the object. | 03-17-2011 |
20110159923 | Apparatus for a Tangible Interface - In accordance with an example embodiment of the present invention, a cradle comprising: a housing configured to receive a mobile device, the housing having at least two positions; a detecting element configured to detect a state of the mobile device; and a mechanism configured to change the position of the housing in response to detecting a change in the state of the mobile device. | 06-30-2011 |
20110223970 | Image-Based Addressing of Physical Content for Electronic Communication - In one exemplary embodiment, a method includes: capturing image data for physical content, where the physical content includes a recipient image indicative of at least one desired recipient; performing image recognition on the captured image data to identify the recipient image; matching the recipient image with corresponding contact information to obtain address information for the at least one desired recipient; and addressing an electronic communication to the at least one desired recipient using the address information, where the electronic communication includes the captured image data for the physical content. | 09-15-2011 |
20130002532 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR SHARED SYNCHRONOUS VIEWING OF CONTENT - Provided herein is a technique by which content may be shared with a remote user. An example method may include providing for display of content on a first device, synchronizing content between the first device and a second device, providing for display of an image captured by the second device on the first device, and providing for presentation of audio captured by the second device by the first device. The content may include an image of a page of a book. Synchronizing content between the first device and the second device may include directing advancing of a page on the second device in response to receiving an input directing the advancing of a page on the first device. Providing for display of an image captured by the second device on the first device may include providing for display of a video captured by the second device on the first device. | 01-03-2013 |
20130002708 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR PRESENTING INTERACTIVE DYNAMIC CONTENT IN FRONT OF STATIC CONTENT - Provided herein is a technique by which static content may be presented in an underlying relationship to dynamic content. An example method may include providing for display of static content and providing for display of dynamic content, where the static content may be displayed in an underlying relationship relative to the dynamic content. The dynamic content may be responsive to a user input and the dynamic content may change in response to a change in the static content. The dynamic content may include a dynamic content response where the dynamic content response is selected from a plurality of available dynamic content responses. The static content may include an image of a page of a book and the dynamic content may include an animated character configured to read the page of the book. | 01-03-2013 |
20130003962 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING ACTIVE ECHO-CANCELLATION FOR SYSTEMS COMBINING VOICE COMMUNICATION AND SYNCHRONOUS AUDIO CONTENT - An apparatus for removing an echo(es) from audio content may include a processor and memory storing executable computer code causing the apparatus to at least perform operations including receiving combined audio content including voice data associated with speech of users in a call and information including audio data provided to the users. The computer program code may further cause the apparatus to remove a first echo of a first item of voice data associated with a user(s), from the combined audio content, based in part on a prior detection of the first item of voice data. The computer program code may further cause the apparatus to remove a second echo of the audio data, from the combined audio content, based in part on a previous detection of the audio data or a previous detection of data corresponding to the audio data. Corresponding methods and computer program products are also provided. | 01-03-2013 |
20130139082 | Graphical Interface Having Adjustable Borders - Methods and systems involving navigation of a graphical interface are disclosed herein. An example system may be configured to: (a) cause a head-mounted display (HMD) to provide a graphical interface, the graphical interface comprising (i) a view port having a view-port orientation and (ii) at least one navigable area having at least one border, the at least one border having a first border orientation; (b) receive input data that indicates movement of the view port towards the at least one border; (c) determine that the view-port orientation is within a predetermined threshold distance from the first border orientation; and (d) based on at least the determination that the view-port orientation is within a predetermined threshold distance from the first border orientation, adjust the first border orientation from the first border orientation to a second border orientation. | 05-30-2013 |
20130258270 | WEARABLE DEVICE WITH INPUT AND OUTPUT STRUCTURES - A head-wearable device includes a center support extending in generally lateral directions, a first side arm extending from a first end of the center support, and a second side arm extending from a second end of the center support. The device may further include a nosebridge that is removably coupled to the center support. The device may also include a lens assembly that is removably coupled to the center support or the nosebridge. The lens assembly may have a single lens, or a multi-lens arrangement configured to cooperate with a display to correct for a user's ocular disease or disorder. | 10-03-2013 |
20130312070 | METHOD AND APPARATUS FOR A MULTI-PARTY CAPTCHA - In accordance with an example embodiment of the present invention, a method comprising: receiving at least one request for generating a challenge from at least one device; generating the challenge with at least two components; transmitting a component of the challenge to the at least one device; causing presentation of at least part of the challenge to at least two users; causing communication between said at least two users; and receiving at least two responses to the challenge from the at least one device. A related apparatus and computer program product are also described. | 11-21-2013 |
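The page-synchronization step described in application 20130002532 above (an input advancing the page on one device directs the same advance on the paired device) can be sketched as follows. The class and method names are illustrative assumptions, not drawn from the filing:

```python
# Hypothetical sketch of synchronized page advancement between two
# paired devices, as in application 20130002532. All names are
# illustrative assumptions.

class ReaderDevice:
    """A device displaying one page of shared content at a time."""

    def __init__(self, name):
        self.name = name
        self.page = 0
        self.peer = None  # the remote device to keep in sync

    def pair(self, other):
        self.peer = other
        other.peer = self

    def advance_page(self, from_peer=False):
        """Advance locally and, if the input originated locally,
        direct the peer device to advance as well (the sync step)."""
        self.page += 1
        if self.peer is not None and not from_peer:
            self.peer.advance_page(from_peer=True)

local = ReaderDevice("local")
remote = ReaderDevice("remote")
local.pair(remote)

local.advance_page()  # user turns the page on the first device
print(local.page, remote.page)  # both devices now show the same page
```

The `from_peer` flag prevents the two devices from echoing the same page turn back and forth indefinitely.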
20100151830 | MESSAGING DEVICE - An apparatus may include a housing that is configured to support a picture, a sensor that is configured to receive an input and that is associated with the picture, a communication module that is configured to communicate with a remote device and to retrieve audio messages from the remote device, a speaker that is configured to play the audio messages, an electronics control module that is configured to control playing of the audio messages that are designated as being associated with the picture in response to selection of the sensor, and a power module. | 06-17-2010 |
20130207887 | HEADS-UP DISPLAY INCLUDING EYE TRACKING - Embodiments of an apparatus comprising a light guide including a proximal end, a distal end, a display positioned near the proximal end, an eye-tracking camera positioned at or near the proximal end to image eye-tracking radiation, a proximal optical element positioned in the light guide near the proximal end, and a distal optical element positioned in the light guide near the distal end. The proximal optical element is optically coupled to the display, the eye-tracking camera, and the distal optical element, and the distal optical element is optically coupled to the proximal optical element, an ambient input region, and an input/output region. Other embodiments are disclosed and claimed. | 08-15-2013 |
20130222638 | Image Capture Based on Gaze Detection - Methods and systems for capturing an image are provided. In one example, a head-mounted device (HMD) having an image capturing device, a viewfinder, a gaze acquisition system, and a controller may be configured to capture an image. The image capturing device may be configured to have an imaging field of view including at least a portion of a field of view provided by the viewfinder. The gaze acquisition system may be configured to acquire a gaze direction of a wearer. The controller may be configured to determine whether the acquired gaze direction is through the viewfinder and generate an image capture instruction based on a determination that the acquired gaze direction indicates a gaze through the viewfinder. The controller may further be configured to cause the image capturing device to capture an image. | 08-29-2013 |
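The capture-decision logic in application 20130222638 above (generate a capture instruction only when the wearer's gaze is determined to pass through the viewfinder) might look like the following minimal sketch. The angular model, threshold, and function names are assumptions for illustration only:

```python
# Illustrative sketch of gaze-gated image capture, as in application
# 20130222638: a capture instruction is generated only when the
# acquired gaze direction falls within the viewfinder's angular
# extent. All names and the angular model are assumptions.

def gaze_through_viewfinder(gaze_deg, center_deg, half_width_deg):
    """Return True if the gaze angle lies inside the viewfinder."""
    return abs(gaze_deg - center_deg) <= half_width_deg

def maybe_capture(gaze_deg, center_deg=0.0, half_width_deg=10.0):
    """Emit a capture instruction based on the gaze determination."""
    if gaze_through_viewfinder(gaze_deg, center_deg, half_width_deg):
        return "capture"
    return "idle"

print(maybe_capture(3.0))   # gaze inside the viewfinder: capture
print(maybe_capture(25.0))  # gaze outside the viewfinder: idle
```

A real controller would compare a 3-D gaze vector against the viewfinder's field of view rather than a single angle, but the gating structure is the same.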
20130088413 | Method to Autofocus on Near-Eye Display - An optical system has an aperture through which virtual and real-world images are viewable along a viewing axis. The optical system may be incorporated into a head-mounted display (HMD). By modulating the length of the optical path along an optical axis within the optical system, the virtual image may appear to be at different distances away from the HMD wearer. The wearable computer of the HMD may be used to control the length of the optical path. The length of the optical path may be modulated using, for example, a piezoelectric actuator or stepper motor. By determining the distance to an object with respect to the HMD using a range-finder or autofocus camera, the virtual images may be controlled to appear at various distances and locations in relation to the target object and/or HMD wearer. | 04-11-2013 |
20130106674 | Eye Gaze Detection to Determine Speed of Image Movement | 05-02-2013 |
20130128364 | Method of Using Eye-Tracking to Center Image Content in a Display - A head-mounted display (HMD) may include an eye-tracking system, an HMD-tracking system and a display configured to display virtual images. The virtual images may present an augmented reality to a wearer of the HMD and the virtual images may adjust dynamically based on HMD-tracking data. However, position and orientation sensor errors may introduce drift into the displayed virtual images. By incorporating eye-tracking data, the drift of virtual images may be reduced. In one embodiment, the eye-tracking data could be used to determine a gaze axis and a target object in the displayed virtual images. The HMD may then move the target object towards a central axis. The HMD may also record data based on the gaze axis, central axis and target object to determine a user interface preference. The user interface preference could be used to adjust similar interactions with the HMD. | 05-23-2013 |
20130135204 | Unlocking a Screen Using Eye Tracking Information - Methods and systems for unlocking a screen using eye tracking information are described. A computing system may include a display screen. The computing system may be in a locked mode of operation after a period of inactivity by a user. Locked mode of operation may include a locked screen and reduced functionality of the computing system. The user may attempt to unlock the screen. The computing system may generate a display of a moving object on the display screen of the computing system. An eye tracking system may be coupled to the computing system. The eye tracking system may track eye movement of the user. The computing system may determine that a path associated with the eye movement of the user substantially matches a path associated with the moving object on the display and switch to be in an unlocked mode of operation including unlocking the screen. | 05-30-2013 |
20130176533 | Structured Light for Eye-Tracking - Exemplary methods and systems help provide for tracking an eye. An exemplary method may involve: causing the projection of a pattern onto an eye, wherein the pattern comprises at least one line, and receiving data regarding deformation of the at least one line of the pattern. The method further includes correlating the data to iris, sclera, and pupil orientation to determine a position of the eye, and causing an item on a display to move in correlation with the eye position. | 07-11-2013 |
20130257709 | Proximity Sensing for Wink Detection - This disclosure relates to proximity sensing for wink detection. An illustrative method includes receiving data from a receiver portion of a proximity sensor. The receiver portion is disposed at a side section of a head-mountable device (HMD). When a wearer wears the HMD, the receiver portion is arranged to receive light reflected from an eye area of the wearer, the proximity sensor detects a movement of the eye area, and the data represents the movement. The method includes determining that the data corresponds to a wink gesture. The method also includes selecting a computing action to perform, based on the wink gesture. The method further includes performing the computing action. | 10-03-2013 |
20130300652 | Unlocking a Screen Using Eye Tracking Information - Methods and systems for unlocking a screen using eye tracking information are described. A computing system may include a display screen. The computing system may be in a locked mode of operation after a period of inactivity by a user. Locked mode of operation may include a locked screen and reduced functionality of the computing system. The user may attempt to unlock the screen. The computing system may generate a display of a moving object on the display screen of the computing system. An eye tracking system may be coupled to the computing system. The eye tracking system may track eye movement of the user. The computing system may determine that a path associated with the eye movement of the user substantially matches a path associated with the moving object on the display and switch to be in an unlocked mode of operation including unlocking the screen. | 11-14-2013 |
20140055846 | User Interface - A head-mounted display (HMD) may include an eye-tracking system, an HMD-tracking system and a display configured to display virtual images. The virtual images may present an augmented reality to a wearer of the HMD and the virtual images may adjust dynamically based on HMD-tracking data. However, position and orientation sensor errors may introduce drift into the displayed virtual images. By incorporating eye-tracking data, the drift of virtual images may be reduced. In one embodiment, the eye-tracking data could be used to determine a gaze axis and a target object in the displayed virtual images. The HMD may then move the target object towards a central axis. The HMD may also record data based on the gaze axis, central axis and target object to determine a user interface preference. The user interface preference could be used to adjust similar interactions with the HMD. | 02-27-2014 |
20140098102 | One-Dimensional To Two-Dimensional List Navigation - Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) for a computing device, such as a head-mountable device (HMD). The computing device can display a first card of an ordered plurality of cards using a timeline display. The computing device can receive a first input and, responsively determine a group of cards for a grid view and display the grid view. The group of cards can include the first card. The grid view can include the group of cards arranged in a grid and be focused on the first card. The computing device can receive a second input, and responsively modify the grid view and display the modified grid view. The modified grid view can be focused on a second card. The computing device can receive a third input and responsively display the timeline display, where the timeline display includes the second card. | 04-10-2014 |
20140101608 | User Interfaces for Head-Mountable Devices - Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) for a head-mountable device (HMD). A computing device, such as an HMD, can display at least a portion of a first linear arrangement of cards. The first linear arrangement can include an ordered plurality of cards that can include an actionable card and a bundle card that can correspond to a group of cards. A moveable selection region can be displayed. A given card can be selected by aligning the selection region with the given card. After selection of a bundle card, the computing device can display a second linear arrangement of cards that includes a portion of the corresponding group of cards. After selection of an actionable card, the computing device can display a third linear arrangement of cards that includes action card(s) selectable to perform action(s) based on the actionable card. | 04-10-2014 |
20140258902 | Graphical Interface Having Adjustable Borders - Methods and systems involving navigation of a graphical interface are disclosed herein. An example system may be configured to: (a) cause a head-mounted display (HMD) to provide a graphical interface, the graphical interface comprising (i) a view port having a view-port orientation and (ii) at least one navigable area having at least one border, the at least one border having a first border orientation; (b) receive input data that indicates movement of the view port towards the at least one border; (c) determine that the view-port orientation is within a predetermined threshold distance from the first border orientation; and (d) based on at least the determination that the view-port orientation is within a predetermined threshold distance from the first border orientation, adjust the first border orientation from the first border orientation to a second border orientation. | 09-11-2014 |
20140376765 | HEADPHONES WITH ADAPTABLE FIT - A wearable audio component includes a first cable and an audio source in electrical communication with the first cable. A housing defines an interior and an exterior, the audio source being contained within the interior thereof. The exterior includes an ear-engaging surface, an outer surface, and a peripheral surface extending between the ear-engaging and outer surfaces. The peripheral surface includes a channel open along a length to surrounding portions of the peripheral surface and having a depth to extend partially between the ear-engaging and outer surfaces. A portion of the channel is covered by a bridge member that defines an aperture between and open to adjacent portions of the channel. The cable is connected with the housing at a first location disposed within the channel remote from the bridge member and is captured therein so as to extend through the aperture in a slidable engagement therewith. | 12-25-2014 |
20150084864 | Input Method - Methods and systems for authenticating a user using eye tracking information are described. A wearable computing system may include a head mounted display (HMD). The wearable computing system may be operable to be in a locked mode of operation after a period of inactivity by a user. Locked mode of operation may include a locked screen and reduced functionality of the wearable computing system. The user may be authenticated to be able to use the wearable computing system after the period of inactivity. The wearable computing system may generate a display of random content on the HMD including content personalized to the user. The wearable computing system may receive information associated with a gaze location of an eye of the user, determine that the gaze location substantially matches a predetermined location of the content personalized to the user on the HMD, and authenticate the user. | 03-26-2015 |
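Several of the listings above (20130135204, 20130300652) unlock a screen when the tracked eye path "substantially matches" the path of a moving object on the display. One plausible way to model that match, assuming sampled 2-D coordinates and a mean-distance threshold neither of which is specified in the filings, is:

```python
# A minimal sketch, under assumed names, of the unlock criterion in
# applications 20130135204 / 20130300652: the screen unlocks when the
# tracked eye path "substantially matches" the moving object's path.
# Here "substantially matches" is modeled (an assumption) as a mean
# point-to-point Euclidean distance below a threshold.

def mean_path_distance(eye_path, object_path):
    """Average Euclidean distance between corresponding samples."""
    dists = [((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5
             for (ex, ey), (ox, oy) in zip(eye_path, object_path)]
    return sum(dists) / len(dists)

def unlock(eye_path, object_path, threshold=5.0):
    """Unlock only when the paths are close enough on average."""
    return mean_path_distance(eye_path, object_path) < threshold

object_path = [(0, 0), (10, 0), (20, 0), (30, 0)]
tracked = [(1, 1), (11, -1), (19, 0), (31, 2)]
print(unlock(tracked, object_path))  # True: the paths substantially match
```

A production system would also need to resample the two paths onto a common time base and reject matches that are close in shape but wrong in timing; this sketch only illustrates the matching criterion itself.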