Patent application number | Description | Published |
20100053151 | IN-LINE MEDIATION FOR MANIPULATING THREE-DIMENSIONAL CONTENT ON A DISPLAY DEVICE - A user holds the mobile device upright or sits in front of a nomadic or stationary device, views the monitor from a suitable distance, and physically reaches behind the device with her hand to manipulate a 3D object displayed on the monitor. The device functions as a 3D in-line mediator that provides visual coherency to the user when she reaches behind the device to use hand gestures and movements to manipulate a perceived object behind the device and sees that the 3D object on the display is being manipulated. The perceived object that the user manipulates behind the device with bare hands corresponds to the 3D object displayed on the device. The visual coherency arises from the alignment of the user's head or eyes, the device, and the 3D object. The user's hand may be represented as an image of the actual hand or as a virtualized representation of the hand, such as part of an avatar. | 03-04-2010 |
20100053164 | SPATIALLY CORRELATED RENDERING OF THREE-DIMENSIONAL CONTENT ON DISPLAY COMPONENTS HAVING ARBITRARY POSITIONS - Two or more display components are used to provide spatially correlated displays of 3D content. Three-dimensional content is rendered on multiple displays where the 3D content refers to the same virtual 3D coordinates, in which the relative position of the displays to each other determines the 3D virtual camera position for each display. Although not required, one of the displays may be mobile, such as a cell phone, and the other stationary or nomadic, such as a laptop. Each display shows a view based on a virtual camera into 3D content, such as an online virtual world. By continuously sensing and updating the relative physical distances and orientations of each device to one another, the devices show the user a view into the 3D content that is spatially correlated. Each device has a virtual camera that uses a common pool of 3D geometrical data and renders this data to display images. | 03-04-2010 |
20100053322 | DETECTING EGO-MOTION ON A MOBILE DEVICE DISPLAYING THREE-DIMENSIONAL CONTENT - A method of measuring the ego-motion speed of a mobile device is described. The linear motion of the device is measured using an image sensor component, thereby creating linear motion data. The rotational or angular motion of the device is measured using an inertial sensor component, thereby creating rotational motion data. The rotational and linear motion data of the device are used to calculate the ego-motion speed of the mobile device. This ego-motion speed can then be used by a virtual camera control module to adjust the view of 3D content shown on the mobile device as the user moves it, changing the position of the virtual camera. | 03-04-2010 |
20100053324 | EGOMOTION SPEED ESTIMATION ON A MOBILE DEVICE - Linear and rotational speeds of a mobile device are calculated using distance estimates between imaging sensors in the device and objects or scenes in front of the sensors. The distance estimates are used to modify optical flow vectors from the sensors. Shifting and rotational speeds of the mobile device may then be calculated using the modified optical flow vector values. For example, given a configuration where the first imaging sensor and the second imaging sensor face opposite directions on a single axis, a shifting speed is calculated in the following way: multiplying a first optical flow vector and a first distance estimate, thereby deriving a first modified optical flow vector value; multiplying a second optical flow vector and a second distance estimate, thereby deriving a second modified optical flow vector value; the second modified optical flow vector value may then be subtracted from the first modified optical flow vector value, resulting in a measurement of the shifting speed. | 03-04-2010 |
20100128112 | IMMERSIVE DISPLAY SYSTEM FOR INTERACTING WITH THREE-DIMENSIONAL CONTENT - A system for displaying three-dimensional (3-D) content and enabling a user to interact with the content in an immersive, realistic environment is described. The system has a display component that is non-planar and provides the user with an extended field-of-view (FOV), one factor in creating the immersive user environment. The system also has a tracking sensor component for tracking the user's face. The tracking sensor may include one or more 3-D and 2-D cameras. In addition to tracking the face or head, it may also track other body parts, such as hands and arms. An image perspective adjustment module processes data from the face tracking and enables the user to perceive the 3-D content with motion parallax. Output data for the hands and other body parts is used by gesture detection modules to detect collisions between the user's hand and 3-D content. When a collision is detected, there may be tactile feedback to the user to indicate that there has been contact with a 3-D object. All these components contribute towards creating an immersive and realistic environment for viewing and interacting with 3-D content. | 05-27-2010 |
20100134618 | EGOMOTION SPEED ESTIMATION ON A MOBILE DEVICE USING A SINGLE IMAGER - Linear and rotational speeds of a mobile device are calculated using distance estimates between imaging sensors in the device and objects or scenes in front of the sensors. The distance estimates are used to modify optical flow vectors from the sensors. Shifting and rotational speeds of the mobile device may then be calculated using the modified optical flow vector values. For example, given a configuration where the first imaging sensor and the second imaging sensor face opposite directions on a single axis, a shifting speed is calculated in the following way: multiplying a first optical flow vector and a first distance estimate, thereby deriving a first modified optical flow vector value; multiplying a second optical flow vector and a second distance estimate, thereby deriving a second modified optical flow vector value; the second modified optical flow vector value may then be subtracted from the first modified optical flow vector value, resulting in a measurement of the shifting speed. | 06-03-2010 |
20100208029 | MOBILE IMMERSIVE DISPLAY SYSTEM - A mobile content delivery and display system enables a user to use a communication device, such as a cell phone or smart handset device, to view data, images, and video, make phone calls, and perform other functions, in an immersive environment while being mobile. The system, also referred to as a platform, includes a display component which may have one of numerous configurations, each providing an extended field-of-view (FOV). Display component shapes may include hemispherical, ellipsoidal, tubular, conical, pyramidal, or square/rectangular. The display component may have one or more vertical and/or horizontal cuts, each having various degrees of inclination, thereby providing the user with a partial physical enclosure creating extended horizontal and/or vertical FOVs. The platform may also have one or more projectors for displaying data (e.g., text, images, or video) on the display component. Other components in the system may include 2-D and 3-D cameras, location sensors, speakers, microphones, communication devices, and interfaces. The platform may be worn or attached to the user as an accessory facilitating user mobility. | 08-19-2010 |
20110285622 | RENDITION OF 3D CONTENT ON A HANDHELD DEVICE - A handheld device having a display and a front-facing sensor and a back-facing sensor is able to render 3D content in a realistic and spatially correct manner using position-dependent rendering and view-dependent rendering. In one scenario, the 3D content is only computer-generated content and the display on the device is a typical, non-transparent (opaque) display. The position-dependent rendering is performed using either the back-facing sensor or a front-facing sensor having a wide-angle lens. In another scenario, the 3D content is composed of computer-generated 3D content and images of physical objects and the display is either a transparent or semi-transparent display where physical objects behind the device show through the display. In this case, position-dependent rendering is performed using a back-facing sensor that is actuated (capable of physical panning and tilting) or is wide-angle, thereby enabling virtual panning. | 11-24-2011 |
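The shifting-speed calculation described in applications 20100053324 and 20100134618 (scale each imager's optical flow by its distance estimate, then subtract the back-facing value from the front-facing value) can be sketched as follows. This is a minimal illustration, not the patented implementation: the function and parameter names are invented here, and the conversion from pixel-space flow to metric flow is assumed to be folded into the distance estimates.

```python
import numpy as np

def shifting_speed(flow_front, dist_front, flow_back, dist_back):
    """Illustrative sketch: estimate the shifting (translational) speed of a
    device with two imaging sensors facing opposite directions on one axis.

    flow_front, flow_back: optical flow vectors from each imager (2-vectors)
    dist_front, dist_back: distance estimates from each imager to the scene
    """
    # Multiply each optical flow vector by its distance estimate, yielding
    # the "modified optical flow vector values" of the abstract.
    mod_front = np.asarray(flow_front, dtype=float) * dist_front
    mod_back = np.asarray(flow_back, dtype=float) * dist_back
    # Subtract the second modified value from the first; for opposite-facing
    # sensors, translation-induced flow has opposite signs in the two images,
    # so the difference isolates the shifting component.
    return mod_front - mod_back
```

For example, a forward flow of (2, 0) at distance 1.5 and a backward flow of (-1, 0) at distance 2.0 give modified values (3, 0) and (-2, 0), and a shifting-speed measurement of (5, 0).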