Patent application number | Description | Published |
20090189857 | TOUCH SENSING FOR CURVED DISPLAYS - Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface. | 07-30-2009 |
20090189917 | PROJECTION OF GRAPHICAL OBJECTS ON INTERACTIVE IRREGULAR DISPLAYS - A method for displaying images on a curved display surface is described herein. The method includes receiving a graphical object and distorting the graphical object at run-time such that an appearance of the graphical object on the curved display surface will be substantially similar regardless of a position of the graphical object on the curved display surface when viewed at a viewing axis that is approximately orthogonal to a plane that is tangential to the curved display surface at a center of the graphical object. The method may further include displaying the graphical object on the curved display surface. | 07-30-2009 |
20100020026 | Touch Interaction with a Curved Display - Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is enabled through various user interface (UI) features. In an example embodiment, a curved display is monitored to detect a touch input. If a touch input is detected based on the act of monitoring, then one or more locations of the touch input are determined. Responsive to the determined one or more locations of the touch input, at least one UI feature is implemented. Example UI features include an orb-like invocation gesture feature, a rotation-based dragging feature, a send-to-dark-side interaction feature, and an object representation and manipulation by proxy representation feature. | 01-28-2010 |
20100023895 | Touch Interaction with a Curved Display - Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved. | 01-28-2010 |
20100149090 | GESTURES, INTERACTIONS, AND COMMON GROUND IN A SURFACE COMPUTING ENVIRONMENT - Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty or engagement of users through signaling and/or information gathering can occur. | 06-17-2010 |
20100182220 | SURFACE PUCK - An image orientation system is provided wherein images (rays of lights) are projected to a user based on the user's field of view or viewing angle. As the rays of light are projected, streams of air can be produced that bend or focus the rays of light toward the user's field of view. The streams of air can be cold air, hot air, or combinations thereof. Further, an image receiver can be utilized to receive the produced image/rays of light directly in line with the user's field of view. The image receiver can be a wearable device, such as a head mounted display. | 07-22-2010 |
20100218249 | AUTHENTICATION VIA A DEVICE - The claimed subject matter provides a system and/or a method that facilitates authentication of a user in a surface computing environment. A device or authentication object can be carried by a user and employed to retain authentication information. An authentication component can obtain the authentication information from the device and analyze the information to verify an identity of the user. A touch input component can ascertain whether a touch input is authentic by associating the touch input with the user. In addition, authentication information can be employed to establish a secure communications channel for transfer of user data. | 08-26-2010 |
20100225595 | TOUCH DISCRIMINATION - The claimed subject matter provides a system and/or a method that facilitates distinguishing input among one or more users in a surface computing environment. A variety of information can be obtained and analyzed to infer an association between a particular input and a particular user. Touch point information can be acquired from a surface wherein the touch point information relates to a touch point. In addition, one or more environmental sensors can monitor the surface computing environment and provide environmental information. The touch point information and the environmental information can be analyzed to determine direction of inputs, location of users, and movement of users and so on. Individual analysis results can be correlated and/or aggregated to generate an inference of association between a touch point and a user. | 09-09-2010 |
20100242274 | DETECTING TOUCH ON A CURVED SURFACE - Embodiments are disclosed herein that are related to input devices with curved multi-touch surfaces. For example, in one disclosed embodiment, a method of making a multi-touch input device having a curved touch-sensitive surface comprises forming on a substrate an array of sensor elements defining a plurality of pixels of the multi-touch sensor, forming the substrate into a shape that conforms to a surface of the curved geometric feature of the body of the input device, and fixing the substrate to the curved geometric feature of the body of the input device. | 09-30-2010 |
20100245246 | DETECTING TOUCH ON A CURVED SURFACE - Embodiments are disclosed herein that are related to input devices with curved multi-touch surfaces. One disclosed embodiment comprises a touch-sensitive input device having a curved geometric feature comprising a touch sensor, the touch sensor comprising an array of sensor elements integrated into the curved geometric feature and being configured to detect a location of a touch made on a surface of the curved geometric feature. | 09-30-2010 |
20100265178 | CAMERA-BASED MULTI-TOUCH MOUSE - Technologies for a camera-based multi-touch input device operable to provide conventional mouse movement data as well as three-dimensional multi-touch data. Such a device is based on an internal camera focused on a mirror or set of mirrors enabling the camera to image the inside of a working surface of the device. The working surface allows light to pass through. An internal light source illuminates the inside of the working surface and reflects off of any objects proximate to the outside of the device. This reflected light is received by the mirror and then directed to the camera. Imaging from the camera can be processed to extract touch points corresponding to the position of one or more objects outside the working surface as well as to detect gestures performed by the objects. Thus the device can provide conventional mouse functionality as well as three-dimensional multi-touch functionality. | 10-21-2010 |
20100302137 | Touch Sensitive Display Apparatus using sensor input - Described herein is a system that includes a receiver component that receives gesture data from a sensor unit that is coupled to a body of a gloveless user, wherein the gesture data is indicative of a bodily gesture of the user, wherein the bodily gesture comprises movement pertaining to at least one limb of the gloveless user. The system further includes a location determiner component that determines location of the bodily gesture with respect to a touch-sensitive display apparatus. The system also includes a display component that causes the touch-sensitive display apparatus to display an image based at least in part upon the received gesture data and the determined location of the bodily gesture with respect to the touch-sensitive display apparatus. | 12-02-2010 |
20100315335 | Pointing Device with Independently Movable Portions - A pointing device with independently movable portions is described. In an embodiment, a pointing device comprises a base unit and a satellite portion. The base unit is arranged to be located under a palm of a user's hand and be movable over a supporting surface. The satellite portion is arranged to be located under a digit of the user's hand and be independently movable over the supporting surface relative to the base unit. In embodiments, data from at least one sensing device is read, and movement of both the base unit and the independently movable satellite portion of the pointing device is calculated from the data. The movement of the base unit and the satellite portion is analyzed to detect a user gesture. | 12-16-2010 |
20100315336 | Pointing Device Using Proximity Sensing - A pointing device using proximity sensing is described. In an embodiment, a pointing device comprises a movement sensor and a proximity sensor. The movement sensor generates a first data sequence relating to sensed movement of the pointing device relative to a surface. The proximity sensor generates a second data sequence relating to sensed movement relative to the pointing device of one or more objects in proximity to the pointing device. In embodiments, data from the movement sensor of the pointing device is read and the movement of the pointing device relative to the surface is determined. Data from the proximity sensor is also read, and a sequence of sensor images of one or more objects in proximity to the pointing device are generated. The sensor images are analyzed to determine the movement of the one or more objects relative to the pointing device. | 12-16-2010 |
20110080341 | Indirect Multi-Touch Interaction - Indirect multi-touch interaction is described. In an embodiment, a user interface is controlled using a cursor and a touch region comprising a representation of one or more digits of a user. The cursor and the touch region are moved together in the user interface in accordance with data received from a cursor control device, such that the relative location of the touch region and the cursor is maintained. The representations of the digits of the user are moved in the touch region in accordance with data describing movement of the user's digits. In another embodiment, a user interface is controlled in a first mode of operation using an aggregate cursor, and switched to a second mode of operation in which the aggregate cursor is divided into separate portions, each of which can be independently controlled by the user. | 04-07-2011 |
20110117526 | TEACHING GESTURE INITIATION WITH REGISTRATION POSTURE GUIDES - A method for providing multi-touch input initiation training on a display surface is disclosed. A set of one or more registration hand postures is determined, where each registration hand posture corresponds to one or more gestures executable from that registration hand posture. A registration posture guide is displayed on the display surface. The registration posture guide includes a catalogue for each registration hand posture, where the catalogue includes a contact silhouette showing a model touch-contact interface between the display surface and that registration hand posture. | 05-19-2011 |
20110117535 | TEACHING GESTURES WITH OFFSET CONTACT SILHOUETTES - A method for providing multi-touch input training on a display surface is disclosed. A touch input is detected at one or more regions of the display surface. A visualization of the touch input is displayed at a location of the display surface offset from the touch input. One or more annotations are displayed at a location of the display surface offset from the touch input and proximate to the visualization, where each annotation shows a different legal continuation of the touch input. | 05-19-2011 |
20110157025 | HAND POSTURE MODE CONSTRAINTS ON TOUCH INPUT - A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor, and a mode constraint is set based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint. | 06-30-2011 |
20110195781 | MULTI-TOUCH MOUSE IN GAMING APPLICATIONS - Keyboards, mice, joysticks, customized gamepads, and other peripherals are continually being developed to enhance a user's experience when playing computer video games. Unfortunately, many of these devices provide users with limited input control because of the complexity of today's gaming applications. For example, many computer video games require a combination of mouse and keyboard to control even the simplest of in-game tasks (e.g., walking into a room and looking around may require several keyboard keystrokes and mouse movements). Accordingly, one or more systems and/or techniques for performing in-game tasks based upon user input within a multi-touch mouse are disclosed herein. User input comprising one or more user interactions detected by spatial sensors within the multi-touch mouse may be received. A wide variety of in-game tasks (e.g., character movements, character actions, character view, etc.) may be performed based upon the user interactions (e.g., a swipe gesture, a mouse position change, etc.). | 08-11-2011 |
20110205147 | Interacting With An Omni-Directionally Projected Display - Concepts and technologies are described herein for interacting with an omni-directionally projected display. The omni-directionally projected display includes, in some embodiments, visual information projected on a display surface by way of an omni-directional projector. A user is able to interact with the projected visual information using gestures in free space, voice commands, and/or other tools, structures, and commands. The visual information can be projected omni-directionally, to provide a user with an immersive interactive experience with the projected display. The concepts and technologies disclosed herein can support more than one interacting user. Thus, the concepts and technologies disclosed herein may be employed to provide a number of users with immersive interactions with projected visual information. | 08-25-2011 |
20110227947 | Multi-Touch User Interface Interaction - Multi-touch user interface interaction is described. In an embodiment, an object in a user interface (UI) is manipulated by a cursor and a representation of a plurality of digits of a user. At least one parameter, which comprises the cursor location in the UI, is used to determine that multi-touch input is to be provided to the object. Responsive to this, the relative movement of the digits is analyzed and the object manipulated accordingly. In another embodiment, an object in a UI is manipulated by a representation of a plurality of digits of a user. Movement of each digit by the user moves the corresponding representation in the UI, and the movement velocity of the representation is a non-linear function of the digit's velocity. After determining that multi-touch input is to be provided to the object, the relative movement of the representations is analyzed and the object manipulated accordingly. | 09-22-2011 |
20110252316 | TRANSLATING TEXT ON A SURFACE COMPUTING DEVICE - A system described herein includes an acquirer component that acquires an electronic document that comprises text in a first language, wherein the acquirer component acquires the electronic document based at least in part upon a physical object comprising the text contacting or becoming proximate to the interactive display of the surface computing device. The system also includes a language selector component that receives an indication of a second language from a user of the surface computing device and selects the second language. A translator component translates the text in the electronic document from the first language to the second language, and a formatter component formats the electronic document for display to the user on the interactive display of the surface computing device, wherein the electronic document comprises the text in the second language. | 10-13-2011 |
20110260962 | INTERFACING WITH A COMPUTING APPLICATION USING A MULTI-DIGIT SENSOR - A technology is described for interfacing with a computing application using a multi-digit sensor. A method may include obtaining an initial stroke using a single digit of a user on the multi-digit sensor. A direction change point for the initial stroke can be identified. At the direction change point for the initial stroke, a number of additional digits can be presented by the user to the multi-digit sensor. Then a completion stroke can be identified as being made with the number of additional digits. A user interface signal can be sent to the computing application based on the number of additional digits used in the completion touch stroke. In another configuration of the technology, the touch stroke or gesture may include a single stroke where user interface items can be selected when additional digits are presented at the end of a gesture. | 10-27-2011 |
20120056840 | Precise selection techniques for multi-touch screens - A unique system and method are provided that facilitate pixel-accurate targeting with respect to multi-touch sensitive displays when selecting or viewing content with a cursor. In particular, the system and method can track dual inputs from a primary finger and a secondary finger, for example. The primary finger can control movement of the cursor while the secondary finger can adjust a control-display ratio of the screen. As a result, cursor steering and selection of an assistance mode can be performed at about the same time or concurrently. In addition, the system and method can stabilize a cursor position at a top middle point of a user's finger in order to mitigate clicking errors when making a selection. | 03-08-2012 |
20120092286 | Synthetic Gesture Trace Generator - A synthetic gesture trace generator is described. In an embodiment, a synthetic gesture trace is generated using a gesture synthesizer which may be implemented in software. The synthesizer receives a number of inputs, including parameters associated with a touch sensor to be used in the synthesis and a gesture defined in terms of gesture components. The synthesizer breaks each gesture component into a series of time-stamped contact co-ordinates at the frame rate of the sensor, with each time-stamped contact co-ordinate detailing the position of any touch events at a particular time. Sensor images are then generated from the time-stamped contact co-ordinates using a contact-to-sensor transformation function. Where there are multiple simultaneous contacts, there may be multiple sensor images generated having the same time-stamp and these are combined to form a single sensor image for each time-stamp. This sequence of sensor images is formatted to create the synthetic gesture trace. | 04-19-2012 |
20120113017 | RESOLVING MERGED TOUCH CONTACTS - A method for resolving merged contacts detected by a multi-touch sensor includes resolving a first touch contact to a first centroid(N) for a frame(N) and resolving a second touch contact, distinct from the first touch contact, to a second centroid(N) for the frame(N). Responsive to the first touch contact and the second touch contact merging into a merged touch contact in a frame(N+1), the merged touch contact is resolved to a first centroid(N+1) and a second centroid(N+1). | 05-10-2012 |
20120206330 | MULTI-TOUCH INPUT DEVICE WITH ORIENTATION SENSING - A multi-touch orientation sensing input device may enhance task performance efficiency. The multi-touch orientation sensing input device may include a device body that is partially enclosed or completely enclosed by a multi-touch sensor. The multi-touch orientation sensing input device may further include an inertia measurement unit that is disposed on the device body. The inertia measurement unit may measure a tilt angle of the device body with respect to a horizontal surface, as well as a roll angle of the device body along a length-wise axis of the device body with respect to an initial point on the device body. | 08-16-2012 |
20120206380 | PREDICTION-BASED TOUCH CONTACT TRACKING - In embodiments of prediction-based touch contact tracking, touch input sensor data is recognized as a series of components of a contact on a touch-screen display. A first component of the contact can be identified, and a second component can be determined to correlate to the contact. The first component and the second component can then be associated to represent a tracking of the contact. Subsequent components of the contact can be determined and associated with the previous components of the contact to further represent the tracking of the contact. | 08-16-2012 |
20120262407 | TOUCH AND STYLUS DISCRIMINATION AND REJECTION FOR CONTACT SENSITIVE COMPUTING DEVICES - A "Contact Discriminator" provides various techniques for differentiating between valid and invalid contacts received from any input methodology by one or more touch-sensitive surfaces of a touch-sensitive computing device. Examples of contacts include single, sequential, concurrent, or simultaneous user finger touches (including gesture type touches), pen or stylus touches or inputs, hover-type inputs, or any combination thereof. The Contact Discriminator then acts on valid contacts (i.e., contacts intended as inputs) while rejecting or ignoring invalid contacts or inputs. Advantageously, the Contact Discriminator is further capable of disabling or ignoring regions of input surfaces, such as tablet touch screens, that are expected to receive unintentional contacts, or intentional contacts not intended as inputs, for device or application control purposes. Examples of contacts not intended as inputs include, but are not limited to, a user's palm resting on a touch screen while the user writes on that screen with a stylus or pen. | 10-18-2012 |
20120293402 | MONITORING INTERACTIONS BETWEEN TWO OR MORE OBJECTS WITHIN AN ENVIRONMENT - One or more techniques and/or systems are provided for monitoring interactions by an input object with an interactive interface projected onto an interface object. That is, an input object (e.g., a finger) and an interface object (e.g., a wall, a hand, a notepad, etc.) may be identified and tracked in real-time using depth data (e.g., depth data extracted from images captured by a depth camera). An interactive interface (e.g., a calculator, an email program, a keyboard, etc.) may be projected onto the interface object, such that the input object may be used to interact with the interactive interface. For example, the input object may be tracked to determine whether the input object is touching or hovering above the interface object and/or a projected portion of the interactive interface. If the input object is in a touch state, then a corresponding event associated with the interactive interface may be invoked. | 11-22-2012 |
20120293420 | DISAMBIGUATING INTENTIONAL AND INCIDENTAL CONTACT AND MOTION IN MULTI-TOUCH POINTING DEVICES - An input device has both a touch sensor and a position sensor. A computer using data from the input device uses the relative motion of a contact on a touch sensor with respect to motion from a position detector to disambiguate intentional from incidental motion. The input device provides synchronized position sensor and touch sensor data to the computer to permit processing the relative motion and performing other computations on both position sensor and touch sensor data. The input device can encode the magnitude and direction of motion of the position sensor, combine it with the touch sensor data from the same time frame, and output the synchronized data to the computer. | 11-22-2012 |
20120299837 | IDENTIFYING CONTACTS AND CONTACT ATTRIBUTES IN TOUCH SENSOR DATA USING SPATIAL AND TEMPORAL FEATURES - A touch sensor provides frames of touch sensor data, as the touch sensor is sampled over time. Spatial and temporal features of the touch sensor data from a plurality of frames, and contacts and attributes of the contacts in previous frames, are processed to identify contacts and attributes of the contacts in a current frame. Attributes of the contacts can include whether the contact is reliable, shrinking, moving, or related to a fingertip touch. The characteristics of contacts can include information about the shape and rate of change of the contact, including but not limited to a sum of its pixels, its shape, size and orientation, motion, average intensities and aspect ratio. | 11-29-2012 |
20120319959 | DEVICE INTERACTION THROUGH BARRIER - There is provided an electronic device having a touch-sensing element configured for sensing touches on a surface thereof. A baseline sensitivity setting determines a sensitivity of the touch-sensing element. The touch-sensing element is configured to register a touch that meets or exceeds the baseline sensitivity setting, and to ignore a touch that does not meet the baseline sensitivity setting. The device further includes a sensor that senses an operating condition of the device. A memory of the device includes code executable by the device and configured to adjust the baseline sensitivity setting based upon the sensed operating condition. | 12-20-2012 |
20130069931 | CORRELATING MOVEMENT INFORMATION RECEIVED FROM DIFFERENT SOURCES - A system is described herein which receives internal-assessed (IA) movement information from a mobile device. The system also receives external-assessed (EA) movement information from at least one monitoring system which captures a scene containing the mobile device. The system then compares the IA movement information with the EA movement information with respect to each candidate object in the scene. If the IA movement information matches the EA movement information for a particular candidate object, the system concludes that the candidate object is associated with the mobile device. For example, the object may correspond to a hand that holds the mobile device. The system can use the correlation results produced in the above-indicated manner to perform various environment-specific actions. | 03-21-2013 |
20130082978 | OMNI-SPATIAL GESTURE INPUT - Embodiments of the present invention relate to systems, methods and computer storage media for detecting user input in an extended interaction space of a device, such as a handheld device. The method and system allow for utilizing a first sensor of the device sensing in a positive z-axis space of the device to detect a first input, such as a user's non-device-contacting gesture. The method and system also contemplate utilizing a second sensor of the device sensing in a negative z-axis space of the device to detect a second input. Additionally, the method and system contemplate updating a user interface presented on a display in response to detecting the first input by the first sensor in the positive z-axis space and detecting the second input by the second sensor in the negative z-axis space. | 04-04-2013 |
20130181902 | SKINNABLE TOUCH DEVICE GRIP PATTERNS - Skinnable touch device grip pattern techniques are described herein. A touch-aware skin may be configured to substantially cover the outer surfaces of a computing device. The touch-aware skin may include a plurality of skin sensors configured to detect interaction with the skin at defined locations. The computing device may include one or more modules operable to obtain input from the plurality of skin sensors and decode the input to determine grip patterns that indicate how the computing device is being held by a user. Various functionality provided by the computing device may be selectively enabled and/or adapted based on a determined grip pattern such that the provided functionality may change to match the grip pattern. | 07-18-2013 |
20130182892 | GESTURE IDENTIFICATION USING AN AD-HOC MULTIDEVICE NETWORK - Methods, systems, and computer-readable media for establishing an ad hoc network of devices that can be used to interpret gestures. Embodiments of the invention use a network of sensors with an ad hoc spatial configuration to observe physical objects in a performance area. The performance area may be a room or other area within range of the sensors. Initially, devices within the performance area, or with a view of the performance area, are identified. Once identified, the sensors go through a discovery phase to locate devices within an area. Once the discovery phase is complete and the devices within the ad hoc network are located, the combined signals received from the devices may be used to interpret gestures made within the performance area. | 07-18-2013 |
20130201095 | PRESENTATION TECHNIQUES - Techniques involving presentations are described. In one or more implementations, a user interface is output by a computing device that includes a slide of a presentation, the slide having an object that is output for display in three dimensions. Responsive to receipt of one or more inputs by the computing device, how the object in the slide is output for display in the three dimensions is altered. | 08-08-2013 |
20130215454 | THREE-DIMENSIONAL PRINTING - Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device. | 08-22-2013 |
20130234992 | Touch Discrimination - In some implementations, a touch point on a surface of a touchscreen device may be determined. An image of a region of space above the surface and surrounding the touch point may be determined. The image may include a brightness gradient that captures a brightness of objects above the surface. A binary image that includes one or more binary blobs may be created based on a brightness of portions of the image. A determination may be made as to which of the one or more binary blobs are connected to each other to form portions of a particular user. A determination may be made that the particular user generated the touch point. | 09-12-2013 |
20130241806 | Surface Puck - An image orientation system is provided wherein images (rays of lights) are projected to a user based on the user's field of view or viewing angle. As the rays of light are projected, streams of air can be produced that bend or focus the rays of light toward the user's field of view. The streams of air can be cold air, hot air, or combinations thereof. Further, an image receiver can be utilized to receive the produced image/rays of light directly in line with the user's field of view. The image receiver can be a wearable device, such as a head mounted display. | 09-19-2013 |
20130257777 | MOTION AND CONTEXT SHARING FOR PEN-BASED COMPUTING INPUTS - A “Motion and Context Sharing Technique” uses a pen or stylus enhanced to incorporate multiple sensors, i.e., a “sensor pen,” and a power supply to enable various input techniques and gestures. Various combinations of pen stroke, pressure, motion, and other sensor pen inputs are used to enable various hybrid input techniques that incorporate simultaneous, concurrent, sequential, and/or interleaved sensor pen inputs and touch inputs (i.e., finger, palm, hand, etc.) on displays or other touch sensitive surfaces. This enables a variety of motion-gesture inputs relating to the context of how the sensor pen is used or held, even when the pen is not in contact with, or within sensing range of, the computing device digitizer. In other words, any particular touch inputs or combinations of touch inputs are correlated with any desired sensor pen inputs, with those correlated inputs then being used to initiate any desired action by the computing device. | 10-03-2013 |
20130286223 | PROXIMITY AND CONNECTION BASED PHOTO SHARING - Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared. | 10-31-2013 |
20130295539 | PROJECTED VISUAL CUES FOR GUIDING PHYSICAL MOVEMENT - Physical movement of a human subject may be guided by a visual cue. A physical environment may be observed to identify a current position of a body portion of the human subject. A model path of travel may be obtained for the body portion of the human subject. The visual cue may be projected onto the human subject and/or into a field of view of the human subject. The visual cue may indicate the model path of travel for the body portion of the human subject. | 11-07-2013 |
20130300668 | Grip-Based Device Adaptations - Grip-based device adaptations are described in which a touch-aware skin of a device is employed to adapt device behavior in various ways. The touch-aware skin may include a plurality of sensors from which a device may obtain input and decode the input to determine grip characteristics indicative of a user's grip. On-screen keyboards and other input elements may then be configured and located in a user interface according to a determined grip. In at least some embodiments, a gesture defined to facilitate selective launch of on-screen input element may be recognized and used in conjunction with grip characteristics to launch the on-screen input element in dependence upon grip. Additionally, touch and gesture recognition parameters may be adjusted according to a determined grip to reduce misrecognition. | 11-14-2013 |
20130335594 | ENHANCING CAPTURED DATA - Captured data is obtained, including various types of captured or recorded data (e.g., image data, audio data, video data, etc.) and/or metadata describing various aspects of the capture device and/or the manner in which the data is captured. One or more elements of the captured data that can be replaced by one or more substitute elements are determined, the replaceable elements are removed from the captured data, and links to the substitute elements are associated with the captured data. Links to additional elements to enhance the captured data are also associated with the captured data. Enhanced content can subsequently be constructed based on the captured data as well as the links to the substitute elements and additional elements. | 12-19-2013 |
20130342525 | FOCUS GUIDANCE WITHIN A THREE-DIMENSIONAL INTERFACE - Methods, systems, and computer-readable media providing focal feedback and control in a three-dimensional display. Focal anchors are provided at different depths and used to determine at what depth the user is currently focusing. The focal anchors are also used to receive input from the user. By looking at a focal anchor, the user can cause the portion of content associated with the focal anchor to be displayed more prominently relative to content displayed at different depths. In one embodiment, predictive feedback is provided at a depth associated with one of the focal anchors. | 12-26-2013 |
20130343639 | AUTOMATICALLY MORPHING AND MODIFYING HANDWRITTEN TEXT - An automatic handwriting morphing and modification system and method for digitally altering the handwriting of a user while maintaining the overall appearance and style of the user's handwriting. Embodiments of the system and method do not substitute or replace characters or words but instead morph and modify the user's handwritten strokes to retain a visual correlation between the original user's handwriting and the morphed and modified version of the user's handwriting. Embodiments of the system and method input the user's handwriting and a set of morph rules that determine what the user's handwritten strokes can look like after processing. Morphs, which are a specific type or appearance of a handwritten stroke, are selected based on the target handwriting. The selected morphs are applied using geometric tuning, semantic tuning, or both. The result is a morphed and modified version of the user's handwriting. | 12-26-2013 |
20140005886 | CONTROLLING AUTOMOTIVE FUNCTIONALITY USING INTERNAL- AND EXTERNAL-FACING SENSORS | 01-02-2014 |
20140049609 | WIDE ANGLE DEPTH DETECTION - Embodiments for a depth sensing camera with a wide field of view are disclosed. In one example, a depth sensing camera comprises an illumination light projection subsystem, an image detection subsystem configured to acquire image data having a wide angle field of view, a logic subsystem configured to execute instructions, and a data-holding subsystem comprising stored instructions executable by the logic subsystem to control projection of illumination light and to determine depth values from image data acquired via the image sensor. The image detection subsystem comprises an image sensor and one or more lenses. | 02-20-2014 |
20140051510 | IMMERSIVE DISPLAY WITH PERIPHERAL ILLUSIONS - A primary display displays a primary image. A peripheral illusion is displayed around the primary display by an environmental display so that the peripheral illusion appears as an extension of the primary image. | 02-20-2014 |
20140063174 | MOBILE VIDEO CONFERENCING WITH DIGITAL ANNOTATION - A local user of a local mobile device is allowed to participate in a video conference session with a remote user of a remote mobile device. Live video can be shared between and collaboratively digitally annotated by the local and remote users. An application can also be shared between and collaboratively digitally annotated by the local and remote users. A digital object can also be shared between and collaboratively digitally annotated by the local and remote users. | 03-06-2014 |
20140109017 | Precise Selection Techniques For Multi-Touch Screens - A unique system and method is provided that facilitates pixel-accurate targeting with respect to multi-touch sensitive displays when selecting or viewing content with a cursor. In particular, the system and method can track dual inputs from a primary finger and a secondary finger, for example. The primary finger can control movement of the cursor while the secondary finger can adjust a control-display ratio of the screen. As a result, cursor steering and selection of an assistance mode can be performed at about the same time or concurrently. In addition, the system and method can stabilize a cursor position at a top middle point of a user's finger in order to mitigate clicking errors when making a selection. | 04-17-2014 |
20140111483 | MONITORING INTERACTIONS BETWEEN TWO OR MORE OBJECTS WITHIN AN ENVIRONMENT - One or more techniques and/or systems are provided for monitoring interactions by an input object with an interactive interface projected onto an interface object. That is, an input object (e.g., a finger) and an interface object (e.g., a wall, a hand, a notepad, etc.) may be identified and tracked in real-time using depth data (e.g., depth data extracted from images captured by a depth camera). An interactive interface (e.g., a calculator, an email program, a keyboard, etc.) may be projected onto the interface object, such that the input object may be used to interact with the interactive interface. For example, the input object may be tracked to determine whether the input object is touching or hovering above the interface object and/or a projected portion of the interactive interface. If the input object is in a touch state, then a corresponding event associated with the interactive interface may be invoked. | 04-24-2014 |
20140120518 | TEACHING GESTURES WITH OFFSET CONTACT SILHOUETTES - A method for providing multi-touch input training on a display surface is disclosed. A touch/hover input is detected at one or more regions of the display surface. A visualization of the touch/hover input is displayed at a location of the display surface offset from the touch/hover input. One or more annotations are displayed at a location of the display surface offset from the touch/hover input and proximate to the visualization, where each annotation shows a different legal continuation of the touch/hover input. | 05-01-2014 |
20140160424 | MULTI-TOUCH INTERACTIONS ON EYEWEAR - The subject disclosure is directed towards eyewear configured as an input device, such as for interaction with a computing device. The eyewear includes a multi-touch sensor set, e.g., located on the frames of eyeglasses, that outputs signals representative of user interaction with the eyewear, such as via taps, presses, swipes and pinches. Sensor handling logic may be used to provide input data corresponding to the signals to a program, e.g., the program with which the user wants to interact. | 06-12-2014 |
20140168096 | REDUCING LATENCY IN INK RENDERING - A reduced-latency ink rendering system and method that reduces latency in rendering ink on a display by bypassing at least some layers of the operating system. “Ink” is any input from a user through a touchscreen device using the user's finger or a pen. Moreover, some embodiments of the system and method bypass the operating system and the central processing unit (CPU) on a computing device when initially rendering the ink by going directly from the digitizer to the display controller. Any correction or additional processing of the rendered ink is performed after the initial rendering of the ink. Embodiments of the system and method address ink-rendering latency in software embodiments, which include techniques to bypass the typical rendering pipeline and quickly render ink on the display, and hardware embodiments, which use hardware and techniques that locally change display pixels. These embodiments can be mixed and matched in any manner. | 06-19-2014 |
20140232816 | PROVIDING A TELE-IMMERSIVE EXPERIENCE USING A MIRROR METAPHOR - A tele-immersive environment is described that provides interaction among participants of a tele-immersive session. The environment includes two or more set-ups, each associated with a participant. Each set-up, in turn, includes mirror functionality for presenting a three-dimensional virtual space for viewing by a local participant. The virtual space shows at least some of the participants as if the participants were physically present at a same location and looking into a mirror. The mirror functionality can be implemented as a combination of a semi-transparent mirror and a display device, or just a display device acting alone. According to another feature, the environment may present a virtual object in a manner that allows any of the participants of the tele-immersive session to interact with the virtual object. | 08-21-2014 |
20140247240 | PROVIDING MULTI-DIMENSIONAL HAPTIC TOUCH SCREEN INTERACTION - A method, system, and one or more computer-readable storage media for providing multi-dimensional haptic touch screen interaction are provided herein. The method includes detecting a force applied to a touch screen by an object and determining a magnitude, direction, and location of the force. The method also includes determining a haptic force feedback to be applied by the touch screen on the object based on the magnitude, direction, and location of the force applied to the touch screen, and displacing the touch screen in a specified direction such that the haptic force feedback is applied by the touch screen on the object. | 09-04-2014 |
20140247263 | STEERABLE DISPLAY SYSTEM - A steerable display system includes a projector and a projector steering mechanism that selectively changes a projection direction of the projector. An aiming controller causes the projector steering mechanism to aim the projector at a target location of a physical environment. An image controller supplies the aimed projector with information for projecting an image that is geometrically corrected for the target location. | 09-04-2014 |
20140253692 | PROJECTORS AND DEPTH CAMERAS FOR DEVICELESS AUGMENTED REALITY AND INTERACTION - Architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room). The cameras and projectors are calibrated, allowing the development of a multi-dimensional (e.g., 3D) model of the objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects. The architecture incorporates the depth data from all depth cameras, as well as color information, into a unified multi-dimensional model in combination with calibrated projectors. In order to provide visual continuity when transferring objects between different locations in the space, the user's body can provide a canvas on which to project this interaction. As the user moves body parts in the space, without any other object, the body parts can serve as temporary “screens” for “in-transit” data. | 09-11-2014 |
20140327641 | DISAMBIGUATING INTENTIONAL AND INCIDENTAL CONTACT AND MOTION IN MULTI-TOUCH POINTING DEVICES - An input device has both a touch sensor and a position sensor. A computer using data from the input device uses the relative motion of a contact on a touch sensor with respect to motion from a position detector to disambiguate intentional from incidental motion. The input device provides synchronized position sensor and touch sensor data to the computer to permit processing the relative motion and performing other computations on both position sensor and touch sensor data. The input device can encode the magnitude and direction of motion of the position sensor and combines it with the touch sensor data from the same time frame, and output the synchronized data to the computer. | 11-06-2014 |
20140337806 | INTERFACING WITH A COMPUTING APPLICATION USING A MULTI-DIGIT SENSOR - A technology is described for interfacing with a computing application using a multi-digit sensor. A method may include obtaining an initial stroke using a single digit of a user on the multi-digit sensor. A direction change point for the initial stroke can be identified. At the direction change point for the initial stroke, a number of additional digits can be presented by the user to the multi-digit sensor. Then a completion stroke can be identified as being made with the number of additional digits. A user interface signal can then be sent to the computing application based on the number of additional digits used in the completion stroke. In another configuration of the technology, the touch stroke or gesture may include a single stroke where user interface items can be selected when additional digits are presented at the end of a gesture. | 11-13-2014 |
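The touch-discrimination abstract above (20130234992) describes thresholding a brightness image into binary blobs and checking which blobs connect to attribute touch points to a user. The following is a minimal sketch of that idea, not the patented implementation: all names and the 4-connected flood fill are illustrative assumptions.

```python
# Illustrative sketch of blob-based touch attribution (assumed names,
# not the patent's actual algorithm). Two touch points are attributed
# to the same user when the bright regions above them (fingers joined
# through a hand or arm) form one connected blob.

def label_blobs(image, threshold):
    """Label 4-connected above-threshold blobs in a 2D brightness grid."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and labels[r][c] == 0:
                current += 1
                stack = [(r, c)]  # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] >= threshold
                            and labels[y][x] == 0):
                        labels[y][x] = current
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels

def same_user(image, touch_a, touch_b, threshold=128):
    """True when both touch points lie inside one connected blob."""
    labels = label_blobs(image, threshold)
    la = labels[touch_a[0]][touch_a[1]]
    lb = labels[touch_b[0]][touch_b[1]]
    return la != 0 and la == lb
```

A production system would use a depth or infrared camera image and a proper connected-components routine, but the connectivity test is the core of the attribution step.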
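The precise-selection abstract above (20140109017) has a primary finger steering the cursor while a secondary finger adjusts the control-display (CD) ratio. A minimal sketch of that interaction follows; the class, clamping range, and sensitivity constant are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of dual-finger cursor control (assumed names and
# constants). Lowering the CD ratio with the secondary finger makes the
# cursor move less per unit of primary-finger motion, enabling
# pixel-accurate targeting.

class DualFingerCursor:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y
        self.cd_ratio = 1.0  # display pixels moved per unit of finger motion

    def adjust_cd_ratio(self, secondary_dy, sensitivity=0.01):
        # Sliding the secondary finger scales the CD ratio, clamped to a
        # usable range so the cursor never freezes or flies off-screen.
        self.cd_ratio = min(4.0, max(0.1,
                            self.cd_ratio + secondary_dy * sensitivity))

    def move_primary(self, dx, dy):
        # The primary finger's motion is scaled by the current CD ratio.
        self.x += dx * self.cd_ratio
        self.y += dy * self.cd_ratio
        return self.x, self.y
```

Because both fingers act concurrently, the user can slow the cursor mid-drag for the final pixels of a selection, which is the assistance mode the abstract describes.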