Patent application number | Description | Published |
--- | --- | --- |
20080214233 | CONNECTING MOBILE DEVICES VIA INTERACTIVE INPUT MEDIUM - A mobile device connection system is provided. The system includes an input medium to detect a device position or location. An analysis component determines a device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location where connections are established via wireless technologies. | 09-04-2008 |
20080250012 | IN SITU SEARCH FOR ACTIVE NOTE TAKING - A system and method that facilitates and effectuates in situ search for active note taking. The system and method includes receiving gestures from a stylus and a tablet associated with the system. Upon recognizing the gesture as belonging to a set of known and recognized gestures, the system creates an embeddable object, initiates a search with terms indicated by the gesture, associates the search results with the created object and inserts the object in close proximity with the terms that instigated the search. | 10-09-2008 |
20090058820 | FLICK-BASED IN SITU SEARCH FROM INK, TEXT, OR AN EMPTY SELECTION REGION - The claimed subject matter provides a system and/or a method that facilitates in situ searching of data. An interface can receive a flick gesture from an input device. An in situ search component can employ an in situ search triggered by the flick gesture, wherein the in situ search is executed on at least a portion of the data selected via the input device. | 03-05-2009 |
20090094283 | ACTIVE USE LOOKUP VIA MOBILE DEVICE - A system and methodology that enables a mobile device user to privately retrieve information while engaged in an active communication session is provided. The innovation enables a user to prompt lookup and retrieval of information (e.g., calendar appointments, contact information, task information) without interruption of the active communication session. The content of the information can be configured and conveyed by way of private audible feedback detectable only by the requesting party. | 04-09-2009 |
20090094560 | HANDLE FLAGS - The claimed subject matter provides techniques to effectuate and facilitate efficient and flexible selection of display objects. The system can include devices and components that acquire gestures from pointing instrumentalities and thereafter ascertain velocities and proximities in relation to the displayed objects. Based at least upon these ascertained velocities and proximities falling below or within threshold levels, the system displays flags associated with the display object. | 04-09-2009 |
20090187824 | SELF-REVELATION AIDS FOR INTERFACES - Systems and/or methods are provided that facilitate revealing assistance information associated with a user interface. An interface can obtain input information related to interactions between the interface and a user. In addition, the interface can output assistance information in situ with the user interface. Further, a decision component determines the in situ assistance information output by the interface, based at least in part on the obtained input information. | 07-23-2009 |
20100013777 | TRACKING INPUT IN A SCREEN-REFLECTIVE INTERFACE ENVIRONMENT - In an example embodiment, a method is adapted to track input with a device. The method includes an act of monitoring and acts of activating and displaying if a touch input is detected. The device has a first side and a second side, with the second side opposite the first side. The device has a display screen disposed on the first side, and a screen-reflective interface disposed on the second side. Respective positions on the screen-reflective interface correspond to respective locations of the display screen. The screen-reflective interface of the device is monitored. If a touch input is detected on the screen-reflective interface, the device performs acts of activating and displaying. Specifically, a tracking state is activated for the screen-reflective interface responsive to the detected touch input on the screen-reflective interface. An interface icon is displayed on the display screen to indicate that the tracking state has been activated. | 01-21-2010 |
20100149090 | GESTURES, INTERACTIONS, AND COMMON GROUND IN A SURFACE COMPUTING ENVIRONMENT - Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty or engagement of users through signaling and/or information gathering can occur. | 06-17-2010 |
20100180254 | Graphical Mashup - This document describes various techniques for creating, modifying, and using graphical mashups. In one embodiment, a graphical mashup is created based on locations of graphical representations of objects in a working area. Logical connections between the objects are created based on the objects' locations relative to each other. Alternatively or additionally, the techniques may enable a user to create or modify a graphical mashup by adding or deleting objects, modifying logical connections between objects, annotating objects, or abstracting the graphical mashup. | 07-15-2010 |
20100207908 | TOUCH-SENSITIVE DEVICE FOR SCROLLING A DOCUMENT ON A DISPLAY - A touch-sensitive device for use as an electronic input device for controlling scrolling of the visible portion of a document or image relative to a display. The device can include various improved configurations such as physically separate opposed input surfaces at opposite longitudinal ends and/or lateral sides. The end regions of a touch sensitive surface may be rounded and/or tapered to provide relative positional feedback to the user. Tactile positional feedback can also include surface texture changes on the scrolling area and/or changes in the surface of the frame in the region immediately adjacent the scrolling area. The touch sensitive areas may be provided within a split alphanumeric section of an ergonomic keyboard to enable scrolling without the user having to remove his or her hands from the alphanumeric section. | 08-19-2010 |
20100225595 | TOUCH DISCRIMINATION - The claimed subject matter provides a system and/or a method that facilitates distinguishing input among one or more users in a surface computing environment. A variety of information can be obtained and analyzed to infer an association between a particular input and a particular user. Touch point information can be acquired from a surface wherein the touch point information relates to a touch point. In addition, one or more environmental sensors can monitor the surface computing environment and provide environmental information. The touch point information and the environmental information can be analyzed to determine direction of inputs, location of users, movement of users, and so on. Individual analysis results can be correlated and/or aggregated to generate an inference of association between a touch point and a user. | 09-09-2010 |
20110181524 | Copy and Staple Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185299 | Stamp Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185300 | BRUSH, CARBON-COPY, AND FILL GESTURES - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185318 | EDGE GESTURES - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110185320 | Cross-reference Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 07-28-2011 |
20110191704 | CONTEXTUAL MULTIPLEXING GESTURES - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 08-04-2011 |
20110191718 | Link Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 08-04-2011 |
20110191719 | Cut, Punch-Out, and Rip Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 08-04-2011 |
20110205163 | Off-Screen Gestures to Create On-Screen Input - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209039 | MULTI-SCREEN BOOKMARK HOLD GESTURE - Embodiments of a multi-screen bookmark hold gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held in place proximate an edge of a journal page that is displayed on the first screen. A motion input is recognized at a second screen of the multi-screen system while the hold input remains held in place. A bookmark hold gesture can then be determined from the recognized hold and motion inputs, and the bookmark hold gesture is effective to bookmark the journal page at a location of the hold input on the first screen. | 08-25-2011 |
20110209057 | MULTI-SCREEN HOLD AND PAGE-FLIP GESTURE - Embodiments of a multi-screen hold and page-flip gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held to select a journal page that is displayed on the first screen. A motion input is recognized at a second screen of the multi-screen system, and the motion input is recognized while the hold input remains held in place. A hold and page-flip gesture can then be determined from the recognized hold and motion inputs, and the hold and page-flip gesture is effective to maintain the display of the journal page while one or more additional journal pages are flipped for display on the second screen. | 08-25-2011 |
20110209058 | MULTI-SCREEN HOLD AND TAP GESTURE - Embodiments of a multi-screen hold and tap gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held to select a displayed object on the first screen. A tap input is recognized at a second screen of the multi-screen system, and the tap input is recognized when the displayed object continues being selected. A hold and tap gesture can then be determined from the recognized hold and tap inputs. | 08-25-2011 |
20110209088 | Multi-Finger Gestures - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209089 | MULTI-SCREEN OBJECT-HOLD AND PAGE-CHANGE GESTURE - Embodiments of a multi-screen object-hold and page-change gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held in place to select a displayed object on the first screen. A motion input is recognized at a second screen of the multi-screen system, where the motion input is recognized while the displayed object remains held in place and is effective to change one or more journal pages. An object-hold and page-change gesture can then be determined from the recognized hold and motion inputs. | 08-25-2011 |
20110209093 | RADIAL MENUS WITH BEZEL GESTURES - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209097 | Use of Bezel as an Input Mechanism - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209098 | On and Off-Screen Gesture Combinations - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209099 | Page Manipulations Using On and Off-Screen Gestures - Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures. | 08-25-2011 |
20110209100 | MULTI-SCREEN PINCH AND EXPAND GESTURES - Embodiments of multi-screen pinch and expand gestures are described. In various embodiments, a first input is recognized at a first screen of a multi-screen system, and the first input includes a first motion input. A second input is recognized at a second screen of the multi-screen system, and the second input includes a second motion input. A pinch gesture or an expand gesture can then be determined from the first and second motion inputs that are associated with the recognized first and second inputs. | 08-25-2011 |
20110209101 | MULTI-SCREEN PINCH-TO-POCKET GESTURE - Embodiments of a multi-screen pinch-to-pocket gesture are described. In various embodiments, a first motion input to a first screen region is recognized at a first screen of a multi-screen system, and the first motion input is recognized to select a displayed object. A second motion input to a second screen region is recognized at a second screen of the multi-screen system, and the second motion input is recognized to select the displayed object. A pinch-to-pocket gesture can then be determined from the recognized first and second motion inputs within the respective first and second screen regions, the pinch-to-pocket gesture effective to pocket the displayed object. | 08-25-2011 |
20110209102 | MULTI-SCREEN DUAL TAP GESTURE - Embodiments of a multi-screen dual tap gesture are described. In various embodiments, a first tap input to a displayed object is recognized at a first screen of a multi-screen system. A second tap input to the displayed object is recognized at a second screen of the multi-screen system, and the second tap input is recognized approximately when the first tap input is recognized. A dual tap gesture can then be determined from the recognized first and second tap inputs. | 08-25-2011 |
20110209103 | MULTI-SCREEN HOLD AND DRAG GESTURE - Embodiments of a multi-screen hold and drag gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system when the hold input is held in place. A motion input is recognized at a second screen of the multi-screen system, and the motion input is recognized to select a displayed object while the hold input remains held in place. A hold and drag gesture can then be determined from the recognized hold and motion inputs. | 08-25-2011 |
20110209104 | MULTI-SCREEN SYNCHRONOUS SLIDE GESTURE - Embodiments of a multi-screen synchronous slide gesture are described. In various embodiments, a first motion input is recognized at a first screen of a multi-screen system, and the first motion input is recognized when moving in a particular direction across the first screen. A second motion input is recognized at a second screen of the multi-screen system, where the second motion input is recognized when moving in the particular direction across the second screen and approximately when the first motion input is recognized. A synchronous slide gesture can then be determined from the recognized first and second motion inputs. | 08-25-2011 |
20110264928 | CHANGING POWER MODE BASED ON SENSORS IN A DEVICE - An orientation of a device is detected based on a signal from at least one orientation sensor in the device. In response to the detected orientation, the device is placed in a full power mode. | 10-27-2011 |
20110265046 | THROWING GESTURES FOR MOBILE DEVICES - At least one tilt sensor generates a sensor value. A context information server receives the sensor value and sets at least one context attribute. An application uses at least one context attribute to determine that a flinging gesture has been made and to change an image on a display in response to the flinging gesture. | 10-27-2011 |
20110267263 | CHANGING INPUT TOLERANCES BASED ON DEVICE MOVEMENT - Movement of a device is detected using at least one sensor. In response to the detected movement, at least one value is altered to make it easier for a user to select an object on a display. | 11-03-2011 |
20110273368 | Extending Digital Artifacts Through An Interactive Surface - A unique system and method that facilitates extending input/output capabilities for resource deficient mobile devices and interactions between multiple heterogeneous devices is provided. The system and method involve an interactive surface to which the desired mobile devices can be connected. The interactive surface can provide an enhanced display space and customization controls for mobile devices that lack adequate displays and input capabilities. In addition, the interactive surface can be employed to permit communication and interaction between multiple mobile devices that otherwise are unable to interact with each other. When connected to the interactive surface, the mobile devices can share information, view information from their respective devices, and store information to the interactive surface. Furthermore, the interactive surface can resume activity states of mobile devices that were previously communicating upon re-connection to the surface. | 11-10-2011 |
20120154255 | COMPUTING DEVICE HAVING PLURAL DISPLAY PARTS FOR PRESENTING PLURAL SPACES - A computing device is described which includes plural display parts provided on respective plural device parts. The display parts define a display surface which provides interfaces to different tools. The tools, in turn, allow a local participant to engage in an interactive session with one or more remote participants. In one case, the tools include: a shared workspace processing module for providing a shared workspace for use by the participants; an audio-video conferencing module for enabling audio-video communication among the participants; and a reference space module for communicating hand gestures and the like among the participants, etc. In one case, the computing device is implemented as a portable computing device that can be held in a participant's hand during use. | 06-21-2012 |
20120154293 | DETECTING GESTURES INVOLVING INTENTIONAL MOVEMENT OF A COMPUTING DEVICE - A computing device is described herein which accommodates gestures that involve intentional movement of the computing device, either by establishing an orientation of the computing device and/or by dynamically moving the computing device, or both. The gestures may also be accompanied by contact with a display surface (or other part) of the computing device. For example, the user may establish contact with the display surface via a touch input mechanism and/or a pen input mechanism and then move the computing device in a prescribed manner. | 06-21-2012 |
20120154294 | USING MOVEMENT OF A COMPUTING DEVICE TO ENHANCE INTERPRETATION OF INPUT EVENTS PRODUCED WHEN INTERACTING WITH THE COMPUTING DEVICE - A computing device is described herein which collects input event(s) from at least one contact-type input mechanism (such as a touch input mechanism) and at least one movement-type input mechanism (such as an accelerometer and/or gyro device). The movement-type input mechanism can identify the orientation of the computing device and/or the dynamic motion of the computing device. The computing device uses these input events to interpret the type of input action that has occurred, e.g., to assess when at least part of the input action is unintentional. The computing device can then perform behavior based on its interpretation, such as by ignoring part of the input event(s), restoring a pre-action state, correcting at least part of the input event(s), and so on. | 06-21-2012 |
20120154295 | COOPERATIVE USE OF PLURAL INPUT MECHANISMS TO CONVEY GESTURES - A computing device is described which allows a user to convey a gesture through the cooperative use of two input mechanisms, such as a touch input mechanism and a pen input mechanism. A user uses a first input mechanism to demarcate content presented on a display surface of the computing device or other part of the computing device, e.g., by spanning the content with two fingers of a hand. The user then uses a second input mechanism to make gestures within the content that is demarcated by first input mechanism. In doing so, the first input mechanism establishes a context which governs the interpretation of gestures made by the second input mechanism. The computing device can also activate the joint use mode using two applications of the same input mechanism, such as two applications of a touch input mechanism. | 06-21-2012 |
20120154296 | SUPPLEMENTING A TOUCH INPUT MECHANISM WITH FINGERPRINT DETECTION - A computing device includes a fingerprint detection module for detecting fingerprint information that may be contained within touch input event(s) provided by a touch input mechanism. The computing device can leverage the fingerprint information in various ways. In one approach, the computing device can use the fingerprint information to enhance an interpretation of the touch input event(s), such as by rejecting parts of the touch input event(s) associated with an unintended input action. In another approach, the computing device can use the fingerprint information to identify an individual associated with the fingerprint information. The computing device can apply this insight to provide a customized user experience to that individual, such as by displaying content that is targeted to that individual. | 06-21-2012 |
20120158629 | DETECTING AND RESPONDING TO UNINTENTIONAL CONTACT WITH A COMPUTING DEVICE - A computing device is described herein for detecting and addressing unintended contact of a hand portion (such as a palm) or other article with a computing device. The computing device uses multiple factors to determine whether input events are accidental, including, for instance, the tilt of a pen device as it approaches a display surface of the computing device. The computing device can also capture and analyze input events which represent a hand that is close to the display surface, but not making physical contact with the display surface. The computing device can execute one or more behaviors to counteract the effect of any inadvertent input actions that it may detect. | 06-21-2012 |
20120162093 | Touch Screen Control - This document relates to touch screen controls. For instance, the touch screen controls can allow a user to control a computing device by engaging a touch screen associated with the computing device. One implementation can receive at least one tactile contact from a region of a touch screen. This implementation can present a first command functionality on the touch screen proximate the region for a predefined time. It can await user engagement of the first command functionality. Lacking user engagement within the predefined time, the implementation can remove the first command functionality and offer a second command functionality. | 06-28-2012 |
20120206330 | MULTI-TOUCH INPUT DEVICE WITH ORIENTATION SENSING - A multi-touch orientation sensing input device may enhance task performance efficiency. The multi-touch orientation sensing input device may include a device body that is partially enclosed or completely enclosed by a multi-touch sensor. The multi-touch orientation sensing input device may further include an inertia measurement unit that is disposed on the device body. The inertia measurement unit may measure a tilt angle of the device body with respect to a horizontal surface, as well as a roll angle of the device body along a length-wise axis of the device body with respect to an initial point on the device body. | 08-16-2012 |
20120236026 | Brush, Carbon-Copy, and Fill Gestures - Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device. | 09-20-2012 |
20120240043 | Self-Revelation Aids for Interfaces - Systems and/or methods are provided that facilitate revealing assistance information associated with a user interface. An interface can obtain input information related to interactions between the interface and a user. In addition, the interface can output assistance information in situ with the user interface. Further, a decision component determines the in situ assistance information output by the interface, based at least in part on the obtained input information. | 09-20-2012 |
20130082978 | OMNI-SPATIAL GESTURE INPUT - Embodiments of the present invention relate to systems, methods and computer storage media for detecting user input in an extended interaction space of a device, such as a handheld device. The method and system allow for utilizing a first sensor of the device sensing in a positive z-axis space of the device to detect a first input, such as a user's non-device-contacting gesture. The method and system also contemplate utilizing a second sensor of the device sensing in a negative z-axis space of the device to detect a second input. Additionally, the method and system contemplate updating a user interface presented on a display in response to detecting the first input by the first sensor in the positive z-axis space and detecting the second input by the second sensor in the negative z-axis space. | 04-04-2013 |
20130115879 | Connecting Mobile Devices via Interactive Input Medium - A mobile device connection system is provided. The system includes an input medium to detect a device position or location. An analysis component determines a device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location where connections are established via wireless technologies. | 05-09-2013 |
20130138424 | Context-Aware Interaction System Using a Semantic Model - The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model associated with interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby identify one or more commands for a target output mechanism. | 05-30-2013 |
20130154952 | GESTURE COMBINING MULTI-TOUCH AND MOVEMENT - Functionality is described herein for interpreting gestures made by a user in the course of interacting with a handheld computing device. The functionality operates by: (a) receiving a touch input event from at least one touch input mechanism; (b) receiving a movement input event from at least one movement input mechanism in response to movement of the computing device; and (c) determining whether the touch input event and the movement input event indicate that a user has made a multi-touch-movement (MTM) gesture. A user performs an MTM gesture by touching a surface of the touch input mechanism to establish two or more contacts in conjunction with moving the computing device in a prescribed manner. The functionality can define an action space in response to the MTM gesture and perform an action which affects the action space. | 06-20-2013 |
20130181902 | SKINNABLE TOUCH DEVICE GRIP PATTERNS - Skinnable touch device grip pattern techniques are described herein. A touch-aware skin may be configured to substantially cover the outer surfaces of a computing device. The touch-aware skin may include a plurality of skin sensors configured to detect interaction with the skin at defined locations. The computing device may include one or more modules operable to obtain input from the plurality of skin sensors and decode the input to determine grip patterns that indicate how the computing device is being held by a user. Various functionality provided by the computing device may be selectively enabled and/or adapted based on a determined grip pattern such that the provided functionality may change to match the grip pattern. | 07-18-2013 |
20130181953 | STYLUS COMPUTING ENVIRONMENT - A stylus computing environment is described. In one or more implementations, one or more inputs are detected using one or more sensors of a stylus. A user that has grasped the stylus, using fingers of the user's hand, is identified from the one or more received inputs. One or more actions are performed based on the identification of the user that was performed using the one or more inputs received from the one or more sensors of the stylus. | 07-18-2013 |
20130182892 | GESTURE IDENTIFICATION USING AN AD-HOC MULTIDEVICE NETWORK - Methods, systems, and computer-readable media for establishing an ad hoc network of devices that can be used to interpret gestures. Embodiments of the invention use a network of sensors with an ad hoc spatial configuration to observe physical objects in a performance area. The performance area may be a room or other area within range of the sensors. Initially, devices within the performance area, or with a view of the performance area, are identified. Once identified, the sensors go through a discovery phase to locate devices within an area. Once the discovery phase is complete and the devices within the ad hoc network are located, the combined signals received from the devices may be used to interpret gestures made within the performance area. | 07-18-2013 |
20130201095 | PRESENTATION TECHNIQUES - Techniques involving presentations are described. In one or more implementations, a user interface is output by a computing device that includes a slide of a presentation, the slide having an object that is output for display in three dimensions. Responsive to receipt of one or more inputs by the computing device, how the object in the slide is output for display in the three dimensions is altered. | 08-08-2013 |
20130201113 | MULTI-TOUCH-MOVEMENT GESTURES FOR TABLET COMPUTING DEVICES - Functionality is described herein for detecting and responding to gestures performed by a user using a computing device, such as, but not limited to, a tablet computing device. In one implementation, the functionality operates by receiving touch input information in response to the user touching the computing device, and movement input information in response to the user moving the computing device. The functionality then determines whether the input information indicates that a user has performed or is performing a multi-touch-movement (MTM) gesture. The functionality can then perform any behavior in response to determining that the user has performed an MTM gesture, such as by modifying a view or invoking a function, etc. | 08-08-2013 |
20130215454 | THREE-DIMENSIONAL PRINTING - Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device. | 08-22-2013 |
20130234992 | Touch Discrimination - In some implementations, a touch point on a surface of a touchscreen device may be determined. An image of a region of space above the surface and surrounding the touch point may be determined. The image may include a brightness gradient that captures a brightness of objects above the surface. A binary image that includes one or more binary blobs may be created based on a brightness of portions of the image. A determination may be made as to which of the one or more binary blobs are connected to each other to form portions of a particular user. A determination may be made that the particular user generated the touch point. | 09-12-2013 |
20130286223 | PROXIMITY AND CONNECTION BASED PHOTO SHARING - Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared. | 10-31-2013 |
20130300668 | Grip-Based Device Adaptations - Grip-based device adaptations are described in which a touch-aware skin of a device is employed to adapt device behavior in various ways. The touch-aware skin may include a plurality of sensors from which a device may obtain input and decode the input to determine grip characteristics indicative of a user's grip. On-screen keyboards and other input elements may then be configured and located in a user interface according to a determined grip. In at least some embodiments, a gesture defined to facilitate selective launch of an on-screen input element may be recognized and used in conjunction with grip characteristics to launch the on-screen input element in dependence upon grip. Additionally, touch and gesture recognition parameters may be adjusted according to a determined grip to reduce misrecognition. | 11-14-2013 |
20140123049 | KEYBOARD WITH GESTURE-REDUNDANT KEYS REMOVED - The subject disclosure is directed towards a graphical or printed keyboard having keys removed, in which the removed keys are those made redundant by gesture input. For example, a graphical or printed keyboard may be the same overall size and have the same key sizes as other graphical or printed keyboards with no numeric keys, yet, via the removed keys, may fit numeric and alphabetic keys into the same footprint. Also described is having three or more characters per key, with a tap corresponding to one character, and different gestures on the key differentiating among the other characters. | 05-01-2014 |
20140247207 | Causing Specific Location of an Object Provided to a Device - Techniques for causing a specific location of an object provided to a shared device. These techniques may include connecting the computing device with an individual device. The individual device may transmit the object to the shared device, where it is displayed at an initial object position on a display of the shared device. The initial object position may be updated in response to movement of the individual device, and the object may be displayed at the updated object position on the display. The object position may be locked in response to a signal, and the object may be displayed at the locked object position. | 09-04-2014 |
20140250245 | Modifying Functionality Based on Distances Between Devices - Described herein are techniques and systems that allow modification of functionalities based on distances between a shared device (e.g., a shared display, etc.) and an individual device (e.g., a mobile computing device, etc.). The shared device and the individual device may establish a communication to enable exchange of data. In some embodiments, the shared device or the individual device may measure a distance between the shared device and the individual device. Based on the distance, the individual device may operate in a different mode. In some instances, the shared device may then instruct the individual device to modify a functionality corresponding to the mode. | 09-04-2014 |
20140267184 | Multimode Stylus - A stylus for use as an input device automatically switches its mode of operation. | 09-18-2014 |
20140280748 | COOPERATIVE FEDERATION OF DIGITAL DEVICES VIA PROXEMICS AND DEVICE MICRO-MOBILITY - The subject disclosure is directed towards co-located collaboration/data sharing that is based upon detecting the proxemics of people and/or the proxemics of devices. A federation of devices is established based upon proxemics, such as when the users have entered into a formation based upon distance between them and orientation. User devices may share content with other devices in the federation based upon micro-mobility actions performed on the devices, e.g., tilting and/or otherwise interacting with a sending device. | 09-18-2014 |