Patent application number | Description | Published |
20120146932 | Event Registration and Dispatch System and Method for Multi-Point Controls - Dynamic registration of event handlers in a computer application or operating system recognizes multiple synchronous input streams by identifying each new stroke in a frame representing a single moment in time and mapping in a registration process each identified new stroke to a listening process that is associated with the user interface element to which the new input stream is to be applied. In the same frame, released strokes are unmapped and then each active listening process is called to carry out a respective control operation. When a listening process is called, the strokes carry the correct data for the given frame. The process is repeated for subsequent frames. By carrying out various processes in a sequence of frames, the concept of concurrency is preserved, which is particularly beneficial to multi-touch and multi-user systems. | 06-14-2012 |
20120146933 | Event Registration and Dispatch System and Method for Multi-Point Controls - Dynamic registration of event handlers in a computer application or operating system recognizes multiple synchronous input streams by identifying each new stroke in a frame representing a single moment in time and mapping in a registration process each identified new stroke to a listening process that is associated with the user interface element to which the new input stream is to be applied. In the same frame, released strokes are unmapped and then each active listening process is called to carry out a respective control operation. When a listening process is called, the strokes carry the correct data for the given frame. The process is repeated for subsequent frames. By carrying out various processes in a sequence of frames, the concept of concurrency is preserved, which is particularly beneficial to multi-touch and multi-user systems. | 06-14-2012 |
20120146934 | Event Registration and Dispatch System and Method for Multi-Point Controls - Dynamic registration of event handlers in a computer application or operating system recognizes multiple synchronous input streams by identifying each new stroke in a frame representing a single moment in time and mapping in a registration process each identified new stroke to a listening process that is associated with the user interface element to which the new input stream is to be applied. In the same frame, released strokes are unmapped and then each active listening process is called to carry out a respective control operation. When a listening process is called, the strokes carry the correct data for the given frame. The process is repeated for subsequent frames. By carrying out various processes in a sequence of frames, the concept of concurrency is preserved, which is particularly beneficial to multi-touch and multi-user systems. | 06-14-2012 |
20120227012 | Graphical User Interface for Large-Scale, Multi-User, Multi-Touch Systems - A method implemented on a graphical user interface device invokes an independent, user-localized menu in an application environment when the user makes a predetermined gesture with a pointing device on an arbitrary part of a display screen or surface; the method is especially useful in multi-touch, multi-user environments and in environments where multiple concurrent pointing devices are present. As an example, the user may trace out a closed loop of a specific size that invokes a default system menu at any location on the surface, even while a second user is operating a different portion of the system elsewhere on the same surface. As an additional aspect of the invention, the method allows the user to transition smoothly between menu invocation and menu control. | 09-06-2012 |
20130069860 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Engagement of an input mechanism with one of the target element and a particular one of the objects displayed on the multi-input display device is detected. Movement of the input mechanism is monitored while the input mechanism remains engaged with whichever of the target element and the particular displayed object the input mechanism engaged. A determination is made that at least a portion of a particular displayed object is overlapping at least a portion of a target element on the multi-input display device upon detecting disengagement of the input mechanism. As a consequence of disengagement and the overlap, processes are invoked that establish a relationship between the particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 03-21-2013 |
20130069885 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 03-21-2013 |
20130069991 | ORGANIZATIONAL TOOLS ON A MULTI-TOUCH DISPLAY DEVICE - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 03-21-2013 |
20130093693 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130093694 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130093695 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130093756 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 04-18-2013 |
20130093792 | Organizational Tools on a Multi-touch Display Device - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 04-18-2013 |
20130127833 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 05-23-2013 |
20130135290 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 05-30-2013 |
20130135291 | Volumetric Data Exploration Using Multi-Point Input Controls - A three-dimensional data set is accessed. A two-dimensional plane is defined that intersects a space defined by the three-dimensional data set. The two-dimensional plane defines a two-dimensional data set within the three-dimensional data set and divides the three-dimensional data set into first and second subsets. A three-dimensional view based on the three-dimensional data set is rendered such that at least a portion of the first subset of the three-dimensional data set is removed and at least a portion of the two-dimensional data set is displayed. A two-dimensional view of a first subset of the two-dimensional data set also is rendered. Controls are provided that enable visual navigation through the three-dimensional data set by engaging points on the multi-touch display device that correspond to either the three-dimensional view based on the three-dimensional data set or the two-dimensional view of the first subset of the two-dimensional data set. | 05-30-2013 |
20130300674 | Overscan Display Device and Method of Using the Same - A display device comprises a display panel including a display region configured to display one or more image objects and an overscan region configured to prevent the display of images within the overscan region, a touchscreen panel overlying the display region and the overscan region and configured to detect engagement and generate engagement data indicative of the detected engagement, and a computer processor configured to access first engagement data, determine that the first engagement data reflects engagement with at least one portion of the touchscreen panel that overlaps the overscan region, identify a first particular engagement input type based on the first engagement data, and instruct the display panel to invoke the display of the one or more image objects in the display region or to change the display of the one or more image objects in the display region. | 11-14-2013 |
20130307827 | 3D MANIPULATION USING APPLIED PRESSURE - Placement by one or more input mechanisms of a touch point on a multi-touch display device that is displaying a three-dimensional object is detected. A two-dimensional location of the touch point on the multi-touch display device is determined, and the touch point is matched with a three-dimensional contact point on a surface of the three-dimensional object that is projected for display onto the image plane of the camera at the two-dimensional location of the touch point. A change in applied pressure at the touch point is detected, and a target depth value for the contact point is determined based on the change in applied pressure. A solver is used to calculate a three-dimensional transformation of the three-dimensional object using an algorithm that reduces a difference between a depth value of the contact point after object transformation and the target depth value. | 11-21-2013 |
20130314353 | ORGANIZATIONAL TOOLS ON A MULTI-TOUCH DISPLAY DEVICE - A process for enabling objects displayed on a multi-input display device to be grouped together is disclosed that includes defining a target element that enables objects displayed on a multi-input display device to be grouped together through interaction with the target element. Operations are invoked that establish a relationship between a particular displayed object and a position on the target element and that cause transformations applied to the target element also to be applied to the particular displayed object while maintaining the relationship between the particular displayed object and the position on the target element. | 11-28-2013 |
20140104190 | Selective Reporting of Touch Data - A graphical user interface is rendered on a display screen of a touch screen device. The display screen includes a display area for rendering images, and the graphical user interface of the application is rendered in a portion of the display area. Digital touch data is generated in response to user interactions with a touch-sensitive surface of the touch screen device. A module of an operating system residing on the touch screen device is used to convert the digital touch data into OS touch events. The OS touch events and application location information are received at a system hook. The application location information identifies the portion of the display area of the touch screen device in which the graphical user interface of the application is rendered. The system hook filters the OS touch events based on the application location information and provides the filtered OS touch events to the application. | 04-17-2014 |
20140104191 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes receiving first and second contact data describing a first and second series of contacts with a touch sensitive display, the first and second series of contacts occurring over a time range. The method includes classifying the first series of contacts as being a series of touch inputs provided by a user's body part, and classifying the second series of contacts as being a series of stylus inputs provided by a stylus. The method includes comparing motion represented by the series of touch inputs with motion represented by the series of stylus inputs, and determining that the motion represented by the series of touch inputs correlates with the motion represented by the series of stylus inputs. The method includes classifying the series of touch inputs. | 04-17-2014 |
20140104192 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes maintaining a history of prior state information related to a touch sensitive display. The method further includes detecting that a previous contact with the touch sensitive display was incorrectly classified. The method further includes updating a classification of the previous contact based on the detection that the previous contact was incorrectly classified. The method further includes rewinding a state of the touch sensitive display to reflect a state that would have resulted had the previous contact been correctly classified based on the history of prior state information and the updated classification of the previous contact. | 04-17-2014 |
20140104193 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes receiving data describing a first region of contact with a touch sensitive display, a second region of contact with the touch sensitive display, and a third region of contact with the touch sensitive display, the second region of contact being separate from the first region of contact and the third region of contact being separate from the first region of contact and the second region of contact. The method includes classifying the first region of contact as a touch point provided by a user's body part. The method includes classifying the second region of contact as incidental touch input provided by a user's resting body part. The method includes classifying the third region of contact as a stylus input. | 04-17-2014 |
20140104194 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes receiving data describing a first region of contact with a touch sensitive display and a second region of contact with the touch sensitive display, the second region of contact being separate from the first region of contact. The method includes classifying the first region of contact as a touch point provided by a user's body part. The method includes classifying the second region of contact as incidental touch input provided by a user's resting body part. The method includes determining an area that is outside of the second region of contact and that extends at least a threshold distance from the second region of contact. The method includes determining a location of the touch point associated with the first region of contact. | 04-17-2014 |
20140104195 | Selective Reporting of Touch Data - A graphical user interface is rendered on a display screen of a touch screen device. The display screen includes a display area for rendering images, and the graphical user interface of the application is rendered in a portion of the display area. Digital touch data is generated in response to user interactions with a touch-sensitive surface of the touch screen device. A module of an operating system residing on the touch screen device is used to convert the digital touch data into OS touch events and application touch events. The OS touch events, application touch events, and application location information are received at a system hook. The application location information identifies the portion of the display area of the touch screen device in which the graphical user interface of the application is rendered. The system hook filters the OS touch events and the application touch events based on the application location information and provides the filtered OS touch events and application touch events to the application. | 04-17-2014 |
20140104225 | Input Classification for Multi-Touch Systems - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for input classification for multi-touch systems. In one aspect, a method includes receiving data describing a first region of contact with a touch sensitive display and a second region of contact with the touch sensitive display. The method includes determining at least one characteristic of the first region of contact. The method includes based on the at least one characteristic of the first region of contact, determining that the first region of contact corresponds to intended touch input provided by a user's body part or stylus. The method includes determining at least one characteristic of the second region of contact. The method includes based on the at least one characteristic of the second region of contact, determining that the second region of contact corresponds to incidental touch input provided by a user's resting body part. | 04-17-2014 |
20140104320 | Controlling Virtual Objects - Controlling virtual objects displayed on a display device includes controlling display, on a display device, of multiple virtual objects, each of the multiple virtual objects being capable of movement based on a first type of input and being capable of alteration based on a second type of input that is different than the first type of input, the alteration being different from movement. A subset of the multiple virtual objects as candidates for restriction is identified, and based on identifying the subset of virtual objects as candidates for restriction, a responsiveness to the first type of input for the subset of virtual objects is restricted. The first type of input applied to a first virtual object included in the subset of virtual objects and a second virtual object included in the multiple virtual objects is detected, with the second virtual object being excluded from the subset of virtual objects. Based on detecting the first type of input applied to the first virtual object and the second virtual object, movement of the first virtual object is controlled in accordance with the restricted responsiveness to the first type of input, and movement of the second virtual object is controlled without restriction. | 04-17-2014 |
20140108979 | Controlling Virtual Objects - Controlling virtual objects displayed on a display device comprises controlling display, on a display device, of multiple virtual objects, each of the multiple virtual objects being capable of movement based on a first type of input and being capable of alteration based on a second type of input that is different than the first type of input, the alteration being different from movement. User interaction relative to the display device on which the multiple virtual objects are displayed is sensed. Positions of the multiple virtual objects on the display device at a time corresponding to the sensed user interaction are determined. A subset of the multiple virtual objects is identified as candidates for restriction based on the sensed user interaction and the determined positions of the multiple virtual objects on the display device at the time corresponding to the sensed user interaction. An operation related to restricting movement of the determined subset of virtual objects based on the first type of input is performed. | 04-17-2014 |
20140168128 | 3D MANIPULATION USING APPLIED PRESSURE - Placement by one or more input mechanisms of a touch point on a multi-touch display device that is displaying a three-dimensional object is detected. A two-dimensional location of the touch point on the multi-touch display device is determined, and the touch point is matched with a three-dimensional contact point on a surface of the three-dimensional object that is projected for display onto the image plane of the camera at the two-dimensional location of the touch point. A change in applied pressure at the touch point is detected, and a target depth value for the contact point is determined based on the change in applied pressure. A solver is used to calculate a three-dimensional transformation of the three-dimensional object using an algorithm that reduces a difference between a depth value of the contact point after object transformation and the target depth value. | 06-19-2014 |
20140208248 | PRESSURE-SENSITIVE LAYERING OF DISPLAYED OBJECTS - First and second objects are displayed on a pressure-sensitive touch-screen display device. An intersection is detected between the objects. Contact by one or more input mechanisms is detected in a region that corresponds to the first displayed object. Pressure applied by at least one input mechanism is sensed. The depth of the first displayed object is adjusted as a function of the sensed pressure. The depths of the displayed objects are determined at their detected intersection. The determined depths of the displayed objects are compared. Based on a result of comparing the determined depths, data is stored indicating that one of the displayed objects is overlapping the other. In addition, the displayed objects are displayed such that the overlapping displayed object is displayed closer to a foreground of the pressure-sensitive touch-screen display device than the other displayed object. | 07-24-2014 |
20140325411 | MANIPULATION OF OVERLAPPING OBJECTS DISPLAYED ON A MULTI-TOUCH DEVICE - A multi-touch display device that is configured to display multiple objects concurrently and that provides multi-touch controls for manipulating multiple of the displayed objects with multiple degrees of freedom concurrently but independently is configured to reduce the degrees of freedom provided by the manipulation controls for at least two of the displayed objects in response to detecting that an input mechanism is exerting control over the two displayed objects concurrently. | 10-30-2014 |
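The per-frame registration-and-dispatch scheme summarized in applications 20120146932 through 20120146934 above can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation; the names `Dispatcher`, `Stroke`, and `hit_test` are hypothetical, and `hit_test` stands in for whatever mechanism resolves a stroke's position to the listening process of the user interface element under it.

```python
from dataclasses import dataclass


@dataclass
class Stroke:
    """One input stream's state within a single frame (a moment in time)."""
    stroke_id: int
    x: float
    y: float


class Dispatcher:
    """Per-frame registration and dispatch for concurrent input streams."""

    def __init__(self, hit_test):
        self.hit_test = hit_test  # maps (x, y) -> listener callable
        self.registry = {}        # stroke_id -> registered listener

    def process_frame(self, strokes):
        """Handle one frame: register new strokes, unregister released
        strokes, then call every active listener with this frame's data."""
        current = {s.stroke_id: s for s in strokes}
        # Registration: map each stroke that appeared this frame to the
        # listening process for the element it landed on.
        for sid, stroke in current.items():
            if sid not in self.registry:
                self.registry[sid] = self.hit_test(stroke.x, stroke.y)
        # Unmap strokes released since the previous frame.
        for sid in list(self.registry):
            if sid not in current:
                del self.registry[sid]
        # Dispatch: each listener receives stroke data that is consistent
        # for this frame, preserving concurrency across input streams.
        for sid, listener in self.registry.items():
            listener(current[sid])
```

Repeating `process_frame` for each successive frame gives every concurrent stream (e.g. multiple touches from multiple users) its own listener while keeping all dispatches within a frame consistent with that single moment in time.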