Patent application number | Description | Published |
20100240390 | Dual Module Portable Devices - A dual module portable device may be provided. A motion of a first module of the dual module portable device may be detected. Based at least in part on the detected motion, a position of the first module may be determined relative to the second module of the portable device. Once the relative position of the first module has been determined, a portion of a user interface associated with the relative position may be displayed at the first module. | 09-23-2010 |
20100241348 | Projected Way-Finding - Navigation information may be provided. First, a destination location may be received at a portable device. Next, a current location of the portable device may be detected. Then, at least one way-point may be calculated based on the current location and the destination location. An orientation and a level of the portable device may be determined and the at least one way-point may then be projected from the portable device. | 09-23-2010 |
20100241987 | Tear-Drop Way-Finding User Interfaces - A tear-drop way-finding user interface (UI) may be provided. A first UI portion corresponding to a device location may be provided. In addition, an object may be displayed at a first relative position within the first UI portion. Then, upon a detected change in device location, a second UI portion corresponding to the changed device location may be provided. In response to the changed device location, a second relative position of the object may be calculated. Next, a determination may be made as to whether the second relative position of the object is within a displayable range of the second UI portion. If the second relative position of the object is not within the displayable range of the second UI portion, then a tear-drop icon indicative of the second relative position of the object may be displayed at an edge of the second UI portion. | 09-23-2010 |
20100241999 | Canvas Manipulation Using 3D Spatial Gestures - User interface manipulation using three-dimensional (3D) spatial gestures may be provided. A two-dimensional (2D) user interface (UI) representation may be displayed. A first gesture may be performed, and, in response to the first gesture's detection, the 2D UI representation may be converted into a 3D UI representation. A second gesture may then be performed, and, in response to the second gesture's detection, the 3D UI representation may be manipulated. Finally, a third gesture may be performed, and, in response to the third gesture's detection, the 3D UI representation may be converted back into the 2D UI representation. | 09-23-2010 |
20110029629 | UNIFIED COMMUNICATION ESCALATION - A method and system for providing message threads with messages of multiple modes of communication in a uniform manner is provided. A messaging system provides a unified communications user interface for message threads that include messages sent using different modes of communication. When a user wants to reply to a message of one mode with a message of another mode, the messaging system displays the communication client application context of the other mode so that the user can prepare and send the message using the appropriate functions. When the reply message is sent, the messaging system adds it to the message thread so that it can be displayed as part of the thread. | 02-03-2011 |
20120139939 | Dual Module Portable Devices - A dual module portable device may be provided. A motion of a first module of the dual module portable device may be detected. Based at least in part on the detected motion, a position of the first module may be determined relative to the second module of the portable device. Once the relative position of the first module has been determined, a portion of a user interface associated with the relative position may be displayed at the first module. | 06-07-2012 |
20140115185 | DESKTOP ASSISTANT FOR MULTIPLE INFORMATION TYPES - A method and system for providing an aggregate view of information that a user may need is provided. A desktop assistant system collects information items that a user may need such as scheduling information and recently received messages. The desktop assistant system may also identify documents that the user may need and contacts with whom the user may need to communicate based on analysis of the collected scheduling information and the collected messages. The desktop assistant system then displays indications of the collected scheduling information, the collected messages, the identified documents, and the identified contacts so that the user has an integrated view of the needed information items. | 04-24-2014 |
20140344706 | Dual Module Portable Devices - A dual module portable device may be provided. A motion of a first module of the dual module portable device may be detected. Based at least in part on the detected motion, a position of the first module may be determined relative to the second module of the portable device. Once the relative position of the first module has been determined, a portion of a user interface associated with the relative position may be displayed at the first module. | 11-20-2014 |
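The projected way-finding flow in application 20100241348 boils down to a few steps: receive a destination, detect the current location, compute a way-point between the two, and determine a heading for projection. A minimal sketch of that kind of logic follows; the function names, the interpolation step, and the sample coordinates are illustrative assumptions, not taken from the filing:

```python
import math

def bearing_deg(cur, dest):
    """Initial great-circle bearing from cur to dest, in degrees.
    Both points are (latitude, longitude) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*cur, *dest))
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def next_waypoint(cur, dest, step=0.25):
    """A crude way-point: move a fraction of the way toward the
    destination by linear interpolation (adequate over short distances)."""
    return (cur[0] + (dest[0] - cur[0]) * step,
            cur[1] + (dest[1] - cur[1]) * step)

current = (47.6062, -122.3321)       # detected device location
destination = (47.6205, -122.3493)   # received destination
wp = next_waypoint(current, destination)
heading = bearing_deg(current, wp)   # orientation toward the way-point
```

A real implementation would also consult the device's detected orientation and level before projecting, which this sketch omits.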
Patent application number | Description | Published |
20130044114 | Visual Representation of Data According to an Abstraction Hierarchy - Visual representations having visual nodes that can each show up to two levels of an abstraction hierarchy of data, extracted elements of the data, and/or categories thereof, with the optional ability to explode the data, the extracted elements, and/or the categories into additional visual nodes, provide capability for deeper composition exploration. Relationships among the data, the extracted elements, and/or the categories can be represented via lines within and across visual nodes. The visual representation can provide a user with awareness of different attributes of the data, the extracted elements, and/or the categories in context even for large, complex corpora of data. The representations of the data, the extracted elements, and/or the categories in the visual representation can be sorted, can depict relative size or quantity across various attributes, and can provide insight into relationships based on metadata and/or content. | 02-21-2013 |
20140280909 | MULTI-DOMAIN SITUATIONAL AWARENESS FOR INFRASTRUCTURE MONITORING - Apparatus and methods are disclosed for a monitoring system that integrates multi-domain data from weather, power, cyber, and/or social media sources to greatly increase situation awareness and drive more accurate assessments of reliability, sustainability, and efficiency in infrastructure environments, such as power grids. In one example of the disclosed technology, a method includes receiving real-time data from two or more different domains relevant to an infrastructure system, aggregating the real-time data into a unified representation relevant to the infrastructure system, and providing the unified representation to one or more customizable graphical user interfaces. | 09-18-2014 |
20140358900 | Search Systems and Computer-Implemented Search Methods - Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets. | 12-04-2014 |
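The faceted image search in application 20140358900 (derive content facets from image data, then associate image objects with those facets) resembles building an inverted index from detected labels to images. A small sketch under that reading; `detect_content` and the toy label data are hypothetical stand-ins for the processing circuitry's analysis:

```python
from collections import defaultdict

def build_facet_index(image_objects, detect_content):
    """Map each detected content facet (e.g. 'person', 'building') to
    the image objects whose content contains it."""
    index = defaultdict(list)
    for obj in image_objects:
        for facet in detect_content(obj):
            index[facet].append(obj)
    return dict(index)

# Toy stand-in for image analysis: facets are precomputed labels.
images = [{"id": 1, "labels": {"person", "car"}},
          {"id": 2, "labels": {"person"}},
          {"id": 3, "labels": {"building"}}]
facets = build_facet_index(images, lambda obj: obj["labels"])
```

The display side described in the abstract would then render `facets` as groups, one per image content facet.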
Patent application number | Description | Published |
20100302144 | CREATING A VIRTUAL MOUSE INPUT DEVICE - A virtual mouse input device is created in response to a placement of a card on a touch surface. When the card is placed on the touch surface, the boundaries of the card are captured and a virtual mouse appears around the card. The virtual mouse may be linked with a user through an identifier that is contained on the card. Other controls and actions may be presented in menus that appear with the virtual mouse. For instance, the user may select the type of input (e.g. mouse, keyboard, ink or trackball) driven by the card. Once created, the virtual mouse is configured to receive user input until the card is removed from the touch surface. The virtual mouse is configured to move a cursor on a display in response to movement of the card on the touch surface. | 12-02-2010 |
20100302155 | VIRTUAL INPUT DEVICES CREATED BY TOUCH INPUT - An input device is created on a touch screen in response to a user's placement of their hand. When a user places their hand on the touch screen, an input device sized for their hand is dynamically created. Alternatively, some other input device may be created. For example, when the user places two hands on the device a split keyboard input device may be dynamically created on the touch screen that is split between the user's hand locations. Once the input device is determined, the user may enter input through the created device on the input screen. The input devices may be configured for each individual user such that the display of the input device changes based on physical characteristics that are associated with the user. | 12-02-2010 |
20100306649 | VIRTUAL INKING USING GESTURE RECOGNITION - A virtual inking device is created in response to a touch input device detecting a user's inking gesture. For example, when a user places one of their hands in a pen gesture (i.e. by connecting the index finger with the thumb while holding the other fingers near the palm), the user may perform inking operations. When the user changes the pen gesture to an erase gesture (i.e. making a fist) then the virtual pen may become a virtual eraser. Other inking gestures may also be utilized. | 12-02-2010 |
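The virtual-mouse behavior in application 20100302144 — track the card's position on the touch surface and move a cursor accordingly, stopping when the card is removed — amounts to mapping card displacement into display coordinates. A minimal sketch; the class, its scaling factor, and the event methods are all assumed names for illustration:

```python
class VirtualMouse:
    """Moves a cursor by the card's displacement on the touch surface,
    scaled to the display. Active only while the card is present."""
    def __init__(self, scale=2.0):
        self.scale = scale
        self.cursor = (0.0, 0.0)
        self.last = None          # last card position, or None if lifted

    def card_moved(self, pos):
        if self.last is not None:
            dx = (pos[0] - self.last[0]) * self.scale
            dy = (pos[1] - self.last[1]) * self.scale
            self.cursor = (self.cursor[0] + dx, self.cursor[1] + dy)
        self.last = pos

    def card_removed(self):
        self.last = None          # input stops until the card returns

m = VirtualMouse()
m.card_moved((10, 10))    # card placed: no cursor motion yet
m.card_moved((15, 12))    # card dragged 5 right, 2 down
```

Resetting `last` on removal means a card lifted and replaced elsewhere does not cause the cursor to jump, matching the abstract's "until the card is removed" behavior.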
Patent application number | Description | Published |
20100293501 | Grid Windows - Embodiments of the present invention are directed toward facilitating multi-user input on large format displays. In situations where multiple users may want to work individually on separate content, or individually on the same content, embodiments of the present invention provide an interface allowing a user or users to segment a display to create isolated areas in which multiple users may manipulate content independently and concurrently. | 11-18-2010 |
20100295794 | Two Sided Slate Device - Embodiments of the present invention provide a dual-sided multi-touch computing device that offers the advantages of a keyboard in addition to the conveniences of a slate device. The dual-sided multi-touch computing device may be utilized in two orientations: one side is a multi-touch slate device, and the alternate side is a multi-touch display keyboard. The device includes an orientation-recognition component, so that it may be configured based on its orientation. The present invention may be utilized as a stand-alone personal computer or as a peripheral device in conjunction with other devices. | 11-25-2010 |
20100299060 | Timed Location Sharing - Rule-based location sharing may be provided. A location determining device, such as a Global Positioning System (GPS) enabled device, may receive a request to share the location. A rule may be used to determine whether to share the location with the requestor. If the rule allows the location to be shared, the location may be sent to the requestor. The location may be relayed through a third party server, which may be operative to evaluate the rule before sharing the location with the requestor. | 11-25-2010 |
20100306004 | Shared Collaboration Canvas - A computing system causes a plurality of display devices to display user interfaces containing portions of a canvas shared by a plurality of users. The canvas is a graphical space containing discrete graphical elements located at arbitrary locations within the canvas. Each of the discrete graphical elements graphically represents a discrete resource. When a user interacts with a resource in the set of resources, the computing system modifies the canvas to include an interaction element indicating that the user is interacting with the resource. The computing system then causes the display devices to update the user interfaces such that the user interfaces reflect a substantially current state of the canvas. In this way, the users may be able to understand which ones of the users are interacting with which ones of the resources. | 12-02-2010 |
20100306018 | Meeting State Recall - Meeting state recall may be provided. A meeting context may be saved at the end of and/or during an event. The meeting context may comprise, for example, a hardware configuration, a software configuration, a recording of the meeting, and/or data associated with a subject of the meeting. The meeting context may be associated with an ongoing project and may be restored at a subsequent meeting associated with the ongoing project. | 12-02-2010 |
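The rule-based sharing in application 20100299060 — evaluate one or more rules against a request before releasing the location — can be sketched as a list of predicates that must all allow the request. The allow-list and time-of-day rules below are invented examples, not rules from the filing:

```python
from datetime import datetime

def may_share(requestor, now, rules):
    """Each rule is a predicate over (requestor, time); the location
    is released only if every rule permits it."""
    return all(rule(requestor, now) for rule in rules)

rules = [
    lambda who, t: who in {"alice", "bob"},   # allow-list of requestors
    lambda who, t: 8 <= t.hour < 20,          # share during daytime only
]
ok = may_share("alice", datetime(2024, 5, 1, 12, 0), rules)
```

A third-party relay server, as the abstract describes, could run the same `may_share` check before forwarding the location to the requestor.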