Patent application number | Description | Published |
20090157736 | MULTIMEDIA INTEGRATION DESCRIPTION SCHEME, METHOD AND SYSTEM FOR MPEG-7 - The invention provides a system and method for integrating multimedia descriptions in a way that allows humans, software components, or devices to easily identify, represent, manage, retrieve, and categorize multimedia content. In this manner, a user who is interested in locating a specific piece of multimedia content from a database, the Internet, or broadcast media, for example, may search for and find that content. To this end, the invention provides a system and method that receives multimedia content and separates it into components that are assigned to multimedia categories, such as image, video, audio, synthetic, and text. Within each category, the content is classified and descriptions of it are generated. The descriptions are then formatted and integrated using a multimedia integration description scheme, and a multimedia integration description is generated for the content. The description is then stored in a database. As a result, a user may query a search engine, which retrieves from the database the multimedia content whose integration description matches the query criteria specified by the user. The search engine can then provide the user a useful search result based on the multimedia integration description. | 06-18-2009 |
20100005121 | MULTIMEDIA INTEGRATION DESCRIPTION SCHEME, METHOD AND SYSTEM FOR MPEG-7 - The invention provides a system and method for integrating multimedia descriptions in a way that allows humans, software components, or devices to easily identify, represent, manage, retrieve, and categorize multimedia content. In this manner, a user who is interested in locating a specific piece of multimedia content from a database, the Internet, or broadcast media, for example, may search for and find that content. To this end, the invention provides a system and method that receives multimedia content and separates it into components that are assigned to multimedia categories, such as image, video, audio, synthetic, and text. Within each category, the content is classified and descriptions of it are generated. The descriptions are then formatted and integrated using a multimedia integration description scheme, and a multimedia integration description is generated for the content. The description is then stored in a database. As a result, a user may query a search engine, which retrieves from the database the multimedia content whose integration description matches the query criteria specified by the user. The search engine can then provide the user a useful search result based on the multimedia integration description. | 01-07-2010 |
20110258189 | MULTIMEDIA INTEGRATION DESCRIPTION SCHEME, METHOD AND SYSTEM FOR MPEG-7 - The invention provides a system and method for integrating multimedia descriptions in a way that allows humans, software components, or devices to easily identify, represent, manage, retrieve, and categorize multimedia content. In this manner, a user who is interested in locating a specific piece of multimedia content from a database, the Internet, or broadcast media, for example, may search for and find that content. To this end, the invention provides a system and method that receives multimedia content and separates it into components that are assigned to multimedia categories, such as image, video, audio, synthetic, and text. Within each category, the content is classified and descriptions of it are generated. The descriptions are then formatted and integrated using a multimedia integration description scheme, and a multimedia integration description is generated for the content. The description is then stored in a database. As a result, a user may query a search engine, which retrieves from the database the multimedia content whose integration description matches the query criteria specified by the user. The search engine can then provide the user a useful search result based on the multimedia integration description. | 10-20-2011 |
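The three applications above describe one pipeline: split incoming content into category components, generate per-category descriptions, integrate them under a single description scheme, store the result, and match user queries against it. A minimal Python sketch of that flow; the category names are taken from the abstract, but the term-overlap matcher and dict-based "database" are illustrative assumptions, not the patented MPEG-7 scheme:

```python
# Sketch of an integration-description workflow: content is split into
# category components, per-category descriptions are generated, the
# descriptions are merged into one integrated record, and queries are
# matched against that record.

CATEGORIES = ("image", "video", "audio", "synthetic", "text")  # from the abstract

def describe(components):
    """Build an integrated description from {category: raw_component} pairs."""
    description = {}
    for category, component in components.items():
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        # Stand-in for real per-category classification / feature extraction.
        description[category] = sorted(set(component.lower().split()))
    return description

def matches(description, query):
    """True if every query term appears in some category's description."""
    vocabulary = {t for words in description.values() for t in words}
    return all(t in vocabulary for t in query.lower().split())

def search(database, query):
    """Return ids of stored items whose integrated description matches."""
    return [item_id for item_id, desc in database.items() if matches(desc, query)]
```

A query such as `search(db, "soccer crowd")` can then hit an item whose video description contributed "soccer" and whose audio description contributed "crowd", which is the cross-category retrieval the abstracts emphasize.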
Patent application number | Description | Published |
20100083103 | Phrase Generation Using Part(s) Of A Suggested Phrase - Real-time query expansion (RTQE) is a process of supplementing an original query with additional terms or expansion choices that are ranked according to some figure of merit and presented while users are still formulating their queries. As disclosed herein, phrases may be presented and one or more terms of a focused-on phrase may be pinned (as desirable to the user). Subsequent lists may be presented as a function of pinned terms and/or user input. In one embodiment, a placeholder may be substituted for one or more pinned terms if fewer than some predetermined threshold of phrases can be presented based upon the pinned terms and/or user input, and another list of phrases may be presented as a function of a query using fewer than all the pinned terms. The placeholder may allow out-of-index phrases to be formed, for example, based upon two or more phrases and/or terms input by the user. | 04-01-2010 |
20100162175 | AUGMENTED LIST FOR SEARCHING LARGE INDEXES - An augmented large index searching system and method for searching a database of items using a device having a limited input mechanism. Embodiments of the system and method present to a user, in an augmented list view or a regular list view, a list of items matching a sub-string search. The augmented list view contains a list of sub-group representations, so that each sub-group is represented by the item in the sub-group most likely to be selected by the user. The user can select a wanted item or refine the sub-string search by pinning a character, appending it to the sub-string to generate a revised sub-string. The above process is repeated using the revised sub-string. The list can be augmented by displaying visual features that indicate quantity and distinguish between items or characters using coloring, highlighting, shading, size, and so forth. | 06-24-2010 |
20120150772 | Social Newsfeed Triage - A social newsfeed being delivered to a user is triaged. A personalized model is established which predicts the importance to the user of data elements within a current social newsfeed being delivered to the user. The personalized model is established based on implicit actions the user takes in response to receiving previous social newsfeeds. The personalized model is then used to triage the data elements within the current social newsfeed. | 06-14-2012 |
20130042175 | PHRASE GENERATION USING PART(S) OF A SUGGESTED PHRASE - Real-time query expansion (RTQE) is a process of supplementing an original query with additional terms or expansion choices that are ranked according to some figure of merit and presented while users are still formulating their queries. As disclosed herein, phrases may be presented and one or more terms of a focused-on phrase may be pinned (as desirable to the user). Subsequent lists may be presented as a function of pinned terms and/or user input. In one embodiment, a placeholder may be substituted for one or more pinned terms if fewer than some predetermined threshold of phrases can be presented based upon the pinned terms and/or user input, and another list of phrases may be presented as a function of a query using fewer than all the pinned terms. The placeholder may allow out-of-index phrases to be formed, for example, based upon two or more phrases and/or terms input by the user. | 02-14-2013 |
20130141576 | DETERMINING THREATS BASED ON INFORMATION FROM ROAD-BASED DEVICES IN A TRANSPORTATION-RELATED CONTEXT - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based on information received at a road-based device, such as a sensor or processor that is deployed at the side of a road. An example AEFS receives, at a road-based device, information about a first vehicle that is proximate to the road-based device. The AEFS analyzes the received information to determine threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130142347 | VEHICULAR THREAT DETECTION BASED ON AUDIO SIGNALS - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing audio signals. An example AEFS receives data that represents an audio signal emitted by a vehicle. The AEFS analyzes the audio signal to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130142365 | AUDIBLE ASSISTANCE - Techniques for sensory enhancement and augmentation are described. Some embodiments provide an audible assistance facilitator system (“AAFS”) configured to provide audible assistance to a user via a hearing device. In one embodiment, the AAFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media device, or the like. The AAFS identifies the speaker based on the received data, such as by performing speaker recognition. The AAFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AAFS then informs the user of the speaker-related information, such as by causing an audio representation of the speaker-related information to be output via the hearing device. | 06-06-2013 |
20130142393 | VEHICULAR THREAT DETECTION BASED ON IMAGE ANALYSIS - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing image data. An example AEFS receives data that represents an image of a vehicle. The AEFS analyzes the received data to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130144490 | PRESENTATION OF SHARED THREAT INFORMATION IN A TRANSPORTATION-RELATED CONTEXT - Techniques for ability enhancement are described. In some embodiments, devices and systems located in a transportation network share threat information with one another, in order to enhance a user's ability to operate or function in a transportation-related context. In one embodiment, a process in a vehicle receives threat information from a remote device, the threat information based on information about objects or conditions proximate to the remote device. The process then determines that the threat information is relevant to the safe operation of the vehicle. Then, the process modifies operation of the vehicle based on the threat information, such as by presenting a message to the operator of the vehicle and/or controlling the vehicle itself. | 06-06-2013 |
20130144595 | LANGUAGE TRANSLATION BASED ON SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to automatically translate utterances from a first to a second language, based on speaker-related information determined from speaker utterances and/or other sources of information. In one embodiment, the AEFS receives data that represents an utterance of a speaker in a first language, the utterance obtained by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS then determines speaker-related information associated with the identified speaker, such as by determining demographic information (e.g., gender, language, country/region of origin) and/or identifying information (e.g., name or title) of the speaker. The AEFS translates the utterance in the first language into a message in a second language, based on the determined speaker-related information. The AEFS then presents the message in the second language to the user. | 06-06-2013 |
20130144603 | ENHANCED VOICE CONFERENCING WITH HISTORY - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. Some embodiments of the AEFS enhance voice conferencing by recording and presenting voice conference history information based on speaker-related information. The AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS records conference history information (e.g., a transcript) based on the determined speaker-related information. The AEFS then informs a user of the conference history information, such as by presenting a transcript of the voice conference and/or related information items on a display of a conferencing device associated with the user. | 06-06-2013 |
20130144619 | ENHANCED VOICE CONFERENCING - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. In one embodiment, the AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs a user of the speaker-related information, such as by presenting the speaker-related information on a display of a conferencing device associated with the user. | 06-06-2013 |
20130144623 | VISUAL PRESENTATION OF SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to determine and present speaker-related information based on speaker utterances. In one embodiment, the AEFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS identifies the speaker based on the received data, such as by performing speaker recognition. The AEFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs the user of the speaker-related information, such as by presenting the speaker-related information on a display of the hearing device or some other device accessible to the user. | 06-06-2013 |
20130165138 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to detecting an indication of a person within a specified proximity to at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. Additionally, systems and methods are described relating to means for detecting an indication of a person within a specified proximity to at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. | 06-27-2013 |
20130165139 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a query from a radio-frequency identification object associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting a query from a radio-frequency identification object associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. | 06-27-2013 |
20130165140 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of an inertial impact associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting an indication of an inertial impact associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. | 06-27-2013 |
20130165141 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a query from a radio-frequency identification object associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting a query from a radio-frequency identification object associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. | 06-27-2013 |
20130165148 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a mobile device location query using digital signal processing and presenting an indication of location of the mobile device at least partially based on receiving the location query. Additionally, systems and methods are described relating to means for accepting a mobile device location query using digital signal processing and means for presenting an indication of location of the mobile device at least partially based on receiving the location query. | 06-27-2013 |
20130165158 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. | 06-27-2013 |
20130165159 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a mobile device location query using digital signal processing and presenting an indication of location of the mobile device at least partially based on receiving the location query. Additionally, systems and methods are described relating to means for accepting a mobile device location query using digital signal processing and means for presenting an indication of location of the mobile device at least partially based on receiving the location query. | 06-27-2013 |
20130165160 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. | 06-27-2013 |
20130165161 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of an inertial impact associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting an indication of an inertial impact associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. | 06-27-2013 |
20130172004 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to detecting an indication of a person within a specified proximity to at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. Additionally, systems and methods are described relating to means for detecting an indication of a person within a specified proximity to at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. | 07-04-2013 |
20130303195 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of a traveled path of at least one mobile device over a specified time period; determining, using a microprocessor, a predicted location of the at least one mobile device at least partly based on receiving the indication of the traveled path over a specified time period; and presenting an indication of the predicted location of the at least one mobile device at least partially based on accepting an indication of a traveled path and determining a predicted location of the at least one mobile device. | 11-14-2013 |
20130339283 | STRING PREDICTION - In a mobile device, the text entered by users is analyzed to determine a set of responses commonly entered by users into text applications such as SMS applications in response to received messages. This set of responses is used to provide suggested responses to a user for a currently received message in a soft input panel based on the text of the currently received message. The suggested responses are provided before any characters are provided by the user. After the user provides one or more characters, the suggested responses in the soft input panel are updated. The number of suggested responses displayed to the user in the soft input panel is limited to a total confidence value to reduce user distraction and to allow for easier selection. An undo feature for inadvertent selections of suggested responses is also provided. | 12-19-2013 |
20140005941 | DYNAMIC DESTINATION NAVIGATION SYSTEM | 01-02-2014 |
20140032206 | GENERATING STRING PREDICTIONS USING CONTEXTS - In a mobile device, a context is determined for the mobile device. The context is determined based on a variety of characteristics of the mobile device environment including, for example, the current application being used, any contacts that a user of the mobile device is interacting with or having a conversation with, the current date and/or time, a current topic of the conversation, a current style of the conversation, etc. Based on a set of strings associated with the determined context and user generated text, one or more string predictions are generated for the user generated text. The string predictions may be presented to the user as suggested completions of the user generated text. | 01-30-2014 |
20140181741 | DISCREETLY DISPLAYING CONTEXTUALLY RELEVANT INFORMATION - The claimed subject matter provides a method for receiving and displaying contextually relevant information to a user. The method includes receiving automatically-updated contextually relevant information at a display device. The contextually relevant information includes information that is at least in part associated with the user. The display device then displays the contextually relevant information discreetly to the user. | 06-26-2014 |
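Several entries in the table above (phrase generation with pinned terms, 20100083103 and 20130042175; string prediction, 20130339283 and 20140032206) share one loop: filter an index of candidate phrases by the terms the user has pinned, and fall back to a placeholder query over fewer pins when too few candidates survive. A rough Python sketch of that loop; the threshold value, placeholder token, and relaxation rule (dropping the most recently pinned term) are illustrative assumptions, not details from the filings:

```python
PLACEHOLDER = "<?>"

def candidates(index, terms):
    """Phrases in the index that contain every term in `terms`."""
    return [p for p in index if all(t in p.split() for t in terms)]

def suggest(index, pinned, threshold=2):
    """Real-time expansion over pinned terms.

    If fewer than `threshold` phrases match all pinned terms, re-query
    with the last pinned term's slot replaced by a placeholder, so the
    user can still form an out-of-index phrase from the relaxed list.
    """
    phrases = candidates(index, pinned)
    if len(phrases) >= threshold or not pinned:
        return phrases
    # Relax: drop the most recent pin and mark its slot with a placeholder.
    relaxed = candidates(index, pinned[:-1])
    return [f"{p} {PLACEHOLDER}" for p in relaxed]
```

With pins `["new", "boston"]` against an index that has no "boston" phrase, the relaxed query over `["new"]` still yields suggestions, each carrying the placeholder slot the user can fill in.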
Patent application number | Description | Published |
20090287626 | MULTI-MODAL QUERY GENERATION - A multi-modal search system (and corresponding methodology) is provided. The system employs text, speech, touch and gesture input to establish a search query. Additionally, a subset of the modalities can be used to obtain search results based upon exact or approximate matches to a search query. For example, wildcards, which can either be triggered by the user or inferred by the system, can be employed in the search. | 11-19-2009 |
20090287680 | MULTI-MODAL QUERY REFINEMENT - A multi-modal search query refinement system (and corresponding methodology) is provided. In accordance with the innovation, query suggestion results represent a word palette which can be used to select strings for inclusion or exclusion from a refined set of results. The system employs text, speech, touch and gesture input to refine a set of search query results. Wildcards can be employed in the search either prompted by the user or inferred by the system. Additionally, partial knowledge supplemented by speech can be employed to refine search results. | 11-19-2009 |
20090287681 | MULTI-MODAL SEARCH WILDCARDS - A multi-modal search system (and corresponding methodology) that employs wildcards is provided. Wildcards can be employed in the search query either initiated by the user or inferred by the system. These wildcards can represent uncertainty conveyed by a user in a multi-modal search query input. In examples, the words “something” or “whatchamacallit” can be used to convey uncertainty and partial knowledge about portions of the query and to dynamically trigger wildcard generation. | 11-19-2009 |
20100131275 | FACILITATING MULTIMODAL INTERACTION WITH GRAMMAR-BASED SPEECH APPLICATIONS - Multimodal interaction with grammar-based speech applications may be facilitated with a device by presenting permissible phrases that are in-grammar based on acceptable terms that are in-vocabulary and that have been recognized from a spoken utterance. In an example embodiment, a spoken utterance having two or more terms is received. The two or more terms include one or more acceptable terms. An index is searched using the acceptable terms as query terms. From the searching of the index, permissible phrase(s) are produced that include the acceptable terms. The index is a searchable data structure that represents multiple possible grammar paths that are ascertainable based on acceptable values for each term position of a grammar-based speech application. The permissible phrase(s) are presented to a user as option(s) that may be selected to conduct multimodal interaction with the device. | 05-27-2010 |
20120264516 | TEXT ENTRY BY TRAINING TOUCH MODELS - Embodiments present a game in which an ordered plurality of characters is presented for entry by a user with a touch screen, a physical keyboard, or other key input layout. The game advances to each successive character when the user presses the intended character or a character adjacent thereto. Contact areas are determined for each press, and in some embodiments the contact areas are overlaid on the keyboard. The contact areas are used to adjust user-specific touch models to improve text entry by the user. In some embodiments, the contact areas indicate areas for improvement by the user. Game completion statistics are calculated including speed and accuracy. | 10-18-2012 |
20130198115 | CLUSTERING CROWDSOURCED DATA TO CREATE AND APPLY DATA INPUT MODELS - The collection and clustering of data input characteristics from a plurality of computing devices is provided. The clustered data input characteristics define user groups to which users are assigned. Input models such as language models and touch models are created for, and distributed to, each of the user groups based on the data input characteristics of the users assigned thereto. For example, an input model may be selected for a computing device based on a current context of the computing device. The selected input model is applied to the computing device during the current context to alter the interpretation of input received from the user via the computing device. | 08-01-2013 |
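The multi-modal search entries in the table above (20090287626, 20090287680, 20090287681) hinge on mapping spoken uncertainty markers such as "something" or "whatchamacallit" to wildcards before matching a query against an index. A hedged sketch of that substitution using regular expressions; the marker list and the choice to expand each marker to a single-word wildcard are assumptions, not the claimed method:

```python
import re

# Spoken markers of uncertainty that trigger wildcard generation
# (illustrative list; the filings describe these as examples).
UNCERTAINTY_MARKERS = {"something", "whatchamacallit", "whatsit"}

def to_pattern(query):
    """Compile a spoken query into a regex, replacing uncertainty
    markers with a single-word wildcard."""
    parts = []
    for term in query.lower().split():
        if term in UNCERTAINTY_MARKERS:
            parts.append(r"\S+")          # wildcard: any single word
        else:
            parts.append(re.escape(term)) # literal term
    return re.compile(r"^" + r"\s+".join(parts) + r"$")

def wildcard_search(index, query):
    """Return index entries matching the wildcard-expanded query."""
    pattern = to_pattern(query)
    return [entry for entry in index if pattern.match(entry.lower())]
```

So the utterance "harry potter and the something of fire" compiles to a pattern with a wildcard in the "something" slot and retrieves the intended title even though the user could not recall one word of it.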