Patent application number | Description | Published |
20090062679 | CATEGORIZING PERCEPTUAL STIMULI BY DETECTING SUBCONSCIOUS RESPONSES - A perceptual stimulus categorization technique is presented which identifies the stimuli category of a perceptual stimulus that has been presented to a person whose brain activity is being monitored. This is generally accomplished by first training a detection module, using brain activity information, to recognize the part of the brain activity generated in response to the presentation of a stimulus belonging to each of one or more stimuli categories. Once the detection module is trained, a subsequent instance of a stimulus belonging to a trained stimuli category being presented to the person is detected, and this detection is used to identify the trained stimuli category to which the presented stimulus belongs. | 03-05-2009 |
20090326406 | WEARABLE ELECTROMYOGRAPHY-BASED CONTROLLERS FOR HUMAN-COMPUTER INTERFACE - A “Wearable Electromyography-Based Controller” includes a plurality of Electromyography (EMG) sensors and provides a wired or wireless human-computer interface (HCI) for interacting with computing systems and attached devices via electrical signals generated by specific movement of the user's muscles. Following initial automated self-calibration and positional localization processes, measurement and interpretation of muscle-generated electrical signals is accomplished by sampling signals from the EMG sensors of the Wearable Electromyography-Based Controller. In operation, the Wearable Electromyography-Based Controller is donned by the user and placed into a coarsely approximate position on the surface of the user's skin. Automated cues or instructions are then provided to the user for fine-tuning placement of the Wearable Electromyography-Based Controller. Examples of Wearable Electromyography-Based Controllers include articles of manufacture, such as an armband, wristwatch, or article of clothing having a plurality of integrated EMG-based sensor nodes and associated electronics. | 12-31-2009 |
20090327171 | RECOGNIZING GESTURES FROM FOREARM EMG SIGNALS - A machine learning model is trained by instructing a user to perform prescribed gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model. | 12-31-2009 |
20100241596 | INTERACTIVE VISUALIZATION FOR GENERATING ENSEMBLE CLASSIFIERS - A real-time visual feedback ensemble classifier generator and method for interactively generating an optimal ensemble classifier using a user interface. Embodiments of the real-time visual feedback ensemble classifier generator and method use a weight adjustment operation and a partitioning operation in the interactive generation process. In addition, the generator and method include a user interface that provides real-time visual feedback to a user so that the user can see how the weight adjustment and partitioning operations affect the overall accuracy of the ensemble classifier. Using the user interface and the interactive controls available on the user interface, a user can iteratively use one or both of the weight adjustment operation and partitioning operation to generate an optimized ensemble classifier. | 09-23-2010 |
20100244767 | MAGNETIC INDUCTIVE CHARGING WITH LOW FAR FIELDS - A charging station wirelessly transmits power to mobile electronic devices (MEDs) each having a planar-shaped receiver coil (RC) and a capacitor connected in parallel across the RC. The station includes a planar charging surface, a number of series-interconnected bank A source coils (SCs), a number of series-interconnected bank B SCs, and electronics for energizing the SCs. Each SC generates a flux field perpendicular to the charging surface. The bank A and bank B SCs are interleaved and alternately energized in a repeating duty cycle. The coils in each bank are also alternately wound in a different direction so that the fields cancel each other out in a far-field environment. Whenever an MED is placed in close proximity to the charging surface, the fields wirelessly induce power in the RC. The MEDs can have any two-dimensional orientation with respect to the charging surface. | 09-30-2010 |
20120188158 | WEARABLE ELECTROMYOGRAPHY-BASED HUMAN-COMPUTER INTERFACE - A “Wearable Electromyography-Based Controller” includes a plurality of Electromyography (EMG) sensors and provides a wired or wireless human-computer interface (HCI) for interacting with computing systems and attached devices via electrical signals generated by specific movement of the user's muscles. Following initial automated self-calibration and positional localization processes, measurement and interpretation of muscle-generated electrical signals is accomplished by sampling signals from the EMG sensors of the Wearable Electromyography-Based Controller. In operation, the Wearable Electromyography-Based Controller is donned by the user and placed into a coarsely approximate position on the surface of the user's skin. Automated cues or instructions are then provided to the user for fine-tuning placement of the Wearable Electromyography-Based Controller. Examples of Wearable Electromyography-Based Controllers include articles of manufacture, such as an armband, wristwatch, or article of clothing having a plurality of integrated EMG-based sensor nodes and associated electronics. | 07-26-2012 |
20130232095 | RECOGNIZING FINGER GESTURES FROM FOREARM EMG SIGNALS - A machine learning model is trained by instructing a user to perform various predefined gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model. | 09-05-2013 |
20140128994 | LOGICAL SENSOR SERVER FOR LOGICAL SENSOR PLATFORMS - A “Logical Sensor Server” or “LSS” acts as a smart hub between related or unrelated sensors, devices, or other systems by translating, morphing, or forwarding signals or events published by various input sources into signals or higher-order events that can be consumed or used by other subscribing sensors, devices, or systems. More specifically, the LSS acts alone or in combination with a Logical Sensor Platform (LSP) to enable various techniques that allow messages received from different input sources to be authored, transformed and made available to one or more subscribers in a manner that allows intelligent event-driven behavior to emerge from a collection of relatively simple input sources. Any combination of automatic configuration or user input is used to define the format of transformed inputs to be received by particular subscribers relative to one or more publications. Subscribers receiving transformed events control their own actions based on those events. | 05-08-2014 |
20140129162 | BATTERY WITH COMPUTING, SENSING AND COMMUNICATION CAPABILITIES - Electrical battery apparatus embodiments are presented that generally involve incorporating sensing, computing, and communication capabilities into the one common component that a vast number of electronic devices employ—namely batteries. By integrating these capabilities into disposable and/or rechargeable batteries, new functionality and intelligence can be provided to otherwise stand-alone devices. | 05-08-2014 |
20140129866 | AGGREGATION FRAMEWORK USING LOW-POWER ALERT SENSOR - An aggregation framework system and method that automatically configures, aggregates, disaggregates, manages, and optimizes components of a consolidated system of devices, modules, and sensors. Embodiments of the system and method include a low-power alert sensor, a data aggregator module, and an interpreter module. The low-power alert sensor is a sensor that is continuously on and continuously monitoring its environment. The low-power alert sensor acts as a watchdog and triggers other sensors to awaken them from a power-conservation state when there is a change or event that occurs in an environment. The data aggregator module manages the set of sensors within the system and aggregates sensor data obtained from the sensors. The interpreter module then translates the physical data collected by sensors into logical information. Together the data aggregator module and the interpreter module present a unified logical view of the capabilities of the sensors under their control. | 05-08-2014 |
20160089033 | DETERMINING TIMING AND CONTEXT FOR CARDIOVASCULAR MEASUREMENTS - The cardiovascular vital signs of a user are measured. One or more user activity metrics is received from one or more user activity sensors. A type of activity the user is currently engaged in is inferred from the received user activity metrics. Additional context that is associated with the inferred type of activity may also be identified. A determination is made as to whether it is time to measure the cardiovascular vital signs of the user, where this determination is based on the inferred type of activity and may also be based on the identified additional context. Whenever it is determined to be time to measure the cardiovascular vital signs of the user, this measurement is made. | 03-31-2016 |
20160089042 | WEARABLE PULSE PRESSURE WAVE SENSING DEVICE - Wearable pulse pressure wave sensing devices are presented that generally provide a non-intrusive way to measure a pulse pressure wave travelling through an artery using a wearable device. In one implementation, the device includes an array of pressure sensors disposed on a mounting structure which is attachable to a user on an area proximate to an underlying artery. Each of the pressure sensors is capable of being mechanically coupled to the skin of the user proximate to the underlying artery. In addition, there are one or more arterial location sensors disposed on the mounting structure which identify a location on the user's skin likely overlying the artery. A pulse pressure wave is then measured using the pressure sensor of the array closest to the identified location. | 03-31-2016 |
20160089081 | WEARABLE SENSING BAND - A wearable sensing band is presented that generally provides a non-intrusive way to measure a person's cardiovascular vital signs including pulse transit time and pulse wave velocity. The band includes a strap with one or more primary electrocardiography (ECG) electrodes which are in contact with a first portion of the user's body, one or more secondary ECG electrodes, and one or more pulse pressure wave arrival (PPWA) sensors. The primary and secondary ECG electrodes detect an ECG signal whenever the secondary ECG electrodes make electrical contact with a second portion of the user's body, and the PPWA sensors sense an arrival of a pulse pressure wave to the first portion of the user's body from the user's heart. The ECG signal and PPWA sensor(s) readings are used to compute at least one of a pulse transit time (PTT) or a pulse wave velocity (PWV) of the user. | 03-31-2016 |
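Several entries above (e.g., applications 20090327171 and 20130232095) describe the same train-then-classify pipeline for EMG gesture recognition: sample sensor signals, extract labeled feature samples, train a model, then classify unlabeled features of the same type. The sketch below illustrates that shape only; the synthetic signal generator, RMS features, and nearest-centroid model are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of the EMG train-then-classify pipeline described
# in the abstracts above. All signal data is synthetic.
import math
import random

def rms_features(window):
    """Root-mean-square amplitude per EMG channel: a simple, common
    feature for muscle-activity signals (an assumed feature choice)."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

class NearestCentroidClassifier:
    """Minimal stand-in for the machine learning model: stores the mean
    feature vector per gesture label, classifies by closest centroid."""
    def fit(self, features, labels):
        sums = {}
        for f, y in zip(features, labels):
            acc, n = sums.setdefault(y, ([0.0] * len(f), 0))
            sums[y] = ([a + v for a, v in zip(acc, f)], n + 1)
        self.centroids = {y: [a / n for a in acc]
                          for y, (acc, n) in sums.items()}
        return self

    def predict(self, f):
        return min(self.centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(self.centroids[y], f)))

def synthetic_window(amplitude, n_channels=4, n_samples=64, seed=0):
    """Stand-in for sampled forearm EMG: noise whose magnitude scales
    with muscle activation."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, amplitude) for _ in range(n_samples)]
            for _ in range(n_channels)]

# Training: instruct gestures, sample windows, extract and label features.
train_X = [rms_features(synthetic_window(0.2, seed=i)) for i in range(20)]
train_y = ["rest"] * 20
train_X += [rms_features(synthetic_window(1.5, seed=100 + i)) for i in range(20)]
train_y += ["fist"] * 20
model = NearestCentroidClassifier().fit(train_X, train_y)

# Recognition: extract the same feature type from a new, unlabeled window.
print(model.predict(rms_features(synthetic_window(1.4, seed=999))))
```

Note the detail the abstracts emphasize: recognition extracts feature samples "of a same type as those extracted during the training", which is why `rms_features` is reused verbatim on the unlabeled window.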
Patent application number | Description | Published |
20080250012 | IN SITU SEARCH FOR ACTIVE NOTE TAKING - A system and method that facilitates and effectuates in situ search for active note taking. The system and method includes receiving gestures from a stylus and a tablet associated with the system. Upon recognizing the gesture as belonging to a set of known and recognized gestures, the system creates an embeddable object, initiates a search with terms indicated by the gesture, associates the search results with the created object and inserts the object in close proximity with the terms that instigated the search. | 10-09-2008 |
20090098937 | ADAPTIVE TREE VISUALIZATION FOR TOURNAMENT-STYLE BRACKETS - An adaptive tree visualization system and method for adaptively deforming a traditional bracket tree to visualize information about competitors in a linear manner. A one-dimensional result line emanates from the name of each competitor such that the progress of each competitor can be immediately determined by examining the length of the competitor's result line. The result line typically is composed of multiple result line segments. Each line segment spans a particular time period column to indicate that the competitor is matched up with another competitor during that time period. A pending result line segment spans the adjacent time period to indicate that the results of the match-up are unknown. Once the result of the match-up is known, the pending result line is added to the result line segment of the winning competitor. This extends the winner's result line into the next time period while the loser's result line remains unchanged. | 04-16-2009 |
20090137924 | METHOD AND SYSTEM FOR MESHING HUMAN AND COMPUTER COMPETENCIES FOR OBJECT CATEGORIZATION - The subject disclosure relates to a method and system for visual object categorization. The method and system include receiving human inputs including data corresponding to passive human-brain responses to visualization of images. Computer inputs are also received which include data corresponding to outputs from a computerized vision-based processing of the images. The human and computer inputs are processed so as to yield a categorization for the images as a function of the human and computer inputs. | 05-28-2009 |
20090154795 | INTERACTIVE CONCEPT LEARNING IN IMAGE SEARCH - An interactive concept learning image search technique that allows end-users to quickly create their own rules for re-ranking images based on the image characteristics of the images. The image characteristics can include visual characteristics as well as semantic features or characteristics, or may include a combination of both. End-users can then rank or re-rank any current or future image search results according to their rule or rules. End-users provide examples of images each rule should match and examples of images the rule should reject. The technique learns the common image characteristics of the examples, and any current or future image search results can then be ranked or re-ranked according to the learned rules. | 06-18-2009 |
20090170584 | INTERACTIVE SCENARIO EXPLORATION FOR TOURNAMENT-STYLE GAMING - A tournament-style gaming scenario exploration system and method for interactively exploring current and future scenarios of a tournament and associated pick'em pool. The system and method include a prediction module (including a game constraint sub-module), and a key event detection module. Embodiments of the prediction module include a binary integer that represents tournament outcomes. The prediction module generates predictions of tournament outcomes using an exhaustive or a sampling technique. The sampling technique includes random sampling, where the tournament bracket is randomly sampled, and a weighted sampling technique, which samples portions of the tournament bracket more densely than other areas. Embodiments of the game constraint sub-module allow real-world results constraints and user-supplied constraints to be imposed on the tournament outcomes. Embodiments of the key event detection module identify key games in the tournament that affect a user's placement in the pick'em pool, a competitor's placement in the tournament standings, or both. | 07-02-2009 |
20100302137 | Touch Sensitive Display Apparatus using sensor input - Described herein is a system that includes a receiver component that receives gesture data from a sensor unit that is coupled to a body of a gloveless user, wherein the gesture data is indicative of a bodily gesture of the user, wherein the bodily gesture comprises movement pertaining to at least one limb of the gloveless user. The system further includes a location determiner component that determines location of the bodily gesture with respect to a touch-sensitive display apparatus. The system also includes a display component that causes the touch-sensitive display apparatus to display an image based at least in part upon the received gesture data and the determined location of the bodily gesture with respect to the touch-sensitive display apparatus. | 12-02-2010 |
20110133934 | Sensing Mechanical Energy to Appropriate the Body for Data Input - Described is using the human body as an input mechanism to a computing device. A sensor set is coupled to part of a human body. The sensor set detects mechanical (e.g., bio-acoustic) energy transmitted through the body as a result of an action performed by the body, such as a user finger tap or flick. The sensor output data (e.g., signals) are processed to determine what action was taken. For example, the gesture may be a finger tap, and the output data may indicate which finger was tapped, what surface the finger was tapped on, or where on the body the finger was tapped. | 06-09-2011 |
20110251980 | Interactive Optimization of the Behavior of a System - An interactive tool is described for modifying the behavior of a system, such as, but not limited to, the behavior of a classification system. The tool uses an interface mechanism to present a current global state of the system. The tool accepts one or more refinements to this global state, e.g., by accepting individual changes to parameter settings that are presented by the interface mechanism. Based on this input, the tool computes and displays the global implications of the updated parameter settings. The process of iterating over one or more cycles of user updates, followed by computation and display of the implications of the attempted refinements, has the effect of advancing the system towards a global state that exhibits desirable behavior. | 10-13-2011 |
20110264484 | ACTIVITY-CENTRIC GRANULAR APPLICATION FUNCTIONALITY - A system that can enable the atomization of application functionality in connection with an activity-centric system is provided. The system can be utilized as a programmatic tool that decomposes an application's constituent functionality into atoms thereafter monitoring and aggregating atoms with respect to a particular activity. In doing so, the functionality of the system can be scaled based upon complexity and needs of the activity. Additionally, the system can be employed to monetize the atoms or activity capabilities based upon respective use. | 10-27-2011 |
20110295392 | DETECTING REACTIONS AND PROVIDING FEEDBACK TO AN INTERACTION - Reaction information of participants to an interaction may be sensed and analyzed to determine one or more reactions or dispositions of the participants. Feedback may be provided based on the determined reactions. The participants may be given an opportunity to opt in to having their reaction information collected, and may be provided complete control over how their reaction information is shared or used. | 12-01-2011 |
20110307422 | EXPLORING DATA USING MULTIPLE MACHINE-LEARNING MODELS - A multiple model data exploration system and method for running multiple machine-learning models simultaneously to understand and explore data. Embodiments of the system and method allow a user to gain a greater understanding of the data and to gain new insights into their data. Embodiments of the system and method also allow a user to interactively explore the problem and to navigate different views of data. Many different classifier training and evaluation experiments are run simultaneously and results are obtained. The results are aggregated and visualized across each of the experiments to determine and understand how each example is classified for each different classifier. These results then are summarized in a variety of ways to allow users to obtain a greater understanding of the data both in terms of the individual examples themselves and features associated with the data. | 12-15-2011 |
20120162057 | SENSING USER INPUT USING THE BODY AS AN ANTENNA - A human input system is described herein that provides an interaction modality that utilizes the human body as an antenna to receive electromagnetic noise that exists in various environments. By observing the properties of the noise picked up by the body, the system can infer human input on and around existing surfaces and objects. Home power lines have been shown to be a relatively good transmitting antenna that creates a particularly noisy environment. The human input system leverages the body as a receiving antenna and electromagnetic noise modulation for gestural interaction. It is possible to robustly recognize touched locations on an uninstrumented home wall using no specialized sensors. The receiving device for which the human body is the antenna can be built into common, widely available electronics, such as mobile phones or other devices the user is likely to commonly carry. | 06-28-2012 |
20120183206 | INTERACTIVE CONCEPT LEARNING IN IMAGE SEARCH - An interactive concept learning image search technique that allows end-users to quickly create their own rules for re-ranking images based on the image characteristics of the images. The image characteristics can include visual characteristics as well as semantic features or characteristics, or may include a combination of both. End-users can then rank or re-rank any current or future image search results according to their rule or rules. End-users provide examples of images each rule should match and examples of images the rule should reject. The technique learns the common image characteristics of the examples, and any current or future image search results can then be ranked or re-ranked according to the learned rules. | 07-19-2012 |
20120197876 | AUTOMATIC GENERATION OF AN EXECUTIVE SUMMARY FOR A MEDICAL EVENT IN AN ELECTRONIC MEDICAL RECORD - Described herein are technologies pertaining to automatic generation of an executive summary (explanation) of a medical event in an electronic medical record (EMR) of a patient. A medical event in the EMR is automatically identified, and a search is conducted over a document corpus based upon the identified medical event. A document retrieved as a result of the search is analyzed for a portion of text to act as an executive summary for the medical event. Each portion of text in the document is assigned a score, and the portion of text assigned the highest score is utilized as the executive summary for the medical event. | 08-02-2012 |
20130082978 | OMNI-SPATIAL GESTURE INPUT - Embodiments of the present invention relate to systems, methods and computer storage media for detecting user input in an extended interaction space of a device, such as a handheld device. The method and system allow for utilizing a first sensor of the device sensing in a positive z-axis space of the device to detect a first input, such as a user's non-device-contacting gesture. The method and system also contemplate utilizing a second sensor of the device sensing in a negative z-axis space of the device to detect a second input. Additionally, the method and system contemplate updating a user interface presented on a display in response to detecting the first input by the first sensor in the positive z-axis space and detecting the second input by the second sensor in the negative z-axis space. | 04-04-2013 |
20130086674 | Multi-frame depth image information identification - Embodiments of the present invention relate to systems, methods, and computer storage media for identifying, authenticating, and authorizing a user to a device. A dynamic image, such as a video captured by a depth camera, is received. The dynamic image provides data from which both geometric information and motion information of a portion of the user may be identified. Consequently, a geometric attribute is identified from the geometric information. A motion attribute may also be identified from the motion information. The geometric attribute is compared to one or more geometric attributes associated with authorized users. Additionally, the motion attribute may be compared to one or more motion attributes associated with the authorized users. A determination may be made that the user is an authorized user. As such the user is authorized to utilize functions of the device. | 04-04-2013 |
20130141576 | DETERMINING THREATS BASED ON INFORMATION FROM ROAD-BASED DEVICES IN A TRANSPORTATION-RELATED CONTEXT - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based on information received at a road-based device, such as a sensor or processor that is deployed at the side of a road. An example AEFS receives, at a road-based device, information about a first vehicle that is proximate to the road-based device. The AEFS analyzes the received information to determine threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130142347 | VEHICULAR THREAT DETECTION BASED ON AUDIO SIGNALS - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing audio signals. An example AEFS receives data that represents an audio signal emitted by a vehicle. The AEFS analyzes the audio signal to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130142365 | AUDIBLE ASSISTANCE - Techniques for sensory enhancement and augmentation are described. Some embodiments provide an audible assistance facilitator system (“AAFS”) configured to provide audible assistance to a user via a hearing device. In one embodiment, the AAFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media device, or the like. The AAFS identifies the speaker based on the received data, such as by performing speaker recognition. The AAFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AAFS then informs the user of the speaker-related information, such as by causing an audio representation of the speaker-related information to be output via the hearing device. | 06-06-2013 |
20130142393 | VEHICULAR THREAT DETECTION BASED ON IMAGE ANALYSIS - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing image data. An example AEFS receives data that represents an image of a vehicle. The AEFS analyzes the received data to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130144490 | PRESENTATION OF SHARED THREAT INFORMATION IN A TRANSPORTATION-RELATED CONTEXT - Techniques for ability enhancement are described. In some embodiments, devices and systems located in a transportation network share threat information with one another, in order to enhance a user's ability to operate or function in a transportation-related context. In one embodiment, a process in a vehicle receives threat information from a remote device, the threat information based on information about objects or conditions proximate to the remote device. The process then determines that the threat information is relevant to the safe operation of the vehicle. Then, the process modifies operation of the vehicle based on the threat information, such as by presenting a message to the operator of the vehicle and/or controlling the vehicle itself. | 06-06-2013 |
20130144595 | LANGUAGE TRANSLATION BASED ON SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to automatically translate utterances from a first to a second language, based on speaker-related information determined from speaker utterances and/or other sources of information. In one embodiment, the AEFS receives data that represents an utterance of a speaker in a first language, the utterance obtained by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS then determines speaker-related information associated with the identified speaker, such as by determining demographic information (e.g., gender, language, country/region of origin) and/or identifying information (e.g., name or title) of the speaker. The AEFS translates the utterance in the first language into a message in a second language, based on the determined speaker-related information. The AEFS then presents the message in the second language to the user. | 06-06-2013 |
20130144603 | ENHANCED VOICE CONFERENCING WITH HISTORY - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. Some embodiments of the AEFS enhance voice conferencing by recording and presenting voice conference history information based on speaker-related information. The AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS records conference history information (e.g., a transcript) based on the determined speaker-related information. The AEFS then informs a user of the conference history information, such as by presenting a transcript of the voice conference and/or related information items on a display of a conferencing device associated with the user. | 06-06-2013 |
20130144619 | ENHANCED VOICE CONFERENCING - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. In one embodiment, the AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs a user of the speaker-related information, such as by presenting the speaker-related information on a display of a conferencing device associated with the user. | 06-06-2013 |
20130144623 | VISUAL PRESENTATION OF SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to determine and present speaker-related information based on speaker utterances. In one embodiment, the AEFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS identifies the speaker based on the received data, such as by performing speaker recognition. The AEFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs the user of the speaker-related information, such as by presenting the speaker-related information on a display of the hearing device or some other device accessible to the user. | 06-06-2013 |
20130154919 | USER CONTROL GESTURE DETECTION - The description relates to user control gestures. One example allows a speaker and a microphone to perform a first functionality. The example simultaneously utilizes the speaker and the microphone to perform a second functionality. The second functionality comprises capturing sound signals that originated from the speaker with the microphone and detecting Doppler shift in the sound signals. It correlates the Doppler shift with a user control gesture performed proximate to the computer and maps the user control gesture to a control function. | 06-20-2013 |
20130165138 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to detecting an indication of a person within a specified proximity to at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. Additionally, systems and methods are described relating to means for detecting an indication of a person within a specified proximity to at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. | 06-27-2013 |
20130165139 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a query from a radio-frequency identification object associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting a query from a radio-frequency identification object associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. | 06-27-2013 |
20130165140 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of an inertial impact associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting an indication of an inertial impact associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. | 06-27-2013 |
20130165141 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a query from a radio-frequency identification object associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting a query from a radio-frequency identification object associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. | 06-27-2013 |
20130165148 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a mobile device location query using digital signal processing and presenting an indication of location of the mobile device at least partially based on receiving the location query. Additionally, systems and methods are described relating to means for accepting a mobile device location query using digital signal processing and means for presenting an indication of location of the mobile device at least partially based on receiving the location query. | 06-27-2013 |
20130165158 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. | 06-27-2013 |
20130165159 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a mobile device location query using digital signal processing and presenting an indication of location of the mobile device at least partially based on receiving the location query. Additionally, systems and methods are described relating to means for accepting a mobile device location query using digital signal processing and means for presenting an indication of location of the mobile device at least partially based on receiving the location query. | 06-27-2013 |
20130165160 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. | 06-27-2013 |
20130165161 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of an inertial impact associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting an indication of an inertial impact associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. | 06-27-2013 |
20130172004 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to detecting an indication of a person within a specified proximity to at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. Additionally, systems and methods are described relating to means for detecting an indication of a person within a specified proximity to at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. | 07-04-2013 |
20130181902 | SKINNABLE TOUCH DEVICE GRIP PATTERNS - Skinnable touch device grip pattern techniques are described herein. A touch-aware skin may be configured to substantially cover the outer surfaces of a computing device. The touch-aware skin may include a plurality of skin sensors configured to detect interaction with the skin at defined locations. The computing device may include one or more modules operable to obtain input from the plurality of skin sensors and decode the input to determine grip patterns that indicate how the computing device is being held by a user. Various functionality provided by the computing device may be selectively enabled and/or adapted based on a determined grip pattern such that the provided functionality may change to match the grip pattern. | 07-18-2013 |
20130215454 | THREE-DIMENSIONAL PRINTING - Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device. | 08-22-2013 |
20130303195 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of a traveled path of at least one mobile device over a specified time period; determining, using a microprocessor, a predicted location of the at least one mobile device at least partly based on receiving the indication of the traveled path over a specified time period; and presenting an indication of the predicted location of the at least one mobile device at least partially based on accepting an indication of a traveled path and determining a predicted location of the at least one mobile device. | 11-14-2013 |
20140175876 | Ad Hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: generating electrical power from at least one ambient source via at least one structurally integrated electromagnetic transducer; powering at least one transmitter via the electrical power to wirelessly transmit one or more sensor operation activation signals to one or more sensors; and at least one of powering one or more sensing operations of one or more sensors via the electrical power or charging one or more power storage devices electrically coupled to the one or more sensors via the electrical power. | 06-26-2014 |
20140176061 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: generating electrical power from at least one ambient source via at least one transducer; powering at least one transmitter via the electrical power from at least one ambient source to wirelessly transmit one or more sensor operation activation signals to one or more sensors; and at least one of powering one or more sensing operations of the one or more sensors or charging one or more power storage devices of the one or more sensors via the one or more sensor operation activation signals. | 06-26-2014 |
20140176343 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: obtaining location data associated with a portion of a region including at least one sensor; wirelessly transmitting one or more sensor operation activation signals to one or more sensors; and powering one or more sensing operations of a sensor via the one or more sensor operation activation signals. | 06-26-2014 |
20140177524 | Ad-Hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: receiving one or more wireless signals associated with a sensing capability status of at least one sensor; wirelessly transmitting one or more sensor operation activation signals to one or more sensors according to the sensing capability status of the at least one sensor; and at least one of powering one or more sensing operations of the at least one sensor and charging one or more power storage devices of the at least one sensor via the one or more sensor operation activation signals. | 06-26-2014 |
20140180628 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: receiving one or more wireless signals indicative of a presence of a sensor within a portion of a region to be monitored; storing location data associated with the portion of the region to be monitored; and wirelessly transmitting one or more sensor operation activation signals to one or more sensors according to the location data. | 06-26-2014 |
20140180630 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: receiving electrical power via at least one structurally integrated electrically conductive element; and powering one or more sensing operations of one or more sensors via the electrical power. | 06-26-2014 |
20140180639 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: wirelessly transmitting one or more sensor operation activation signals to one or more sensors according to one or more transmission authorization parameters; and powering one or more sensing operations of a sensor via the one or more sensor operation activation signals. | 06-26-2014 |
20140247277 | FOVEATED IMAGE RENDERING - A method and system for foveated image rendering are provided herein. The method includes tracking a gaze point of a user on a display device and generating a specified number of eccentricity layers based on the gaze point of the user. The method also includes antialiasing the eccentricity layers to remove artifacts, rendering a foveated image based on the eccentricity layers, and displaying the foveated image to the user via the display device. | 09-04-2014 |
20140249398 | DETERMINING PULSE TRANSIT TIME NON-INVASIVELY USING HANDHELD DEVICES - A system and method to determine pulse transit time (PTT) using a handheld device. The method includes generating an electrocardiogram (EKG) for a user of the handheld device. Two portions of the user's body are in contact with two contact points of the handheld device. The method also includes de-noising the EKG to identify a start time when a blood pulse leaves a heart of the user. The method further includes de-noising a plurality of video images of the user to identify a pressure wave indicating an arterial site and a time when the pressure wave appears. Additionally, the method includes determining the PTT based on the de-noised EKG and the de-noised video images. | 09-04-2014 |
20140327694 | Simultaneous Display of Multiple Content Items - Techniques for presenting multiple content items on a display without hardware modification. These techniques determine a first angle relative to the display at which a first content item is to be shown and a second content item is to be hidden. The techniques also determine a second angle at which the first content item is to be hidden and the second content item shown. The techniques then compute a first pair of pixel values having a contrast that is less than a threshold at the first angle and a second pair of pixel values having a contrast that is less than the threshold at the second angle. The techniques then render the content items such that the first content item is perceivable at the first angle and hidden at the second angle, while the second content item is hidden at the first angle and perceivable at the second angle. | 11-06-2014 |
20150106739 | COMMAND AUTHENTICATION - The description relates to a shared digital workspace. One example includes a display device and sensors. The sensors are configured to detect users proximate the display device and to detect that an individual user is performing an individual user command relative to the display device. The system also includes a graphical user interface configured to be presented on the display device that allows multiple detected users to simultaneously interact with the graphical user interface via user commands. | 04-16-2015 |
20150106740 | GROUP EXPERIENCE USER INTERFACE - The description relates to a shared digital workspace. One example includes a display device and sensors. The sensors are configured to detect users proximate the display device and to detect that an individual user is performing an individual user command relative to the display device. The system also includes a graphical user interface configured to be presented on the display device that allows multiple detected users to simultaneously interact with the graphical user interface via user commands. | 04-16-2015 |
20150196209 | CARDIOVASCULAR RISK FACTOR SENSING DEVICE - Various technologies described herein pertain to sensing cardiovascular risk factors of a user. A chair includes one or more sensors configured to output signals indicative of conditions at site(s) on a body of a user. A seat of the chair, a back of the chair, and/or arms of the chair can include the sensor(s). Moreover, the chair includes a collection circuit configured to receive the signals from the sensor(s). A risk factor evaluation component is configured to detect a pulse wave velocity of the user based on the signals from the sensor(s). The risk factor evaluation component is further configured to perform a pulse wave analysis of the user based on a morphology of a pulse pressure waveform of the user, and the pulse pressure waveform is detected based on the signals from the sensor(s). | 07-16-2015 |
20150199480 | CONTROLLING HEALTH SCREENING VIA ENTERTAINMENT EXPERIENCES - Various technologies described herein pertain to controlling performance of a health assessment of a user in an entertainment venue. Data in a health record of the user is accessed, where the health record is retained in computer-readable storage. The user is located at the entertainment venue, and the entertainment venue includes an attraction. A health parameter of the user to be measured as part of the health assessment performed in the entertainment venue is selected based on the data in the health record of the user. Further, an interaction between the user and the attraction of the entertainment venue is controlled based on the health parameter to be measured. Data indicative of the health parameter of the user is computed based on a signal output by a sensor. The signal is output by the sensor during the interaction between the user and the attraction of the entertainment venue. | 07-16-2015 |
20150199484 | USING SENSORS AND DEMOGRAPHIC DATA TO AUTOMATICALLY ADJUST MEDICATION DOSES - Various technologies described herein pertain to adjusting recommended dosages of a medication for a user in a non-clinical environment. The medication can be identified and an indication of a symptom of the user desirably managed by the medication can be received. An initial recommended dosage of the medication can be determined based on static data of the user and the symptom. Dynamic data indicative of efficacy of the medication for the user over time in the non-clinical environment can be collected from sensor(s) in the non-clinical environment. The dynamic data indicative of the efficacy of the medication can include data indicative of the symptom and data indicative of a side effect of the user resulting from the medication. A subsequent recommended dosage of the medication can be refined based on the static data of the user and the dynamic data indicative of the efficacy of the medication for the user. | 07-16-2015 |
20150208166 | ENHANCED SPATIAL IMPRESSION FOR HOME AUDIO - Technologies pertaining to provision of customized audio to each listener in a plurality of listeners are described herein. A sensor outputs data that is indicative of locations of multiple listeners in an environment. The data is processed to determine locations and orientations of the respective heads of the multiple listeners in the environment. Based on the locations and orientations of heads of the listeners in the environment, for each listener, respective customized audio signals are generated. The customized audio signals are transmitted to respective beamforming transducers. The beamforming transducers directionally output customized beams for each listener based upon the customized audio signals and locations of the heads of the listeners. | 07-23-2015 |
20150208184 | DYNAMIC CALIBRATION OF AN AUDIO SYSTEM - Technologies pertaining to calibration of filters of an audio system are described herein. A mobile computing device is configured to compute values for respective filters, such as equalizer filters, and transmit the values to a receiver device in the audio system. The receiver device causes audio to be emitted from a speaker based upon the values for the filters. | 07-23-2015 |
20150208233 | PRIVACY PRESERVING SENSOR APPARATUS - A privacy preserving sensor apparatus is described herein. The privacy preserving sensor apparatus includes a microphone that is configured to output a signal that is indicative of audio in an environment. The privacy preserving sensor apparatus further includes feature extraction circuitry integrated in the apparatus with the microphone, the feature extraction circuitry configured to extract features from the signal output by the microphone that are usable to detect occurrence of an event in the environment, wherein the signal output by the microphone is unable to be reconstructed based solely upon the features. | 07-23-2015 |
20150222757 | SYSTEMS AND METHODS FOR AUTOMATICALLY CONNECTING A USER OF A HANDS-FREE INTERCOMMUNICATION SYSTEM - A hands-free intercom may include a user-tracking sensor, a directional microphone, a directional sound emitter, and a communication interface. The user-tracking sensor may determine a location of a user so the directional microphone can measure vocal emissions by the user and the directional sound emitter can deliver audio to the user. The directional sound emitter may emit ultrasonic waves configured to frequency convert to produce the audio. The communication interface may be configured to identify an entity of interest with which the user wishes to interact based on gestures and/or vocal emissions by the user and may automatically communicatively couple the user to the entity of interest. The hands-free intercom may determine whether remote entities requesting to communicatively couple with the user should be allowed to couple. The hands-free intercom may detect eavesdroppers and warn the user of the detected eavesdroppers. | 08-06-2015 |
20150253424 | SYSTEMS AND METHODS FOR ULTRASONIC POSITION AND MOTION DETECTION - The present disclosure provides systems and methods associated with determining position and/or movement information using ultrasound. A system may include one or more ultrasonic transmitters and/or receivers. An ultrasonic transmitter may be configured to transmit ultrasound into a region bounded by one or more surfaces. The ultrasonic receiver may receive direct ultrasonic reflections and/or rebounded ultrasonic reflections from one or more objects within the region. A mapping or positioning system may generate positional data associated with one or more of the object(s) based on the direct ultrasonic reflection(s) and/or the rebounded ultrasonic reflection(s). The mapping or positioning system may generate enhanced positional data by combining the direct positional data and the rebounded positional data. | 09-10-2015 |
20150302158 | VIDEO-BASED PULSE MEASUREMENT - Aspects of the subject disclosure are directed towards a video-based pulse/heart rate system that may use motion data to reduce or eliminate the effects of motion on pulse detection. Signal quality may be computed from (e.g., transformed) video signal data, such as by providing video signal feature data to a trained classifier that provides a measure of the quality of pulse information in each signal. Based upon the signal quality data, corresponding waveforms may be processed to select one for extracting pulse information therefrom. Heart rate data may be computed from the extracted pulse information, which may be smoothed into a heart rate value for a time window based upon confidence and/or prior heart rate data. | 10-22-2015 |
20150304470 | SYSTEMS AND METHODS FOR AUTOMATICALLY CONNECTING A USER OF A HANDS-FREE INTERCOMMUNICATION SYSTEM - A hands-free intercom may include a user-tracking sensor, a directional microphone, a directional sound emitter, and a communication interface. The user-tracking sensor may determine a location of a user so the directional microphone can measure vocal emissions by the user and the directional sound emitter can deliver audio to the user. The hands-free intercom may determine whether the user is communicatively coupled via a mobile device to a remote entity. The hands-free intercom may be configured to receive a handoff of the communicative coupling, for example, by acting as a peripheral of the mobile device, by requesting the handoff, and/or the like. The hands-free intercom may be configured to deliver communications from the user to an appliance and vice versa. The hands-free intercom may manage access rights of the various entities to prevent unauthorized communications. | 10-22-2015 |
20150331102 | SYSTEMS AND METHODS FOR ULTRASONIC VELOCITY AND ACCELERATION DETECTION - The present disclosure provides systems and methods associated with determining velocity and/or acceleration information using ultrasound. A system may include one or more ultrasonic transmitters and/or receivers. An ultrasonic transmitter may be configured to transmit ultrasound into a region bounded by one or more surfaces. The ultrasonic receiver may detect a Doppler shift of reflected ultrasound to determine an acceleration and/or velocity associated with an object. The velocity and/or acceleration information may be utilized to modify the state of a gaming system, entertainment system, infotainment system, and/or other device. The velocity and/or acceleration data may be used in combination with a mapping or positioning system that generates positional data associated with the objects. | 11-19-2015 |
20150334346 | SYSTEMS AND METHODS FOR AUTOMATICALLY CONNECTING A USER OF A HANDS-FREE INTERCOMMUNICATION SYSTEM - A hands-free intercom may include a user-tracking sensor, a directional microphone, a directional sound emitter, and a communication interface. The user-tracking sensor may determine a location of a user so the directional microphone can measure vocal emissions by the user and the directional sound emitter can deliver audio to the user. The hands-free intercom may include a directional camera to capture video of the user and/or a directional video projector to deliver video to the user. The captured video may be provided to a remote entity and/or the delivered video may be received from the remote entity. The captured video may be used to identify vocal commands, gestures, facial expressions, and/or eye movements from the user. The projected video may be used to provide status information to the user. | 11-19-2015 |
20150336578 | ABILITY ENHANCEMENT - Techniques for ability enhancement are described. In some embodiments, devices and systems located in a transportation network share threat information with one another, in order to enhance a user's ability to operate or function in a transportation-related context. In one embodiment, a process in a vehicle receives threat information from a remote device, the threat information based on information about objects or conditions proximate to the remote device. The process then determines that the threat information is relevant to the safe operation of the vehicle. Then, the process modifies operation of the vehicle based on the threat information, such as by presenting a message to the operator of the vehicle and/or controlling the vehicle itself. | 11-26-2015 |
20160049052 | SYSTEMS AND METHODS FOR POSITIONING A USER OF A HANDS-FREE INTERCOMMUNICATION SYSTEM - A hands-free intercom may include a user-tracking sensor, a directional microphone, a directional sound emitter, a display device, and/or a communication interface. The user-tracking sensor may determine a location of a user so the directional microphone can measure vocal emissions by the user and the directional sound emitter can deliver audio to the user. The hands-free intercom may induce the user to move to a desired location and/or to stay within a connectivity area. The hands-free intercom may also or instead induce the user to face in a desired orientation. The directional sound emitter and/or the display device may induce the user by explicitly indicating the desired location, by adjusting an apparent source of the audio or video, by changing quality of delivered audio or video based on user position, by producing irritating audio or video, and/or the like. | 02-18-2016 |
20160093051 | SYSTEMS AND METHODS FOR A DUAL MODALITY SENSOR SYSTEM - The present disclosure provides systems and methods for using two imaging modalities for imaging an object at two different resolutions. For example, the system may utilize a first modality (e.g., ultrasound or electromagnetic radiation) to generate image data at a first resolution. The system may then utilize the other modality to generate image data of portions of interest at a second resolution that is higher than the first resolution. In another embodiment, one imaging modality may be used to resolve an ambiguity, such as ghost images, in image data generated using another imaging modality. | 03-31-2016 |
20160118036 | SYSTEMS AND METHODS FOR POSITIONING A USER OF A HANDS-FREE INTERCOMMUNICATION SYSTEM - A hands-free intercom may include a user-tracking sensor, a directional microphone, a directional sound emitter, a display device, and/or a communication interface. The user-tracking sensor may determine a location of a user so the directional microphone can measure vocal emissions by the user and the directional sound emitter can deliver audio to the user. The hands-free intercom may provide privacy to the user. The hands-free intercom may prevent an eavesdropper from hearing the user's vocal emissions, for example, by canceling the vocal emissions at the eavesdropper's ear. The directional sound emitter may deliver out-of-phase sound to cancel the vocal emissions. The hands-free intercom may also, or instead, cancel ambient noise at the user's ear. The hands-free intercom may measure or predict a filtration of the sound to be canceled and compensate for the filtration when canceling the sound. | 04-28-2016 |
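The Doppler-based gesture detection summarized in 20130154919 (USER CONTROL GESTURE DETECTION) above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the function names, the bin-offset search, and the push/pull mapping are not from the patent itself.

```python
def doppler_shift_bin(spectrum, pilot_bin, guard=1):
    """Given a magnitude spectrum and the bin of the emitted pilot tone,
    return the offset (in bins) of the strongest energy outside a small
    guard band around the pilot -- positive suggests motion toward the mic."""
    best_off, best_mag = 0, 0.0
    for i, mag in enumerate(spectrum):
        off = i - pilot_bin
        if abs(off) <= guard:
            continue  # ignore the pilot tone itself
        if mag > best_mag:
            best_mag, best_off = mag, off
    return best_off

def gesture_from_shift(offset):
    # Map the sign of the Doppler offset to a coarse control gesture.
    if offset > 0:
        return "push"   # hand moving toward the device
    if offset < 0:
        return "pull"   # hand moving away
    return "none"
```

A real system would compute the spectrum from microphone samples around the emitted tone's frequency and then map the resulting gesture to a control function.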
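The grip-pattern decoding of 20130181902 (SKINNABLE TOUCH DEVICE GRIP PATTERNS) above might look, in grossly simplified form, like a mapping from contacted skin-sensor regions to a grip label. The region names and labels below are invented for illustration and are not the patent's decoding scheme.

```python
def grip_pattern(active_sensors):
    """Toy decoder mapping which skin-sensor regions report contact to a
    coarse grip classification (region names are illustrative only)."""
    left = "left-edge" in active_sensors
    right = "right-edge" in active_sensors
    back = "back" in active_sensors
    if left and right and back:
        return "two-handed"
    if (left or right) and back:
        return "one-handed"
    if left or right:
        return "edge-pinch"
    return "not-held"
```

The returned label could then gate which functionality the device enables, as the abstract describes.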
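The traveled-path prediction in 20130303195 (Computational Systems and Methods for Locating a Mobile Device) above can be sketched as a linear extrapolation from the last two position fixes. This is a generic stand-in, not the claimed method.

```python
def predict_location(path, dt_ahead):
    """Linear extrapolation: predict position dt_ahead seconds past the
    last fix, using the velocity between the final two (t, x, y) fixes."""
    (t0, x0, y0), (t1, x1, y1) = path[-2], path[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)
```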
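The eccentricity layers of 20140247277 (FOVEATED IMAGE RENDERING) above can be illustrated by assigning each pixel to a layer by its distance from the tracked gaze point. The layer radii and the nested-disc assignment rule below are illustrative assumptions, not the patent's layer construction.

```python
def eccentricity_layer(pixel, gaze, layer_radii):
    """Assign a pixel to the innermost eccentricity layer whose radius
    (in pixels, centered on the gaze point) contains it; pixels beyond
    the last radius fall in the coarsest, outermost layer."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    ecc = (dx * dx + dy * dy) ** 0.5
    for layer, radius in enumerate(layer_radii):
        if ecc <= radius:
            return layer
    return len(layer_radii)  # outermost (lowest-resolution) layer
```

Inner layers would be rendered at full resolution and outer layers at progressively coarser resolution before antialiasing and compositing.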
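The pulse-transit-time computation in 20140249398 (DETERMINING PULSE TRANSIT TIME NON-INVASIVELY USING HANDHELD DEVICES) above reduces, at its core, to the delay between the EKG R-peak and the pressure wave's arrival in the video. A minimal sketch, assuming both timestamps are in seconds on a common clock; the de-noising steps the abstract describes are not modeled.

```python
def pulse_transit_time(ekg_r_peak_s, pulse_arrival_s):
    """PTT is the delay between the R-peak on the EKG (blood leaves the
    heart) and the pressure wave's arrival at the arterial site seen in
    the video -- here simply their time difference in seconds."""
    ptt = pulse_arrival_s - ekg_r_peak_s
    if ptt <= 0:
        raise ValueError("pulse must arrive after the R-peak")
    return ptt
```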
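The pulse wave velocity detected by the chair in 20150196209 (CARDIOVASCULAR RISK FACTOR SENSING DEVICE) above is, in its simplest form, arterial path length divided by transit time between two sensing sites. A minimal sketch under that standard definition; the patent's signal collection and waveform analysis are not modeled.

```python
def pulse_wave_velocity(path_length_m, transit_time_s):
    """Pulse wave velocity between two sensing sites (e.g. a seat sensor
    and an arm sensor) is the arterial path length divided by the time
    the pressure pulse takes to travel between them, in m/s."""
    return path_length_m / transit_time_s
```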
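The beamforming in 20150208166 (ENHANCED SPATIAL IMPRESSION FOR HOME AUDIO) above relies on per-transducer delays that align wavefronts at a listener's head. A delay-and-sum sketch under assumed 2-D coordinates in meters; this illustrates the geometry only, not the patented audio-signal generation.

```python
def steering_delays(speaker_positions, head_position, speed_of_sound=343.0):
    """Per-element delays (seconds) that make all wavefronts arrive at
    the listener's head together: delay each element by the difference
    between the farthest element's distance and its own distance."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    dists = [dist(p, head_position) for p in speaker_positions]
    far = max(dists)
    return [(far - d) / speed_of_sound for d in dists]
```

The element nearest the head gets the largest delay; the farthest element gets zero, so the summed output peaks at the head location.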
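The direct-reflection ranging underlying 20150253424 (SYSTEMS AND METHODS FOR ULTRASONIC POSITION AND MOTION DETECTION) above follows from round-trip time of flight. A minimal sketch of that relation; rebounded (multi-bounce) reflections and the mapping system are not modeled.

```python
def reflection_distance(round_trip_s, speed_of_sound=343.0):
    """A direct ultrasonic reflection travels out and back, so the
    object's range is half the round-trip time times the speed of sound."""
    return speed_of_sound * round_trip_s / 2.0
```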
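For 20150302158 (VIDEO-BASED PULSE MEASUREMENT) above, a crude stand-in for the final heart-rate extraction is a zero-crossing count on a pre-filtered per-frame intensity signal. This is a generic substitute for illustration; the patent's classifier-based signal-quality scoring and waveform selection are not modeled.

```python
def heart_rate_bpm(signal, fps):
    """Estimate heart rate as the dominant oscillation frequency of a
    (pre-filtered) per-frame intensity signal: count zero crossings of
    the mean-removed signal; each full cycle contributes two crossings."""
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration_s = len(signal) / fps
    return (crossings / 2.0) / duration_s * 60.0
```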
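The Doppler relation behind 20150331102 (SYSTEMS AND METHODS FOR ULTRASONIC VELOCITY AND ACCELERATION DETECTION) above gives radial velocity from the frequency shift of the reflected ultrasound. A minimal sketch using the standard two-way small-velocity approximation; the patent's acceleration estimation and system integration are not modeled.

```python
def radial_velocity(f_emitted_hz, f_received_hz, speed_of_sound=343.0):
    """For sound reflected off a moving object, the two-way Doppler shift
    is approximately 2*v*f/c, so v = c * (f_rx - f_tx) / (2 * f_tx);
    positive means the object is approaching."""
    return speed_of_sound * (f_received_hz - f_emitted_hz) / (2.0 * f_emitted_hz)
```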