Patent application number | Description | Published |
20080250012 | IN SITU SEARCH FOR ACTIVE NOTE TAKING - A system and method that facilitates and effectuates in situ search for active note taking. The system and method include receiving gestures from a stylus and a tablet associated with the system. Upon recognizing the gesture as belonging to a set of known and recognized gestures, the system creates an embeddable object, initiates a search with terms indicated by the gesture, associates the search results with the created object, and inserts the object in close proximity to the terms that instigated the search. | 10-09-2008 |
20090098937 | ADAPTIVE TREE VISUALIZATION FOR TOURNAMENT-STYLE BRACKETS - An adaptive tree visualization system and method for adaptively deforming a traditional bracket tree to visualize information about competitors in a linear manner. A one-dimensional result line emanates from the name of each competitor such that the progress of each competitor can be immediately determined by examining the length of the competitor's result line. The result line typically is composed of multiple result line segments. Each line segment spans a particular time period column to indicate that the competitor is matched up with another competitor during that time period. A pending result line segment spans the adjacent time period to indicate that the results of the match-up are unknown. Once the result of the match-up is known, the pending result line is added to the result line segment of the winning competitor. This extends the winner's result line into the next time period while the loser's result line remains unchanged. | 04-16-2009 |
20090137924 | METHOD AND SYSTEM FOR MESHING HUMAN AND COMPUTER COMPETENCIES FOR OBJECT CATEGORIZATION - The subject disclosure relates to a method and system for visual object categorization. The method and system include receiving human inputs including data corresponding to passive human-brain responses to visualization of images. Computer inputs are also received which include data corresponding to outputs from a computerized vision-based processing of the images. The human and computer inputs are processed so as to yield a categorization for the images as a function of the human and computer inputs. | 05-28-2009 |
20090154795 | INTERACTIVE CONCEPT LEARNING IN IMAGE SEARCH - An interactive concept learning image search technique that allows end-users to quickly create their own rules for re-ranking images based on the image characteristics of the images. The image characteristics can include visual characteristics as well as semantic features or characteristics, or may include a combination of both. End-users can then rank or re-rank any current or future image search results according to their rule or rules. End-users provide examples of images each rule should match and examples of images the rule should reject. The technique learns the common image characteristics of the examples, and any current or future image search results can then be ranked or re-ranked according to the learned rules. | 06-18-2009 |
20090170584 | INTERACTIVE SCENARIO EXPLORATION FOR TOURNAMENT-STYLE GAMING - A tournament-style gaming scenario exploration system and method for interactively exploring current and future scenarios of a tournament and associated pick'em pool. The system and method include a prediction module (including a game constraint sub-module), and a key event detection module. Embodiments of the prediction module include a binary integer that represents tournament outcomes. The prediction module generates predictions of tournament outcomes using an exhaustive or a sampling technique. The sampling technique includes random sampling, where the tournament bracket is randomly sampled, and a weighted sampling technique, which samples portions of the tournament bracket more densely than other areas. Embodiments of the game constraint sub-module allow real-world results constraints and user-supplied constraints to be imposed on the tournament outcomes. Embodiments of the key event detection module identify key games in the tournament that affect a user's placement in the pick'em pool, a competitor's placement in the tournament standings, or both. | 07-02-2009 |
20100302137 | Touch Sensitive Display Apparatus Using Sensor Input - Described herein is a system that includes a receiver component that receives gesture data from a sensor unit that is coupled to a body of a gloveless user, wherein the gesture data is indicative of a bodily gesture of the user, wherein the bodily gesture comprises movement pertaining to at least one limb of the gloveless user. The system further includes a location determiner component that determines location of the bodily gesture with respect to a touch-sensitive display apparatus. The system also includes a display component that causes the touch-sensitive display apparatus to display an image based at least in part upon the received gesture data and the determined location of the bodily gesture with respect to the touch-sensitive display apparatus. | 12-02-2010 |
20110133934 | Sensing Mechanical Energy to Appropriate the Body for Data Input - Described is using the human body as an input mechanism to a computing device. A sensor set is coupled to part of a human body. The sensor set detects mechanical (e.g., bio-acoustic) energy transmitted through the body as a result of an action performed by the body, such as a user finger tap or flick. The sensor output data (e.g., signals) are processed to determine what action was taken. For example, the gesture may be a finger tap, and the output data may indicate which finger was tapped, what surface the finger was tapped on, or where on the body the finger was tapped. | 06-09-2011 |
20110251980 | Interactive Optimization of the Behavior of a System - An interactive tool is described for modifying the behavior of a system, such as, but not limited to, the behavior of a classification system. The tool uses an interface mechanism to present a current global state of the system. The tool accepts one or more refinements to this global state, e.g., by accepting individual changes to parameter settings that are presented by the interface mechanism. Based on this input, the tool computes and displays the global implications of the updated parameter settings. The process of iterating over one or more cycles of user updates, followed by computation and display of the implications of the attempted refinements, has the effect of advancing the system towards a global state that exhibits desirable behavior. | 10-13-2011 |
20110264484 | ACTIVITY-CENTRIC GRANULAR APPLICATION FUNCTIONALITY - A system that can enable the atomization of application functionality in connection with an activity-centric system is provided. The system can be utilized as a programmatic tool that decomposes an application's constituent functionality into atoms, thereafter monitoring and aggregating atoms with respect to a particular activity. In doing so, the functionality of the system can be scaled based upon the complexity and needs of the activity. Additionally, the system can be employed to monetize the atoms or activity capabilities based upon respective use. | 10-27-2011 |
20110295392 | DETECTING REACTIONS AND PROVIDING FEEDBACK TO AN INTERACTION - Reaction information of participants to an interaction may be sensed and analyzed to determine one or more reactions or dispositions of the participants. Feedback may be provided based on the determined reactions. The participants may be given an opportunity to opt in to having their reaction information collected, and may be provided complete control over how their reaction information is shared or used. | 12-01-2011 |
20110307422 | EXPLORING DATA USING MULTIPLE MACHINE-LEARNING MODELS - A multiple model data exploration system and method for running multiple machine-learning models simultaneously to understand and explore data. Embodiments of the system and method allow a user to gain a greater understanding of the data and to gain new insights into their data. Embodiments of the system and method also allow a user to interactively explore the problem and to navigate different views of data. Many different classifier training and evaluation experiments are run simultaneously and results are obtained. The results are aggregated and visualized across each of the experiments to determine and understand how each example is classified for each different classifier. These results then are summarized in a variety of ways to allow users to obtain a greater understanding of the data both in terms of the individual examples themselves and features associated with the data. | 12-15-2011 |
20120162057 | SENSING USER INPUT USING THE BODY AS AN ANTENNA - A human input system is described herein that provides an interaction modality that utilizes the human body as an antenna to receive electromagnetic noise that exists in various environments. By observing the properties of the noise picked up by the body, the system can infer human input on and around existing surfaces and objects. Home power lines have been shown to be a relatively good transmitting antenna that creates a particularly noisy environment. The human input system leverages the body as a receiving antenna and electromagnetic noise modulation for gestural interaction. It is possible to robustly recognize touched locations on an uninstrumented home wall using no specialized sensors. The receiving device for which the human body is the antenna can be built into common, widely available electronics, such as mobile phones or other devices the user is likely to commonly carry. | 06-28-2012 |
20120183206 | INTERACTIVE CONCEPT LEARNING IN IMAGE SEARCH - An interactive concept learning image search technique that allows end-users to quickly create their own rules for re-ranking images based on the image characteristics of the images. The image characteristics can include visual characteristics as well as semantic features or characteristics, or may include a combination of both. End-users can then rank or re-rank any current or future image search results according to their rule or rules. End-users provide examples of images each rule should match and examples of images the rule should reject. The technique learns the common image characteristics of the examples, and any current or future image search results can then be ranked or re-ranked according to the learned rules. | 07-19-2012 |
20120197876 | AUTOMATIC GENERATION OF AN EXECUTIVE SUMMARY FOR A MEDICAL EVENT IN AN ELECTRONIC MEDICAL RECORD - Described herein are technologies pertaining to automatic generation of an executive summary (explanation) of a medical event in an electronic medical record (EMR) of a patient. A medical event in the EMR is automatically identified, and a search is conducted over a document corpus based upon the identified medical event. A document retrieved as a result of the search is analyzed for a portion of text to act as an executive summary for the medical event. Each portion of text in the document is assigned a score, and the portion of text assigned the highest score is utilized as the executive summary for the medical event. | 08-02-2012 |
20130082978 | OMNI-SPATIAL GESTURE INPUT - Embodiments of the present invention relate to systems, methods and computer storage media for detecting user input in an extended interaction space of a device, such as a handheld device. The method and system allow for utilizing a first sensor of the device sensing in a positive z-axis space of the device to detect a first input, such as a user's non-device-contacting gesture. The method and system also contemplate utilizing a second sensor of the device sensing in a negative z-axis space of the device to detect a second input. Additionally, the method and system contemplate updating a user interface presented on a display in response to detecting the first input by the first sensor in the positive z-axis space and detecting the second input by the second sensor in the negative z-axis space. | 04-04-2013 |
20130086674 | Multi-frame depth image information identification - Embodiments of the present invention relate to systems, methods, and computer storage media for identifying, authenticating, and authorizing a user to a device. A dynamic image, such as a video captured by a depth camera, is received. The dynamic image provides data from which geometric information of a portion of a user, as well as motion information of that portion of the user, may be identified. Consequently, a geometric attribute is identified from the geometric information. A motion attribute may also be identified from the motion information. The geometric attribute is compared to one or more geometric attributes associated with authorized users. Additionally, the motion attribute may be compared to one or more motion attributes associated with the authorized users. A determination may be made that the user is an authorized user. As such, the user is authorized to utilize functions of the device. | 04-04-2013 |
20130141576 | DETERMINING THREATS BASED ON INFORMATION FROM ROAD-BASED DEVICES IN A TRANSPORTATION-RELATED CONTEXT - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based on information received at a road-based device, such as a sensor or processor that is deployed at the side of a road. An example AEFS receives, at a road-based device, information about a first vehicle that is proximate to the road-based device. The AEFS analyzes the received information to determine threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130142347 | VEHICULAR THREAT DETECTION BASED ON AUDIO SIGNALS - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing audio signals. An example AEFS receives data that represents an audio signal emitted by a vehicle. The AEFS analyzes the audio signal to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130142365 | AUDIBLE ASSISTANCE - Techniques for sensory enhancement and augmentation are described. Some embodiments provide an audible assistance facilitator system (“AAFS”) configured to provide audible assistance to a user via a hearing device. In one embodiment, the AAFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media device, or the like. The AAFS identifies the speaker based on the received data, such as by performing speaker recognition. The AAFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AAFS then informs the user of the speaker-related information, such as by causing an audio representation of the speaker-related information to be output via the hearing device. | 06-06-2013 |
20130142393 | VEHICULAR THREAT DETECTION BASED ON IMAGE ANALYSIS - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing image data. An example AEFS receives data that represents an image of a vehicle. The AEFS analyzes the received data to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user. | 06-06-2013 |
20130144490 | PRESENTATION OF SHARED THREAT INFORMATION IN A TRANSPORTATION-RELATED CONTEXT - Techniques for ability enhancement are described. In some embodiments, devices and systems located in a transportation network share threat information with one another, in order to enhance a user's ability to operate or function in a transportation-related context. In one embodiment, a process in a vehicle receives threat information from a remote device, the threat information based on information about objects or conditions proximate to the remote device. The process then determines that the threat information is relevant to the safe operation of the vehicle. Then, the process modifies operation of the vehicle based on the threat information, such as by presenting a message to the operator of the vehicle and/or controlling the vehicle itself. | 06-06-2013 |
20130144595 | LANGUAGE TRANSLATION BASED ON SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to automatically translate utterances from a first to a second language, based on speaker-related information determined from speaker utterances and/or other sources of information. In one embodiment, the AEFS receives data that represents an utterance of a speaker in a first language, the utterance obtained by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS then determines speaker-related information associated with the identified speaker, such as by determining demographic information (e.g., gender, language, country/region of origin) and/or identifying information (e.g., name or title) of the speaker. The AEFS translates the utterance in the first language into a message in a second language, based on the determined speaker-related information. The AEFS then presents the message in the second language to the user. | 06-06-2013 |
20130144603 | ENHANCED VOICE CONFERENCING WITH HISTORY - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. Some embodiments of the AEFS enhance voice conferencing by recording and presenting voice conference history information based on speaker-related information. The AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS records conference history information (e.g., a transcript) based on the determined speaker-related information. The AEFS then informs a user of the conference history information, such as by presenting a transcript of the voice conference and/or related information items on a display of a conferencing device associated with the user. | 06-06-2013 |
20130144619 | ENHANCED VOICE CONFERENCING - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. In one embodiment, the AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs a user of the speaker-related information, such as by presenting the speaker-related information on a display of a conferencing device associated with the user. | 06-06-2013 |
20130144623 | VISUAL PRESENTATION OF SPEAKER-RELATED INFORMATION - Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to determine and present speaker-related information based on speaker utterances. In one embodiment, the AEFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS identifies the speaker based on the received data, such as by performing speaker recognition. The AEFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs the user of the speaker-related information, such as by presenting the speaker-related information on a display of the hearing device or some other device accessible to the user. | 06-06-2013 |
20130154919 | USER CONTROL GESTURE DETECTION - The description relates to user control gestures. One example allows a speaker and a microphone to perform a first functionality. The example simultaneously utilizes the speaker and the microphone to perform a second functionality. The second functionality comprises capturing sound signals that originated from the speaker with the microphone and detecting Doppler shift in the sound signals. It correlates the Doppler shift with a user control gesture performed proximate to the computer and maps the user control gesture to a control function. | 06-20-2013 |
20130165138 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to detecting an indication of a person within a specified proximity to at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. Additionally, systems and methods are described relating to means for detecting an indication of a person within a specified proximity to at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. | 06-27-2013 |
20130165139 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a query from a radio-frequency identification object associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting a query from a radio-frequency identification object associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. | 06-27-2013 |
20130165140 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of an inertial impact associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting an indication of an inertial impact associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. | 06-27-2013 |
20130165141 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a query from a radio-frequency identification object associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting a query from a radio-frequency identification object associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the query response from the radio-frequency identification object associated with the at least one mobile device. | 06-27-2013 |
20130165148 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a mobile device location query using digital signal processing and presenting an indication of location of the mobile device at least partially based on receiving the location query. Additionally, systems and methods are described relating to means for accepting a mobile device location query using digital signal processing and means for presenting an indication of location of the mobile device at least partially based on receiving the location query. | 06-27-2013 |
20130165158 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. | 06-27-2013 |
20130165159 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting a mobile device location query using digital signal processing and presenting an indication of location of the mobile device at least partially based on receiving the location query. Additionally, systems and methods are described relating to means for accepting a mobile device location query using digital signal processing and means for presenting an indication of location of the mobile device at least partially based on receiving the location query. | 06-27-2013 |
20130165160 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. | 06-27-2013 |
20130165161 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of an inertial impact associated with at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. Additionally, systems and methods are described relating to means for accepting an indication of an inertial impact associated with at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on accepting the indication of the inertial impact associated with the at least one mobile device. | 06-27-2013 |
20130172004 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to detecting an indication of a person within a specified proximity to at least one mobile device; and presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. Additionally, systems and methods are described relating to means for detecting an indication of a person within a specified proximity to at least one mobile device; and means for presenting an indication of location of the at least one mobile device at least partially based on the indication of the person within the specified proximity. | 07-04-2013 |
20130181902 | SKINNABLE TOUCH DEVICE GRIP PATTERNS - Skinnable touch device grip pattern techniques are described herein. A touch-aware skin may be configured to substantially cover the outer surfaces of a computing device. The touch-aware skin may include a plurality of skin sensors configured to detect interaction with the skin at defined locations. The computing device may include one or more modules operable to obtain input from the plurality of skin sensors and decode the input to determine grip patterns that indicate how the computing device is being held by a user. Various functionality provided by the computing device may be selectively enabled and/or adapted based on a determined grip pattern such that the provided functionality may change to match the grip pattern. | 07-18-2013 |
20130215454 | THREE-DIMENSIONAL PRINTING - Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device. | 08-22-2013 |
20130303195 | Computational Systems and Methods for Locating a Mobile Device - Systems and methods are described relating to accepting an indication of a traveled path of at least one mobile device over a specified time period; determining, using a microprocessor, a predicted location of the at least one mobile device at least partly based on receiving the indication of the traveled path over a specified time period; and presenting an indication of the predicted location of the at least one mobile device at least partially based on accepting an indication of a traveled path and determining a predicted location of the at least one mobile device. | 11-14-2013 |
20140175876 | Ad Hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: generating electrical power from at least one ambient source via at least one structurally integrated electromagnetic transducer; powering at least one transmitter via the electrical power to wirelessly transmit one or more sensor operation activation signals to one or more sensors; and at least one of powering one or more sensing operations of one or more sensors via the electrical power or charging one or more power storage devices electrically coupled to the one or more sensors via the electrical power. | 06-26-2014 |
20140176061 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: generating electrical power from at least one ambient source via at least one transducer; powering at least one transmitter via the electrical power from at least one ambient source to wirelessly transmit one or more sensor operation activation signals to one or more sensors; and at least one of powering one or more sensing operations of the one or more sensors or charging one or more power storage devices of the one or more sensors via the one or more sensor operation activation signals. | 06-26-2014 |
20140176343 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: obtaining location data associated with a portion of a region including at least one sensor; wirelessly transmitting one or more sensor operation activation signals to one or more sensors; and powering one or more sensing operations of a sensor via the one or more sensor operation activation signals. | 06-26-2014 |
20140177524 | Ad-Hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: receiving one or more wireless signals associated with a sensing capability status of at least one sensor; wirelessly transmitting one or more sensor operation activation signals to one or more sensors according to the sensing capability status of the at least one sensor; and at least one of powering one or more sensing operations of the at least one sensor and charging one or more power storage devices of the at least one sensor via the one or more sensor operation activation signals. | 06-26-2014 |
20140180628 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: receiving one or more wireless signals indicative of a presence of a sensor within a portion of a region to be monitored; storing location data associated with the portion of the region to be monitored; and wirelessly transmitting one or more sensor operation activation signals to one or more sensors according to the location data. | 06-26-2014 |
20140180630 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: receiving electrical power via at least one structurally integrated electrically conductive element; and powering one or more sensing operations of one or more sensors via the electrical power. | 06-26-2014 |
20140180639 | Ad-hoc Wireless Sensor Package - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for control of transmission to a target device while communicating with one or more sensors in an ad-hoc sensor network may implement operations including, but not limited to: wirelessly transmitting one or more sensor operation activation signals to one or more sensors according to one or more transmission authorization parameters; and powering one or more sensing operations of a sensor via the one or more sensor operation activation signals. | 06-26-2014 |
20140247277 | FOVEATED IMAGE RENDERING - A method and system for foveated image rendering are provided herein. The method includes tracking a gaze point of a user on a display device and generating a specified number of eccentricity layers based on the gaze point of the user. The method also includes antialiasing the eccentricity layers to remove artifacts, rendering a foveated image based on the eccentricity layers, and displaying the foveated image to the user via the display device. | 09-04-2014 |
20140249398 | DETERMINING PULSE TRANSIT TIME NON-INVASIVELY USING HANDHELD DEVICES - A system and method to determine pulse transit time (PTT) using a handheld device. The method includes generating an electrocardiogram (EKG) for a user of the handheld device. Two portions of the user's body are in contact with two contact points of the handheld device. The method also includes de-noising the EKG to identify a start time when a blood pulse leaves a heart of the user. The method further includes de-noising a plurality of video images of the user to identify a pressure wave at an arterial site and the time when the pressure wave appears. Additionally, the method includes determining the PTT based on the de-noised EKG and the de-noised video images. | 09-04-2014 |
20140327694 | Simultaneous Display of Multiple Content Items - Techniques for presenting multiple content items on a display without hardware modification. These techniques determine a first angle relative to the display at which a first content item is to be shown and a second content item is to be hidden. The techniques also determine a second angle at which the first content item is to be hidden and the second content item shown. The techniques then compute a first pair of pixel values having a contrast that is less than a threshold at the first angle and a second pair of pixel values having a contrast that is less than the threshold at the second angle. The techniques then render the content items such that the first content item is perceivable at the first angle and hidden at the second angle, while the second content item is hidden at the first angle and perceivable at the second angle. | 11-06-2014 |