Patent application number | Description | Published |
20140114990 | BUFFER ORDERING BASED ON CONTENT ACCESS TRACKING - Embodiments are disclosed that relate to buffering portions of a digital content item in different orders for different users. For example, one disclosed embodiment provides a method of providing a buffer ordering for a digital content item. The method includes tracking content access information for each user of a plurality of users, the content access information for each user comprising information regarding how content portions of each of one or more digital content items were accessed by the user. The method further comprises providing a different buffer ordering for a first user of a selected digital content item than for a second user based upon the content access information. | 04-24-2014 |
20140115096 | RECOMMENDING CONTENT BASED ON CONTENT ACCESS TRACKING - Embodiments are disclosed that relate to generating digital content recommendations for a user based upon how the user accesses the assets of a digital content item. For example, one disclosed embodiment provides a method including receiving from a remote computing device content access information regarding an order in which content portions of a selected digital content item were accessed by the user, and, storing the content access information. The method further includes comparing the content access information for the user to content access information for other users that consumed the selected digital content item to determine other users with similar content access patterns, and sending digital content recommendations to the user based upon content consumption information for the other users. | 04-24-2014 |
20140115157 | MULTIPLE BUFFERING ORDERS FOR DIGITAL CONTENT ITEM - Various embodiments are disclosed that relate to buffering digital content items in different orders for different user experiences. For example, one disclosed embodiment provides, on a computing device, a method for providing a buffering order for a digital content item. The method includes receiving from a remote computing device a request to access a selected digital content item, the selected digital content item comprising a plurality of content portions consumable in a plurality of different orders, the plurality of different orders corresponding to a plurality of user experiences for the selected digital content item, and in response, providing a selected content provision schema selected from a plurality of content provision schemas for the selected digital content item, each content provision schema defining a buffering order of the plurality of content portions of the selected digital content item for a corresponding user experience of the selected digital content item. | 04-24-2014 |
20140149636 | INTEGRATED ARCHIVAL SYSTEM - Embodiments are disclosed for presenting a digital content item comprising a plurality of content portions. One example embodiment includes a computing device comprising a primary content storage machine, where the primary content storage machine is configured to selectively store one or more content portions of a digital content item. The computing device is configured to determine a dynamically changing content access window including one or more content portions useable to provide an above-threshold user experience based on a current access position of the digital content item. The computing device is configured to dynamically load the primary content storage machine with the content portions of the digital content item corresponding to the content access window and dynamically unload the content portions of the digital content item outside of the content access window from the primary content storage machine. | 05-29-2014 |
20140164627 | PEER-TO-PEER PERFORMANCE - Embodiments disclosed herein generally relate to improving distribution of digital content in a peer-to-peer network. For example, future snapshots of a peer-to-peer network are predicted and used to determine that a computing device may be better off waiting until a future point in time to download specific digital content. For another example, computing devices are mapped into different groups based on location information, and inter-group information is used to identify other computing devices for a computing device to send download requests for digital content. For a further example, information indicative of scarcity associated with different digital content units is used to prioritize distribution of the digital content units. For still another example, computing devices are grouped into clusters and different computing devices within the same cluster download different digital content units so that the computing devices within the same cluster collectively obtain all of the different digital content units. | 06-12-2014 |
20140171205 | PRESENTING DIGITAL CONTENT ITEM WITH TIERED FUNCTIONALITY - Acquiring an interactive digital content item including a plurality of content portions includes receiving a first set of the content portions that is less than an entirety of the content portions. A partial functionality version of the interactive digital content item is presented using the first set of content portions. A second set of the content portions is received while the partial functionality version of the interactive digital content item is presented. Functionality is added to the partial functionality version of the interactive digital content item using the second set of content portions without interrupting presentation of the partial functionality version of the interactive digital content item. | 06-19-2014 |
20140172968 | PEER-TO-PEER DOWNLOAD THROUGHPUT - Certain embodiments relate to use of aggressive peering requests, which enable a peer computing device to obtain desired digital content more quickly than typically possible in a P2P network. In certain embodiments, an aggressive peering request comprises a request that another peer computing device, to which the aggressive peering request is sent, dedicates substantially all of, or a disproportionately large amount of, its P2P resources to servicing a specific peer computing device. Other embodiments relate to identifying, based on accessed information, peer computing devices that are predicted to be available as an uninterrupted seed, and thus, can be used to increase download throughput in a P2P network. | 06-19-2014 |
20140172971 | CENTRALIZED MANAGEMENT OF A P2P NETWORK - Telemetry data from a plurality of peer computers of a peer-to-peer network is aggregated via a computer network. Each of the plurality of peer computers sends telemetry data relating to transfer of a digital content item within the peer-to-peer network. A mitigation operation that modifies transfer of a digital content item between peer computers of the peer-to-peer network is performed according to one or more health metrics of the peer-to-peer network. The one or more health metrics are derived from the telemetry data aggregated from the plurality of peer computers. | 06-19-2014 |
20140172972 | CONTENT SOURCE SELECTION IN A P2P NETWORK - Telemetry data from a plurality of peer computers of a peer-to-peer network is aggregated via a computer network. Each of the plurality of peer computers sends telemetry data related to transfer of a digital content item within the peer-to-peer network. A content-acquisition request querying for a recommended content source to provide a first digital content item is received from a first peer computer of the peer-to-peer network via the computer network. A response to the content-acquisition request is sent to the first peer computer via the computer network. The response identifies a second peer computer of the peer-to-peer network that has the first digital content item as the recommended content source. The second peer computer is selected according to a peer selection metric derived from the telemetry data aggregated from the plurality of peer computers. | 06-19-2014 |
20140173022 | MANAGED P2P NETWORK WITH CONTENT-DELIVERY NETWORK - A content-acquisition request is sent to a centralized management service computer via a computer network. The content-acquisition request may query the centralized management service computer for a recommended content source to provide a first digital content item. If a response to the content-acquisition request is received via the computer network and identifies a recommended peer computer of a peer-to-peer network as the recommended content source, a request to download the first digital content item is sent to the recommended peer computer via the computer network. If a response to the content-acquisition request is not received, a fallback request to download the first digital content item is automatically sent to a content-delivery network computer via the computer network. | 06-19-2014 |
20140173024 | CONTENT-ACQUISITION SOURCE SELECTION AND MANAGEMENT - A plurality of sources storing portions of a digital content item that includes a plurality of pieces is identified. The plurality of sources includes one or more local storage machines of a computer and one or more peer computers of a peer-to-peer network. For each piece of the plurality of pieces of the digital content item, that piece is downloaded from a source. The source is selected from the plurality of sources according to one or more download metrics. The plurality of pieces is organized for installation on the computer as the digital content item. | 06-19-2014 |
20140173070 | UPDATING OF DIGITAL CONTENT BUFFERING ORDER - Embodiments for dynamically varying a buffering order of a digital content item are disclosed. One disclosed embodiment provides a computing device configured to receive content access information for a plurality of client devices, the content access information describing consumption of a digital content item provided according to a buffering order previously sent to each client device of the plurality of client devices. The computing device is further to dynamically update the buffering order based on the content access information to produce an updated buffering order and to send the updated buffering order to one or more client devices. | 06-19-2014 |
20140221084 | DYNAMIC BUFFER - Buffering an interactive digital content item includes downloading the interactive digital content item, and beginning execution of the interactive digital content item with a buffer after enough of the interactive digital content item is downloaded to fill the buffer and before the interactive digital content item is completely downloaded. The size of the buffer is dynamically set as a function of one or more experience parameters. | 08-07-2014 |
20150058175 | REALIZING BOXED EXPERIENCE FOR DIGITAL CONTENT ACQUISITION - Example apparatus and methods concern realizing the boxed experience for digital content acquisition. Example apparatus and methods associate a digital content purchase with digital metadata. The digital content purchase may be a computer game, a console video game, a film, a television program, or an e-book. The digital metadata may describe a user-customizable physical item portrayed within the digital content purchase. Example apparatus and methods include digital metadata with a digital content purchase, and control the re-creation of a physical item from the digital metadata. Example apparatus and methods may limit the number of times the physical item may be re-created from the digital metadata, and may control the frequency with which the physical item may be re-created. | 02-26-2015 |
20150142893 | PEER-TO-PEER COMMUNICATION TO INCREASE DOWNLOAD THROUGHPUT - Certain embodiments relate to use of aggressive peering requests, which enable a peer computing device to obtain desired digital content more quickly than typically possible in a P2P network. In certain embodiments, an aggressive peering request comprises a request that another peer computing device, to which the aggressive peering request is sent, dedicates substantially all of, or a disproportionately large amount of, its P2P resources to servicing a specific peer computing device. Other embodiments relate to identifying, based on accessed information, peer computing devices that are predicted to be available as an uninterrupted seed, and thus, can be used to increase download throughput in a P2P network. | 05-21-2015 |
20160080487 | PEER-TO-PEER PERFORMANCE - Embodiments disclosed herein can be used to improve the distribution of digital content in a peer-to-peer network. In certain embodiments, computing devices are mapped into different groups based on location information, and inter-group information is collected and used to identify other computing devices to which it would be efficient and effective for a computing device to send download requests for digital content. In certain embodiments, computing devices are grouped into clusters of computing devices, and different computing devices within the same cluster are instructed or recommended to send download requests for different digital content units to computing devices outside of the cluster so that the plurality of computing devices within the same cluster will collectively obtain all of the different digital content units. The computing devices within the same cluster can then share the digital content units with one another. | 03-17-2016 |
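The buffer-ordering abstracts above (e.g., 20140114990 and 20130261846-era sibling 20140173070) describe re-ranking content portions from aggregated access telemetry. As a minimal sketch of that idea only, not the patented method, the following ranks portions by the average position at which users actually reached them; the function name, data model, and scoring rule are assumptions:

```python
from collections import defaultdict

def updated_buffering_order(portions, access_logs):
    """Re-rank content portions so portions that users tend to reach
    earliest are buffered first. Hypothetical sketch: each access log is
    an ordered list of portion ids observed from one client device."""
    position_sum = defaultdict(float)
    seen = defaultdict(int)
    for log in access_logs:
        for pos, portion in enumerate(log):
            position_sum[portion] += pos
            seen[portion] += 1

    def avg_pos(portion):
        # Portions never accessed by anyone sort to the end of the buffer.
        return position_sum[portion] / seen[portion] if seen[portion] else float("inf")

    return sorted(portions, key=avg_pos)
```

For example, if every user opens "intro" first but only some ever reach "level1", the updated order buffers "intro" first and defers "level1", mirroring the dynamic-update loop described in 20140173070.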
Patent application number | Description | Published |
20150032810 | CONTENT DISTRIBUTION USING SOCIAL RELATIONSHIPS - Embodiments of the present invention enable users to allocate resources on their client devices according to relationships with other users. Resources include content such as games or movies. In one embodiment, a content provider directs a requesting device to a peer device that has access to the requested content. When users are in a social relationship, the users' devices are said to be socially affiliated. A user's social network is a collection of the user's electronic relationships with other people. Embodiments of the present invention allow users to establish sharing preferences for one or more client devices. In general, a sharing preference gives an individual preferential access to a user's computing resources on the one or more client devices. The access is preferential when compared to access given to nonsocially affiliated computing devices. | 01-29-2015 |
20150134548 | Unified Content Representation - Example apparatus and methods facilitate providing an incremental future-proof license to a master stream of content. The master stream may be related to different instances of content (e.g., different versions) for which there is a unified content representation. A request for content available through the master stream may be received from a licensee. The request may include an explicit indication of which stream of frames is to be accessed or may include implicit information from which a stream of frames may be selected. The selected stream may be changed midstream in response to changing conditions (e.g., bandwidth), events (e.g., gesture), devices (e.g., licensee accesses different device) or explicit requests. As the available streams of frames associated with the content changes, the changes may be mapped to the master stream and made available to the licensee. The licensee may pay an incremental license fee for access to updated content. | 05-14-2015 |
20150189011 | PEER-TO-PEER NETWORK PRIORITIZING PROPAGATION OF OBJECTS THROUGH THE NETWORK - A method for transferring digital content items in a peer-to-peer network in which a plurality of nodes participate includes receiving requests for receipt of one or more digital content items from a plurality of requesting nodes belonging to the peer-to-peer network. A capacity of the requesting nodes to upload data is assessed. Network resources available to the peer-to-peer network for delivering the digital content items or chunks thereof to the receiving nodes are allocated based at least in part on the capacity of the requesting nodes to upload data. The digital content items or chunks thereof are sent to the requesting nodes over the peer-to-peer network in accordance with the network resources that are allocated to each of the requesting nodes. | 07-02-2015 |
20150324555 | CONTENT DISCOVERY IN MANAGED WIRELESS DISTRIBUTION NETWORKS - A content store is maintained in a device, the device being one of multiple devices in a managed wireless distribution network that allows portions of protected content to be transferred among the multiple devices via multiple wireless networks hosted by various ones of the multiple devices. The content store is configured to maintain portions of protected content that can be consumed by a user of the device only if the user of the device is licensed to consume the protected content. An indication of portions of protected content stored in the content store is provided to each of a set of the multiple devices or to a network management service. Routes to portions of content in the managed wireless distribution network can be identified by the network management service or the multiple devices. | 11-12-2015 |
20150324556 | CONTENT DELIVERY PRIORITIZATION IN MANAGED WIRELESS DISTRIBUTION NETWORKS - A managed wireless distribution network includes multiple devices that communicate with one another via multiple wireless networks (e.g., multiple Wi-Fi networks). Each device in the managed wireless distribution network can host at least one wireless network and/or join at least one wireless network. Content in the managed wireless distribution network is protected so that the content cannot be consumed unless permission to consume the content is obtained. Devices can host portions of protected content regardless of whether they can consume the protected content, and can obtain portions of protected content via the wireless networks of the managed wireless distribution network without having to access a content service over the Internet. | 11-12-2015 |
20150324601 | Managed Wireless Distribution Network - A managed wireless distribution network includes multiple devices that communicate with one another via multiple wireless networks (e.g., multiple Wi-Fi networks). Each device in the managed wireless distribution network can host at least one wireless network and/or join at least one wireless network. Content in the managed wireless distribution network is protected so that the content cannot be consumed unless permission to consume the content is obtained. Devices can host portions of protected content regardless of whether they can consume the protected content, and can obtain portions of protected content via the wireless networks of the managed wireless distribution network without having to access a content service over the Internet. | 11-12-2015 |
20150327068 | DISTRIBUTING CONTENT IN MANAGED WIRELESS DISTRIBUTION NETWORKS - Multiple portions of protected content to host on a device are identified by the device, the multiple portions including one or more portions of each of one or more pieces of protected content. The multiple portions are obtained and stored on the device. The device is one of multiple devices in a managed wireless distribution network that allows portions of protected content to be transferred among the multiple devices via multiple wireless networks hosted by various ones of the multiple devices, and the device is configured to store portions of protected content that can be consumed by a user of the device only if the user of the device has permission to consume the protected content. Participation of the device in the managed wireless distribution network can also be identified, and a reward generated based on the participation of the device in the managed wireless distribution network. | 11-12-2015 |
20150373086 | Courier Network Service - Example apparatus facilitate controlling how targeted electronic data is selected and couriered (e.g., physically carried) between a provider in a first physical location and a recipient in a second physical location. An apparatus, method, or service may control the flow of targeted electronic data or metadata concerning the targeted electronic data in a courier network. The service may consider requests for targeted electronic data or information from which targeted electronic data can be identified. The service may also consider predictions about content that a recipient may want. The targeted electronic data may be identified based on a current state of an operating system, an application, or content at the recipient and information about a desired state of the operating system, application, or content. The number and identity of courier devices selected to courier data may be based on a familiarity index between couriers and recipients in the courier network. | 12-24-2015 |
20150382169 | Courier Network - Example apparatus facilitate couriering (e.g., physically carrying) targeted electronic data between a provider in a first physical location and a recipient in a second physical location. An apparatus may store targeted electronic data or may store metadata concerning the targeted electronic data. The apparatus may also store requests for targeted electronic data or information from which targeted electronic data can be identified. An example apparatus may identify targeted electronic data to be provided and may then acquire the targeted electronic data from a provider. The provider may be another courier, another recipient, a source provider (e.g., database), or other source. The apparatus may provide the targeted electronic data to the recipient using a close-range communication channel that does not use the Internet. The targeted electronic data may be identified based on a state of an operating system, an application, or content at the recipient. | 12-31-2015 |
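Several abstracts above (20150324555 and its siblings) describe a management service that indexes which device in a managed wireless distribution network hosts which portions of protected content, then identifies routes to those portions. A toy sketch of that bookkeeping, with an assumed data model that is not drawn from the filings themselves:

```python
def build_content_index(stores):
    """Invert per-device content stores into portion -> set of devices,
    as a network management service might. Hypothetical sketch only."""
    index = {}
    for device, portions in stores.items():
        for portion in portions:
            index.setdefault(portion, set()).add(device)
    return index

def route_for(index, portion, requester):
    """Pick a peer device (other than the requester) hosting the portion.
    Returning None signals a fallback fetch from the content service over
    the Internet, as 20150324556 describes avoiding when possible."""
    candidates = index.get(portion, set()) - {requester}
    return min(candidates) if candidates else None  # deterministic pick
```

The deterministic `min` pick stands in for whatever prioritization the real service would apply (signal strength, battery, load); that choice is an assumption made to keep the sketch testable.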
Patent application number | Description | Published |
20110271928 | DETERMINATION OF AN OVERSPEED-SHUTDOWN EVENT IN A COMBUSTION ENGINE - Methods and systems are provided for detecting an overspeed shutdown condition of an internal combustion engine. The pressure within an air-intake manifold of the engine is measured, and that pressure is compared to a predetermined pressure, which represents the pressure within the air-intake manifold when an overspeed-shutdown mechanism has not been activated. Activation of the overspeed-shutdown mechanism is indicated when comparing the measured pressure value to the predetermined value results in the measured value being less than the predetermined value. | 11-10-2011 |
20130261846 | METHOD AND APPARATUS FOR MATCHING VEHICLE ECU PROGRAMMING TO CURRENT VEHICLE OPERATING CONDITIONS - Disclosed herein are techniques for implementing vehicle ECU reprogramming, so the ECU programming, which plays a large role in vehicle performance characteristics, is tailored to current operational requirements, which may be different than the operational characteristics selected by the manufacturer when initially programming the vehicle ECU (or ECUs) with specific instruction sets, such as fuel maps. In one embodiment, a controller monitors the current operational characteristics of the vehicle, determines the current ECU programming, and determines if a different programming set would be better suited to the current operating conditions. In the event that the current programming set should be replaced, the controller implements the ECU reprogramming. In a related embodiment, users are enabled to specify the ECU programming to change, such as changing speed limiter settings. | 10-03-2013
20130261874 | METHOD AND APPARATUS FOR MATCHING VEHICLE ECU PROGRAMMING TO CURRENT VEHICLE OPERATING CONDITIONS - Disclosed herein are techniques for implementing vehicle ECU reprogramming, so the ECU programming, which plays a large role in vehicle performance characteristics, is tailored to current operational requirements, which may be different than the operational characteristics selected by the manufacturer when initially programming the vehicle ECU (or ECUs) with specific instruction sets, such as fuel maps. In one embodiment, a controller monitors the current operational characteristics of the vehicle, determines the current ECU programming, and determines if a different programming set would be better suited to the current operating conditions. In the event that the current programming set should be replaced, the controller implements the ECU reprogramming. In a related embodiment, users are enabled to specify the ECU programming to change, such as changing speed limiter settings. | 10-03-2013
20130261907 | METHOD AND APPARATUS FOR MATCHING VEHICLE ECU PROGRAMMING TO CURRENT VEHICLE OPERATING CONDITIONS - Disclosed herein are techniques for implementing vehicle ECU reprogramming, so the ECU programming, which plays a large role in vehicle performance characteristics, is tailored to current operational requirements, which may be different than the operational characteristics selected by the manufacturer when initially programming the vehicle ECU (or ECUs) with specific instruction sets, such as fuel maps. In one embodiment, a controller monitors the current operational characteristics of the vehicle, determines the current ECU programming, and determines if a different programming set would be better suited to the current operating conditions. In the event that the current programming set should be replaced, the controller implements the ECU reprogramming. In a related embodiment, users are enabled to specify the ECU programming to change, such as changing speed limiter settings. | 10-03-2013
20130261939 | METHOD AND APPARATUS FOR MATCHING VEHICLE ECU PROGRAMMING TO CURRENT VEHICLE OPERATING CONDITIONS - Disclosed herein are techniques for implementing vehicle ECU reprogramming, so the ECU programming, which plays a large role in vehicle performance characteristics, is tailored to current operational requirements, which may be different than the operational characteristics selected by the manufacturer when initially programming the vehicle ECU (or ECUs) with specific instruction sets, such as fuel maps. In one embodiment, a controller monitors the current operational characteristics of the vehicle, determines the current ECU programming, and determines if a different programming set would be better suited to the current operating conditions. In the event that the current programming set should be replaced, the controller implements the ECU reprogramming. In a related embodiment, users are enabled to specify the ECU programming to change, such as changing speed limiter settings. | 10-03-2013
20130261942 | METHOD AND APPARATUS FOR MATCHING VEHICLE ECU PROGRAMMING TO CURRENT VEHICLE OPERATING CONDITIONS - Disclosed herein are techniques for implementing vehicle ECU reprogramming, so the ECU programming, which plays a large role in vehicle performance characteristics, is tailored to current operational requirements, which may be different than the operational characteristics selected by the manufacturer when initially programming the vehicle ECU (or ECUs) with specific instruction sets, such as fuel maps. In one embodiment, a controller monitors the current operational characteristics of the vehicle, determines the current ECU programming, and determines if a different programming set would be better suited to the current operating conditions. In the event that the current programming set should be replaced, the controller implements the ECU reprogramming. In a related embodiment, users are enabled to specify the ECU programming to change, such as changing speed limiter settings. | 10-03-2013
20140195071 | EMERGENCY EVENT BASED VEHICLE DATA LOGGING - System and method for enabling predefined events to be used to trigger the collection of vehicle position data. A combination GSM device and GPS device is used to collect vehicle position data and to convey that position data to a remote computing device for review and/or analysis. There is a tradeoff between collecting too much data (the cell phone bill is too high) and collecting too little data (value added analytics cannot be achieved without sufficient data). The concepts disclosed herein relate to a method and apparatus that enable the data collection/transmission paradigm of such a GSM/GPS device to be varied (or triggered) based on the detection of one or more predefined events. This enables data which can contribute to value added analytics to be acquired, without wasting airtime on unimportant data. | 07-10-2014
20140195074 | METHOD AND APPARATUS FOR CHANGING EITHER DRIVER BEHAVIOR OR VEHICLE BEHAVIOR BASED ON CURRENT VEHICLE LOCATION AND ZONE DEFINITIONS CREATED BY A REMOTE USER - A remote user can define one or more zone based driver/vehicle behavior definitions. Current vehicle location is analyzed at the vehicle or remotely to determine if the vehicle is approaching or has arrived at a location for which a zone based driver/vehicle behavior has been defined. For zone based driver behavior definitions, a display in the vehicle automatically displays the zone based driver behavior definition to the driver. In some embodiments driver compliance is tracked and non-compliance is reported to the remote user. For zone based vehicle behavior definitions, a vehicle controller at the vehicle responsible for controlling the defined behavior is reprogrammed to impose the defined behavior (no regeneration at location, max speed at location, no idle over 2 minutes at location, etc.). Once the vehicle has left the zone, the controller programming reverts to its prior state, and/or zone based driver behavior definition is no longer displayed. | 07-10-2014 |
20140277831 | METHOD AND APPARATUS FOR REDUCING DATA TRANSFER RATES FROM A VEHICLE DATA LOGGER WHEN A QUALITY OF THE CELLULAR OR SATELLITE LINK IS POOR - System and method for reducing data transfer rates when a quality of the cellular or satellite link (i.e., a long range wireless data link) is poor. Such a concept is particularly well suited to embodiments where the vehicle data being logged or collected includes position data, because consumers of vehicle data that includes position data often desire to have such data exported from the vehicle on frequent basis, so that the physical location of fleet vehicles can be tracked in real-time. In one embodiment, before transmitting data a current location of the vehicle is checked against known bad locations, and no data is sent when the current location is known to be bad. In another embodiment, if successful data transmission is not confirmed during a first time period, additional transmission attempts are delayed for a second time period. | 09-18-2014 |
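The ECU-matching family above (20130261846 through 20130261942) monitors current operating conditions and swaps in a programming set better suited to them. A minimal sketch of that decision loop, assuming a made-up "load envelope" model for each programming set; none of these names or scoring rules come from the filings:

```python
def select_programming(current_conditions, programming_sets, active):
    """Pick the programming set whose operating envelope best matches
    current conditions; keep the active set unless a strictly better
    one exists. Hypothetical sketch of the matching idea only."""
    def mismatch(pset):
        lo, hi = pset["load_range"]
        load = current_conditions["load"]
        # Distance outside the set's intended load envelope (0 if inside).
        return max(lo - load, 0.0, load - hi)

    best = min(programming_sets, key=mismatch)
    # Only trigger a reprogramming when the candidate is strictly better.
    return best["name"] if mismatch(best) < mismatch(active) else active["name"]
```

In practice the controller would weigh many signals (fuel map, speed limiter settings, zone definitions as in 20140195074), but the keep-unless-strictly-better rule captures why a vehicle is not reprogrammed on every sample.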
Patent application number | Description | Published |
20130164712 | METHOD AND APPARATUS FOR GPS BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS - Three dimensional GPS or vehicle position data is used to determine a slope the vehicle is traveling over at a specific point in time. The slope data can then be combined with other metrics to provide an accurate, slope corrected vehicle mass. The vehicle mass can then be used along with other vehicle data to determine an amount of work performed by a vehicle, enabling a detailed efficiency analysis of the vehicle to be performed. To calculate slope, horizontal ground speed (V | 06-27-2013
20130164713 | METHOD AND APPARATUS FOR GPS BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS - Three dimensional GPS or vehicle position data is used to determine a slope the vehicle is traveling over at a specific point in time. The slope data can then be combined with other metrics to provide an accurate, slope corrected vehicle mass. The vehicle mass can then be used along with other vehicle data to determine an amount of work performed by a vehicle, enabling a detailed efficiency analysis of the vehicle to be performed. To calculate slope, horizontal ground speed (V | 06-27-2013
20130164714 | USING SOCIAL NETWORKING TO IMPROVE DRIVER PERFORMANCE BASED ON INDUSTRY SHARING OF DRIVER PERFORMANCE DATA - Data is collected during the operation of a vehicle and used to produce a ranking of a driver's performance, and that ranking is shared on a hosted website, such that drivers can compare their performance metrics to their peers. Fleet operators can use these performance metrics as incentives, by linking driver pay with performance. Individual fleet operators can host their own website, where driver rankings in that fleet can be compared, or the website can be hosted by a third party, with multiple fleet operators participating. The third party can offset their costs for operating the website by charging participating fleet operators a fee, and/or through advertising revenue. In some embodiments, all driver performance data is displayed in an anonymous format, so that individual drivers cannot be identified unless the driver shares their user ID. | 06-27-2013 |
20130164715 | USING SOCIAL NETWORKING TO IMPROVE DRIVER PERFORMANCE BASED ON INDUSTRY SHARING OF DRIVER PERFORMANCE DATA - Data is collected during the operation of a vehicle and used to produce a ranking of a driver's performance, and that ranking is shared on a hosted website, such that drivers can compare their performance metrics to their peers. Fleet operators can use these performance metrics as incentives, by linking driver pay with performance. Individual fleet operators can host their own website, where driver rankings in that fleet can be compared, or the website can be hosted by a third party, with multiple fleet operators participating. The third party can offset their costs for operating the website by charging participating fleet operators a fee, and/or through advertising revenue. In some embodiments, all driver performance data is displayed in an anonymous format, so that individual drivers cannot be identified unless the driver shares their user ID. | 06-27-2013 |
20130166170 | METHOD AND APPARATUS FOR GPS BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS - Three dimensional GPS or vehicle position data is used to determine a slope the vehicle is traveling over at a specific point in time. The slope data can then be combined with other metrics to provide an accurate, slope corrected vehicle mass. The vehicle mass can then be used along with other vehicle data to determine an amount of work performed by a vehicle, enabling a detailed efficiency analysis of the vehicle to be performed. To calculate slope, horizontal ground speed (V | 06-27-2013 |
20130184964 | METHOD AND APPARATUS FOR 3-D ACCELEROMETER BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS - Three dimensional accelerometer data is used to determine a slope the vehicle is traveling over at a specific point in time. The slope data can then be combined with other metrics to provide an accurate, slope corrected vehicle mass. The vehicle mass can then be used along with other vehicle data to determine an amount of work performed by a vehicle, enabling a detailed efficiency analysis of the vehicle to be performed. To calculate slope, horizontal ground speed (V | 07-18-2013 |
20130184965 | METHOD AND APPARATUS FOR 3-D ACCELEROMETER BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS - Three dimensional accelerometer data is used to determine a slope the vehicle is traveling over at a specific point in time. The slope data can then be combined with other metrics to provide an accurate, slope corrected vehicle mass. The vehicle mass can then be used along with other vehicle data to determine an amount of work performed by a vehicle, enabling a detailed efficiency analysis of the vehicle to be performed. To calculate slope, horizontal ground speed (V | 07-18-2013 |
20140180557 | METHOD AND APPARATUS FOR 3-D ACCELEROMETER BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS - Three dimensional accelerometer data is used to determine a slope the vehicle is traveling over at a specific point in time. The slope data can then be combined with other metrics to provide an accurate, slope corrected vehicle mass. The vehicle mass can then be used along with other vehicle data to determine an amount of work performed by a vehicle, enabling a detailed efficiency analysis of the vehicle to be performed. To calculate slope, horizontal ground speed (V | 06-26-2014 |
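The slope-determination and mass-determination abstracts above describe combining a slope angle (from GPS or accelerometer data) with drive force and acceleration to recover a slope-corrected vehicle mass. The following is a minimal sketch of the underlying physics only, assuming the standard grade model F = m·a + m·g·sin(θ); the function names and the use of `atan2` over horizontal and vertical speed components are illustrative assumptions, not taken from the patents themselves.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def slope_angle(v_horizontal, v_vertical):
    """Slope angle in radians from horizontal ground speed and
    vertical speed components (e.g., derived from 3-D GPS fixes)."""
    return math.atan2(v_vertical, v_horizontal)

def slope_corrected_mass(drive_force, acceleration, v_horizontal, v_vertical):
    """Estimate vehicle mass from Newton's second law with a grade term:
    F = m*a + m*g*sin(theta)  =>  m = F / (a + g*sin(theta))."""
    theta = slope_angle(v_horizontal, v_vertical)
    return drive_force / (acceleration + G * math.sin(theta))
```

On flat ground (zero vertical speed) the grade term vanishes and the estimate reduces to F/a; on an uphill grade the same force and acceleration imply a smaller mass, which is the correction these filings are after.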
20080215450 | REMOTE PROVISIONING OF INFORMATION TECHNOLOGY - Remote provisioning of an IT network and/or associated services is provided. Hardware, software, service and/or expertise can be moved from on-premise to a remote location (e.g., central, distributed . . . ). Accordingly, at least a large degree of computation can be moved to the center to exploit economies of scale, among other things. In such an architecture, computational resources (e.g., data storage, computation power, cache . . . ) can be pooled, and entities can subscribe to a particular level of resources related to a private entity IT network. | 09-04-2008 |
20080222659 | ABSTRACTING OPERATING ENVIRONMENT FROM OPERATING SYSTEM - The present invention extends to methods, systems, and computer program products for abstracting an operating environment from an operating system running in the operating environment. Within an operating environment, an operating environment abstraction layer abstracts and exposes operating environment resources to an operating system. Accordingly, appropriately configured operating environment abstraction layers provide the operating system with a uniform interface to available resources across a variety of different operating environments. Each operating environment abstraction layer and the operating system include adjustable algorithms that can be adjusted to appropriately provide services to requesting applications based on exposed resources of the operating environment. Abstraction layers can be configured to analyze and become fully aware of their operating environment, including identifying the presence of other abstraction layers. An operating system and corresponding abstraction layer can be run in flexible combinations of privileged and unprivileged processor modes. | 09-11-2008 |
20080244507 | Homogeneous Programming For Heterogeneous Multiprocessor Systems - Systems and methods establish communication and control between various heterogeneous processors in a computing system so that an operating system can run an application across multiple heterogeneous processors. With a single set of development tools, software developers can create applications that will flexibly run on one CPU or on combinations of central, auxiliary, and peripheral processors. In a computing system, application-only processors can be assigned a lean subordinate kernel to manage local resources. An application binary interface (ABI) shim is loaded with application binary images to direct kernel ABI calls to a local subordinate kernel or to the main OS kernel depending on which kernel manifestation is controlling requested resources. | 10-02-2008 |
20080244599 | Master And Subordinate Operating System Kernels For Heterogeneous Multiprocessor Systems - Systems and methods establish communication and control between various heterogeneous processors in a computing system so that an operating system can run an application across multiple heterogeneous processors. With a single set of development tools, software developers can create applications that will flexibly run on one CPU or on combinations of central, auxiliary, and peripheral processors. In a computing system, application-only processors can be assigned a lean subordinate kernel to manage local resources. An application binary interface (ABI) shim is loaded with application binary images to direct kernel ABI calls to a local subordinate kernel or to the main OS kernel depending on which kernel manifestation is controlling requested resources. | 10-02-2008 |
20100251265 | Operating System Distributed Over Heterogeneous Platforms - An illustrative operating system distributes two or more instances of the operating system over heterogeneous platforms of a computing device. The instances of the operating system work together to provide single-kernel semantics to present a common operating system abstraction to application modules. The heterogeneous platforms may include co-processors that use different instruction set architectures and/or functionality, different NUMA domains, etc. Further, the operating system allows application modules to transparently access components using a local communication path and a remote communication path. Further, the operating system includes a policy manager module that determines the placement of components based on affinity values associated with interaction relations between components. The affinity values express the sensitivity of the interaction relations to a relative location of the components. | 09-30-2010 |
20100287271 | System and Method for Restricting Data Transfers and Managing Software Components of Distributed Computers - A controller, referred to as the “BMonitor”, is situated on a computer. The BMonitor includes a plurality of filters that identify where data can be sent to and/or received from, such as another node in a co-location facility or a client computer coupled to the computer via the Internet. The BMonitor further receives and implements requests from external sources regarding the management of software components executing on the computer, allowing such external sources to initiate, terminate, debug, etc. software components on the computer. Additionally, the BMonitor operates as a trusted third party mediating interaction among multiple external sources managing the computer. | 11-11-2010 |
20100318293 | RETRACING STEPS - Techniques for creating breadcrumbs for a trail of activity are described. The trail of activity may be created by recording movement information based on inferred actions of walking, not walking, or changing floor levels. The movement information may be recorded with an accelerometer and a pressure sensor. A representation of a list of breadcrumbs may be visually displayed on a user interface of a mobile device, in a reverse order to retrace steps. In some implementations, a compass may additionally or alternatively be used to collect directional information relative to the earth's magnetic poles. | 12-16-2010 |
20110258290 | Bandwidth-Proportioned Datacenters - A system including at least one storage node and at least one computation node connected by a switch is described herein. Each storage node has one or more storage units and one or more network interface components, the collective bandwidths of the storage units and the network interface components being proportioned to one another to enable communication to and from other nodes at the collective bandwidth of the storage units. Each computation node has logic configured to make requests of storage nodes, an input/output bus, and one or more network interface components, the bandwidth of the bus and the collective bandwidths of the network interface components being proportioned to one another to enable communication to and from other nodes at the bandwidth of the input/output bus. | 10-20-2011 |
20110258297 | Locator Table and Client Library for Datacenters - A system including a plurality of servers, a client, and a metadata server is described herein. The servers each store tracts of data, a plurality of the tracts comprising a byte sequence and being distributed among the plurality of servers. To locate the tracts, the metadata server generates a table that is used by the client to identify servers associated with the tracts, enabling the client to provide requests to the servers. The metadata server also enables recovery in the event of a server failure. Further, the servers construct tables of tract identifiers and locations to use in responding to the client requests. | 10-20-2011 |
20110258482 | Memory Management and Recovery for Datacenters - A system including a plurality of servers, a client, and a metadata server is described herein. The servers each store tracts of data, a plurality of the tracts comprising a byte sequence and being distributed among the plurality of servers. To locate the tracts, the metadata server generates a table that is used by the client to identify servers associated with the tracts, enabling the client to provide requests to the servers. The metadata server also enables recovery in the event of a server failure. Further, the servers construct tables of tract identifiers and locations to use in responding to the client requests. | 10-20-2011 |
20120017213 | ULTRA-LOW COST SANDBOXING FOR APPLICATION APPLIANCES - The disclosed architecture facilitates the sandboxing of applications by taking core operating system components that normally run in the operating system kernel or otherwise outside the application process and on which a sandboxed application depends to run, and converting these core operating system components to run within the application process. The architecture takes the abstractions already provided by the host operating system and converts these abstractions for use by the sandbox environment. More specifically, new operating system APIs (application program interfaces) are created that include only the basic computation services, thus, separating the basic services from rich application APIs. The code providing the rich application APIs is copied out of the operating system and into the application environment—the application process. | 01-19-2012 |
20120227038 | LIGHTWEIGHT ON-DEMAND VIRTUAL MACHINES - Virtual machines are made lightweight by substituting a library operating system for a full-fledged operating system. Consequently, physical machines can include substantially more virtual machines than otherwise possible. Moreover, a hibernation technique can be employed with respect to lightweight virtual machines to further increase the capacity of physical machines. More specifically, virtual machines can be loaded onto physical machines on-demand and removed from physical machines to make computational resources available as needed. Still further yet, since the virtual machines are lightweight, they can be hibernated and restored at a rate substantially imperceptible to users. | 09-06-2012 |
20120227058 | DYNAMIC APPLICATION MIGRATION - A library operating system is employed in conjunction with an application in a virtual environment to facilitate dynamic application migration. An application executing in a virtual environment with a library operating system on a first machine can be suspended, and application state can be captured. Subsequently, the state can be restored and execution resumed on the first machine or a second machine. | 09-06-2012 |
20120227061 | APPLICATION COMPATIBILITY WITH LIBRARY OPERATING SYSTEMS - Application compatibility is facilitated by use of library operating systems. Library operating systems can encapsulate portions of an application likely to break application compatibility. An application can be bound to a compatible library operating system that operates over a host operating system. Furthermore, the library operating system version can be greater than, equal to, or less than the version of the host operating system. Consequently, both backward and forward compatibility are enabled. | 09-06-2012 |
20120239649 | EXTENT VIRTUALIZATION - Files can be segmented into distinct groups and allocated storage units such as blocks. Files associated with parent and child files can be segmented into separate groups, for instance. Further, a group associated with parent files can be extended to include additional blocks reserved for subsequent update. Additionally, metadata can be merged across groups to provide a unified view of the distinct groups. | 09-20-2012 |
20120296626 | INSTRUCTION SET EMULATION FOR GUEST OPERATING SYSTEMS - The described implementations relate to virtual computing techniques. One implementation provides a technique that can include receiving a request to execute an application. The application can include first application instructions from a guest instruction set architecture. The technique can also include loading an emulator and a guest operating system into an execution context with the application. The emulator can translate the first application instructions into second application instructions from a host instruction set architecture. The technique can also include running the application by executing the second application instructions. | 11-22-2012 |
20130054734 | MIGRATION OF CLOUD APPLICATIONS BETWEEN A LOCAL COMPUTING DEVICE AND CLOUD - Architecture that facilitates seamless migration of server-hosted code to the client machine and back. Migration is of a running instance of a process by communicating only a small amount of state data, which makes this feasible over current network connection speeds. The web browsing experience for applications is retained. The migration capabilities are facilitated by a construct referred to as the library OS (operating system), in a context of state and execution migration between server and client. An application binary interface is provided that resides below the library OS to provide the state and execution mobility. | 02-28-2013 |
20130151846 | Cryptographic Certification of Secure Hosted Execution Environments - Implementations for providing a secure execution environment with a hosted computer are described. A security-enabled processor establishes a hardware-protected memory area with an activation state that executes only software identified by a client system. The hardware-protected memory area is inaccessible by code that executes outside the hardware-protected memory area. A certification is transmitted to the client system to indicate that the secure execution environment is established, in its activation state, with only the software identified by the request. | 06-13-2013 |
20130151848 | CRYPTOGRAPHIC CERTIFICATION OF SECURE HOSTED EXECUTION ENVIRONMENTS - Implementations for providing a persistent secure execution environment with a hosted computer are described. A host operating system of a computing system provides an encrypted checkpoint to a persistence module that executes in a secure execution environment of a hardware-protected memory area initialized by a security-enabled processor. The encrypted checkpoint is derived at least partly from another secure execution environment that is cryptographically certifiable as including another hardware-protected memory area established in an activation state to refrain from executing software not trusted by the client system. | 06-13-2013 |
20130152209 | Facilitating System Service Request Interactions for Hardware-Protected Applications - Described herein are implementations for providing a platform adaptation layer that enables applications to execute inside a user-mode hardware-protected isolation container while utilizing host platform resources that reside outside of the isolation container. The platform adaptation layer facilitates a system service request interaction between the application and the host platform. As part of the facilitating, a secure services component of the platform adaptation layer performs a security-relevant action. | 06-13-2013 |
20140033197 | MODEL-BASED VIRTUAL SYSTEM PROVISIONING - Model-based virtual system provisioning includes accessing a model of a workload to be installed on a virtual machine of a system as well as a model of the system. A workload refers to some computing that is to be performed, and includes an application to be executed to perform the computing, and optionally includes the operating system on which the application is to be installed. The workload model identifies a source of the application and operating system of the workload, as well as constraints of the workload, such as resources and/or other capabilities that the virtual machine(s) on which the workload is to be installed must have. An installation specification for the application is also generated, the installation specification being derived at least in part from the model of the workload and the model of the virtual system. | 01-30-2014 |
20140195834 | HIGH THROUGHPUT LOW LATENCY USER MODE DRIVERS IMPLEMENTED IN MANAGED CODE - Implementing a safe driver that can support high throughput and low latency devices. The method includes receiving a hardware message from a hardware device. The method further includes delivering the hardware message to one or more driver processes executing in user mode using a zero-copy mechanism, to allow the one or more driver processes to support high throughput and low latency hardware devices. | 07-10-2014 |
20140298356 | Operating System Distributed Over Heterogeneous Platforms - An illustrative operating system distributes two or more instances of the operating system over heterogeneous platforms of a computing device. The instances of the operating system work together to provide single-kernel semantics to present a common operating system abstraction to application modules. The heterogeneous platforms may include co-processors that use different instruction set architectures and/or functionality, different NUMA domains, etc. Further, the operating system allows application modules to transparently access components using a local communication path and a remote communication path. Further, the operating system includes a policy manager module that determines the placement of components based on affinity values associated with interaction relations between components. The affinity values express the sensitivity of the interaction relations to a relative location of the components. | 10-02-2014 |
20160026488 | INSTRUCTION SET EMULATION FOR GUEST OPERATING SYSTEMS - The described implementations relate to virtual computing techniques. One implementation provides a technique that can include receiving a request to execute an application. The application can include first application instructions from a guest instruction set architecture. The technique can also include loading an emulator and a guest operating system into an execution context with the application. The emulator can translate the first application instructions into second application instructions from a host instruction set architecture. The technique can also include running the application by executing the second application instructions. | 01-28-2016 |
20160077862 | MODEL-BASED VIRTUAL SYSTEM PROVISIONING - Model-based virtual system provisioning includes accessing a model of a workload to be installed on a virtual machine of a system as well as a model of the system. A workload refers to some computing that is to be performed, and includes an application to be executed to perform the computing, and optionally includes the operating system on which the application is to be installed. The workload model identifies a source of the application and operating system of the workload, as well as constraints of the workload, such as resources and/or other capabilities that the virtual machine(s) on which the workload is to be installed must have. An installation specification for the application is also generated, the installation specification being derived at least in part from the model of the workload and the model of the virtual system. | 03-17-2016 |
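Several of the datacenter entries above (the locator table and memory management filings) describe a metadata server publishing a table that clients use to map tracts of data to the servers holding them, so that subsequent reads and writes need no further metadata traffic. The sketch below illustrates that general pattern with a hash-indexed table; the function name, key format, and choice of SHA-256 are assumptions for illustration, not details from the patents.

```python
import hashlib

def locate_server(locator_table, byte_sequence_id, tract_number):
    """Map a (byte sequence, tract) pair to a server by hashing the pair
    into a fixed locator table published by a metadata server. The same
    inputs always hash to the same server, so any client holding a copy
    of the table resolves tract locations locally."""
    key = f"{byte_sequence_id}:{tract_number}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return locator_table[digest % len(locator_table)]
```

A client would fetch the table once, then iterate `tract_number` over a byte sequence to fan requests out across servers; recovery after a server failure amounts to the metadata server issuing an updated table.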