Patent application number | Description | Published |
20090040842 | Enhanced write abort mechanism for non-volatile memory - In a non-volatile memory (NVM) device having a controller and a non-volatile memory array controlled by the controller, a voltage supervisor circuit monitors an output of a voltage supply powering the NVM device. The voltage supervisor circuit may be part of the NVM device or coupled to it. The voltage supervisor circuit is configured to assert a “low-voltage” signal responsive to detecting the output of the voltage supply powering the NVM device dropping below a predetermined value. The controller is configured to write data into the memory array while the “low-voltage” signal is deasserted and to suspend writing data while the “low-voltage” signal is asserted. In response to assertion of the “low-voltage” signal, the controller completes a write cycle/program operation, if pending, and prevents any additional write cycles/program operation(s) during assertion of the “low-voltage” signal. | 02-12-2009 |
20090040843 | Enhanced write abort mechanism for non-volatile memory - In a non-volatile memory (NVM) device having a controller and a non-volatile memory array controlled by the controller, a voltage supervisor circuit monitors an output of a voltage supply powering the NVM device. The voltage supervisor circuit may be part of the NVM device or coupled to it. The voltage supervisor circuit is configured to assert a “low-voltage” signal responsive to detecting the output of the voltage supply powering the NVM device dropping below a predetermined value. The controller is configured to write data into the memory array while the “low-voltage” signal is deasserted and to suspend writing data while the “low-voltage” signal is asserted. In response to assertion of the “low-voltage” signal, the controller completes a write cycle/program operation, if pending, and prevents any additional write cycles/program operation(s) during assertion of the “low-voltage” signal. | 02-12-2009 |
20090089481 | Leveraging Portable System Power to Enhance Memory Management and Enable Application Level Features - A memory device and techniques for its operation are presented. After operating on power received from a host, the memory device determines that it is no longer receiving host power and, in response, activates a power source on the memory device itself. Using this reserve power, the memory device can then perform data management operations. The techniques can also be applied to a digital appliance having a non-volatile memory. The memory device or digital appliance can prioritize its memory management operation during the host/user operating window based on the ability to perform these operations outside of the host/user operating window. Additionally, in a data write operation, where the memory device receives data from a host, stores the data in volatile memory, and then writes the data into the non-volatile memory, the memory device sends the host an acknowledgment of the data having been written into the non-volatile memory after it has been stored in the volatile memory, but before the write into the non-volatile memory is complete. | 04-02-2009 |
20110154158 | SYSTEM AND METHOD OF ERROR CORRECTION OF CONTROL DATA AT A MEMORY DEVICE - A method includes initiating a compression operation to compress data to be stored in a group of storage elements at a memory device that includes an error correction coding (ECC) engine. The method includes selecting one of a first mode of the ECC engine to generate a first number of parity bits and a second mode of the ECC engine to generate a second number of parity bits based on an extent of compression of the data. The method also includes encoding the compressed data to generate parity bits corresponding to the compressed data and storing the compressed data and the parity bits to the group of storage elements according to a page format that includes a data portion and a parity portion. The compressed data is stored in the data portion and at least some of the parity bits are stored in the parity portion. | 06-23-2011 |
20110154160 | SYSTEM AND METHOD OF ERROR CORRECTION OF CONTROL DATA AT A MEMORY DEVICE - A controller coupled to a memory array includes an error correction coding (ECC) engine and an ECC enhancement compression module coupled to the ECC engine. The ECC enhancement compression module is configured to receive and compress control data to be provided to the ECC engine to be encoded. Compressed encoded control data generated at the ECC engine is stored as a codeword at the memory array. | 06-23-2011 |
20120023346 | METHODS AND SYSTEMS FOR DYNAMICALLY CONTROLLING OPERATIONS IN A NON-VOLATILE MEMORY TO LIMIT POWER CONSUMPTION - Systems and methods are disclosed for limiting power consumption of a non-volatile memory (NVM) using a power limiting scheme that distributes a number of concurrent NVM operations over time. This provides a “current consumption cap” that fixes an upper limit of current consumption for the NVM, thereby eliminating peak power events. In one embodiment, power consumption of a NVM can be limited by receiving data suitable for use as a factor in adjusting a current threshold from at least one of a plurality of system sources. The current threshold can be less than a peak current capable of being consumed by the NVM and can be adjusted based on the received data. A power limiting scheme can be used that limits the number of concurrent NVM operations performed so that a cumulative current consumption of the NVM does not exceed the adjusted current threshold. | 01-26-2012 |
20120023347 | METHODS AND SYSTEMS FOR DYNAMICALLY CONTROLLING OPERATIONS IN A NON-VOLATILE MEMORY TO LIMIT POWER CONSUMPTION - Systems and methods are disclosed for limiting power consumption of a non-volatile memory (NVM) using a power limiting scheme that distributes a number of concurrent NVM operations over time. This provides a “current consumption cap” that fixes an upper limit of current consumption for the NVM, thereby eliminating peak power events. In one embodiment, power consumption of a NVM can be limited by receiving data suitable for use as a factor in adjusting a current threshold from at least one of a plurality of system sources. The current threshold can be less than a peak current capable of being consumed by the NVM and can be adjusted based on the received data. A power limiting scheme can be used that limits the number of concurrent NVM operations performed so that a cumulative current consumption of the NVM does not exceed the adjusted current threshold. | 01-26-2012 |
20120023348 | METHODS AND SYSTEMS FOR DYNAMICALLY CONTROLLING OPERATIONS IN A NON-VOLATILE MEMORY TO LIMIT POWER CONSUMPTION - Systems and methods are disclosed for limiting power consumption of a non-volatile memory (NVM) using a power limiting scheme that distributes a number of concurrent NVM operations over time. This provides a “current consumption cap” that fixes an upper limit of current consumption for the NVM, thereby eliminating peak power events. In one embodiment, power consumption of a NVM can be limited by receiving data suitable for use as a factor in adjusting a current threshold from at least one of a plurality of system sources. The current threshold can be less than a peak current capable of being consumed by the NVM and can be adjusted based on the received data. A power limiting scheme can be used that limits the number of concurrent NVM operations performed so that a cumulative current consumption of the NVM does not exceed the adjusted current threshold. | 01-26-2012 |
20120023356 | PEAK POWER VALIDATION METHODS AND SYSTEMS FOR NON-VOLATILE MEMORY - Systems and methods are disclosed for validating a non-volatile memory (NVM) package for use in an electronic device before it is incorporated into the device. A NVM package may be validated by determining its power consumption profile, and if the profile meets predetermined criteria, that NVM package may be qualified for use in an electronic system. The power consumption profile may be obtained by issuing commands, such as read commands, to the NVM package to simultaneously access each die of the NVM package to invoke a maximum power consumption event. During this event, power consumption by the NVM package can be monitored and analyzed to determine whether the NVM package qualifies for use in an electronic device. | 01-26-2012 |
20130031009 | AD-HOC CASH DISPENSING NETWORK - An ad-hoc cash-dispensing network that allows users to efficiently exchange cash is provided. The ad-hoc cash-dispensing network includes a cash-dispensing server, a network, and a plurality of client terminals that connect to the cash-dispensing server through the network. The user of a client terminal sends a request for cash to the cash-dispensing server. The request for cash includes the location of the client terminal. Based on this location, the cash-dispensing server locates one or more other users that are close/proximate to the requesting user and verifies that at least one of these proximate users is willing and able to provide the requested amount of cash. Following the transfer of cash between the parties, the requesting user's account is charged for the service while the providing user's account is credited for the service. | 01-31-2013 |
20130290606 | POWER MANAGEMENT FOR A SYSTEM HAVING NON-VOLATILE MEMORY - Systems and methods are disclosed for power management of a system having non-volatile memory (“NVM”). One or more controllers of the system can optimally turn modules on or off and/or intelligently adjust the operating speeds of modules and interfaces of the system based on the type of incoming commands and the current conditions of the system. This can result in optimal system performance and reduced system power consumption. | 10-31-2013 |
20140068296 | METHODS AND SYSTEMS FOR DYNAMICALLY CONTROLLING OPERATIONS IN A NON-VOLATILE MEMORY TO LIMIT POWER CONSUMPTION - Systems and methods are disclosed for limiting power consumption of a non-volatile memory (NVM) using a power limiting scheme that distributes a number of concurrent NVM operations over time. This provides a “current consumption cap” that fixes an upper limit of current consumption for the NVM, thereby eliminating peak power events. In one embodiment, power consumption of a NVM can be limited by receiving data suitable for use as a factor in adjusting a current threshold from at least one of a plurality of system sources. The current threshold can be less than a peak current capable of being consumed by the NVM and can be adjusted based on the received data. A power limiting scheme can be used that limits the number of concurrent NVM operations performed so that a cumulative current consumption of the NVM does not exceed the adjusted current threshold. | 03-06-2014 |
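Several of the filings above (20120023346 through 20120023348 and 20140068296) describe a power-limiting scheme that dispatches concurrent NVM operations only while their cumulative current draw stays under an adjustable threshold. The sketch below is a minimal illustration of that idea, not code from any of the filings; the class and method names, and the idea of a FIFO pending queue, are invented here for illustration.

```python
from collections import deque

class PowerLimiter:
    """Current-budget scheduler sketch for concurrent NVM operations.

    Operations run only while the sum of their current draws stays at or
    below the threshold; the rest wait in a FIFO queue for budget.
    """

    def __init__(self, current_threshold_ma):
        self.threshold = current_threshold_ma
        self.active = {}        # op_id -> current draw (mA) of running ops
        self.pending = deque()  # (op_id, current draw) waiting for budget

    def adjust_threshold(self, new_threshold_ma):
        # The filings adjust this from system inputs, e.g. battery or
        # thermal state reported by the host.
        self.threshold = new_threshold_ma
        self._dispatch()

    def submit(self, op_id, current_ma):
        self.pending.append((op_id, current_ma))
        self._dispatch()

    def complete(self, op_id):
        self.active.pop(op_id, None)
        self._dispatch()

    def _dispatch(self):
        # Start queued operations in order while the budget allows.
        while self.pending:
            op_id, draw = self.pending[0]
            if sum(self.active.values()) + draw > self.threshold:
                break
            self.pending.popleft()
            self.active[op_id] = draw
```

With a 100 mA threshold, three 40 mA writes submitted together would run two at a time; the third starts only when one of the first two completes, so peak current never exceeds the cap.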
20100037418 | Autonomous Coverage Robots - An autonomous coverage robot includes a body, a drive system disposed on the body, and a cleaning assembly disposed on the body and configured to engage a floor surface while the robot is maneuvered across the floor surface. The cleaning assembly includes a driven cleaning roller, a cleaning bin disposed on the body for receiving debris agitated by the cleaning roller, and an air mover. The cleaning bin includes a cleaning bin body having a cleaning bin entrance disposed adjacent to the cleaning roller and a roller scraper disposed on the cleaning bin body for engaging the cleaning roller. The cleaning bin body has a holding portion in pneumatic communication with the cleaning bin entrance, and the air mover is operable to move air into the cleaning bin entrance. | 02-18-2010 |
20120159725 | Cleaning Robot Roller Processing - A coverage robot includes a chassis, a drive system, and a cleaning assembly. The cleaning assembly includes a housing and at least one driven cleaning roller including an elongated core with end mounting features defining a central longitudinal axis of rotation, multiple floor cleaning bristles extending radially outward from the core, and at least one compliant flap extending radially outward from the core to sweep a floor surface. The flap is configured to prevent errant filaments from spooling tightly about the core to aid subsequent removal of the filaments. In another aspect, a coverage robot includes a chassis, a drive system, a controller, and a cleaning assembly. The cleaning assembly includes a housing and at least one driven cleaning roller. The coverage robot includes a roller cleaning tool carried by the chassis and configured to longitudinally traverse the roller to remove accumulated debris from the cleaning roller. | 06-28-2012 |
20130205520 | CLEANING ROBOT ROLLER PROCESSING - A coverage robot includes a chassis, a drive system, and a cleaning assembly. The cleaning assembly includes a housing and at least one driven cleaning roller including an elongated core with end mounting features defining a central longitudinal axis of rotation, multiple floor cleaning bristles extending radially outward from the core, and at least one compliant flap extending radially outward from the core to sweep a floor surface. The flap is configured to prevent errant filaments from spooling tightly about the core to aid subsequent removal of the filaments. In another aspect, a coverage robot includes a chassis, a drive system, a controller, and a cleaning assembly. The cleaning assembly includes a housing and at least one driven cleaning roller. The coverage robot includes a roller cleaning tool carried by the chassis and configured to longitudinally traverse the roller to remove accumulated debris from the cleaning roller. | 08-15-2013 |
20130206170 | COVERAGE ROBOT MOBILITY - An autonomous coverage robot includes a body having at least one outer wall, a drive system disposed on the body and configured to maneuver the robot over a work surface, and a cleaning assembly carried by the body. The cleaning assembly includes first and second cleaning rollers rotatably coupled to the body, a suction assembly having a channel disposed adjacent at least one of the cleaning rollers, and a container in fluid communication with the channel. The container is configured to collect debris drawn into the channel. The suction assembly is configured to draw debris removed from the work surface by at least one of the cleaning rollers into the channel, and the container has a wall common with the at least one outer wall of the body. | 08-15-2013 |
20140026354 | MODULAR ROBOT - A coverage robot including a chassis, multiple drive wheel assemblies disposed on the chassis, and a cleaning assembly carried by the chassis. Each drive wheel assembly including a drive wheel assembly housing, a wheel rotatably coupled to the housing, and a wheel drive motor carried by the drive wheel assembly housing and operable to drive the wheel. The cleaning assembly including a cleaning assembly housing, a cleaning head rotatably coupled to the cleaning assembly housing, and a cleaning drive motor carried by the cleaning assembly housing and operable to drive the cleaning head. The wheel assemblies and the cleaning assembly are each separately and independently removable from respective receptacles of the chassis as complete units. | 01-30-2014 |
20140053351 | CLEANING ROBOT ROLLER PROCESSING - A coverage robot includes a chassis, a drive system, and a cleaning assembly. The cleaning assembly includes a housing and at least one driven cleaning roller including an elongated core with end mounting features defining a central longitudinal axis of rotation, multiple floor cleaning bristles extending radially outward from the core, and at least one compliant flap extending radially outward from the core to sweep a floor surface. The flap is configured to prevent errant filaments from spooling tightly about the core to aid subsequent removal of the filaments. In another aspect, a coverage robot includes a chassis, a drive system, a controller, and a cleaning assembly. The cleaning assembly includes a housing and at least one driven cleaning roller. The coverage robot includes a roller cleaning tool carried by the chassis and configured to longitudinally traverse the roller to remove accumulated debris from the cleaning roller. | 02-27-2014 |
20140352103 | MODULAR ROBOT - A coverage robot including a chassis, multiple drive wheel assemblies disposed on the chassis, and a cleaning assembly carried by the chassis. Each drive wheel assembly including a drive wheel assembly housing, a wheel rotatably coupled to the housing, and a wheel drive motor carried by the drive wheel assembly housing and operable to drive the wheel. The cleaning assembly including a cleaning assembly housing, a cleaning head rotatably coupled to the cleaning assembly housing, and a cleaning drive motor carried by the cleaning assembly housing and operable to drive the cleaning head. The wheel assemblies and the cleaning assembly are each separately and independently removable from respective receptacles of the chassis as complete units. | 12-04-2014 |
20100094582 | METHOD FOR ESTIMATING TEMPERATURE AT A CRITICAL POINT - Methods and apparatuses are disclosed to estimate temperature at one or more critical points in a data processing system comprising modeling a steady state temperature portion of a thermal model at the one or more critical points using regression analysis; modeling the transient temperature portion of the thermal model at the one or more critical points using a filtering algorithm; and generating a thermal model at the one or more critical points by combining the steady state temperature portion of the thermal model with the transient temperature portion of the thermal model. The thermal model may then be used to estimate an instantaneous temperature at the one or more critical points or to predict a future temperature at the one or more critical points. | 04-15-2010 |
20100235012 | AUTOMATIC ADJUSTMENT OF THERMAL REQUIREMENT - Methods and apparatuses to automatically adjust a thermal requirement of a data processing system are described. One or more conditions associated with a data processing system are detected. A temperature requirement for the data processing system is determined based on the one or more conditions. The performance of the data processing system may be throttled to maintain a temperature of the data processing system below the temperature requirement. Detecting the one or more conditions associated with the data processing system may include determining a location of the data processing system based on a measured motion, a state of a peripheral device, a position of one portion of the data processing system (e.g., a lid) relative to another portion of the data processing system (e.g., a bottom portion), a type of application operating on the data processing system, or any combination thereof. | 09-16-2010 |
20110301777 | ADJUSTING THE THERMAL BEHAVIOR OF A COMPUTING SYSTEM USING INDIRECT INFORMATION ABOUT AMBIENT TEMPERATURE - A computing system has a thermal manager that changes a power consuming activity limit in the device based on an estimate of temperature of a target location in the system. There are several temperature sensors that are not at the target location. An estimator computes the target location temperature estimate using a thermal model and, as input to the thermal model, data from the sensors. The thermal model produces different estimates of the target location temperature at different ambient temperatures but without computing or measuring the ambient temperatures. Other embodiments are also described and claimed. | 12-08-2011 |
20130060510 | METHOD FOR ESTIMATING TEMPERATURE AT A CRITICAL POINT - Methods and apparatuses are disclosed to estimate temperature at one or more critical points in a data processing system comprising modeling a steady state temperature portion of a thermal model at the one or more critical points using regression analysis; modeling the transient temperature portion of the thermal model at the one or more critical points using a filtering algorithm; and generating a thermal model at the one or more critical points by combining the steady state temperature portion of the thermal model with the transient temperature portion of the thermal model. The thermal model may then be used to estimate an instantaneous temperature at the one or more critical points or to predict a future temperature at the one or more critical points. | 03-07-2013 |
20130179000 | AUTOMATIC ADJUSTMENT OF THERMAL REQUIREMENT - Methods and apparatuses to automatically adjust a thermal requirement of a data processing system are described. One or more conditions associated with a data processing system are detected. A temperature requirement for the data processing system is determined based on the one or more conditions. The performance of the data processing system may be throttled to maintain a temperature of the data processing system below the temperature requirement. Detecting the one or more conditions associated with the data processing system may include determining a location of the data processing system based on a measured motion, a state of a peripheral device, a position of one portion of the data processing system (e.g., a lid) relative to another portion of the data processing system (e.g., a bottom portion), a type of application operating on the data processing system, or any combination thereof. | 07-11-2013 |
20130228632 | CONTROLLING A COOLING SYSTEM FOR A COMPUTER SYSTEM - The disclosed embodiments provide an apparatus that controls a cooling system for a computer system. During operation, the apparatus monitors a temperature signal from the computer system to determine a trend for the temperature signal. Then, a filter parameter for a trend-based filter is selected based on the trend. Next, the temperature signal is filtered using the trend-based filter to generate a filtered temperature signal which is then passed through a controller to generate a control signal for the cooling system. | 09-05-2013 |
20140082383 | PREDICTING USER INTENT AND FUTURE INTERACTION FROM APPLICATION ACTIVITIES - Techniques for power management of a portable device are described herein. According to one embodiment, a user agent of an operating system executed within a portable device is configured to monitor activities of programs running within the portable device and to predict user intent at a given point in time and possible subsequent user interaction with the portable device based on the activities of the programs. Power management logic is configured to adjust power consumption of the portable device based on the predicted user intent and subsequent user interaction of the portable device, such that remaining power capacity of a battery of the portable device satisfies intended usage of the portable device. | 03-20-2014 |
20140082384 | INFERRING USER INTENT FROM BATTERY USAGE LEVEL AND CHARGING TRENDS - Techniques for power management of a portable device are described herein. According to one embodiment, a user agent of an operating system executed within a portable device is configured to monitor daily battery usage of a battery of the portable device, to capture the daily battery charging pattern of the battery of the portable device, and to infer user intent of utilizing the portable device at a given point in time based on a battery operating condition at the point in time in view of the daily battery usage and the daily battery charging pattern. Power management logic is configured to perform power management actions based on the user intent. | 03-20-2014 |
20140164757 | CLOSED LOOP CPU PERFORMANCE CONTROL - The invention provides a technique for targeted scaling of the voltage and/or frequency of a processor included in a computing device. One embodiment involves scaling the voltage/frequency of the processor based on the number of frames per second being input to a frame buffer in order to reduce or eliminate choppiness in animations shown on a display of the computing device. Another embodiment of the invention involves scaling the voltage/frequency of the processor based on a utilization rate of the GPU in order to reduce or eliminate any bottleneck caused by slow issuance of instructions from the CPU to the GPU. Yet another embodiment of the invention involves scaling the voltage/frequency of the CPU based on specific types of instructions being executed by the CPU. Further embodiments include scaling the voltage and/or frequency of a CPU when the CPU executes workloads that have characteristics of traditional desktop/laptop computer applications. | 06-12-2014 |
20140364104 | Push Notification Initiated Background Updates - In some implementations, a mobile device can be configured to monitor environmental, system and user events. The occurrence of one or more events can trigger adjustments to system settings. In some implementations, the mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or accessing a network interface, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device to preserve a high quality user experience. | 12-11-2014 |
20140365793 | THERMAL MANAGEMENT OF AN INTEGRATED CIRCUIT - Methods for thermal management of an integrated circuit are disclosed. In particular, a dual control loop, having a first control loop and a second control loop, is used to maintain the temperature of an integrated circuit at a first temperature and a second temperature, respectively. In order to prevent the integrated circuit from overheating during periods of rapid temperature increase, the second control loop may be configured to control temperature at the second temperature below the specification limit of the integrated circuit by reducing power to the integrated circuit. The second control loop samples and maintains temperature of the integrated circuit at time intervals relatively faster than that of the first control loop. However, the second control loop is configured to release control to the first control loop when the temperature of the integrated circuit is reduced. The first control loop may then control power to the integrated circuit. | 12-11-2014 |
20140366041 | Dynamic Adjustment of Mobile Device Based on User Activity - In some implementations, a mobile device can be configured to monitor environmental, system and user events. The occurrence of one or more events can trigger adjustments to system settings. In some implementations, the mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or accessing a network interface, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device to preserve a high quality user experience. | 12-11-2014 |
20140366042 | Initiating Background Updates Based on User Activity - In some implementations, a mobile device can be configured to monitor environmental, system and user events. The occurrence of one or more events can trigger adjustments to system settings. In some implementations, the mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or accessing a network interface, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device to preserve a high quality user experience. | 12-11-2014 |
20150347204 | Dynamic Adjustment of Mobile Device Based on System Events - In some implementations, a mobile device can be configured to monitor environmental, system and user events associated with the mobile device and/or a peer device. The occurrence of one or more events can trigger adjustments to system settings. The mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or communicating with a peer device, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device and/or a peer device to ensure a high quality user experience. | 12-03-2015 |
20150347205 | Dynamic Adjustment of Mobile Device Based on Adaptive Prediction of System Events - In some implementations, a mobile device can be configured to monitor environmental, system and user events associated with the mobile device and/or a peer device. The occurrence of one or more events can trigger adjustments to system settings. The mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or communicating with a peer device, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device and/or a peer device to ensure a high quality user experience. | 12-03-2015 |
20150347519 | MACHINE LEARNING BASED SEARCH IMPROVEMENT - Systems and methods are disclosed for improving search results returned to a user from one or more search domains, utilizing query features learned locally on the user's device. A search engine can receive, analyze and forward query results from multiple search domains and pass the query results to a client device. A search engine can determine a feature by analyzing query results, generate a predictor for the feature, instruct a client device to use the predictor to train on the feature, and report back to the search engine on training progress. A search engine can instruct a first and second set of client devices to train on set A and B of predictors, respectively, and report back training progress to the search engine. A client device can store search session context and share the context with a search engine between sessions with one or more search engines. A synchronization system can synchronize local predictors between multiple client devices of a user. | 12-03-2015 |
20150347594 | MULTI-DOMAIN SEARCH ON A COMPUTING DEVICE - Systems and methods are disclosed for improving search results returned to a user from one or more domains, utilizing query features learned locally on the user's device. One or more domains can inform a computing device of one or more features related to a search query upon which the computing device can apply local learning. A local search system can include a local database, a local search history and feedback history database, and a local learning system to identify features about query terms. The features can be learned from the user's interaction with both local search results and remote search results, without sending the user interaction information or other user identification information to a remote search engine. A locally learned feature can be used to extend a query, bias a query term, or filter query results. | 12-03-2015 |
20150350885 | Dynamic Adjustment of Mobile Device Based on Thermal Conditions - In some implementations, a mobile device can be configured to monitor environmental, system and user events associated with the mobile device and/or a peer device. The occurrence of one or more events can trigger adjustments to system settings. The mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or communicating with a peer device, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device and/or a peer device to ensure a high quality user experience. | 12-03-2015 |
20150351033 | Dynamic Adjustment of Mobile Device Based on Voter Feedback - In some implementations, a mobile device can be configured to monitor environmental, system and user events associated with the mobile device and/or a peer device. The occurrence of one or more events can trigger adjustments to system settings. The mobile device can be configured to keep frequently invoked applications up to date based on a forecast of predicted invocations by the user. In some implementations, the mobile device can receive push notifications associated with applications that indicate that new content is available for the applications to download. The mobile device can launch the applications associated with the push notifications in the background and download the new content. In some implementations, before running an application or communicating with a peer device, the mobile device can be configured to check energy and data budgets and environmental conditions of the mobile device and/or a peer device to ensure a high quality user experience. | 12-03-2015 |
20150351037 | ADAPTIVE BATTERY LIFE EXTENSION - According to one embodiment, a first battery number is determined representing a battery condition of a battery of a mobile device using a predictive model, where the predictive model is configured to predict future battery conditions based on a past battery usage of the battery. A second battery number is determined representing the battery condition using a drain model, where the drain model is configured to predict a future battery discharge rate based on a past battery discharge rate. A third battery number is determined representing the battery condition based on a current battery level corresponding to a remaining life of the battery at the point in time. Power management logic performs a power management action based on the battery condition derived from at least one of the first battery number, the second battery number and the third battery number. | 12-03-2015 |
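The three battery numbers named above can be combined into a single condition that drives a power management action. A minimal sketch follows; the conservative `min` combination and the threshold value are assumptions, since the abstract only says the action is based on at least one of the three numbers.

```python
# Hypothetical sketch of the three estimates and an assumed combination rule.

def remaining_minutes_drain_model(current_level_pct: float,
                                  past_drain_pct_per_min: float) -> float:
    """Drain model: project the past discharge rate forward."""
    return current_level_pct / past_drain_pct_per_min

def battery_condition(predictive_est: float, drain_est: float,
                      level_est: float) -> float:
    """Combine the three numbers conservatively (assumed policy)."""
    return min(predictive_est, drain_est, level_est)

def power_action(minutes_left: float, low_threshold: float = 60.0) -> str:
    """Power management action keyed off the derived condition."""
    return "throttle_background_activity" if minutes_left < low_threshold else "normal"

est = battery_condition(predictive_est=180.0,
                        drain_est=remaining_minutes_drain_model(40.0, 0.5),
                        level_est=120.0)
action = power_action(est)
```

The drain model dominates here (40% at 0.5%/min gives 80 minutes), so the combined condition is 80 minutes and no throttling is triggered.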
20160058331 | PACING ACTIVITY DATA OF A USER - Pacing activity data of a user may be managed. For example, historical activity data of a user corresponding to a particular time of a day prior to a current day may be received. Additionally, a user interface configured to display an activity goal of the user may be generated and the user interface may be provided for presentation. In some aspects, the user interface may be configured to display a first indicator that identifies cumulative progress towards the activity goal and a second indicator that identifies predicted cumulative progress towards the activity goal. The cumulative progress may be calculated based on monitored activity from a start of the current day to the particular time of the current day and the predicted cumulative progress may be calculated based on the received historical activity data corresponding to the particular time of the day prior to the current day. | 03-03-2016 |
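The two indicators described above can be computed as in the sketch below. Function names and the averaging rule for the historical prediction are assumptions for illustration, not the patented method.

```python
# Hypothetical sketch of the two pacing indicators.
# `monitored` holds activity samples from the start of the day up to now;
# `historical` holds prior days' cumulative totals at the same time of day.

def cumulative_progress(monitored: list) -> float:
    """First indicator: activity accumulated so far today."""
    return sum(monitored)

def predicted_progress(historical: list) -> float:
    """Second indicator: average of prior days' totals at this time (assumed rule)."""
    return sum(historical) / len(historical)

goal = 600.0  # e.g., an activity goal in active calories (illustrative)
first_indicator = cumulative_progress([120.0, 80.0, 50.0])
second_indicator = predicted_progress([240.0, 260.0, 280.0])
on_pace = first_indicator >= second_indicator
```

Comparing the two indicators tells the user whether today's progress is ahead of or behind the typical pace at this time of day.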
20100036846 | METHOD AND SYSTEM FOR OPTIMIZING ROW LEVEL SECURITY IN DATABASE SYSTEMS - One embodiment of the present invention provides a system that implements a security policy in a database. During operation, the system receives a request associated with a set of objects in the database. Next, the system obtains a set of access control lists (ACLs) associated with the database, wherein a respective ACL specifies one or more access privileges associated with a user or user group, and wherein a respective ACL is not specific to a particular object in the database. The system then evaluates the ACLs to obtain a set of ACL results associated with the request and processes the request by applying the set of ACL results to the objects without evaluating the ACLs repeatedly for each of the objects. | 02-11-2010 |
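The optimization above (evaluate each ACL once per request, then reuse the results across all rows) can be sketched as follows. The ACL representation and all names are hypothetical.

```python
# Hypothetical sketch: because ACLs are not object-specific, each ACL is
# evaluated once per request and the cached results are applied to every row.

def evaluate_acl(acl: dict, user: str, groups: set) -> bool:
    """One evaluation of a (non-object-specific) ACL for the requester."""
    return user in acl["users"] or bool(groups & acl["groups"])

def filter_rows(rows, acls, user, groups):
    # Evaluate each ACL exactly once for this request...
    acl_results = {name: evaluate_acl(a, user, groups) for name, a in acls.items()}
    # ...then apply the cached results to all rows, with no per-row re-evaluation.
    return [r for r in rows if acl_results[r["acl"]]]

acls = {"hr": {"users": {"alice"}, "groups": {"hr_admins"}},
        "eng": {"users": set(), "groups": {"engineers"}}}
rows = [{"id": 1, "acl": "hr"}, {"id": 2, "acl": "eng"}, {"id": 3, "acl": "eng"}]
visible = filter_rows(rows, acls, user="bob", groups={"engineers"})
```

With a million rows referencing a handful of ACLs, the per-request cost drops from one ACL evaluation per row to one per distinct ACL.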
20100278446 | STRUCTURE OF HIERARCHICAL COMPRESSED DATA STRUCTURE FOR TABULAR DATA - A highly flexible and extensible structure is provided for physically storing tabular data. The structure, referred to as a compression unit, may be used to physically store tabular data that logically resides in any type of table-like structure. According to one embodiment, compression units are recursive. Thus, a compression unit may have a “parent” compression unit to which it belongs, and may have one or more “child” compression units that belong to it. In one embodiment, compression units include metadata that indicates how the tabular data is stored within them. The metadata for a compression unit may indicate, for example, whether the data within the compression unit is stored in row-major or column-major format (or some combination thereof), the order of the columns within the compression unit (which may differ from the logical order of the columns dictated by the definition of their logical container), a compression technique for the compression unit, the child compression units (if any), etc. | 11-04-2010 |
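The recursive structure and per-unit metadata described above can be sketched as a small data type. Field names are illustrative, not taken from the patent.

```python
# Hypothetical sketch of a recursive compression unit carrying its own
# storage metadata plus optional child units.

from dataclasses import dataclass, field

@dataclass
class CompressionUnit:
    column_major: bool                  # row-major vs column-major storage
    column_order: list                  # may differ from the logical column order
    technique: str                      # compression technique for this unit
    children: list = field(default_factory=list)  # child compression units
    data: bytes = b""

    def total_units(self) -> int:
        """Count this unit plus all descendants (the recursive structure)."""
        return 1 + sum(c.total_units() for c in self.children)

child = CompressionUnit(column_major=True, column_order=["b", "a"], technique="rle")
parent = CompressionUnit(column_major=False, column_order=["a", "b"],
                         technique="none", children=[child])
```

Because each unit records its own major order and technique, a parent stored row-major can hold column-major, differently compressed children.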
20100281004 | STORING COMPRESSION UNITS IN RELATIONAL TABLES - A database server stores compressed units in data blocks of a database. A table (or data from a plurality of rows thereof) is first compressed into a “compression unit” using any of a wide variety of compression techniques. The compression unit is then stored in one or more data block rows across one or more data blocks. As a result, a single data block row may comprise compressed data for a plurality of table rows, as encoded within the compression unit. Storage of compression units in data blocks maintains compatibility with existing data block-based databases, thus allowing the use of compression units in preexisting databases without modification to the underlying format of the database. The compression units may, for example, co-exist with uncompressed tables. Various techniques allow a database server to optimize access to data in the compression unit, so that the compression is virtually transparent to the user. | 11-04-2010 |
20100281079 | COMPRESSION ANALYZER - Techniques are described herein for automatically selecting the compression techniques to be used on tabular data. A compression analyzer gives users high-level control over the selection process without requiring the user to know details about the specific compression techniques that are available to the compression analyzer. Users are able to specify, for a given set of data, a “balance point” along the spectrum between “maximum performance” and “maximum compression”. The point thus selected is used by the compression analyzer in a variety of ways. For example, in one embodiment, the compression analyzer uses the user-specified balance point to determine which of the available compression techniques qualify as “candidate techniques” for the given set of data. The compression analyzer selects the compression technique to use on a set of data by actually testing the candidate compression techniques against samples from the set of data. After testing the candidate compression techniques against the samples, the resulting compression ratios are compared. The compression technique to use on the set of data is then selected based, in part, on the compression ratios achieved during the compression tests performed on the sample data. | 11-04-2010 |
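The candidate-testing loop above can be sketched with two zlib levels standing in for distinct techniques. The balance-point scoring formula below (measured ratio traded against an assumed relative cost) is an assumption; the abstract does not publish a formula.

```python
# Hypothetical sketch: compress a sample with each candidate technique,
# then select using a user-specified balance point in [0, 1].

import zlib

CANDIDATES = {
    "fast": lambda b: zlib.compress(b, 1),   # stand-in for a fast technique
    "best": lambda b: zlib.compress(b, 9),   # stand-in for a dense technique
}
# Assumed relative costs (higher = slower); illustrative only.
COSTS = {"fast": 1.0, "best": 3.0}

def pick_technique(sample: bytes, balance: float) -> str:
    """balance=0.0 -> maximum performance, balance=1.0 -> maximum compression."""
    def score(name):
        ratio = len(sample) / len(CANDIDATES[name](sample))
        return balance * ratio - (1.0 - balance) * COSTS[name]
    return max(CANDIDATES, key=score)

sample = b"ab" * 4096
perf_choice = pick_technique(sample, balance=0.0)   # pure performance
dense_choice = pick_technique(sample, balance=1.0)  # pure compression ratio
```

At balance 0.0 only the cost term matters, so the fast technique always wins; as the balance moves toward 1.0, the measured ratios on the sample dominate the choice.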
20110029569 | DDL AND DML SUPPORT FOR HYBRID COLUMNAR COMPRESSED TABLES - Techniques for storing and manipulating tabular data are provided. According to one embodiment, a user may control whether tabular data is stored in row-major or column-major format. Furthermore, the user may control the level of data compression to achieve an optimal balance between query performance and compression ratios. Tabular data from within the same table may be stored in both column-major and row-major format and compressed at different levels. In addition, tabular data can migrate between column-major format and row-major format in response to various events. For example, in response to a request to update or lock a row stored in column-major format, the row may be migrated and subsequently stored in row-major format. In one embodiment, table partitions are used to enhance data compression techniques. For example, compression tests are performed on a representative table partition, and a compression map is generated and applied to other table partitions. | 02-03-2011 |
20110296412 | APPROACHES FOR SECURING AN INTERNET ENDPOINT USING FINE-GRAINED OPERATING SYSTEM VIRTUALIZATION - Approaches for executing untrusted software on a client without compromising the client using micro-virtualization to execute untrusted software in isolated contexts. A template for instantiating a virtual machine on a client is identified in response to receiving a request to execute an application. After the template is identified, without human intervention, a virtual machine is instantiated, using the template, in which the application is to be executed. The template may be selected from a plurality of templates based on the nature of the request, as each template describes characteristics of a virtual machine suitable for a different type of activity. Selected resources such as files are displayed to the virtual machines according to user and organization policies and controls. When the client determines that the application has ceased to execute, the client ceases execution of the virtual machine without human intervention. | 12-01-2011 |
20120117038 | LAZY OPERATIONS ON HIERARCHICAL COMPRESSED DATA STRUCTURE FOR TABULAR DATA - A highly flexible and extensible structure is provided for physically storing tabular data. The structure, referred to as a compression unit, may be used to physically store tabular data that logically resides in any type of table-like structure. Techniques are employed to avoid changing tabular data within existing compression units. Deleting tabular data within compression units is avoided by merely tracking deletion requests, without actually deleting the data. Inserting new tabular data into existing compression units is avoided by storing the new data external to the compression units. If the number of deletions exceeds a threshold, and/or the number of new inserts exceeds a threshold, new compression units may be generated. When new compression units are generated, the previously-existing compression units may be discarded to reclaim storage, or retained to allow reconstruction of prior states of the tabular data. | 05-10-2012 |
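The lazy-modification scheme above (record deletions without deleting, store inserts externally, rebuild past a threshold) can be sketched as a small class. Thresholds, field names, and the list-of-rows stand-in for the compressed payload are all illustrative assumptions.

```python
# Hypothetical sketch of lazy operations on an immutable compression unit.

class LazyCompressionUnit:
    def __init__(self, rows, rebuild_threshold=2):
        self._rows = list(rows)            # stand-in for the compressed payload
        self.deleted = set()               # tracked deletions; data stays in place
        self.external_inserts = []         # new rows stored outside the unit
        self.rebuild_threshold = rebuild_threshold

    def delete(self, index):
        self.deleted.add(index)            # no physical delete inside the unit

    def insert(self, row):
        self.external_inserts.append(row)  # the unit itself is never modified

    def needs_rebuild(self):
        return (len(self.deleted) > self.rebuild_threshold or
                len(self.external_inserts) > self.rebuild_threshold)

    def rebuild(self):
        """Generate a new unit from live rows plus the external inserts."""
        live = [r for i, r in enumerate(self._rows) if i not in self.deleted]
        return LazyCompressionUnit(live + self.external_inserts,
                                   self.rebuild_threshold)

cu = LazyCompressionUnit(["r0", "r1", "r2", "r3"])
cu.delete(1); cu.delete(3); cu.delete(0)
fresh = cu.rebuild() if cu.needs_rebuild() else cu
```

Retaining `cu` after the rebuild would allow prior states of the tabular data to be reconstructed, as the abstract notes; discarding it reclaims storage.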
20120143833 | STRUCTURE OF HIERARCHICAL COMPRESSED DATA STRUCTURE FOR TABULAR DATA - A highly flexible and extensible structure is provided for physically storing tabular data. The structure, referred to as a compression unit, may be used to store tabular data that logically resides in any type of table-like structure. According to one embodiment, compression units are recursive. Thus, a compression unit may have a “parent” compression unit to which it belongs, and may have one or more “child” compression units that belong to it. In one embodiment, compression units include metadata that indicates how the tabular data is stored within them. The metadata for a compression unit may indicate, for example, whether the data is stored in row-major or column-major format, the order of the columns within the compression unit (which may differ from the logical order of the columns dictated by the definition of their logical container), a compression technique for the compression unit, the child compression units (if any), etc. | 06-07-2012 |
20120296883 | Techniques For Automatic Data Placement With Compression And Columnar Storage - For automatic data placement of database data, a plurality of access-tracking data is maintained. The plurality of access-tracking data respectively corresponds to a plurality of data rows that are managed by a database server. While the database server is executing normally, it is automatically determined whether a data row, which is stored in first one or more data blocks, has been recently accessed based on the access-tracking data that corresponds to that data row. After determining that the data row has been recently accessed, the data row is automatically moved from the first one or more data blocks to one or more hot data blocks that are designated for storing those data rows, from the plurality of data rows, that have been recently accessed. | 11-22-2012 |
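The placement policy above can be sketched with per-row access timestamps deciding which block a row lands in. The recency window and the two-list block model are assumptions for illustration.

```python
# Hypothetical sketch: access-tracking data per row decides whether the row
# counts as recently accessed and is placed in a hot data block.

def recently_accessed(last_access: float, now: float, window: float) -> bool:
    return now - last_access <= window

def place_rows(rows, access_times, now, window=60.0):
    """Partition rows into hot and cold blocks based on tracked access."""
    hot, cold = [], []
    for row in rows:
        if recently_accessed(access_times[row], now, window):
            hot.append(row)    # designated for recently accessed rows
        else:
            cold.append(row)
    return hot, cold

access = {"r1": 95.0, "r2": 10.0, "r3": 90.0}
hot_block, cold_block = place_rows(["r1", "r2", "r3"], access, now=100.0)
```

In the abstract's scheme this partitioning runs while the database server executes normally, so hot rows migrate without taking the system offline.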
20130036101 | Compression Analyzer - Techniques are described herein for automatically selecting the compression techniques to be used on tabular data. A compression analyzer gives users high-level control over the selection process without requiring the user to know details about the specific compression techniques that are available to the compression analyzer. Users are able to specify, for a given set of data, a “balance point” along the spectrum between “maximum performance” and “maximum compression”. The point thus selected is used by the compression analyzer in a variety of ways. For example, in one embodiment, the compression analyzer uses the user-specified balance point to determine which of the available compression techniques qualify as “candidate techniques” for the given set of data. The compression analyzer selects the compression technique to use on a set of data by actually testing the candidate compression techniques against samples from the set of data. After testing the candidate compression techniques against the samples, the resulting compression ratios are compared. The compression technique to use on the set of data is then selected based, in part, on the compression ratios achieved during the compression tests performed on the sample data. | 02-07-2013 |
20130055256 | APPROACHES FOR AUTOMATED MANAGEMENT OF VIRTUAL MACHINES FOR RUNNING UNTRUSTED CODE SAFELY - Approaches for transferring data to a client by safely receiving the data in one or more virtual machines. In response to the client determining that digital content, originating from an external source, is to be received or processed by the client, the client identifies, without human intervention, one or more virtual machines, executing or to be executed on the client, into which the digital content is to be stored. In doing so, the client may consult policy data to determine a placement policy, a containment policy, and a persistence policy for any virtual machine to receive the digital content. In this way, digital content, such as executable code or interpreted data, of unknown trustworthiness may be safely received by the client without the possibility of any malicious code therein causing any undesirable consequence to the client. | 02-28-2013 |
20130132691 | APPROACHES FOR EFFICIENT PHYSICAL TO VIRTUAL DISK CONVERSION - Approaches for providing a guest operating system to a virtual machine. A read-only copy of one or more disk volumes, including a boot volume, is created. A copy of a master boot record (MBR) for the one or more disk volumes is also stored. The read-only copy may be, but need not be, made using a Volume Shadow Copy Service (VSS). A virtual disk, for use by the virtual machine, is created based on the read-only copy of the one or more disk volumes and the copy of the master boot record (MBR), wherein the virtual disk comprises the guest operating system used by the virtual machine. In this way, a single installed operating system may provide both the host operating system and the guest operating system. | 05-23-2013 |
20130191924 | Approaches for Protecting Sensitive Data Within a Guest Operating System - Approaches for preventing unauthorized access of sensitive data within an operating system (OS), e.g., a guest OS used by a virtual machine. Dummy data may be written over physical locations on disk where sensitive data is stored, thereby preventing a malicious program from accessing the sensitive data. Alternately, a delete operation may be performed on sensitive data within an OS, and thereafter the OS is converted into a serialized format to expunge the deleted data. The serialized OS is converted into a deserialized form to facilitate its use. Optionally, a data structure may be updated to identify where sensitive data is located within an OS. When a request to access a portion of the OS is received, the data structure is consulted to determine whether the requested portion contains sensitive data, and if so, dummy data is returned to the requestor without consulting the requested portion of the OS. | 07-25-2013 |
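The first approach described above, overwriting the physical locations of sensitive data with dummy data, can be sketched as follows. An in-memory `bytearray` stands in for the disk image; the offsets and names are illustrative.

```python
# Hypothetical sketch: write dummy bytes over the regions holding sensitive
# data so later reads of those locations return nothing useful.

def scrub(disk: bytearray, regions: list, fill: int = 0x00) -> None:
    """Overwrite each (offset, length) region with dummy data, in place."""
    for offset, length in regions:
        disk[offset:offset + length] = bytes([fill]) * length

disk = bytearray(b"hdr|SECRETKEY|tail")
scrub(disk, [(4, 9)])   # region where the sensitive data lives (illustrative)
```

The same idea underlies the data-structure variant in the abstract: a read request whose range overlaps a recorded sensitive region is answered with dummy data instead of the stored bytes.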
20140074805 | STORING COMPRESSION UNITS IN RELATIONAL TABLES - A database server stores compressed units in data blocks of a database. A table (or data from a plurality of rows thereof) is first compressed into a “compression unit” using any of a wide variety of compression techniques. The compression unit is then stored in one or more data block rows across one or more data blocks. As a result, a single data block row may comprise compressed data for a plurality of table rows, as encoded within the compression unit. Storage of compression units in data blocks maintains compatibility with existing data block-based databases, thus allowing the use of compression units in preexisting databases without modification to the underlying format of the database. The compression units may, for example, co-exist with uncompressed tables. Various techniques allow a database server to optimize access to data in the compression unit, so that the compression is virtually transparent to the user. | 03-13-2014 |
20140380315 | Transferring Files Using A Virtualized Application - Approaches for transferring a file using a virtualized application. A virtualized application executes within a virtual machine residing on a physical machine. When the virtualized application is instructed to download a file stored external to the physical machine, the virtualized application displays an interface which enables at least a portion of a file system, maintained by a host OS, to be browsed while preventing files stored within the virtual machine to be browsed. Upon the virtualized application receiving input identifying a target location within the file system, the virtualized application stores the file at the target location. The virtualized application may also upload a file stored on the physical machine using an interface which enables at least a portion of a file system of a host OS to be browsed while preventing files in the virtual machine to be browsed. | 12-25-2014 |
20150032763 | QUERY AND EXADATA SUPPORT FOR HYBRID COLUMNAR COMPRESSED DATA - A method and apparatus is provided for optimizing queries received by a database system that relies on an intelligent data storage server to manage storage for the database system. Storing compression units in hybrid columnar format, the storage manager evaluates simple predicates and only returns data blocks containing rows that satisfy those predicates. The returned data blocks are not necessarily stored persistently on disk. That is, the storage manager is not limited to returning disk block images. The hybrid columnar format enables optimizations that provide better performance when processing typical database workloads including both fetching rows by identifier and performing table scans. | 01-29-2015 |
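The storage-side predicate filtering described above can be sketched as a scan that returns only blocks containing at least one qualifying row. The block model and names are hypothetical; real returned blocks, as the abstract notes, may be synthesized rather than on-disk images.

```python
# Hypothetical sketch: the storage manager evaluates a simple predicate and
# returns only data blocks that contain at least one satisfying row.

def scan(blocks, predicate):
    """Return the blocks containing at least one row that satisfies the predicate."""
    return [blk for blk in blocks if any(predicate(row) for row in blk)]

blocks = [
    [{"id": 1, "amt": 5}, {"id": 2, "amt": 50}],   # block A: one qualifying row
    [{"id": 3, "amt": 7}],                          # block B: none
]
qualifying = scan(blocks, lambda r: r["amt"] > 20)
```

Filtering at the storage tier means blocks with no qualifying rows never cross the wire to the database server, which is the payoff of pushing simple predicates down.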
20150143374 | SECURING AN INTERNET ENDPOINT USING FINE-GRAINED OPERATING SYSTEM VIRTUALIZATION - Approaches for executing untrusted software on a client without compromising the client using micro-virtualization to execute untrusted software in isolated contexts. A template for instantiating a virtual machine on a client is identified in response to receiving a request to execute an application. After the template is identified, without human intervention, a virtual machine is instantiated, using the template, in which the application is to be executed. The template may be selected from a plurality of templates based on the nature of the request, as each template describes characteristics of a virtual machine suitable for a different type of activity. When the client determines that the application has ceased to execute, the client ceases execution of the virtual machine without human intervention. | 05-21-2015 |
20150149419 | Techniques for Automatic Data Placement with Compression and Columnar Storage - For automatic data placement of database data, a plurality of access-tracking data is maintained. The plurality of access-tracking data respectively corresponds to a plurality of data rows that are managed by a database server. While the database server is executing normally, it is automatically determined whether a data row, which is stored in first one or more data blocks, has been recently accessed based on the access-tracking data that corresponds to that data row. After determining that the data row has been recently accessed, the data row is automatically moved from the first one or more data blocks to one or more hot data blocks that are designated for storing those data rows, from the plurality of data rows, that have been recently accessed. | 05-28-2015 |