Patent application number | Description | Published |
20110282881 | METHODS AND SYSTEMS FOR DETERMINING CANDIDATES FOR A CUSTOM INDEX IN A MULTI-TENANT DATABASE ENVIRONMENT - Methods and systems are described for determining candidates for a custom index in a multi-tenant database environment. In one embodiment, a method includes capturing a query that is directed to a multi-tenant database; determining whether the captured query is a candidate for an additional filter; if the query is a candidate, determining the operators it uses, determining the data types of the database it uses, and determining whether a current filter exists for those operators and data types; selecting the captured query based on the determined operators, data types, and current filters; and generating a custom index for the selected query. | 11-17-2011
20120023375 | GENERATING PERFORMANCE ALERTS - A method for generating performance alerts in a database system. The method includes collecting a predefined set of performance data, and comparing the performance data to one or more predefined thresholds. The method also includes determining if any of the performance data exceeds the one or more predefined thresholds, and generating an alert if any of the data exceeds one of the predefined thresholds. | 01-26-2012 |
20130007062 | TRUNCATING DATA ASSOCIATED WITH OBJECTS IN A MULTI-TENANT DATABASE - An embodiment of a multi-tenant database system includes a multi-tenant database, an entity definition table, and a data processing engine. The database has objects for multiple tenants, including an existing object for a designated tenant. Each entry in the existing object has a respective entity identifier. The definition table has entries for the database objects, including a metadata entry for the existing object. This metadata entry has a tenant identifier for the designated tenant, an entity name for the existing object, and an old key prefix for the existing object. Each entity identifier of the existing object begins with the old key prefix. The engine performs a data truncation operation on the existing object by updating the metadata entry to replace the old key prefix with a new key prefix. This results in an updated object that is identified by the new key prefix and the tenant identifier. | 01-03-2013 |
20130018890 | CREATING A CUSTOM INDEX IN A MULTI-TENANT DATABASE ENVIRONMENT - Methods and systems are described for creating a custom index in a multi-tenant database environment. In one embodiment, a method includes obtaining a query for a multi-tenant database that is recommended as a candidate for creating an additional filter, evaluating the query against criteria to determine whether to select the query for creating the additional filter, and, if the query is selected, creating the additional filter for the query. | 01-17-2013
20130054637 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR CALCULATING A SIZE OF AN ENTITY - In accordance with embodiments, there are provided mechanisms and methods for calculating a size of an entity. These mechanisms and methods for calculating a size of an entity can enable optimized data analysis, improved system resource knowledge, increased efficiency, etc. | 02-28-2013 |
20150127680 | PROTECTED HANDLING OF DATABASE QUERIES - Embodiments regard protected handling of database queries. An embodiment of a method for querying database system views and tables includes: receiving a user query from a user, the user query being directed to one or both of a view and a table of a database, wherein the user is not a database administrator; parsing the user query with a query parser to identify elements of the user query, wherein parsing the query includes determining whether the query meets certain database access criteria; automatically generating a database query based on the parsing of the user query, the generation of the database query including generating a database query that is limited by the database access criteria; accessing the one or both of the view and the table using the generated database query, wherein the access is limited to read-only access; and obtaining a result of the access of the one or both of the view and table. | 05-07-2015
20150254286 | TRUNCATING DATA ASSOCIATED WITH OBJECTS IN A MULTI-TENANT DATABASE - An exemplary embodiment of a multi-tenant database system is provided. The system includes a multi-tenant database, an entity definition table, and a data processing engine. The database has database objects for multiple tenants, including an existing object for a designated tenant. Each entry in the existing object has a respective entity identifier. The definition table has metadata entries for the database objects, including a metadata entry for the existing object. This metadata entry has a tenant identifier for the designated tenant, an entity name for the existing object, and an old key prefix for the existing object. Each entity identifier of the existing object begins with the old key prefix. The engine performs a data truncation operation on the existing object by updating the metadata entry to replace the old key prefix with a new key prefix. This results in an updated object that is identified by the new key prefix and the tenant identifier. | 09-10-2015 |
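The data truncation described in entries 20130007062 and 20150254286 above hinges on a single metadata update: because every entity identifier in an object begins with that object's key prefix, replacing the prefix in the entity definition table makes all existing entries unreachable at once, without touching row data. Below is a minimal sketch of that idea; the table layout, field names, and prefix values are invented for illustration and are not taken from the abstracts.

```python
# Invented schema: a metadata entry per (tenant, entity), and a shared row
# store keyed by entity identifier. Only rows whose identifier starts with
# the prefix currently recorded in the metadata entry belong to the object.
entity_definitions = {
    ("tenant_a", "Invoice"): {"key_prefix": "a0X"},
}

rows = {  # entity identifier -> payload; each id begins with the old prefix
    "a0X001": {"amount": 10},
    "a0X002": {"amount": 20},
}

def truncate(tenant, entity, new_prefix):
    """Truncate the object by updating only its metadata entry: the old key
    prefix is replaced with a new, unused one."""
    entity_definitions[(tenant, entity)]["key_prefix"] = new_prefix

def visible_rows(tenant, entity):
    """An object's entries are exactly those whose identifier starts with
    the prefix currently stored in the metadata entry."""
    prefix = entity_definitions[(tenant, entity)]["key_prefix"]
    return {k: v for k, v in rows.items() if k.startswith(prefix)}
```

Visibility is recomputed from the metadata on each access, which is the point of the design: the cost of truncation does not grow with the number of rows in the object.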
20090276377 | NETWORK DATA MINING TO DETERMINE USER INTEREST - Mining information from network data traffic to determine interests of online network users is provided herein. A data packet received at a network interface device can be accessed and inspected at line rate speeds. Source or addressing information in the data packet can be extracted to identify an initiating and/or receiving device. The packet can be inspected to identify occurrences of keywords or data features related with one or more subject matters. A vector can be defined for a network device that indicates a relative rank of interest in various subject matters. Furthermore, statistical analysis can be implemented on data stored in one or more interest vectors to determine information pertinent to network user interests. The information can facilitate providing value-added products or services to network users. | 11-05-2009 |
20110019667 | PACKET CLASSIFICATION - Apparatuses, methods, and other embodiments associated with packet identification are described. One example apparatus includes a packet selection logic to identify packets associated with a data stream. The example apparatus may also include a set of packet classification logics. A packet classification logic may generate a signal as a function of whether an attribute associated with the packet matches an attribute associated with packets generated by a tested application. | 01-27-2011 |
20120026890 | Reporting Statistics on the Health of a Sensor Node in a Sensor Network - In one embodiment, a method includes generating a set of statistics concerning a sensor node in a sensor network based on one or more of sensor data from a sensor at the sensor node, communication to the sensor node from one or more other sensor nodes in the sensor network, or communication from the sensor node; determining based on a subset of the set of statistics whether a predetermined anomalous event correlated with the subset has occurred; and, if the predetermined anomalous event has occurred, generating a summary of the subset and communicating it to a police node in the sensor network. | 02-02-2012 |
20120026898 | Formatting Messages from Sensor Nodes in a Sensor Network - In one embodiment, a method includes receiving a summary of statistics concerning a sensor node in a sensor network that comprises a plurality of sensor nodes, the statistics having been generated based on one or more of sensor data from a sensor at the sensor node, communication to the sensor node from one or more other sensor nodes in the sensor network, or communication from the sensor node; analyzing the summary; and applying based on the analysis one or more predetermined policies to one or more of the sensor nodes or the sensor network. | 02-02-2012
20120026938 | Applying Policies to a Sensor Network - In one embodiment, a method comprises accessing a statistic concerning a sensor node in a sensor network, the statistic being based on one or more of sensor data from a sensor at the sensor node, communication to the sensor node from one or more other sensor nodes in the sensor network, or communication from the sensor node; generating a message that includes a type-length-value (TLV) element based on the statistic, the TLV element including a first portion that indicates a class of the statistic, a second portion that indicates a numerical value for the statistic, and a third portion that indicates a length of the second portion; and communicating the message to a police node in the sensor network. | 02-02-2012 |
20120101912 | Providing a Marketplace for Sensor Data - In one embodiment, a method includes accessing first information identifying a sensor-data set that includes sensor-data from multiple sensor-data streams from multiple sensors over a period of time, with the sensor data from the sensor-data streams having been combined with each other based on a relationship of the sensor data to a sensor subject; accessing second information identifying one or more offers to purchase the sensor-data set; and matching one of the offers with the sensor-data set to facilitate a purchase of the sensor-data set based at least on the one of the offers matched to the sensor-data set. | 04-26-2012 |
20120197852 | Aggregating Sensor Data - In particular embodiments, a method includes accessing sensor data from sensor nodes in a sensor network and aggregating the sensor data for communication to an indexer in the sensor network. The aggregation of the sensor data includes deduplicating the sensor data; validating the sensor data; formatting the sensor data; generating metadata for the sensor data; and time-stamping the sensor data. The metadata identifies one or more pre-determined attributes of the sensor data. The method also includes communicating the aggregated sensor data to the indexer in the sensor network. The indexer is configured to index the aggregated sensor data according to a multi-dimensional array for querying of the aggregated sensor data along with other aggregated sensor data. One or more first ones of the dimensions of the multi-dimensional array include time, and one or more second ones of the dimensions of the multi-dimensional array include one or more of the pre-determined sensor-data attributes. | 08-02-2012
20120197856 | Hierarchical Network for Collecting, Aggregating, Indexing, and Searching Sensor Data - In particular embodiments, a system includes a sensor-data-collection network layer including multiple sensors. The sensor-data-collection network layer is a first logical layer of a sensor network. The system includes an aggregation network layer including one or more aggregators configured to access sensor data from the sensors and aggregate the sensor data. The aggregation network layer is a second logical layer residing logically above the first logical layer. The system includes an indexing network layer including one or more indexers that are configured to access the aggregated sensor data and generate an index of the aggregated sensor data according to a multi-dimensional array. The indexing network layer is a third logical layer residing logically above the second logical layer. The system includes a search network layer including one or more search engines. The search network layer is a fourth logical layer residing logically above the third logical layer. | 08-02-2012 |
20120197898 | Indexing Sensor Data - In particular embodiments, a method includes, from an indexer in a sensor network, accessing a set of sensor data that includes sensor data aggregated together from sensors in the sensor network, one or more time stamps for the sensor data, and metadata for the sensor data identifying one or more pre-determined attributes of the sensor data. The method includes, at the indexer, generating an index of the set of sensor data according to a multi-dimensional array configured for querying of the set of sensor data along with a plurality of other sets of sensor data. One or more first ones of the dimensions of the multi-dimensional array include time, and one or more second ones of the dimensions of the multi-dimensional array include one or more of the pre-determined sensor-data attributes. The method includes, from the indexer, communicating the index of the set of sensor data for use in responding to one or more queries of the set of sensor data along with a plurality of other sets of sensor data. | 08-02-2012 |
20120197911 | Searching Sensor Data - In particular embodiments, a method includes receiving a query for particular sensor data among a plurality of sensor data from multiple sensors. The plurality of sensor data has been indexed according to a multi-dimensional array. One or more first ones of the dimensions include time, and one or more second ones of the dimensions include one or more pre-determined sensor-data attributes. The method includes translating the query to correspond to the indexing of the plurality of sensor data. The translated query includes one or more values for one or more of the dimensions of the multi-dimensional array. The method includes communicating the translated query to search among the plurality of sensor data according to its indexing to identify the particular sensor data. | 08-02-2012
20120239792 | PLACEMENT OF A CLOUD SERVICE USING NETWORK TOPOLOGY AND INFRASTRUCTURE PERFORMANCE - Techniques are described for selecting an optimal data center for instantiating a first cloud service. Embodiments of the invention receive a request specifying a topology of a first cloud service to be hosted by one of a plurality of data centers which provide computing resources to host a plurality of cloud services. A suitability value is then determined for each of the data centers which measures a fitness of the data center for instantiating the first cloud service. In one embodiment, the suitability value is determined by calculating a plurality of metric values for the data center, normalizing the metric values and calculating a weighted average of the normalized values. One of the data centers is then selected for instantiating the first cloud service, based at least in part on the determined suitability values. | 09-20-2012 |
20120303618 | Clustering-Based Resource Aggregation within a Data Center - Data representing capabilities of devices in a data center is aggregated on a cluster basis. Information representing capability attributes of devices in the data center is received. The information representing the capability attributes is analyzed to generate data that groups devices based on similarity of at least one capability attribute. Aggregation data is stored that represents the grouping of the devices based on similarity of the at least one capability attribute and identifies the devices in corresponding groups. | 11-29-2012
20120331147 | HIERARCHICAL DEFRAGMENTATION OF RESOURCES IN DATA CENTERS - Techniques are provided herein for defragmenting resources within a cloud computing system. The cloud computing system includes a plurality of servers deployed in a plurality of respective racks, wherein the respective racks are deployed in a pod of a data center. An element of the cloud computing system determines for each server in a given rack of servers a number of free resource slots available thereon and a number of resource slots in an idle state, and then further determines whether the number of free resource slots on a first server in the plurality of servers is greater than a predetermined threshold. When the number of free resource slots in the first server is greater than the predetermined threshold, a second server in the plurality of servers is identified with sufficient resource slots thereon to accommodate the number of resource slots in the idle state on the first server, and the resource slots in the idle state on the first server are caused to be migrated to the second server. | 12-27-2012 |
20130007261 | VIRTUAL DATA CENTER MONITORING - Techniques are provided for monitoring the state or status of virtual data centers. In one embodiment, a method includes receiving state information representing the state of hardware devices supporting instantiations of virtual data centers operating within a physical data center. The state information is mapped to hardware devices supporting a selected instantiation of a virtual data center to identify state information for the selected instantiation of a virtual data center. An assessment is then made, based on the state information for the selected instantiation of a virtual data center, regarding a degree to which the selected instantiation of a virtual data center is operating in accordance with predetermined policy. A user is then notified of the assessment via, e.g., a color-coded dashboard representation of the selected instantiation of a virtual data center or a color-coded aspect of the selected instantiation of a virtual data center. | 01-03-2013 |
20130055091 | Graph-Based Virtual Data Center Requests - Graph-based virtual data center requests are described. In some implementations, a method includes displaying a graph having graphical elements representing network resources. A user can select one of the graphical elements and provide input specifying requirements for a network resource corresponding to the selected graphical element. A virtual data center request can be generated based on the graph and the specified requirements. The virtual data center request can be transmitted to a data center device for processing. In some implementations, the virtual data center request can be an extensible markup language (XML) representation of the graph that includes the specified service requirements. In some implementations, a data center server can receive a graph-based virtual data center request and allocate data center resources based on the virtual data center request. | 02-28-2013 |
20130073552 | Data Center Capability Summarization - A method for summarizing capabilities in a hierarchically arranged data center includes receiving capabilities information, wherein the capabilities information is representative of capabilities of respective nodes at a first hierarchical level in the hierarchically arranged data center, clustering nodes based on groups of capabilities information, generating a histogram that represents individual node clusters, and sending the histogram to a next higher level in the hierarchically arranged data center. Relative rankings of capabilities may be used to order a sequence of clustering operations. | 03-21-2013 |
20130179538 | HIERARCHICAL INFRASTRUCTURE FOR NOTIFICATION OF NETWORK EVENTS - Techniques are described for reporting and monitoring network devices using microblog messaging. Embodiments monitor network traffic traversing a network device and performance metrics of the network device to detect occurrences of network and performance events. In response to detecting an occurrence of an event, a microblog message is generated. The microblog message contains at least a description of the occurrence. The microblog message is transmitted to a microblog service, which in turn forwards the message to subscribers. The microblog message may then be analyzed by the subscribers to determine operational attributes of the network device. | 07-11-2013 |
20130212279 | Resource Allocation Mechanism - A first network device determines capabilities of resources in a section of a network that is accessible using the first network device. The first network device groups the resources into a resource cluster. The first network device advertises the resource cluster in the network, wherein each of a plurality of network devices advertise a resource cluster associated with sections of the network. A second network device receives a request for providing a service. The second network device groups the request into a plurality of request clusters. The second network device selects at least one resource cluster for providing the service based on information associated with the request clusters and the advertised resource clusters. The second network device allocates resources included in the at least one resource cluster for providing the service based on selecting the at least one resource cluster. | 08-15-2013 |
20130290536 | GENERALIZED COORDINATE SYSTEM AND METRIC-BASED RESOURCE SELECTION FRAMEWORK - In one embodiment, an n-dimensional resource vector for each of a plurality of resources in a computer network is determined, each n-dimensional resource vector having n property values for a corresponding resource of the plurality of resources. Upon receiving a request for one or more resources of the plurality of resources, where the request indicates one or more desired property values, the techniques convert the desired property values of the request into an n-dimensional request vector, determine a distance between each resource vector and the request vector, and provide a response to the request, the response indicating one or more closest match resources for the request based on the distances. | 10-31-2013 |
20130304600 | Providing a Marketplace for Sensor Data - In one embodiment, a method includes accessing first information identifying a sensor-data set that includes sensor-data from multiple sensor-data streams from multiple sensors over a period of time, with the sensor data from the sensor-data streams having been combined with each other based on a relationship of the sensor data to a sensor subject; accessing second information identifying one or more offers to purchase the sensor-data set; and matching one of the offers with the sensor-data set to facilitate a purchase of the sensor-data set based at least on the one of the offers matched to the sensor-data set. | 11-14-2013 |
20140059178 | CLOUD RESOURCE PLACEMENT USING PLACEMENT PIVOT IN PHYSICAL TOPOLOGY - In one embodiment, a method comprises retrieving a request graph specifying request nodes identifying respective requested cloud computing service operations, and at least one request edge specifying requested path requirements connecting the request nodes; identifying a placement pivot among feasible cloud elements identified in a physical graph representing a data network having a physical topology, each feasible cloud element being an available solution for one of the request nodes, the placement pivot having a maximum depth in the physical topology relative to the feasible cloud elements; ordering the feasible cloud elements according to increasing distance from the placement pivot to form an ordered list of candidate sets of feasible cloud elements; and determining an optimum candidate set, from at least a portion of the ordered list, based on the optimum candidate set having an optimized fitness function in the physical graph among the other candidate sets in the ordered list. | 02-27-2014
20140280997 | ESTABLISHING TRANSLATION FOR VIRTUAL MACHINES IN A NETWORK ENVIRONMENT - A method, apparatus, computer readable medium, and system are disclosed that include receiving an indication identifying a tunnel between a first virtual machine, associated with a first protocol, and a second virtual machine, associated with a second protocol; determining that the first protocol is different from the second protocol; determining at least one translation directive that specifies translation between the first protocol and the second protocol for the tunnel; and causing establishment of a translator based, at least in part, on the translation directive. | 09-18-2014
20150049904 | REFLECTION BASED TRACKING SYSTEM - In one embodiment, a processor can receive data representing a view reflected by a mirror of a plurality of mirrors. The plurality of mirrors may be configured in a space to reflect a plurality of views of structures in the space. The mirror of the plurality of mirrors may include a uniquely identifiable feature distinguishable from other objects in the space. The processor can identify the mirror of the plurality of mirrors according to the uniquely identifiable feature. The processor can also determine an attribute of the structures according to the identified mirror and the data representing the view reflected by the mirror. | 02-19-2015 |
20150063102 | Flow Based Network Service Insertion - Techniques are provided to generate and store a network graph database comprising information that indicates a service node topology, and virtual or physical network services available at each node in a network. A service request is received for services to be performed on packets traversing the network between at least first and second endpoints. A subset of the network graph database is determined that can provide the services requested in the service request. A service chain and service chain identifier is generated for the service based on the network graph database subset. A flow path is established through the service chain by flow programming network paths between the first and second endpoints using the service chain identifier. | 03-05-2015 |
20150103837 | LEVERAGING HARDWARE ACCELERATORS FOR SCALABLE DISTRIBUTED STREAM PROCESSING IN A NETWORK ENVIRONMENT - An example method for leveraging hardware accelerators for scalable distributed stream processing in a network environment is provided and includes allocating a plurality of hardware accelerators to a corresponding plurality of bolts of a distributed stream in a network, facilitating a handshake between the hardware accelerators and the corresponding bolts to allow the hardware accelerators to execute respective processing logic according to the corresponding bolts, and performing elastic allocation of hardware accelerators and load balancing of stream processing in the network. The distributed stream comprises a topology of at least one spout and the plurality of bolts. In specific embodiments, the allocating includes receiving capability information from the bolts and the hardware accelerators, and mapping the hardware accelerators to the bolts based on the capability information. In some embodiments, facilitating the handshake includes executing a shadow process to interface between the hardware accelerator and the distributed stream. | 04-16-2015 |
20150127834 | OPTIMIZING PLACEMENT OF VIRTUAL MACHINES - Systems and methods are described for allocating resources in a cloud computing environment. The method includes receiving a computing request, the request for use of at least one virtual machine and a portion of memory. In response to the request, a plurality of hosts is identified and a cost function is formulated using at least a portion of those hosts. Based on the cost function, at least one host that is capable of hosting the virtual machine and memory is selected. | 05-07-2015 |
20150199208 | ALLOCATING RESOURCES FOR MULTI-PHASE, DISTRIBUTED COMPUTING JOBS - In one embodiment, data indicative of the size of an intermediate data set generated by a first resource device is received at a computing device. The intermediate data set is associated with a virtual machine to process the intermediate data set. A virtual machine configuration is determined based on the size of the intermediate data set. A second resource device is selected to execute the virtual machine based on the virtual machine configuration and on an available bandwidth between the first and second resource devices. The virtual machine is then assigned to the second resource device to process the intermediate data set. | 07-16-2015 |
20150200867 | TASK SCHEDULING USING VIRTUAL CLUSTERS - In one embodiment, a device receives information regarding a data set to be processed by a map-reduce process. The device generates a set of virtual clusters for the map-reduce process based on network bandwidths between nodes of the virtual clusters, each node of the virtual cluster corresponding to a resource device, and associates the data set with a map-reduce process task. The device then schedules the execution of the task by a node of the virtual clusters based on the network bandwidth between the node and a source node on which the data set resides. | 07-16-2015 |
20150200872 | CLOUD RESOURCE PLACEMENT BASED ON STOCHASTIC ANALYSIS OF SERVICE REQUESTS - In one embodiment, a method comprises determining a stochastic distribution of received service requests for services in a data network having a prescribed physical topology; and allocating virtualized resources within the prescribed physical topology for a corresponding service request, based on the stochastic distribution. | 07-16-2015 |
20150278343 | Data Center Capability Summarization - A method for summarizing capabilities in a hierarchically arranged data center includes receiving capabilities information, wherein the capabilities information is representative of capabilities of respective nodes at a first hierarchical level in the hierarchically arranged data center, clustering nodes based on groups of capabilities information, generating a histogram that represents individual node clusters, and sending the histogram to a next higher level in the hierarchically arranged data center. Relative rankings of capabilities may be used to order a sequence of clustering operations. | 10-01-2015 |
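Several of the placement entries in the table above, for example 20120239792, compute a per-data-center suitability value by calculating a set of metric values, normalizing them, and taking a weighted average. The sketch below shows one way that arithmetic could look; the metric names, weights, min-max normalization, and "higher is better" convention are all assumptions, since the abstract specifies none of them.

```python
def suitability_scores(metrics_by_dc, weights):
    """metrics_by_dc: {dc_name: {metric: raw_value}}, where a higher raw
    value is assumed to be better. Returns {dc_name: suitability in [0, 1]},
    a weighted average of per-metric min-max normalized values."""
    names = list(weights)
    # min-max normalize each metric across the candidate data centers
    lo = {m: min(dc[m] for dc in metrics_by_dc.values()) for m in names}
    hi = {m: max(dc[m] for dc in metrics_by_dc.values()) for m in names}

    def norm(m, v):
        return 0.0 if hi[m] == lo[m] else (v - lo[m]) / (hi[m] - lo[m])

    total = sum(weights.values())
    return {
        dc: sum(weights[m] * norm(m, vals[m]) for m in names) / total
        for dc, vals in metrics_by_dc.items()
    }
```

The data center with the highest score would then be selected for instantiating the cloud service; normalizing before averaging keeps metrics with large raw ranges (e.g., bandwidth in Mb/s) from swamping small-range ones (e.g., hop counts).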
20130070524 | ON CHIP DYNAMIC READ FOR NON-VOLATILE STORAGE - Dynamically determining read levels on chip (e.g., on the memory die) is disclosed herein. One method comprises reading a group of non-volatile storage elements on a memory die at a first set of read levels. Results of the two most recent read levels are stored on the memory die. A count of how many of the non-volatile storage elements in the group showed a different result between the reads for the two most recent read levels is determined. The determining is performed on the memory die using the results stored on the memory die. A dynamic read level for distinguishing between a first pair of adjacent data states of a plurality of data states is determined based on the read level at which the count reaches a pre-determined criterion. Note that the read level may be dynamically determined on the memory die. | 03-21-2013
20130148425 | ON CHIP DYNAMIC READ FOR NON-VOLATILE STORAGE - Dynamically determining read levels on chip (e.g., on the memory die) is disclosed herein. One method comprises reading a group of non-volatile storage elements on a memory die at a first set of read levels. Results of the two most recent read levels are stored on the memory die. A count of how many of the non-volatile storage elements in the group showed a different result between the reads for the two most recent read levels is determined. The determining is performed on the memory die using the results stored on the memory die. A dynamic read level for distinguishing between a first pair of adjacent data states of a plurality of data states is determined based on the read level at which the count reaches a pre-determined criterion. Note that the read level may be dynamically determined on the memory die. | 06-13-2013
20130163342 | PROGRAM TEMPERATURE DEPENDENT READ - Methods and non-volatile storage systems are provided for using compensation that depends on the temperature at which the memory cells were programmed. Note that the read level compensation may have a component that is not dependent on the memory cells' Tco. That is, the component is not necessarily based on the temperature dependence of the Vth of the memory cells. The compensation may have a component that is dependent on the difference in width of individual Vth distributions of the different states across different temperatures of program verify. This compensation may be used for both verify and read, although a different amount of compensation may be used during read than during verify. | 06-27-2013 |
20130250690 | SELECTED WORD LINE DEPENDENT SELECT GATE VOLTAGE DURING PROGRAM - Methods and devices for operating non-volatile storage are disclosed. One or more programming conditions depend on the location of the word line that is selected for programming, which may reduce or eliminate program disturb. The voltage applied to the gate of a select transistor of a NAND string may depend on the location of the selected word line. This could be either a source side or drain side select transistor. This may prevent or reduce program disturb that could result due to DIBL. This may also prevent or reduce program disturb that could result due to GIDL. A negative bias may be applied to the gate of a source side select transistor when programming at least some of the word lines. In one embodiment, progressively lower voltages are used for the gate of the drain side select transistor when programming progressively higher word lines. | 09-26-2013 |
20130301351 | Channel Boosting Using Secondary Neighbor Channel Coupling In Non-Volatile Memory - In a non-volatile storage system, a programming portion of a program-verify iteration has multiple programming pulses, and storage elements along a word line are selected for programming according to a pattern. Unselected storage elements are grouped to benefit from channel-to-channel capacitive coupling from both primary and secondary neighbor storage elements. The coupling is helpful to boost channel regions of the unselected storage elements to a higher channel potential to prevent program disturb. Each selected storage element has a different relative position within its set. For example, during a first programming pulse, first, second and third storage elements are selected in first, second and third sets, respectively. During a second programming pulse, second, third and first storage elements are selected in the first, second and third sets, respectively. During a third programming pulse, third, first and second storage elements are selected in the first, second and third sets, respectively. | 11-14-2013 |
20130314995 | Controlling Dummy Word Line Bias During Erase In Non-Volatile Memory - A technique for erasing non-volatile memory such as a NAND string which includes non-user data or dummy storage elements. The voltages of the non-user data storage elements are capacitively coupled higher by controlled increases in an erase voltage which is applied to a substrate. The voltages are floated by rendering a pass gate transistor in a non-conductive state, where the pass gate transistor is between a voltage driver and a non-user data storage element. Voltages of select gate transistors can also be capacitively coupled higher. The substrate voltage can be increased in steps and/or as a continuous ramp. In one approach, outer dummy storage elements are floated while inner dummy storage elements are driven. In another approach, both outer and inner dummy storage elements are floated. Write-erase endurance of the storage elements is increased due to reduced charge trapping in the substrate. | 11-28-2013 |
20140003147 | Optimized Erase Operation For Non-Volatile Memory With Partially Programmed Block | 01-02-2014 |
20140063940 | ON CHIP DYNAMIC READ LEVEL SCAN AND ERROR DETECTION FOR NONVOLATILE STORAGE - Techniques for efficiently programming non-volatile storage are disclosed. A second page of data may efficiently be programmed into memory cells that already store a first page. Data may be efficiently transferred from single bit cells to multi-bit cells. Memory cells are read using at least two different read levels. The results are compared to determine a count of how many memory cells showed a different result between the two reads. If the count is less than a threshold, then data from the memory cells is stored into a set of data latches without attempting to correct for misreads. If the count is not less than the threshold, then data from the memory cells is stored into the set of data latches while attempting to correct for misreads. A programming operation may be performed based on the data stored in the set of data latches. | 03-06-2014 |
20140119126 | Dynamic Bit Line Bias For Programming Non-Volatile Memory - A program operation for a set of non-volatile storage elements. A count is maintained of the number of program pulses which are applied to an individual storage element in a slow programming mode, and an associated bit line voltage is adjusted based on the count. Different bit line voltages can be used, having a common step size or different step sizes. As a result, the change in threshold voltage of the storage element with each program pulse within the slow programming mode can be made uniform, resulting in improved programming accuracy. Latches maintain the count of program pulses experienced by the associated storage element while in the slow programming mode. The storage element is in a fast programming mode when its threshold voltage is below a lower verify level, and in the slow programming mode when its threshold voltage is between the lower verify level and a higher verify level. | 05-01-2014 |
20140160848 | SELECT GATE BIAS DURING PROGRAM OF NON-VOLATILE STORAGE - Techniques disclosed herein may prevent program disturb by preventing a select transistor of an unselected NAND string from unintentionally turning on. The Vgs of a select transistor of a NAND string may be lowered from one programming pulse to the next programming pulse multiple times. The select transistor may be a drain side select transistor or a source side select transistor. Progressively lowering the Vgs of the select transistor of an unselected NAND string as programming progresses may prevent the select transistor from unintentionally turning on. Therefore, program disturb is prevented or reduced. Vgs may be lowered by applying a lower voltage to a select line associated with the select transistor. Vgs may be lowered by applying a higher voltage to bit lines associated with the unselected NAND strings as programming progresses. Vgs may be lowered by applying a higher voltage to a common source line as programming progresses. | 06-12-2014 |
20140185382 | ERASE FOR NON-VOLATILE STORAGE - Techniques are disclosed herein for erasing non-volatile storage elements. A sequence of increasing erase voltages may be applied to a substrate. The select line may be floated and many of the word lines may be held at a low voltage (e.g., close to 0V). However, the voltage applied to an edge word line may be increased in magnitude relative to a previous voltage applied to the edge word line for at least a portion of the sequence of erase voltages. The edge word line could be the word line that is immediately adjacent to the select line. The increasing voltage applied to the edge word line may prevent or reduce damage to oxides between the select line and the edge word line. It may also help to regulate the e-field across a tunnel oxide of memory cells on the edge word line. | 07-03-2014 |
20140198575 | Method And Apparatus For Program And Erase Of Select Gate Transistors - Techniques are provided for programming and erasing of select gate transistors in connection with the programming or erasing of a set of memory cells. In response to a program command to program memory cells, the select gate transistors are read to determine whether their Vth is below an acceptable range, in which case the select gate transistors are programmed before the memory cells. Or, a decision can be made to program the select gate transistors based on a count of program-erase cycles, whether a specified time period has elapsed and/or a temperature history of the non-volatile storage device. When an erase command to erase memory cells is received, the select gate transistors are read to determine whether their Vth is above an acceptable range. If their Vth is above the acceptable range, the select gate transistors can be erased concurrently with the erasing of the memory cells. | 07-17-2014 |
20140254277 | Method And Apparatus For Program And Erase Of Select Gate Transistors - Techniques are provided for programming and erasing of select gate transistors in connection with the programming or erasing of a set of memory cells. In response to a program command to program memory cells, the select gate transistors are read to determine whether their Vth is below an acceptable range, in which case the select gate transistors are programmed before the memory cells. Or, a decision can be made to program the select gate transistors based on a count of program-erase cycles, whether a specified time period has elapsed and/or a temperature history of the non-volatile storage device. When an erase command to erase memory cells is received, the select gate transistors are read to determine whether their Vth is above an acceptable range. If their Vth is above the acceptable range, the select gate transistors can be erased concurrently with the erasing of the memory cells. | 09-11-2014 |
20140369129 | Method And Apparatus For Program And Erase Of Select Gate Transistors - Techniques are provided for programming select gate transistors in connection with the programming of a set of memory cells. In response to a program command to program memory cells, the select gate transistors are read to determine whether their Vth is below an acceptable range, in which case the select gate transistors are programmed before the memory cells. Or, a decision can be made to program the select gate transistors based on a count of program-erase cycles, whether a specified time period has elapsed and/or a temperature history of the non-volatile storage device. | 12-18-2014 |
20150092496 | Dynamic Bit Line Bias For Programming Non-Volatile Memory - A program operation for a set of non-volatile storage elements. A count is maintained of the number of program pulses which are applied to an individual storage element in a slow programming mode, and an associated bit line voltage is adjusted based on the count. Different bit line voltages can be used, having a common step size or different step sizes. As a result, the change in threshold voltage of the storage element with each program pulse within the slow programming mode can be made uniform, resulting in improved programming accuracy. Latches maintain the count of program pulses experienced by the associated storage element while in the slow programming mode. The storage element is in a fast programming mode when its threshold voltage is below a lower verify level, and in the slow programming mode when its threshold voltage is between the lower verify level and a higher verify level. | 04-02-2015 |
20150149693 | Targeted Copy of Data Relocation - In a nonvolatile memory array that has a binary cache formed of SLC blocks and a main memory formed of MLC blocks, corrupted data along an MLC word line is corrected and relocated, along with any other data along the MLC word line, to the binary cache before it becomes uncorrectable. Subsequent reads of the relocated data are directed to the binary cache. | 05-28-2015 |
20150200014 | Controlling Dummy Word Line Bias During Erase In Non-Volatile Memory - A technique for erasing non-volatile memory such as a NAND string which includes non-user data or dummy storage elements. The voltages of the non-user data storage elements are capacitively coupled higher by controlled increases in an erase voltage which is applied to a substrate. The voltages are floated by rendering a pass gate transistor in a non-conductive state, where the pass gate transistor is between a voltage driver and a non-user data storage element. Voltages of select gate transistors can also be capacitively coupled higher. The substrate voltage can be increased in steps and/or as a continuous ramp. In one approach, outer dummy storage elements are floated while inner dummy storage elements are driven. In another approach, both outer and inner dummy storage elements are floated. Write-erase endurance of the storage elements is increased due to reduced charge trapping between the select gates and the dummy storage elements. | 07-16-2015 |
20150212883 | ON CHIP DYNAMIC READ LEVEL SCAN AND ERROR DETECTION FOR NONVOLATILE STORAGE - Techniques for efficiently programming non-volatile storage are disclosed. A second page of data may efficiently be programmed into memory cells that already store a first page. Data may be efficiently transferred from single bit cells to multi-bit cells. Memory cells are read using at least two different read levels. The results are compared to determine a count of how many memory cells showed a different result between the two reads. If the count is less than a threshold, then data from the memory cells is stored into a set of data latches without attempting to correct for misreads. If the count is not less than the threshold, then data from the memory cells is stored into the set of data latches while attempting to correct for misreads. A programming operation may be performed based on the data stored in the set of data latches. | 07-30-2015 |
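Several of the abstracts above (e.g., 20130070524 and 20140063940) turn on the same counting step: read the same group of cells at two nearby read levels, count how many cells flip between the two reads, and compare that count to a threshold to decide whether the raw data can be latched as-is or needs misread correction. A minimal Python sketch of just that counting logic follows; the function names and the 0/1-list representation of sense results are illustrative assumptions, not anything specified in the patents, and the correction step itself is out of scope.

```python
def count_flips(read_a, read_b):
    """Count cells whose sense result differs between two read levels.

    read_a, read_b: equal-length lists of 0/1 sense results for the same
    group of cells, taken at two adjacent read levels (hypothetical
    representation of the on-die result registers).
    """
    return sum(1 for a, b in zip(read_a, read_b) if a != b)


def choose_latch_path(read_a, read_b, threshold):
    """Decide how to latch data, per the 20140063940 abstract: if fewer
    cells flipped than the threshold, latch without misread correction;
    otherwise latch while attempting correction (labels are hypothetical).
    """
    return "latch_raw" if count_flips(read_a, read_b) < threshold else "latch_corrected"
```

A controller-side scan could call `count_flips` once per candidate read level and pick the level where the flip count meets the criterion, which is the shape of the dynamic-read-level determination the 20130070524 abstract describes.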
Patent application number | Description | Published |
20080306963 | Calendaring techniques and interfaces - The calendaring techniques and interfaces described herein provide applications with access to calendar data stored in a server-hosted calendar store. The calendar data includes calendar events and tasks. In one aspect, an application program interface (API) retrieves an occurrence from a series of recurring calendar data upon request from an application. In another aspect, the API sends calendar data provided by the application to a server program that manages a calendar store for storage and queries the server program to retrieve calendar data requested by the application from the calendar store. In yet another aspect, the API sends notifications that the calendar store has changed to interested applications. | 12-11-2008 |
20080307323 | CALENDARING TECHNIQUES AND SYSTEMS - The calendaring techniques and systems described herein enable a user to more easily resolve conflicts for attendees to an event by visually indicating available time slots for all attendees in a calendar window or in a timeline window separate from the calendar window. The first available time slot may be automatically selected or the user may select an available slot to reschedule the event. In another aspect, inspector windows are displayed within a calendar window to show summary or details for an event. An inspector window can also be displayed when a change to an event is detected. In yet another aspect, calendars for multiple accounts accessible by a user are merged into a single calendar view. | 12-11-2008 |
20110054976 | Scheduling Recurring Calendar Events - Methods, systems, and computer-readable media for scheduling a recurring event are disclosed. When a calendar application receives an invitation from an organizer to an invitee, the calendar application expands the recurring event into a plurality of occurrences and detects any scheduling conflicts that can be caused by each of the plurality of occurrences. The calendar application notifies the invitee of the detected scheduling conflicts before the invitee makes a decision regarding the invitation. An invitee is provided an opportunity to accept only the non-conflicting occurrences of the recurring event. If the invitee chooses to accept only the non-conflicting occurrences, the invitee is given opportunities to respond to each of the conflicting occurrences separately. The organizer is notified of the invitee's responses regarding the non-conflicting occurrences and the conflicting occurrences. | 03-03-2011 |
20110239146 | AUTOMATIC EVENT GENERATION - A text input is received in a calendar context. The text input is processed with a context-neutral extraction process to generate a first set of elements and with a calendar-specific extraction process to generate a second set of elements. A calendar event is created from the first set of elements and the second set of elements and displayed on a display device without confirming the elements of the calendar event with a user. | 09-29-2011 |
20130201218 | SHOWING CALENDAR EVENTS NOT VISIBLE ON SCREEN - Methods, systems, and graphical user interfaces are provided for showing calendar events that are not visible on screen. Event objects are shown when an event is within a viewable time range, but the event object is partially drawn on screen (e.g., clipped) when the event is not within a viewable time range. The displayed parts of event objects indicating off-screen events can stack on top of each other to provide information about the number of off-screen events for a particular day. With the events clipped and stacked, a user has a visual indicator that there are events off screen for a particular day. As a user scrolls to the time of the event, the event can completely reveal itself. | 08-08-2013 |
20130203442 | Location-Based Methods, Systems, and Program Products For Performing An Action At A User Device - Methods, program products, and systems for location-based reminders are disclosed. A first user device can receive an input specifying that a reminder be presented at a given location. The first user device can provide a reminder request, including type and content of the reminder and the location, to a server computer for pushing to one or more user devices. A second user device, upon receiving the reminder request, can determine a device location of the second user device. If the given location matches the device location, the second user device can present the reminder in a user interface. | 08-08-2013 |
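The 20110054976 abstract above describes a two-step flow: expand a recurring event into individual occurrences, then partition those occurrences into non-conflicting ones (which the invitee can accept wholesale) and conflicting ones (which the invitee responds to separately). A minimal Python sketch of that flow, assuming a simple weekly recurrence and representing events as `(start, end)` tuples; all function names and the recurrence rule are illustrative assumptions, not the patent's actual interfaces:

```python
from datetime import datetime, timedelta


def expand_weekly(start, duration, count):
    """Expand a weekly recurring event into (start, end) occurrence tuples."""
    return [(start + timedelta(weeks=i), start + timedelta(weeks=i) + duration)
            for i in range(count)]


def split_by_conflict(occurrences, existing):
    """Partition occurrences into (non_conflicting, conflicting), where a
    conflict is any time overlap with an existing (start, end) event."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    conflicting = [o for o in occurrences
                   if any(overlaps(o, e) for e in existing)]
    non_conflicting = [o for o in occurrences if o not in conflicting]
    return non_conflicting, conflicting
```

In the abstract's terms, the calendar application would notify the invitee of the `conflicting` list before any decision is made, offer to accept the `non_conflicting` list as a group, and then prompt for each conflicting occurrence individually.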