Patent application number | Description | Published |
--- | --- | --- |
20090100216 | Power saving optimization for disk drives with external cache - A power conservation system implementable in a computer system. The system includes a non-volatile cache memory (NVCM) device for storing information. The NVCM device is operationally coupled to the computer system. The system also includes a data storage device coupled to the NVCM device. The data storage device is for storing said information. The system further includes a controller coupled to the NVCM device. The controller initiates an occurrence of writing the information in the NVCM device to the data storage device. The occurrence of writing causes powering up of the data storage device to which the data is to be written or from which data is to be retrieved. | 04-16-2009 |
20130174248 | PORTABLE DATA-STORAGE DEVICE CONFIGURED TO ENABLE A PLURALITY OF HOST DEVICES SECURE ACCESS TO DATA THROUGH MUTUAL AUTHENTICATION - A portable data-storage device configured to enable a plurality of host devices secure access to data through mutual authentication. The portable data-storage device includes a storage-device enclosure, a data-storage medium, a data-writing element, a data-reading element, and an electronic authenticator. The data-writing element and the data-reading element are configured to write data to, and to read the data from, the data-storage medium. The electronic authenticator is configured to mutually authenticate the portable data-storage device with a first host device, and at least a second host device. The electronic authenticator is configured to enable secure access to the data on the data-storage medium by the first host device and by the second host device, if the electronic authenticator mutually authenticates the portable data-storage device with the first host device and with the second host device. A method and system configured to enable host devices secure access to data are also provided. | 07-04-2013 |
20140108473 | MAINTAINING ORDER AND FAULT-TOLERANCE IN A DISTRIBUTED HASH TABLE SYSTEM - Data storage systems and methods for storing data are described herein. The storage system includes a first storage node configured to issue a first delivery request to a first set of other storage nodes in the storage system, the first delivery request including a first at least one data operation for each of the first set of other storage nodes, and to issue at least one other delivery request, while the first delivery request remains outstanding, the at least one other delivery request including a first commit request for each of the first set of other storage nodes. The first node causes the first at least one data operation to be made active within the storage system in response to receipt of a commit indicator along with a delivery acknowledgement regarding one of the at least one other delivery request. | 04-17-2014 |
20140108723 | REDUCING METADATA IN A WRITE-ANYWHERE STORAGE SYSTEM - Systems and methods for reducing metadata in a write-anywhere storage system are disclosed herein. The system includes a plurality of clients coupled with a plurality of storage nodes, each storage node having a plurality of primary storage devices coupled thereto. A memory management unit including cache memory is included in the client. The memory management unit serves as a cache for data produced by the clients before the data is stored in the primary storage. The cache includes an extent cache, an extent index, a commit cache and a commit index. The movement of data and metadata is managed by an interval tree. Methods for reducing the data held in the interval tree increase the data storage and data retrieval performance of the system. | 04-17-2014 |
20140172930 | FAILURE RESILIENT DISTRIBUTED REPLICATED DATA STORAGE SYSTEM - A failure resilient distributed replicated data storage system is described herein. The storage system includes zones that are independent, and autonomous from each other. The zones include nodes that are independent and autonomous. The nodes include storage devices. When a data item is stored, it is partitioned into a plurality of data objects and a plurality of parity objects are calculated. Reassembly instructions are created for the data item. The data objects and parity objects are spread across all nodes and zones in the storage system. Reassembly instructions are also spread across the zones. When a read request is received, the data item is prepared from the lowest latency nodes according to the reassembly instructions. This provides for data resiliency while keeping the amount of storage space required relatively low. | 06-19-2014 |
20140173235 | RESILIENT DISTRIBUTED REPLICATED DATA STORAGE SYSTEM - A resilient distributed replicated data storage system is described herein. The storage system includes zones that are independent, and autonomous from each other. The zones include nodes that are independent and autonomous. The nodes include storage devices. When a data item is stored, it is partitioned into a plurality of data objects and a plurality of parity objects are calculated. Reassembly instructions are created for the data item. The data objects, parity objects and reassembly instructions are spread across nodes and zones in the storage system according to a policy for the data item. When a zone is inaccessible, a virtual zone is created and used until the intended zone is available. When a read request is received, the data item is prepared from the lowest latency nodes according to the reassembly instructions, and a virtual zone is accessed in place of a real zone when the real zone is inaccessible. | 06-19-2014 |
20140244672 | ASYMMETRIC DISTRIBUTED DATA STORAGE SYSTEM - Asymmetric distributed replicated data storage systems and methods are described herein. The storage system includes zones that are independent, and autonomous. The zones include nodes that are independent and autonomous. The nodes include storage devices. When a data item is stored, it is partitioned into a plurality of data objects and a plurality of parity objects using erasure coding. The data objects and parity objects are spread across all nodes and zones in the storage system asymmetrically such that a first zone includes all of the data objects and no parity objects while the remaining zones include subsets of the data objects and all of the parity objects. The systems and methods provide for data resiliency while keeping the amount of storage space required relatively low. | 08-28-2014 |
20140280187 | DATA STORAGE SYSTEM HAVING MUTABLE OBJECTS INCORPORATING TIME - A data storage system having mutable objects incorporating time is described herein. According to the systems and methods described herein, a data item may be partitioned into parts (data objects) and stored as an index object. As the object storage system provides immutable objects, when a new version of a data item needs to be stored, only those parts (data objects) of the data item that changed need be saved rather than the entire data item. The systems and methods described herein allow for efficient storage, access and manipulation of mutable data items using an underlying immutable object system. | 09-18-2014 |
20140380093 | RESILIENT DISTRIBUTED REPLICATED DATA STORAGE SYSTEM - A resilient distributed replicated data storage system is described herein. The storage system includes zones that are independent, and autonomous from each other. The zones include nodes that are independent and autonomous. The nodes include storage devices. When a data item is stored, it is partitioned into a plurality of data objects and a plurality of parity objects are calculated. Reassembly instructions are created for the data item. The data objects, parity objects and reassembly instructions are spread across nodes and zones in the storage system according to a policy for the data item. When a zone is inaccessible, a virtual zone is created and used until the intended zone is available. When a read request is received, the data item is prepared from the lowest latency nodes according to the reassembly instructions, and a virtual zone is accessed in place of a real zone when the real zone is inaccessible. | 12-25-2014 |
20150052167 | SEARCHABLE DATA IN AN OBJECT STORAGE SYSTEM - A searchable data storage system is described herein. The storage system includes zones that are independent, and autonomous from each other. The zones include nodes that are independent and autonomous. The nodes include storage devices. When a data item is stored, a local database is updated with information about the newly stored data item. When a search for a data item meeting certain metadata criteria is received, multiple concurrent searches are conducted across all storage devices in all nodes in all zones of the storage system. The configuration of the data storage system allows a parallel concurrent search at constituent storage devices to be performed quickly. | 02-19-2015 |
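Several of the applications above (e.g. 20140172930, 20140173235, 20140244672) share a common mechanism: a data item is partitioned into data objects plus parity objects, the objects are spread across independent zones, and reassembly instructions record where each piece lives so the item can be rebuilt even when a zone is inaccessible. A minimal sketch of that idea follows, using a single XOR parity object rather than the erasure coding any of these applications actually claims; the function names, the round-robin zone layout, and the one-zone-loss tolerance are all illustrative assumptions.

```python
# Illustrative sketch only: single-XOR-parity partitioning and zone spread,
# not the erasure coding claimed in the applications above.

def partition(data: bytes, k: int):
    """Split data into k zero-padded data objects plus one XOR parity object."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\x00") for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks, bytes(parity)

def spread(chunks, parity, zones):
    """Place each object in a zone round-robin; return reassembly instructions
    mapping object index -> zone index."""
    placement = {}
    for idx, obj in enumerate(chunks + [parity]):
        zone_idx = idx % len(zones)
        zones[zone_idx][idx] = obj
        placement[idx] = zone_idx
    return placement

def reassemble(placement, zones, k, orig_len):
    """Rebuild the data item; tolerates one lost data object via the parity."""
    recovered = {idx: zones[z][idx] for idx, z in placement.items() if idx in zones[z]}
    missing = [i for i in range(k) if i not in recovered]
    if missing:  # rebuild the single lost data object from parity + survivors
        assert len(missing) == 1 and k in recovered, "single parity tolerates one loss"
        rebuilt = bytearray(recovered[k])  # start from the parity object
        for i in range(k):
            if i != missing[0]:
                for j, b in enumerate(recovered[i]):
                    rebuilt[j] ^= b
        recovered[missing[0]] = bytes(rebuilt)
    return b"".join(recovered[i] for i in range(k))[:orig_len]

# Usage: lose one zone entirely and still read the item back.
zones = [dict() for _ in range(4)]
item = b"hello distributed world"
chunks, parity = partition(item, 3)
placement = spread(chunks, parity, zones)
zones[1].clear()  # simulate an inaccessible zone
assert reassemble(placement, zones, 3, len(item)) == item
```

Real systems use Reed-Solomon-style codes to survive multiple simultaneous zone failures, but the bookkeeping shape (objects, placement map, reconstruct-on-read) is the same.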
Patent application number | Description | Published |
--- | --- | --- |
20090006787 | Storage device with write barrier sensitive write commands and write barrier insensitive commands - The invention is a storage device which implements a write barrier command and provides means for a host to designate other write commands as being sensitive or insensitive to the existence of write barrier commands. The device can optimize the execution of commands by changing the order of execution of write commands that are insensitive to write barrier commands. In an embodiment of the invention a flag associated with the write command indicates whether the command is sensitive or insensitive to the existence of write barrier commands. In an embodiment of the invention the write barrier command can be implemented as a write command with a flag that indicates whether the command is a write barrier command. In one embodiment of the invention the queue of commands and data to be written to the media is stored in a non-volatile cache. | 01-01-2009 |
20100011149 | Data Storage Devices Accepting Queued Commands Having Deadlines - A data storage device accepts queued read and write commands that have deadlines. The queued read and write commands are requests to access the data storage device. The deadlines of the queued read and write commands can be advisory deadlines or mandatory deadlines. | 01-14-2010 |
20100011182 | Techniques For Scheduling Requests For Accessing Storage Devices Using Sliding Windows - A system includes a storage device and a scheduler. The scheduler determines if deadlines of requests for accessing the storage device fall within first and second sliding windows. The scheduler issues requests that are in the first sliding window in a first order of execution and requests that are in the second sliding window in a second order of execution. | 01-14-2010 |
20100205623 | Techniques For Emulating Sequential Device With Constrained Disk Drive - A disk drive apparatus includes at least one disk, a head-arm assembly, and a controller circuit. The head-arm assembly includes at least one read/write head. The head-arm assembly is movable to enable the read/write head to access a writable surface of the disk. The controller circuit causes the read/write head to record data on the writable surface of the disk in a write append format. | 08-12-2010 |
20130275548 | Automated Data Migration Across A Plurality of Devices - Approaches for a digital storage device that moves or transforms data between various storage locations based on anticipated use. A digital storage device comprises one or more processors and one or more storage mediums for storing digital data. The digital storage device comprises a software agent. The agent maintains a local index to a set of data sets stored on the storage mediums. The indexed files are associated with an identifier, which may identify any unique entity. The software agent sends the local index over a network to an index manager. The agent receives, from the index manager, a remote index that identifies storage locations for other data sets associated with the identifier. The agent may use the local and remote index to move data sets between storage locations and/or transform data sets based on the device from which they will be accessed. | 10-17-2013 |
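The first entries in this second group concern command scheduling inside the storage device itself. As a rough model of the write-barrier idea in 20090006787, a queued-command device may reorder writes for throughput, except that barrier-sensitive writes must keep their position relative to every write-barrier command. The sketch below illustrates that constraint; the `Cmd` flag names and the "insensitive writes run last, seek-sorted by LBA" policy are assumptions chosen for the example, not the claimed implementation.

```python
from dataclasses import dataclass

# Rough model of a write-barrier-aware command queue; the flag names and the
# reordering policy are illustrative assumptions, not the claimed design.

@dataclass
class Cmd:
    lba: int                 # target logical block address
    barrier: bool = False    # True => this is a write-barrier command
    sensitive: bool = True   # sensitive writes may not cross a barrier

def schedule(queue):
    """Return an execution order: barrier-insensitive writes are pulled out and
    seek-sorted by LBA, while sensitive writes keep their order relative to
    every barrier command."""
    order, epoch, free = [], [], []
    for cmd in queue:
        if cmd.barrier:
            order.extend(epoch)   # flush the current epoch before the barrier
            order.append(cmd)
            epoch = []
        elif cmd.sensitive:
            epoch.append(cmd)
        else:
            free.append(cmd)      # reorderable at will
    order.extend(epoch)
    order.extend(sorted(free, key=lambda c: c.lba))  # simple policy: run last, seek-sorted
    return order

# Usage: the two sensitive writes stay ahead of the barrier; insensitive writes float.
q = [Cmd(50), Cmd(90, sensitive=False), Cmd(10), Cmd(0, barrier=True),
     Cmd(20), Cmd(5, sensitive=False)]
assert [(c.lba, c.barrier) for c in schedule(q)] == \
    [(50, False), (10, False), (0, True), (20, False), (5, False), (90, False)]
```

The deadline-based entries (20100011149, 20100011182) would layer a further constraint on top of the same queue model: each command carries an advisory or mandatory deadline, and the scheduler's reordering freedom is bounded by which deadlines fall inside its current window.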