Class / Patent application number | Description | Number of patent applications / Date published |
711126000 | User data cache | 56 |
20080229023 | SYSTEMS AND METHODS OF USING HTTP HEAD COMMAND FOR PREFETCHING - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, domain name resolution acceleration as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device or any type of interception caching and/or proxying device. | 09-18-2008 |
20080229024 | SYSTEMS AND METHODS OF DYNAMICALLY CHECKING FRESHNESS OF CACHED OBJECTS BASED ON LINK STATUS - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, domain name resolution acceleration as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device or any type of interception caching and/or proxying device. | 09-18-2008 |
20080229025 | SYSTEMS AND METHODS OF USING THE REFRESH BUTTON TO DETERMINE FRESHNESS POLICY - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, domain name resolution acceleration as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device or any type of interception caching and/or proxying device. | 09-18-2008 |
20090006755 | Providing application-level information for use in cache management - In one embodiment, the present invention includes a method for associating a first identifier with data stored by a first agent in a cache line of a cache to indicate the identity of the first agent, and storing the first identifier with the data in the cache line and updating at least one of a plurality of counters associated with the first agent in a metadata storage in the cache, where each counter includes information regarding inter-agent interaction with respect to the cache line. Other embodiments are described and claimed. | 01-01-2009 |
20100005243 | Rendering Apparatus Which Parallel-Processes a Plurality of Pixels, and Data Transfer Method - A rendering apparatus includes a memory device, a cache memory, a cache control unit and a rendering process unit. The memory device stores image data. The cache memory executes transmission/reception of the image data to/from the memory device. The cache memory includes a plurality of entries, each of which is capable of storing the image data. The cache control unit manages data transfer between the memory device and the cache memory and stores information relating to a state of the cache memory. The cache control unit stores, in association with each of the entries, identification information of the image data transferred from the memory device to the entry of the cache memory and transfer information which is indicative of whether the image data is already transferred to the entry or not. The rendering process unit executes image rendering by using the image data in the cache memory. | 01-07-2010 |
20100030969 | Consistency Model for Object Management Data - A method and apparatus are provided for maintaining cache coherency of object management data in a computer system. The computer system is configured with a bit mask to represent changes in object management data. All changes in an object are reflected by setting an associated bit in the bit mask. A cache update of object management data is limited to the bit(s) set in the bit mask. | 02-04-2010 |
20100077147 | METHODS FOR CACHING DIRECTORY STRUCTURE OF A FILE SYSTEM - A file system cache method for a device accessing a file system is provided, wherein the device has a processing unit and a cache buffer. The method comprises accessing a folder in the file system, caching information of child folders of a currently accessed folder when accessing a root folder, caching information of parent folders of the currently accessed folder when accessing a leaf folder, caching information of at least one parent folder and at least one child folder of the currently accessed folder when accessing a child folder not classified as a leaf folder, and removing cache buffer entries of sibling folders of the currently accessed folder. | 03-25-2010 |
20100095067 | Caching Web Page Elements In Accordance With Display Locations Of The Elements - Methods, apparatus, and products for caching web page elements in accordance with display locations of the elements, including maintaining, by a web browser in accordance with a cache retention policy, a local cache of previously displayed web page elements in dependence upon previous display locations of the elements including maintaining a cache retention score for each locally cached element; and displaying, by the web browser, a previously displayed web page including displaying one or more of the locally cached elements. | 04-15-2010 |
20100180082 | METHODS AND SYSTEMS FOR IMPLEMENTING URL MASKING - A method includes receiving a web content request including a URL string for locating the web content, and comparing the URL string to a list of URLs for which prefetched responses are available to see if the request can be fulfilled from these responses. The method further includes using a mask that excludes portions of the URL string that are not relevant to finding or selecting the web content when comparing the request to the list of prefetched URLs. If the request URL string matches the URL of a prefetched response other than the masked section, then the prefetched response can be supplied as a response to the incoming request. The method further includes parsing JavaScript in a web response to search for URLs that may be rendered on a web page and analyzing the scripts to identify bytes in the URL that would have random values. | 07-15-2010 |
20100199044 | INTERFACE APPARATUS, CALCULATION PROCESSING APPARATUS, INTERFACE GENERATION APPARATUS, AND CIRCUIT GENERATION APPARATUS - There is provided an interface apparatus including: a stream converter receiving write-addresses and write-data, storing the received data in a buffer, and sorting the stored write-data in the order of the write-addresses to output the write-data as stream-data; a cache memory storing received stream-data if a load-signal indicates that the stream-data are necessarily loaded and outputting data stored in a storage device corresponding to an input cache-address as cache-data; a controller determining whether or not data allocated with a read-address have already been loaded, outputting the load-signal instructing the loading on the cache memory if not loaded, and outputting a load-address indicating a load-completed-address of the cache memory; and at least one address converter calculating which one of the storage devices the allocated data are stored in, by using the load-address, outputting the calculated value as the cache-address to the cache memory, and outputting the cache-data as read-data. | 08-05-2010 |
20100223430 | EXTENDED DATABASE ENGINE PROVIDING VERSIONING AND EMBEDDED ANALYTICS - A system for calculating analytics uses a relational database to store inputs, calculates results, and stores them in cache. The system also includes an access layer that provides a unified view of the data in the server. A dynamic access layer is generated at runtime to run an analytic to provide a flexible framework for creating business logic. | 09-02-2010 |
20100250855 | COMPUTER-READABLE RECORDING MEDIUM STORING DATA STORAGE PROGRAM, COMPUTER, AND METHOD THEREOF - A computer-readable recording medium storing a data storage program, a method and a computer are provided. The computer includes a cache table including an address area for storing an address and a user data area for storing user data corresponding to the address, and executes an operation including: reading difference data at a specified address from a recording medium, delta-decoding the read difference data, and treating the decompressed user data as the read user data; writing the read user data to the user data area of the cache table, and a corresponding address to the address area of the cache table, when the size of the user data obtained by the delta-decoding is equal to or less than a threshold value; and obtaining difference data between user data requested to be written and the corresponding cached user data and writing the difference data. | 09-30-2010 |
20100318743 | DYNAMIC SCREENTIP LANGUAGE TRANSLATION - When a user interface cursor hovers over a user interface item, a determination is made as to whether the user interface item has an associated screentip. If the user interface item has an associated screentip, text associated with the screentip is identified, a translated text string is located for the text string, and the translated text string is displayed in the screentip. If the user interface item does not have an associated screentip, a determination is made as to whether the user interface item contains a text string. If so, a determination is made as to whether a translated text string is available that corresponds to the text in the user interface item. If so, the translated text string is displayed in a screentip for the user interface item. | 12-16-2010 |
20110022804 | METHOD AND SYSTEM FOR IMPROVING AVAILABILITY OF NETWORK FILE SYSTEM SERVICE - A method and system for improving availability of a network file system service are disclosed. In one embodiment, a method of a client device for improving an availability of a network file system service in a network of the client device and a file server includes receiving a user request. The method also includes selectively caching data associated with the user request and serviced by the file server to a storage device associated with the client device via a heuristics process which is based on one or more measurable parameters of the network. The method further includes forwarding the data from the file server or the storage device to service the user request. | 01-27-2011 |
20110035552 | User Interface Contrast Filter - A method of defining a dynamically adjustable user interface (“UI”) of a device is described. The method defines multiple UI elements for the UI, where each UI element includes multiple pixels. The method defines a display adjustment tool for receiving a single display adjustment parameter and in response adjusting the appearance of the UI by differentiating display adjustments to a first set of saturated pixels from the display adjustments to a second set of non-saturated pixels. | 02-10-2011 |
20110087842 | PRE-FETCHING CONTENT ITEMS BASED ON SOCIAL DISTANCE - Retrieving content items based on a social distance between a user and content providers. The social distance is determined based on, for example, user interaction with the content providers. The content providers are ranked, for the user, based on the determined social distance. Prior to a request from the user, the content items are pre-fetched based on the ranked content providers and constraints such as storage space, bandwidth, and battery power level of a computing device of the user. In some embodiments, additional content items are retrieved, or retrieved content items are deleted, as a variable-size cache on the computing device fills or changes size. | 04-14-2011 |
20110125970 | Automated Clipboard Software - A clipboard software application running on a computer system that automatically selects at least one data item to be pasted to a target destination area upon determining at least one data item in the clipboard memory buffer is appropriate for pasting to the target destination area. A clipboard memory buffer stores a plurality of data items, each data item associated with one or more data traits. The clipboard application selects at least one data item from the clipboard memory buffer upon determining a user selected data item is not appropriate for the target destination area. | 05-26-2011 |
20120042130 | Data Storage System - A data storage system includes a host computing system having a data storage server and a local cache. The host computing system has access via an internet connection to a data account with a cloud data storage provider. A data management protocol is stored on, and adapted to be employed by, the host computing system. The protocol directs the data storage server to store current data in the local cache and dormant data in the data account of the cloud data storage provider. | 02-16-2012 |
20120110267 | METHOD AND APPARATUS FOR PROVIDING EFFICIENT CONTEXT CLASSIFICATION - A method for providing context classification may include causing selection of a single core in a multi-core processor as a context core in a user terminal, configuring cache memory associated with the context core to enable the context core to process context information for the user terminal, and causing execution of prediction and control functions related to user interface interactions based on the context information processed at the context core. Corresponding apparatuses are also provided. | 05-03-2012 |
20120254543 | METHOD AND DEVICE FOR CACHING - The invention relates to a method and entity that allow for saving of uplink bandwidth in connection with peer-to-peer sharing in a wireless communication system. A caching entity, called a reverse cache, intercepts a point-to-point connection between a mobile network user plane gateway and a wireless user equipment running a peer-to-peer application. The reverse cache caches content loaded to the peer-to-peer application and stores information indicative of the wireless user equipment to which the cached content is loaded. A request on the point-to-point connection for delivery of a first content from the wireless user equipment is intercepted by the reverse cache. When the requested first content is cached in the reverse cache along with information indicating that the requested first content has been loaded to the wireless user equipment, the reverse cache responds by delivering the requested first content, without involving the wireless user equipment. | 10-04-2012 |
20120303901 | Distributed caching and analysis system and method - Distributed caching and analysis system and method are disclosed. In an example, a method for distributed caching and analyzing includes processing a local data partition on a distributed caching platform (DCP) by a query engine at each node in the DCP. The method also includes aggregating query results for a client from multiple nodes in the DCP for real-time, parallel analytics. | 11-29-2012 |
20120311268 | METHOD AND APPARATUS FOR CONTROLLING DATA STORAGE - Disclosed are a method and an apparatus for controlling data storage. The method includes: obtaining the number of copies of to-be-placed media content; inputting user set information, server set information, media traffic demand information, and network topology information that are collected into a joint optimization model that is based on server selection and traffic engineering to perform joint optimization, and obtaining output information; performing statistics collection on the output information to obtain user access statistics of the to-be-placed media content on each cache device; and placing, according to the user access statistics of the media content and the number of copies, the copies of the to-be-placed media content so that the copies of the to-be-placed media content are preferentially placed on a cache device having large user access statistics. Embodiments of the present invention also provide an apparatus for controlling data storage. | 12-06-2012 |
20130007369 | Transparent Cache for Mobile Users - A system includes a cache node operative to communicatively connect to a user device, cache data, and send requested cache data to the user device, and a first support cache node operative to communicatively connect to the cache node, cache data, and send requested cache data to the user device via the cache node. | 01-03-2013 |
20130124801 | SAS HOST CONTROLLER CACHE TRACKING - A technique to track a host controller cache that includes receiving from a host controller a command indicating whether a cache of the host controller has data which is to be stored to a storage system. In the event that the host controller fails, perform an operation to transfer control from the host controller to another host controller based on whether the command indicates that the data of the cache was stored to the storage system. | 05-16-2013 |
20130132676 | ASYNCHRONOUS DATA BINDING - The present invention extends to methods, systems, and computer program products for asynchronously binding data from a data source to a data target. A user interface thread and a separate thread are used to enable the user interface thread to continue execution rather than blocking to obtain updated data, to which elements of a user interface that the user interface thread is managing are bound. The separate thread obtains updated data from a data source, stores the updated data in a local cache, and notifies the user interface thread of the updated data's presence in the local cache. The user interface thread, upon detecting the notification, accesses the updated data in the local cache and populates the updated data into the user interface. | 05-23-2013 |
20130282984 | DATA CACHING METHOD AND APPARATUS - The present invention discloses a data caching method and apparatus, and relates to the field of network applications. The method includes: receiving a first data request; writing target data in the first data request into an on-chip Cache, and counting a storage time of the target data in the on-chip cache; enabling a delay expiry identifier of the target data when the storage time of the target data in the Cache reaches a preset delay time; and releasing the target data when the delay expiry identifier of the target data is in an enabled state and processing of the target data is complete. | 10-24-2013 |
20130318304 | PROVIDING DATA TO A USER INTERFACE FOR PERFORMANCE MONITORING - A method, system, and computer readable storage medium for providing data to a user interface for performance monitoring are disclosed, in which a data definition is acquired, where the data definition is generated in response to a definition of the user interface. Data is acquired from data sources based on the data definition. The acquired data is processed based on the data definition, and the processed data is cached. | 11-28-2013 |
20140013057 | OBJECT TYPE AWARE BYTE CACHING - One or more embodiments perform byte caching. At least one data packet is received from at least one network node. At least one data object is extracted from the at least one data packet. An object type associated with the at least one data object is identified. The at least one data object is divided into a plurality of byte sequences based on the object type that is associated with the at least one data object. At least one byte sequence in the plurality of byte sequences is stored into a byte cache. | 01-09-2014 |
20140025896 | CACHING ELECTRONIC DOCUMENT RESOURCES IN A CLIENT DEVICE HAVING AN ELECTRONIC RESOURCE DATABASE - An electronic document references one or more electronic document resources stored on a host device. The host device may indicate in the electronic document that an electronic document is cacheable by a client device. When an electronic document resource is identified as cacheable by the client device, the client device caches the electronic document resource in a database stored in a computer-readable medium of the client device. The client device may also generate an electronic document resource catalog that identifies those electronic document resources that are cached in the database. When the client device next requests the electronic document from the host device, the client device may transmit the electronic document resource catalog to the host device. Upon receiving the electronic document resource catalog, the host device may modify the electronic document so that the electronic document references the electronic document resources cached in the database of the client device. | 01-23-2014 |
20140122806 | CACHE DEVICE FOR SENSOR DATA AND CACHING METHOD FOR THE SAME - A cache device includes a cache module, which comprises a sensor data access interface, a sensor data acquisition module and a driver library. The sensor data access interface receives a data request of back-end sensors from a front-end monitoring system. The sensor data acquisition module queries the sensors for the sensor data in accordance with the received request, receives and saves the sensor data from the sensors, and returns the sensor data to the monitoring system through the sensor data access interface. The driver library includes at least one driver program, and the sensor data acquisition module reads the sensors via executing the driver program, wherein the driver library selects the communication protocol used by the queried sensors for the executed driver program to use. | 05-01-2014 |
20140143495 | METHODS AND APPARATUS FOR SOFT-PARTITIONING OF A DATA CACHE FOR STACK DATA - A method of partitioning a data cache comprising a plurality of sets, the plurality of sets comprising a plurality of ways, is provided. Responsive to a stack data request, the method stores a cache line associated with the stack data in one of a plurality of designated ways of the data cache, wherein the plurality of designated ways is configured to store all requested stack data. | 05-22-2014 |
20140181406 | SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM FOR SPOOL CACHE MANAGEMENT - A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated. A metric is calculated for the data block as a function of the residency time. The metric may further be calculated as a function of the data block size. One or more data blocks stored in cache memory are evaluated by comparing a respective metric of the one or more data blocks with the metric of the data block to be stored. A determination is then made to either store the data block on the disk drive or flush the one or more data blocks from the cache memory and store the data block in the cache memory. In this manner, the cache memory may be more efficiently utilized by storing smaller data blocks with lesser residency times by flushing larger data blocks with significant residency times from the cache memory. The disclosed cache management mechanisms are effective for many workloads and are adaptable to various database usage scenarios without requiring detailed studies of the particular data demographics and workload. | 06-26-2014 |
20140237188 | ELECTRONIC INFORMATION CACHING - Electronic information is made more readily available to one or more access requestors based on an anticipated demand for the electronic information using a process, system or computer software. For instance, electronic information stored on a first storage medium is identified for transport (e.g., in response to a request of at least one of the access requestors), and the electronic information is transported accordingly. Afterwards, a determination is made to store the electronic information on a second storage medium that is more accessible to the access requestors than the first storage medium. The determination is based on an anticipated demand of the access requestors for the electronic information. The anticipated demand is determined based at least on information that is not particular to any single access requestor. The electronic information then is stored on the second storage medium and the access requestors are provided access to the electronic information from the second storage medium. | 08-21-2014 |
20140244934 | STORAGE APPARATUS - [Object] A storage apparatus capable of preventing degradation of processing performance when transferring data of a record format to a mainframe is proposed. | 08-28-2014 |
20140281247 | METHOD TO ACCELERATE QUERIES USING DYNAMICALLY GENERATED ALTERNATE DATA FORMATS IN FLASH CACHE - A method for accelerating queries using dynamically generated columnar data in a flash cache is provided. In an embodiment, a method comprises a storage device receiving a first request for data that is stored in the storage device in a base major format in one or more primary storage devices. The storage device comprises a cache. The base major format is any one of: a row-major format, a column-major format and a hybrid-columnar format. Based on first one or more criteria, it is determined whether to rewrite the data into rewritten data in a rewritten major format. In response to determining to rewrite the data into rewritten data in a rewritten major format, the storage device rewrites at least a portion of the data into particular rewritten data in the rewritten major format. The rewritten data is stored in the cache. | 09-18-2014 |
20140289472 | APPLICATION-GUIDED BANDWIDTH-MANAGED CACHING - Methods and systems for populating a cache memory that services a media composition system. Caching priorities are based on a state of the media composition system, such as media currently within a media composition timeline, a composition playback location, media playback history, and temporal location within clips that are included in the composition. Caching may also be informed by descriptive metadata and media search results within a media composition client or a within a media asset management system accessed by the client. Additional caching priorities may be based on a project workflow phase or a client project schedule. Media may be partially written to or read from cache in order to meet media request deadlines. Caches may be local to a media composition system or remote, and may be fixed or portable. | 09-25-2014 |
20140310470 | INTELLIGENT CACHING - Disclosed are methods, systems, paradigms and structures for managing cache memory in computer systems. Certain caching techniques anticipate queries and cache the data that may be required by the anticipated queries. The queries are predicted based on previously executed queries. The features of the previously executed queries are extracted and correlated to identify a usage pattern of the features. The prediction model predicts queries based on the identified usage pattern of the features. The disclosed method includes purging data from the cache based on predefined eviction policies that are influenced by the predicted queries. The disclosed method supports caching time series data. The disclosed system includes a storage unit that stores previously executed queries and features of the queries. | 10-16-2014 |
20140325157 | DATA ACCESS REQUEST MONITORING TO REDUCE SYSTEM RESOURCE USE FOR BACKGROUND OPERATIONS - An I/O processing stack includes a proxy that can provide processing services for access requests to initialized and uninitialized storage regions. For a write request, the proxy stores write information in a write metadata repository. If the write is requested for an address in an initialized storage region of the storage system, the proxy performs a write to the initialized region based on region information in the write I/O access request. If the write is requested for an address in an uninitialized storage region of the storage system, the proxy performs an on-demand initialization of the storage region and then performs a write to the storage region based on region information provided by the proxy. | 10-30-2014 |
20150012709 | PROGRESSIVE VIRTUAL LUN - A system for progressive just-in-time restoration of data from backup media. Backup data may be stored on any kind of media such as DAS disk, object storage, USB drive, network share or tape. The backup data does not need to reside on contiguous media and can span multiple media. An index map is maintained that represents contiguous blocks of backup data of a volume. The backup data may be compressed, encrypted, or de-duplicated. The backup data may be located on different media, object stores, or network shares, or differing geographic locations. To perform a recovery, a virtual LUN is provided to the operating system and applications of the restored computer. | 01-08-2015 |
20150149726 | DATA DISTRIBUTION DEVICE AND DATA DISTRIBUTION METHOD - A data distribution device includes: a memory configured to store cache data of data to be distributed; and a processor coupled to the memory and configured to: read the cache data from the memory in accordance with a request message received from other devices to distribute the cache data to the other devices, update, when the request message is received, a counter value that gets closer to a given value with time, so as to make the counter value move away from the given value in accordance with a reference value that is a reciprocal of a threshold value of a reception rate of the request message, whether or not to store the cache data being determined based on the reception rate; and discard the cache data in the memory when the counter value becomes the given value. | 05-28-2015 |
20150347305 | METHOD AND APPARATUS FOR OUTPUTTING LOG INFORMATION - A method and an apparatus for outputting log information are disclosed in the field of information technology. In the method, a system thread acquires a plurality of pieces of log information from a plurality of application threads. The system thread establishes a log information cache queue and caches each piece of the log information from the plurality of pieces of log information into the established log information cache queue. The system thread then writes the log information at the front of the log information cache queue into a log file. | 12-03-2015 |
20150378617 | UTILIZATION OF DISK BUFFER FOR BACKGROUND REPLICATION PROCESSES - A method for accelerating a background replication process on storage volumes during application I/O (input/output) requests includes reading requested data from a first storage volume, storing the requested data in an embedded memory device, and providing the requested data to the application. The method receives a read request from the background replication process and responds by providing data from the embedded memory device to the requesting background replication process concurrently with providing data to the requesting application. The method stores, by the background replication process, the data provided from the embedded memory device onto a second storage volume. | 12-31-2015 |
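The buffering scheme above (application reads populate an embedded memory buffer, and the replication reader drains that buffer instead of re-reading the source volume) might look like this. Dict-backed volumes and all names here are illustrative assumptions.

```python
class ReplicationBuffer:
    """Sketch of serving a background replication reader from an
    embedded memory buffer filled by application reads, so replication
    avoids re-reading blocks from the first storage volume."""
    def __init__(self, source_volume):
        self.source = source_volume       # first storage volume (a dict here)
        self.buffer = {}                  # stands in for the embedded memory
        self.replica = {}                 # second storage volume

    def app_read(self, block):
        data = self.source[block]         # read from the first volume
        self.buffer[block] = data         # keep a copy for the replicator
        return data

    def replication_read(self, block):
        if block in self.buffer:
            return self.buffer.pop(block) # served from the buffer
        return self.source[block]         # fall back to the volume

    def replicate(self, block):
        self.replica[block] = self.replication_read(block)
```

Blocks the application touched are replicated "for free" from memory, which is the acceleration the abstract claims.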
20160005953 | ELECTRONIC DEVICE INCLUDING A SEMICONDUCTOR MEMORY - This technology provides an electronic device. An electronic device in accordance with an implementation of this document includes semiconductor memory, and the semiconductor memory includes a contact plug; a first stack structure disposed over the contact plug and coupled to the contact plug, wherein the first stack structure includes a pinning layer controlling a magnetization of a pinned layer; and a second stack structure disposed over the first stack structure and coupled to the first stack structure, wherein the second stack structure includes a MTJ (Magnetic Tunnel Junction) structure which includes the pinned layer having a pinned magnetization direction, a free layer having a variable magnetization direction, and a tunnel barrier layer interposed between the pinned layer and the free layer, wherein a width of the first stack structure is larger than a width of the contact plug and a width of the second stack structure. | 01-07-2016 |
20160011980 | DISTRIBUTED PROCESSING METHOD AND SYSTEM | 01-14-2016 |
20160041777 | CLIENT-SIDE DEDUPLICATION WITH LOCAL CHUNK CACHING - Techniques and mechanisms described herein facilitate the transmission of a data stream from a client device to a networked storage system. According to various embodiments, a fingerprint for a data chunk may be identified by applying a hash function to the data chunk via a processor. The data chunk may be determined by parsing a data stream at the client device. A determination may be made as to whether the data chunk is stored in a chunk file repository at the client device. A block map update request message including information for updating a block map may be transmitted to a networked storage system via a network. The block map may identify a designated memory location at which the chunk is stored at the networked storage system. | 02-11-2016 |
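The flow above (chunk the stream, fingerprint each chunk with a hash, consult a local chunk repository, and send a block map update for the chunk's location) can be sketched as follows. Fixed-size chunking, SHA-256 as the hash function, and the list standing in for the network are all assumptions.

```python
import hashlib

class DedupClient:
    """Sketch of client-side deduplication with a local chunk cache.
    Fixed-size chunking and the in-memory 'network' are assumptions."""
    CHUNK = 4

    def __init__(self):
        self.local_repo = {}      # fingerprint -> chunk (chunk file repository)
        self.uploaded = []        # chunks actually sent to networked storage
        self.block_map = {}       # fingerprint -> designated memory location

    def fingerprint(self, chunk):
        return hashlib.sha256(chunk).hexdigest()

    def send_stream(self, data):
        # Parse the data stream into chunks at the client.
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            fp = self.fingerprint(chunk)
            if fp not in self.local_repo:
                self.local_repo[fp] = chunk
                self.uploaded.append(chunk)   # transmit only unseen chunks
            # Block map update request: record where the chunk lives.
            self.block_map.setdefault(fp, len(self.block_map))
```

The duplicate second chunk below never crosses the "network": only the block map update refers to it.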
20160041916 | Systems and Methods to Manage Cache Data Storage - Systems and methods for managing records stored in a storage cache are provided. A cache index is created and maintained to track where records are stored in buckets in the storage cache. The cache index maps the memory locations of the cached records to the buckets in the cache storage and can be quickly traversed by a metadata manager to determine whether a requested record can be retrieved from the cache storage. Bucket addresses stored in the cache index include a generation number of the bucket that is used to determine whether the cached record is stale. The generation number allows a bucket manager to evict buckets in the cache without having to update the bucket addresses stored in the cache index. Further, the cache index can be expanded to accommodate very small records, such as those generated by legacy systems. | 02-11-2016 |
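The generation-number trick above (bucket addresses in the index carry the bucket's generation, so evicting a bucket only bumps its generation and never rewrites index entries) can be sketched directly. A single-bucket default and all names are illustrative assumptions.

```python
class BucketCache:
    """Sketch of a cache index whose bucket addresses embed a generation
    number: evicting a bucket increments its generation, and any index
    entry holding the old generation becomes stale without being touched."""
    def __init__(self, n_buckets):
        self.generation = [0] * n_buckets        # current generation per bucket
        self.buckets = [dict() for _ in range(n_buckets)]
        self.index = {}                          # key -> (bucket_id, generation)

    def put(self, key, value):
        b = hash(key) % len(self.buckets)
        self.buckets[b][key] = value
        self.index[key] = (b, self.generation[b])

    def get(self, key):
        entry = self.index.get(key)
        if entry is None:
            return None
        b, gen = entry
        if gen != self.generation[b]:            # evicted bucket: stale entry
            return None
        return self.buckets[b].get(key)

    def evict_bucket(self, b):
        self.buckets[b].clear()
        self.generation[b] += 1                  # invalidate index entries lazily
```

Eviction is O(1) with respect to the index, which is the point of storing the generation inside the bucket address.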
20160048456 | METHOD FOR INCREASING CACHE SIZE - A method for increasing storage space in a system containing a block data storage device, a memory, and a processor is provided. Generally, the processor is configured by the memory to tag metadata of a data block of the block storage device indicating the block as free, used, or semifree. The free tag indicates the data block is available to the system for storing data when needed, the used tag indicates the data block contains application data, and the semifree tag indicates the data block contains cache data and is available to the system for storing application data if no blocks marked with the free tag are available to the system. | 02-18-2016 |
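The three-state tagging can be shown with a minimal allocator: cache data occupies semifree blocks, and application allocation reclaims semifree blocks only when no free block remains. The allocator interface is an assumption for illustration.

```python
FREE, USED, SEMIFREE = "free", "used", "semifree"

class BlockAllocator:
    """Sketch of free/used/semifree block tagging: cache data lives in
    SEMIFREE blocks, which the system may reclaim for application data
    whenever no FREE block is available."""
    def __init__(self, n_blocks):
        self.tags = [FREE] * n_blocks

    def alloc_app(self):
        # Prefer truly free blocks; otherwise reclaim a cache block.
        for pool in (FREE, SEMIFREE):
            for i, t in enumerate(self.tags):
                if t == pool:
                    self.tags[i] = USED
                    return i
        return None   # no space left for application data

    def alloc_cache(self):
        for i, t in enumerate(self.tags):
            if t == FREE:
                self.tags[i] = SEMIFREE
                return i
        return None   # cache never displaces application data
```

Because semifree blocks count as available, the cache can grow into all otherwise-idle space without ever shrinking the space usable by applications.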
20160062901 | POPULATING ITEMS IN WORKLISTS USING EXISTING CACHE - Methods, systems, and computer-readable storage media for providing a worklist of a user with at least one item. In some implementations, actions include determining one or more timestamps, each timestamp indicating a time at which an item cache was synchronized for a respective provider of one or more providers, transmitting one or more requests to one or more respective providers of the one or more providers, the one or more requests each including the one or more timestamps and indicating a user, receiving one or more responses, each response including a sub-set of items, each item in the sub-set of items being included based on the one or more timestamps, populating the worklist of the user with one or more items in the sub-set of items, reusing a previously synchronized worklist database cache, and providing the worklist for display to the user on a display. | 03-03-2016 |
20160062910 | SELECTING HASH VALUES BASED ON MATRIX RANK - One embodiment of the present invention includes a hash selector that facilitates performing effective hashing operations. In operation, the hash selector creates a transformation matrix that reflects specific optimization criteria. For each hash value, the hash selector generates a potential hash value and then computes the rank of a submatrix included in the transformation matrix. Based on this rank in conjunction with the optimization criteria, the hash selector either re-generates the potential hash value or accepts the potential hash value. Advantageously, the optimization criteria may be tailored to create desired correlations between input patterns and the results of performing hashing operations based on the transformation matrix. Notably, the hash selector may be configured to efficiently and reliably generate, incrementally, a transformation matrix that, when applied to certain strides of memory addresses, produces a more uniform distribution of accesses across cache lines than previous approaches to memory addressing. | 03-03-2016 |
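The generate-then-check-rank loop can be illustrated with XOR-fold address hashing over GF(2): each candidate row (a bit mask) is accepted only if it raises the rank of the mask matrix, i.e., only if it is linearly independent of the rows chosen so far. Treating rows as parity masks and the specific rank algorithm are assumptions of this sketch, not the patented method.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as bit masks."""
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)     # reduce r against the current basis
        if r:
            basis.append(r)
    return len(basis)

def select_hash_masks(n_bits, n_hashes, rng):
    """Incrementally pick parity masks; a candidate is accepted only if it
    raises the GF(2) rank of the mask matrix (the optimization criterion),
    i.e., re-generated otherwise."""
    masks = []
    while len(masks) < n_hashes:
        candidate = rng.randrange(1, 1 << n_bits)
        if gf2_rank(masks + [candidate]) > gf2_rank(masks):
            masks.append(candidate)       # accept: independent of prior rows
        # else: re-generate the potential hash value
    return masks

def hash_addr(addr, masks):
    # Each output bit is the parity of the address ANDed with one mask.
    return sum((bin(addr & m).count("1") & 1) << i for i, m in enumerate(masks))
```

Full rank guarantees the resulting linear map is surjective, so over any full address range every hash value is hit equally often, the uniform cache-line spread the abstract aims at.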
20160149120 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME - This technology provides an electronic device and a method for fabricating the same. An electronic device in accordance with an implementation of this document includes semiconductor memory, and the semiconductor memory includes an interlayer dielectric layer formed over a substrate and having a hole; a conductive pattern filled in the hole and having a top surface located at a level substantially same as a top surface of the interlayer dielectric layer; and an MTJ (Magnetic Tunnel Junction) structure formed over the conductive pattern to be coupled to the conductive pattern and including a free layer having a variable magnetization direction, a pinned layer having a pinned magnetization direction and a tunnel barrier layer interposed between the free layer and the pinned layer, wherein an upper portion of the conductive pattern includes a first amorphous region. | 05-26-2016 |
20160154740 | Migration of Data to Register File Cache | 06-02-2016 |
20160188240 | METHOD FOR A SOURCE STORAGE DEVICE SENDING DATA TO A BACKUP STORAGE DEVICE FOR STORAGE, AND STORAGE DEVICE - In a backup method, a source storage device sends data to a backup storage device. The source storage device contains a processor and a cache. The processor receives a write data request that includes target data, and then reads a period ID recorded in a period ID table, where the period ID corresponds to a first period. Next, the processor modifies the write data request by attaching the period ID to the target data and writes the modified write data request into the cache. After a backup task corresponding to the first period is triggered, the processor obtains the data received during the period corresponding to the period ID and sends the obtained data to the backup storage device. | 06-30-2016 |
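The period-tagging scheme can be sketched as follows: each cached write carries the current period ID, and triggering a backup collects exactly the writes of that period while advancing the ID for subsequent writes. Reducing the period ID table to one counter and the list-based cache are simplifying assumptions.

```python
class SourceStorage:
    """Sketch of tagging cached writes with a period ID so a backup task
    can collect exactly the data belonging to one period."""
    def __init__(self):
        self.period_id = 0        # period ID table, reduced to one entry
        self.cache = []           # modified write requests: (period_id, data)

    def write(self, target_data):
        # Attach the current period ID to the target data before caching.
        self.cache.append((self.period_id, target_data))

    def trigger_backup(self, backup_device):
        pid = self.period_id
        self.period_id += 1       # later writes belong to the next period
        batch = [d for p, d in self.cache if p == pid]
        backup_device.extend(batch)   # send obtained data to backup storage
        return batch
```

Because the tag travels with the data in the cache, no separate bookkeeping is needed to tell which writes arrived before or after the backup was triggered.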
20160196212 | PROVIDING DATA TO A USER INTERFACE FOR PERFORMANCE MONITORING | 07-07-2016 |
20160203084 | Cache Line Compaction of Compressed Data Segments | 07-14-2016 |
20160378815 | ABORTABLE TRANSACTIONS USING VERSIONED TUPLE CACHE - A transaction manager for handling operations on data in a storage system provides a system for executing transactions that uses a versioned tuple cache to achieve fast, abortable transactions using a redo-only log. The transaction manager updates an in-memory key-value store and also attaches a transaction identifier to the tuple as a minor key. Opportunistic locking can be accomplished due to the low cost of aborting transactions. | 12-29-2016 |
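Attaching the transaction identifier to the tuple as a minor key makes abort trivial: uncommitted versions live under (key, txn_id) and are simply dropped, while commit promotes them and appends to a redo-only log. The in-memory store and all names below are illustrative assumptions.

```python
class VersionedTupleStore:
    """Sketch of abortable transactions over a versioned tuple cache:
    writes are keyed by (key, txn_id) as a minor key, so aborting just
    discards versions (no undo log); commit promotes them and records
    a redo-only log entry."""
    def __init__(self):
        self.committed = {}     # key -> committed value
        self.versions = {}      # (key, txn_id) -> uncommitted value
        self.redo_log = []      # redo-only log of committed writes

    def write(self, txn_id, key, value):
        self.versions[(key, txn_id)] = value

    def read(self, txn_id, key):
        # A transaction sees its own writes, else the committed value.
        return self.versions.get((key, txn_id), self.committed.get(key))

    def commit(self, txn_id):
        for (k, t), v in list(self.versions.items()):
            if t == txn_id:
                self.redo_log.append((txn_id, k, v))
                self.committed[k] = v
                del self.versions[(k, t)]

    def abort(self, txn_id):
        # Cheap abort: drop the transaction's versions; nothing to undo.
        for kt in [kt for kt in self.versions if kt[1] == txn_id]:
            del self.versions[kt]
```

The low cost of `abort` is what makes the opportunistic locking mentioned in the abstract attractive: conflicts can be resolved by aborting rather than by waiting.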
20170235676 | SYSTEM FOR DISTRIBUTED DATA PROCESSING WITH AUTOMATIC CACHING AT VARIOUS SYSTEM LEVELS | 08-17-2017 |