40th week of 2009 patent application highlights part 78 |
Patent application number | Title | Published |
20090248952 | DATA CONDITIONING TO IMPROVE FLASH MEMORY RELIABILITY - Methods and apparatus for managing data storage in memory devices utilizing memory arrays of varying density memory cells. Data can be initially stored in lower density memory. Data can be further read, compacted, conditioned and written to higher density memory as background operations. Methods of data conditioning to improve data reliability during storage to higher density memory and methods for managing data across multiple memory arrays are also disclosed. | 2009-10-01 |
20090248953 | STORAGE SYSTEM - A storage system capable of managing information in accordance with system configuration changes is provided. | 2009-10-01 |
20090248954 | Storage system - Provided is a storage system capable of holding data by associating main data for long-term retention with sub data for maintaining its readability. The storage system: stores first data in a first location within a storage area upon reception of a storage request for the first data with a first file identifier being specified; holds information for associating the first file identifier, the first location, a retention period of the data, and first version information with one another; stores second data in a second location within the storage area upon reception of a storage request for the second data with the first file identifier and second version information being specified; holds information for associating the first file identifier, the second location, and the second version information with one another; and inhibits the first data and the second data from being changed before elapse of the retention period of the data. | 2009-10-01 |
20090248955 | REDUNDANCY FOR CODE IN ROM - A memory device capable of replacing code in read-only memory (ROM) by using a ROM redundancy register is disclosed. The memory device includes a controller that accesses code in ROM by use of a ROM address. The memory device further includes a ROM redundancy register capable of storing one or more ROM addresses and storing code corresponding to the one or more ROM addresses. The one or more ROM addresses may represent address locations in ROM that need code replacement. The ROM redundancy register may determine whether code corresponding to the ROM address should be replaced by code stored in the ROM redundancy register. | 2009-10-01 |
20090248956 | Apparatus for Storing Management Information in a Computer System - An apparatus for providing management storage via a USB port of a computer system is disclosed. The apparatus includes a flash memory, first and second switches, first and second inverters, a designated port, and a controller. Coupled to the flash memory, the first and second switches are controlled by a main power of a computer system in a complementary manner. The first and second inverters, which are powered by a standby power of the computer system, are each coupled to a respective control input of the first and second switches. The designated port, which is coupled to the flash memory via the first switch, allows data to be read from and written to the flash memory without booting up the computer system. The controller, which is coupled to the flash memory via the second switch, allows data to be read from and written to the flash memory by the computer system only after the computer system has been booted up. | 2009-10-01 |
20090248957 | MEMORY RESOURCE MANAGEMENT FOR A FLASH AWARE KERNEL - A memory system is provided. The system includes an operating system kernel that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the kernel to regulate read and write access to the one or more FLASH devices. | 2009-10-01 |
20090248958 | FLASH MEMORY USABILITY ENHANCEMENTS IN MAIN MEMORY APPLICATION - A memory system is provided. The system includes an operating system kernel that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the kernel to regulate read and write access to the one or more FLASH devices. | 2009-10-01 |
20090248959 | FLASH MEMORY AND OPERATING SYSTEM KERNEL - A memory system is provided. The system includes an operating system kernel that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the kernel to regulate read and write access to the one or more FLASH devices. | 2009-10-01 |
20090248960 | METHODS AND SYSTEMS FOR CREATING AND USING VIRTUAL FLASH CARDS - Creating and using virtual flash cards is disclosed. A disclosed method includes receiving an input of sets of flash data into a portable handheld device, associating related sets of the flash data based on manual inputs that define the relationship between the related sets of flash data, presenting one of the related sets of flash data via the handheld device and prompting a selection of a set of flash data that is associated with the presented set of flash data. Feedback is provided that indicates whether or not a selected set of flash data is correct. | 2009-10-01 |
20090248961 | MEMORY MANAGEMENT METHOD AND CONTROLLER FOR NON-VOLATILE MEMORY STORAGE DEVICE - A memory management method and a controller for a non-volatile memory storage device are provided. The memory management method and the controller are adapted for establishing a logical-to-physical mapping table of each block in a memory buffer of the controller by merely reading the data stored in a system management area within a start page of each block, so as to improve the management efficiency of the non-volatile memory storage device. In addition, the method and the controller of the present invention integrate all or a part of the system management areas within the start page for efficiently managing and using the memory capacity of all the system management areas within the start page. | 2009-10-01 |
20090248962 | Memory system and wear leveling method thereof - A memory system includes a variable resistance memory configured to input and output data by a first unit and a translation layer for managing the degree of wear of the variable resistance memory by a second unit, different from the first unit. | 2009-10-01 |
20090248963 | MEMORY CONTROLLER AND MEMORY SYSTEM INCLUDING THE SAME - A memory system includes a nonvolatile memory including a memory space which is formatted from outside by an additional-write type file system, and a memory controller controlling the nonvolatile memory, the memory controller transmitting a write protect error when it is instructed to write data at an address equal to or smaller than the address of previously written data in an address area of the memory space. | 2009-10-01 |
20090248964 | MEMORY SYSTEM AND METHOD FOR CONTROLLING A NONVOLATILE SEMICONDUCTOR MEMORY - A memory system includes a nonvolatile semiconductor memory having blocks, the block being the data erasing unit, and a controller configured to execute an update processing and a compaction processing. The update processing includes writing superseding data in a block, the superseding data being treated as valid data, and invalidating superseded data having the same logical address as the superseding data, the superseded data being treated as invalid data. The compaction processing includes retrieving blocks having invalid data using a management table, the management table managing blocks in a linked-list format for each number of valid data included in the block; selecting a compaction source block having at least one valid data from the retrieved blocks; copying a plurality of valid data included in the compaction source blocks into a compaction target block; invalidating the plurality of valid data in the compaction source blocks; and releasing the compaction source blocks in which all data are invalidated. | 2009-10-01 |
20090248965 | HYBRID FLASH MEMORY DEVICE AND METHOD OF CONTROLLING THE SAME - A hybrid flash memory device and a control method of the hybrid flash memory device are provided. The hybrid flash memory device includes a micro controller connected to a host bus for receiving data to be written in the hybrid flash memory device from a host via the host bus; and a memory module coupled to the micro controller. The memory module includes a first type of flash memory and a second type of flash memory. The data are written in a first log block of the first type of flash memory when the data size is not greater than a predetermined data size. Conversely, the data are written in a second log block of the second type of flash memory when the data size is greater than the predetermined data size. | 2009-10-01 |
20090248966 | FLASH DRIVE WITH USER UPGRADEABLE CAPACITY VIA REMOVABLE FLASH - An exemplary data storage device includes a fixed storage medium, an expansion socket configured to selectively receive at least one removable memory card, and a controller configured to interface the fixed storage medium and the at least one removable memory card with a host device. An exemplary method includes verifying credentials with verification data stored on the fixed storage medium of the data storage unit, and protecting data on the removable storage medium removably attached to the data storage unit. | 2009-10-01 |
20090248967 | PORTABLE ALARM CONFIGURATION/UPDATE TOOL - A stand-alone portable alarm update tool includes a memory interface for receiving a computer readable memory; a serial port for interconnection to a security alarm panel by way of a complementary port; a processor; and processor readable memory in communication with the processor, storing software adapting the processor to upload and download configuration files from a removable memory received by the memory interface to the alarm panel, by way of the serial port. Conveniently, the tool may be packaged in a hand-held casing, which may also house a battery. In this way, the tool may be readily transported by an installer without being unnecessarily heavy or bulky. | 2009-10-01 |
20090248968 | REDUCTION OF LATENCY IN STORE AND FORWARD ARCHITECTURES UTILIZING MULTIPLE INTERNAL BUS PROTOCOLS - Disclosed is a store and forward device that reduces latency. The store and forward device allows front end devices having various transfer protocols to be connected in a single path through a RAM, while reducing latency. Front end devices that transfer data on a piecemeal basis are required to transfer all of the data to a RAM prior to downloading data to a back end. Front end devices that transfer data in a single download begin the transfer of data out of a RAM as soon as a threshold value is reached. Hence, the latency associated with downloading all of the data into a RAM before forwarding is reduced. | 2009-10-01 |
20090248969 | REGISTERED DIMM MEMORY SYSTEM - A Registered DIMM (RDIMM) system with reduced electrical loading on the data bus for increased memory capacity and operating frequency. In one embodiment, the data bus is buffered on the DIMM. In another embodiment, the data bus is selectively coupled to a group of memory chips via switches. | 2009-10-01 |
20090248970 | DUAL EDGE COMMAND - A technique to increase transfer rate of command and address signals via a given number of command and address pins in each of one or more integrated circuit memory devices during a clock cycle of a clock signal. In one example embodiment, the command and address signals are sent on both rising and falling edges of a clock cycle of a clock signal to increase the transfer rate and essentially reduce the number of required command and address pins in each integrated circuit memory device. | 2009-10-01 |
20090248971 | System and Dynamic Random Access Memory Device Having a Receiver - A dynamic random access memory (DRAM) device receiver circuit includes an input to receive a data signal, and also includes decision circuitry to make a decision about the received data signal based on a present sampled data signal and a coefficient value corresponding to at least one previously sampled data signal. | 2009-10-01 |
20090248972 | Dynamic Memory Supporting Simultaneous Refresh and Data-Access Transactions - Described are dynamic memory systems that perform overlapping refresh and data-access (read or write) transactions that minimize the impact of the refresh transaction on memory performance. The memory systems support independent and simultaneous activate and precharge operations directed to different banks. Two sets of address registers enable the system to simultaneously specify different banks for refresh and data-access transactions. | 2009-10-01 |
20090248973 | System and method for providing address decode and virtual function (VF) migration support in a peripheral component interconnect express (PCIE) multi-root input/output virtualization (IOV) environment - The present invention is a method for providing address decode and Virtual Function (VF) migration support in a Peripheral Component Interconnect Express (PCIE) multi-root Input/Output Virtualization (IOV) environment. The method may include receiving a Transaction Layer Packet (TLP) from the PCIE multi-root IOV environment. The method may further include comparing a destination address of the TLP with a plurality of base address values stored in a Content Addressable Memory (CAM), each base address value being associated with a Virtual Function (VF), each VF being associated with a Physical Function (PF). The method may further include when a base address value included in the plurality of base address values matches the destination address of the TLP, providing the matching base address value to the PCIE multi-root IOV environment by outputting from the CAM the matching base address value. The method may further include constructing a requestor ID for the VF associated with the matching base address value, the requestor ID being based upon the output matching base address value and a bus number for a PF which owns the CAM. | 2009-10-01 |
20090248974 | OPTIMIZING OPERATIONAL REQUESTS OF LOGICAL VOLUMES - A method, system, apparatus and computer program product for determining an optimal file operational time in a data storage system for use with a tape media storing data in a serpentine pattern on tape media is provided. The operational time is optimized based on a “sequence on tape” algorithm, a “minimum reversal of direction on tape” algorithm, or a “minimum delay to next data” algorithm. A model is used to determine the predicted performance of each of the algorithms, and the algorithm that provides the minimum overall operational time is chosen and applied for carrying out an operational process on the tape media. | 2009-10-01 |
20090248975 | SYSTEMS AND METHODS FOR MANAGING STALLED STORAGE DEVICES - Embodiments relate to systems and methods for managing stalled storage devices of a storage system. In one embodiment, a method for managing access to storage devices includes determining that a first storage device, which stores a first resource, is stalled and transitioning the first storage device to a stalled state. The method also includes receiving an access request for at least a portion of the first resource while the first storage device is in the stalled state and attempting to provide access to a representation of the portion of the first resource from at least a second storage device that is not in a stalled state. In another embodiment, a method of managing access requests by a thread for a resource stored on a storage device includes initializing a thread access level for an access request by a thread for the resource. The method also includes determining whether the storage device, which has a device access level, is accessible based at least in part on the thread access level and the device access level and selecting a thread operation based at least in part on the determination of whether the storage device is accessible. The thread operation may be selected from attempting the thread access request if the device is accessible and determining whether to restart the thread access request if the device is not accessible. | 2009-10-01 |
20090248976 | MULTI-CORE MEMORY THERMAL THROTTLING ALGORITHMS FOR IMPROVING POWER/PERFORMANCE TRADEOFFS - Embodiments of the invention are generally directed to systems, methods, and apparatuses for improving power/performance tradeoffs associated with multi-core memory thermal throttling algorithms. In some embodiments, the priority of shared resource allocation is changed on one or more points in a system, while the system is in dynamic random access memory (DRAM) throttling mode. This may enable the forward progress of cache bound workloads while still throttling DRAM for memory bound workloads. | 2009-10-01 |
20090248977 | VIRTUAL TAPE APPARATUS, VIRTUAL TAPE LIBRARY SYSTEM, AND METHOD FOR CONTROLLING POWER SUPPLY - A virtual tape apparatus, which can switch a power supply state to a tape apparatus to thereby suppress power consumption, has an access instruction unit and a power supply control unit. The access instruction unit determines whether or not it is necessary to supply power to a tape apparatus in which a physical tape is stored and which stores data to the physical tape based on an update state of data stored to a tape volume cache, and the power supply control unit switches a state of power supplied to the tape apparatus based on a result of determination executed by the access instruction unit. | 2009-10-01 |
20090248978 | USB DATA STRIPING - A striping system and method for distributing a payload of data across a plurality of parallel USB cables from a source to a destination is described. The striping devices reside in the architecture of a source and destination connected by more than one standardized USB bus cable. The striping devices increase the bandwidth between the source and the destination by providing more lanes of data traffic and utilizing segmentation and reassembly to ensure that the data is split up and then reassembled correctly into the original stream at the destination. The striping devices allow for user determination of usability along with self diagnostics as to the source's and destination's ability to handle striping. Other embodiments are described. | 2009-10-01 |
20090248979 | Storage apparatus and control method for same - The storage apparatus includes an external logical unit serving as a target of a plurality of host apparatuses, and an internal logical unit assigned to the external logical unit. The internal logical unit includes a common logical unit having common files commonly used by the plurality of host apparatuses, and an individual logical unit having individual files used by each of the plurality of host apparatuses. Upon receipt of a write request from a host apparatus, the storage apparatus performs data matching by comparing the write data of the write request with data stored in the common logical unit on a management block basis. The storage apparatus stores a matching data block in the common logical unit, and stores a non-matching data block in the individual logical unit. | 2009-10-01 |
20090248980 | Storage System and Capacity Allocation Method Therefor - A storage system connected to a terminal, the storage system includes: a plurality of drive devices that respectively drive a plurality of physical disks each having a physical storage area; a RAID configuration unit that configures a plurality of RAID groups by grouping two or more of the plurality of physical disks; a logical disk creation unit that creates, for the terminal through the RAID group, a logical disk having a logical storage area associated with the physical storage area; a memory for storing a RAID group control table showing, for each RAID group, (i) a free capacity that is the amount of physical storage area remaining in the RAID group able to be associated with the logical disk and (ii) a power status of the RAID group; a receiver that receives a request for creating a new logical disk; and an area allocation unit that allocates to the new logical disk the physical storage area remaining in the RAID group selected by giving priority to a RAID group in a powered state over a RAID group in a non-powered state with reference to the RAID group control table. | 2009-10-01 |
20090248981 | SEMICONDUCTOR STORAGE DEVICE - Provided is a semiconductor storage device having a first interface section meeting a USB standard for connection to host equipment, a NAND memory section that is a first semiconductor memory section, a second interface section to which small memory cards can be connected, each small memory card having a second semiconductor memory section, and a controller capable of controlling the NAND memory section and the second semiconductor memory sections by one linear address. | 2009-10-01 |
20090248982 | CACHE CONTROL APPARATUS, INFORMATION PROCESSING APPARATUS, AND CACHE CONTROL METHOD - A cache control apparatus determines whether or not to adopt data acquired by a speculative fetch, which is a memory fetch request output before it becomes clear whether the data requested by a CPU is stored in the cache of the CPU. The determination is made by monitoring a status of the speculative fetch and a time period obtained by adding up the time from when the speculative fetch is output to when the speculative fetch reaches a memory controller and the time from completion of writing of data to a memory, as specified by a data write command issued, before issuance of the speculative fetch, for the same address as that for which the speculative fetch is issued, to when a response to the data write command is returned. | 2009-10-01 |
20090248983 | TECHNIQUE TO SHARE INFORMATION AMONG DIFFERENT CACHE COHERENCY DOMAINS - A technique to enable information sharing among agents within different cache coherency domains. In one embodiment, a graphics device may use one or more caches used by one or more processing cores to store or read information, which may be accessed by one or more processing cores in a manner that does not affect programming and coherency rules pertaining to the graphics device. | 2009-10-01 |
20090248984 | METHOD AND DEVICE FOR PERFORMING COPY-ON-WRITE IN A PROCESSOR - There are disclosed a method and device for performing Copy-on-Write in a processor. The processor comprises: processor cores, L | 2009-10-01 |
20090248985 | Data Transfer Optimized Software Cache for Regular Memory References - Mechanisms are provided for optimizing regular memory references in computer code. These mechanisms may parse the computer code to identify memory references in the computer code. These mechanisms may further classify the memory references in the computer code as either regular memory references or irregular memory references. Moreover, the mechanisms may transform the computer code, by a compiler, to generate transformed computer code in which regular memory references access a storage of a software cache of a data processing system through a high locality cache mechanism of the software cache. | 2009-10-01 |
20090248986 | Apparatus for and Method of Implementing Multiple Content Based Data Caches - A novel and useful mechanism enabling the partitioning of a normally shared L1 data cache into several different independent caches, wherein each cache is dedicated to a specific data type. To further optimize performance, each individual L1 data cache is placed in relatively close physical proximity to its associated register files and functional unit. By implementing separate independent L1 data caches, the content based data cache mechanism of the present invention increases the total size of the L1 data cache without increasing the time necessary to access data in the cache. Data compression and bus compaction techniques that are specific to a certain format can be applied to each individual cache with greater efficiency since the data in each cache is of a uniform type. | 2009-10-01 |
20090248987 | Memory System and Data Storing Method Thereof - A memory system includes a memory device having a cache area and a main area, and a memory controller configured to control the memory device, wherein the memory controller is configured to dump file data into the cache area in response to a flush cache command. | 2009-10-01 |
20090248988 | MECHANISM FOR MAINTAINING CONSISTENCY OF DATA WRITTEN BY IO DEVICES - A multi-core microprocessor includes, in part, a cache coherence manager that maintains coherence among the multitude of microprocessor cores, and an I/O coherence unit that maintains coherent traffic between the I/O devices and the multitude of processing cores of the microprocessor. The I/O coherence unit stalls non-coherent I/O write requests until it receives acknowledgement that all pending coherent I/O write requests issued prior to the non-coherent I/O write requests have been made visible to the processing cores. The I/O coherence unit ensures that MMIO read responses are not delivered to the processing cores until after all previous I/O write requests are made visible to the processing cores. Deadlock conditions are prevented by limiting MMIO requests in such a way that they can never block I/O write requests from completing. | 2009-10-01 |
20090248989 | Multiprocessor computer system with reduced directory requirement - The invention has application in implementation of large Symmetric Multiprocessor Systems with a large number of nodes which include processing elements and associated cache memories. The illustrated embodiment of the invention provides for interconnection of a large number of multiprocessor nodes while reducing, relative to the prior art, the size of directories used for tracking memory coherency throughout the system. The embodiment incorporates, within the memory controller of each node, directory information relating to the current locations of memory blocks, which allows for elimination, at a higher level in the node controllers, of a larger volume of directory information relating to the location of memory blocks. This arrangement thus allows for more efficient implementation of very large multiprocessor computer systems. | 2009-10-01 |
20090248990 | PARTITION-FREE MULTI-SOCKET MEMORY SYSTEM ARCHITECTURE - A technique to increase memory bandwidth for throughput applications. In one embodiment, memory bandwidth can be increased, particularly for throughput applications, without increasing interconnect trace or pin count by pipelining pages between one or more memory storage areas on half cycles of a memory access clock. | 2009-10-01 |
20090248991 | Termination of Prefetch Requests in Shared Memory Controller - A real request from a CPU to the same memory bank as a prior prefetch request is transmitted to the per-memory bank logic along with a kill signal to terminate the prefetch request. This avoids waiting for a prefetch request to complete before sending the real request to the same memory bank. The kill signal gates off any acknowledgement of completion of the prefetch request. This invention reduces the latency for completion of a high priority real request when a low priority speculative request to a different address in the same memory bank has already been dispatched. | 2009-10-01 |
20090248992 | Upgrade of Low Priority Prefetch Requests to High Priority Real Requests in Shared Memory Controller - A prefetch controller implements an upgrade when a real read access request hits the same memory bank and memory address as a previous prefetch request. In response per-memory bank logic promotes the priority of the prefetch request to that of a read request. If the prefetch request is still waiting to win arbitration, this upgrade in priority increases the likelihood of gaining access generally reducing the latency. If the prefetch request had already gained access through arbitration, the upgrade has no effect. This thus generally reduces the latency in completion of a high priority real request when a low priority speculative prefetch was made to the same address. | 2009-10-01 |
20090248993 | MULTIPORT MEMORY AND INFORMATION PROCESSING SYSTEM - In an information processing system, a plurality of information processing devices CHIP | 2009-10-01 |
20090248994 | MEMORY RANK BURST SCHEDULING - A method, device, and system are disclosed. In one embodiment the method includes grouping multiple memory requests into multiple memory rank queues. Each rank queue contains the memory requests that target addresses within the corresponding memory rank. The method also schedules a minimum burst number of memory requests within one of the memory rank queues to be serviced when the burst number has been reached in that memory rank queue. Finally, if a memory request exceeds an aging threshold, then that memory request will be serviced. | 2009-10-01 |
20090248995 | ALLOCATION CONTROL APPARATUS AND METHOD THEREOF - An allocation control apparatus may access an address table storing addresses of slice areas allocated in a storage area for an entire storage system having a plurality of storage devices and addresses that do not correspond to allocated slice areas. The allocation control apparatus includes a reception unit receiving a request for allocating an arbitrary storage capacity; an allocation unit allocating, by referring to the address table, an address that does not correspond to an allocated slice area for at least a part of the requested storage capacity and allocating an address for the slice area to the remaining storage capacity when the reception unit receives the allocation request; and a transmission unit transmitting the result allocated by the allocation unit to a requesting source of the allocation request. | 2009-10-01 |
20090248996 | APPARATUS AND METHODS FOR WIDGET-RELATED MEMORY MANAGEMENT - Apparatus and methods for changing operational modes of a widget and changing content feed to a widget based on operational mode changes and/or memory availability on the wireless device are provided. Apparatus and methods for managing the runtime memory usage of mobile widgets on a wireless device by changing widget states based on widget usage data are also provided. | 2009-10-01 |
20090248997 | De-Interleaving using minimal memory - A de-interleaver for receiving data blocks including data units in an interleaved order, each data unit having a de-interleaved location within the data block, placing the data units in a memory buffer, and outputting the data units in a de-interleaved order from the memory buffer, the de-interleaver including an output unit configured to output a data unit from a location in the memory buffer of a next data unit in de-interleaved order, thereby to provide the data block in de-interleaved order, and an input unit configured with the output unit to input an incoming data unit, the incoming data unit being in the interleaved order, into the location in the memory buffer vacated by the next, in de-interleaved order, data unit being output. Related apparatus and methods are also described. | 2009-10-01 |
20090248998 | STORAGE SYSTEM, CONTROL UNIT, IMAGE FORMING APPARATUS, IMAGE FORMING METHOD, AND COMPUTER READABLE MEDIUM - A storage system includes: N pieces of storage that store electronic information, N being an integer of two or more; and a controller that obtains electronic information to be written. In a case where the electronic information to be written is a first kind of electronic information, the controller divides the electronic information to be written into N pieces and independently writes the divided electronic information into each of the N pieces of storage. In a case where the electronic information to be written is a second kind of electronic information, the controller redundantly writes the electronic information to be written into each of the N pieces of storage. | 2009-10-01 |
20090248999 | MEMORY CONTROL APPARATUS, MEMORY CONTROL METHOD AND INFORMATION PROCESSING SYSTEM - A memory control apparatus, a memory control method and an information processing system are disclosed. Fetch response data retrieved from a main storage unit is received, while bypassing a storage unit, by a first port in which the received fetch response data can be set. The fetch response data retrieved from the main storage unit, if unable to be set in the first port, is set in a second port through the storage unit. A transmission control unit performs priority control operation to send out, in accordance with a predetermined priority, the fetch response data set in the first port or the second port to the processor. As a result, the latency is shortened from the time when the fetch response data arrives to the time when the fetch response data is sent out toward the processor in response to a fetch request from the processor. | 2009-10-01 |
20090249000 | Method and system for error correction of a storage media - A data file on a storage media is processed during playback or execution to identify unreadable data. Replacement data corresponding to the unreadable data is obtained over a communications network, and the replacement data is used to play back or execute the data file as if the data file did not contain any unreadable data. | 2009-10-01 |
20090249001 | Storage Systems Using Write Off-Loading - Improved storage systems which use write off-loading are described. When a request to store some data in a particular storage location is received, if the particular storage location is unavailable, the data is stored in an alternative location. In an embodiment, the particular storage location may be unavailable because it is powered down or because it is overloaded. The data stored in the alternative location may be subsequently recovered and written to the particular storage location once it becomes available. | 2009-10-01 |
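The off-loading behavior described above reduces to a simple pattern: writes to an unavailable store land in an alternative log and are replayed once the target recovers. Below is a hedged Python sketch of that pattern; the class, field, and method names are all illustrative, not from the patent.

```python
class OffloadingStore:
    """Illustrative write off-loading: divert writes from unavailable stores."""

    def __init__(self, stores):
        self.stores = stores          # {name: {"up": bool, "data": {}}}
        self.offload_log = []         # (intended_target, key, value) triples

    def write(self, target, key, value):
        if self.stores[target]["up"]:
            self.stores[target]["data"][key] = value
        else:
            # Target is powered down or overloaded: off-load the write.
            self.offload_log.append((target, key, value))

    def reclaim(self, target):
        """Replay off-loaded writes once `target` is available again."""
        remaining = []
        for tgt, key, value in self.offload_log:
            if tgt == target and self.stores[target]["up"]:
                self.stores[target]["data"][key] = value
            else:
                remaining.append((tgt, key, value))
        self.offload_log = remaining
```

The key design point is that the off-load log preserves the intended target, so recovery is a straightforward replay rather than a full resynchronization.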
20090249002 | INFORMATION COLLECTION APPARATUS, METHOD, AND PROGRAM - An information collection apparatus which collects, from a plurality of devices each having a plurality of states including a power-supply state indicating ON or OFF of a power-supply, state information indicating a state of each device, the apparatus (a) stores, in a first memory, the power-supply state of each device, (b) receives state information transmitted from each device whose state is changed, (c) rewrites, when a power-supply state information indicating the power-supply state is received, the power-supply state stored in the first memory in accordance with the obtained power-supply state information, (d) collects periodically the state information from each device whose power-supply state stored in the first memory is ON by issuing, at regular intervals, a first request for the state information to the device, and (e) transmits the state information collected periodically to an external apparatus. | 2009-10-01 |
20090249003 | METHOD AND SYSTEM FOR MULTIPLEXING CONCATENATED STORAGE DISK ARRAYS TO FORM A RULES-BASED ARRAY OF DISKS - A method, system and computer-readable medium are disclosed for efficiently multiplexing concatenated storage devices. An intelligent storage controller continuously monitors data access of a number of concatenated storage devices. In response to a request to write new data, the controller writes a primary data copy to the concatenated storage device having the lowest data access. Then the controller writes a secondary data copy to the device having the next lowest data access. In response to a read request, the controller reads data from the data copy located on the concatenated storage device having the lower data access. In response to an update request, after determining that data access does not exceed a predetermined threshold, the controller updates the data copy having the lowest data access, sets that copy as the new primary copy, and subsequently updates the other copy, setting that copy as the new secondary copy. | 2009-10-01 |
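The placement rule in the abstract above, writing the primary copy to the least-accessed device and the secondary to the next least-accessed, can be sketched in a few lines of Python. This is a minimal illustration under the assumption that the monitored load is a simple per-device access count; the names are hypothetical.

```python
def place_copies(access_counts):
    """Pick (primary, secondary) devices by lowest monitored access.

    access_counts: {device_id: current access load}, at least two devices.
    """
    ranked = sorted(access_counts, key=access_counts.get)
    return ranked[0], ranked[1]
```

A real controller would also re-rank on each update and swap the primary/secondary roles as loads shift, as the abstract describes.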
20090249004 | DATA CACHING FOR DISTRIBUTED EXECUTION COMPUTING - Embodiments for caching and accessing Directed Acyclic Graph (DAG) data to and from a computing device of a DAG distributed execution engine during the processing of an iterative algorithm. In accordance with one embodiment, a method includes processing a first subgraph of the plurality of subgraphs from the distributed storage system in the computing device, the first subgraph being processed with associated input values in the computing device to generate first output values in an iteration. The method further includes storing a second subgraph, a duplicate of the first subgraph, in a cache of the device. Moreover, the method also includes processing the second subgraph with the first output values to generate second output values if the device is to process the first subgraph in each of one or more subsequent iterations. | 2009-10-01 |
20090249005 | SYSTEM AND METHOD FOR PROVIDING A BACKUP/RESTORE INTERFACE FOR THIRD PARTY HSM CLIENTS - A method for performing a backup of a stub object located on a file system managed by a hierarchical storage manager configured to migrate data objects from the file system to a migration storage pool is provided. The stub object includes information for recalling a migrated data object. The method comprises determining whether a backup copy of the migrated data object is stored in a backup storage pool if the backup is performed in an incremental backup operation; directing the hierarchical storage manager to recall the migrated data object to the file system if the backup copy of the migrated data object is not stored in the backup storage pool or if the backup is performed in a selective backup operation; creating the backup copy of the migrated data object if the migrated data object is recalled; storing the backup copy of the migrated data object in the backup storage pool if the migrated data object is recalled; creating a backup copy of the stub object if the migrated data object is not recalled; storing the backup copy of the stub object from the file system in the backup storage pool if the migrated data object is not recalled; and logically grouping the backup copy of the migrated data object with the backup copy of the stub object in the backup storage pool such that the backup copy of the migrated data object cannot be deleted from the backup storage pool unless the backup copy of the stub object does not exist in the backup storage pool if the migrated data object is not recalled. | 2009-10-01 |
20090249006 | System and Method for Setting an Activation State for a Device Used in a Backup Operation - Various embodiments of a system and method for performing a backup operation are disclosed. Backup operation information may be stored, where the backup operation information specifies a backup operation to be performed using at least a first device. Subsequent to storing the backup operation information, state information for the first device may be stored, where the state information indicates whether the first device is eligible for use in backup operations. Before the backup operation is performed, the state information for the first device may be accessed. If the state information for the first device indicates that the first device is eligible for use in backup operations then the backup operation may be performed using the first device (as well as possibly other devices). If the state information for the first device indicates that the first device is ineligible for use in backup operations then the backup operation may be prevented from using the first device. | 2009-10-01 |
20090249007 | METHOD AND SYSTEM FOR ACCESSING DATA USING AN ASYMMETRIC CACHE DEVICE - A system configured to receive a first request for a first datum, query the cache metadata to determine whether the first datum is present in the main memory or the asymmetric cache device (ACD), retrieve the first datum from the main memory when the first datum is present in the main memory, retrieve the first datum from the ACD when the first datum is present in the ACD and not present in the main memory, store a copy of the first datum in the main memory when the first datum is present in the ACD and not present in the main memory, update the cache metadata to indicate that the copy of the first datum is stored in the main memory when the first datum is present in the ACD and not present in the main memory, and retrieve the first datum from the disk when the first datum is not present in the ACD and is not present in the main memory. | 2009-10-01 |
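The lookup order in the abstract above (main memory, then the asymmetric cache device with promotion into main memory, then disk) can be expressed as a short sketch. This is an illustrative simplification in which `metadata` records a single location per datum; all names are assumptions, not the patent's terminology.

```python
def read_datum(key, main_memory, acd, disk, metadata):
    """Fetch a datum, checking main memory, then the ACD, then disk."""
    if metadata.get(key) == "main":
        return main_memory[key]
    if metadata.get(key) == "acd":
        datum = acd[key]
        main_memory[key] = datum    # promote a copy into main memory
        metadata[key] = "main"      # update cache metadata with the new location
        return datum
    return disk[key]                # not cached anywhere: fall back to disk
```

The promotion step is what makes the second read of the same datum cheap: subsequent requests hit main memory directly without consulting the ACD.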
20090249008 | Disk array device - In control of the disk array device (backup system), when a blackout occurs, the disk array device first operates in a first method, backing up a main memory by using a power supply from a battery. During the first method, the blackout continuous time and the like are integrated, and at the timing at which the integrated value satisfies a condition, operation shifts from the first method to a second method, in which data is evacuated from the main memory onto a nonvolatile memory by using the battery power supply. | 2009-10-01 |
20090249009 | METHOD FOR COPYING DATA FROM AN EXTERNAL STORAGE DEVICE TO A COMPUTER, AND COMPUTER CAPABLE OF PERFORMING THE METHOD - In a method for copying data from an external storage device to a computer, the computer is provided with a basic input/output system (BIOS) program used for performing the method. The method includes the steps of: (a) after the computer is powered on, initializing the external storage device in response to a hot key trigger; and (b) storing the data from the external storage device to the computer. Since data is copied from the external storage device to the computer immediately after powering on the computer, efficiency is enhanced. | 2009-10-01 |
20090249010 | APPARATUS AND METHOD FOR CONTROLLING COPYING - Pre-update data is copied from a first storage device onto a second storage device in response to an update instruction to update data on the backup target volume on the first storage device. A copy status of each data on the backup target volume is managed with position information of the data mapped thereto. If bad data is present in the data on the backup target volume, the position information indicating the position of the bad data is searched for. In accordance with the copy status managed with the position information mapped thereto, it is determined whether the pre-update data of the bad data is stored on the second storage device. | 2009-10-01 |
20090249011 | DELIVERY DATA BACKUP APPARATUS, DELIVERY DATA BACKUP METHOD AND DELIVERY DATA BACKUP PROGRAM - A delivery data backup apparatus, a delivery data backup method, and a delivery data backup program are provided. The apparatus includes a delivery data receiving part receiving, from a data delivery server, delivery data transmitted from the data delivery server to a terminal device in response to a download request for the delivery data issued by the terminal device. The apparatus includes a temporary storage part temporarily storing the delivery data received by the delivery data receiving part and a delivery data backup storage part storing, for backup, the received delivery data. The apparatus includes a backup process part moving the delivery data having been temporarily stored in the temporary storage part from the temporary storage part to the delivery data backup storage part in accordance with a backup instruction for the delivery data issued by the terminal device. | 2009-10-01 |
20090249012 | SYSTEM MANAGING A PLURALITY OF VIRTUAL VOLUMES AND A VIRTUAL VOLUME MANAGEMENT METHOD FOR THE SYSTEM - This invention provides a control technique of a data processing system, in which functions of a highly-functional high-performance storage system are achieved in an inexpensive storage system so as to effectively use the existing system and reduce the cost of the entire system. This system has a RAID system, an external subsystem, a management server, a management client and the like. The management server includes an information management table for storing mapping information of the RAID system and the external subsystem. When performing a copy process, pair creation is executed from the management client by using the information management table: a logical volume of the RAID system is set as the primary volume of the copy source, and a logical volume of a mapping object of the RAID system, mapped from the logical volume of the external subsystem, is set as the secondary volume of the copy destination. | 2009-10-01 |
20090249013 | SYSTEMS AND METHODS FOR MANAGING STALLED STORAGE DEVICES - Embodiments relate to systems and methods for managing stalled storage devices of a storage system. In one embodiment, a method for managing access to storage devices includes determining that a first storage device, which stores a first resource, is stalled and transitioning the first storage device to a stalled state. The method also includes receiving an access request for at least a portion of the first resource while the first storage device is in the stalled state and attempting to provide access to a representation of the portion of the first resource from at least a second storage device that is not in a stalled state. In another embodiment, a method of managing access requests by a thread for a resource stored on a storage device includes initializing a thread access level for an access request by a thread for the resource. The method also includes determining whether the storage device, which has a device access level, is accessible based at least in part on the thread access level and the device access level and selecting a thread operation based at least in part on the determination of whether the storage device is accessible. The thread operation may be selected from attempting the thread access request if the device is accessible and determining whether to restart the thread access request if the device is not accessible. | 2009-10-01 |
20090249014 | SECURE MANAGEMENT OF MEMORY REGIONS IN A MEMORY - Systems and/or methods that facilitate controlling access to memory regions in a memory component(s) are presented. A memory component can comprise an access management component that can facilitate controlling access to memory regions that can be respectively associated with authentication credentials. The access management component can facilitate access of a memory region when received authentication information matches authentication information contained in a security record associated with the memory region. The access management component can facilitate a wipe erase of a memory region(s) to facilitate secure removal of information from the memory region when predetermined criteria are satisfied. The access management component can facilitate locking a memory region when a maximum number of unsuccessful attempts to access a memory region is reached, to facilitate security of the memory regions and/or data associated therewith, where a locked memory region remains locked until a reset is performed. | 2009-10-01 |
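The credential check, wipe erase, and lock-until-reset behavior described above can be sketched as a small state machine. This is a hedged illustration of the access policy only, with a hypothetical class, an assumed attempt limit of 3, and none of the patent's actual interfaces.

```python
class SecureRegion:
    """Illustrative memory region with credential check and lockout."""

    MAX_ATTEMPTS = 3    # assumed limit; the patent leaves this configurable

    def __init__(self, credential, data):
        self._credential = credential
        self._data = data
        self._failures = 0
        self.locked = False

    def read(self, credential):
        if self.locked:
            raise PermissionError("region locked until reset")
        if credential != self._credential:
            self._failures += 1
            if self._failures >= self.MAX_ATTEMPTS:
                self.locked = True      # lock after too many bad attempts
            raise PermissionError("bad credential")
        self._failures = 0              # a good credential resets the counter
        return self._data

    def wipe(self):
        """Wipe erase: overwrite the region's contents with zeros."""
        self._data = b"\x00" * len(self._data)

    def reset(self):
        """Only an explicit reset unlocks a locked region."""
        self.locked = False
        self._failures = 0
```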
20090249015 | OPERATING SYSTEM BASED DRAM / FLASH MANAGEMENT SCHEME - A memory system is provided. The system includes an operating system kernel that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the kernel to regulate read and write access to the one or more FLASH devices. | 2009-10-01 |
20090249016 | APPARATUS AND METHOD TO ESTABLISH A LOGICAL CONFIGURATION FOR A DATA STORAGE LIBRARY - A method to configure a storage library, comprising the steps of establishing a logical configuration for said storage library comprising a plurality of physical objects, by configuring a plurality of logical objects using a plurality of logical configuration commands, and adding that plurality of logical objects to the logical configuration. The method further adds the plurality of logical configuration commands to a Configuration Library, and saves that Configuration Library for later use. | 2009-10-01 |
20090249017 | Systems and Methods for Memory Management for Rasterization - Methods for managing a single memory pool comprising frame buffer memory and display list memory are presented. The single memory pool can comprise sub-pools including: a super-block pool comprising a plurality of super-block objects; a node pool comprising a plurality of node objects; and a block-pool comprising a plurality of blocks. The method may comprise: receiving a memory allocation request directed to at least one of the sub-pools; allocating an object local to the sub-pool identified in the memory request, if local sub-pool objects are available to satisfy the memory request; allocating an object from the super-block pool, if the memory request is directed to the node-pool or block-pool and there are no available local objects in the respective sub-pools to satisfy the memory request; and applying at least one of a plurality of memory freeing strategies, if the sub-pools lack available free objects. | 2009-10-01 |
20090249018 | Storage management method, storage management program, storage management apparatus, and storage management system - A storage management method for allocating a dynamic allocation pool so as to avoid throughput reduction. An operation management server determines the dynamic allocation pool managing allocation of a real volume in a storage device to a virtual volume. The operation management server acquires an I/O characteristic of an application being executed in a business server, records I/O characteristic information indicative of a linkage between the application and the I/O characteristic of the application for each application in an application management table, creates an application group on the basis of the I/O characteristics of the application management table for each application, and links the created application group to the dynamic allocation pool. | 2009-10-01 |
20090249019 | METHOD OF ALLOCATING PHYSICAL MEMORY IN SPECIFIED ADDRESS RANGE UNDER LINUX SYSTEM PLATFORM - A method of allocating physical memory in a specified address range under a Linux system platform is applied in a testing process of physical memory under a Linux operating system. In this method, according to a specified address range and a size of the memory to be allocated, a large amount of physical memory in the system is allocated within the specified address range, and then the information about the allocated memory is transmitted, so as to map, inspect, and release the memory, thereby effectively supporting the testing of physical memory under the Linux operating system. | 2009-10-01 |
20090249020 | TECHNIQUES FOR OPTIMIZING CONFIGURATION PARTITIONING - Techniques for optimizing configuration partitioning are disclosed. In one particular exemplary embodiment, the techniques may be realized as a system for configuration partitioning comprising a module for providing one or more policy managers, a module for providing one or more applications, the one or more applications assigned to one or more application groups, a module for associating related application groups with one or more blocks, and a module for assigning each of the one or more blocks to one of the one or more policy managers, wherein if one or more of the one or more blocks cannot be assigned to a policy manager, breaking the one or more blocks into the one or more application groups and assigning the one or more application groups to one of the one or more policy managers. | 2009-10-01 |
20090249021 | Method And Systems For Invoking An Advice Operation Associated With A Joinpoint - Methods and systems are described for invoking an advice operation associated with a joinpoint. In one embodiment, the method includes identifying, based on a pointcut specification included in an aspect specification, a joinpoint in a machine code program component. The joinpoint includes a machine code instruction. The method further includes identifying, based on an advice specification included in the aspect specification, an advice operation included in a machine code program component. The method still further includes detecting an access to the machine code instruction in the joinpoint for execution by a processor. The method also includes invoking the advice operation in association with detecting the access to the machine code instruction. | 2009-10-01 |
20090249022 | METHOD FOR ACHIEVING SEQUENTIAL I/O PERFORMANCE FROM A RANDOM WORKLOAD - Some embodiments of the present invention provide methods, computer media encoding instructions, and systems for receiving write requests directed to non-sequential logical block addresses and writing the write requests to sequential disk block addresses in a storage system. Some embodiments further include overprovisioning a storage system to include an increment of additional storage space such that it is more likely a large enough sequential block of storage will be available to accommodate incoming write requests. | 2009-10-01 |
20090249023 | APPLYING VARIOUS HASH METHODS USED IN CONJUNCTION WITH A QUERY WITH A GROUP BY CLAUSE - A novel method is described for applying various hash methods used in conjunction with a query with a Group By clause. A plurality of drawers are identified, wherein each of the drawers is made up of a collection of cells from a single partition of a Group By column and each of the drawers being defined for a specific query. A separate hash table is independently computed for each of the drawers and a hashing scheme (picked from among a plurality of hashing schemes) is independently applied for each of the drawers. | 2009-10-01 |
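The per-drawer independence described above, a separate hash table built for each partition of the Group By column, can be illustrated with a short Python sketch. This is a simplified illustration using plain dicts and a SUM aggregate; it shows the independent-table-per-drawer structure rather than the patent's scheme-selection logic.

```python
def group_by_sum(drawers):
    """Aggregate (key, value) cells with one hash table per drawer.

    drawers: list of drawers, each a list of (key, value) cells from one
    partition of the Group By column.
    """
    per_drawer = []
    for drawer in drawers:
        table = {}                       # independent hash table per drawer
        for key, value in drawer:
            table[key] = table.get(key, 0) + value
        per_drawer.append(table)
    merged = {}                          # combine partial aggregates at the end
    for table in per_drawer:
        for key, value in table.items():
            merged[key] = merged.get(key, 0) + value
    return merged
```

Because each drawer's table is built independently, a different hashing scheme (or table size) can be chosen per drawer before the final merge, which is the point of the claimed method.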
20090249024 | Address generation for quadratic permutation polynomial interleaving - For address generation, a block size and a skip value are obtained, and at least one address, at least one increment value, and a step value are initialized. For a count index not in excess of a block size, iteratively performed are: selection of an output address for output from at least one phase responsive to at least the at least one address; first update of the at least one address as being equal to summation of the at least one increment and the at least one address modulo the block size; and second update of the at least one increment as being equal to summation of the at least one increment and the step value modulo the block size. The selection and the first and second updates are iteratively repeated responsive to increments of the count index to output a sequence of addresses. | 2009-10-01 |
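The iterative update described above works because a quadratic permutation polynomial f(i) = (f1·i + f2·i²) mod K has a constant second difference: each address differs from the previous one by an increment, and the increment itself grows by the fixed step 2·f2 (mod K). A minimal Python sketch, with example parameters rather than any standardized interleaver:

```python
def qpp_addresses(K, f1, f2):
    """Generate the QPP address sequence f(i) = (f1*i + f2*i*i) mod K
    using only additions, as in the iterative scheme described above."""
    addr = 0                    # f(0) = 0
    inc = (f1 + f2) % K         # first difference: f(1) - f(0)
    step = (2 * f2) % K         # second difference is constant: 2*f2
    out = []
    for _ in range(K):          # count index up to the block size
        out.append(addr)
        addr = (addr + inc) % K     # first update: address += increment
        inc = (inc + step) % K      # second update: increment += step
    return out
```

Replacing the per-index multiplications with two modular additions is what makes this form attractive for hardware address generators.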
20090249025 | Serial Data Processing Circuit - A serial data processing circuit that realizes the same performance as that of pipeline processing with low power consumption. First to fourth latch units receive, in parallel, data sets supplied to a logic circuit. These latch units sequentially latch the data sets sequentially supplied to the logic circuit and output N data sets in parallel. A selector sequentially selects the data sets supplied from these latch units and supplies the selected data sets to the logic circuit. For example, when the first latch unit latches data (a), the selector selects the data (a) and supplies it to the logic circuit. When the second latch unit latches data (b), the selector selects the data (b) and supplies it to the logic circuit. The logic circuit processes N serial data sets during each cycle. | 2009-10-01 |
20090249026 | Vector instructions to enable efficient synchronization and parallel reduction operations - In one embodiment, a processor may include a vector unit to perform operations on multiple data elements responsive to a single instruction, and a control unit coupled to the vector unit to provide the data elements to the vector unit, where the control unit is to enable an atomic vector operation to be performed on at least some of the data elements responsive to a first vector instruction to be executed under a first mask and a second vector instruction to be executed under a second mask. Other embodiments are described and claimed. | 2009-10-01 |
20090249027 | METHOD AND APPARATUS FOR SCRAMBLING SEQUENCE GENERATION IN A COMMUNICATION SYSTEM - A wireless communications method is provided. The method includes employing a processor executing computer executable instructions stored on a computer readable storage medium to implement various acts. The method also includes generating cyclic shifts for a sequence generator by masking shift register output values with one or more vectors. The method includes forwarding the sequence generator to a future state based in part on the output values and the vectors. | 2009-10-01 |
20090249028 | PROCESSOR WITH INTERNAL RASTER OF EXECUTION UNITS - The present invention relates to a processor that, as its main feature, has an internal raster of ALUs, with the help of which sequential programs are executed. The connections between the ALUs are automatically created at runtime dynamically by means of multiplexers. A central decoding and configuration unit that creates configuration data for the ALU grid from a stream of conventional assembler commands at runtime is responsible for creating the connections. In addition to the ALU grid, a special unit for the execution of memory accesses and another unit for the processing of branch instructions are provided. The novel architecture that is the foundation of the processor makes efficient execution of both control flow- and data flow-oriented tasks possible. | 2009-10-01 |
20090249029 | METHOD FOR AD-HOC PARALLEL PROCESSING IN A DISTRIBUTED ENVIRONMENT - An overall processing time to rasterize, at the first device, the electronic document to be rendered is computed. Also, a rendering time to render, at the first device, the electronic document to be rendered is computed. When the overall processing time to rasterize at the first device is greater than the rendering time to render at the first device, the electronic document to be rendered is parsed into a first document and sub-documents. A productivity capacity of each node is determined, the productivity capacity being a measure of the processing power of the node and the communication cost of exchanging information between the first device and the node. A sub-document is rasterized at a node when a productivity capacity of the node reduces the processing time to rasterize the electronic document to be rendered to be less than the computed overall processing time. The rasterized first document and each rasterized sub-document are aggregated to create a rasterized electronic document to be rendered at the first device. | 2009-10-01 |
20090249030 | Multiprocessor System Having Direct Transfer Function for Program Status Information in Multilink Architecture - A multiprocessor system can directly transmit storage-state information in a multilink architecture. The multiprocessor system includes a first processor; a multiport semiconductor memory device coupled to the first processor; a nonvolatile semiconductor memory device; and a second processor coupled with the multiport semiconductor memory device and the nonvolatile semiconductor memory device in a multilink architecture, storing data, having been written in a shared memory area of the multiport semiconductor memory device by the first processor, in the nonvolatile semiconductor memory device, and directly transmitting storage-state information on whether the storing of the data in the nonvolatile semiconductor memory device has been completed, in response to a request of the first processor, without passing it through the multiport semiconductor memory device. Accordingly, a processor indirectly coupled to a nonvolatile memory can directly check a program completion state for write data, thus enhancing the data storage performance of the system. | 2009-10-01 |
20090249031 | INFORMATION PROCESSING APPARATUS AND ERROR PROCESSING - An information processing apparatus includes a first processing unit, a second processing unit, and a common storage unit that is commonly accessed by the first processing unit and the second processing unit. The first processing unit writes a request in the common storage unit for requesting the second processing unit to perform a certain process, and notifies the second processing unit of the request. The second processing unit writes a notification in the common storage unit indicating the process is completed in response to the request. | 2009-10-01 |
20090249032 | INFORMATION APPARATUS - An information apparatus comprises: a barrel shifter composed of a bidirectional 1-bit shifter, . . . , and a bidirectional 24-bit shifter which are connected in series; a control unit for outputting an endian conversion control signal SE indicating one of a shift operation and endian conversion; an endian conversion unit for generating data by endian conversion using data obtained by performing a shift operation in the bidirectional 8-bit shifter and the bidirectional 24-bit shifter; and a selector for selecting, when the endian conversion control signal SE indicates a shift operation, data outputted from the bidirectional 24-bit shifter, and selecting, when the endian conversion control signal SE indicates endian conversion, the data outputted from the endian conversion unit. | 2009-10-01 |
20090249033 | Data processing apparatus and method for handling instructions to be executed by processing circuitry - A data processing apparatus and method are provided for handling instructions to be executed by processing circuitry. The processing circuitry has a plurality of processor states, each processor state having a different instruction set associated therewith. Pre-decoding circuitry receives the instructions fetched from the memory and performs a pre-decoding operation to generate corresponding pre-decoded instructions, with those pre-decoded instructions then being stored in a cache for access by the processing circuitry. The pre-decoding circuitry performs the pre-decoding operation assuming a speculative processor state, and the cache is arranged to store an indication of the speculative processor state in association with the pre-decoded instructions. The processing circuitry is then arranged only to execute an instruction in the sequence using the corresponding pre-decoded instruction from the cache if a current processor state of the processing circuitry matches the indication of the speculative processor state stored in the cache for that instruction. This provides a simple and effective mechanism for detecting instructions that have been corrupted by the pre-decoding operation due to an incorrect assumption of processor state. | 2009-10-01 |
20090249034 | PROCESSOR AND SIGNATURE GENERATION METHOD, AND MULTIPLE SYSTEM AND MULTIPLE EXECUTION VERIFICATION METHOD - A processor performs instruction execution regardless of a program order. An execution unit executes an instruction, and transmits end information of the instruction whose execution has ended. A retire unit receives the end information, rearranges a result of the instruction whose execution has ended in a program order to determine the instruction execution, and transmits completed instruction information which reports that the instruction execution has been determined. A signature generation unit receives the completed instruction information from the retire unit, and generates a signature using the completed instruction information. | 2009-10-01 |
20090249035 | MULTI-CYCLE REGISTER FILE BYPASS - A method of reducing latency in instruction processing in a system, includes calculating a result of a first execution unit, storing the result of the first execution unit in a register file, forwarding the result of the first execution unit, through the bypass unit, to a second execution unit, the second execution unit conducting an instruction dependent on the result, forwarding the result of the first execution unit, from the bypass unit, to a third execution unit, without accessing the register file, the third execution unit conducting an instruction dependent on the result, wherein the execution units can extract the result of the first execution unit through the bypass unit until the new result is calculated, wherein after the new result is calculated, the execution units can access the result of the first execution unit through the register file. | 2009-10-01 |
20090249036 | EFFICIENT METHOD AND APPARATUS FOR EMPLOYING A MICRO-OP CACHE IN A PROCESSOR - Methods and apparatus for using micro-op caches in processors are disclosed. A tag match for an instruction pointer retrieves a set of micro-op cache line access tuples having matching tags. The set is stored in a match queue. Line access tuples from the match queue are used to access cache lines in a micro-op cache data array to supply a micro-op queue. On a micro-op cache miss, a macroinstruction translation engine (MITE) decodes macroinstructions to supply the micro-op queue. Instruction pointers are stored in a miss queue for fetching macroinstructions from the MITE. The MITE may be disabled to conserve power when the miss queue is empty; likewise for the micro-op cache data array when the match queue is empty. Synchronization flags in the last micro-op from the micro-op cache on a subsequent micro-op cache miss indicate where micro-ops from the MITE merge with micro-ops from the micro-op cache. | 2009-10-01 |
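The match-queue/miss-queue split described above can be sketched as two work queues feeding one micro-op queue. This is a loose Python model (queue and function names are assumptions); the comment notes where the power-gating opportunity arises.

```python
from collections import deque

class MicroOpCache:
    """Micro-op cache front end with a match queue (hits) and miss queue (MITE)."""

    def __init__(self, lines):
        self.lines = lines                 # tag -> list of cached micro-ops
        self.match_queue = deque()
        self.miss_queue = deque()
        self.micro_op_queue = []

    def fetch(self, ip):
        if ip in self.lines:
            self.match_queue.append(ip)    # hit: queue the line access tuple
        else:
            self.miss_queue.append(ip)     # miss: instruction pointer to the MITE

    def drain(self, mite_decode):
        while self.match_queue:
            self.micro_op_queue += self.lines[self.match_queue.popleft()]
        while self.miss_queue:
            self.micro_op_queue += mite_decode(self.miss_queue.popleft())
        # Power gating: the MITE can sleep whenever miss_queue is empty,
        # and the data array whenever match_queue is empty.

cache = MicroOpCache({0x10: ["uop_a", "uop_b"]})
cache.fetch(0x10)                          # micro-op cache hit
cache.fetch(0x20)                          # miss, decoded by the MITE
cache.drain(lambda ip: [f"uop_mite_{ip:#x}"])
```

The real design merges the two streams with synchronization flags rather than draining them sequentially, but the queue structure is the same.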
20090249037 | Pipeline processors - A method and apparatus are provided for executing instructions from a plurality of instruction threads on a multi-threaded processor. The instruction threads may each include instructions of different complexity. A plurality of pipelines for executing instructions are provided and an instruction scheduler determines on each clock cycle the pipelines upon which instructions will be executed. Some of the pipelines are configured to appear to the instruction threads as single pipelines but in fact comprise two pipeline paths, one for executing instructions of lower complexity and the other for instructions of higher complexity. The instruction scheduler determines on which of the two pipeline paths an instruction should execute. | 2009-10-01 |
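A minimal sketch of the scheduler's routing decision: each instruction is classified and sent to a low-complexity or high-complexity path within what the threads see as one pipeline. The opcode classification here is a made-up example, not the patented criterion.

```python
# Hypothetical classification: which opcodes take the low-complexity path.
SIMPLE_OPS = {"add", "sub", "mov"}

def schedule(instructions):
    """Split (thread_id, op) pairs between the two paths of one pipeline."""
    simple_path, complex_path = [], []
    for thread_id, op in instructions:
        if op in SIMPLE_OPS:
            simple_path.append((thread_id, op))     # lower-complexity path
        else:
            complex_path.append((thread_id, op))    # higher-complexity path
    return simple_path, complex_path

simple_path, complex_path = schedule(
    [(0, "add"), (1, "mul"), (0, "mov"), (1, "div")]
)
```

Because the split is internal, threads need no knowledge of which path executed their instruction; only the scheduler's per-cycle decision changes.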
20090249038 | STREAM DATA PROCESSING APPARATUS - In a normal operation state, a connection management section writes data transmitted from a first processing section to a data temporary storage section and reads data to be received by a second processing section from the data temporary storage section. Upon receiving control signals which instruct a change of the subject of processing, the first processing section and the second processing section output a transmitting-end clear request and a receiving-end clear request, respectively. The connection management section reads data from the empty data storage section after a transmitting-end clear request is received and until a receiving-end clear request is received, and writes data to the empty data storage section after a receiving-end clear request is received and until a transmitting-end clear request is received. | 2009-10-01 |
20090249039 | Providing Extended Precision in SIMD Vector Arithmetic Operations - The present invention provides extended precision in SIMD arithmetic operations in a processor having a register file and an accumulator. A first set of data elements and a second set of data elements are loaded into first and second vector registers, respectively. Each data element comprises N bits. Next, an arithmetic instruction is fetched from memory. The arithmetic instruction is decoded. Then, the first vector register and the second vector register are read from the register file. The present invention executes the arithmetic instruction on corresponding data elements in the first and second vector registers. The resulting element of the execution is then written into the accumulator. Then, the resulting element is transformed into an N-bit width element and written into a third register for further operation or storage in memory. The transformation of the resulting element can include, for example, rounding, clamping, and/or shifting the element. | 2009-10-01 |
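The widen-then-transform flow above is easy to demonstrate numerically: N-bit elements are multiplied into a wider accumulator, then clamped back to the N-bit signed range (clamping being one of the transformations the abstract lists). Constants and names are illustrative.

```python
N = 8
LO, HI = -(1 << (N - 1)), (1 << (N - 1)) - 1   # signed 8-bit range: -128..127

def simd_mul_clamp(va, vb):
    """Element-wise multiply with extended-precision accumulation, then clamp."""
    acc = [a * b for a, b in zip(va, vb)]       # wide accumulator: no overflow here
    return [min(max(x, LO), HI) for x in acc]   # transform each element back to N bits

# 100*2 = 200 and -100*2 = -200 would overflow 8 bits; the accumulator holds
# them exactly, and only the final write-back clamps them.
result = simd_mul_clamp([100, -100, 3], [2, 2, 4])
```

Without the accumulator, 200 would wrap to -56 in 8-bit arithmetic; with it, the saturated values 127 and -128 are the closest representable results.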
20090249040 | Embedded Control System - An embedded control system capable of ensuring precision in arithmetic with data in the floating-point format and also avoiding a shortage of the storage area of a memory is provided. | 2009-10-01 |
20090249041 | SYSTEM AND METHOD FOR REDUCING POWER CONSUMPTION IN A DEVICE USING REGISTER FILES - A device and method for reducing the power consumption of an electronic device using a register file with a bypass mechanism. The width of a pulse controlling the word write operation may be extended to twice its length so that the extended portion substantially overlaps a following word read pulse. This extension of the pulse width may enable lowering the Vcc Min value for the electronic device and thus may lower the power consumption of the device. | 2009-10-01 |
20090249042 | GATEWAY APPARATUS, CONTROL INSTRUCTION PROCESSING METHOD, AND PROGRAM - A gateway apparatus includes a translator connected to a first network for one or more controllers to control one or more devices, and one or more aggregators. The translator includes an acquisition unit which acquires load information concerning a load on each of the controllers, a control instruction reception unit which receives a control instruction for a device from a client via the second network, a determination unit which determines whether the instruction is an aggregation target, based on the information, a first transfer unit which transfers the instruction to the aggregator corresponding to the instruction, and a second transfer unit which receives an aggregate control instruction from the aggregator. The aggregator includes a third transfer unit which receives the instruction from the translator, an aggregation unit which aggregates the plurality of instructions into one aggregate control instruction, and a fourth transfer unit which transfers the aggregate control instruction. | 2009-10-01 |
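The translator's routing decision, send directly or hand off for aggregation based on controller load, can be sketched as below. The load threshold and message shapes are hypothetical; the abstract does not specify how "aggregation target" is determined.

```python
LOAD_THRESHOLD = 0.8   # assumed cutoff: above this, instructions are aggregated

def route(instructions, controller_load):
    """Split (controller, command) pairs into direct sends and aggregate batches."""
    direct, batches = [], {}
    for ctrl, cmd in instructions:
        if controller_load.get(ctrl, 0.0) >= LOAD_THRESHOLD:
            batches.setdefault(ctrl, []).append(cmd)   # aggregation target
        else:
            direct.append((ctrl, cmd))                 # forwarded as-is
    # The aggregator folds each batch into one aggregate control instruction.
    aggregates = {ctrl: ("aggregate", cmds) for ctrl, cmds in batches.items()}
    return direct, aggregates

direct, aggregates = route(
    [("c1", "on"), ("c2", "off"), ("c1", "dim")],
    {"c1": 0.9, "c2": 0.2},
)
```

The overloaded controller `c1` receives one aggregate instruction instead of two separate ones, which is the load-reduction mechanism the gateway provides.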
20090249043 | Apparatus, method and computer program for processing instruction - A plurality of instructions that would otherwise be executed in the order of being issued, with no designated waiting time or starting moment, are arranged to be executed after a certain waiting time; such instructions are provided with starting-moment or waiting-time information so that they can be executed in the order designated by that time information. | 2009-10-01 |
20090249044 | Apparatus for and Method for Life-Time Test Coverage for Executable Code - A novel and useful apparatus for and method of associating a dedicated coverage bit to each instruction in a software system. Coverage bits are set every time the software application runs, enabling a more comprehensive and on-going code coverage analysis. The code coverage bit mechanism enables code coverage analysis for all installations of a software application, not just software in development mode or at a specific installation. Code coverage bits are implemented in either the instruction set architecture (ISA) of the central processing unit, the executable file of a software application, a companion file to the executable file or a code coverage table residing in memory of the computer system. | 2009-10-01 |
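One bit per instruction, set on execution and persisted across runs, is a compact structure; a sketch of the memory-resident coverage-table variant follows. This is an illustrative model, one of the four implementation options the abstract lists, with assumed names.

```python
class CoverageBits:
    """One coverage bit per instruction, packed into a byte array."""

    def __init__(self, n_instructions):
        self.bits = bytearray((n_instructions + 7) // 8)

    def mark(self, index):
        self.bits[index // 8] |= 1 << (index % 8)   # set on every execution

    def covered(self, index):
        return bool(self.bits[index // 8] & (1 << (index % 8)))

    def coverage(self, n_instructions):
        return sum(self.covered(i) for i in range(n_instructions))

cov = CoverageBits(16)
for executed in (0, 3, 3, 7):    # instruction indices executed during this run
    cov.mark(executed)
hit_count = cov.coverage(16)     # bits accumulate: re-executing 3 sets it once
```

Because setting a bit is idempotent, the table can be merged across runs and across installations, which is what makes "life-time" coverage feasible.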
20090249045 | APPARATUS AND METHOD FOR CONDENSING TRACE INFORMATION IN A MULTI-PROCESSOR SYSTEM - A computer readable storage medium includes executable instructions to characterize a coherency controller. The executable instructions define ports to receive processor trace information from a set of processors. The processor trace information from each processor includes a processor identity and a condensed coherence indicator. Circuitry produces a trace stream with trace metrics and condensed coherence indicators. | 2009-10-01 |
20090249046 | APPARATUS AND METHOD FOR LOW OVERHEAD CORRELATION OF MULTI-PROCESSOR TRACE INFORMATION - A method of coordinating trace information in a multiprocessor system includes receiving processor trace information from a set of processors. The processor trace information from each processor includes a processor identity and a coherence indicator that demarks selective shared memory transactions. Coherence manager trace information is generated for each of the processors. The coherence manager trace information for each processor includes trace metrics and a coherence indicator. | 2009-10-01 |
20090249047 | METHOD AND SYSTEM FOR RELATIVE MULTIPLE-TARGET BRANCH INSTRUCTION EXECUTION IN A PROCESSOR - A method and system for relative multiple-target branch instruction execution in a processor is provided. One implementation involves receiving an instruction for execution; determining a next instruction to execute based on multiple condition bits or outcomes of a comparison by the current instruction; obtaining a specified instruction offset in the current instruction; and using the offset as the basis for multiple instruction targets based on said outcomes, wherein the number of conditional branches is reduced. | 2009-10-01 |
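Using one encoded offset as the basis for multiple targets can be sketched as scaling the offset by the comparison outcome. The three-way compare and the linear scaling are assumptions; the abstract only says the offset serves as the basis for multiple targets.

```python
def multi_target(pc, offset, a, b):
    """Select the next PC from one offset, scaled by the comparison outcome."""
    if a < b:
        outcome = 0        # fall through
    elif a == b:
        outcome = 1        # first relative target
    else:
        outcome = 2        # second relative target
    return pc + outcome * offset

# One instruction replaces a chain of conditional branches:
next_lt = multi_target(100, 8, 1, 2)   # a < b  -> 100 (fall through)
next_eq = multi_target(100, 8, 2, 2)   # a == b -> 108
next_gt = multi_target(100, 8, 3, 2)   # a > b  -> 116
```

The payoff stated in the abstract is fewer conditional branches: a single instruction resolves a three-way decision that would otherwise require at least two branches.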
20090249048 | BRANCH TARGET BUFFER ADDRESSING IN A DATA PROCESSOR - A data processing system includes a branch target buffer (BTB) including a plurality of entries, each entry comprising a tag portion and a long branch indicator. The system also includes segment target address storage circuitry which stores a plurality of segment target addresses, index storage circuitry which stores a plurality of indices for indexing into the segment target address storage circuitry, and control circuitry which receives an instruction address and determines whether the instruction address matches a valid entry in the BTB. When the instruction address matches a valid entry in the BTB and the long branch indicator of the valid entry indicates a long branch, the index storage circuitry provides a selected index of the plurality of indices selected by the received instruction address. In response to the selected index, the segment target address storage circuitry provides a selected segment target address as a higher order target address portion. | 2009-10-01 |
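The long-branch path described above can be modeled as a two-level lookup: the BTB supplies the low-order target bits and a flag, and on a long branch an index table selects the high-order segment. Field widths and the 16-bit split below are illustrative assumptions.

```python
class BranchTargetBuffer:
    """BTB with a long-branch indicator and a shared segment-target table."""

    def __init__(self):
        self.entries = {}          # instruction address -> (long_branch, low_target)
        self.index_table = {}      # instruction address -> index into segment table
        self.segment_targets = []  # shared higher-order address segments

    def predict(self, address):
        entry = self.entries.get(address)
        if entry is None:
            return None                       # BTB miss: no prediction
        long_branch, low = entry
        if not long_branch:
            return low                        # short branch: low-order bits suffice
        # Long branch: index table selects the higher-order target portion.
        high = self.segment_targets[self.index_table[address]]
        return (high << 16) | low

btb = BranchTargetBuffer()
btb.segment_targets = [0x0012, 0x0034]
btb.entries[0x4000] = (True, 0xBEEF)          # long branch, low 16 bits of target
btb.index_table[0x4000] = 1                   # selects segment 0x0034
target = btb.predict(0x4000)
```

Sharing a small segment table this way lets many BTB entries reach distant targets without each entry storing a full-width address.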
20090249049 | PRECISE BRANCH COUNTING IN VIRTUALIZATION SYSTEMS - A method for precisely counting guest branch instructions in a virtualized computer system is described. In one embodiment, guest instructions execute in a direct execution mode of the virtualized computer system. The direct execution mode operates at a first privilege level having a lower privilege than a second privilege level. A branch count of previously executed first privilege level branch instructions is maintained as instructions execute. Execution of a first privilege level branch instruction caused by a control transfer to the direct execution mode is detected. Responsive to the detection, a guest branch instruction count is determined based on the first privilege level branch count. | 2009-10-01 |
20090249050 | SYSTEM AND METHOD FOR ESTABLISHING A TRUST DOMAIN ON A COMPUTER PLATFORM - Embodiments of the invention provide systems and methods associated with a measurement engine in a server platform. In one such embodiment of the invention, the measurement engine hardware verifies/authenticates its own firmware and then system initialization firmware by measuring such firmware and storing measurement results in a register that is not spoofable by malicious code. In this instance, the measurement engine holds the host CPU complex in a reset state until the measurement engine has verified the system initialization firmware. In another such embodiment of the invention, the measurement engine hardware also measures firmware associated with one or more system service processors and stores such measurement results in a register. In this case, the measurement engine holds the system service processors and the host CPU complex in reset until the measurements are completed. Other embodiments are described. | 2009-10-01 |
20090249051 | SYSTEMS AND METHODS FOR MANAGING USER CONFIGURATION SETTINGS - A computer system may include a virtual configuration settings package that captures a user's configuration settings in a user layer. The user layer may represent the files, registry entries, and the like, that make up the virtualized configuration settings. The configuration settings may be captured by filtering file system requests through a virtualization driver. The file system requests that are associated with the user's configuration settings may be redirected to the user layer. Virtualizing the configuration settings may make them much simpler to manage. The virtual configuration settings package may be selectively activated or deactivated, imported and exported, reset, deleted, and so forth. The user layer may include configuration settings from the operating system, applications, and the like. | 2009-10-01 |