25th week of 2011 patent application highlights part 80
Patent application number | Title | Published |
20110153886 | DEVICE THAT USES PARAMETERS TO PROVIDE MULTI-CHANNEL SERIAL DATA TRANSMISSIONS AND METHOD THEREOF - A device that uses parameters to provide multi-channel serial data transmissions, and a method thereof, are provided. A first serial device is configured with channel parameters corresponding to a procedure executed on a second serial device. The first serial device determines whether the data channel corresponding to the procedure that needs to send or receive data currently occupies the physical line. If so, the first serial device transmits data to the second serial device. Otherwise, the first serial device transmits a channel-switch request to the second serial device. After both the first serial device and the second serial device have switched the data channel corresponding to the procedure onto the physical line, the first serial device and the procedure can exchange data. The first serial device can therefore establish a new connection without closing the original connection, achieving the goal of using a single serial port to carry data channels of different operation modes or connections. | 2011-06-23 |
20110153887 | METHOD AND SYSTEM FOR DYNAMICALLY PROGRAMMABLE SERIAL/PARALLEL BUS INTERFACE - Aspects of a method and system for a dynamically programmable serial/parallel bus interface may include, in a first communication device coupled to a communication bus, attaching communication protocol information to a data signal for each data transaction with one or more other communication devices communicatively coupled to the communication bus. The one or more other communication devices may be controlled utilizing the attached communication protocol information. The communication protocol information may be dynamically adjusted and/or adaptively adjusted. The communication bus may be a serial or parallel communication bus. The serial communication bus may be a two-wire, three-wire, or four-wire bus. The attached communication protocol information comprises a multi-wire protocol, a 3-wire protocol, a Serial Peripheral Interface (SPI) protocol, a System Power Management Interface (SPMI) protocol, or an RF Bus protocol. | 2011-06-23 |
20110153888 | CASCADE-ABLE SERIAL BUS DEVICE WITH CLOCK AND MANAGEMENT AND CASCADE METHODS USING THE SAME - A cascade-able serial bus device for coupling between a host device and a second serial bus device is disclosed. The host device includes a serial bus interface. The serial bus device includes a first connection interface, a second connection interface, and a bypassing module. The first connection interface is coupled to the serial bus interface of the host device. The second connection interface is coupled to the second serial bus device. The bypassing module is coupled to a chip select (CS) signal line of the serial bus interface and to the second connection interface, for selectively bypassing or not bypassing the CS signal to the second serial bus device. | 2011-06-23 |
20110153889 | COUPLING DEVICES, SYSTEM COMPRISING A COUPLING DEVICE AND METHOD FOR USE IN A SYSTEM COMPRISING A COUPLING DEVICE - The invention relates to coupling devices, a system comprising a coupling device and a method for use in a system comprising a coupling device. | 2011-06-23 |
20110153890 | Methods and Apparatus for Providing Data Transfer Control - A variety of advantageous mechanisms for improved data transfer control within a data processing system are described. A DMA controller is described which is implemented as a multiprocessing transfer engine supporting multiple transfer controllers which may work independently or in cooperation to carry out data transfers, with each transfer controller acting as an autonomous processor, fetching and dispatching DMA instructions to multiple execution units. In particular, mechanisms for initiating and controlling the sequence of data transfers are provided, as are processes for autonomously fetching DMA instructions which are decoded sequentially but executed in parallel. Dual transfer execution units within each transfer controller, together with independent transfer counters, are employed to allow decoupling of source and destination address generation and to allow multiple transfer instructions in one transfer execution unit to operate in parallel with a single transfer instruction in the other transfer unit. Improved flow control of data between a source and destination is provided through the use of special semaphore operations, signals and message synchronization which may be invoked explicitly using SIGNAL and WAIT type instructions or implicitly through the use of special “event-action” registers. Transfer controllers are also described which can cooperate to perform “DMA-to-DMA” transfers. Message-level synchronization can be used by transfer controllers to synchronize with each other. | 2011-06-23 |
20110153891 | COMMUNICATION APPARATUS AND COMMUNICATION CONTROL METHOD - A communication apparatus including a master device ( | 2011-06-23 |
20110153892 | ACCESS ARBITRATION APPARATUS, INTEGRATED CIRCUIT DEVICE, ELECTRONIC APPARATUS, ACCESS ARBITRATION METHOD, AND PROGRAM - An access arbitration apparatus includes: a group setting information storage section; and an access control section. The group setting information storage section stores group setting information that specifies which of the following groups each of a plurality of masters belongs to: a first group, or a second group whose priority is lower than that of the first group. The access control section identifies an access request source based on an access request signal from each of the plurality of masters, and repeatedly performs a first group process and a second group process in an alternate manner, the first group process being a process of granting access rights valid for a predetermined time to the entire first access request source set, the second group process being a process of granting access rights valid for a predetermined time to part of the second access request source set. | 2011-06-23 |
20110153893 | Source Core Interrupt Steering - An embodiment of the invention includes (i) receiving a core identifier that corresponds with a processor source core; (ii) receiving an input/output request, produced from the source core, that is associated with the core identifier; and (iii) directing an interrupt, which corresponds to the request, to the source core based on the core identifier. Other embodiments are described herein. | 2011-06-23 |
20110153894 | INTERRUPT-HANDLING-MODE DETERMINING METHOD OF EMBEDDED OPERATING SYSTEM KERNEL - Provided is a method capable of providing an improved response property appropriate to the characteristics of a system by automatically choosing the interrupt handling mode used for each device. In the method, the embedded operating system kernel determines a handling mode for all individual interrupts. The method includes: dividing interrupt handling modes into a first interrupt handling mode and a second interrupt handling mode, which has a different process speed from the first interrupt handling mode, and variably determining the distribution ratio in which each of the interrupts is distributed to the first interrupt handling mode or to the second interrupt handling mode according to a predetermined process condition during boot-up. | 2011-06-23 |
20110153895 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM AND COMPUTER READABLE MEDIUM - An image processing apparatus includes an attachment unit, an image processing unit, a storage unit, a discrimination unit and a control unit. A memory unit storing at least one piece of information is detachably attached to the attachment unit. The image processing unit performs image processing. The storage unit stores the information stored in the memory unit through the attachment unit or stores a result of processing executed by the image processing unit. The discrimination unit discriminates processing information inputted externally for instructing the image processing unit to execute processing. The control unit controls a process of transferring the information stored in the memory unit or the result of processing executed by the image processing unit and stored in the storage unit in accordance with a result of discrimination executed by the discrimination unit. | 2011-06-23 |
20110153896 | SEMICONDUCTOR MEMORY CARD, METHOD FOR CONTROLLING THE SAME, AND SEMICONDUCTOR MEMORY SYSTEM - A semiconductor memory card which can be attached to and removed from a host apparatus includes a plurality of data transfer terminals, and an internal circuit transmitting a first signal to at least one first data transfer terminal comprising at least one of the data transfer terminals and transmitting a second signal to at least one second data transfer terminal comprising at least one of the data transfer terminals different from the at least one first data transfer terminal. The second signal is generated by executing a logical operation on the first signal. | 2011-06-23 |
20110153897 | SEMICONDUCTOR DEVICE AND DATA PROCESSING SYSTEM - The present invention provides a semiconductor device whose parallel interface can be switched correctly between endians from the outside even if the endianness of the parallel interface is not recognized externally. The semiconductor device includes a switching circuit and a first register. The switching circuit switches whether a parallel interface with the outside is to be used as big endian or little endian. The first register holds control data of the switching circuit. The switching circuit regards the parallel interface as little endian when first predetermined control information, which is unchanged in the values of specific bit positions even if its high-order and low-order bit positions are transposed, is supplied to the first register, and regards the parallel interface as big endian when second predetermined control information, likewise unchanged in the values of specific bit positions even if its high-order and low-order bit positions are transposed, is supplied to the first register. The control information can therefore be inputted correctly regardless of the endian setting status. | 2011-06-23 |
20110153898 | VEHICLES INCLUDING BUS-COUPLED HUB UNIT AND POWERTRAIN ELECTRONIC CONTROL UNIT AND METHOD - A vehicle includes a body structure, a powertrain, a plurality of sensors, a hub unit, a plurality of leads, a powertrain electronic control unit, and a bus. The powertrain is supported by the body structure and includes an engine and a transmission. Each of the sensors is operable for detecting a respective parameter of the powertrain and for generating a corresponding sensor signal. The leads each couple a respective one of the sensors to the hub unit. The bus couples the hub unit to the powertrain electronic control unit. The hub unit is configured to receive the sensor signals from the sensors and, in response to the sensor signals, generate feedback signals and transmit the feedback signals over the bus to the powertrain electronic control unit. A method is also provided. | 2011-06-23 |
20110153899 | Computer Peripheral Expansion Apparatus - Computer peripheral expansion apparatus, methods of operation, and computer program products including blade peripheral expansion units (‘BPEUs’), each BPEU including a peripheral interconnect multiplexer coupled for peripheral interconnect data communications through an upstream peripheral interconnect bus (‘PIB’) segment to a host blade, the upstream PIB segment fanned out by the multiplexer into two or more downstream peripheral interconnect channels, the multiplexer connecting the upstream PIB segment to only one of the downstream channels at a time; and the two or more downstream peripheral interconnect channels, at least one of the downstream channels connected to at least one peripheral interconnect device (‘PID’) in the BPEU, the peripheral interconnect device being a device that communicates with the host blade according to a peripheral interconnect data communications protocol, one of the downstream channels being configured to connect to an upstream PIB segment in another BPEU. | 2011-06-23 |
20110153900 | VARIABLE READ LATENCY ON A SERIAL MEMORY BUS - Systems and/or methods are provided that facilitate employing a variable read latency on a serial memory bus. In an aspect, a memory can utilize an undefined amount of time to obtain data from a memory array and prepare the data for transfer on the serial memory bus. The serial memory bus can be driven to a defined state. When data is ready for transfer, the memory can assert a start bit on the serial memory bus to notify a host prior to initiating the data transfer. | 2011-06-23 |
20110153901 | VIRTUAL USB KEY FOR BLADE SERVER - A system for sharing data contained on a peripheral device amongst a plurality of blade servers is disclosed. The system includes a memory device for storing the data contained on the peripheral device. The memory device is partitioned into memory areas. Each memory area stores one copy of the data. The system also includes a processor coupled to the memory device for assigning one of the memory areas to each blade server. The system also includes a switch controller coupled to the processor and to the plurality of blade servers for establishing communication between the plurality of blade servers and the plurality of assigned memory areas. | 2011-06-23 |
20110153902 | Test Interface Card and Testing Method - A test interface card includes: a first specification bus adapted for coupling between a first specification interface controller of a device under test (DUT) and a signal converting interface card, and for transmitting a first test signal that is outputted by the first specification interface controller to the signal converting interface card for processing; a second specification bus adapted for coupling between the signal converting interface card and a storage module of the DUT, and for transmitting a processed signal that is outputted by the signal converting interface card as a result of processing the first test signal to the storage module; and a third specification bus adapted for forming a closed circuit with a second specification interface controller of the DUT, and for transmitting a second test signal that is outputted by the second specification interface controller back to the second specification interface controller. | 2011-06-23 |
20110153903 | METHOD AND APPARATUS FOR SUPPORTING STORAGE MODULES IN STANDARD MEMORY AND/OR HYBRID MEMORY BUS ARCHITECTURES - A memory/storage module is provided that implements a solid state drive compatible with Serial Advanced Technology Attachment (SATA) or Serial Attached SCSI (SAS) signaling on a double-data-rate compatible socket. A detachable daughter card may be coupled to the memory module for converting a memory bus voltage to a second voltage for memory devices on the memory module. Additionally, a hybrid memory bus on a host system is provided that supports either DDR-compatible memory modules and/or SATA/SAS-compatible memory modules. In one example, the memory/storage module couples to a first bus (DDR3 compatible socket) to obtain voltage and/or other signals, but uses a second bus for data transfers. In another example, the memory module may repurpose/reuse electrical paths that typically carry non-data signals for data traffic to/from the memory/storage module. Such data traffic for the memory/storage module permits concurrent data traffic for other memory modules on the same memory bus. | 2011-06-23 |
20110153904 | Wireless universal serial bus system and driving method thereof - Disclosed is a wireless universal serial bus system that includes a device; a first host communicating with the device according to a wireless universal serial bus protocol; and a second host communicating with the device according to a wireless universal serial bus protocol, wherein when the first host receives a beacon from the second host, the first host provides new host information read out from the beacon to the device. | 2011-06-23 |
20110153905 | METHOD AND APPARATUS FOR I/O PATH SWITCHING - A system for input/output path switching comprises a host; a network switch coupled to the host; and a plurality of storage systems which include a first storage system and a second storage system. For switching an I/O path, from a path between the host and the first storage system via the network switch to another path between the host and the second storage system via the network switch, one of the host or the network switch changes FCID (Fibre Channel Node port identifier) information therein, to migrate a WWPN (World Wide Port Name) from association with the first storage system network interface to association with the second storage system network interface. The FCID information includes address information of storage system network interfaces of the storage systems for connecting to the network switch. | 2011-06-23 |
20110153906 | SWITCH AND NETWORK BRIDGE APPARATUS - A network system that is part of a main system includes: a first PCI express-network bridge with a first control unit and a first PCI express adapter terminating a first PCI express bus; and a second PCI express-network bridge connected to the first PCI express-network bridge through a network. The second PCI express-network bridge includes a second control unit and a second PCI express adapter terminating a second PCI express bus, wherein the first control unit detects a destination of a packet sent from the first PCI express adapter, searches a physical address of the destination from a packet encapsulating table, and encapsulates the packet in a frame so that the frame includes the physical address, and wherein the second control unit removes the encapsulation tagged to the packet, and transfers the packet to the destination through the second PCI express bus by referring to a PCI express configuration register. | 2011-06-23 |
20110153907 | PATH MAINTENANCE MECHANISM - In the computer system including a host computer and a storage system, the storage system includes a physical disk and a disk controller, and provides a storage area of the physical disk as at least one logical unit. The processor obtains, at a first time point and a second time point different from the first time point, a relation between a logical path and a component through which the logical path passes, stores, as logical path connection information, the relations obtained at the first time point and the second time point, refers to the logical path connection information to compare the logical paths existing at the first time point and the logical paths existing at the second time point with each other, and specifies the logical path which does not exist at the second time point among the logical paths existing at the first time point. | 2011-06-23 |
20110153908 | ADAPTIVE ADDRESS MAPPING WITH DYNAMIC RUNTIME MEMORY MAPPING SELECTION - A system monitors and dynamically changes memory mapping in a runtime of a computing system. The computing system has various memory resources, and multiple possible mappings that indicate how data is to be stored in and subsequently accessed from the memory resources. The performance of each memory mapping may be different under different runtime or load conditions of the computing device. A memory controller can monitor runtime performance of the current memory mapping and dynamically change memory mappings at runtime based on monitored or observed performance of the memory mappings. The performance monitoring can be modified for any of a number of different granularities possible within the system, from the byte level to memory channel. | 2011-06-23 |
20110153909 | Efficient Nested Virtualization - In one embodiment of the invention, the exit and/or entry process in a nested virtualized environment is made more efficient. For example, a layer | 2011-06-23 |
20110153910 | Flash Memory-Interface - Flash-type memory access and control is facilitated (e.g., as random-access memory). According to an example embodiment, an interface communicates with and controls a flash memory circuit over a peripheral interface bus. The interface uses a FIFO buffer coupled to receive data from and store data for the flash memory circuit and to provide access to the stored data. An interface controller communicates with the flash memory circuit via the peripheral interface bus to initialize the flash memory circuit and to access data thereto, in response to requests from a processor. In some applications, the flash memory circuit is initialized by sending commands to it. The interface may be placed into a read-only mode in which data in the flash memory is accessed as part of main (computer) processor memory, using the FIFO to buffer data from the flash. | 2011-06-23 |
20110153911 | METHOD AND SYSTEM FOR ACHIEVING DIE PARALLELISM THROUGH BLOCK INTERLEAVING - A method and system for achieving die parallelism through block interleaving includes non-volatile memory having a multiple non-volatile memory dies, where each die has a cache storage area and a main storage area. A controller is configured to receive data and write sequentially addressed data to the cache storage area of a first die. The controller, after writing sequentially addressed data to the cache storage area of the first die equal to a block of the main storage area of the first die, writes additional data to a cache storage area of a next die until sequentially addressed data is written into the cache area of the next die equal to a block of the main storage area. The cache storage area may be copied to the main storage area on the first die while the cache storage area is written to on the next die. | 2011-06-23 |
20110153912 | Maintaining Updates of Multi-Level Non-Volatile Memory in Binary Non-Volatile Memory - A method of operating a memory system is presented. The memory system includes a controller and a non-volatile memory circuit, where the non-volatile memory circuit has a first portion, where data is stored in a binary format, and a second portion, where data is stored in a multi-state format. The controller manages the transfer of data to and from the memory system and the storage of data on the non-volatile memory circuit. The method includes receiving a first set of data and storing this first set of data in a first location in the second portion of the non-volatile memory circuit. The memory system subsequently receives updated data for a first subset of the first data set. The updated data is stored in a second location in the first portion of the non-volatile memory circuit, where the controller maintains a logical correspondence between the second location and the first subset of the first set of data. | 2011-06-23 |
20110153913 | Non-Volatile Memory with Multi-Gear Control Using On-Chip Folding of Data - A memory system and methods of its operation are presented. The memory system includes a controller and a non-volatile memory circuit, where the non-volatile memory circuit has a first section, where data is stored in a binary format, and a second section, where data is stored in a multi-state format. The memory system receives data from the host and performs a binary write operation of the received data to the first section of the non-volatile memory circuit. The memory system subsequently folds portions of the data from the first section of the non-volatile memory to the second section of the non-volatile memory, wherein a folding operation includes reading the portions of the data from the first section and rewriting them into the second section of the non-volatile memory using a multi-state programming operation. The controller determines to operate the memory system according to one of multiple modes. The modes include a first mode, where the binary write operations to the first section of the memory are interleaved with folding operations at a first rate, and a second mode, where folding operations relative to the binary write operations to the first section of the memory are performed at a higher rate than in the first mode. The memory system then operates according to the determined mode. The memory system may also include a third mode, where folding operations are background operations executed when the memory system is not receiving data from the host. | 2011-06-23 |
20110153914 | REPURPOSING NAND READY/BUSY PIN AS COMPLETION INTERRUPT - A system and method of controlling a flash memory device such as a NAND memory device may involve receiving a command to execute an operation. A Ready/Busy contact of the memory device may be pulsed low in response to determining that execution of the operation has completed. | 2011-06-23 |
20110153915 | READ PREAMBLE FOR DATA CAPTURE OPTIMIZATION - Systems and/or methods are provided that facilitate data capture optimization for devices accessing memories via a bus. In an aspect, a memory can output a read preamble prior to pushing data onto a bus. The read preamble can be a known sequence of one or more bits. A host device accessing the memory via the bus can analyze the read preamble and, particularly, timing characteristics of the read preamble. The timing characteristics can be utilized to identify an optimal capture point within a window of data validity. | 2011-06-23 |
20110153916 | HYBRID MEMORY ARCHITECTURES - Methods and apparatuses for providing a hybrid memory module having both volatile and non-volatile memories to replace a DDR channel in a processing system. | 2011-06-23 |
20110153917 | STORAGE APPARATUS AND ITS CONTROL METHOD - Proposed are a storage apparatus and its control method capable of performing power saving operations while covering the shortcomings of a flash memory, such as its short lifespan and the long time required to rewrite data. This storage apparatus manages the storage areas provided by each of multiple nonvolatile memories as a pool, provides a virtual volume to a host computer, dynamically allocates a storage area from the pool to the virtual volume according to a data write request from the host computer for writing data into the virtual volume, and places the data in the allocated storage area. In addition, the storage apparatus concentrates the placement destination of data from the host computer in storage areas provided by certain nonvolatile memories and stops the power supply to the nonvolatile memories that are unused, monitors the data rewrite count and/or access frequency of the storage areas provided by the nonvolatile memories that are active, migrates data to another storage area if the data rewrite count increases, and distributes the data placement destinations if the access frequency becomes excessive. | 2011-06-23 |
20110153918 | DATA WRITING METHOD AND DATA STORAGE DEVICE - The invention provides a data writing method for a flash memory. First, a write command, a write address, and write data are received from a host. When a total number of block pairs in the flash memory is equal to a threshold value, and execution of the write command increases the total number of block pairs, the write data is written to a data buffer block of the flash memory, and the write address is stored in an address storage table. A target block pair comprising a target mother block and a target child block is then selected from the block pairs for integration. The target mother block and the target child block are integrated into an integrated block during receiving intervals of a plurality of subsequent write commands. Finally, the write command is executed according to the write data stored in the data buffer block and the write address stored in the address storage table. | 2011-06-23 |
20110153919 | DEVICE, SYSTEM, AND METHOD FOR REDUCING PROGRAM/READ DISTURB IN FLASH ARRAYS - A method, device and computer readable medium for programming a nonvolatile memory block. The method may include programming information, by a memory controller, to the nonvolatile memory block by performing a sequence of programming phases of descending bit significances. The device may include a nonvolatile memory block; and a memory controller that may be configured to determine a bit significance level of the nonvolatile memory block; program the nonvolatile memory block by performing at least one programming phase; and program the nonvolatile memory block to an erase value that may be higher than the pre-erase value; wherein the erase value and the pre-erase value may be selected based on the bit significance level of the nonvolatile memory block. The method may include packing three single level cell (SLC) nonvolatile memory blocks to one three-bit per cell nonvolatile memory block in order of the three SLC bit significances. | 2011-06-23 |
20110153920 | ELECTRONIC APPARATUS OF RECORDING DATA USING NON-VOLATILE MEMORY - An electronic apparatus for recording data using a non-volatile memory is provided. The electronic apparatus includes a non-volatile memory and a controller. The non-volatile memory stores a plurality of sets of playing information of the electronic apparatus. The controller is coupled to the non-volatile memory for receiving an input data and transforming a data structure of the input data into a bitmapping data structure. The controller includes a bitmapping module that is capable of transforming the input data into data having at least one bit but less than one byte in a bitmapping manner. | 2011-06-23 |
20110153921 | System Embedding Plural Controller Sharing Nonvolatile Memory - An embedded memory card system includes a first CPU, a second CPU, a nonvolatile memory storing data, and a device busy state machine selecting one of the first CPU and the second CPU to access the nonvolatile memory. The nonvolatile memory is accessed by the one of the first CPU and the second CPU selected by the device busy state machine. | 2011-06-23 |
20110153922 | NON-VOLATILE MEMORY DEVICE HAVING ASSIGNABLE NETWORK IDENTIFICATION - Memory devices and methods disclosed such as a memory device having a plurality of memory dies where each die includes a network identification that uniquely identifies the memory die on a bus. Access for each memory die to the bus can be scheduled by a bus controller. | 2011-06-23 |
20110153923 | HIGH SPEED MEMORY SYSTEM - A high speed memory system includes a plurality of memory devices, a plurality of buffers, and a memory controller. The plurality of buffers is respectively coupled to the plurality of memory devices. The memory controller is coupled to the plurality of buffers, for generating a plurality of control signals to the plurality of buffers and sequentially controlling access to the plurality of memory devices in a time-sharing manner according to a clock. | 2011-06-23 |
20110153924 | CORE SNOOP HANDLING DURING PERFORMANCE STATE AND POWER STATE TRANSITIONS IN A DISTRIBUTED CACHING AGENT - A method and apparatus may provide for detecting a performance state transition in a processor core and bouncing a core snoop message on a shared interconnect ring in response to detecting the performance state transition. The core snoop message may be associated with the processor core, wherein a plurality of processor cores may be coupled to the shared interconnect ring via a distributed last level cache controller. | 2011-06-23 |
20110153925 | MEMORY CONTROLLER FUNCTIONALITIES TO SUPPORT DATA SWIZZLING - A memory controller that can determine a swizzling pattern between the memory controller and memory devices. The memory controller generates a swizzling map based on the determined swizzling pattern. The memory controller may internally swizzle data using the swizzling map before writing the data to memory so that the data appears in the correct order at the pins of the memory chip(s). On reads, the controller can internally de-swizzle the data before performing the error correction operations using the swizzling map. | 2011-06-23 |
20110153926 | Controlling Access To A Cache Memory Using Privilege Level Information - In one embodiment, a cache memory includes entries each to store a ring level identifier, which may indicate a privilege level of information stored in the entry. This identifier may be used in performing read accesses to the cache memory. As an example, a logic coupled to the cache memory may filter an access to one or more ways of a selected set of the cache memory based at least in part on a current privilege level of a processor and the ring level identifier of the one or more ways. Other embodiments are described and claimed. | 2011-06-23 |
20110153927 | Storage Control Device, Electronic Device, and Storage Control Method - According to one embodiment, a storage control device includes a controller, a detector, and a refreshing module. The controller writes image data, which is to be output to a display module, to a storage device and outputs the image data from the storage device to the display module. The detector detects a blanking period during which the controller does not write the image data to the storage device and does not output the image data from the storage device to the display module. The refreshing module refreshes the storage device by rewriting the image data to the storage device at a predetermined time interval if the detector detects a blanking period. | 2011-06-23 |
20110153928 | MEMORY UTILIZATION TRACKING - A hardware memory control unit that includes a register block and hardware logic. The register block includes, for a hardware memory segment, an access count register for storing an access count, a low threshold register for storing a low threshold, and a high threshold register for storing a high threshold. The hardware logic includes functionality to increment the access count stored in the access count register for each memory access to the hardware memory segment performed during a predefined duration of time, and, at the end of the predefined duration of time, perform a response action when the access count stored in the access count register is less than the low threshold stored in the low threshold register, and perform a response action when the access count stored in the access count register is greater than the high threshold stored in the high threshold register. A power saving mode of the hardware memory segment is modified based on performing the response action. | 2011-06-23 |
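The register block above reduces to a per-segment counter compared against two thresholds at the end of each window. A small software model, assuming (hypothetically) that the response action toggles a power-saving mode:

```python
# Toy model of the abstract's register block. The choice of response action
# (sleep when cold, wake when hot) is an assumption for illustration.

class SegmentTracker:
    def __init__(self, low, high):
        self.low, self.high = low, high   # low/high threshold registers
        self.access_count = 0             # access count register
        self.power_saving = False

    def record_access(self):
        self.access_count += 1            # one increment per memory access

    def end_of_window(self):
        """At the end of the predefined duration, apply the response action."""
        if self.access_count < self.low:
            self.power_saving = True      # cold segment: enter power saving
        elif self.access_count > self.high:
            self.power_saving = False     # hot segment: leave power saving
        self.access_count = 0             # start a new measurement window

t = SegmentTracker(low=2, high=10)
t.record_access()
t.end_of_window()            # 1 < 2, so power saving turns on
assert t.power_saving
for _ in range(20):
    t.record_access()
t.end_of_window()            # 20 > 10, so power saving turns off
assert not t.power_saving
```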
20110153929 | DISK MEMORY UTILIZATION MANAGEMENT USING AVAILABLE SLOT CLUSTERS - Efficient reclamation of available memory slots in a computer memory storage unit is achieved by identifying clusters of available memory slots resulting from the deletion of a record from the storage unit. A cluster may include one or more contiguous available memory slots. An active cluster is elected by selecting the larger of two clusters, the first being the largest cluster resulting solely from processing of the current record delete request and the second being an active cluster identified in a prior record delete operation. Other clusters are defined as passive clusters. When a record is to be written into the disk memory, available memory slots in the active cluster are used first, followed by unused memory slots and then by available memory slots in passive clusters. | 2011-06-23 |
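The election rule in this abstract is a simple max over two candidates. A sketch, representing a cluster as a hypothetical `(start_slot, length)` pair:

```python
# After a delete, the active cluster is the larger of (a) the largest cluster
# produced by this delete and (b) the previously active cluster; all other
# clusters become passive. The tuple representation is illustrative only.

def elect_active(new_clusters, prev_active):
    candidates = list(new_clusters)
    if prev_active is not None:
        candidates.append(prev_active)
    active = max(candidates, key=lambda c: c[1])   # compare by length
    passive = [c for c in candidates if c is not active]
    return active, passive

# Delete produced clusters at slot 10 (3 slots) and slot 40 (5 slots);
# the previously active cluster was at slot 0 with 4 slots.
active, passive = elect_active([(10, 3), (40, 5)], prev_active=(0, 4))
assert active == (40, 5)
assert (0, 4) in passive and (10, 3) in passive
```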
20110153930 | STORAGE SYSTEM AND PROCESSING EFFICIENCY IMPROVING METHOD OF STORAGE SYSTEM - A storage system | 2011-06-23 |
20110153931 | HYBRID STORAGE SUBSYSTEM WITH MIXED PLACEMENT OF FILE CONTENTS - A storage subsystem combining solid state drive (SSD) and hard disk drive (HDD) technologies provides low access latency and low complexity. Separate free lists are maintained for the SSD and the HDD, and blocks of file system data are stored uniquely on either the SSD or the HDD. When a read access is made to the subsystem, if the data is present on the SSD, the data is returned; but if the block is present on the HDD, it is migrated to the SSD and the block on the HDD is returned to the HDD free list. On a write access, if the block is present in either the SSD or the HDD, the block is overwritten, but if the block is not present in the subsystem, the block is written to the HDD. | 2011-06-23 |
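The placement policy above can be modeled in a few lines. Dictionaries stand in for the two devices and their free lists; everything here is an illustrative sketch, not the patented implementation:

```python
# Blocks live on exactly one device. Reads migrate HDD-resident blocks to the
# SSD (freeing the HDD slot); writes overwrite in place, and new blocks land
# on the HDD.

class HybridStore:
    def __init__(self):
        self.ssd, self.hdd = {}, {}

    def read(self, block):
        if block in self.ssd:
            return self.ssd[block]
        data = self.hdd.pop(block)      # migrate: HDD slot returns to free list
        self.ssd[block] = data
        return data

    def write(self, block, data):
        if block in self.ssd:
            self.ssd[block] = data      # overwrite wherever the block lives
        elif block in self.hdd:
            self.hdd[block] = data
        else:
            self.hdd[block] = data      # blocks new to the subsystem go to HDD

s = HybridStore()
s.write("a", b"v1")                     # new block lands on the HDD
assert "a" in s.hdd
assert s.read("a") == b"v1"             # the read migrates it to the SSD
assert "a" in s.ssd and "a" not in s.hdd
```

The effect is that the SSD gradually accumulates the read working set while cold and freshly written data stays on the cheaper HDD.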
20110153932 | MULTI-COLUMN ADDRESSING MODE MEMORY SYSTEM INCLUDING AN INTEGRATED CIRCUIT MEMORY DEVICE - A memory system includes a master device, such as a graphics controller or processor, and an integrated circuit memory device operable in a dual column addressing mode. The integrated circuit memory device includes an interface and column decoder to access a row of storage cells or a page in a memory bank. During a first mode of operation, a first row of storage cells in a first memory bank is accessible in response to a first column address. During a second mode of operation, a first plurality of storage cells in the first row of storage cells is accessible in response to a second column address during a column cycle time interval. A second plurality of storage cells in the first row of storage cells is accessible in response to a third column address during the column cycle time interval. The first and second pluralities of storage cells are concurrently accessible from the interface. | 2011-06-23 |
20110153933 | DATA STORAGE ON WRITEABLE REMOVABLE MEDIA IN A COMPUTING DEVICE - On a computing device making use of removable storage media, the mechanical nature of the media-removal process enables the device to detect the beginning of this process before it reaches the point where the removable media has been removed to the extent that it is no longer operable. The minimum time taken to reach this point from the detection of the beginning of the process is, in the present invention, used to compute the size of a data chunk which is guaranteed to be completely written provided the write begins before the start of removal is detected. By breaking down all lengthy write operations into chunks which can be written within this minimum time period, the risk of corruption of the removable media and the loss of data can be eliminated. | 2011-06-23 |
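The chunk-size rule above is just removal time multiplied by sustained write rate. A sketch with made-up numbers (the application gives no concrete figures):

```python
# If removal takes at least t_min seconds to reach the point of inoperability
# and the medium sustains write_rate_bps bytes/second, any write begun before
# removal is detected completes provided it is at most t_min * write_rate_bps
# bytes, so long writes are split into chunks of that size.

def chunk_size(t_min_seconds, write_rate_bps):
    return int(t_min_seconds * write_rate_bps)

def split_into_chunks(data, size):
    return [data[i:i + size] for i in range(0, len(data), size)]

size = chunk_size(t_min_seconds=0.05, write_rate_bps=1_000_000)  # 50 KB
chunks = split_into_chunks(b"x" * 120_000, size)
assert size == 50_000
assert [len(c) for c in chunks] == [50_000, 50_000, 20_000]
```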
20110153934 | MEMORY CARD AND COMMUNICATION METHOD BETWEEN A MEMORY CARD AND A HOST UNIT - A memory card and a communication method between a memory card and a host unit are disclosed. High throughput of data between the memory card and the host unit is guaranteed by providing a communication interface between the memory card and the host unit including a first communication interface between a memory unit of the memory card and a control unit of the memory card and a second communication interface between the control unit of the memory card and the host unit. | 2011-06-23 |
20110153935 | NUMA-AWARE SCALING FOR NETWORK DEVICES - The present disclosure describes a method and apparatus for network traffic processing in a non-uniform memory access architecture system. The method includes allocating a Tx/Rx Queue pair for a node, the Tx/Rx Queue pair allocated in a local memory of the node. The method further includes routing network traffic to the allocated Tx/Rx Queue pair. The method may include designating a core in the node for network traffic processing. Of course, many alternatives, variations and modifications are possible without departing from this embodiment. | 2011-06-23 |
20110153936 | Aggregate Symmetric Multiprocessor System - An aggregate symmetric multiprocessor (SMP) data processing system includes a first SMP computer including at least first and second processing units and a first system memory pool and a second SMP computer including at least third and fourth processing units and second and third system memory pools. The second system memory pool is a restricted access memory pool inaccessible to the fourth processing unit and accessible to at least the second and third processing units, and the third system memory pool is accessible to both the third and fourth processing units. An interconnect couples the second processing unit in the first SMP computer for load-store coherent, ordered access to the second system memory pool in the second SMP computer, such that the second processing unit in the first SMP computer and the second system memory pool in the second SMP computer form a synthetic third SMP computer. | 2011-06-23 |
20110153937 | SYSTEMS AND METHODS FOR MAINTAINING TRANSPARENT END TO END CACHE REDIRECTION - The present disclosure presents systems and methods for maintaining original source and destination IP addresses of a request while performing intermediary cache redirection. An intermediary receives a request from a client destined to a server identifying a client IP address as a source IP address and a server IP address as a destination IP address. The intermediary transmits the request to a cache server, the request maintaining the original IP addresses and identifying a MAC address of the cache server as the destination MAC address. The intermediary receives the request back from the cache server responsive to a cache miss, the received request maintaining the original source and destination IP addresses. The intermediary identifies that this request is coming from the cache server via one or more data link layer properties of its transport layer connection. The intermediary transmits to the server the request identifying the client IP address as the source IP address and the server IP address as the destination IP address. | 2011-06-23 |
20110153938 | SYSTEMS AND METHODS FOR MANAGING STATIC PROXIMITY IN MULTI-CORE GSLB APPLIANCE - The present invention is directed towards systems and methods for providing static proximity load balancing via a multi-core intermediary device. An intermediary device providing global server load balancing identifies a size of a location database comprising static proximity information. The intermediary device stores the location database to an external storage of the intermediary device responsive to determining the size of the location database is greater than a predetermined threshold. A first packet processing engine on the device receives a domain name service request for a first location, determines that proximity information for the first location is not stored in a first memory cache, transmits a request to a second packet processing engine for proximity information of the first location, and transmits a request to the external storage for proximity information of the first location responsive to the second packet processing engine not having the proximity information. | 2011-06-23 |
20110153939 | SEMICONDUCTOR DEVICE, CONTROLLER ASSOCIATED THEREWITH, SYSTEM INCLUDING THE SAME, AND METHODS OF OPERATION - In one embodiment, the semiconductor device includes a data control unit configured to selectively process data for writing to a memory. The data control unit is configured to enable a processing function from a group of processing functions based on a mode register command during a write operation, the group of processing functions including at least three processing functions. The enabled processing function may be performed based on a signal received over a single pin associated with the group of processing functions. In another embodiment, the semiconductor device includes a data control unit configured to process data read from a memory. The data control unit is configured to enable a processing function from a group of processing functions based on a mode register command during a read operation. Here, the group of processing functions includes at least two processing functions. | 2011-06-23 |
20110153940 | METHOD AND APPARATUS FOR COMMUNICATING DATA BETWEEN PROCESSORS IN MOBILE TERMINAL - A data communication method between processors in a portable terminal and an apparatus thereof are provided. The method includes storing data to be transmitted from a first processor to a second processor in a transmission buffer, determining a size of a free space in a shared memory, sequentially transmitting the data stored in the transmission buffer to the shared memory in units of the size of the free space, and reading out the data transmitted to the shared memory and storing the read data in a reception buffer by the second processor. | 2011-06-23 |
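The transfer loop this abstract describes can be sketched in a few lines. The fixed capacity and the single-threaded structure are simplifying assumptions; a real implementation would interleave the two processors:

```python
# Data waits in a transmission buffer and moves through the shared memory in
# pieces no larger than its current free space; the second processor drains
# each piece into its reception buffer before the next piece is written.

def transfer(tx_buffer: bytes, shared_capacity: int) -> bytes:
    rx_buffer = bytearray()
    offset = 0
    while offset < len(tx_buffer):
        free = shared_capacity                    # receiver drained it fully
        piece = tx_buffer[offset:offset + free]   # at most the free space
        shared = piece                            # "write" into shared memory
        rx_buffer += shared                       # second processor reads out
        offset += len(piece)
    return bytes(rx_buffer)

assert transfer(b"abcdefghij", shared_capacity=4) == b"abcdefghij"
```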
20110153941 | Multi-Autonomous System Anycast Content Delivery Network - A content delivery network includes first and second sets of cache servers, a domain name server, and an anycast island controller. The first set of cache servers is hosted by a first autonomous system and the second set of cache servers is hosted by a second autonomous system. The cache servers are configured to respond to an anycast address for the content delivery network, to receive a request for content from a client system, and provide the content to the client system. The first and second autonomous systems are configured to balance the load across the first and second sets of cache servers, respectively. The domain name server is configured to receive a request from a requestor for a cache server address, and provide the anycast address to the requestor in response to the request. The anycast island controller is configured to receive load information from each of the cache servers, determine an amount of requests to transfer from the first autonomous system to the second autonomous system, and send an instruction to the first autonomous system to transfer the amount of requests to the second autonomous system. | 2011-06-23 |
20110153942 | REDUCING IMPLEMENTATION COSTS OF COMMUNICATING CACHE INVALIDATION INFORMATION IN A MULTICORE PROCESSOR - A processor may include several processor cores, each including a respective higher-level cache, wherein each higher-level cache includes higher-level cache lines; and a lower-level cache including lower-level cache lines, where each of the lower-level cache lines may be configured to store data that corresponds to multiple higher-level cache lines. In response to invalidating a given lower-level cache line, the lower-level cache may be configured to convey a sequence including several invalidation packets to the processor cores via an interface, where each member of the sequence of invalidation packets corresponds to a respective higher-level cache line to be invalidated, and where the interface is narrower than an interface capable of concurrently conveying all invalidation information corresponding to the given lower-level cache line. Each invalidation packet may include invalidation information indicative of a location of the respective higher-level cache line within different ones of the processor cores. | 2011-06-23 |
20110153943 | Aggregate Data Processing System Having Multiple Overlapping Synthetic Computers - A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool. | 2011-06-23 |
20110153944 | Secure Cache Memory Architecture - A variety of circuits, methods and devices are implemented for secure storage of sensitive data in a computing system. A first dataset that is stored in main memory is accessed and a cache memory is configured to maintain logical consistency between the main memory and the cache. In response to determining that a second dataset is a sensitive dataset, the cache memory is directed to store the second dataset in a memory location of the cache memory without maintaining logical consistency between the cache memory and main memory for that dataset. | 2011-06-23 |
20110153945 | Apparatus and Method for Controlling the Exclusivity Mode of a Level-Two Cache - A method of controlling the exclusivity mode of a level-two cache includes generating level-two cache exclusivity control information at a processor in response to an exclusivity mode indicator, and utilizing the level-two cache exclusivity control information to configure the exclusivity mode of the level-two cache. | 2011-06-23 |
20110153946 | DOMAIN BASED CACHE COHERENCE PROTOCOL - Briefly stated, technologies are generally described for accessing a data block in a cache with a domain based cache coherence protocol. A first processor in a first tile and first domain can be configured to evaluate a request to access the data block. A cache in a second tile in the first domain can be configured to send the data block to the first tile when the data block is cached in the second tile. The first processor can be configured to send the request to a third tile in another domain when the cached location is outside the first processor's domain. The third processor can be configured to determine and send the request to a data domain associated with the cached location of the data block. A fourth tile can be configured to receive the request and send the data block to the first tile. | 2011-06-23 |
20110153947 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device for processing VLIW instructions includes memory banks that store an instruction word group constituting a very long instruction. A program counter outputs an instruction address indicating a head memory bank containing a head part of the very long instruction of the next cycle. A memory bank control device uses information regarding the instruction address for the very long instruction and the number of memory banks associated with the very long instruction to specify the memory bank to be used in the next cycle and the memory bank not to be used in the next cycle. The memory bank control device controls the operation of the unused memory bank. The instruction decoder decodes the very long instruction fetched from the used memory bank. An arithmetic device executes the decoded very long instruction. | 2011-06-23 |
20110153948 | SYSTEMS, METHODS, AND APPARATUS FOR MONITORING SYNCHRONIZATION IN A DISTRIBUTED CACHE - Systems, apparatus, and methods of monitoring synchronization in a distributed cache are described. In an exemplary embodiment, a first and second processing core process a first and second thread respectively. A first and second distributed cache slices store data for either or both of the first and second processing cores. A first and second core interface co-located with the first and second processing cores respectively maintain a finite state machine (FSM) to be executed in response to receiving a request from a thread of its co-located processing core to monitor a cache line in the distributed cache. | 2011-06-23 |
20110153949 | DELAYED REPLACEMENT OF CACHE ENTRIES - A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry. | 2011-06-23 |
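The skip rule in this abstract lends itself to a short sketch. The `protected` predicate and the base eviction order are stand-ins for whatever policy the real design uses (e.g. large-page vs. small-page TLB entries):

```python
# A victim chosen by the base replacement policy is skipped, and a less
# valuable entry evicted instead, while the victim is protected; but each
# entry may only be skipped max_skips times before it is evicted anyway.

def choose_victim(entries, protected, skips, max_skips):
    """entries: eviction order from the base policy; returns entry to evict."""
    first = entries[0]
    if protected(first) and skips[first] < max_skips:
        for candidate in entries[1:]:
            if not protected(candidate):
                skips[first] += 1       # remember that we skipped it
                return candidate
    skips[first] = 0                    # quota exhausted or no fallback
    return first

large = {"L"}                           # large-page entries are protected
skips = {"L": 0, "s1": 0, "s2": 0}
prot = lambda e: e in large
assert choose_victim(["L", "s1", "s2"], prot, skips, max_skips=2) == "s1"
assert choose_victim(["L", "s2"], prot, skips, max_skips=2) == "s2"
# After two skips the quota is used up, so "L" is finally evicted:
assert choose_victim(["L"], prot, skips, max_skips=2) == "L"
```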
20110153950 | Cache memory, cache memory system, and method and program for using the cache memory - A cache memory includes: a plurality of MSHRs (Miss Status/Information Holding Registers); a memory access identification unit that identifies a memory access included in an accepted memory access request; and a memory access association unit that associates a given memory access with the MSHR that is used when the memory access turns out to be a cache miss, and determines, on the basis of the association, a candidate for the MSHR to be used by the memory access identified by the memory access identification unit. | 2011-06-23 |
20110153951 | GLOBAL INSTRUCTIONS FOR SPIRAL CACHE MANAGEMENT - A pipelined cache memory and a method of operation support global operations within the cache. The cache may be a spiral cache, with a move-to-front M2F network for moving values from a backing store to a front-most tile coupled to a processor or lower-order level of a memory hierarchy and a spiral push-back network for pushing out modified values to the backing-store. The cache controller manages application of global commands by propagating individual commands to the tiles. The global commands may provide zeroing, flushing and reconciling of the given tiles. Commands for interrupting and resuming interrupted global commands may be implemented, to reduce halting or slowing of processing while other global operations are in process. A line detector within each tile supports reconcile and flush operations, and a line patcher in the controller provides for initializing address ranges with no processor intervention. | 2011-06-23 |
20110153952 | SYSTEM, METHOD, AND APPARATUS FOR A CACHE FLUSH OF A RANGE OF PAGES AND TLB INVALIDATION OF A RANGE OF ENTRIES - Systems, methods, and apparatus for performing the flushing of a plurality of cache lines and/or the invalidation of a plurality of translation look-aside buffer (TLB) entries are described. In one such method for flushing a plurality of cache lines of a processor, a single instruction includes a first field that indicates that the plurality of cache lines of the processor are to be flushed, and in response to the single instruction, the plurality of cache lines of the processor are flushed. | 2011-06-23 |
20110153953 | SYSTEMS AND METHODS FOR MANAGING LARGE CACHE SERVICES IN A MULTI-CORE SYSTEM - A multi-core system includes a 64-bit cache storage and a 32-bit memory storage that stores a 32-bit cache object directory. One or more cache engines execute on cores of the multi-core system to retrieve objects from the 64-bit cache, create cache directory objects, insert the created cache directory objects into the cache object directory, and search for cache directory objects in the cache object directory. When an object is stored in the 64-bit cache, a cache engine can create a cache directory object that corresponds to the cached object and can insert the created cache directory object into an instance of a cache object directory. A second cache engine can receive a request to access the cached object and can identify a cache directory object in the instance of the cache object directory, using a hash key calculated based on one or more attributes of the cached object. | 2011-06-23 |
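The key property in this abstract is that any engine can rebuild the same hash key from an object's attributes and so find an entry another engine inserted. A sketch with invented attribute names and an arbitrary hash choice:

```python
# Engines on different cores locate a cached object's directory entry via a
# hash key computed from object attributes. SHA-256 and the attribute set
# (URL, method) are illustrative assumptions, not the patented design.
import hashlib

def hash_key(*attributes):
    """Stable key derived from object attributes."""
    joined = "|".join(str(a) for a in attributes)
    return hashlib.sha256(joined.encode()).hexdigest()

directory = {}                             # shared cache-object directory

# Engine on core 0 caches an object and publishes a directory entry.
key = hash_key("http://example.com/a.css", "GET")
directory[key] = {"location": "cache64", "size": 1024}

# Engine on core 1 rebuilds the same key from the request's attributes
# and finds the entry without any cross-core coordination.
entry = directory.get(hash_key("http://example.com/a.css", "GET"))
assert entry is not None and entry["location"] == "cache64"
```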
20110153954 | STORAGE SUBSYSTEM - Provided is a storage subsystem capable of speeding up input/output processing for a cache memory. Microprocessor Packages manage information related to VDEV ownership for controlling virtual devices and cache segment ownership for controlling cache segments in units of Microprocessor Packages. One Microprocessor, among the multiple Microprocessors belonging to the Microprocessor Package determined to perform input/output processing for the virtual devices, searches the cache control information stored in the Package Memory without searching the cache control information in the shared memory; if the data exists in the cache memory, it accesses the cache memory, and if it does not, it accesses the virtual devices. | 2011-06-23 |
20110153955 | SOFTWARE ASSISTED TRANSLATION LOOKASIDE BUFFER SEARCH MECHANISM - A computer implemented method searches a unified translation lookaside buffer. Responsive to a request to access the unified translation lookaside buffer, a first order code within a first entry of a search priority configuration register is identified. A unified translation lookaside buffer is then searched according to the first order code for a hashed page entry. If the hashed page entry is not found when searching a unified translation lookaside buffer according to the first order code, a second order code is identified within a second entry of the search priority configuration register. The unified translation lookaside buffer is then searched according to the second order code for the hashed page entry. | 2011-06-23 |
20110153956 | Cache Coherent Switch Device - In one embodiment, the present invention includes a switch device to be coupled between a first semiconductor component and a processor node by interconnects of a communication protocol that provides for cache coherent transactions and non-cache coherent transactions. The switch device includes logic to handle cache coherent transactions from the first semiconductor component to the processor node, while the first semiconductor component does not include such logic. Other embodiments are described and claimed. | 2011-06-23 |
20110153957 | SHARING VIRTUAL MEMORY-BASED MULTI-VERSION DATA BETWEEN THE HETEROGENEOUS PROCESSORS OF A COMPUTER PLATFORM - A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processors. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data. | 2011-06-23 |
20110153958 | NETWORK LOAD REDUCING METHOD AND NODE STRUCTURE FOR MULTIPROCESSOR SYSTEM WITH DISTRIBUTED MEMORY - Provided are a network load reducing method and a node structure for a multiprocessor system with a distributed memory. The network load reducing method uses a multiprocessor system including a node having a distributed memory and an auxiliary memory storing a sharer history table. The network load reducing method includes recording the history of a sharer node in the sharer history table of the auxiliary memory, requesting share data with reference to the sharer history table of the auxiliary memory, and deleting share data stored in the distributed memory and updating the sharer history table of the auxiliary memory. | 2011-06-23 |
20110153959 | IMPLEMENTING DATA STORAGE AND DUAL PORT, DUAL-ELEMENT STORAGE DEVICE - A method for implementing data storage and a dual port, dual element storage device are provided. A storage device includes a predefined form factor including a first port and a second port, and a first storage element and a second storage element. A controller coupled between the first port and second port, and the first storage element and second storage element controls access and provides two separate data paths to the first storage element and second storage element. | 2011-06-23 |
20110153960 | TRANSACTIONAL MEMORY IN OUT-OF-ORDER PROCESSORS WITH XABORT HAVING IMMEDIATE ARGUMENT - Methods, systems, and apparatuses to provide an XABORT in a transactional memory access system are described. In one embodiment, the stored value is a context value indicating the context in which a transactional memory execution was aborted. A fallback handler may use the context value to perform a series of operations particular to the context in which the abort occurred. | 2011-06-23 |
20110153961 | STORAGE DEVICE WITH FUNCTION OF VOLTAGE ABNORMAL PROTECTION AND OPERATION METHOD THEREOF - The present invention discloses a storage device and an operation method thereof. The storage device includes a non-volatile memory for storing data, a control unit coupled to the non-volatile memory, a power supply unit coupled to an external power source and converting the external power source to a suitable voltage for the non-volatile memory and the control unit, and a power monitor unit for monitoring the external power source. When the external power source falls below a low voltage threshold of the non-volatile memory, a control signal is transmitted to the control unit so as to stop accessing the non-volatile memory. The non-volatile memory finishes the last processing procedure, according to the last programming instruction sent by the control unit before the control signal, so as to protect the data stored in the non-volatile memory. | 2011-06-23 |
20110153962 | ENDLESS MEMORY - A storage device includes a controller that is configured to execute safe deletion operations so as to free up storage space on the device in response to triggering events. The safe deletion operations ensure that the data states of a host device making use of the storage device and the storage device itself are synchronized so as to prevent deletion of data from the storage device before it is offloaded to another storage platform. | 2011-06-23 |
20110153963 | Memory Controller and Associated Control Method - A memory controller and an associated controlling method are provided. The memory controller is connected to a memory module, and includes a FIFO buffer for receiving valid data outputted from the memory module, a write pointer for indicating written data stored in the FIFO buffer, and a read pointer for indicating read data stored in the FIFO buffer. According to the controlling method, during a CAS latency of the memory module after a read command is generated, the value of the write pointer is controlled to have the same value as that of the read pointer. | 2011-06-23 |
20110153964 | PRIORITIZING SUBGROUPS IN A CONSISTENCY GROUP - A method which prioritizes the subgroups in a consistency group by usage and/or business process. Thereafter, in case of abnormal operation of the process for copying the consistency group from primary storage to secondary storage, only a portion of the subgroups of the consistency group are copied from primary storage to secondary storage. | 2011-06-23 |
20110153965 | SYSTEMS AND METHODS FOR VIRTUALIZING STORAGE SYSTEMS AND MANAGING DATA INDEPENDENTLY - Methods, data processing systems, and computer program products are provided for virtualizing and managing a storage virtualization system (SVS) in a storage management architecture. Source data is copied from the source storage media to target data in a target storage media based on a predefined copy policy in a copy mapping table. A relation between the source data and the target data is tracked in the copy mapping table. It is determined whether a copy of the requested data exists using the copy mapping table. A least utilized storage system having a copy of the requested storage media is determined. Access to the requested storage media in the least utilized storage system is tested. If access is not possible, access to a copy of the requested storage media in another storage system is provided by updating a frontend-backend mapping table and forwarding all data access commands to the other system. | 2011-06-23 |
20110153966 | STORAGE CONTROLLER AND DATA MANAGEMENT METHOD - Upon receiving a primary/secondary switching command from a secondary host system, a secondary storage control device interrogates a primary storage control device as to whether or not yet-to-be-transferred data that has not been remote copied from the primary storage control device to the secondary storage control device is present. In the event that yet-to-be-transferred data is present, the secondary storage control device receives the yet-to-be-transferred data from the primary storage control device and updates a secondary volume. The primary storage control device then uses a differential bitmap table to manage the positions of updates to the primary volume due to host accesses occurring from the time the secondary storage control device receives the primary/secondary switching command onwards. | 2011-06-23 |
20110153967 | STORAGE AREA DYNAMIC ASSIGNMENT METHOD - A storage system allocates a data storage area in response to an access request from a first computer if the capacity of a first physical storage device configuring a first logical storage area, provided to the first computer, is equal to or lower than a predetermined threshold. If that capacity exceeds the predetermined threshold, the storage system associates the first logical storage area with another physical storage device, different from the first physical storage device associated with a second logical storage area provided to the first computer and a second computer, and allocates a data storage area from that other physical storage device. | 2011-06-23 |
20110153968 | DATA DUPLICATION CONTROL METHOD - When there is a change in a group of volumes managed by a host computer, data duplication processing is immediately carried out against the changed volume. The host computer includes a volume-managing portion, a data duplication-controlling portion which executes the data duplication of data stored in a volume in a main data center, and a data duplication storing portion which stores data necessary for the data duplication. The data duplication-controlling portion compares data held by the volume-managing portion with the data in the data duplication storing portion, and updates the data in the data duplication storing portion based on the data held by the volume-managing portion. | 2011-06-23 |
20110153969 | DEVICE AND METHOD TO CONTROL COMMUNICATIONS BETWEEN AND ACCESS TO COMPUTER NETWORKS, SYSTEMS OR DEVICES - A network security device and method for one way or secure communication are disclosed. At least one processor is connected to a higher level network port and a lower level network port, and is connectable to a shared memory. The at least one processor is configured to send a data to the lower level network port via the shared memory in response to receiving the data from the higher level network port and to decline or ignore any request from the lower level network port to write to the shared memory. The at least one processor, which may be a higher level processor, may be further configured to decline or ignore any request from the higher level network port to read the shared memory. A lower level processor, connected to the lower level network port, may be at least conditionally disabled from writing to the shared memory. | 2011-06-23 |
20110153970 | Method and Apparatus for the Execution of a Program - An apparatus and a method are provided for the execution of a program by a program-controlled device, in which the program-controlled device receives instructions and automatically executes the program when it receives an access instruction for accessing a protected memory area. The invention further relates to a programmable transponder containing at least one such program-controlled device. | 2011-06-23 |
20110153971 | Data Processing System Memory Allocation - The present invention provides a data processing system with multiple logical partitions that isolate memory resources for applications contained in the logical partitions. A method is provided for moving a specific memory quantity between two logical partitions by first computing a threshold amount. Then, if the specific memory quantity to be transferred is less than the threshold amount, removing the specific memory quantity from memory assigned in the first partition and adding the specific memory quantity to memory assigned in the second partition. However, if the specific memory quantity is greater than the threshold amount, the method provides for removing an amount equal to the threshold from memory assigned in the first partition and adding that threshold amount to memory assigned in the second partition and repeating the removing and adding steps until the specific memory quantity has been transferred. | 2011-06-23 |
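The threshold-bounded transfer loop in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation; the `Partition` class, the MB units, and the function name are all assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class Partition:
    """A logical partition with an assigned amount of memory (in MB)."""
    assigned: int

def move_memory(quantity, threshold, src, dst):
    """Move `quantity` MB from src to dst, at most `threshold` MB per step.

    If the quantity exceeds the threshold, the transfer is repeated in
    threshold-sized chunks until the full quantity has been moved.
    """
    moved = 0
    while moved < quantity:
        step = min(quantity - moved, threshold)
        src.assigned -= step  # remove from the first partition
        dst.assigned += step  # add to the second partition
        moved += step
    return moved

a, b = Partition(4096), Partition(1024)
moved = move_memory(700, 256, a, b)  # moves in steps of 256, 256, 188
```

The chunking keeps each removal/addition step bounded, so memory pressure in the donor partition changes gradually rather than all at once.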
20110153972 | FREE SPACE DEFRAGMENTATION IN EXTENT BASED FILE SYSTEM - Example apparatus, methods, data structures, and computers defragment unallocated space in a storage associated with an extent based file system. One example method locates a first unallocated area having a desired size and a desired location to receive an extent from a first end of an allocated area in the storage. The example method then swaps the extent from the first end of the allocated area with the first unallocated area. The example method also locates a second unallocated area having a desired size and a desired location to receive an extent from a second, opposite end of the allocated area in the storage. The example method then swaps the extent from the second end of the allocated area with the second unallocated area. The example method may continue to swap until no more suitable unallocated regions are available to receive an extent sliced off an allocated area. | 2011-06-23 |
20110153973 | HYBRID SOLID-STATE MEMORY SYSTEM HAVING VOLATILE AND NON-VOLATILE MEMORY - A hybrid solid-state memory system is provided for storing data. The solid-state memory system comprises a volatile solid-state memory, a non-volatile solid-state memory, and a memory controller. Further, a method is provided for storing data in the solid-state memory system. The method comprises the following steps. A write command is received by the memory controller. Write data is stored in the volatile memory in response to the write command. Data is transferred from the volatile memory to the non-volatile memory in response to a data transfer request. | 2011-06-23 |
20110153974 | SYSTEM AND METHOD OF OPERATING MEMORY DEVICES OF MIXED TYPE - A memory system architecture is provided in which a memory controller controls memory devices in a serial interconnection configuration. The memory controller has an output port for sending memory commands and an input port for receiving memory responses for those memory commands requisitioning such responses. Each memory device includes a memory, such as, for example, NAND-type flash memory, NOR-type flash memory, random access memory and static random access memory. Each memory command is specific to the memory type of a target memory device. A data path for the memory commands and the memory responses is provided by the interconnection. A given memory command traverses memory devices in order to reach its intended memory device of the serial interconnection configuration. Upon its receipt, the intended memory device executes the given memory command and, if appropriate, sends a memory response to a next memory device. The memory response is transferred to the memory controller. | 2011-06-23 |
20110153975 | METHOD FOR PRIORITIZING VIRTUAL REAL MEMORY PAGING BASED ON DISK CAPABILITIES - A method manages memory paging operations. Responsive to a request to page out a memory page from a shared memory pool, the method identifies whether physical space within one of a number of paging space devices has been allocated for the memory page. If physical space within a paging space device has not been allocated for the memory page, a page priority indicator for the memory page is identified. The memory page is then allocated to one of a number of memory pools within one of the paging space devices, according to the page priority indicator of the memory page. Finally, the memory page is written to the allocated memory pool. | 2011-06-23 |
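The priority-based pool selection described in this abstract can be sketched as follows. The pool names, the module-level tables, and the return strings are illustrative assumptions, not details from the application:

```python
# Hypothetical sketch: route a page-out to a memory pool chosen by priority.
PRIORITY_POOLS = {"high": [], "medium": [], "low": []}  # pools within one paging space device
ALLOCATED = set()  # pages that already have physical space on a paging device

def page_out(page_id, priority="medium"):
    """Write a page to a pool selected by its page priority indicator."""
    if page_id in ALLOCATED:
        return "already-allocated"      # physical space exists; no new allocation needed
    pool = PRIORITY_POOLS[priority]     # select the pool matching the priority indicator
    pool.append(page_id)                # allocate the page to that pool
    ALLOCATED.add(page_id)              # record that physical space is now allocated
    return "written"
```

In practice each pool would map to disk regions with different performance characteristics, so high-priority pages land on the fastest paging space.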
20110153976 | METHODS AND APPARATUSES TO ALLOCATE FILE STORAGE VIA TREE REPRESENTATIONS OF A BITMAP - Methods and apparatuses that search tree representations of a bitmap for available blocks to allocate in storage devices are described. An allocation request for a file may be received to initiate the search. In one embodiment, the bitmap may include an array of bits corresponding to blocks in the storage devices. Each bit may indicate whether one of the blocks is available. The tree representations may include at least one red-black tree having nodes corresponding to one or more consecutive bits in the bitmap indicating an extent of available blocks. One of the tree representations may be selected according to the file associated with an allocation request, to identify an extent of available blocks matching the allocation request. The tree representations may be synchronized as the bitmap is updated with changes of block allocations in the storage devices. | 2011-06-23 |
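The bitmap/extent relationship underlying this abstract can be illustrated with a small sketch. The application indexes free extents in red-black trees for fast lookup; this simplified version assumes a plain linear scan instead, and the function names are invented for the example:

```python
def free_extents(bitmap):
    """Yield (offset, length) runs of available (0) bits in an allocation bitmap."""
    run_start = None
    for i, bit in enumerate(bitmap):
        if bit == 0 and run_start is None:
            run_start = i                     # a run of free blocks begins
        elif bit == 1 and run_start is not None:
            yield run_start, i - run_start    # the run ends; report its extent
            run_start = None
    if run_start is not None:
        yield run_start, len(bitmap) - run_start  # run extends to the end

def allocate(bitmap, nblocks):
    """Mark the first free extent of at least nblocks allocated; return its offset."""
    for off, length in free_extents(bitmap):
        if length >= nblocks:
            for j in range(off, off + nblocks):
                bitmap[j] = 1  # flip bits to mark the blocks allocated
            return off
    return None  # no extent large enough
```

Replacing the linear scan with red-black trees keyed on extent size and location is what makes the lookup fast at scale; the trees must then be resynchronized whenever the bitmap changes.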
20110153977 | STORAGE SYSTEMS AND METHODS - Systems and methods for information storage replication are presented. In one embodiment a storage flow control method includes receiving a memory operation indication; performing a pre-reserve allocation process before proceeding with the memory operation, wherein the pre-reserve allocation process includes converting available unallocated memory space to allocated memory space if there is sufficient available unallocated memory space to perform the memory operation; executing the memory operation if the pre-reserve allocation process returns an indication that there is sufficient memory space allocated to perform the memory operation; and aborting the memory operation if the pre-reserve allocation process returns an indication that there is insufficient memory space allocated to perform the memory operation. In one embodiment, the memory operation is a write operation. | 2011-06-23 |
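The pre-reserve flow control above reduces to a reserve-then-execute pattern, sketched minimally below. The `Store` class and return strings are assumptions for illustration, not the patent's interfaces:

```python
class Store:
    """A storage target tracking unallocated vs. allocated space."""
    def __init__(self, unallocated):
        self.unallocated = unallocated  # available unallocated space
        self.allocated = 0              # space pre-reserved for operations

def pre_reserve(store, needed):
    """Convert unallocated space to allocated space if enough is available."""
    if store.unallocated >= needed:
        store.unallocated -= needed
        store.allocated += needed
        return True   # sufficient space is now allocated
    return False      # insufficient space; nothing converted

def write(store, size):
    """Execute the write only if the pre-reserve step succeeds; otherwise abort."""
    if not pre_reserve(store, size):
        return "aborted"
    store.allocated -= size  # the reserved space is consumed by the write
    return "written"
```

Reserving before executing means the operation can never fail partway through for lack of space; it either has its allocation up front or aborts cleanly.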
20110153978 | Predictive Page Allocation for Virtual Memory System - A virtual memory method for allocating physical memory space required by an application by tracking the page space used in each of a sequence of invocations by an application requesting memory space; keeping count of the number of said invocations; and determining the average page space used for each of said invocations from the count and previous average. Then, this average page space is recorded as a predicted allocation for the next invocation. This recorded average space is used for the next invocation. If there is any additional page space required by said next invocation, this additional page space may be accessed through any conventional default page space allocation. | 2011-06-23 |
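The count-and-average prediction in this abstract is an incremental running mean, which can be sketched as follows (class and method names are invented for the example):

```python
class PagePredictor:
    """Track a running average of page space used per invocation and
    use it as the predicted allocation for the next invocation."""
    def __init__(self):
        self.count = 0      # number of invocations seen so far
        self.average = 0.0  # average page space used per invocation

    def record(self, pages_used):
        # Incremental mean: new_avg = old_avg + (x - old_avg) / n,
        # computed from only the count and the previous average.
        self.count += 1
        self.average += (pages_used - self.average) / self.count

    def predicted_allocation(self):
        """Predicted page space to pre-allocate for the next invocation."""
        return self.average
```

Any shortfall beyond the prediction would fall through to the default page space allocation path, so an under-prediction costs a slower allocation rather than a failure.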
20110153979 | MODIFIED B+ TREE TO STORE NAND MEMORY INDIRECTION MAPS - Embodiments of the invention generally pertain to memory devices, and more specifically to reducing the write amplification of memory devices without increasing cache requirements. Embodiments of the present invention may be implemented as a modified B+ tree: a multi-level tree in which all data items are stored in the leaf nodes, and each non-leaf node references a large number of nodes in the next level down the tree. The modified B+ trees described herein may be used as data structures that map memory device page addresses. The entire modified B+ tree used to map the pages may be stored on the same memory device, requiring only a limited amount of cache. These embodiments may be utilized by low-cost controllers that require good sequential read and write performance without large amounts of cache. | 2011-06-23 |
20110153980 | MULTI-STAGE RECONFIGURATION DEVICE AND RECONFIGURATION METHOD, LOGIC CIRCUIT CORRECTION DEVICE, AND RECONFIGURABLE MULTI-STAGE LOGIC CIRCUIT - To provide a device to reconfigure multi-level logic networks, which enable logic modification and reconfiguration of a multi-level logic network with small circuit area and low-power dissipation in a simple manner. For example, in the case of reconfiguring a multi-level logic network following logic modification for deleting an output vector F(b) of an objective logic function F(X) corresponding to an input vector b, unmodified pq elements are selected one by one from the nearest pq element E | 2011-06-23 |
20110153981 | Heterogeneous computer architecture based on partial reconfiguration - Systems and methods for partial reconfiguration of reconfigurable application specific integrated circuit (ASIC) devices that may employ an interconnection template to allow partial reconfiguration (PR) blocks of an ASIC device to be selectively and dynamically interconnected and/or disconnected in standardized fashion from communication with a packet router within the same ASIC device. | 2011-06-23 |
20110153982 | SYSTEMS AND METHODS FOR COLLECTING DATA FROM MULTIPLE CORE PROCESSORS - Systems and methods are disclosed for collecting data from cores of a multi-core processor using collection packets. A collection packet can traverse through cores of the multi-core processor while accumulating requested data. Upon completing the accumulation of the requested data from all required cores, the collection packet can be transmitted to a system operator for system maintenance and/or monitoring. | 2011-06-23 |
20110153983 | Gathering and Scattering Multiple Data Elements - According to a first aspect, efficient data transfer operations can be achieved by: decoding by a processor device, a single instruction specifying a transfer operation for a plurality of data elements between a first storage location and a second storage location; issuing the single instruction for execution by an execution unit in the processor; detecting an occurrence of an exception during execution of the single instruction; and in response to the exception, delivering pending traps or interrupts to an exception handler prior to delivering the exception. | 2011-06-23 |
20110153984 | DYNAMIC VOLTAGE CHANGE FOR MULTI-CORE PROCESSING - Embodiments of the disclosure generally set forth techniques for supplying different voltage levels and clock signals to a processor core. One example method includes determining a first workload of a first processor core in the multi-core processor for performing a first computing task associated with a first image area and a first geometric mapping between the first computing task and the first processor core, selecting a first voltage level or a first clock signal having a first clock frequency for the first processor core based on the determined first workload, wherein the first voltage level is compatible with the selected first clock frequency, initiating a voltage change to the first processor core based on the selected first voltage level, and initiating a clock change to the first processor core based on the selected first clock signal having the first clock frequency. | 2011-06-23 |
20110153985 | SYSTEMS AND METHODS FOR QUEUE LEVEL SSL CARD MAPPING TO MULTI-CORE PACKET ENGINE - The present invention is directed towards systems and methods for distributed operation of a plurality of cryptographic cards in a multi-core system. In various embodiments, a plurality of cryptographic cards providing encryption/decryption resources are assigned to a plurality of packet processing engines in operation on a multi-core processing system. One or more cryptographic cards can be configured with a plurality of hardware or software queues. The plurality of queues can be assigned to plural packet processing engines so that the plural packet processing engines share cryptographic services of a cryptographic card having multiple queues. In some embodiments, all cryptographic cards are configured with multiple queues which are assigned to the plurality of packet processing engines configured for encryption operation. | 2011-06-23 |