48th week of 2011 patent application highlights part 65
Patent application number | Title | Published |
20110296045 | STREAMING MEDIA DELIVERY COMPOSITE - A system delay factor associated with a segment file of a streaming media product is determined. A file transfer delay factor associated with the segment file of the streaming media product is also determined. A media delivery composite is determined for the segment file of the streaming media product based upon, at least in part, the system delay factor associated with the segment file and the file transfer delay factor associated with the segment file. | 2011-12-01 |
20110296046 | ADAPTIVE PROGRESSIVE DOWNLOAD - Data packets to be transferred over a network as part of a temporally ordered content stream are obtained by an adaptive progressive download (APD) server. The APD server divides the data packets of the content stream into epochs of contiguous data, the epochs including a current epoch. The APD server determines a bit rate available on the network for transferring the current epoch and calculates an estimate of a playback time of the content stream buffered at a computer to which the content stream is being transferred and played back. The calculation of the estimate is based at least in part on the bit rate available on the network and an encoding bit rate of the content stream. The APD server controls the transfer of the content stream over the network in accordance with the estimated playback time. | 2011-12-01 |
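The buffered-playback estimate this abstract describes — derive buffered seconds from bytes delivered, the encoding bit rate, and elapsed playback, then pace the transfer accordingly — can be sketched as below. The function names, the target-buffer policy, and all numeric values are illustrative assumptions, not taken from the application:

```python
def estimated_buffered_seconds(bytes_delivered, encoding_bps, seconds_played):
    """Estimate how many seconds of playback the client has buffered.

    bytes_delivered: total content bytes transferred so far
    encoding_bps:    encoding bit rate of the stream (bits per second)
    seconds_played:  wall-clock seconds the client has been playing
    """
    seconds_delivered = (bytes_delivered * 8) / encoding_bps
    return max(0.0, seconds_delivered - seconds_played)


def next_epoch_rate(buffered_s, target_buffer_s, available_bps, encoding_bps):
    """Pick a transfer rate for the next epoch: fill aggressively while the
    buffer is below target, otherwise pace near the encoding rate."""
    if buffered_s < target_buffer_s:
        return available_bps            # use all available network bandwidth
    return min(available_bps, encoding_bps)
```

For example, after delivering 1 MB of a 1 Mbit/s stream that has played for 3 s, roughly 5 s of playback remain buffered.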
20110296047 | METHOD AND APPARATUS FOR SEAMLESS PLAYBACK OF MEDIA - Methods and apparatus are provided for seamless playback of media files by an application of a computing device. In one embodiment a method includes detecting a user selection for playback of one or more media files by the application, initiating playback of a first media file by the application based on the selection, and determining a time period to pre-load a second media file during playback of the first media file. The method may further include pre-loading data for the second media file, initiating playback of a buffer portion following an endpoint of the first media file, and continuing playback of the second media file following the endpoint of the buffer portion. | 2011-12-01 |
20110296048 | Method and system for stream handling using an intermediate format - A method of delivering a live stream is implemented within a content delivery network (CDN) and includes the high level functions of recording the stream using a recording tier, and playing the stream using a player tier. The step of recording the stream includes a set of sub-steps that begins when the stream is received at a CDN entry point in a source format. The stream is then converted into an intermediate format (IF), which is an internal format for delivering the stream within the CDN and comprises a stream manifest, a set of one or more fragment indexes (FI), and a set of IF fragments. The player process begins when a requesting client is associated with a CDN HTTP proxy. In response to receipt at the HTTP proxy of a request for the stream or a portion thereof, the HTTP proxy retrieves (either from the archive or the data store) the stream manifest and at least one fragment index. Using the fragment index, the IF fragments are retrieved to the HTTP proxy, converted to a target format, and then served in response to the client request. The source format may be the same or different from the target format. Preferably, all fragments are accessed, cached and served by the HTTP proxy via HTTP. | 2011-12-01 |
20110296049 | METHOD AND SYSTEM FOR REALIZING MASSIVE TERMINALS ACCESS OF A STREAMING MEDIA SERVER - The present invention relates to the IPTV technical field, and discloses a method and system for implementing access of a large number of terminals to a streaming media server, to solve the prior-art technical problems of the streaming media server having low processing efficiency and not supporting access by a large number of terminals over TCP short connections when many terminals access the streaming media server. The present invention uses the EPOLL/POLL event polling interface to poll the events on established links, thereby improving the capability of the system to accept the access of a large number of terminals. A socket file descriptor is used as an index entry of the polling list, so that link retrieval efficiency is improved when a large number of terminals are accessing. | 2011-12-01 |
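The fd-indexed polling scheme reads naturally as an epoll-style event loop in which the socket file descriptor is the key into the session table, giving O(1) per-event lookup. A minimal sketch in Python, using the standard `selectors` module (which wraps epoll/poll where available); the class and field names are assumptions:

```python
import selectors
import socket


class StreamingAccessServer:
    """Sketch: poll-based event loop where the socket file descriptor
    indexes the session table, so each ready event is an O(1) lookup."""

    def __init__(self):
        self.sel = selectors.DefaultSelector()   # epoll/poll where available
        self.sessions = {}                       # fd -> per-terminal state

    def register(self, sock):
        fd = sock.fileno()
        self.sessions[fd] = {"sock": sock, "received": b""}
        self.sel.register(sock, selectors.EVENT_READ)
        return fd

    def poll_once(self, timeout=0.1):
        for key, _events in self.sel.select(timeout):
            session = self.sessions[key.fd]      # fd-indexed retrieval
            data = session["sock"].recv(4096)
            if data:
                session["received"] += data
            else:                                # peer closed: tear down link
                self.sel.unregister(session["sock"])
                session["sock"].close()
                del self.sessions[key.fd]
```

A `socket.socketpair()` stands in for a real terminal connection when exercising the loop.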
20110296050 | REALTIME WEBSITES WITH PUBLICATION AND SUBSCRIPTION - Architecture that utilizes a long poll publication/subscription (pubsub) model for updating realtime objects of a webpage. Each realtime-enabled object is a pubsub entity in a pubsub service. Each rendering of the webpage creates a subscription on a page object. The entity in the pubsub service enables the realtime communications of content to the webpage object. The architecture provides light-weight realtime anonymous pubsub at scale, a light-weight pubsub that can scale to the web on the backend, and integration into existing website code by plugging in at the javascript level. | 2011-12-01 |
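The per-object subscription model — each realtime-enabled page object is a pubsub entity, and each rendering of the page creates a subscription on it — can be sketched as an in-memory table. `PubSubService` and its method names are hypothetical, and a real long-poll endpoint would block the HTTP request until an update arrives rather than return immediately:

```python
import collections


class PubSubService:
    """Minimal long-poll-style pubsub sketch: entities map to page objects,
    and each page rendering subscribes to the entities it displays."""

    def __init__(self):
        self.subscribers = collections.defaultdict(set)   # entity -> subscriber ids
        self.pending = collections.defaultdict(list)      # (entity, sub) -> updates

    def subscribe(self, entity, subscriber_id):
        self.subscribers[entity].add(subscriber_id)

    def publish(self, entity, update):
        # fan the update out to every subscription on this entity
        for sub in self.subscribers[entity]:
            self.pending[(entity, sub)].append(update)

    def long_poll(self, entity, subscriber_id):
        """Drain queued updates; a real server would block until one arrives."""
        return self.pending.pop((entity, subscriber_id), [])
```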
20110296051 | ROUTE AWARE NETWORK LINK ACCELERATION - A method and apparatus for route aware network link acceleration provides a managed communication channel for accelerated and reliable network communication between a client and other network devices as needed. The communication channel may comprise one or more segments having increased speed, reliability, security, or other improved characteristics as compared to traditional communication links. Network traffic may be routed through one or more of the segments based on various criteria to improve communication of the traffic. In one embodiment, the segments may be arranged in a daisy chain configuration and be provided by one or more chaining nodes. | 2011-12-01 |
20110296052 | Virtual Data Center Allocation with Bandwidth Guarantees - A virtual data center allocation architecture with bandwidth guarantees that provides for the creation of multiple virtual data centers from a single physical infrastructure. The virtual data center allocation is accomplished in three steps. First, clusters are created from the servers in the physical infrastructure. Second, a bipartite graph is built to map the virtual machines to the servers located in a particular cluster and finally a path is calculated between two virtual machines. The virtual data centers may be dynamically expanded or contracted based on changing bandwidth guarantees. | 2011-12-01 |
20110296053 | APPLICATION-LAYER TRAFFIC OPTIMIZATION SERVICE SPANNING MULTIPLE NETWORKS - Using the ALTO Service, networking applications can request through the ALTO protocol information about the underlying network topology from the ISP or Content Provider. The ALTO Service provides information such as preferences of network resources with the goal of modifying network resource consumption patterns while maintaining or improving application performance. This document describes, in one example, an ALTO server that intersects network and cost maps for a first network with network and cost maps for a second network to generate a master cost map that includes one or more master cost entries that each represent a cost to traverse a network from an endpoint in the first network to an endpoint in the second network. Using the master cost map, a redirector may select a preferred node in the first network with which to service a content request received from a host in the second network. | 2011-12-01 |
20110296054 | Network Message Transmission - A method, computer program product, and apparatus for transmitting a message over a network are presented. A processor unit receives the message for transmission over the network and a portion of an address for a source from which the message is to be transmitted. The processor unit identifies an interface configured to transmit messages from the source onto the network using the portion of the address. The processor unit then transmits the message from the source onto the network using the interface. | 2011-12-01 |
20110296055 | ID SETTING SYSTEM, ID SETTING METHOD AND DISPLAY UNIT USING THE SAME - An ID setting method and system capable of easily setting IDs of a plurality of display units. The ID setting system includes a plurality of display units connected to each other through an input port and an output port, and a control unit that controls assignment of an ID to each of the plurality of display units. Each of the display units compares a present ID to an initial ID, and disables the connection between its output port and another display unit when the present ID and the initial ID match. Accordingly, a user can easily assign IDs to each of the plurality of display units. | 2011-12-01 |
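The mechanism implied here — an unassigned unit (present ID equal to the initial ID) disables its output port, so each assignment message stops at the first unassigned unit in the chain — can be simulated as follows. The initial-ID value and the class names are assumptions for illustration:

```python
INITIAL_ID = 0xFF   # assumed factory-default ID (hypothetical value)


class DisplayUnit:
    def __init__(self):
        self.present_id = INITIAL_ID
        self.next_unit = None        # unit attached to the output port

    def forwards(self):
        # An unassigned unit (present == initial) disables its output port,
        # so assignment messages stop at the first unassigned unit.
        return self.present_id != INITIAL_ID

    def receive_assignment(self, new_id):
        if self.forwards() and self.next_unit is not None:
            self.next_unit.receive_assignment(new_id)
        else:
            self.present_id = new_id


def assign_ids(chain_head, count, first_id=1):
    """Control unit: send one assignment per unit; each lands on the
    first still-unassigned unit in the daisy chain."""
    for offset in range(count):
        chain_head.receive_assignment(first_id + offset)
```

After each assignment the newly assigned unit re-enables its output port, so the next message travels one hop further down the chain.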
20110296056 | HIGH-SPEED INTERFACE FOR DAISY-CHAINED DEVICES - A plurality of devices are operated by storing at a device a first ID number received at a first port of the device and a second ID number received at a second port of the device. The device receives a data command through at least one of the first and second ports. The data command has a command ID number. The device executes the data command when at least one of the command ID number is equal to the first ID number when the data command is received at the first port and the command ID number is equal to the second ID number when the data command is received at the second port. | 2011-12-01 |
20110296057 | Event Handling In An Integrated Execution Environment - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for handling input received from a common interface of a program and a runtime environment when both the program and the runtime environment are configured to consume the received input. Given that both a browser program and a media player program hosted by the browser program are configured to detect an event of a certain type, there may be a contention of whether the browser program or the media player program may act first on the detected event. The disclosed systems and techniques enable interpretation of a user's intent when the user interacts with a webpage hosting media content and when the user's input occurs over media content rendered by the media player program. Similar advantages may also be realized within the context of another execution environment, or other program, different than a browser program. | 2011-12-01 |
20110296058 | MEDIA PLAYER DEVICE AND METHOD FOR WAKE-UP THEREOF - A media player device and a method for wake-up thereof are provided. The method includes: when the media player device is in a standby mode, checking whether an external device is connected to a plurality of contact locations; and if the external device is connected to the media player device, waking up the media player device. | 2011-12-01 |
20110296059 | System and method for seamless management of multi-personality mobile devices - A system and method for managing multi-mode mobile devices from a personal computer (PC), in which the two devices communicate over a point-to-point connection such as Universal Serial Bus (USB) or TCP/IP, and which provides a mechanism to remotely control the personality of the device over the communications link. Furthermore, the system and method allow the user of the PC to control when the personality change occurs and also allow specific system events to trigger personality changes automatically on the user's behalf. Additionally, the system and method control the user experience on both the mobile device and the PC to ensure that the appropriate application is available to accept a connection to the new personality on both the mobile device and the PC. | 2011-12-01 |
20110296060 | Apparatus for Monitoring At Least One Gage Device Using USB Virtual COM And Keyboard Emulator - Embodiments of an intelligent cable and flexible multiplexer to monitor at least one gage device are taught herein. The cables and multiplexers can receive any brand of electronic gages using a variety of asynchronous or synchronous communication protocols and provide outputs according to a desired communication protocol, including USB and RS232. One embodiment desirably monitors a gage device using a USB Virtual COM and keyboard emulator. | 2011-12-01 |
20110296061 | Apparatus, Methods, And Computer-Code For Handling An Impending Decoupling Between A Peripheral Device And A Host Device - Apparatus, methods and computer-code are disclosed where an impending decoupling between a peripheral device and a host is detected. In some embodiments, in response to the detected impending disconnection, a user alert signal is generated. In some embodiments, an ‘onboard detector’ that is associated with housing of the peripheral device and operative to detect the impending disconnection is provided. In some embodiments, the user alert signal is generated in accordance with inter-device data flow between the host and the peripheral device. Exemplary peripheral devices include but are not limited to transient storage devices such as a USB flash drives (UFD). | 2011-12-01 |
20110296062 | STORAGE APPARATUS AND METHOD FOR CONTROLLING STORAGE APPARATUS - An object is to efficiently and securely process a data I/O request received from an external apparatus by a storage apparatus. A storage apparatus | 2011-12-01 |
20110296063 | BUFFER MANAGER AND METHODS FOR MANAGING MEMORY - Some of the embodiments of the present disclosure provide a method comprising managing a plurality of buffer addresses in a system-on-chip (SOC); and if a number of available buffer addresses in the SOC falls below a low threshold value, obtaining one or more buffer addresses from a memory, which is external to the SOC, to the SOC. Other embodiments are also described and claimed. | 2011-12-01 |
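The low-watermark replenishment claimed here can be sketched as a two-tier address pool: a small on-chip free list backed by a larger list in external memory. The class shape and batch size are assumptions:

```python
class BufferManager:
    """Watermark-based buffer-address pool sketch: when the on-chip pool
    drops below the low threshold, a batch of addresses is obtained from
    memory external to the SOC."""

    def __init__(self, external_pool, low_threshold, refill_batch):
        self.on_chip = []                  # addresses cached on the SOC
        self.external = external_pool      # addresses held in external memory
        self.low = low_threshold
        self.batch = refill_batch

    def allocate(self):
        if len(self.on_chip) < self.low:
            # obtain a batch of buffer addresses from external memory
            for _ in range(min(self.batch, len(self.external))):
                self.on_chip.append(self.external.pop())
        return self.on_chip.pop() if self.on_chip else None

    def free(self, addr):
        self.on_chip.append(addr)
```

The batching amortizes the cost of reaching out to external memory over several allocations.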
20110296064 | UPLINK DATA THROTTLING BY BUFFER STATUS REPORT (BSR) SCALING - A technique for uplink data throttling includes buffer status report (BSR) scaling. A target data flow rate may be determined based on at least on condition of a wireless device. The buffer status report may be adjusted to cause the target flow rate and transmitted by the wireless device. The wireless device may then receive a flow control command based on the buffer status report. | 2011-12-01 |
20110296065 | CONTROL UNIT FOR THE EXCHANGE OF DATA WITH A PERIPHERAL UNIT, PERIPHERAL UNIT, AND METHOD FOR DATA EXCHANGE - A control unit is described that has at least one communications interface for the exchange of data with at least one peripheral unit, the communications interface being configured for transmitting synchronization signals to the peripheral unit in a first, synchronous operating mode. The communications interface is configured to change a time interval between two successive synchronization signals. | 2011-12-01 |
20110296066 | SYSTEM ON CHIP AND TRANSMISSION METHOD UNDER AXI BUS - A system on chip (SoC) and a transmission method under the Advanced eXtensible Interface (AXI) bus are disclosed. The system includes a master device, a first extending module, a first interconnection structure, a first subtracting module, a second interconnection structure, and a slave device. The first extending module is configured to add N bits into an identifier (ID) carried in a transmission request, where N is equal to a sum of bits added by all interconnection structures in a longest loop of a system into the ID carried in the transmission request that passes through the interconnection structures. The first subtracting module is configured to subtract M bits from the ID carried in the transmission request output by the first interconnection structure when a slave device to be accessed by the master device is not a slave device connected with the first interconnection structure, where M is equal to the number of bits added by the first interconnection structure into the ID carried in the transmission request that passes through the first interconnection structure. The embodiments reduce costs and avoid the problems caused by ID compression. | 2011-12-01 |
20110296067 | AUTOMATIC ADDRESSING SCHEME FOR 2 WIRE SERIAL BUS INTERFACE - An automatic addressing bus system and method of communication comprising a main and an end device, wherein the respective bus controllers used in the main and end devices comprise multi-master capability. The main controlling device has an address known to the end device to be connected, and the end device is able to actively initiate the address allocation procedure without the need for user interaction. The method and system may be implemented using known bus systems such as 2-wire serial buses, in particular I2C. | 2011-12-01 |
20110296068 | OPTIMIZED ARBITER USING MULTI-LEVEL ARBITRATION - An apparatus comprising a first sub-arbiter circuit and a second sub-arbiter circuit. The first sub-arbiter circuit may be configured to determine a winning channel from a plurality of channel requests based on a first criteria. The second sub-arbiter circuit may be configured to determine a winning channel received from the plurality of channel requests based on a second criteria. The second sub-arbiter may also be configured to optimize the order of the winning channels from the first sub-arbiter by overriding the first sub-arbiter if the second criteria creates a more efficient data transfer. | 2011-12-01 |
20110296069 | Fabric Based Lock Manager Service - A replicated finite state machine lock service facilitates resource sharing in a distributed system. A lock request from a client identifies a resource and a lock-mode, and requests a leaseless lock on the resource. The service uses client instance identifiers to categorize requests as duplicate, stale, abandoned, or actionable. A lock may be abandoned when a client holding the lock goes down. After a per-client abandonment timer expires, the lock service may treat any exclusive lock granted to the client as abandoned, and treat any non-exclusive lock granted to the client as unlocked. The service tries to notify a lock-holding client if another client requests the same lock, and treats the lock as abandoned if the notification attempt fails. An abandoned read lock is granted to a different client on request. An abandoned write lock is granted or refused depending on whether the requesting client accepts abandoned write locks. | 2011-12-01 |
20110296070 | MOTHERBOARD AND COMPUTER USING SAME - A motherboard includes a main board, a first CPU seat, and at least one CPU expansion interface. The first CPU seat is seated on the main board. The at least one CPU expansion interface is formed on the main board abutting the first CPU seat, providing an expansion access terminal to add another CPU to the motherboard. | 2011-12-01 |
20110296071 | BLADE SERVER AND METHOD FOR ADDRESS ASSIGNMENT IN BLADE SERVER SYSTEM - A blade server and a method for auto-assigning a unique communication address for a blade server system. The blade server system includes a plurality of blade servers and a mainboard with a plurality of slots. The plurality of blade servers is received in the mainboard operating in the blade server system. The blade server includes at least one processor, a memory and a bus. The blade server connects the memory to the at least one processor by the bus and assigns a unique slot identification to the receiving slot. The blade server detects the unique slot identification of receiving slot of the mainboard. Then, the blade server assigns the unique communication address to the memory of the blade server according to the unique slot identification of the receiving slot. | 2011-12-01 |
20110296072 | SYSTEM AND METHOD FOR CONTROLLING PCI-E SLOTS OF COMPUTER - A method for controlling peripheral component interconnect express (PCI-E) slots of a computer reads processor configuration information of a PCI-E slot unit from a CMOS chip when the computer boots up, and controls a GPIO interface to output a first control signal to a PCI-E multiplexer according to the processor configuration information, to control a PCI-E slot unit of the computer to connect to one of processors of the computer through the PCI-E multiplexer according to the first control signal. Then the method checks whether the processor connected to the PCI-E slot unit is running normally. In addition, the method controls the GPIO interface to output a second control signal to the PCI-E multiplexer if the processor connected to the PCI-E slot unit is not running normally, to control the PCI-E slot unit to connect to another processor through the PCI-E multiplexer according to the second control signal. | 2011-12-01 |
20110296073 | TIME ALIGNING CIRCUIT AND TIME ALIGNING METHOD FOR ALIGNING DATA TRANSMISSION TIMING OF A PLURALITY OF LANES - A time aligning circuit includes a plurality of buffers, a plurality of delay selectors, a plurality of adjustment symbol generators, and a controller. Each buffer receives an ordered set on a corresponding lane. Each delay selector delays an output of the ordered set of the corresponding buffer. Each adjustment symbol generator outputs an adjustment symbol or the output received from the corresponding delay selector according to an adjustment control signal. When an initial symbol of a designated delay selector is detected but initial symbols of other delay selectors are not received yet, the controller generates the delay control signal to the designated delay selector and generates the adjustment control signal to control a designated adjustment symbol generator corresponding to the designated delay selector in order to output one adjustment symbol until initial signals of all delay selectors are detected. | 2011-12-01 |
20110296074 | MEMORY MAPPED INPUT/OUTPUT BUS ADDRESS RANGE TRANSLATION FOR VIRTUAL BRIDGES - In an embodiment, a south chip comprises a first virtual bridge connected to a shared egress port and a second virtual bridge also connected to the shared egress port. The first virtual bridge receives a first secondary bus identifier, a first subordinate bus identifier, and a first MMIO bus address range from a first north chip. The second virtual bridge receives a second secondary bus identifier, a second subordinate bus identifier, and a second MMIO bus address range from a second north chip. The first virtual bridge stores the first secondary bus identifier, the first subordinate bus identifier, and the first MMIO bus address range. The second virtual bridge stores the second secondary bus identifier, the second subordinate bus identifier, and the second MMIO bus address range. The first north chip and the second north chip are connected to the south chip via respective first and second point-to-point connections. | 2011-12-01 |
20110296075 | MOTHERBOARD USED IN SERVER COMPUTER - An exemplary motherboard includes a substrate, a first CPU socket provided on the substrate for receiving a first CPU, a second CPU socket provided on the substrate for receiving a second CPU, a switching circuit connected to the first CPU and the second CPU, at least one quick path interconnect (QPI) bus connecting the first CPU to the second CPU, a number of first peripheral component interconnect express (PCI-e) interfaces connected to the first CPU via a number of first wires, a number of second PCI-e interfaces connected to the second CPU via a number of second wires, and an activating chip connected to the first CPU and the second CPU via the switching circuit and configured for starting a peripheral device connected to the first PCI-e interfaces or the second PCI-e interfaces. | 2011-12-01 |
20110296076 | HYBRID DATA TRANSMISSION EXCHANGER AND HYBRID DATA TRANSMISSION METHOD - The present invention discloses a hybrid data transmission exchanger and a hybrid data transmission method, whereby hosts can access storage units and share data. The hybrid data transmission exchanger comprises an embedded central processing unit, a virtual bridge/switch unit, an optical fiber network connection unit and an Ethernet connection unit. The embedded central processing unit is connected with the storage units and detects the virtual bridge/switch unit, optical fiber network connection unit and Ethernet connection unit to detect the connection states of a host. A host can directly access the storage units via the optical fiber network connection unit or the Ethernet connection unit. When a host is linked to the exchanger via a PCIe interface, the virtual bridge/switch unit converts an address area and a request identification code of the host to correspond to the embedded central processing unit, whereby the host can access storage units. | 2011-12-01 |
20110296077 | MEMORY HUB ARCHITECTURE HAVING PROGRAMMABLE LANE WIDTHS - A processor-based system includes a processor coupled to a system controller through a processor bus. The system controller is used to couple at least one input device, at least one output device, and at least one data storage device to the processor. Also coupled to the processor bus is a memory hub controller coupled to a memory hub of at least one memory module having a plurality of memory devices coupled to the memory hub. The memory hub is coupled to the memory hub controller through a downstream bus and an upstream bus. The downstream bus has a width of M bits, and the upstream bus has a width of N bits. Although the sum of M and N is fixed, the individual values of M and N can be adjusted during the operation of the processor-based system to adjust the bandwidths of the downstream bus and the upstream bus. | 2011-12-01 |
20110296078 | MEMORY POOL INTERFACE METHODS AND APPARATUSES - Techniques are provided which may be implemented in various methods and/or apparatuses to provide a memory pool interface capability for interfacing with a plurality of shared processes/engines and/or a virtual buffer interface associated therewith. | 2011-12-01 |
20110296079 | System and Method for Emulating Preconditioning of Solid-State Device - Systems and methods for reducing problems and disadvantages associated with traditional approaches to preconditioning solid-state devices are provided. A method may include storing at least one preconditioning status parameter indicative of at least one variable associated with preconditioning emulation of a solid state device (SSD) including a flash memory. The method may also include modifying a mapping table based on the at least one preconditioning status parameter to emulate preconditioning of the SSD, the mapping table including information for translating virtual logical block addresses (LBAs) of the SSD as seen by the processor into physical LBAs of the flash memory. | 2011-12-01 |
20110296080 | Method Of Writing To A NAND Memory Block Based File System With Log Based Buffering - A method of operating a controller for controlling the programming of a NAND memory chip is shown. The NAND memory chip has a plurality of blocks with each block having a certain amount of storage, wherein the amount of storage in each block is the minimum erasable unit. The method comprising storing in a temporary storage a first plurality of groups of data, wherein each of the groups of data is to be stored in a block of the NAND memory chip. Each group of data is indexed to the block with which it is to be stored. Finally, the groups of data associated with the same block are programmed into the same block in the same programming operation. | 2011-12-01 |
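The log-based buffering described here — stage incoming writes in temporary storage, indexed by destination block, and program every page bound for the same block in one operation — can be sketched as below. The block geometry and class names are illustrative assumptions, and a real controller would also flush partially filled groups on demand:

```python
import collections

PAGES_PER_BLOCK = 4   # assumed NAND geometry for illustration


class BlockLogBuffer:
    """Sketch: buffer page writes in RAM, indexed by destination block, and
    program all pages bound for one block in a single operation, so each
    block (the minimum erasable unit) is touched as rarely as possible."""

    def __init__(self, nand):
        self.nand = nand                               # block index -> page list
        self.staging = collections.defaultdict(list)   # block index -> pages

    def write(self, block, page_data):
        self.staging[block].append(page_data)
        if len(self.staging[block]) == PAGES_PER_BLOCK:
            self.flush(block)

    def flush(self, block):
        # one programming operation covers every buffered page for the block
        self.nand[block] = self.staging.pop(block)


nand = {}
buf = BlockLogBuffer(nand)
for i in range(4):
    buf.write(7, f"page-{i}")   # fourth write triggers the block program
```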
20110296081 | DATA ACCESSING METHOD AND RELATED CONTROL SYSTEM - A data access method and a related control system are provided according to embodiments of the present invention, which enhances the read/write performance of a data storage unit by performing pre-accessing operations upon the data storage unit. The data access method includes receiving a plurality of access requests and a plurality of corresponding addresses to access a plurality of data corresponding to the plurality of access requests from a storage unit; and performing a pre-accessing operation upon the storage unit according to the uniformity of the plurality of access requests and the continuity between the plurality of addresses. | 2011-12-01 |
20110296082 | Method for Improving Service Life of Flash - A method for increasing the service life of flash is provided. The method comprises the following steps: reading data {T} from the flash, calculating and obtaining the corresponding old original data Bi according to the mapping relationship, wherein i is a natural number, j=(n−1)˜0, and n is an even number; determining by comparison whether the data Bi to be written is the same as the old original data Bi; if they are the same, it is not necessary to update the data of this byte in the flash; if the value of the data Bi to be written is not the same as the value of the old original data Bi, checking whether it is possible to write into the flash directly; if possible, writing into the flash directly; and if it is impossible to write into the flash directly, performing a block erase operation. | 2011-12-01 |
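The skip/direct-write/erase decision maps onto standard flash semantics: programming can only clear bits (1 → 0), so a direct write is legal only when the new value does nothing but clear bits of the old one. A sketch of the per-byte decision, with the three outcome names chosen for illustration:

```python
def flash_update_action(old_byte, new_byte):
    """Decide how to update one flash byte under the compare-before-write
    scheme. Flash programming can only clear bits (1 -> 0); setting a bit
    back to 1 requires erasing the whole block. Values are bytes (0-255)."""
    if new_byte == old_byte:
        return "skip"                  # identical: no flash wear at all
    if (old_byte & new_byte) == new_byte:
        return "program"               # only clears bits: direct write is legal
    return "erase-then-program"        # needs a bit set 0 -> 1: erase the block
```

Skipping identical bytes and preferring direct writes both avoid erase cycles, which is what extends the flash's service life.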
20110296083 | DATA STORAGE APPARATUS AND METHOD OF CALIBRATING MEMORY - According to one embodiment, a data storage apparatus includes an interface module and a controller. The interface module is configured to control rewritable nonvolatile memories provided for the respective channels. The controller is configured to write calibration data to the nonvolatile memories of any designated channel through the interface module at the same time, in order to perform calibration. | 2011-12-01 |
20110296084 | DATA STORAGE APPARATUS AND METHOD OF WRITING DATA - According to one embodiment, a data storage apparatus includes a flash memory and a controller for controlling the flash memory. The flash memory is configured so that data is written in units of a prescribed size. In order to write data smaller than the prescribed size, the controller first isolates attribute data from each save data item of the prescribed size, which has been read from any flash memory, then stores the attribute data in an attribute data memory, and finally transfers the user data contained in the save data to a save data memory. | 2011-12-01 |
20110296085 | CACHE MEMORY MANAGEMENT IN A FLASH CACHE ARCHITECTURE - Provided are a system, method, and computer program product for managing cache memory to cache data units in at least one storage device. A cache controller is coupled to at least two flash bricks, each comprising a flash memory. Metadata indicates a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units. The metadata is updated to indicate the flash brick having the flash memory on which data units are cached. | 2011-12-01 |
20110296086 | FLASH MEMORY HAVING TEST MODE FUNCTION AND CONNECTION TEST METHOD FOR FLASH MEMORY - A flash memory including a controller, the controller including: a state machine; a state decoder that determines whether a state of the state machine is in a specified mode; a command decoder that determines whether an input signal received through an external pin specifies a write operation for writing a specific value into a specific address; and a test mode setting circuit that sets a test mode while the specified mode is maintained when the state decoder determines that the state of the state machine is in the specified mode and when the command decoder determines that the input signal received through the external pin specifies a write operation for writing a specific value into a specific address. | 2011-12-01 |
20110296087 | METHOD OF MERGING BLOCKS IN A SEMICONDUCTOR MEMORY DEVICE, AND SEMICONDUCTOR MEMORY DEVICE TO PERFORM A METHOD OF MERGING BLOCKS - In a method of merging blocks in a semiconductor memory device according to example embodiments, a plurality of data are written into one or more first blocks using a first program method. One or more merge target blocks that are required to be merged are selected among the one or more first blocks. A merge-performing block for a block merge operation is selected among the one or more first blocks and one or more second blocks. A plurality of merge target data are written from the merge target blocks into the merge-performing block using a second program method that is different from the first program method. | 2011-12-01 |
20110296088 | MEMORY MANAGEMENT STORAGE TO A HOST DEVICE - Systems and methods of memory management storage to a host device are disclosed. A method is performed in a data storage device with a non-volatile memory and a controller operative to manage the non-volatile memory and to generate management data for managing the non-volatile memory. The method includes performing, at a given time, either a controller-originated transfer of management data to a host device or a controller-originated retrieval of management data from the host device. | 2011-12-01 |
20110296089 | PROGRAMMING METHOD AND DEVICE FOR A BUFFER CACHE IN A SOLID-STATE DISK SYSTEM - Provided are a method and apparatus for programming a buffer cache in a Solid State Disk (SSD) system. The buffer cache programming apparatus in the SSD system may include a buffer cache unit to store pages, a memory unit including a plurality of memory chips, and a control unit to select at least one of the pages as a victim page, based on a delay occurring when a page is stored in at least one target memory chip among the plurality of memory chips. | 2011-12-01 |
20110296090 | Combining Memory Operations - Systems and processes may include a memory coupled to a memory controller. Command signals for performing memory access operations may be received. Attributes of the command signals, such as type, time lapsed since receipt, and relatedness to other command signals, may be determined. Command signals may be sequenced in a sequence of execution based on the attributes. Command signals may be executed in the sequence of execution. | 2011-12-01 |
20110296091 | STORAGE SYSTEM WHICH UTILIZES TWO KINDS OF MEMORY DEVICES AS ITS CACHE MEMORY AND METHOD OF CONTROLLING THE STORAGE SYSTEM - Provided is a storage system including one or more disk drives, and one or more cache memories for temporarily storing data read from the disk drives or data to be written to the disk drives, in which: the cache memories include volatile first memories and non-volatile second memories; and the storage system receives a data write request, stores the requested data in the volatile first memories, selects one of the memory areas of the volatile first memories if a total capacity of free memory areas contained in the volatile first memories is less than a predetermined threshold, writes data stored in the selected memory area to the non-volatile second memories, and changes the selected memory area to a free memory area. Accordingly, the capacity of the cache memory can be enlarged using a non-volatile memory device while a speed similar to that of a volatile memory device is maintained. | 2011-12-01 |
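The threshold-driven destaging described above can be sketched as follows. This is an illustrative sketch, not the patented control logic; the area selection policy and the `free_threshold` parameter are assumptions for the example.

```python
# Illustrative sketch (not the patented design): when the number of free
# volatile cache areas falls below a threshold, destage one occupied area
# to the non-volatile cache and mark it free.

def maybe_destage(volatile, nonvolatile, free_threshold):
    """volatile: dict area_id -> data or None (None marks a free area)."""
    free = sum(1 for v in volatile.values() if v is None)
    if free >= free_threshold:
        return None  # enough free areas; nothing to destage
    # Select the first occupied area (a real system would pick by policy).
    area = next(a for a, v in volatile.items() if v is not None)
    nonvolatile[area] = volatile[area]  # write to non-volatile second memory
    volatile[area] = None               # change selected area to a free area
    return area
```

Reads and writes keep hitting the fast volatile tier; destaging only runs when free capacity is low, which is what preserves the volatile-memory-like speed.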
20110296092 | Storing a Driver for Controlling a Memory - Systems and techniques for accessing a memory, such as a NAND or NOR flash memory, involve storing an operating application for a computing device in a first memory and storing a driver containing software operable to control the first memory in a second memory that is independently accessible from the first memory. By storing the driver in a second memory that is independently accessible from the first memory, changes to the driver and/or the first memory can be made without altering the operating application. | 2011-12-01 |
20110296093 | PROGRAM AND SENSE OPERATIONS IN A NON-VOLATILE MEMORY DEVICE - Methods for programming and sensing in a memory device, a data cache, and a memory device are disclosed. In one such method, all of the bit lines of a memory block are programmed or sensed during the same program or sense operation by alternately multiplexing the odd or even page bit lines to the dynamic data cache. The dynamic data cache comprises dual SDC, PDC, DDC1, and DDC2 circuits such that one set of circuits is coupled to the odd page bit lines and the other set of circuits is coupled to the even page bit lines. | 2011-12-01 |
20110296094 | CIRCULAR WEAR LEVELING - A method for flash memory management comprises providing a head pointer configured to define a first location in a flash memory, and a tail pointer configured to define a second location in a flash memory. The head pointer and tail pointer define a payload data area. Payload data is received from a host, and written to the flash memory in the order it was received. The head pointer and tail pointer are updated such that the payload data area moves in a circular manner within the flash memory. | 2011-12-01 |
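The head/tail-pointer idea behind circular wear leveling can be illustrated with a small sketch. This is a minimal model, not the patented method; the page count and reclamation-on-full behavior are assumptions for the example.

```python
# Illustrative sketch (not the patented design): a circular payload area in
# flash tracked by head and tail pointers, so writes sweep evenly across all
# physical pages and wear is spread out.

FLASH_PAGES = 8  # hypothetical flash size, in pages

class CircularFlash:
    def __init__(self, pages=FLASH_PAGES):
        self.mem = [None] * pages
        self.head = 0  # next physical page to write
        self.tail = 0  # oldest valid payload page

    def write(self, data):
        """Write one payload unit at the head, advancing it circularly."""
        if (self.head + 1) % len(self.mem) == self.tail:
            # Payload area is full: reclaim the oldest page first.
            self.tail = (self.tail + 1) % len(self.mem)
        self.mem[self.head] = data
        self.head = (self.head + 1) % len(self.mem)

    def payload(self):
        """Return the valid payload in write order, from tail to head."""
        out, i = [], self.tail
        while i != self.head:
            out.append(self.mem[i])
            i = (i + 1) % len(self.mem)
        return out
```

Because the head only ever moves forward and wraps, every physical page receives the same number of program/erase cycles over time.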
20110296095 | DATA MOVEMENT ENGINE AND MEMORY CONTROL METHODS THEREOF - A data movement engine (DME) for an electronic device is disclosed. The DME has an address generating module and a direct memory access (DMA) module. When the memory is switched to a lower power consumption state, a refresh area of a memory of the electronic device is refreshed and a non-refresh area of the memory is not refreshed. The address generating module obtains at least one source address of data in the non-refresh area, and generates at least one destination address for moving data from the non-refresh area to the refresh area and thereby a source-to-destination mapping table is generated. The DMA module performs a first data movement to move data from the non-refresh area to the refresh area according to the source-to-destination mapping table and independently of a microprocessor of the electronic device. | 2011-12-01 |
20110296096 | Method And Apparatus For Virtualized Microcode Sequencing - In one embodiment, the present invention includes a processor having multiple cores and an uncore. The uncore may include a microcode read only memory to store microcode to be executed in the cores (that themselves do not include such memory). The cores can include a microcode sequencer to sequence a plurality of micro-instructions (uops) of microcode that corresponds to a macro-instruction to be executed in an execution unit of the corresponding core. Other embodiments are described and claimed. | 2011-12-01 |
20110296097 | Mechanisms for Reducing DRAM Power Consumption - Mechanisms are provided for inhibiting precharging of memory cells of a dynamic random access memory (DRAM) structure. The mechanisms receive a command for accessing memory cells of the DRAM structure. The mechanisms further determine, based on the command, if precharging the memory cells following accessing the memory cells is to be inhibited. Moreover, the mechanisms send, in response to the determination indicating that precharging the memory cells is to be inhibited, a command to blocking logic of the DRAM structure to block precharging of the memory cells following accessing the memory cells. | 2011-12-01 |
20110296098 | System and Method for Reducing Power Consumption of Memory - Systems and methods for reducing problems and disadvantages associated with power consumption in memory devices are disclosed. In accordance with one embodiment of the present disclosure, a method for improving performance and reducing power consumption in memory may include tracking whether individual units of a memory system are active or inactive. The method may also include placing inactive individual units of the memory system in a self-refresh mode, such that the inactive individual units self-refresh their contents. The method may further include placing active individual units of the memory system in a command-based refresh mode, such that the active individual units are refreshed in response to a received command to refresh their contents. | 2011-12-01 |
20110296099 | Access device and method for accelerating data storage and retrieval into and from storage device - The present invention provides an access device connected to a computer, a hard disk drive and a memory disk, wherein the hard disk drive has a normal region divided into a plurality of regular sections, the memory disk is divided into a plurality of mirroring sections, and the access device stores an index table comprising a plurality of fields each having a flag. The access device can execute the steps of receiving a read instruction from the computer; reading sequentially the fields corresponding to the read instruction; reading data stored in a mirroring section corresponding to a field thus read and sending the data to the computer when the flag in the field is a first value; and reading data stored in a regular section corresponding to the field and sending the data to the computer when the flag in the field is a second value. | 2011-12-01 |
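The flag-based read routing above can be sketched in a few lines. This is an illustrative model, not the patented device; the flag values and section layout are assumptions for the example.

```python
# Illustrative sketch (not the patented device): a per-section flag in an
# index table decides whether a read is served from the fast memory-disk
# mirroring section or from the hard disk's regular section.

MIRRORED, ON_DISK = 1, 2  # hypothetical first and second flag values

def read_section(index_table, mirror, disk, section):
    """Serve a read from the mirror when flagged, else from the disk."""
    flag = index_table[section]
    if flag == MIRRORED:
        return mirror[section]   # fast path: mirroring section on memory disk
    return disk[section]         # slow path: regular section on hard disk
```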
20110296100 | MIGRATING WRITE INFORMATION IN A WRITE CACHE OF A STORAGE SYSTEM - To migrate data from a first storage system to a second storage system, the second storage system detects a migration of a persistent storage media from the first storage system to the second storage system. In response to detecting the migration of the persistent storage media, write information from a write cache in the first storage system is copied to a write cache in the second storage system, where the write caches in the first and second storage systems were not maintained synchronously before the write information from the write cache in the first storage system is copied to the write cache in the second storage system. | 2011-12-01 |
20110296101 | COMPUTER SYSTEM HAVING AN EXPANSION DEVICE FOR VIRTUALIZING A MIGRATION SOURCE LOGICAL UNIT - A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit. | 2011-12-01 |
20110296102 | STORAGE APPARATUS COMPRISING RAID GROUPS OF RAID 1 SERIES AND CONTROL METHOD OF WRITING TO RAID GROUP OF RAID 1 SERIES - A RAID group of RAID 1 series comprises one or more pairs of first storage devices and second storage devices. A storage apparatus reads data from the entire area of a first storage block group including the write destination of write target data in the first storage device. The storage apparatus, in accordance with the write target data and the staging data (the read data), generates one or more data units, each composed of the write target data (or a copy thereof) and part of the staging data (or a copy thereof), and each of the same size as the first storage block group. The controller writes any of the one or more data units to the first storage block group in the first storage device and, at the same time, writes any of the one or more data units to the second storage block group, which corresponds to the first storage block group and is of the same size, in the second storage device. | 2011-12-01 |
20110296103 | STORAGE APPARATUS, APPARATUS CONTROL METHOD, AND RECORDING MEDIUM FOR STORAGE APPARATUS CONTROL PROGRAM - According to an embodiment, a storage apparatus includes: a plurality of storage media of which a first RAID is composed, over which a logical storage area of the first RAID for storing data is set; a plurality of expansion storage media of which a second RAID is composed; a spare storage medium that is different from any of the storage media or the expansion storage media; and a configuration control unit that, when the spare storage medium is set in the first RAID, sets the logical storage area in the storage media, and when the expansion storage media are set in the storage apparatus, sets the expansion storage media in the second RAID, excludes the spare storage medium that is set in the first RAID, and moves the logical storage area that is set in the first RAID to the second RAID. | 2011-12-01 |
20110296104 | STORAGE SYSTEM - A storage system includes: a distribution storage processing means configured to distribute and store a plurality of fragment data into a plurality of storing means; a data location monitoring means configured to monitor a data location status of the fragment data and store data location information representing the data location status; and a data restoring means configured to, when the storing means is down, regenerate the fragment data having been stored in the down storing means based on the fragment data stored in the other storing means. The storage system also includes: a data location returning means configured to, when the down storing means recovers, return a data location of the fragment data by using the fragment data stored in the storing means having recovered so that the data location status becomes as represented by the data location information stored by the data location monitoring means. | 2011-12-01 |
20110296105 | SYSTEM AND METHOD FOR REALIZING RAID-1 ON A PORTABLE STORAGE MEDIUM - A system of realizing RAID-1 on a portable storage medium includes a Universal Serial Bus device and the portable storage medium. The portable storage medium is divided into a main partition and at least one backup partition according to a RAID-1 mode. The Universal Serial Bus device is coupled to the portable storage medium for receiving a write command and/or a read command transmitted by a host, and writing data to the portable storage medium and/or reading data from the portable storage medium according to the write command and/or the read command. The Universal Serial Bus device does not transmit capacity information of the at least one backup partition to the host. | 2011-12-01 |
20110296106 | SYSTEM FOR REALIZING MULTI-PORT STORAGE MEDIA BASED ON A UASP PROTOCOL OF A USB SPECIFICATION VERSION 3.0 AND METHOD THEREOF - A system for realizing multi-port storage media based on a UASP protocol of a USB specification version 3.0 includes a Universal Serial Bus, at least one storage medium, and a storage device, where the storage device stores a mapping table. The Universal Serial Bus is used for transmitting at least one write data command. Each storage medium is used for replying with a write-ready command to the Universal Serial Bus after receiving a write data command. When the Universal Serial Bus transmits data including a command tag according to the write-ready command, the storage device finds a number mapping to the command tag according to the command tag and the mapping table, and transmits the data to the storage medium mapping to the number. | 2011-12-01 |
20110296107 | Latency-Tolerant 3D On-Chip Memory Organization - A mechanism is provided within a 3D stacked memory organization to spread or stripe cache lines across multiple layers. In an example organization, a 128B cache line takes eight cycles on a 16B-wide bus. Each layer may provide 32B. The first layer uses the first two of the eight transfer cycles to send the first 32B. The next layer sends the next 32B using the next two cycles of the eight transfer cycles, and so forth. The mechanism provides a uniform memory access. | 2011-12-01 |
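The striping arithmetic in the abstract's example can be worked through directly. This sketch only reproduces the numbers the abstract itself gives (128B line, 16B bus, 32B per layer); the function name is an assumption for illustration.

```python
# Illustrative arithmetic from the abstract's example: a 128B cache line on a
# 16B-wide bus takes 8 transfer cycles; with each of 4 layers supplying 32B,
# layer k drives transfer cycles 2k and 2k+1.

LINE_BYTES, BUS_BYTES, LAYER_BYTES = 128, 16, 32

def layer_for_cycle(cycle):
    """Which stacked layer drives a given transfer cycle (0-based)."""
    cycles_per_layer = LAYER_BYTES // BUS_BYTES  # 32B / 16B = 2 cycles
    return cycle // cycles_per_layer

total_cycles = LINE_BYTES // BUS_BYTES  # 8 cycles for the whole line
schedule = [layer_for_cycle(c) for c in range(total_cycles)]
```

Spreading consecutive cycle pairs across layers this way is what makes the access latency uniform: no single layer has to deliver the whole line.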
20110296108 | Methods to Estimate Existing Cache Contents for Better Query Optimization - A method for estimating contents of a cache determines table descriptors referenced by a query, and scans each page header stored in the cache for the table descriptor. If the table descriptor matches any of the referenced table descriptors, a page count value corresponding to the matching referenced table descriptor is increased. Alternatively, a housekeeper thread periodically performs the scan and stores the page count values in a central lookup table accessible by threads during a query run. Alternatively, each thread independently maintains a hash table with page count entries corresponding to table descriptors for each table in the database system. A thread increases or decreases the page count value when copying or removing pages from the cache. A page count value for each referenced table descriptor is determined from a sum of the values in the hash tables. A master thread performs bookkeeping and prevents hash table overflows. | 2011-12-01 |
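The page-header scan described above can be sketched as a simple tally. This is an illustrative model, not the patented optimizer; the `table_descriptor` header field name is a hypothetical placeholder.

```python
# Illustrative sketch (not the patented method): scan cached page headers and
# count how many pages each table referenced by a query currently holds in
# the cache, as a cheap input to query costing.

def estimate_cached_pages(cache_pages, referenced_tables):
    """Count cached pages per referenced table descriptor."""
    counts = {t: 0 for t in referenced_tables}
    for page in cache_pages:
        desc = page["table_descriptor"]  # hypothetical header field
        if desc in counts:
            counts[desc] += 1
    return counts
```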
20110296109 | CACHE CONTROL FOR ADAPTIVE STREAM PLAYER - An adaptive stream player that has control over whether a retrieved stream is cached in a local stream cache. For at least some of the stream portions requested by the player, before going out over the network, a cache control component first determines whether or not an acceptable version of the stream portion is present in a stream cache. If there is an acceptable version in the stream cache, that version is provided rather than having to request the stream portion over the network. For stream portions received over the network, the cache control component decides whether or not to cache that stream portion. Thus, the cache control component allows the adaptive stream player to work in offline scenarios and also allows the adaptive stream player to have rewind, pause, and other controls that use cached content. | 2011-12-01 |
20110296110 | Critical Word Forwarding with Adaptive Prediction - In an embodiment, a system includes a memory controller, processors and corresponding caches. The system may include sources of uncertainty that prevent the precise scheduling of data forwarding for a load operation that misses in the processor caches. The memory controller may provide an early response that indicates that data should be provided in a subsequent clock cycle. An interface unit between the memory controller and the caches/processors may predict a delay from a currently-received early response to the corresponding data, and may speculatively prepare to forward the data assuming that it will be available as predicted. The interface unit may monitor the delays between the early response and the forwarding of the data, or at least the portion of the delay that may vary. Based on the measured delays, the interface unit may modify the subsequently predicted delays. | 2011-12-01 |
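The adaptive delay prediction described above can be modeled with a small sketch. This is an illustrative sketch, not the patented interface unit; the windowed-average update rule and window size are assumptions for the example.

```python
# Illustrative sketch (not the patented mechanism): predict the delay between
# an early response and its data by tracking recently measured delays and
# moving the prediction toward their average.

from collections import deque

class DelayPredictor:
    def __init__(self, initial=4, window=4):
        self.prediction = initial
        self.history = deque(maxlen=window)  # recently measured delays

    def observe(self, measured_delay):
        """Record a measured early-response-to-data delay and re-predict."""
        self.history.append(measured_delay)
        # Round the windowed average to the nearest whole clock cycle.
        self.prediction = round(sum(self.history) / len(self.history))

    def predict(self):
        return self.prediction
```

The interface unit can then speculatively prepare forwarding `predict()` cycles after each early response, and the predictor self-corrects as measured delays drift.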
20110296111 | INTERFACE FOR ACCESSING AND MANIPULATING DATA - A system and method for an interface for accessing and manipulating data to allow access to data on a storage module on a network based system. The data is presented as a virtual disk for the local system through a hardware interface that emulates a disk interface. The system and method incorporates features to improve the retrieval and storage performance of frequently access data such as partition information, operating system files, or file system related information through the use of local caching and difference calculations. This system and method may be used to replace some, or all, of the fixed storage in a device. The system and method may provide both online and offline access to the data. | 2011-12-01 |
20110296112 | Reducing Energy Consumption of Set Associative Caches by Reducing Checked Ways of the Set Association - Mechanisms for accessing a set associative cache of a data processing system are provided. A set of cache lines, in the set associative cache, associated with an address of a request are identified. Based on a determined mode of operation for the set, the following may be performed: determining if a cache hit occurs in a preferred cache line without accessing other cache lines in the set of cache lines; retrieving data from the preferred cache line without accessing the other cache lines in the set of cache lines, if it is determined that there is a cache hit in the preferred cache line; and accessing each of the other cache lines in the set of cache lines to determine if there is a cache hit in any of these other cache lines only in response to there being a cache miss in the preferred cache line(s). | 2011-12-01 |
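The preferred-way access pattern above can be sketched as follows. This is an illustrative model, not the patented mechanism; the set layout and return values are assumptions for the example.

```python
# Illustrative sketch (not the patented mechanism): in a power-saving mode,
# probe only the preferred way of a set first, and touch the remaining ways
# only on a miss, trading a possible extra step for fewer way accesses.

def set_assoc_lookup(cache_set, tag, preferred_way=0):
    """Return (data, ways_checked); check the preferred way before the rest."""
    ways_checked = 1
    line = cache_set[preferred_way]
    if line is not None and line[0] == tag:
        return line[1], ways_checked          # hit in the preferred way
    for way, line in enumerate(cache_set):
        if way == preferred_way:
            continue
        ways_checked += 1
        if line is not None and line[0] == tag:
            return line[1], ways_checked      # hit in a non-preferred way
    return None, ways_checked                 # miss in the whole set
```

When preferred-way hits dominate, most lookups touch one way instead of all of them, which is where the energy saving comes from.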
20110296113 | RECOVERY IN SHARED MEMORY ENVIRONMENT - A method, system, and computer usable program product for recovery in a shared memory environment are provided in the illustrative embodiments. A core in a multi-core processor is designated as a user level core (ULC), which executes an instruction to modify a memory while executing an application. A second core is designated as an operating system core (OSC), which manages checkpointing of several segments of the shared memory. A set of flags is accessible to a memory controller to manage a shared memory. A flag in the set of flags corresponds to one segment in the segments of the shared memory. A message or instruction for modification of a segment is received. A cache line tracking determination is made whether a cache line used for the modification has already been used for a similar modification. If not, a part of the segment is checkpointed. The modification proceeds after checkpointing. | 2011-12-01 |
20110296114 | ATOMIC EXECUTION OVER ACCESSES TO MULTIPLE MEMORY LOCATIONS IN A MULTIPROCESSOR SYSTEM - A method and central processing unit supporting atomic access of shared data by a sequence of memory access operations. A processor status flag is reset. A processor executes, subsequent to the setting of the processor status flag, a sequence of program instructions with instructions accessing a subset of shared data contained within its local cache. During execution of the sequence of program instructions and in response to a modification by another processor of the subset of shared data, the processor status flag is set. Subsequent to the executing the sequence of program instructions and based upon the state of the processor status flag, either a first program processing or a second program processing is executed. In some examples the first program processing includes storing results data into the local cache and the second program processing includes discarding the results data. | 2011-12-01 |
20110296115 | Assigning Memory to On-Chip Coherence Domains - A mechanism is provided for assigning memory to on-chip cache coherence domains. The mechanism assigns caches within a processing unit to coherence domains. The mechanism then assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives a cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If a cache line is found in a cache within the coherence domain, the cache line is returned to the originating cache by the cache containing the cache line either directly or through the memory controller. If a cache line is not found within the coherence domain, the memory controller accesses the memory to retrieve the cache line. | 2011-12-01 |
20110296116 | System and Method for Aggregating Core-Cache Clusters in Order to Produce Multi-Core Processors - According to one embodiment of the invention, a processor comprises a memory, a plurality of core-cache clusters and a scalability agent unit that operates as an interface between an on-die interconnect and multiple core-cache clusters. The scalability agent operates in accordance with a protocol to ensure that the plurality of core-cache clusters appear as a single caching agent. | 2011-12-01 |
20110296117 | STORAGE SUBSYSTEM AND ITS CONTROL METHOD - Provided is a storage subsystem capable of maintaining the reliability of I/O processing to a host apparatus, even if there is an unauthorized access from a processor core to a switch circuit, by applying a multi-core system to a processor. A multi-core processor is applied to a second logical address space that is different from a first logical address space to be commonly applied to multiple controlled units such as a host interface to be accessed by the processor. The switch circuit determines the processor core that issued an access based on an address belonging to a second address space, and maps an address contained in an access from the processor core to an address of a first address space. | 2011-12-01 |
20110296118 | Dynamic Row-Width Memory - A mechanism is provided for dynamic row-width memory. The memory adapts row width to usage based on memory controller and memory management system software control. The mechanism uses an organization and control of memory array access logic. The memory controller may receive an explicit command using existing column address lines or using a command line into the memory controller. In a first option, the memory controller receives a row width and disables the unused columns and turns off the unused sense amps. In a second option, the memory controller receives a row width and adjusts row count, keeping the number of active cells constant. In a third option, the memory controller receives a row width and adjusts a number of banks. | 2011-12-01 |
20110296119 | Stored Data Reading Apparatus, Method and Computer Apparatus - The present invention proposes a stored data reading device, comprising a first storage module for storing first data, the first storage module having a first reading speed; a second storage module for storing second data, the second data being the same as at least a part of the first data, the second storage module having a second reading speed, and the second reading speed being greater than the first reading speed; and a request acquiring module for acquiring a reading request for third data, the third data being the same as at least a part of the first data. With the stored data reading device of the invention, the data access speed can be accelerated while the production cost can be significantly lowered. | 2011-12-01 |
20110296120 | VIRTUAL BUFFER INTERFACE METHODS AND APPARATUSES FOR USE IN WIRELESS DEVICES - Techniques are provided which may be implemented in various methods and/or apparatuses to provide a virtual buffer interface capability between a plurality of processes/engines and a memory pool. | 2011-12-01 |
20110296121 | DATA WRITING METHOD AND COMPUTER SYSTEM - A data writing method for a storage device includes utilizing the storage device to transmit identification information according to a data writing request from a processing unit, utilizing the processing unit to transmit data writing information corresponding to the identification information according to the identification information, and utilizing the storage device to perform a data writing process according to the data writing information. | 2011-12-01 |
20110296122 | METHOD AND SYSTEM FOR BINARY CACHE CLEANUP - A system and method for clearing data from a cache in a storage device is disclosed. The method may include analyzing the cache for the least recently fragmented logical group, and evicting the entries from the least recently fragmented logical group. Or, the method may also include analyzing compaction history and selecting entries for eviction based on the analysis of the compaction history. The method may also include scheduling of different eviction mechanisms during various operations of the storage device. The system may include a cache storage, a main storage and a controller configured to evict entries associated with a least recently fragmented logical group, configured to evict entries based on analysis of compaction history, or configured to schedule different eviction mechanisms during various operations of the storage device. | 2011-12-01 |
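The least-recently-fragmented eviction policy above can be sketched as follows. This is an illustrative model, not the patented cleanup; the timestamp bookkeeping and data shapes are assumptions for the example.

```python
# Illustrative sketch (not the patented method): pick the logical group whose
# most recent fragmentation event is oldest and evict its binary-cache
# entries, on the premise that it is least likely to fragment again soon.

def evict_least_recently_fragmented(cache_entries, last_fragmented):
    """Drop entries of the group with the oldest last-fragmentation time."""
    victim = min(last_fragmented, key=last_fragmented.get)
    survivors = [e for e in cache_entries if e["group"] != victim]
    return victim, survivors
```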
20110296123 | MEMORY ACCESS TABLE SAVING AND RESTORING SYSTEM AND METHODS - A system includes a first memory configured to store a first lookup table (LUT) with first metadata. A second memory is configured to store a second LUT with second metadata, wherein the first metadata includes a first mapping between logical addresses and physical addresses. The second metadata includes a second mapping between the logical addresses and the physical addresses. A control module is configured to update the first metadata. The control module is configured to update segments of the second metadata based on the first metadata at respective predetermined times. Each of the segments refers to a predetermined number of entries of the second LUT. | 2011-12-01 |
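The segment-by-segment update of the second LUT can be sketched with a small helper. This is an illustrative model, not the patented system; the segment size and cycling order are assumptions for the example.

```python
# Illustrative sketch (not the patented method): at each predetermined time,
# copy one fixed-size segment of the up-to-date first LUT into the second
# LUT, cycling through segments so the backup converges without one large
# write.

SEGMENT_ENTRIES = 4  # hypothetical segment size, in LUT entries

def sync_next_segment(first_lut, second_lut, next_segment):
    """Copy one segment of entries from first_lut into second_lut."""
    start = next_segment * SEGMENT_ENTRIES
    end = min(start + SEGMENT_ENTRIES, len(first_lut))
    second_lut[start:end] = first_lut[start:end]
    segments = (len(first_lut) + SEGMENT_ENTRIES - 1) // SEGMENT_ENTRIES
    return (next_segment + 1) % segments  # segment to sync next time
```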
20110296124 | PARTITIONING MEMORY FOR ACCESS BY MULTIPLE REQUESTERS - An apparatus comprising a plurality of buffers and a channel router circuit. The buffers may be each configured to generate a control signal in response to a respective one of a plurality of channel requests received from a respective one of a plurality of clients. The channel router circuit may be configured to connect one or more of the buffers to one of a plurality of memory resources. The channel router circuit may be configured to return a data signal to a respective one of the buffers in an order requested by each of the buffers. | 2011-12-01 |
20110296125 | LOW LATENCY HANDOFF TO OR FROM A LONG TERM EVOLUTION NETWORK - A server device in a long term evolution (LTE) network may store, in a memory, context information, associated with a prior communication session between the LTE network and a user device, where the context information permits a communication session to be established within a time period, the time period being less than another time period to initially establish the communication session or to establish the communication session without the context information. The server device may further receive a registration request associated with the user device; determine whether the memory stores the context information; perform, within the time period, an abbreviated registration operation to establish the communication session with the user device, using the context information from the memory, when the memory stores the context information; and perform, within the other time period, a registration operation to establish the communication session when the memory does not store the context information. | 2011-12-01 |
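The context-check branching above can be sketched abstractly. This is an illustrative sketch, not the patented server; the relative time costs and the context-caching step are assumptions for the example.

```python
# Illustrative sketch (not the patented server): if stored context from a
# prior session exists, run the abbreviated registration; otherwise fall
# back to the full procedure, which takes longer, and cache context for
# next time.

FULL_COST, ABBREVIATED_COST = 10, 2  # hypothetical relative time units

def register(context_store, device_id):
    """Return (procedure, cost) chosen from the stored-context check."""
    if device_id in context_store:
        return "abbreviated", ABBREVIATED_COST
    context_store[device_id] = {"session": "new"}  # store context for reuse
    return "full", FULL_COST
```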
20110296126 | STORAGE SYSTEM - A storage system includes a plurality of storing means and a data processing means configured to store data into the plurality of storing means. The data processing means includes: a storage destination setting means configured to set a journal storing means configured to store a journal showing a data processing status of the storage system from among the plurality of storing means, and set the plurality of storing means other than the set journal storing means as fragment storing means configured to distributedly store a plurality of fragment data forming storage target data, respectively; and a distribution storage controlling means configured to store the journal into the storing means set as the journal storing means by the storage destination setting means, and distribute and store the plurality of fragment data into the plurality of storing means set as the fragment storing means, respectively. | 2011-12-01 |
20110296127 | MULTIPLE CASCADED BACKUP PROCESS - Provided are a method, system, and computer program product for handling a backup process. An instruction is received initiating a new backup from a source volume to a target volume using one of a plurality of backup processes. A determination is made as to whether there is a cascade of volumes using the backup process including the source volume of the new backup. The cascade includes a cascade source volume and at least one cascade target volume, and a write to a storage location in one of the cascade volumes causes a copying of the storage location to be written in the cascade source volume to each of the cascade target volumes in the cascade according to a cascade order in which the at least one cascade target volume and the cascade source volume are linked in the cascade. The cascade, using the backup process of the new backup already including the source volume of the new backup, is modified to include the target volume of the new backup in response to determining that there is the existing cascade. A new cascade using the backup process of the new backup including the source volume and the target volume of the new backup is created in response to determining that there is not the existing cascade. | 2011-12-01 |
20110296128 | RETAINING DISK IDENTIFICATION IN OPERATING SYSTEM ENVIRONMENT AFTER A HARDWARE-DRIVEN SNAPSHOT RESTORE FROM A SNAPSHOT-LUN CREATED USING SOFTWARE-DRIVEN SNAPSHOT ARCHITECTURE - A program, method and system are disclosed for managing a snapshot backup restore through a hardware snapshot interface, i.e. a hardware-driven snapshot restore, based upon a software-driven snapshot backup, e.g. created with software such as volume shadow copy service (VSS). When conventional hardware-driven snapshot restores are performed using a snapshot backup that was created using the VSS-based software such as copy services, data access issues can arise, due to the operating system assigning of a new disk signature to the disk being restored. This problem can be overcome by temporarily storing the original disk signature and then overwriting the new, incorrect disk signature after initializing the restore. This can ensure that the operating system identifies the source LUNs (and accordingly, the drive letter and mount points of the disk) using the same disk signature as before the restore. | 2011-12-01 |
20110296129 | DATA TRANSFER DEVICE AND METHOD OF CONTROLLING THE SAME - A data transfer device confirms completion of writes to a memory when transferring data to that memory via a bus that does not send back a response indicating completion of a write. The device includes an inter-memory data transfer control unit that performs data transfers between the memories. When the inter-memory data transfer control unit detects switching of the write destination memory from a first memory to a second memory, it confirms that writing into the first memory is completed, using a procedure different from writing into the memory. When a data transfer with a designated transfer length is completed, the control unit likewise confirms write completion for the write destination memory at the end of the transfer, using the same separate procedure. The inter-memory data transfer control unit then notifies the processor of completion of the inter-memory data transfer based on the confirmation of write completion. | 2011-12-01 |
20110296130 | STORAGE SYSTEM AND METHOD OF TAKING OVER LOGICAL UNIT IN STORAGE SYSTEM - A storage apparatus includes a drive unit in which a logical unit is formed, and a controller unit for accessing the logical unit by controlling the drive unit according to an access request sent from a host apparatus. The storage apparatus issues a logical unit takeover request to the other storage apparatuses, allocates a logical unit of another storage apparatus that will accept the transfer of the logical volume to its own logical unit according to a takeover approval sent from other storage apparatuses in response to the takeover request, and thereafter migrates data of the own logical unit to a logical unit of another storage apparatus. Subsequently, the path is switched so that the access request from the host apparatus is given to one of the other storage apparatuses. | 2011-12-01 |
20110296131 | NONVOLATILE MEMORY SYSTEM AND THE OPERATION METHOD THEREOF - A memory controller includes a microprocessor, a queue configured to store a plurality of first commands provided by the microprocessor, a queue management block configured to interpret and control said plurality of first commands, and a command generator configured to provide a plurality of second commands under control of the queue management block. The queue management block may simultaneously perform the plurality of second commands so as to simultaneously access a plurality of non-volatile memory units. | 2011-12-01 |
20110296132 | GARBAGE COLLECTION IN AN IN-MEMORY REPLICATION SYSTEM - Garbage collection in a first node server of an in-memory replication system includes: in response to a garbage collection trigger in the first node server, determining whether identification information for a data object eligible for garbage collection in the first node server has been received by the first node server from at least a second node server in the in-memory replication system; and if the identification information has been received from at least the second node server, performing garbage collection on the data object with the first node server. | 2011-12-01 |
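The gating condition in 20110296132 can be sketched as a node that defers collection until its replica peers have also reported an object as eligible. This is a minimal sketch under one reading of the abstract (requiring reports from a configurable number of peers); the class and method names are hypothetical.

```python
# Minimal sketch of the GC gating in 20110296132: collect an object only
# after replica peers have reported its identifier as GC-eligible.
# Names and the peer-count policy are illustrative assumptions.

class ReplicaGC:
    def __init__(self, required_peers):
        self.required_peers = required_peers
        self.reported = {}   # object id -> set of peers that reported it

    def receive_report(self, peer, obj_id):
        """Record that a peer node identified obj_id as eligible."""
        self.reported.setdefault(obj_id, set()).add(peer)

    def can_collect(self, obj_id):
        # Collect only once enough peers have identified the object.
        return len(self.reported.get(obj_id, set())) >= self.required_peers
```

The point of the design is that a locally unreachable object may still be referenced on a peer, so local reachability alone is not a safe trigger in a replicated in-memory store.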
20110296133 | APPARATUS, SYSTEM, AND METHOD FOR CONDITIONAL AND ATOMIC STORAGE OPERATIONS - An apparatus, system, and method are disclosed for implementing conditional storage operations. Storage clients access and allocate portions of an address space of a non-volatile storage device. A conditional storage request is provided, which causes data to be stored to the non-volatile storage device on the condition that the address space of the device can satisfy the entire request. If only a portion of the request can be satisfied, the conditional storage request may be deferred or fail. An atomic storage request is provided, which may comprise one or more storage operations. The atomic storage request succeeds if all of the one or more storage operations are complete successfully. If one or more of the storage operations fails, the atomic storage request is invalidated, which may comprise deallocating logical identifiers of the request and/or invalidating data on the non-volatile storage device pertaining to the request. | 2011-12-01 |
20110296134 | ADAPTIVE ADDRESS TRANSLATION METHOD FOR HIGH BANDWIDTH AND LOW IR CONCURRENTLY AND MEMORY CONTROLLER USING THE SAME - An adaptive memory address translation method includes the following steps. Multiple request instructions are received. A memory address corresponding to each request instruction includes a bank address. The memory addresses corresponding to the request instructions are translated such that, for at least some pairs of adjacent request instructions, the corresponding bank addresses are different. A numerical translation is used to translate the memory addresses corresponding to the request instructions such that the memory addresses corresponding to any two adjacent request instructions differ in fewer bits. | 2011-12-01 |
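One common way to make adjacent requests land in different banks is to XOR higher address bits into the bank-select bits. The patent does not disclose its exact mapping here; the function below is an assumed, simplified illustration of the general bank-interleaving idea with 4 banks selected by the low two bits.

```python
# Assumed sketch of bank-interleaving address translation (the concrete
# mapping in 20110296134 is not specified in the abstract). 4 banks,
# bank selected by the two low-order address bits.

BANK_BITS = 2  # 2 bits -> 4 banks (an assumption for the sketch)

def translate(addr):
    bank = addr & 0b11                 # low two bits select the bank
    row = addr >> BANK_BITS            # remaining bits: row within a bank
    new_bank = bank ^ (row & 0b11)     # fold row bits in so that addresses
                                       # in successive rows hit different banks
    return (row << BANK_BITS) | new_bank
```

With this mapping, a stream of same-bank accesses in consecutive rows is spread across all four banks, which is the bandwidth benefit the abstract describes.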
20110296135 | SYSTEM AND METHOD FOR FREEING MEMORY - There is provided a computer-executed method of freeing memory. One exemplary method comprises receiving a message from a user process. The message may specify a virtual address for a memory segment. The virtual address may be mapped to the memory segment. The memory segment may comprise a physical page. The method may further comprise identifying the physical page based on the virtual address. Additionally, the method may comprise freeing the physical page without unmapping the memory segment. | 2011-12-01 |
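The key distinction in 20110296135 is that the physical page is released while the virtual mapping survives, loosely analogous to Linux's `madvise(MADV_DONTNEED)`. The toy page table below is a conceptual model only; the class and methods are invented for illustration.

```python
# Conceptual sketch of 20110296135: free the physical page backing a
# virtual address without unmapping the memory segment. PageTable is a
# hypothetical model, not an OS interface.

class PageTable:
    def __init__(self):
        self.mapping = {}   # virtual address -> physical page (or None)

    def map(self, vaddr, page):
        self.mapping[vaddr] = page

    def free_physical(self, vaddr):
        """Release the physical page but keep the virtual mapping entry,
        so no unmap (and no later remap) is needed."""
        page = self.mapping.get(vaddr)
        self.mapping[vaddr] = None      # mapping survives; page is released
        return page

    def is_mapped(self, vaddr):
        return vaddr in self.mapping
```

Skipping the unmap avoids the cost of tearing down and later re-establishing the virtual address range, which is the advantage the abstract implies.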
20110296136 | Locking Entries Into Translation Lookaside Buffers - Two translation lookaside buffers may be provided for simpler operation in some embodiments. A hardware-managed lookaside buffer may handle traditional operations, while a software-managed lookaside buffer may be dedicated to locking particular translations. As a result, the software's job is simplified, since it need manage only the software-managed translation lookaside buffer used for locking translations. | 2011-12-01 |
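The split can be modeled as two lookup structures where locked entries survive hardware-side evictions. This sketch is a behavioral model under assumptions (lookup order, eviction on demand); the class and method names are illustrative, not from the patent.

```python
# Behavioral sketch of the two-TLB split in 20110296136: a hardware-managed
# TLB whose entries may be evicted at any time, and a software-managed TLB
# holding locked translations that persist. Names are hypothetical.

class SplitTLB:
    def __init__(self):
        self.hw = {}         # hardware-managed: subject to eviction
        self.sw_locked = {}  # software-managed: entries stay until unlocked

    def insert(self, vpage, ppage):
        self.hw[vpage] = ppage

    def lock(self, vpage, ppage):
        self.sw_locked[vpage] = ppage

    def evict_all_hw(self):
        self.hw.clear()      # e.g. a flush; locked entries are untouched

    def lookup(self, vpage):
        if vpage in self.sw_locked:      # locked entries win
            return self.sw_locked[vpage]
        return self.hw.get(vpage)        # None models a TLB miss
```

The payoff described in the abstract is visible here: software never has to reason about the hardware TLB's replacement behavior to keep a translation resident.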
20110296137 | Performing A Deterministic Reduction Operation In A Parallel Computer - A parallel computer that includes compute nodes having computer processors and a CAU (Collectives Acceleration Unit) that couples processors to one another for data communications. In embodiments of the present invention, a deterministic reduction operation includes: organizing processors of the parallel computer and a CAU into a branched tree topology, where the CAU is the root of the branched tree topology and the processors are children of the root CAU; establishing a receive buffer that includes receive elements associated with the processors and configured to store each associated processor's contribution data; receiving, in any order from the processors, each processor's contribution data; tracking receipt of each processor's contribution data; and reducing the contribution data in a predefined order, only after receipt of contribution data from all processors in the branched tree topology. | 2011-12-01 |
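The determinism in 20110296137 comes from separating arrival order from reduction order: contributions land in per-child slots as they arrive, and the reduction walks the slots in a fixed order once all are filled. The sketch below illustrates that scheme with hypothetical names; it is not the CAU's actual interface.

```python
# Sketch of the root-CAU behavior in 20110296137: buffer contributions in
# per-child receive elements, reduce in a predefined order only when all
# have arrived. Class and method names are illustrative.

class RootCAU:
    def __init__(self, children):
        self.children = list(children)        # the predefined reduction order
        self.slots = {c: None for c in children}

    def receive(self, child, value):
        self.slots[child] = value             # arrival order does not matter

    def reduce(self, op):
        if any(v is None for v in self.slots.values()):
            return None                       # still waiting on contributions
        result = self.slots[self.children[0]]
        for c in self.children[1:]:           # always the same order, so a
            result = op(result, self.slots[c])  # non-associative-in-practice op
        return result                           # (e.g. float add) is repeatable
```

Fixing the order matters because floating-point addition is not associative: summing contributions in arrival order can give run-to-run different results, while a predefined order makes the reduction bit-reproducible.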
20110296138 | FAST REMOTE COMMUNICATION AND COMPUTATION BETWEEN PROCESSORS - A method, system, and computer usable program product for fast remote communication and computation between processors are provided in the illustrative embodiments. A direct core to core communication unit (DCC) is configured to operate with a first processor, the first processor being a remote processor. A memory associated with the DCC receives a set of bytes, the set of bytes being sent from a second processor. An operation specified in the set of bytes is executed at the remote processor such that the operation is invoked without causing a software thread to execute. | 2011-12-01 |
20110296139 | Performing A Deterministic Reduction Operation In A Parallel Computer - Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data. | 2011-12-01 |
20110296140 | RISC processor register expansion method - A RISC processor register expansion method is disclosed to include the steps of: a) designing an instruction format having multiple register fields to have the total bits consumed by the register fields to be designed into two bits combinations respectively corresponding to two register banks, wherein the first bits combination has 8 bits of which the value of the 1 | 2011-12-01 |
20110296141 | PERSISTENT FINITE STATE MACHINES FROM PRE-COMPILED MACHINE CODE INSTRUCTION SEQUENCES - A processor, integrated with re-configurable logic and memory elements, is disclosed which is to be used as part of a shared memory, multiprocessor computer system. The invention utilizes the re-configurable elements to construct persistent finite state machines based on information decoded by the invention from sequences of CISC or RISC type processor machine instructions residing in memory. The invention implements the same algorithm represented by the sequence of encoded instructions, but executes the algorithm consuming significantly fewer clock cycles than would be consumed by the processor originally targeted to execute the sequence of encoded instructions. | 2011-12-01 |
20110296142 | PROCESSOR AND METHOD PROVIDING INSTRUCTION SUPPORT FOR INSTRUCTIONS THAT UTILIZE MULTIPLE REGISTER WINDOWS - A processor including instruction support for large-operand instructions that use multiple register windows may issue, for execution, programmer-selectable instructions from a defined instruction set architecture (ISA). The processor may also include an instruction execution unit that, during operation, receives instructions for execution from the instruction fetch unit and executes a large-operand instruction defined within the ISA, where execution of the large-operand instruction is dependent upon a plurality of registers arranged within a plurality of register windows. The processor may further include control circuitry (which may be included within the fetch unit, the execution unit, or elsewhere within the processor) that determines whether one or more of the register windows depended upon by the large-operand instruction are not present. In response to determining that one or more of these register windows are not present, the control circuitry causes them to be restored. | 2011-12-01 |
20110296143 | PIPELINE PROCESSOR AND AN EQUAL MODEL CONSERVATION METHOD - A pipeline processor that meets a latency restriction on an equal model is provided. The pipeline processor includes a pipeline processing unit that processes an instruction at a plurality of stages, and an equal model compensator that stores the results of processing some or all of the instructions located in the pipeline processing unit and writes those results to a register file based on the latency of each instruction. | 2011-12-01 |
20110296144 | REDUCING DATA HAZARDS IN PIPELINED PROCESSORS TO PROVIDE HIGH PROCESSOR UTILIZATION - A pipelined computer processor is presented that reduces data hazards such that high processor utilization is attained. The processor restructures a set of instructions to operate concurrently on multiple pieces of data in multiple passes. One subset of instructions operates on one piece of data while different subsets of instructions operate concurrently on different pieces of data. A validity pipeline tracks the priming and draining of the pipeline processor to ensure that only valid data is written to registers or memory. Pass-dependent addressing is provided to correctly address registers and memory for different pieces of data. | 2011-12-01 |
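The validity pipeline in 20110296144 can be pictured as a shift register of valid bits that travels alongside the data: during priming the bits entering are valid but nothing valid has reached writeback yet, and during draining invalid bubbles follow the last real data out. The sketch below is an assumed minimal model; the class name and interface are invented for illustration.

```python
# Minimal sketch of the validity pipeline in 20110296144: one valid bit per
# pipeline stage, shifted each cycle; writeback occurs only when the
# retiring stage's bit is set. Names are hypothetical.

class ValidityPipeline:
    def __init__(self, stages):
        self.valid = [False] * stages   # one valid bit per pipeline stage

    def advance(self, new_item_valid):
        """Shift the pipeline one step. Returns whether the retiring stage
        held valid data, i.e. whether a register/memory write may occur."""
        retiring = self.valid[-1]
        self.valid = [new_item_valid] + self.valid[:-1]
        return retiring
```

During the first `stages` cycles (priming) nothing retires as valid, so no garbage is written to registers or memory; once real data reaches the end, every subsequent valid bit gates exactly one writeback.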