Class / Patent application number | Description | Number of patent applications / Date published |
710056000 | Buffer space allocation or deallocation | 61 |
20080228967 | Software driver interconnect framework - A computer program product comprising a computer useable medium including control logic stored therein to transfer data from a data source to a data sink is described herein. The computer program product comprises control logic means for negotiating with a device driver of the data sink to transfer data from the data source to the data sink, control logic means for allocating a first buffer to the data source, control logic means for locking the first buffer so as to enable only the data source to transfer data to the first buffer, control logic means for enabling the data source to transfer a predetermined amount of data to the first buffer, control logic means for signaling availability of data in the first buffer to the device driver of the data sink, control logic means for unlocking the first buffer, control logic means for granting access to the data sink to read from the first buffer, control logic means for allocating a second buffer to the data source, control logic means for locking the second buffer so as to enable only the data source to transfer data to the second buffer and control logic means for enabling the data source to write data to the second buffer while enabling the data sink to read data from the first buffer, thereby pipelining data transfer from the data source to the data sink. | 09-18-2008 |
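The alternating two-buffer scheme in 20080228967 can be sketched in a few lines: the source fills one locked buffer while the sink drains the other, then the roles swap. This is a minimal, single-threaded sketch; the names (`DoubleBuffer`, `CHUNK`) and the sequential swap loop are illustrative assumptions, not details from the filing.

```python
# Sketch of two-buffer pipelining: while the sink drains one buffer,
# the source fills the other. CHUNK is an assumed "predetermined amount".

CHUNK = 4  # predetermined amount of data per transfer (assumed)

class DoubleBuffer:
    def __init__(self):
        self.buffers = [bytearray(), bytearray()]
        self.write_idx = 0          # buffer currently "locked" for the source

    def source_write(self, data):
        """Source transfers one chunk into the locked buffer."""
        self.buffers[self.write_idx].extend(data[:CHUNK])

    def swap(self):
        """Unlock the filled buffer for the sink; lock the other for the source."""
        read_idx = self.write_idx
        self.write_idx = 1 - self.write_idx
        self.buffers[self.write_idx].clear()   # fresh buffer for the source
        return read_idx                        # sink may now read this buffer

def pipeline(data):
    """Move `data` from source to sink, alternating between the two buffers."""
    db, sink = DoubleBuffer(), bytearray()
    for off in range(0, len(data), CHUNK):
        db.source_write(data[off:off + CHUNK])
        sink.extend(db.buffers[db.swap()])     # sink reads while source moves on
    return bytes(sink)
```

In a real driver the two halves would run concurrently with locking and signaling; the loop above only shows the alternation of ownership.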
20080235413 | Apparatus and Method to Maximize Buffer Utilization in an I/O Controller - An apparatus and method for maximizing buffer utilization in an I/O controller using credit management logic contained within the I/O controller. The credit management logic keeps track of the number of memory credits available in the I/O controller and communicates to a chipset connected to the I/O controller the amount of available memory credits. The chipset may then send an amount of data to the I/O controller equivalent to or less than the communicated available amount of memory credits to reduce the occurrence of a “retry” event. The amount of available memory credits is determined by comparing the available memory in each buffer within the I/O controller and designating that the “available” amount of memory for the I/O controller is an amount equivalent to the amount of memory contained in the buffer with the least amount of available memory. This “available” amount of I/O controller memory may then be converted into memory credits and communicated to the chipset. | 09-25-2008 |
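The credit rule in 20080235413 reduces to taking the minimum free space across the controller's buffers and advertising that as credits. A small sketch, assuming a credit-to-bytes conversion factor (`bytes_per_credit`) that the abstract leaves unspecified:

```python
def available_credits(buffer_free_bytes, bytes_per_credit=64):
    """Advertise credits covering only the buffer with the *least* free
    space, so any buffer can absorb what the chipset sends."""
    least = min(buffer_free_bytes)
    return least // bytes_per_credit

def chipset_send_size(pending_bytes, credits, bytes_per_credit=64):
    """The chipset sends no more data than the advertised credits cover,
    avoiding a "retry" event."""
    return min(pending_bytes, credits * bytes_per_credit)
```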
20080244118 | METHOD AND APPARATUS FOR SHARING BUFFERS - A computer implemented method, apparatus, and computer usable program product are provided for managing a plurality of buffers in a data processing system. A requester component requests a free buffer of a certain size. A buffer agent determines whether a set of free buffers, whose combined size is equal to or greater than the requested buffer size, is available from a set of donor components. If the set of free buffers is available, the buffer agent combines the free buffers into a combined free buffer of size equal to or greater than the requested size, and removes the free buffers from a free buffer list of a corresponding donor component. The buffer agent then allocates the combined free buffer to the requester component. | 10-02-2008 |
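The buffer-agent behavior in 20080244118 can be sketched as a greedy pass over donor free lists: take free buffers until the combined size covers the request, removing each from its donor's list. The donor mapping and greedy order are assumptions for illustration.

```python
def combine_free_buffers(donors, requested):
    """Combine free buffers from donor components to satisfy `requested`
    bytes, or return None if the set of free buffers is unavailable.
    `donors` maps donor name -> list of free-buffer sizes (illustrative)."""
    total_free = sum(sum(sizes) for sizes in donors.values())
    if total_free < requested:
        return None                     # cannot satisfy the requester
    taken, got = [], 0
    for name, sizes in donors.items():
        while sizes and got < requested:
            size = sizes.pop()          # remove from the donor's free list
            taken.append((name, size))
            got += size
        if got >= requested:
            break
    return taken                        # combined free buffer >= requested
```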
20080288674 | Storage system and storage device - A storage system includes: a plurality of data input and output parts; a data storing part that stores the data inputted and outputted through the plurality of data input and output parts; a range information storing part that stores range information; a first control part that controls the data storing part to read and write the data in accordance with the stored range information, and that rewrites the stored range information to predetermined range information in a case where a prescribed signal is inputted from the data input and output part; and a plurality of second control parts that are provided correspondingly to the plurality of data input and output parts to input and output the data, and that input the prescribed signal to the data input and output parts in a prescribed case. | 11-20-2008 |
20080288675 | HOST DEVICE, INFORMATION PROCESSOR, ELECTRONIC APPARATUS, PROGRAM, AND METHOD FOR CONTROLLING READING - A host device for controlling a storage device controller to access a storage device includes: a command issue controller controlling an issue of a command for allowing the storage device controller to access the storage device; a response detector for detecting a reception of a response from the storage device corresponding to the command; and a buffer data controller controlling reading and writing of a buffer of the storage device controller. The buffer stores one of reading data and writing data of the storage device. The buffer data controller controls one of the reading and the writing at least once in a predetermined data size unit corresponding to the command after the command issue controller issues the command. | 11-20-2008 |
20080301335 | Digital Display System With Media Processor And Wireless Audio - The present invention relates to a media processing system that comprises a bus for communicating digital signals thereon with a media processor connected to the bus, for processing signals supplied thereon. The system further has a display device connected to the bus for displaying digitized images thereon, received from the bus. The system has an audio transmitter connected to the bus, for wirelessly transmitting audio digital signals from the bus. The system further has a connectable memory for connecting to the bus and for supplying signals representing digitized images and audio digital signals to the bus. Finally the system has a receiver to receive encoded digitized images or audio digital signals for supplying the received signals to the bus for storage in the memory. | 12-04-2008 |
20080301336 | DYNAMIC MEMORY ALLOCATION BETWEEN INBOUND AND OUTBOUND BUFFERS IN A PROTOCOL HANDLER - An apparatus and method for dynamically allocating memory between inbound and outbound paths of a networking protocol handler so as to optimize the ratio of a given amount of memory between the inbound and outbound buffers is presented. Dedicated but sharable buffer memory is provided for both the inbound and outbound processors of a computer network. Buffer memory is managed so as to dynamically alter what portion of memory is used to receive and store incoming data packets or to transmit outgoing data packets. Use of the present invention reduces throttling of data rate transmissions and other memory access bottlenecks associated with conventional fixed-memory network systems. | 12-04-2008 |
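One way to picture the dynamic inbound/outbound split in 20080301336 is a rebalancing function driven by observed backlog. The proportional policy and the minimum-share floor below are assumptions; the abstract only states that the split is altered dynamically.

```python
def rebalance(total, inbound_backlog, outbound_backlog, min_share=0.125):
    """Split `total` buffer bytes between the inbound and outbound paths
    in proportion to recent backlog, keeping a floor for each side so
    neither path is starved. Returns (inbound_bytes, outbound_bytes)."""
    demand = inbound_backlog + outbound_backlog
    if demand == 0:
        inbound = total // 2                       # no pressure: even split
    else:
        inbound = int(total * inbound_backlog / demand)
    floor = int(total * min_share)                 # guaranteed minimum per path
    inbound = max(floor, min(total - floor, inbound))
    return inbound, total - inbound
```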
20080301337 | Memory Systems For Automated Computing Machinery - Memory systems are disclosed that include a memory controller; an outbound link, the memory controller connected to the outbound link, the outbound link comprising a number of conductive pathways that conduct memory signals from the memory controller to memory buffer devices in a first memory layer; and at least two memory buffer devices in a first memory layer, each memory buffer device in the first memory layer connected to the outbound link to receive memory signals from the memory controller. | 12-04-2008 |
20090019197 | INTERFACE CONTROLLER, METHOD FOR CONTROLLING THE INTERFACE CONTROLLER, AND A COMPUTER SYSTEM - An interface controller is connected to a host apparatus and a memory, and receives multiple responses to one request. The interface controller includes a packet generation unit which adds header data to a request issued by the host apparatus to generate a request packet and outputs the request packet to the memory, a receive buffer which stores a response packet with respect to the request packet, a protocol generation unit which generates a response according to a prescribed protocol based on the response packet stored in the receive buffer, and outputs the response to the host apparatus, a maximum division number calculation unit which calculates a maximum division number of the request issued by the host apparatus, and a request issue control unit which gives a request issue permission to the host apparatus based on the maximum division number calculated by the maximum division number calculation unit, a maximum division number of processed requests and a maximum division number of processed responses. | 01-15-2009 |

20090031059 | Method, System, and Computer Program Product for Dynamically Selecting Software Buffers for Aggregation According to Current System Characteristics - A method, system, and computer program product in a data processing system are disclosed for dynamically selecting software buffers for aggregation in order to optimize system performance. Data to be transferred to a device is received. The data is stored in a chain of software buffers. Current characteristics of the system are determined. Software buffers to be combined are then dynamically selected. This selection is made according to the characteristics of the system in order to maximize performance of the system. | 01-29-2009 |
20090083457 | Memory switching control apparatus using open serial interface, operating method thereof, and data storage device therefor - Provided is a memory switching control apparatus using an open serial interfacing scheme capable of enhancing flexibility, reliability, availability, performance in a data communication processes between a memory and a processing unit and an operating method thereof. The memory switching control apparatus includes: one or more processor interfacing units which perform interfacing with one or more processing units; one or more memory interfacing units which have open-serial-interfacing-scheme memory interfacing ports to interface with data storage devices connected to the memory interfacing ports in a serial interfacing scheme; and a plurality of arbitrating units which are provided corresponding to the memory interfacing units to independently arbitrate usage rights of the processor interfacing units to the memory interfacing units. | 03-26-2009 |
20090113086 | Method for providing a buffer status report in a mobile communication network - A method for providing a buffer status report in a mobile communication network is implemented between a base station and a user equipment. When data arrives at buffers of the user equipment and the priority of a logical channel for the data is higher than those of other logical channels for existing data in the buffers, a short buffer status report associated with the buffer of a logical channel group corresponding to the arrival data is triggered. The user equipment uses resources allocated by the base station to fill all data of the buffer of the logical channel group into a Protocol Data Unit. If all data of the buffer of the logical channel group corresponding to the arrival data can be completely filled in the Protocol Data Unit, the short buffer status report is canceled. Otherwise, the user equipment transmits the short buffer status report. | 04-30-2009 |
20090132736 | MEMORY BUFFERING SYSTEM THAT IMPROVES READ/WRITE PERFORMANCE AND PROVIDES LOW LATENCY FOR MOBILE SYSTEMS - A memory buffering system is disclosed that arbitrates bus ownership through an arbitration scheme for memory elements in a chain architecture. A unified host memory controller arbitrates bus ownership for transfer to a unified memory buffer and other buffers within the chain architecture. The system is used within a communication system with a bus in chain architectures and parallel architectures. | 05-21-2009 |
20090187681 | BUFFER CONTROLLER AND MANAGEMENT METHOD THEREOF - The invention provides a new linked structure for a buffer controller and management method thereof. The allocation and release actions of buffer memory can be more effectively processed when the buffer controller processes data packets. The linked structure enables the link node of the first buffer register to point to the last buffer register. The link node of the last buffer register points to the second buffer register. Each of the link nodes of the rest buffers points to the next buffer register in order until the last buffer register. This structure can effectively release the buffer registers in the used linked list to a free list. | 07-23-2009 |
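The point of a linked free-list like the one in 20090187681 is that a whole chain of used buffers can be returned to the free list cheaply. The sketch below shows the cheap-bulk-release idea with a plain singly linked free list (the specific first/last/second link layout of the filing is not reproduced); `BufferPool` and its methods are illustrative names.

```python
class BufferPool:
    """Buffers tracked by index in a singly linked free list; releasing a
    used chain splices it onto the free list in O(1)."""
    def __init__(self, n):
        self.next = list(range(1, n)) + [None]   # node i -> i+1 initially
        self.free_head = 0

    def alloc(self):
        """Pop one buffer index off the free list."""
        idx = self.free_head
        if idx is None:
            raise MemoryError("no free buffers")
        self.free_head = self.next[idx]
        return idx

    def release_chain(self, head, tail):
        """Return a whole used chain head..tail (caller keeps the chain's
        internal next-links intact) to the free list in one splice."""
        self.next[tail] = self.free_head
        self.free_head = head
```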
20090210587 | METHOD AND SYSTEM FOR IMPLEMENTING STORE BUFFER ALLOCATION - A method and system for implementing store buffer allocation for variable length store data operations are provided. The method includes receiving a store address request and at least one store data request and stepping through data operations for each of the store data requests and an address range for the store data requests to determine alignment and data steering information used to select a storage buffer destination for the data in the store data requests. The method further includes determining availability of the storage buffer by maintaining a reservation list for each storage buffer, maintaining a count of the number of available entries for each storage buffer, updating the reservation list to reflect a reservation acceptance for designated available entries, and clearing entries upon completion of the processing of store data operations. The method also includes reserving the selected storage buffer when the number of available entries meets or exceeds the number of entries required for the data. | 08-20-2009 |
20090234989 | SYSTEM AND METHOD FOR REDUCING POWER CONSUMPTION OF MEMORY IN AN I/O CONTROLLER - A memory system for an I/O controller which includes a memory with multiple memory blocks, a supply voltage control circuit providing power to each memory block, and control logic. Each memory block retains stored information with reduced power consumption when receiving a reduced voltage level. The control logic allocates buffers in the memory and controls the supply voltage control circuit to provide the full voltage level to at least one memory block of at least one allocated buffer and to provide the reduced voltage level to remaining memory blocks. Each memory block includes one or more buffers. In various embodiments the control logic fully powers each memory block of a buffer or less than all of the memory blocks. A linked buffer structure may be used to reduce the memory blocks of an allocated buffer receiving full power, such as only one memory block in the buffer. | 09-17-2009 |
20090240851 | USB CONTROLLER AND BUFFER MEMORY CONTROL METHOD - A USB controller according to one aspect of the present invention is a USB controller incorporated in a USB device, the USB controller including a RAM that stores data transferred through a USB port or a CPU bus, and a register that holds a setting for determining to which one of a region for host used for a host function and a region for peripheral used for a peripheral function a part of the RAM is allocated. | 09-24-2009 |
20090248922 | MEMORY BUFFER ALLOCATION DEVICE AND COMPUTER READABLE MEDIUM HAVING STORED THEREON MEMORY BUFFER ALLOCATION PROGRAM - A memory buffer allocation device for allocating a memory buffer in a virtual computer system in which a plurality of virtual operating systems operate in time-sharing on one CPU having the memory buffer, includes a memory buffer division unit which divides the memory buffer into a number (n) of areas and reserves a division unit number (m) of areas out of the n areas as a dedicated memory buffer and the other areas, except for the number m of areas, as a shared memory buffer. The device also includes a memory buffer allocation unit which allocates each area of the dedicated memory buffer to a number m of domains and each area of the shared memory buffer to the other n-m domains except for the number m of domains, wherein the domains are of the virtual operating systems that are operating in the virtual computer system. | 10-01-2009 |
20090300234 | BUFFER CONTROL METHOD AND STORAGE APPARATUS - A buffer control method stores data to be written on a recording medium or data read from the recording medium to consecutive addresses within a storage region of a buffer, based on a sequential access command which instructs a continuous access to consecutive logical addresses of the recording medium, and variably sets a size of the storage region of the buffer depending on an unused region or a used region in the storage region of the buffer. The buffer is used as a buffer ring. | 12-03-2009 |
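Using "the buffer as a buffer ring," as 20090300234 does for sequential-access commands, means writes and reads chase each other around a fixed region with wraparound. A minimal fixed-capacity sketch (the API and the error behavior on overflow are assumptions; the filing additionally resizes the region, which is not modeled here):

```python
class RingBuffer:
    """Byte ring for sequential-access buffering: consecutive writes land
    at consecutive (wrapping) offsets, reads drain from the head."""
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.cap, self.head, self.used = capacity, 0, 0

    def write(self, data):
        if len(data) > self.cap - self.used:
            raise BufferError("ring full")
        for byte in data:
            self.buf[(self.head + self.used) % self.cap] = byte
            self.used += 1

    def read(self, n):
        n = min(n, self.used)
        out = bytes(self.buf[(self.head + i) % self.cap] for i in range(n))
        self.head = (self.head + n) % self.cap   # wrap past the end
        self.used -= n
        return out
```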
20090300235 | Method for the Allocation of Memory in a Buffer Memory - A method for allocation of a buffer memory with three buffers of a module having a processing unit and a bus connection is provided. The module sends or receives data via the bus connection and uses the processing unit to generate data for transmission via the bus connection and process data received via the bus connection. The bus connection and the processing unit function as a producer or consumer in a communication relationship established via the buffer memory. Each buffer assumes one of four statuses—“input area local”, “local”, “input area external” and “external”. Either the bus connection or the processing unit attempt to reserve one of the three buffers by a strategy: when one of the three buffers is already allocated, this buffer is used. Otherwise a buffer with the status “input area external” or “input area local” is used and the status “external” or “local” is assigned. | 12-03-2009 |
20100017548 | BUFFER MANAGEMENT DEVICE, BUFFER MANAGEMENT METHOD, AND INTEGRATED CIRCUIT FOR BUFFER MANAGEMENT - A buffer management apparatus that sequentially receives L (L>1) types of data and transmits the L types of data to an external device, including: a reception unit that receives data; M (M… | 01-21-2010 |
20100042762 | Efficient Load/Store Buffer Memory Management in a Computer Communications Network Data Transmission Switch - A technique is disclosed for observing the data movement pattern in a peripheral device attached to a computer communications network data transmission switch, in order to arrive at a (statistical) determination of whether the peripheral device is being used as a “load intensive” device or as a “store intensive” device (or as neither type) over a defined time period. This determination is used to dynamically adjust (and re-allocate) the “outbound” and “inbound” buffer memory sizes assigned to a switch transmission port attached to the peripheral device, in cases where the device is operating in either “load intensive” or “store intensive” mode. The invention is applicable for use with all types of communications network switches (i.e. “Bridges”, “Hubs”, “Routers” etc.). | 02-18-2010 |
20100121995 | SYSTEM AND METHOD FOR SUPPORTING TCP OUT-OF-ORDER RECEIVE DATA USING GENERIC BUFFER - A method and system for handling received out-of-order network data using generic buffers for non-posting TCP applications is disclosed. When incoming out-of-order data is received and there is no application buffer posted, a TCP data placement may notify a TCP reassembler to terminate a current generic buffer, allocate a new current generic buffer, and DMA the incoming data into the new current generic buffer. The TCP data placement may notify the TCP reassembler the starting TCP sequence number and the length of the new current generic buffer. Moreover, the TCP data placement may add entries into a TCP out-of-order table when the incoming data creates a new disjoint area. The TCP data placement may adjust an existing disjoint area to reflect any updates. When a TCP application allocates or posts a buffer, then the TCP reassembler may copy data from a linked list of generic buffers into posted buffers. | 05-13-2010 |
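The out-of-order table described in 20100121995 tracks disjoint areas of the TCP sequence space and adjusts an existing area when new data joins it. A sketch of that bookkeeping, keeping areas as sorted `(start_seq, bytes)` pairs and merging contiguous or overlapping segments (the representation is an assumption; the filing stores the data in generic buffers rather than in the table itself):

```python
def place_segment(areas, seq, data):
    """Insert an out-of-order segment into a list of disjoint
    (start_seq, payload) areas. Creates a new disjoint area, or merges
    with / extends existing ones. Returns a new sorted, disjoint list."""
    areas = sorted(areas + [(seq, data)])
    merged = [areas[0]]
    for start, payload in areas[1:]:
        last_start, last_payload = merged[-1]
        last_end = last_start + len(last_payload)
        if start <= last_end:                    # contiguous or overlapping
            overlap = last_end - start
            merged[-1] = (last_start, last_payload + payload[overlap:])
        else:
            merged.append((start, payload))      # new disjoint area
    return merged
```

When the application finally posts a buffer, walking the (now mostly merged) areas in order corresponds to the reassembler's copy from the generic-buffer list into posted buffers.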
20100138571 | METHOD AND SYSTEM FOR A SHARING BUFFER - A system, method, and computer readable article of manufacture for sharing buffer management. The system includes: a predictor module to predict at runtime a transaction data size of a transaction according to history information of the transaction; and a resource management module to allocate sharing buffer resources for the transaction according to the predicted transaction data size in response to beginning of the transaction, to record an actual sharing buffer size occupied by the transaction in response to the successful commitment of the transaction, and to update the history information of the transaction. | 06-03-2010 |
20100161855 | LIGHTWEIGHT INPUT/OUTPUT PROTOCOL - A method and system for offloading I/O processing from a first computer to a second computer, using RDMA-capable network interconnects, are disclosed. The method and system include a client on the first computer communicating over an RDMA connection to a server on the second computer by way of a lightweight input/output (LWIO) protocol. The protocol generally comprises a network discovery phase followed by an I/O processing phase. During the discovery phase, the client and server determine a minimal list of shared RDMA-capable providers. During the I/O processing phase, the client posts I/O requests for offloading to the second machine over a mutually-authenticated RDMA channel. The I/O model is asymmetric, with read operations being implemented using RDMA and write operations being implemented using normal sends. Read and write requests may be completed in polling mode and in interrupt mode. Buffers are managed by way of a credit mechanism. | 06-24-2010 |
20100169519 | Reconfigurable buffer manager - In some embodiments a reconfigurable buffer manager manages an on-chip memory, and dynamically allocates and/or de-allocates portions of the on-chip memory to and/or from a plurality of functional on-chip blocks. Other embodiments are described and claimed. | 07-01-2010 |
20100191878 | METHOD AND APPARATUS FOR ACCOMODATING A RECEIVER BUFFER TO PREVENT DATA OVERFLOW - Methods and apparatuses for preventing overflow at a receiver buffer are provided. Data packets of varying size are received into a receiver buffer and quantified by a byte counter to determine an amount of data in the receiver buffer at a given time. A data capacity status for the receiver buffer is then generated as a function of the amount of data in the receiver buffer. | 07-29-2010 |
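The byte-counter scheme in 20100191878 boils down to mapping the counted fill level to a capacity status that the sender can act on. A tiny sketch; the two thresholds and the status labels are assumptions, since the abstract only says the status is "a function of the amount of data":

```python
def capacity_status(bytes_in_buffer, capacity, hi=0.75, lo=0.25):
    """Generate a data capacity status from the byte counter's total."""
    fill = bytes_in_buffer / capacity
    if fill >= hi:
        return "almost-full"     # sender should throttle to avoid overflow
    if fill <= lo:
        return "almost-empty"
    return "ok"
```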
20100228898 | APPARATUS, SYSTEM, AND METHOD FOR REAL TIME JOB-SPECIFIC BUFFER ALLOCATION - An apparatus, system, and method are disclosed for dynamically allocating buffers during the execution of a job. A plan module sets a buffer allocation plan for the job using data access history that contains information about the number and nature of data access events in past executions of the same job. A buffer module allocates buffers during the execution of the job, and alters the buffer allocation to improve performance for direct access events for those portions of the job that the buffer allocation plan indicates have historically included predominantly direct access events. The buffer module alters the buffer allocation to improve performance for sequential access events for those portions of the job that the buffer allocation plan indicates have historically included predominantly sequential access events. A history module then collects data access information about the current execution and adds that information to the data access history. | 09-09-2010 |
20110093628 | EFFICIENT LOW-LATENCY BUFFER - An efficient low latency buffer, and method of operation, is described. The efficient low latency buffer may be used as a bi-directional buffer in an audio playback device to buffer both output and input data. The audio buffer includes two modes of operation. The first mode replaces large segments of data at a first rate, and the second mode replaces smaller segments of data at a second rate, higher than the first rate. The first mode may make efficient use of the buffer for the output data, while the second mode may provide low latency for the buffering of the input data. | 04-21-2011 |
20110238872 | Disk Drive System On Chip With Integrated Buffer Memory and Support for Host Memory Access - A circuit for a storage device that communicates with a host device comprises a first high speed interface. A storage controller communicates with the high speed interface. A buffer communicates with the storage controller. The storage device generates storage buffer data during operation. The storage controller is adapted to selectively store the storage buffer data in at least one of the buffer and/or in the host device via the high speed interface. A bridge chip for enterprise applications couples the circuit to an enterprise device. | 09-29-2011 |
20110276732 | PROGRAMMABLE QUEUE STRUCTURES FOR MULTIPROCESSORS - A command is received from a first agent via a first predetermined memory-mapped register, the first agent being one of multiple agents representing software processes, each being executed by one of processor cores of a network processor in a network element. A first queue associated with the command is identified based on the first predetermined memory-mapped register. A pointer is atomically read from a first hardware-based queue state register associated with the first queue. Data is atomically accessed at a memory location of the memory based on the pointer. The pointer stored in the first hardware-based queue state register is atomically updated, including incrementing the pointer of the first hardware-based queue state register, reading a queue size of the queue from a first hardware-based configuration register associated with the first queue, and wrapping around the pointer if the pointer reaches an end of the first queue based on the queue size. | 11-10-2011 |
20110296063 | BUFFER MANAGER AND METHODS FOR MANAGING MEMORY - Some of the embodiments of the present disclosure provide a method comprising managing a plurality of buffer addresses in a system-on-chip (SOC); and if a number of available buffer addresses in the SOC falls below a low threshold value, obtaining one or more buffer addresses from a memory, which is external to the SOC, to the SOC. Other embodiments are also described and claimed. | 12-01-2011 |
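The low-threshold refill in 20110296063 can be sketched as an on-chip pool of buffer addresses backed by external memory. The threshold, batch size, and list-based stand-in for off-chip memory below are all assumptions for illustration:

```python
class AddressPool:
    """On-chip pool of buffer addresses that pulls more addresses from
    external memory whenever the on-chip count runs low."""
    def __init__(self, external, low=2, batch=4):
        self.external = external          # stand-in for off-chip free addresses
        self.onchip = []
        self.low, self.batch = low, batch
        self._refill()

    def _refill(self):
        """Obtain up to `batch` addresses from external memory."""
        take = min(self.batch, len(self.external))
        self.onchip.extend(self.external[:take])
        del self.external[:take]

    def get(self):
        """Hand out one address, refilling first if below the low threshold."""
        if len(self.onchip) <= self.low:
            self._refill()
        return self.onchip.pop(0) if self.onchip else None
```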
20110307636 | Method and apparatus for dynamically allocating queue depth by initiator - A method for maximizing I/O requests to a target port is provided. The method includes a storage controller obtaining an initiator allowed queue depth, receiving an I/O request and a current sequence identifier from an initiator logged into the target port, and determining if the initiator allowed queue depth is equal to a first queue depth corresponding to the initiator. If the initiator allowed queue depth is equal to the first queue depth then returning a queue full indication and a maximum sequence identifier equal to the current sequence identifier to the initiator. If the initiator allowed queue depth is not equal to the first queue depth then placing the I/O request on a queue, incrementing the first queue depth, and adjusting the maximum sequence identifier. Adjusting the maximum sequence identifier includes adding the current sequence identifier to the initiator allowed queue depth and subtracting the first queue depth. | 12-15-2011 |
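The sequence-identifier arithmetic in 20110307636 is stated directly in the abstract: the maximum sequence identifier is the current identifier plus the allowed queue depth minus the depth already in use. As a one-line sketch (function name is illustrative):

```python
def max_sequence_id(current_seq, allowed_depth, queued):
    """Maximum sequence identifier returned to the initiator:
    current + allowed queue depth - requests already queued."""
    return current_seq + allowed_depth - queued
```

Note the queue-full case falls out of the same formula: when `queued == allowed_depth`, the maximum equals the current sequence identifier, matching the abstract's queue-full response.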
20120047297 | EFFICIENT LOW-LATENCY BUFFER - An efficient low latency buffer, and method of operation, is described. The efficient low latency buffer may be used as a bi-directional memory buffer in an audio playback device to buffer both output and input data. An application processor coupled to the bi-directional memory buffer, in response to an indication to write data to the bi-directional memory buffer, reads a defined size of input data from the bi-directional memory buffer. The input data read from the bi-directional memory buffer is replaced with output data of the defined size. In response to a mode-change signal, the defined size of the data read from and written to the bi-directional memory buffer is changed. The buffer may allow the application processor to enter a low-powered sleep mode more frequently. | 02-23-2012 |
20120089754 | HYBRID SERIAL PERIPHERAL INTERFACE DATA TRANSMISSION ARCHITECTURE AND METHOD OF THE SAME - A hybrid serial peripheral interface (SPI) data transmission architecture adapted in a network device for connecting a host and a network is provided. The architecture comprises a RX buffer and RX SPI for maintaining a data receiving process, a TX buffer and TX SPI for maintaining a data transmission process, a configuration and status register, and a hybrid SPI processing module. The hybrid SPI processing module makes the RX SPI perform the data transmission process as well when the RX SPI is idle while the data transmission process is in progress, and makes the TX SPI perform the data receiving process as well when the TX SPI is idle while the data receiving process is in progress. A hybrid SPI data transmission method is disclosed herein as well. | 04-12-2012 |
20120110223 | LOCK-LESS BUFFER MANAGEMENT SCHEME FOR TELECOMMUNICATION NETWORK APPLICATIONS - A buffer management mechanism in a multi-core processor for use on a modem in a telecommunications network is described herein. The buffer management mechanism includes a buffer module that provides buffer management services for one or more Layer 2 applications, wherein the buffer module at least provides a user space application interface to application software running in user space. The buffer management mechanism also includes a buffer manager that manages a plurality of separate pools of tokens, wherein the tokens comprise pointers to memory areas in external memory. In addition, the buffer management mechanism includes a custom driver that manages Data Path Acceleration Architecture (DPAA) resources including buffer pools and frame queues to be used for user plane data distribution. | 05-03-2012 |
20120131241 | SIGNAL PROCESSING SYSTEM, INTEGRATED CIRCUIT COMPRISING BUFFER CONTROL LOGIC AND METHOD THEREFOR - A signal processing system comprising buffer control logic arranged to allocate a plurality of buffers for the storage of information fetched from at least one memory element. Upon receipt of fetched information to be buffered, the buffer control logic is arranged to categorise the information to be buffered according to at least one of: a first category associated with sequential flow and a second category associated with change of flow, and to prioritise respective buffers from the plurality of buffers storing information relating to the first category associated with sequential flow ahead of buffers storing information relating to the second category associated with change of flow when allocating a buffer for the storage of the fetched information to be buffered. | 05-24-2012 |
20120221751 | EXTENSIONS FOR USB DRIVER INTERFACE FUNCTIONS - In embodiments of extensions for USB driver interface functions, a set of USB driver interfaces are exposed by a USB core driver stack, and the USB driver interfaces include USB driver interface functions to interface with USB client function drivers that correspond to client USB devices. A composite device driver registers itself and requests a function handle for each function of a client USB device. The USB client function drivers are enumerated and the function handles generated for each function of the client USB device. A check first protocol is enforced that directs a USB client function driver to check for availability of a USB driver interface function before interfacing with the USB core driver stack via the USB driver interfaces. A contract version identifier is received that indicates a set of operation rules by which a USB client function driver interfaces with the USB core driver stack. | 08-30-2012 |
20120233363 | QUALITY OF SERVICE MANAGEMENT - A method for measuring latencies caused by processing performed within a common resource is provided. A current latency value representing a time of residency of an IO request in a queue prior to receipt of acknowledgment from the common resource of completion of the IO request is received from a device comprising the queue, which maintains entries for IO requests that have been dispatched to and are pending at the common resource. An average latency value is calculated based in part on the current latency value. An adjusted capacity size for the queue is calculated based in part on the average latency value and the queue's capacity is set to the adjusted capacity size. IO requests are held in a buffer if the queue's capacity is full to reduce the effect of an amount of work transmitted to the common resource on current latency values provided by the device. | 09-13-2012 |
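The control loop in 20120233363 has two parts: fold each reported latency into an average, then size the queue from that average. A sketch under assumed policies (an exponentially weighted moving average, and a proportional rule pulling the average toward a target; the abstract specifies neither):

```python
def ewma(avg, current, alpha=0.25):
    """Average latency updated from the device's current latency value."""
    return (1 - alpha) * avg + alpha * current

def adjust_queue_capacity(capacity, avg_latency, target_latency,
                          min_cap=4, max_cap=256):
    """Shrink the queue when average latency exceeds the target, grow it
    when latency is comfortably below, within fixed bounds."""
    scaled = int(capacity * target_latency / avg_latency)
    return max(min_cap, min(max_cap, scaled))
```

Requests that arrive while the (now smaller) queue is full would be held in the buffer, as the abstract describes, so that they do not inflate the latency values the device reports.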
20120233364 | DYNAMIC RESOURCE ALLOCATION FOR DISTRIBUTED CLUSTER-STORAGE NETWORK - An apparatus, method and computer program in a distributed cluster storage network comprise storage control nodes to write data to storage on request from a host; a forwarding layer at a first node to forward data to a second node; a buffer controller at each node to allocate buffers for data to be written; and a communication link between the buffer controller and the forwarding layer at each node to communicate a constrained or unconstrained status indicator of the buffer resource to the forwarding layer. A mode selector selects a constrained mode of operation requiring allocation of buffer resource at the second node and communication of the allocation before the first node can allocate buffers and forward data, or an unconstrained mode of operation granting use of a predetermined resource credit provided by the second node to the first node and permitting forwarding of a write request with data. | 09-13-2012 |
20120254485 | MULTI-THREAD FILE INPUT AND OUTPUT SYSTEM AND MULTI-THREAD FILE INPUT AND OUTPUT PROGRAM - A configuration is provided that, in the transfer of files (input/output) between computers on a network, divides a file into a plurality of pieces and transmits them even when the size of the file is large. A multi-thread file input/output system includes a first module performing processing of reading data from an input file, dividing the data into a plurality of pieces, and transmitting the plurality of pieces to a network by multi-thread processing in a transmitter computer; and a second module performing processing of receiving the plurality of pieces from the network and integrating and writing them to an output file. | 10-04-2012 |
20120331189 | SYSTEM AND METHOD FOR PERFORMING ISOCHRONOUS DATA BUFFERING - A controller for a host system includes an interface and a buffer. The interface receives a plurality of data units transmitted isochronously from a connected device, and the buffer stores the data units and then outputs a data block upon the occurrence of at least one condition. Each data unit stores data of a first size and the data block includes data of a second size greater than the first size. The connected device may be a Universal Serial Bus (USB) device or another type of device. | 12-27-2012 |
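The unit-to-block accumulation described in the abstract above can be sketched as follows; the class name, parameter names, and the "buffer full" trigger condition are illustrative assumptions, not taken from the application:

```python
# Sketch: small isochronously received data units accumulate in a buffer,
# and a larger block is emitted once enough data is present.
class IsochronousBuffer:
    def __init__(self, unit_size, block_size):
        assert block_size > unit_size   # block is of a second, greater size
        self.unit_size = unit_size
        self.block_size = block_size
        self._data = bytearray()

    def receive_unit(self, unit: bytes):
        """Store one received data unit; return a full block if one is ready."""
        assert len(unit) == self.unit_size
        self._data.extend(unit)
        if len(self._data) >= self.block_size:   # the triggering condition
            block = bytes(self._data[:self.block_size])
            del self._data[:self.block_size]
            return block
        return None

buf = IsochronousBuffer(unit_size=4, block_size=16)
blocks = [b for b in (buf.receive_unit(b"abcd") for _ in range(5)) if b]
```

With five 4-byte units, one 16-byte block is emitted and the remaining 4 bytes stay buffered for the next block.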
20130013825 | USB DEVICE CONTROLLER AND POWER CONTROL METHOD THEREOF - A device controller, a peripheral device, and a power control method that enable buffers to be used efficiently and that enable power control to be performed on the basis of data amounts accumulated in the buffers are provided. A novel device controller includes an input buffer for accumulating data output from a host device, an output buffer for accumulating data output to the host device, a data communication section for transferring data between the input and output buffers and the host device, and a data buffer control section for modifying buffer allocation amounts to the input and output buffers on the basis of the data amount accumulated in at least one of the input and output buffers. The data buffer control section causes the data communication section to transition from a normal power consumption mode to a low power consumption mode when the data amount reaches a predetermined value. | 01-10-2013 |
20130151740 | AUTONOMIC ASSIGNMENT OF COMMUNICATION BUFFERS BY AGGREGATING SYSTEM PROFILES - A method, system and apparatus for autonomic buffer configuration. In accordance with the present invention, an autonomic buffer configuration method can include monitoring data flowing through buffers in a communications system and recording in at least one buffer profile different data sizes for different ones of the data flowing through the buffers during an established interval of time. An optimal buffer size can be computed based upon a specification of a required percentage of times a buffer must be able to accommodate data of a particular size. Subsequently, at least one of the buffers can be re-sized without re-initializing the at least one resized buffer. | 06-13-2013 |
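The sizing rule in the abstract above (a buffer must accommodate a required percentage of observed data sizes) amounts to a percentile over the recorded profile. A minimal sketch, with the function name and inputs assumed for illustration:

```python
import math

def optimal_buffer_size(observed_sizes, required_fraction):
    """Smallest buffer size covering at least required_fraction of the
    data sizes recorded in the profile during the monitoring interval."""
    ordered = sorted(observed_sizes)
    index = math.ceil(required_fraction * len(ordered)) - 1
    return ordered[max(index, 0)]

# Hypothetical profile of data sizes observed flowing through a buffer:
sizes = [128, 256, 256, 512, 1024, 4096]
size_95 = optimal_buffer_size(sizes, 0.95)   # must fit 95% of transfers
```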
20130179607 | ADJUSTABLE BUFFER SIZING FOR CONCURRENT WRITING TO TAPE - Data is buffered for concurrent writing to tape. For a magnetic tape drive having a magnetic head with multiple sets of transducers; a drive mechanism configured to pass a magnetic tape past the magnetic head; interfaces from two different hosts; at least one buffer configured to buffer data; and a control; the buffering comprises receiving data from the two different hosts at the interfaces; buffering the received data in separate buffer spaces of the buffer(s) associated with each host, and adjustably sizing the separate buffer space for each host in accordance with a data transfer rate of the host associated with the separate buffer space; and concurrently writing data from the separate buffer spaces with the magnetic head to separate partitions of the magnetic tape. | 07-11-2013 |
20130179608 | EFFICIENT LOW-LATENCY BUFFER - An efficient low latency buffer, and method of operation, is described. The efficient low latency buffer may be used as a bi-directional memory buffer in an audio playback device to buffer both output and input data. An application processor coupled to the bi-directional memory buffer, responsive to an indication to write data to the bi-directional memory buffer, reads a defined size of input data from the bi-directional memory buffer. The input data read from the bi-directional memory buffer is replaced with output data of the defined size. In response to a mode-change signal, the defined size of the data that is read from and written to the bi-directional memory buffer is changed. The buffer may allow the application processor to enter a low-powered sleep mode more frequently. | 07-11-2013 |
20130205051 | Methods and Devices for Buffer Allocation - Methods and devices for buffer allocation based on priority levels are disclosed to avoid or mitigate conflicts that can degrade performance or otherwise interfere with Quality of Service (QoS) requirements in a multiple channel memory system. In one embodiment, the methods and devices disclosed herein may be used to detect various transactions that have identical priorities and the same or similar QoS requirements and then allocate buffers for different ones of the various detected transactions that are scheduled to occur in a given time interval to different independent memory channels, thereby avoiding or mitigating memory access conflicts in the given time interval. | 08-08-2013 |
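The conflict-avoidance idea in the abstract above, spreading same-priority transactions scheduled for one interval across independent memory channels, can be sketched as follows; the transaction representation and function name are assumptions for illustration:

```python
from collections import defaultdict
from itertools import cycle

def allocate_channels(transactions, num_channels):
    """Map transactions with identical priorities in a given time interval
    to distinct independent memory channels, round-robin per priority."""
    assignment = {}
    by_priority = defaultdict(list)
    for tx in transactions:
        by_priority[tx["priority"]].append(tx["id"])
    for priority, ids in by_priority.items():
        channels = cycle(range(num_channels))   # rotate across channels
        for tx_id in ids:
            assignment[tx_id] = next(channels)
    return assignment

txs = [{"id": "a", "priority": 1}, {"id": "b", "priority": 1},
       {"id": "c", "priority": 0}]
plan = allocate_channels(txs, num_channels=2)   # "a" and "b" never collide
```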
20130246672 | Adaptive Multi-Threaded Buffer - An adaptive multi-thread buffer supports multiple writer process and reader processes simultaneously without blocking. Writer processes are assigned a reserved write slot using a writer index that is incremented for each write request. When a reserved write slot is not null, the buffer is resized to make room for new data. Reader processes are assigned a reserved read slot using a reader index that is incremented for each read request. When data is read out to the reader process, the read slot content is set to null. When a writer process attempts to write null data to a write slot, the buffer replaces the null write data with an empty value object so that content of the buffer is null only for empty slots. When an empty value object is read from a slot, the buffer replaces the content with null data to send to the reader process. | 09-19-2013 |
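The slot-reservation and null-sentinel mechanics described above can be sketched in a deliberately simplified, single-threaded form; names are assumptions, and a real lock-free implementation would use atomic indices and remap occupied slots on resize:

```python
import itertools

_EMPTY_VALUE = object()   # sentinel standing in for a None payload,
                          # so None in a slot reliably means "empty"

class SlotBuffer:
    def __init__(self, size):
        self._slots = [None] * size
        self._writer = itertools.count()   # incremented per write request
        self._reader = itertools.count()   # incremented per read request

    def write(self, item):
        slot = next(self._writer) % len(self._slots)
        if self._slots[slot] is not None:               # reserved slot occupied:
            self._slots.extend([None] * len(self._slots))   # grow the buffer
            # (a real implementation would relocate the occupied entry)
        self._slots[slot] = _EMPTY_VALUE if item is None else item

    def read(self):
        slot = next(self._reader) % len(self._slots)
        item = self._slots[slot]
        self._slots[slot] = None            # clearing the slot marks it empty
        return None if item is _EMPTY_VALUE else item

sb = SlotBuffer(4)
sb.write("x")
sb.write(None)               # stored as the sentinel, not as None
first, second = sb.read(), sb.read()
```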
20140013016 | CONFIGURABLE BUFFER ALLOCATION FOR MULTI-FORMAT VIDEO PROCESSING - Systems and methods are described including dynamically configuring a shared buffer to support processing of at least two video read streams associated with different video codec formats. The methods may include determining a buffer write address within the shared buffer in response to a memory request associated with one read stream, and determining a different buffer write address within the shared buffer in response to a memory request associated with the other read stream. | 01-09-2014 |
20140032798 | WIRELESS STATION AND METHOD FOR SELECTING A-MPDU TRANSMISSION CHARACTERISTICS - A dynamic A-MSDU enabling method is disclosed. The method enables the recipient of an aggregate MAC service data unit (A-MSDU) under a block ACK agreement to reject the A-MSDU. The method thus distinguishes between A-MSDU outside of the block ACK agreement, which is mandatory, and A-MSDU under the block ACK agreement, which is optional. It thus complies with the IEEE 802.11n specification while enabling the recipient to intelligently allocate memory during block ACK operations. | 01-30-2014 |
20140047141 | BUFFER-RELATED USB COMMUNICATION - According to various embodiments, apparatuses and methods to communicate buffer allocation information are presented. The disclosed apparatuses and methods may include transmitting a buffer message by a wireless USB device to a wireless USB host, which may indicate an available storage space in a buffer of the USB device to store data from the USB host. The buffer message may be transmitted independent of whether or not the USB device has received a request message (e.g., from the USB host) for information relating to the available storage space in the buffer. Additionally, the buffer message may be transmitted independent of any data exchange mechanism between the USB host and the USB device. The USB device may receive a data packet from the USB host, and transmit a data packet acknowledgement message including data packet status information, and information regarding the available storage space in the buffer. | 02-13-2014 |
20140108681 | SYSTEM AND METHOD FOR PROVIDING A FLEXIBLE BUFFER MANAGEMENT INTERFACE IN A DISTRIBUTED DATA GRID - A system and method can provide a flexible buffer management interface in a distributed data grid. The buffer manager in the distributed data grid can receive a request from a requester for a buffer in the distributed data grid, wherein the request contains at least one parameter that provides an indication on the size of the requested buffer. Then, the buffer manager can allocate a buffer based on the indication in the request and provide the allocated buffer to the requester, wherein an actual size of the buffer is determined by the buffer manager. | 04-17-2014 |
20140129746 | REAL-TIME DATA MANAGEMENT FOR A POWER GRID - The present disclosure relates to real-time data management for a power grid and presents a real-time data management system; a system, method, apparatus and tangible computer readable medium for accessing data in a power grid; a system, method, apparatus and tangible computer readable medium for controlling a transmission delay of real-time data delivered via a real-time bus; and a system, method, apparatus and tangible computer readable medium for delivering real-time data in a power grid. In the real-time data management system of the present disclosure, a unified data model covering various organizations and various data resources is designed, and a management scheme for clustered data is used to provide transparent and high-speed data access. In addition, multi-bus collaboration and bus performance optimization approaches are utilized to improve the efficiency and performance of the buses. The real-time data management system may also include an event integration and complex event processing component to provide a credible prediction on the status of the power grid. With embodiments of the present disclosure, it may efficiently manage the high volume of real-time data and events, provide data transmission with low latency, provide flexible extension of both the number of data clusters and the number of databases to ensure high-volume data storage, and achieve high-speed and transparent data access. Additionally, it also enables the rapid design and development of analytical applications, and supports near real-time enterprise decision-making. | 05-08-2014 |
20140189171 | OPTIMIZATION OF NATIVE BUFFER ACCESSES IN JAVA APPLICATIONS ON HYBRID SYSTEMS - Managing buffers in a hybrid system, in one aspect, may comprise selecting a first buffer management method from a plurality of buffer management methods; capturing statistics associated with access to the buffer in the hybrid system running under the first buffer management method; analyzing the captured statistics; identifying a second buffer management method based on the analyzed captured statistics; determining whether the second buffer management method is more optimal than the first buffer management method; in response to determining that the second buffer management method is more optimal than the first buffer management method, invoking the second buffer management method; and repeating the capturing, the analyzing, the identifying and the determining. | 07-03-2014 |
20140344489 | VARIABLE-SIZED BUFFERS MAPPED TO HARDWARE REGISTERS - An interface includes a first hardware register field to store respective chunks of a command directed to a device and respective chunks of a response to the command from the device. The interface also includes a second hardware register field to store a size of the command and a size of the response. The first and second hardware register fields are accessible by the device and by a processor external to the device that generates the command, in response to memory not being available to buffer the command and the response. | 11-20-2014 |
20150019768 | SOFTWARE INTERFACE FOR A SPECIALIZED HARDWARE DEVICE - Embodiments of the disclosure include methods, systems and computer program products for performing a data manipulation function. The method includes receiving, by a processor, a request from an application to perform the data manipulation function and, based on determining that a specialized hardware device configured to perform the data manipulation function is available, determining whether executing the request on the specialized hardware device is viable. Based on determining that the request is viable to execute on the specialized hardware device, the method includes executing the request on the specialized hardware device. | 01-15-2015 |
20150052269 | METHOD OF SAMPLING AND STORING DATA AND IMPLEMENTATION THEREOF - Embodiments of methods that are useful to avoid overflow in fixed-length buffers. In one embodiment, the methods dynamically adjust parameters (e.g., sample time) and reconfigure data in the buffer to allow new data samples to fit in the buffer. These embodiments allow data collection to automatically adapt, e.g., by adjusting the sample rate to allow the data to fit in the limited buffer size. These embodiments can configure hardware and/or software on a valve positioner of a valve assembly to improve data collection for use in on-line valve diagnostics and other data processing techniques. | 02-19-2015 |
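The adaptation described above, coarsening the sample rate until the collected data fits a fixed-length buffer, can be sketched as follows; the function name, doubling step, and parameters are illustrative assumptions:

```python
def fitted_sample_interval(window_seconds, base_interval, buffer_length):
    """Return the finest sample interval (starting from base_interval and
    doubling) whose sample count over the window fits the fixed buffer."""
    interval = base_interval
    while window_seconds / interval > buffer_length:
        interval *= 2          # coarsen sampling until the data fits
    return interval

# A 10-minute collection window sampled at 0.1 s would need 6000 slots;
# doubling the interval until <= 1000 samples fit the buffer:
iv = fitted_sample_interval(window_seconds=600, base_interval=0.1,
                            buffer_length=1000)
```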
20150058504 | DYNAMICALLY CHANGING A BUFFER FLUSH THRESHOLD OF A TAPE DRIVE BASED ON HISTORICAL TRANSACTION SIZE - According to one embodiment, a method for dynamically changing a buffer threshold in a tape drive includes determining that a drive buffer is emptied of data, calculating a write size indicating an amount of data from a transaction size left to be written to a tape prior to a next anticipated sync command, setting a buffer threshold that triggers a back hitch to a smaller value when the transaction size is less than a buffer size, setting the buffer threshold to the smaller value when an absolute difference between the transaction size and the write size is greater than or equal to the buffer size, and setting the buffer threshold to a larger value when the transaction size is not less than the buffer size and/or the absolute difference between the transaction size and the write size is less than the buffer size. | 02-26-2015 |
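The threshold-selection rules enumerated in the abstract above translate almost directly into a conditional; the function name and the two placeholder threshold fractions are assumptions for illustration:

```python
def buffer_flush_threshold(transaction_size, write_size, buffer_size,
                           smaller=0.2, larger=0.9):
    """Fraction of the drive buffer that triggers a back hitch, per the
    rules: smaller when the transaction fits the buffer or when the gap
    between transaction size and remaining write size is >= buffer size;
    larger otherwise."""
    if transaction_size < buffer_size:
        return smaller
    if abs(transaction_size - write_size) >= buffer_size:
        return smaller
    return larger

# Small transaction fits the buffer -> flush early (smaller threshold):
t1 = buffer_flush_threshold(transaction_size=100, write_size=40,
                            buffer_size=512)
# Large transaction, little left before the next sync -> larger threshold:
t2 = buffer_flush_threshold(transaction_size=4096, write_size=4000,
                            buffer_size=512)
```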
20150058505 | METHOD FOR OPERATING A BUFFER MEMORY OF A DATA PROCESSING SYSTEM AND DATA PROCESSING SYSTEM - A method of operating a buffer memory of a data processing system on which two or more programs can run in parallel includes the following: source code is generated for a program to be executed, and the data needed by the program to be executed are stored in the buffer memory; at least one address register is simultaneously generated, with the memory content of the address register being the addresses of each of the two or more programs in the buffer memory. The two or more programs in the buffer memory are accessed via the at least one address register. | 02-26-2015 |
20150081934 | SYSTEM AND METHOD FOR DATA SYNCHRONIZATION ACROSS DIGITAL DEVICE INTERFACES - A system for synchronizing and re-ordering data transmitted between first and second clock domains associated with first and second device interfaces, respectively, includes a splitter, an arbiter, a transaction manager, and a read data buffer. The splitter receives a parent read request from one or more data input ports of the first device interface and splits it into one or more read requests. The arbiter receives the one or more read requests and selects one of the read requests and transmits it to the transaction manager. The transaction manager allocates an entry to the read request and then the read request is transmitted to the read data buffer. Thereafter, the read data buffer transmits the read request to the second device interface and transmits received response data to the first device interface. | 03-19-2015 |
20150120967 | Communication Device Ingress Information Management System and Method - The components of communication network device ingress systems and methods cooperate to manage information ingress and prevent denial of service attempts. A classifier classifies incoming information. A classification filter filters the information on a classification basis to prevent denial of service. The classification filter includes a classification filter counter for tracking the flow of information associated with the classification filter. A zero value in the classification filter counter indicates that a buffer capacity limit associated with the classification is reached. The counter permits information to flow to a packet buffer if the classification filter counter value is not zero and discards information if the classification filter counter value is zero. In one exemplary implementation the classification filter counter decrements a classification filter counter value when the information is placed in the buffer. The classification filter counter value is incremented when the information is processed out of the buffer. | 04-30-2015 |
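The counter mechanics described above, where a per-classification counter is decremented as packets enter the buffer, incremented as they are processed out, and a zero value forces discards, can be sketched as follows; class and method names are assumptions:

```python
class ClassificationFilter:
    def __init__(self, capacity):
        self._counter = capacity   # remaining buffer capacity for this class

    def admit(self, packet, buffer):
        """Place the packet in the buffer if capacity remains, else discard."""
        if self._counter == 0:     # zero means the capacity limit is reached
            return False           # discard (denial-of-service protection)
        buffer.append(packet)
        self._counter -= 1         # decremented when packet enters the buffer
        return True

    def release(self):
        """Called when a packet is processed out of the buffer."""
        self._counter += 1

flt, pkt_buffer = ClassificationFilter(capacity=2), []
admitted = [flt.admit(p, pkt_buffer) for p in ("p1", "p2", "p3")]
```

The third packet of this classification is discarded until `release()` frees a slot, which bounds how much buffer space any single traffic class can consume.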