25th week of 2009 patent application highlights part 65
Patent application number | Title | Published |
20090157894 | SYSTEM AND METHOD FOR DISTRIBUTING MULTIMEDIA STREAMING SERVICE REQUEST BASED ON WIDE AREA NETWORK - Provided are a system and a method for distributing multimedia streaming service requests over a wide area network, which can efficiently support multimedia streaming service in a wide area network. The system includes a user terminal, a wide area server, and a local server. The user terminal requests the multimedia streaming service. The wide area server selects a local server that is disposed nearest to the user terminal and has node availability and service availability, and provides the contents requested by the user terminal to the selected local server. The local server provides the multimedia streaming service to the user terminal using the contents provided from the wide area server. | 2009-06-18 |
20090157895 | Method for synchronizing at least two streams - The invention relates to a method for broadcasting from a local source a local stream suitable for synchronizing with a main stream accompanied by correlated main markers broadcast from a main source, the method comprising the steps of: | 2009-06-18 |
20090157896 | TCP OFFLOAD ENGINE APPARATUS AND METHOD FOR SYSTEM CALL PROCESSING FOR STATIC FILE TRANSMISSION - Provided are a TCP offload engine (TOE) apparatus and method for static file transmission. An apparatus for system call processing for static file transmission includes an application program block for generating a file transmission command upon a user's file transmission request, a BSD socket module for converting the file transmission command of a file unit into a division transmission command for division-transmission of a certain size unit, a TOE kernel module for receiving the division transmission command and converting the division transmission command into a TOE control command, and a TOE apparatus module for generating a data packet of the certain size for network transmission in response to the TOE control command and transmitting the data packet to a node having requested file transmission. | 2009-06-18 |
20090157897 | CONTENT PROVISIONING SYSTEM AND METHOD - To implement more appropriate QoS control, contents data distributed via a network is compressed taking into consideration the meaning of the contents and the preferences of users. | 2009-06-18 |
20090157898 | Generic Format for Efficient Transfer of Data - Methods, systems and apparatus, including computer program products, for transferring, receiving, and storing multiple element data in a string of characters. Multiple data elements are sent in a string of delimited characters and have respective project identifiers, data types, and index numbers used to extract and store the data elements at a receiving computer. | 2009-06-18 |
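The delimited-string transfer described in 20090157898 can be illustrated with a small round-trip sketch. The field layout below (project identifier, data type, index, value, separated by `|` and `;`) is a hypothetical illustration, not the patented format.

```python
# Hypothetical delimited multi-element format: each record carries a
# project identifier, a data type, an index number, and the value itself.

def encode(elements):
    """elements: list of (project_id, data_type, index, value) tuples."""
    return ";".join("|".join([pid, dtype, str(idx), str(val)])
                    for pid, dtype, idx, val in elements)

def decode(message):
    """Extract the elements, casting each value by its declared data type."""
    casts = {"int": int, "float": float, "str": str}
    out = []
    for record in message.split(";"):
        pid, dtype, idx, val = record.split("|")
        out.append((pid, dtype, int(idx), casts[dtype](val)))
    return out
```

The declared type and index let the receiving computer store each element in the right place without a fixed schema, at the cost of reserving the delimiter characters.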
20090157899 | CONTENT DELIVERY NETWORK - A content delivery system for providing content from a content delivery network to end users may include a plurality of delivery servers that host one or more content items and an inventory server having an inventory of content. The inventory of content can indicate which of the delivery servers host the content items. The inventory server may receive a request for a content item from an end user system and may access the inventory of content to determine one or more delivery servers that host the content item. In response to this determination, the inventory server may redirect the request for the content item to a selected one of the delivery servers. The selected delivery server can then serve the content item to the end user system. | 2009-06-18 |
20090157900 | Method For Ipv4 Application Transition Over Ipv6 Networks - A network system adopting a first IP protocol is provided. The network system includes an address allocating server and a communication terminal supporting both the first IP protocol and a second IP protocol, wherein the address allocating server dynamically allocates an address of the second IP protocol to the communication terminal. The communication terminal includes a dynamic address manager for acquiring the dynamically allocated address of the second IP protocol of the communication terminal from the address allocating server and a second IP protocol address of the destination of a second IP protocol packet from a second IP protocol application, and an address adapter for encapsulating the second IP protocol packet from the second IP protocol application into a first IP protocol packet, wherein the second IP protocol address of the communication terminal in the header of the second IP protocol packet and the second IP protocol address of the destination are encapsulated into the first IP protocol packet. | 2009-06-18 |
20090157901 | SYSTEM AND METHOD FOR USING ROUTING PROTOCOL EXTENSIONS FOR IMPROVING SPOKE TO SPOKE COMMUNICATION IN A COMPUTER NETWORK - Systems and methods for using routing protocol extensions to improve spoke to spoke communication in a computer network are disclosed. Embodiments provide systems and methods to establish a tunnel between a first spoke and a hub, exchange routing information between the first spoke and the hub using a routing protocol, extend the routing protocol and an associated database to include next hop mapping information, and establish a tunnel between the first spoke and a second spoke according to information in the database. | 2009-06-18 |
20090157902 | Virtual Networks - A virtual network has a plurality of nodes. Each node has the capability to provide a service to another node. Each node maintains a list for storing entries each representing a link to another node; each entry contains the address of the other node and a label identifying a service that that other node may provide. Each node also has a store for storing messages received from other nodes, these messages serving to propose a link and containing the identity of the node originating the message, a label identifying a service that that other node may provide and a label identifying a service that that other node requires. When a node needs a service that it is not itself able to provide, it searches the link list for a link having a label that matches the service needed, and in the event that such a link is found it transmits to the node identified by the link a message requesting the service. If, however, no such link is found, it searches the message store for a message identifying another node where the label identifying a service that that other node may provide matches the service needed and the label identifying a service that that other node requires matches the service that the node needing the service has the capability to provide. In the event that such a message is found it initiates the creation of a corresponding entry in the link list. If no such message is found, the node needing the service generates a message serving to propose a link and containing its own identity, a label identifying a service that it has the capability to provide and a label identifying the service that it needs. | 2009-06-18 |
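The three-step lookup in the virtual-network abstract above (link list, then message store, then originating a proposal) can be rendered as a toy sketch. Class and field names here are illustrative assumptions, not the patent's terminology.

```python
# Toy model of a virtual-network node: links are (address, service) pairs,
# stored messages are (origin, service offered, service required) triples.

class Node:
    def __init__(self, addr, provides):
        self.addr = addr
        self.provides = provides     # label of the service this node offers
        self.links = []              # list of (address, service_label)
        self.messages = []           # list of (origin, offers, needs)

    def find_provider(self, needed):
        # 1. A matching link means the service can be requested directly.
        for addr, label in self.links:
            if label == needed:
                return ("request", addr)
        # 2. A stored message matches if it offers what we need and needs
        #    what we can provide; promote it to a link.
        for origin, offers, needs in self.messages:
            if offers == needed and needs == self.provides:
                self.links.append((origin, offers))
                return ("linked", origin)
        # 3. Otherwise, originate a link-proposal message of our own.
        return ("propose", (self.addr, self.provides, needed))
```

Note the mutual-benefit condition in step 2: a link is only created when each node can serve the other, which is what keeps the proposal traffic from creating one-sided links.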
20090157903 | Methods and Apparatus for Trouble Reporting Management in a Multiple System Service Environment - Systems and techniques for managing and acting on trouble reports. A system comprising a plurality of subsystems maintains a central exchange for mediating problem referrals and resolutions between subsystems. A subsystem receives a trouble report and creates a trouble ticket in its native format. To refer the trouble ticket or a problem, the referral information is translated to a generic format of the exchange. Appropriate problems are referred to subsystems, translating problem information into the native format of a receiving subsystem. Information generated by a receiving subsystem in its native format, including cause and root cause information for the problem, is translated to the generic format of the central exchange, and used to update the trouble ticket. After the central exchange has finished with the trouble ticket, the ticket is translated to the native format of its originating subsystem and further steps are taken by the originating subsystem as appropriate. | 2009-06-18 |
20090157904 | Analysis tool for intra-node application messaging - A method and apparatus for transforming message events between applications running on a computing device into a form that appears as network events between multiple virtual network access devices. These “network events” may then be processed by known network software protocol analyzers. | 2009-06-18 |
20090157905 | System and Method for Standardizing Clocks in a Heterogeneous Networked Environment - A system and method for standardizing clocks in a heterogeneous networked environment is provided. In one aspect, the duration of time that a message takes to travel from a source machine to a destination machine is decomposed into the actual transmission duration T and the time difference C between the source machine and the destination machine. Two T's, one for each leg of a round-trip transmission, are determined, and t~ is estimated using the two T's. A measure of each leg of the round-trip transmission is determined using t~ and C. An offset for a machine is established within a known delta. | 2009-06-18 |
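The decomposition in 20090157905 into transmission time T and clock difference C resembles the classic round-trip estimation used by NTP. The sketch below shows that standard technique as an analogy, not the patent's exact method.

```python
def estimate_offset(t0, t1, t2, t3):
    """Classic round-trip clock estimation (as in NTP): t0 and t3 are the
    send and receive times on the source's clock; t1 and t2 are the receive
    and send times on the destination's clock. Returns the clock difference
    (analogous to C) and the per-leg transmission time (analogous to T)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # destination minus source clock
    delay = ((t3 - t0) - (t2 - t1)) / 2.0    # one leg of the round trip
    return offset, delay
```

Both results assume roughly symmetric legs; an asymmetric path shows up as error in the offset, which is why the abstract speaks of establishing the offset within a known delta.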
20090157906 | Information processing device, information processing device controlling method, and computer-readable recording medium - An information processing device includes a plurality of modules, each of the plurality of modules including: a processing unit to receive input data and setting options from an external device, perform a processing of the input data in accordance with the setting options, and return a processing result to the external device; a storing unit to store information indicating setting items which are selectable as the setting options and setting values which are selectable for each setting item; and an information providing unit to transmit, in response to a command, information indicating setting items and setting values stored in the storing unit, to a source unit of transmitting the command. | 2009-06-18 |
20090157907 | Shelf system for rechargeable electronic devices - The invention is a shelf system for rechargeable electronic devices that has an electrical pole with a number of attachment points. Shelves connect to the attachment points and can pivot between a working position and a stored position. The electrical pole has a number of power ports with different features. For instance, one of the power ports is a 12 volt direct current (DC) port, commonly called a cigarette lighter port. Many travelers will have a 12 VDC plug and cable with them to charge their cell phone, MP3 player or computer. The shelf system also includes a standard electrical outlet in the electrical pole. In addition, the electrical pole will have a USB port for those devices that are charged through a USB port. A USB hub is provided in the electrical pole to facilitate this feature. | 2009-06-18 |
20090157908 | Software Driver Device - An interface device disposed between a host and peripheral device is disclosed. The device stores drivers, utility software and applicative data for the peripheral device. The interface device appears to first be a CD driver for purposes of loading drivers, software and data into the host. Then it switches to directly couple the host and peripheral device. | 2009-06-18 |
20090157909 | CONFIGURABLE METHOD FOR CONNECTING ONE OR MORE DEVICES TO A MEDIA PROCESS SYSTEM - A software sign-on sequence is provided that allows devices, when they are connected to each other, to negotiate how they will communicate, what data will be exchanged, and how they will mechanically operate. This avoids the necessity of supplying new software programs to each device, which is time-consuming and expensive. | 2009-06-18 |
20090157910 | INFORMATION PLAYBACK APPARATUS AND INFORMATION PLAYBACK METHOD - According to one embodiment, an information playback apparatus includes a storage module configured to store a vendor table including a formal vendor ID of a self apparatus and one or more registered vendor IDs, and vendor commands respectively associated with the formal vendor ID and the registered vendor IDs, a connection module configured to connect a partner device so as to transfer video data, audio data, and vendor commands, a detection module configured to detect a vendor ID of the partner device, and a vendor ID control module configured to transmit either one of the formal vendor ID of the self apparatus and a temporary vendor ID of the self apparatus selected from the registered vendor IDs to the partner device based on the vendor ID of the partner device. | 2009-06-18 |
20090157911 | RECORDING CONTROL APPARATUS, RECORDING CONTROL METHOD, AND COMPUTER PROGRAM PRODUCT - A determining unit performs a position determination determining whether a nonvolatile memory is mounted in a right position or in a wrong position. When the determining unit determines that the nonvolatile memory is mounted in the wrong position, a protecting unit protects data stored in the nonvolatile memory. | 2009-06-18 |
20090157912 | IMAGE PROCESSING APPARATUS - An image processing apparatus is capable of communicating data with a plurality of external apparatuses attached to the image processing apparatus. Each of the external apparatuses includes an advisor that advises a user of access to the external apparatus. A display section displays information on the external apparatuses attached to the image processing apparatus. A selecting section selects a desired external apparatus from among the plurality of external apparatuses displayed on said display section. A transmitter transmits an access command to the desired external apparatus. When the selected external apparatus receives the access command, the advisor advises the user of the access to the selected external apparatus by emitting a flashing light. | 2009-06-18 |
20090157913 | Method for Toggling Non-Adjacent Channel Identifiers During DMA Double Buffering Operations - Disclosed are a method, a system and a computer program product for managing direct memory access (DMA) operations in a double buffering system. During direct memory access operations in a computer system, data is transferred from a source memory location to a destination memory location with minimal use of the computer's processing unit. Double buffering utilizes two separate memory buffers to perform simultaneous DMA operations. Prior to processing a DMA request, each buffer in a double buffering system is assigned a channel identification (ID), or tag. When reading, writing, or polling status of data in a buffer, the tag identifies the buffer. A toggle factor is utilized to conveniently switch between each buffer in the double buffering system. Utilizing a toggle factor decreases latencies in DMA operations. | 2009-06-18 |
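One way to realize the toggle factor in 20090157913, assuming an XOR-based scheme (the abstract does not specify the operation): with adjacent IDs such as 4 and 5 a single XOR with 1 flips between them, and the XOR of any two tags generalizes this to non-adjacent IDs. The tag values below are arbitrary examples.

```python
# Two non-adjacent DMA channel identifiers for the double buffers
# (arbitrary example values) and the toggle factor derived from them.
TAG_A, TAG_B = 3, 12
TOGGLE = TAG_A ^ TAG_B        # XOR of the two tags

def next_tag(current):
    """Switch to the other buffer's channel ID with a single XOR."""
    return current ^ TOGGLE
```

A single XOR is branchless and constant-time, which matches the abstract's claim that the toggle factor reduces latency when alternating buffers between DMA operations.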
20090157914 | DISPLAY SYSTEM WITH FRAME REUSE USING DIVIDED MULTI-CONNECTOR ELEMENT DIFFERENTIAL BUS CONNECTOR - A method includes reducing power of a first graphics processor by disabling or not using its rendering engine and leaving a display engine of the same first graphics processor capable of outputting display frames from a corresponding first frame buffer to a display. A display frame is rendered by a second graphics processor while the rendering engine of the first graphics processor is in a reduced power state, such as a non-rendering state. The rendered frame is stored in a corresponding second frame buffer of the second graphics processor, such as a local frame buffer, and copied from the second frame buffer to the first frame buffer. The copied frame in the first frame buffer is then displayed on a display while the rendering engine of the first graphics processor is in the reduced power state. Accordingly, thermal and power output are reduced with respect to the first graphics processor, since it does not perform frame generation using its rendering engine; it only uses its display engine to display frames generated by the second graphics processor. | 2009-06-18 |
20090157915 | METHOD FOR SWITCHING NODE AND AN INFORMATION PROCESSING SYSTEM - In an information processing system including host computers and disk devices, each of an execution-node host and a standby-node host includes an I/O request unit and an access-right change command unit. The I/O request unit issues an I/O request. The access-right change command unit transmits an access-right change command, which associates I/O-enable/disable information with the host identification information. The disk device includes an access control table, an access control unit, and an access-right change unit. The access control table stores information of the access-right change commands from the hosts. The access control unit judges whether an I/O request can be executed, based on the host identification information and the access control table. The access-right change unit, in accordance with the access-right change commands from the hosts, changes, in batch, the I/O-enable/disable information on a per-host basis within the access control table. | 2009-06-18 |
20090157916 | Scalable Port Controller Architecture Supporting Data Streams Of Different Speeds - A scalable port controller architecture supporting data streams of different speeds. In an embodiment, a port controller contains high speed receptor units and low speed receptor units, and port routing logic connecting each external device (on a corresponding port) to one of the receptors according to various registers. The port routing logic may connect an external device to one of the receptors, which determines the data rate at which data on a corresponding virtual connection from the external device is received/sent. If the receptor does not have sufficient capacity (based on the data rate) to communicate with the external device, the connection is moved to another receptor, potentially in another control unit. | 2009-06-18 |
20090157917 | SIGNAL PROCESSING APPARATUS AND CONTROL METHOD THEREOF - A signal processing apparatus includes: a plurality of input terminals to which a plurality of connectors are connected, respectively; a signal processor which includes a plurality of connection units corresponding to a plurality of input signals input through the plurality of connectors, and processes the plurality of input signals received through the plurality of connection units; a switching unit which is provided between the plurality of input terminals and the plurality of connection units, and selectively connects the plurality of input terminals with the plurality of connection units, respectively; an information detecting unit which detects information about the plurality of input signals; and a controller which controls the switching unit to make the plurality of connection units correspond to the plurality of input signals on the basis of the information detected by the information detecting unit. Thus it is easy to apply reconnection to a wrongly-connected input terminal. | 2009-06-18 |
20090157918 | EFFICIENT PROCESSING OF GROUPS OF HOST ACCESS REQUESTS THAT MAY INCLUDE ZERO LENGTH REQUESTS - This is directed to methods and systems for handling access requests from a device to a host. The device may be a device that is part of the host, such as an HBA, an NIC, etc. The device may include a processor which runs firmware and which may generate various host access requests. The host access requests may be, for example, memory access requests, or DMA requests. The device may include a module for executing the host access requests, such as a data transfer block (DXB). The DXB may process incoming host access requests and return notifications of completion to the processor. For various reasons, the processor may from time to time issue null or zero length requests. Embodiments of the present invention ensure that the notifications of completion for all requests, including the zero length requests, are sent to the processor in the same order as the requests. | 2009-06-18 |
20090157919 | READ CONTROL IN A COMPUTER I/O INTERCONNECT - In one embodiment, a method for controlling reads in a computer input/output (I/O) interconnect is provided. A read request is received over the computer I/O interconnect from a first device, the request requesting data of a first size. Then it is determined whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold. If fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold, then the read request is temporarily restricted from being forwarded upstream. | 2009-06-18 |
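The completion-queue check in 20090157919 amounts to simple admission control. The sketch below is a minimal behavioural model; the class name, queue representation, and deferral list are assumptions for illustration.

```python
# Behavioural model of threshold-based read control: a read is forwarded
# only if the completion queue has room for its data, otherwise it is
# temporarily held back.

class ReadController:
    def __init__(self, threshold):
        self.threshold = threshold   # first predefined threshold (bytes)
        self.queued = 0              # total size of the completion queue
        self.deferred = []           # temporarily restricted read requests

    def on_read_request(self, size):
        if self.queued + size > self.threshold:
            self.deferred.append(size)   # restrict from forwarding upstream
            return False
        self.queued += size              # forward; reserve completion space
        return True

    def on_completion(self, size):
        self.queued -= size              # completion drained from the queue
```

Reserving space at request time rather than at completion time is what prevents the completion queue from overflowing when many reads are in flight.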
20090157920 | Dynamically Allocating Communication Lanes For A Plurality Of Input/Output ('I/O') Adapter Sockets In A Point-To-Point, Serial I/O Expansion Subsystem Of A Computing System - Methods, systems, and products are disclosed for dynamically allocating communication lanes for a plurality of sockets in a point-to-point, serial I/O expansion subsystem of a computing system, the expansion subsystem including a switch that supports a maximum number of enabled communication lanes, each socket having a same form factor, each socket connected to the switch using a same predefined number of communication lanes, that include: identifying, during a boot process for the computing system, each of the sockets in which an adapter is installed; determining, for each installed adapter, a maximum link width for that adapter; and enabling, for each of the sockets in which an adapter is installed, a set of communication lanes for communications between the adapter installed in that socket and the expansion subsystem switch in dependence upon the maximum link width for each adapter and the maximum number of enabled communication lanes supported by the switch. | 2009-06-18 |
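The boot-time allocation in 20090157920 depends on each adapter's maximum link width and the switch's lane budget. The policy below (socket order, capped by the remaining budget, rounded down to a power of two as PCIe-style link widths are) is an assumption for illustration, not the claimed algorithm.

```python
def allocate_lanes(max_widths, switch_lanes):
    """max_widths: max link width per installed adapter, in socket order.
    Returns the lanes enabled for each socket, never exceeding the switch's
    total budget of enabled lanes."""
    allocation, remaining = [], switch_lanes
    for width in max_widths:
        lanes = min(width, remaining)
        while lanes & (lanes - 1):       # round down to a power of two
            lanes &= lanes - 1           # clear the lowest set bit
        allocation.append(lanes)
        remaining -= lanes
    return allocation
```

With a 16-lane switch, three adapters capable of x8, x8, and x4 would get x8, x8, and nothing; a real policy might instead narrow earlier links to leave every socket usable.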
20090157921 | KVM MANAGEMENT SYSTEM AND METHOD - A KVM management system is provided. The KVM management system comprises at least one KVM switch and a KVM recorder. The KVM switch comprises a plurality of ports for connecting to a plurality of computers, respectively. The KVM recorder comprises a storage device, and couples to the KVM switch. The KVM recorder receives and records operations on the KVM switch to the storage device. | 2009-06-18 |
20090157922 | MULTIMEDIA KVM SYSTEM - Multimedia KVM systems are provided. A multimedia KVM system comprises a KVM switch and a local console coupled to the KVM switch. The KVM switch comprises a plurality of first connectors for connecting to a plurality of first multimedia components, respectively. The local console comprises a plurality of second connectors for connecting to a plurality of second multimedia components, respectively. A first user utilizes the first multimedia components, via the local console and the KVM switch, to communicate with a second user utilizing the second multimedia components. | 2009-06-18 |
20090157923 | Method and System for Managing Performance Data - The present invention is directed to a method and system for managing performance data. In accordance with a particular embodiment of the present invention, cache metrics are received. At least one of the cache metrics may be compared with a threshold value. A determination may be made as to whether one or more parameter adjustments are required based upon the comparison. | 2009-06-18 |
20090157924 | METHOD AND APPARATUS FOR CONFIGURING ELECTRONIC DEVICES TO PERFORM SELECTABLE PREDEFINED FUNCTIONS USING DEVICE DRIVERS - A multifunctional mobile telephone handset is connected to a PC using a Universal Serial Bus. During bus enumeration, a device class descriptor is returned by the handset to the PC. The PC's operating system receives information relating to one of the functions of the handset and assigns an appropriate device driver. | 2009-06-18 |
20090157925 | Method for integrating device-objects into an object-based management system for field devices in automation technology - The invention relates to a method for integration of device-objects (DTMs) into an object-based management system for field devices in automation technology. | 2009-06-18 |
20090157926 | COMPUTER SYSTEM, CONTROL APPARATUS, STORAGE SYSTEM AND COMPUTER DEVICE - A computer system which enables more efficient use of a storage system shared by plural host computers and optimizes the performance of the whole system including the host computers and storages. A computer device has a first control block which logically partitions computing resources of the computer device and makes resulting partitions run as independent virtual computers. The storage system has a second control block which logically partitions storage resources of the storage system and makes resulting partitions run as independent virtual storage systems. The system also has a management unit incorporating: a first control table which controls computing resources of the computer device; a second control table which controls storage resources of the storage system; and a third control table which controls the relations between the virtual computers and the virtual storage systems. The first control block logically partitions the computing resources according to the first control table; and the second control block logically partitions the storage resources according to the second control table. | 2009-06-18 |
20090157927 | METHOD AND SYSTEM FOR CHIP-TO-CHIP COMMUNICATIONS WITH WIRELINE CONTROL - Aspects of a method and system for chip-to-chip communications with wireline control may include initializing a microwave communication link between a first chip and a second chip via a wireline communication bus, wherein the initializing comprises adjusting beamforming parameters of a first antenna array communicatively coupled to the first chip, and of a second antenna array communicatively coupled to the second chip. The first chip and the second chip may communicate data via said microwave communication link. The microwave communication link may be routed via one or more relay chips, when the first chip and the second chip cannot directly communicate satisfactorily. Control data may be transferred between the first chip, the second chip, and/or the one or more relay chips, which may comprise one or more antennas. The relay chips may be dedicated relay ICs or multi-purpose transmitter/receivers. | 2009-06-18 |
20090157928 | MASTER AND SLAVE DEVICE FOR COMMUNICATING ON A COMMUNICATION LINK WITH LIMITED RESOURCE - A master device for communicating with a number of slave devices through a communication link having a limited resource. The master device comprises a transceiver adapted for communicating with the slave devices on the communication link and a controller adapted for detecting the number of slave devices. The controller is adapted for determining an individual resource associated with a slave device to be consumed from the communication link, wherein a sum of the individual resources of all slave devices is lower than the limited resource and wherein the transceiver is adapted for assigning the individual resources to the associated slave devices. | 2009-06-18 |
20090157929 | DATA ARBITRATION ON A BUS TO DETERMINE AN EXTREME VALUE - A system includes a master device and a plurality of slave devices. The master device initiates a bus transaction having an arbitration data field for processing by a subset of the slave devices. Each slave device of the subset arbitrates a corresponding data value for the arbitration data field via the multiple-access bus such that an extreme data value of the data values of the slave devices of the subset is transmitted via the multiple-access bus for the arbitration data field. The slave device can arbitrate its data value by providing the data value for serial transmission via a data line of the multiple-access bus and monitoring the data line. In response to determining that a bit value of the data value being provided does not match the state of the data line, the slave device terminates provision of the data value, thereby ceasing arbitration of its data value. | 2009-06-18 |
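The bit-serial arbitration in 20090157929, where a slave drops out as soon as its transmitted bit does not match the line, is the same mechanism CAN uses for bus arbitration. The simulation below assumes a wired-AND (dominant-zero) line, under which the minimum value survives; a wired-OR line would select the maximum instead. It is a behavioural sketch, not the patented circuit.

```python
def arbitrate(values, width):
    """Simulate MSB-first arbitration of `width`-bit values on a shared
    wired-AND line; returns the extreme (here: minimum) value."""
    contenders = set(values)
    for bit in range(width - 1, -1, -1):
        driven = {(v >> bit) & 1 for v in contenders}
        line = min(driven)                       # wired-AND: 0 is dominant
        # Devices whose bit does not match the line cease arbitration.
        contenders = {v for v in contenders if (v >> bit) & 1 == line}
    return min(contenders)                       # all survivors are equal
```

Because losers withdraw silently, the extreme value emerges in a single bus transaction without the master polling each slave individually.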
20090157930 | MULTI-CHANNEL COMMUNICATION CIRCUIT - A multi-channel communication circuit includes a master device, a plurality of slave devices, and a multiplexer (MUX). A transmitting pin and a receiving pin of a serial interface of the master device are respectively connected to two data input pins of the MUX. Two control pins of the serial interface of the master device are connected to two selecting pins of the MUX. Four pins of the serial interface of the master device are connected to a power pin of the MUX. A transmitting pin and a receiving pin of a serial interface of each slave device are respectively connected to two data output pins of the MUX, the master device communicates with one slave device via transmitting a corresponding selecting signal to the two selecting pins of the MUX to select one slave device. | 2009-06-18 |
20090157931 | IIC BUS COMMUNICATION SYSTEM, SLAVE DEVICE, AND METHOD FOR CONTROLLING IIC BUS COMMUNICATION - Multiple master devices and multiple slave devices are connected in parallel to two bus lines, an SCL line and an SDA line. | 2009-06-18 |
20090157932 | IIC BUS COMMUNICATION SYSTEM, SLAVE DEVICE, AND METHOD FOR CONTROLLING IIC BUS COMMUNICATION - Multiple master devices and multiple slave devices are connected in parallel to two bus lines, an SCL line and an SDA line. | 2009-06-18 |
20090157933 | Communication bus power state management - Methods and apparatus to manage communication bus power states are described. In one embodiment, an apparatus comprises a bus including a master node and at least a first slave node, logic to transmit a first power state change request from the master node to the first slave node, logic to receive the first power state change request in the first slave node, and logic to designate the first slave node as the master node when the first slave node denies the first power state change request. | 2009-06-18 |
20090157934 | Data Processing Unit and Bus Arbitration Unit - An effective bus arbitration unit is described in which it is possible to reduce, as much as possible, the waiting time until a bus master obtains bus ownership, and to improve the bus operating rate while improving the throughput of data transfer. A bus master issues a size signal (for example, signal "CDSZ") indicative of the size of data to be read or written. A state machine | 2009-06-18 |
20090157935 | Efficient interrupt message definition - An efficient interrupt system for a multi-processor computer. Devices interrupt a processor or group of processors using pre-defined message address and data payload communicated with a memory write transaction over a PCI, PCI-X, or PCI Express bus. The devices are configured with messages that each targets a processor. Upon receiving a command to perform an operation, the device may receive an indication of a preferred message to use to interrupt a processor upon completion of that operation. The efficiency with which each interrupt is handled and the overall efficiency of operation of the computer is increased by defining messages for the devices within the computer so that each device contains messages targeting processors distributed across groups of processors, with each group representing processors in close proximity. In selecting target processors for messages, processors are selected to spread processing across the processor groups and across processors within each group. | 2009-06-18 |
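The distribution idea in 20090157935, spreading interrupt targets across processor groups and across processors within each group, can be sketched as a two-level round-robin. The grouping input (one list of processor IDs per package or NUMA node) and the function shape are assumptions for illustration.

```python
def assign_targets(groups, n_messages):
    """Pick a target processor for each interrupt message, rotating across
    groups first and then across the processors within each group."""
    targets = []
    position = [0] * len(groups)       # next processor within each group
    for i in range(n_messages):
        g = i % len(groups)            # spread across groups first
        group = groups[g]
        targets.append(group[position[g] % len(group)])
        position[g] += 1               # then rotate within the group
    return targets
```

Rotating across groups before rotating within them keeps interrupt load off any single cache or memory domain even when a device is configured with only a few messages.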
20090157936 | INTERRUPT MORPHING AND CONFIGURATION, CIRCUITS, SYSTEMS, AND PROCESSES - An electronic configuration circuit includes a processing circuit | 2009-06-18 |
20090157937 | Modular Data Transmission System with Separate Energy Supply for Each Connected Module - The invention pertains to a modular data transmission system ( | 2009-06-18 |
20090157938 | ELECTRONIC DEVICES USING DIVIDED MULTI-CONNECTOR ELEMENT DIFFERENTIAL BUS CONNECTOR - In one example an electronic device includes a housing that includes an A/C input or DC input, and at least one circuit substrate that includes electronic circuitry, such as graphics processing circuitry that receives power based on the A/C input or DC input. The electronic device also includes a divided multi-connector element differential bus connector that is coupled to the electronic circuitry. The divided multi-connector element differential bus connector includes a single housing that connects with the circuit substrate and the connector housing includes therein a divided electronic contact configuration comprised of a first group of electrical contacts divided from an adjacent second group of mirrored electrical contacts, wherein each group of electrical contacts includes a row of at least lower and upper contacts. In one example, the electronic device housing includes air flow passages, such as grills, adapted to provide air flow through the housing. The electronic device housing further includes a passive or active cooling mechanism such as a fan positioned to cool the circuitry during normal operation. In one example, the electronic device does not include a host processor and instead a host processor is in a separate electronic device that communicates with the graphics processing circuitry through the divided multi-connector element differential bus connector. In another example, a CPU (or one or more CPUs) is also co-located on the circuit substrate with the circuitry to provide a type of parallel host processing capability with an external device. | 2009-06-18 |
20090157939 | Multiple module computer system and method - A computer system for multi-processing purposes. The computer system has a console comprising a first coupling site and a second coupling site. Each coupling site comprises a connector. The console is an enclosure that is capable of housing each coupling site. The system also has a plurality of computer modules, where each of the computer modules is coupled to a connector. Each of the computer modules has a processing unit, a main memory coupled to the processing unit, a graphics controller coupled to the processing unit, and a mass storage device coupled to the processing unit. Each of the computer modules is substantially similar in design to each other to provide independent processing of each of the computer modules in the computer system. | 2009-06-18 |
20090157940 | Techniques For Storing Data In Multiple Different Data Storage Media - A data storage system comprises a first data storage medium and a second data storage medium. The first and the second data storage media are different types of data storage media. The data storage system assigns a first range of logical block addresses to physical addresses in the first data storage medium. The data storage system is configured to dynamically reassign the first range of logical block addresses to physical addresses in the second data storage medium. Alternatively, the data storage system can assign a first range of logical block addresses to physical addresses in the first data storage medium and to physical addresses in the second data storage medium. The data storage system stores data associated with the first range of logical block addresses in both of the first and the second data storage media. One of the data storage media can be NAND Flash memory. | 2009-06-18 |
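The dual-media address mapping described in the abstract above can be illustrated with a brief sketch. This is a hypothetical model, not the patented implementation: the class and method names are invented, and the two media are modeled as plain dictionaries keyed by logical block address.

```python
# Hypothetical sketch of LBA-range mapping across two different storage
# media: a range of logical block addresses is assigned to one medium and
# can later be dynamically reassigned to the other, or mirrored onto both.

class DualMediaStore:
    def __init__(self):
        # map: (start_lba, end_lba) -> list of target media names
        self.ranges = {}
        self.media = {"flash": {}, "disk": {}}  # stand-ins for physical media

    def assign(self, start, end, medium):
        self.ranges[(start, end)] = [medium]

    def reassign(self, start, end, new_medium):
        # dynamically move a previously assigned range to another medium
        old_medium = self.ranges[(start, end)][0]
        for lba in list(self.media[old_medium]):
            if start <= lba <= end:
                self.media[new_medium][lba] = self.media[old_medium].pop(lba)
        self.ranges[(start, end)] = [new_medium]

    def mirror(self, start, end, media_pair):
        # alternative mode from the abstract: store the range on both media
        self.ranges[(start, end)] = list(media_pair)

    def write(self, lba, data):
        for (start, end), targets in self.ranges.items():
            if start <= lba <= end:
                for medium in targets:
                    self.media[medium][lba] = data
                return
        raise KeyError("unassigned LBA")
```

In the mirrored mode, a write to the range lands on both media, matching the abstract's description of storing data associated with one LBA range in both storage media at once.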
20090157941 | Managing Virtual Addresses Of Blade Servers In A Data Center - Methods, apparatus, and products for managing virtual addresses of blade servers in a data center are disclosed that include storing by a blade server management module (‘BSMM’), in non-volatile memory of a blade server, a parameter block, the parameter block including one or more virtual addresses for communications adapters of the blade server and one or more action identifiers, each action identifier representing a type of address modification; detecting, by a basic input-output system (‘BIOS’) module of the blade server upon powering on the blade server, the parameter block; and modifying, by the BIOS module of the blade server in dependence upon the one or more action identifiers of the parameter block, an address of at least one communications adapter of the blade server. | 2009-06-18 |
20090157942 | Techniques For Data Storage Device Virtualization - A data storage device comprises virtual storage devices that are each assigned to a subset of data sectors in a non-volatile memory of the data storage device. The data storage device receives configuration metadata for configuring each of the virtual storage devices from a host operating system. The configuration metadata is received in a standard format that is file system independent. The configuration metadata comprises a range of logical block addresses and a virtual storage device number assigned to each of the virtual storage devices. Each of the virtual storage device numbers is a unique identifier used by the data storage device to differentiate between the virtual storage devices. The data storage device uses the virtual storage device numbers and logical block addresses to identify data sectors in the virtual storage devices that are accessible by virtual machine operating systems. | 2009-06-18 |
20090157943 | TRACKING LOAD STORE ORDERING HAZARDS - A method and system for processing data. In one embodiment, the method includes receiving a plurality of stores into a store queue, where each store is a result from a processor, and where the plurality of stores are destined for at least one memory address. The method also includes marking a most recent store of the plurality of stores for each unique memory address, comparing a load request against the store queue, and identifying only the most recent store for each unique memory address for the purpose of handling load-hit-store ordering hazards. | 2009-06-18 |
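A toy model of the marking scheme in the abstract above may help: only the most recent store per unique memory address carries the mark, so a load request has to compare against at most one marked entry per address. The names are invented and the queue is simplified to a Python list; this is a sketch of the idea, not the patented hardware.

```python
# Sketch: a store queue that marks only the most recent store per unique
# address, so load-hit-store checks consider a single entry per address.

class StoreQueue:
    def __init__(self):
        self.entries = []  # (addr, value, most_recent_mark)

    def push(self, addr, value):
        # clear the most-recent mark on any older store to the same address
        self.entries = [(a, v, False if a == addr else m)
                        for a, v, m in self.entries]
        self.entries.append((addr, value, True))

    def load(self, addr):
        # a load only needs to check the single marked (most recent) store
        for a, v, m in self.entries:
            if a == addr and m:
                return v  # load-hit-store: forward the newest value
        return None  # no pending store: the load would go to memory
```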
20090157944 | TRACKING STORE ORDERING HAZARDS IN AN OUT-OF-ORDER STORE QUEUE - A method and system for processing data. In one embodiment, the method includes receiving a first store and receiving a second store subsequent to the first store. The method also includes generating a pointer that points to the last store that needs to retire before the second store retires, where the pointer is associated with the second store, and the last store that needs to retire is the first store. | 2009-06-18 |
20090157945 | Enhanced Processor Virtualization Mechanism Via Saving and Restoring Soft Processor/System States - A method and system are disclosed for saving soft state information, which is non-critical for executing a process in a processor, upon a receipt of a process interrupt by the processor. The soft state is transmitted to a memory associated with the processor via a memory interface. Preferably, the soft state is transmitted within the processor to the memory interface via a scan-chain pathway within the processor, which allows functional data pathways to remain unobstructed by the storage of the soft state. Thereafter, the stored soft state can be restored from memory when the process is again executed. | 2009-06-18 |
20090157946 | MEMORY HAVING IMPROVED READ CAPABILITY - In the present invention, a memory, and in particular, a NOR emulating memory comprises a memory controller having a non-volatile memory for storing program code to initiate the operation of the memory controller. The controller has a first bus for receiving address signals from a host device, a second bus for interfacing with a RAM memory, and a third bus for interfacing with a NAND memory. A volatile RAM memory is connected to the second bus. A NAND memory is connected to the third bus. The controller receives commands and a first address from the first bus, maps the first address to a second address in the NAND memory, and operates the NAND memory in response thereto. The RAM memory serves as cache for data to or from the NAND memory. The controller also maintains data coherence between the data stored in the RAM memory as cache and the data in the NAND memory. The invention further has a first buffer for storing data from the NAND memory in response to a read command to be written to the RAM memory, and a second buffer for storing data from the RAM memory to be written to the NAND memory. In the event of a read operation, if the data from the specified address is in the RAM memory, then the data is read from the RAM memory, completing the read operation. In the event of a read operation, if the data from the specified address is not in the RAM memory, and if there is sufficient space in the RAM memory to store an entire page of data from the NAND memory, then the entire page is read from the NAND memory, stored in the first buffer and then stored in the RAM memory, and the data from the specified address is read out, completing the read operation. 
Finally, in the event of a read operation, if the data from the specified address is not in the RAM memory, and if there is insufficient space in the RAM memory to store an entire page of data from the NAND memory, then an entire page from the RAM memory is first stored in the second buffer, then an entire page is read from the NAND memory, stored in the first buffer, and from the first buffer, stored in the now-freed RAM memory, and the data from the specified address is read out, completing the read operation. The page of data from the second buffer is subsequently stored back into the NAND memory after the completion of the read operation, thereby reducing read latency. | 2009-06-18 |
20090157947 | Memory Apparatus and Method of Evenly Using the Blocks of a Flash Memory - A memory apparatus and a method of evenly using the blocks of a flash memory are provided. The memory apparatus comprises a flash memory and a controller. The flash memory comprises a data region with a plurality of data blocks and a spare region with a plurality of spare blocks. The controller is configured to receive data corresponding to a first data block, select a spare block, program the data into the spare block when the erase count corresponding to the spare block is less than a predetermined value, or select a second data block and program data stored in the second data block into the spare block when the erase count corresponding to the spare block reaches the predetermined value. As a result, the blocks of the flash memory are used evenly. | 2009-06-18 |
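The controller's even-wear decision in the abstract above can be sketched roughly as follows. This is a simplification with invented names: flash state is modeled as a dictionary, and the "second data block" is taken to be a cold block whose data gets relocated into the worn spare block.

```python
# Sketch of the wear-leveling policy: write new data into the selected
# spare block while its erase count is below a threshold; once the
# threshold is reached, relocate data from a cold data block into the
# worn spare, and place the new data in the freed cold block instead.

def program(flash, spare, new_data, threshold, cold_block):
    """flash: {'erase_count': {block: n}, 'blocks': {block: data}}.
    Returns the block that received the new data."""
    if flash["erase_count"][spare] < threshold:
        flash["blocks"][spare] = new_data  # normal path: spare not yet worn
        return spare
    # worn spare: park cold data in it, freeing the cold block for new data
    flash["blocks"][spare] = flash["blocks"][cold_block]
    flash["blocks"][cold_block] = new_data
    return cold_block
```

The effect is that rarely rewritten (cold) data migrates onto heavily erased blocks, while fresh writes land on less-worn blocks, evening out wear across the flash.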
20090157948 | INTELLIGENT MEMORY DATA MANAGEMENT - Systems and/or methods that facilitate data management on a memory device are presented. A data management component can log and tag data creating data tags. The data tags can comprise static metadata, dynamic metadata or a combination thereof. The data management component can perform file management to allocate placement of data and data tags to the memory or to erase data from the memory. Allocation and erasure are based in part on the characteristics of the data tags, and can follow embedded rules, an intelligent component or a combination thereof. The data management component can provide a search activity that can utilize the characteristics of the data tags and an intelligent component. The data management component can thereby optimize the useful life, increase operating speed, improve accuracy and precision, improve efficiency of non-volatile (e.g., flash) memory and provide improved functionality to memory devices. | 2009-06-18 |
20090157949 | ADDRESS TRANSLATION BETWEEN A MEMORY CONTROLLER AND AN EXTERNAL MEMORY DEVICE - In one or more embodiments, address translation is performed over a dedicated serial bus between a non-volatile memory controller and a memory device that is external to the non-volatile memory controller. The memory controller accesses memory address translation data in the external memory device to determine a physical address that corresponds to a logical memory address. The controller can then use the physical memory address to generate memory signals for the non-volatile memory array. | 2009-06-18 |
20090157950 | NAND flash module replacement for DRAM module - An electronic memory module according to the invention provides non-volatile memory that can be used in place of a DRAM module without battery backup. An embodiment of the invention includes an embedded microprocessor with microcode that translates the FB-DIMM address and control signals from the system into appropriate address and control signals for NAND flash memory. Wear-leveling, bad block management, and garbage collection are preferably implemented by microcode executed by the microprocessor. The microprocessor, additional logic, and embedded memory provide the functions of a flash memory controller. The microprocessor memory preferably contains address mapping tables, a free page queue, and garbage collection information. | 2009-06-18 |
20090157951 | INFORMATION RECORDING DEVICE AND INFORMATION RECORDING METHOD - An information recording device includes a table showing, in correspondence, a physical address in first and second areas and a rewrite count at the physical address, the first area being a writing destination in a recording medium configured to be consumed by rewriting, the second area not being a writing destination in the recording medium; an instructor configured to instruct a change of the table based on the table and the physical address of the writing destination; a changer configured to change the table based on the instruction; and a writer configured to write data to the recording medium based on the physical address in the first area. | 2009-06-18 |
20090157952 | Semiconductor memory system and wear-leveling method thereof - Disclosed is a semiconductor memory system and a wear-leveling method thereof. The semiconductor memory system is comprised of a nonvolatile memory including a plurality of logic blocks, each of which is divided into a plurality of entries; a file system detecting a type of data to be stored and allocating the logic block or the entry for storing the data in accordance with the data type; and a translation layer leveling wearing degrees over the logic blocks or the entries in accordance with the data type. The semiconductor memory system is improved in performance and lifetime by managing wearing degrees over the logic blocks or the entries in accordance with the data type. | 2009-06-18 |
20090157953 | Data line disturbance free memory block divided flash memory and microcomputer having flash memory therein - A semiconductor device having an electrically erasable and programmable nonvolatile memory, for example, a rewritable nonvolatile memory including memory cells arranged in rows and columns and disposed to facilitate both flash erasure and selective erasure of individual units of plural memory cells. The semiconductor device, which functions as a microcomputer chip, also has a processing unit and includes an input terminal for receiving an operation mode signal for switching the microcomputer between a first operation mode, in which the flash memory is rewritten under control of the processing unit, and a second operation mode, in which the flash memory is rewritten under control of a separate writing circuit externally connectable to the microcomputer. | 2009-06-18 |
20090157954 | CACHE MEMORY UNIT WITH EARLY WRITE-BACK CAPABILITY AND METHOD OF EARLY WRITE BACK FOR CACHE MEMORY UNIT - A cache memory unit includes: a cache memory; an early write-back condition checking unit for checking whether an early write-back condition has been satisfied; and an early write-back execution unit for monitoring a memory bus connecting the cache memory unit and an external memory unit, and in response to the memory bus being idle and the early write-back condition being satisfied, for causing dirty data in the cache memory to be written back to the external memory unit using the memory bus. | 2009-06-18 |
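The early write-back mechanism in the abstract above lends itself to a short sketch. The condition used here ("enough dirty lines accumulated") is an invented example; the patent leaves the early write-back condition abstract, and the cache is modeled as a plain dictionary.

```python
# Sketch: when the memory bus is idle and an (example) early write-back
# condition holds, dirty cache lines are written back to external memory
# ahead of eviction time; the lines stay cached but become clean.

def early_write_back(cache, memory, bus_idle, dirty_threshold):
    """cache: dict addr -> (data, dirty_flag). Returns lines flushed."""
    dirty = [addr for addr, (_, d) in cache.items() if d]
    if not bus_idle or len(dirty) < dirty_threshold:
        return 0  # bus busy or condition unmet: do nothing
    for addr in dirty:
        data, _ = cache[addr]
        memory[addr] = data          # write back over the idle bus
        cache[addr] = (data, False)  # line is now clean, remains cached
    return len(dirty)
```

The benefit described in the abstract is that later evictions of these lines need no bus traffic, since the write-back already happened while the bus was otherwise idle.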
20090157955 | PREALLOCATED DISK QUEUING - A method, system and computer program product for managing preallocated disk space are presented. The method includes placing a plurality of requests for preallocated disk space on a disk space request queue, wherein each preallocated disk space is preallocated for a fixed amount of disk space and a fixed length of time, and wherein an application using an issued preallocated disk space for more than the fixed length of time results in the application being barred from further current use of the issued preallocated disk space. The requests are sorted in the disk space request queue according to a priority algorithm that establishes a priority level for each of the requests, and preallocated disk space is allocated to requesters according to the priority level established by the priority algorithm. | 2009-06-18 |
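The queuing scheme in the abstract above can be modeled briefly. The priority rule here (lower number wins) and the class names are invented for illustration, and enforcement of the fixed lease time is omitted; the sketch only shows the sorted request queue and priority-ordered grants.

```python
# Sketch of a preallocated-disk-space request queue: each request asks for
# a fixed amount of space for a fixed lease time, and requests are granted
# in priority order as space permits.

import heapq

class PreallocQueue:
    def __init__(self, free_space):
        self.free = free_space
        self.heap = []  # (priority, seq, requester, amount, lease)
        self.seq = 0    # tie-breaker preserving arrival order

    def request(self, requester, amount, lease, priority):
        heapq.heappush(self.heap,
                       (priority, self.seq, requester, amount, lease))
        self.seq += 1

    def grant_next(self):
        if not self.heap:
            return None
        priority, seq, who, amount, lease = self.heap[0]
        if amount > self.free:
            return None  # highest-priority request must wait for space
        heapq.heappop(self.heap)
        self.free -= amount
        return (who, amount, lease)
```

A fuller model would also track lease expiry and bar requesters that hold space past their fixed length of time, as the abstract describes.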
20090157956 | SYSTEM AND METHOD FOR MANAGING DISK SPACE IN A THIN-PROVISIONED STORAGE SUBSYSTEM - A system and method for managing disk space in a thin-provisioned storage subsystem. If a number of free segments in a free segment pool at a storage subsystem is detected as below a desired minimum, one or more of the following is performed: selecting and adding logical devices (LDEVs) from an internal storage as free segments to the free segment pool, transitioning LDEVs to a virtual device (VDEV), and/or selecting and adding LDEVs from an external storage as free segments to the free segment pool. The transitioning includes identifying partially used or completely used LDEVs and transitioning these to the VDEV. Data migration may also occur by: selecting a source segment at a VDEV for migration, reading data from the source segment, writing the data to a target segment, the target segment being a free segment from the free segment pool, and assigning the target segment to the VDEV. | 2009-06-18 |
20090157957 | APPARATUS WITH DISC DRIVE THAT USES CACHE MANAGEMENT POLICY TO REDUCE POWER CONSUMPTION - Data blocks are loaded in multi-block fetch units from a disc. The cache management policy selects data blocks for non-retention in cache memory so as to reduce the number of fetch units that must be fetched. Use is made of the large multi-block fetch unit size to exploit the possibility of loading additional blocks essentially without additional power consumption when a fetch unit has to be fetched to obtain a block. Selection of data blocks for non-retention is biased toward combinations of data blocks that can be fetched together for a next use in one fetch unit. Between fetching of fetch units the disc drive is switched from a read mode to a power saving mode, wherein at least part of the disc drive is deactivated, so that energy consumption is reduced. Retention is managed at a granularity of data blocks, that is, below the level of the fetch units. If a combination of blocks from the same fetch unit can be fetched together at one go before their next use, these blocks are not retained if, as a result, other blocks from a plurality of other fetch units can be retained in their place. | 2009-06-18 |
20090157958 | CLUSTERED STORAGE NETWORK - A data storage network is provided. The network includes a client connected to the data storage network and a plurality of nodes on the data storage network, wherein each node has two or more RAID controllers, and wherein a first RAID controller of a first node is configured to receive a data storage request from the client, to generate RAID parity data on a data set received from the client, and to store all of the generated RAID parity data on a single node of the plurality of nodes. | 2009-06-18 |
20090157959 | STORAGE MEDIUM CONTROL DEVICE, STORAGE MEDIUM MANAGING SYSTEM, STORAGE MEDIUM CONTROL METHOD, AND STORAGE MEDIUM CONTROL PROGRAM - To provide a storage medium control device capable of preventing decrease in the reliability of data saving with a non-redundant structure. Provided is a storage medium control device capable of communicating with a higher-order device, for managing/controlling an information storage device main body configured with physical storage media to be capable of storing information with a non-redundant structure. The device includes: a region allotment processing device for allotting each physical recording medium to a user useable region and to a substitute sector region, respectively; a fault sector detecting device for checking sectors of the user useable region allotted by the region allotment processing device in initialization processing of the non-redundant structure to detect a fault sector from which information cannot be read out; and a fault sector exchange processing device for exchanging the detected fault sector of the user useable region with a normal sector of the substitute sector region. | 2009-06-18 |
20090157960 | INFORMATION PROCESSING APPARATUS AND START-UP METHOD OF THE APPARATUS - An information processing apparatus on which a non-volatile storage device is mountable is provided. The information processing apparatus comprises: a volatile storage unit; a mount unit that mounts the device; an acquisition unit configured to acquire information of the device; an estimation unit that estimates a resume time from hibernation using the device; a first control unit that controls to store the data stored in the volatile storage unit to the device if the resume time is shorter than a predetermined time and to control not to store the data stored in the volatile storage unit to the device if the resume time is longer than the predetermined time; and a second control unit that controls to read the data from the non-volatile storage unit to the volatile storage unit if the data is stored in the non-volatile storage unit. | 2009-06-18 |
20090157961 | TWO-SIDED, DYNAMIC CACHE INJECTION CONTROL - A method, system, and computer program product for two-sided, dynamic cache injection control are provided. An I/O adapter generates an I/O transaction in response to receiving a request for the transaction. The transaction includes an ID field and a requested address. The adapter looks up the address in a cache translation table stored thereon, which includes mappings between addresses and corresponding address space identifiers (ASIDs). The adapter enters an ASID in the ID field when the requested address is present in the cache translation table. IDs corresponding to device identifiers, address ranges, and pattern strings may also be entered. The adapter sends the transaction to one of an I/O hub and system chipset, which, in turn, looks up the ASID in a table stored thereon and injects the requested address and corresponding data in a processor complex when the ASID is present in the table, indicating that the address space corresponding to the ASID is actively running on a processor in the complex. The ASIDs are dynamically determined and set in the adapter during execution of an application in the processor complex. | 2009-06-18 |
20090157962 | CACHE INJECTION USING CLUSTERING - A method and system for cache injection using clustering are provided. The method includes receiving an input/output (I/O) transaction at an input/output device that includes a system chipset or input/output (I/O) hub. The I/O transaction includes an address. The method also includes looking up the address in a cache block indirection table. The cache block indirection table includes fields and entries for addresses and cluster identifiers (IDs). In response to a match resulting from the lookup, the method includes multicasting an injection operation to processor units identified by the cluster ID. | 2009-06-18 |
20090157963 | Contiguously packed data - Data for data elements (e.g., pixels) can be stored in an addressable storage unit that can store a number of bits that is not a whole number multiple of the number of bits of data per data element. Similarly, a number of the data elements can be transferred per unit of time over a bus, where the width of the bus is not a whole number multiple of the number of bits of data per data element. Data for none of the data elements is stored in more than one of the storage units or transferred in more than one unit of time. Also, data for multiple data elements is packaged contiguously in the storage unit or across the width of the bus. | 2009-06-18 |
20090157964 | EFFICIENT DATA STORAGE IN MULTI-PLANE MEMORY DEVICES - A method for data storage includes initially storing a sequence of data pages in a memory that includes multiple memory arrays, such that successive data pages in the sequence are stored in alternation in a first number of the memory arrays. The initially-stored data pages are rearranged in the memory so as to store the successive data pages in the sequence in a second number of the memory arrays, which is less than the first number. The rearranged data pages are read from the second number of the memory arrays. | 2009-06-18 |
20090157965 | Method and Apparatus for Active Software Disown of Cache Line's Exclusive Rights - Software indicates to hardware of a processing system that its storage modification to a particular cache line is done and that it will not be doing any further modification for the time being. With this indication, the processor actively releases its exclusive ownership by updating its line ownership from exclusive to read-only (or shared) in its own cache directory and in the storage controller (SC). By actively giving up the exclusive rights, another processor can immediately be given exclusive ownership of that cache line without waiting on any processor's explicit cross-invalidate acknowledgement. This invention also describes the hardware design needed to provide this support. | 2009-06-18 |
20090157966 | CACHE INJECTION USING SPECULATION - A method, system, and computer program product for cache injection using speculation are provided. The method includes creating a cache line indirection table at an input/output (I/O) hub, which includes fields and entries for addresses, processor ID, and cache type, and includes cache level line limit (CLL) fields. The method also includes setting cache line limits in the CLL fields and receiving a stream of contiguous addresses at the table. For each address in the stream, the method includes: looking up the address in the table; if the address is present in the table, injecting the cache line corresponding to the address in the processor complex; if the address is not present in the table, searching limit values from the lowest-level cache to the highest-level cache; and injecting addresses not present in the table into the cache hierarchy of the processor last injected from the contiguous address stream. | 2009-06-18 |
20090157967 | Pre-Fetch Data and Pre-Fetch Data Relative - A prefetch data machine instruction having an M field performs a function on a cache line of data specifying an address of an operand. The operation comprises either prefetching a cache line of data from memory to a cache or reducing the access ownership of store and fetch or fetch only of the cache line in the cache or a combination thereof. The address of the operand is either based on a register value or the program counter value pointing to the prefetch data machine instruction. | 2009-06-18 |
20090157968 | Cache Memory with Extended Set-associativity of Partner Sets - A cache memory including a plurality of sets of cache lines, and providing an implementation for increasing the associativity of selected sets of cache lines including the combination of providing a group of parameters for determining the worthiness of a cache line stored in a basic set of cache lines, providing a partner set of cache lines, in the cache memory, associated with the basic set, applying the group of parameters to determine the worthiness level of a cache line in the basic set and responsive to a determination of a worthiness in excess of a predetermined level, for a cache line, storing said worthiness level cache line in said partner set. | 2009-06-18 |
20090157969 | BUFFER CACHE MANAGEMENT TO PREVENT DEADLOCKS - A method, computer program product, and data processing system for managing an input/output buffer cache for prevention of deadlocks are disclosed. In a preferred embodiment, automatic buffer cache resizing is performed whenever the number of free buffers in the buffer cache diminishes to below a pre-defined threshold. This resizing adds a pre-defined number of additional buffers to the buffer cache, up to a pre-defined absolute maximum buffer cache size. To prevent deadlocks, an absolute minimum number of free buffers is reserved to ensure that sufficient free buffers for performing a buffer cache resize are always available. In the event that the buffer cache becomes congested and cannot be resized further, threads whose buffer demands cannot be immediately satisfied are blocked until sufficient free buffers become available. | 2009-06-18 |
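The resize-with-reservation policy in the abstract above can be captured in a few lines. All sizes and names here are invented for illustration; in particular, the blocked-thread machinery is reduced to `acquire` returning `False` when a caller would have to wait.

```python
# Sketch: grow the buffer cache by a fixed step when free buffers drop
# below a threshold, up to an absolute maximum; always keep a reserved
# minimum of free buffers so a resize can never be starved of buffers.

class BufferCache:
    def __init__(self, size, threshold, step, max_size, reserve):
        self.size, self.free = size, size
        self.threshold, self.step = threshold, step
        self.max_size, self.reserve = max_size, reserve

    def acquire(self):
        """Take one buffer; False means the caller must block and retry."""
        if self.free - 1 < self.threshold and self.size < self.max_size:
            grow = min(self.step, self.max_size - self.size)
            self.size += grow   # automatic resize below the threshold
            self.free += grow
        if self.free <= self.reserve:
            return False        # congested and at max size: caller blocks
        self.free -= 1
        return True

    def release(self):
        self.free += 1          # a blocked caller can now retry acquire()
```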
20090157970 | METHOD AND SYSTEM FOR INTELLIGENT AND DYNAMIC CACHE REPLACEMENT MANAGEMENT BASED ON EFFICIENT USE OF CACHE FOR INDIVIDUAL PROCESSOR CORE - Determining and applying a cache replacement policy for a computer application running in a computer processing system is accomplished by receiving a processor core data request, adding bits on each cache line of a plurality of cache lines to identify a core ID of an at least one processor core that provides each cache line in a shared cache, allocating a tag table for each processor core, where the tag table keeps track of an index of processor core miss rates, and setting a threshold to define a level of cache usefulness, depending on whether or not the index of processor core miss rates exceeds the threshold. Checking the threshold and when the threshold is not exceeded, then a shared cache standard policy for cache replacement is applied. When the threshold is exceeded, then the cache line from the processor core running the application is evicted from the shared cache. | 2009-06-18 |
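A compact sketch of the victim-selection rule in the abstract above follows. The miss-rate bookkeeping and the fallback to LRU are simplifications with invented names; the point is only that lines tagged with a core ID whose miss-rate index exceeds the threshold are evicted preferentially, and otherwise a standard shared-cache policy applies.

```python
# Sketch: each shared-cache line is tagged with the core ID that brought
# it in; if that core's miss rate exceeds the threshold (low cache
# usefulness), its lines are evicted first, else standard LRU applies.

def pick_victim(lines, miss_rate, threshold):
    """lines: list of (addr, core_id, lru_age); larger age == older.
    miss_rate: dict core_id -> miss-rate index from the per-core tag table."""
    hogs = [ln for ln in lines if miss_rate.get(ln[1], 0.0) > threshold]
    pool = hogs if hogs else lines        # fall back to the standard policy
    return max(pool, key=lambda ln: ln[2])  # evict the oldest in the pool
```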
20090157971 | Integration of Secure Data Transfer Applications for Generic IO Devices - Techniques are presented for sending an application instruction from a hosting digital appliance to a portable medium, where the instruction is structured as one or more units whose size is a first size, or number of bytes. After flushing the contents of a cache, the instruction is written to the cache, where the cache is structured as logical blocks having a size that is a second size that is larger (in terms of number of bytes) than the first size. In writing the instruction (having a command part and, possibly, a data part), the start of the instruction is aligned with one of the logical block boundaries in the cache and the instruction is padded out with dummy data so that it fills an integral number of the cache blocks. When a response from a portable device to an instruction is received at a hosting digital appliance, the cache is similarly flushed prior to receiving the response. The response is then stored to align with a logical block boundary of the cache. | 2009-06-18 |
20090157972 | Hash Optimization System and Method - A computer implemented method, apparatus and program product automatically optimizes hash function operation by recognizing when a first hash function results in an unacceptable number of cache misses, and by dynamically trying another hash function to determine which hash function results in the most cache hits. In this manner, hardware optimizes hash function operation in the face of changing loads and associated data flow patterns. | 2009-06-18 |
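The dynamic hash selection in the abstract above can be sketched as follows. The conflict counter stands in for the hardware's cache-miss measurement, and the particular hash functions and the `acceptable` cutoff are arbitrary illustration choices, not anything specified by the patent.

```python
# Sketch: when the current hash function produces too many conflict
# misses, try an alternative hash function and keep whichever yields fewer.

def conflicts(keys, hash_fn, buckets):
    """Count keys that land in an already-occupied bucket (a proxy for
    cache misses caused by hash collisions)."""
    used = {}
    misses = 0
    for k in keys:
        b = hash_fn(k) % buckets
        misses += used.get(b, 0) > 0  # another key already mapped here
        used[b] = used.get(b, 0) + 1
    return misses

def choose_hash(keys, current, alternative, buckets, acceptable):
    if conflicts(keys, current, buckets) <= acceptable:
        return current  # current hash performs acceptably; keep it
    # too many misses: dynamically try the alternative and keep the better
    if conflicts(keys, alternative, buckets) < conflicts(keys, current, buckets):
        return alternative
    return current
```

For example, strided keys that all collide under one hash can be spread evenly by another, which is exactly the changing-load situation the abstract targets.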
20090157973 | Storage controller for handling data stream and method thereof - A storage controller for handling a data stream having a data integrity field (DIF), and a method thereof. The storage controller comprises a host-side I/O controller for receiving a data stream from a host entity, a device-side I/O controller for connecting to a physical storage device, and a central processing circuitry having at least one DIF I/O interface for handling DIF data so as to reduce the number of memory accesses to the main memory of the storage controller. | 2009-06-18 |
20090157974 | System And Method For Clearing Data From A Cache - A system and method for clearing data from a cache is disclosed. The method may include the steps of receiving data at a cache of a self-caching storage device, determining a cost-effectiveness of flushing a logical block from the cache and, if the current available capacity of the cache is greater than a minimum capacity parameter, only flushing the logical block if a predetermined criteria is met, regardless of whether the storage device is idle. The system may include a cache storage, a main storage and a controller configured to only flush a logical block from the cache if a determined cost effectiveness meets a predetermined criteria when the current available capacity of the cache is greater than a minimum capacity parameter. | 2009-06-18 |
20090157975 | Memory-centric Page Table Walker - The page table walker is moved from its conventional location in the memory management unit associated with the data processor to a location in main memory, i.e., the main memory controller. As a result, an implementation is provided wherein the processing of requests for data can selectively avoid or bypass cumbersome caches associated with the data processor. | 2009-06-18 |
20090157976 | Network on Chip That Maintains Cache Coherency With Invalidate Commands - A network on chip (‘NOC’) that maintains cache coherency with invalidate commands, the NOC comprising integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, the NOC also including a port on a router of the network through which is received an invalidate command, the invalidate command including an identification of a cache line, the invalidate command representing an instruction to invalidate the cache line, the router configured to send the invalidate command to an IP block served by the router; the router further configured to send the invalidate command horizontally and vertically to neighboring routers if the port is a vertical port; and the router further configured to send the invalidate command only horizontally to neighboring routers if the port is a horizontal port. | 2009-06-18 |
20090157977 | DATA TRANSFER TO MEMORY OVER AN INPUT/OUTPUT (I/O) INTERCONNECT - A method, system, and computer program product for data transfer to memory over an input/output (I/O) interconnect are provided. The method includes reading a mailbox stored on an I/O adapter in response to a request to initiate an I/O transaction. The mailbox stores a directive that defines a condition under which cache injection for data values in the I/O transaction will not be performed. The method also includes embedding a hint into the I/O transaction when the directive in the mailbox matches data received in the request, and executing the I/O transaction. The execution of the I/O transaction causes a system chipset or I/O hub for a processor receiving the I/O transaction, to directly store the data values from the I/O transaction into system memory and to suppress the cache injection of the data values into a cache memory upon presence of the hint in a header of the I/O transaction. | 2009-06-18 |
20090157978 | TARGET COMPUTER PROCESSOR UNIT (CPU) DETERMINATION DURING CACHE INJECTION USING INPUT/OUTPUT (I/O) ADAPTER RESOURCES - A method, system, and computer program product for target computer processor unit (CPU) determination during cache injection using input/output (I/O) adapter resources are provided. The method includes storing locations of cache lines for pinned or affinity scheduled processes in a table on an input/output (I/O) adapter. The method also includes setting a cache injection hint in an input/output (I/O) transaction when an address in the I/O transaction is found in the table. The cache injection hint is set for performing direct cache injection. The method further includes entering a central processing unit (CPU) identifier and cache type in the I/O transaction, and updating a cache by injecting data values of the I/O transaction into the cache as determined by the CPU identifier and the cache type associated with the address in the table. | 2009-06-18 |
20090157979 | TARGET COMPUTER PROCESSOR UNIT (CPU) DETERMINATION DURING CACHE INJECTION USING INPUT/OUTPUT (I/O) HUB/CHIPSET RESOURCES - A method, system, and computer program product for target computer processor unit (CPU) determination during cache injection using I/O hub/chipset resources are provided. The method includes creating a cache injection indirection table on the input/output (I/O) hub or chipset. The cache injection indirection table includes fields for address or address range, CPU identifier, and cache type. In response to receiving an input/output (I/O) transaction, the hub/chipset reads the address in an address field of the I/O transaction, looks up the address in the cache injection indirection table, and injects the address and data of the I/O transaction to a target cache associated with a CPU as identified in the CPU identifier field when, in response to the look up, the address is present in the address field of the cache injection indirection table. | 2009-06-18 |
20090157980 | Memory controller with write data cache and read data cache - A memory controller | 2009-06-18 |
20090157981 | COHERENT INSTRUCTION CACHE UTILIZING CACHE-OP EXECUTION RESOURCES - A multiprocessor system maintains cache coherence among processors in a coherent domain. Within the coherent domain, a first processor can receive a command to perform a cache maintenance operation. The first processor can determine whether the cache maintenance operation is a coherent operation. For coherent operations, the first processor sends a coherent request message for distribution to other processors in the coherent domain and can cancel execution of the cache maintenance operation pending receipt of intervention messages corresponding to the coherent request. The intervention messages can reflect a global ordering of coherence traffic in the multiprocessor system and can include instructions for maintaining a data cache and an instruction cache of the first processor. Cache maintenance operations that are determined to be non-coherent can be executed at the first processor without sending the coherent request. | 2009-06-18 |
20090157982 | MULTIPLE MISS CACHE - Presented herein are system(s) and method(s) for a multiple miss cache. In one embodiment, there is presented a cache system for storing data. The cache comprises a plurality of data words, a plurality of first bits, and a plurality of second bits. The plurality of data words store data. The plurality of first bits correspond to particular ones of the plurality of data words, each of the plurality of bits indicating whether the data word corresponding thereto stores valid data. The plurality of second bits correspond to particular ones of the plurality of data words, each of the plurality of bits for indicating whether a cache miss has occurred with the data word corresponding thereto. | 2009-06-18 |
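The per-word bookkeeping described in 20090157982 above can be sketched as a toy software model. The class below is an illustrative assumption (the patent describes a hardware cache): each word carries a valid bit and a second bit recording whether a miss has occurred for that word:

```python
class MultipleMissCache:
    """Toy model of a cache where each data word carries a valid bit
    and a separate bit recording whether a miss has occurred for it."""
    def __init__(self, n_words):
        self.words = [None] * n_words    # the data words
        self.valid = [False] * n_words   # first bits: word holds valid data
        self.missed = [False] * n_words  # second bits: a miss occurred here

    def read(self, index, backing):
        if self.valid[index]:
            return self.words[index]     # hit: valid data already present
        self.missed[index] = True        # record the miss for this word
        self.words[index] = backing[index]  # fill from backing store
        self.valid[index] = True
        return self.words[index]
```

After the first read of a word, its valid bit is set and subsequent reads hit; the miss bit remains as a record that the word was once missed.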
20090157983 | METHOD AND APPARATUS FOR USING A VARIABLE PAGE LENGTH IN A MEMORY - A controller, a memory device including a memory array, and a method for accessing the memory device. The method includes, during a first access, activating a first page of the memory array corresponding to a first row address and accessing data from the first page with a first column address. The method further includes, during a second access, activating a first sub-page of the memory array corresponding to a second row address and accessing data from the first sub-page with a second column address. The activated first sub-page of the memory array is smaller than the first page of the memory array. The method further includes activating a second sub-page without receiving a separate activate command. | 2009-06-18 |
20090157984 | Avoiding use of an inter-unit network in a storage system having multiple storage control units - A storage system provides virtual ports, and is able to transfer the virtual ports among physical ports located on multiple storage control units making up the storage system. The storage system is able to manage logical volumes and/or virtual volumes and virtual ports as a group when considering whether to move logical/virtual volumes and/or virtual ports to another storage control unit in the storage system. When the storage system is instructed to transfer volumes, virtual ports, or a group of volumes and virtual ports among the storage control units, the storage system determines whether an inter-unit network will be required to be used following the transfer. When the storage system determines that the inter-unit network will be required if the transfer takes place, the storage system determines and presents an alternate storage control unit for the transfer to avoid use of the inter-unit network, thereby avoiding degraded performance. | 2009-06-18 |
20090157985 | Accessing memory arrays - A memory controller for controlling access to a memory, said memory comprising at least one memory array, said at least one memory array comprising a plurality of rows and a plurality of columns, access to an element within said memory array being performed by opening a row comprising said element and then accessing a column comprising said element, said at least one memory array being adapted to have no more than one row in said at least one memory array open at a time; said memory controller being responsive to a memory access request to access an element within said memory and following said access to determine if said row comprising said accessed element should be closed or should remain open in dependence upon a property of said memory access request. | 2009-06-18 |
20090157986 | MEMORY CONTROLLER - A memory controller includes a digitally programmable delay unit having a selectable delay time, receiving a read-enable signal and outputting a delayed read-enable signal. The delay time is selected in response to an externally applied delay-control signal. A sampling unit in the memory controller outputs data received from a separate memory in synchronization with the delayed read-enable signal. The delay time may be a multiple of the period of a clock signal. | 2009-06-18 |
20090157987 | System and Method for Creating Self-Authenticating Documents Including Unique Content Identifiers - One embodiment of a method for creating a self-authenticating document includes receiving a request to retrieve a data element identified by a content identifier, identifying a storage location associated with the content identifier, retrieving a data element stored at the storage location, calculating a second content identifier of the retrieved data element, comparing the content identifier and the second content identifier, if the content identifier and the second content identifier match, creating an image of the retrieved data element, creating a representation of the stored content identifier, creating a representation of metadata associated with the retrieved data element, and creating a document that includes the image of the retrieved data element, the representation of the stored content identifier, and the representation of metadata. The representation of the stored content identifier may be an alphanumeric string or a graphical representation derived from the stored content identifier. | 2009-06-18 |
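The verify-then-package flow in 20090157987 above can be sketched concisely. As an assumption, the content identifier is taken to be a SHA-256 hex digest of the data element (the abstract does not name a hash), and the "document" is a simple dictionary standing in for the generated image, identifier representation, and metadata:

```python
import hashlib

def verify_and_package(store, content_id):
    """Retrieve a data element by its content identifier, recompute the
    identifier from the retrieved bytes, and build a simple document
    only when the stored and recomputed identifiers match."""
    data = store[content_id]  # storage location keyed by identifier
    recomputed = hashlib.sha256(data).hexdigest()
    if recomputed != content_id:
        raise ValueError("stored data does not match its identifier")
    return {
        "image": data,                       # stand-in for the data image
        "content_id": content_id,            # alphanumeric representation
        "metadata": {"length": len(data)},   # stand-in for real metadata
    }
```

Because the identifier is derived from the content itself, any tampering with the stored element causes the recomputed identifier to diverge and the document is never produced.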
20090157988 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - Disclosed is a data processing apparatus that includes a plurality of ports for inputting and outputting a clip including a plurality of types of essence, a memory for storing the clip when recording or playing back the clip from a recording medium, and a generator that stores the types of essence in separate regions of the memory and generates identification information identifying the types of essence, while generating linking information indicating an association between the regions of the memory storing one of the types of essence as a master essence and the regions of the memory storing the remaining types of essence. The apparatus further includes a control unit that outputs, via the linking information, the master essence in its regions and the remaining essence in the regions associated therewith from the designated ports when the master essence in the clip of the video data subject to a playback request designating the ports is stored in the memory. | 2009-06-18 |
20090157989 | Distributing Metadata Across Multiple Different Disruption Regions Within an Asymmetric Memory System - Metadata that corresponds to application data is distributed across different disruption regions of an asymmetric memory component such that metadata is written in the same disruption region as the application data to which it corresponds. A first block of application data is written to a first disruption region and a second block of application data is written to a second disruption region. A first block of metadata corresponding to the first block of application data and a second block of metadata corresponding to the second block of application data both are generated. The first block of metadata is written to the first disruption region and the second block of metadata is written to the second disruption region such that the first and second blocks of metadata are written to the same disruption regions as the blocks of application data to which they correspond. | 2009-06-18 |
20090157990 | Backing-up apparatus, backing-up method, and backing-up program - A backing-up apparatus, upon receiving an instruction to execute backing up, allocates a storage area to store a snapshot to be produced for each time point indicated by the instruction. When the original data is updated after a time point indicated by the instruction, it is checked whether the original data corresponding to the updated place, as it existed immediately before that time point, is stored in the storage area allocated to store the latest snapshot produced for the immediately previous time point. When it is confirmed that the original data is not stored, the original data immediately before the update corresponding to the updated place is stored only in the storage area for the latest snapshot. | 2009-06-18 |
20090157991 | Reliable storage of data in a distributed storage system - The present invention relates to the reliable storage of data within a distributed storage system. A method and system for storing a data unit within a distributed storage system is disclosed, wherein the distributed storage system comprises a plurality of storage elements of unspecified system reliability, a public network interconnecting the plurality of storage elements and a reliability index control unit measuring a plurality of storage element reliability indexes associated with the plurality of storage elements. The data unit is stored following the steps of receiving a request to store the data unit according to a data unit reliability index and storing replicated copies of the data unit in at least one storage element, such that the data unit reliability index is achieved. | 2009-06-18 |
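The replication decision in 20090157991 above can be sketched with a standard reliability calculation. This is an illustrative assumption about how the reliability indexes combine: the data survives unless every chosen element fails, so the combined reliability of independent elements is 1 minus the product of their failure probabilities:

```python
def choose_replicas(element_reliabilities, target):
    """Greedily pick storage elements, most reliable first, until the
    combined reliability 1 - prod(1 - r) reaches the requested data
    unit reliability index. Assumes independent element failures."""
    chosen = []
    p_all_fail = 1.0
    for r in sorted(element_reliabilities, reverse=True):
        chosen.append(r)
        p_all_fail *= (1.0 - r)       # all chosen replicas fail together
        if 1.0 - p_all_fail >= target:
            return chosen             # requested index achieved
    raise ValueError("target reliability unreachable with given elements")
```

For example, two elements of reliability 0.9 give a combined reliability of about 0.99, so a target index of 0.98 is met with two replicas.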
20090157992 | Docbase management system and implementing method thereof - The present invention discloses a docbase management system, including a first module, adapted to parse a received invocation from an application and generate an execution plan which comprises operations on physical storage; a second module, adapted to execute the execution plan to schedule a third module to execute the operations on physical storage in the execution plan; and the third module, adapted to execute the operations on physical storage in the execution plan under the scheduling of the second module. Since the implementation of the docbase management system is divided into hierarchies, and the hierarchies are independent of each other, the docbase management system is well extendable, scalable and maintainable. | 2009-06-18 |
20090157993 | MECHANISM FOR ENABLING FULL DATA BUS UTILIZATION WITHOUT INCREASING DATA GRANULARITY - A memory is disclosed comprising a first memory portion, a second memory portion, and an interface, wherein the memory portions are electrically isolated from each other and the interface is capable of receiving a row command and a column command in the time it takes to cycle the memory once. By interleaving access requests (comprising row commands and column commands) to the different portions of the memory, and by properly timing these access requests, it is possible to achieve full data bus utilization in the memory without increasing data granularity. | 2009-06-18 |