42nd week of 2010 patent application highlights part 58 |
Patent application number | Title | Published |
20100268810 | Information communication system, information processing apparatus, information communication program, and information communication method - An information communication system includes: a communication line which connects a first information processing apparatus and a second information processing apparatus to each other; a transmission unit which is included in the first information processing apparatus and transmits identity information of the first information processing apparatus without passing through the communication line; a reception unit which is included in the second information processing apparatus and receives the identity information of the first information processing apparatus transmitted from the transmission unit without passing through the communication line; and an information transmission unit which is included in the second information processing apparatus and transmits information to the first information processing apparatus via the communication line by using the identity information received by the reception unit. | 2010-10-21 |
20100268811 | METHOD AND SYSTEM FOR DISCOVERING MANAGED SYSTEMS IN A NETWORK - A method for discovering managed systems in a network including classifying a first managed system associated with a first active Internet Protocol (IP) address in the network using a plurality of network protocols, identifying a set of drivers using the classification, where the set of drivers are configured to obtain first management information about the managed system, obtaining a first set of drivers, populating a data model with the first management information obtained using at least one of the first set of drivers, and managing the first managed system using the data model. | 2010-10-21 |
20100268812 | System and Method of Migrating Virtualized Environments - A system and method of migrating virtualized environments is disclosed. According to an aspect of the disclosure, the information handling system can include a migration monitor configured to initiate migration of a remote virtualized environment operating on a first remote system. The information handling system can also include a trusted platform module including a local memory storing a plurality of access keys configured to enable use of a plurality of virtualized environments. According to an aspect, the plurality of access keys can include a first access key configured to be used with a first remote system. The information handling system can also include a secure communication channel configured to enable a mapping of the first access key to a second remote system upon the migration monitor determining the second remote system is capable of satisfying an operating characteristic of the remote virtualized environment. | 2010-10-21 |
20100268813 | SYSTEM AND METHOD FOR HANDLING REMOTE DRAWING COMMANDS - Examples of systems and methods are provided for handling remote drawing commands. A system may comprise a buffer module configured to receive, at the system from a remote server system over a remote access connection between the system and the remote server system during a remote connection session, remote drawing commands, according to a drawing command rate, of a remote application running on the remote server system. The buffer module may be configured to store the remote drawing commands. The system may comprise a timer module configured to facilitate sending at least some of the remote drawing commands in the buffer module to a graphics module according to a refresh rate that is less than the drawing command rate. The timer module may be application agnostic. | 2010-10-21 |
20100268814 | Intercept Device for Providing Content - Described are computerized methods and apparatuses, including computer program products, for network virtualization. An intercept device receives a DNS response message from a DNS server. The DNS response includes a domain name, a network address associated with the domain name, and a destination address of a first network device. The intercept device determines whether the domain name satisfies a DNS intercept criterion. If the domain name satisfies the DNS intercept criterion, then a request intercept criterion is updated to include the network address associated with the domain name. The DNS response message is transmitted on to the first network device by the intercept server. | 2010-10-21 |
20100268815 | LIGHTWEIGHT DIRECTORY ACCESS PROTOCOL (LDAP) COLLISION DETECTION MECHANISM AND METHOD - An LDAP collision detection mechanism and a method are described herein that allow an LDAP client to detect and avoid an update operation collision on an entry within an LDAP directory. The method includes the steps of: (a) reading data from the entry in the directory; (b) processing the retrieved data; (c) sending a request to modify the data in the entry in the directory, wherein the client is assured that the requested modification will not be performed by the directory if another client had previously performed a modification on the data within the entry that was originally read by the client. There are several different embodiments of the LDAP collision detection mechanism and the method described herein. | 2010-10-21 |
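The read/process/conditional-modify pattern in 20100268815 is a form of optimistic concurrency control. A minimal sketch, assuming a hypothetical in-memory directory where each entry carries a version counter bumped on every write (the patent's mechanism operates over real LDAP, which this stand-in only imitates):

```python
class CollisionError(Exception):
    """Raised when another client modified the entry after our read."""

class Directory:
    """Hypothetical in-memory stand-in for an LDAP directory."""
    def __init__(self):
        self._entries = {}  # dn -> (attrs dict, version)

    def add(self, dn, attrs):
        self._entries[dn] = (dict(attrs), 0)

    def read(self, dn):
        attrs, version = self._entries[dn]
        return dict(attrs), version

    def modify(self, dn, attrs, expected_version):
        """Apply the modification only if nobody wrote since our read."""
        _, current = self._entries[dn]
        if current != expected_version:
            raise CollisionError(f"{dn}: entry changed since it was read")
        self._entries[dn] = (dict(attrs), current + 1)

# Two clients read the same entry, then both try to write.
d = Directory()
d.add("uid=alice", {"mail": "old@example.com"})
attrs_a, ver_a = d.read("uid=alice")
attrs_b, ver_b = d.read("uid=alice")

attrs_a["mail"] = "alice@example.com"
d.modify("uid=alice", attrs_a, ver_a)       # first writer succeeds
collided = False
try:
    attrs_b["mail"] = "b@example.com"
    d.modify("uid=alice", attrs_b, ver_b)   # second writer collides
except CollisionError:
    collided = True
```

The second modify is rejected rather than silently overwriting the first client's change, which is exactly the assurance the abstract describes.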
20100268816 | PERFORMANCE MONITORING SYSTEM, BOTTLENECK DETECTION METHOD AND MANAGEMENT SERVER FOR VIRTUAL MACHINE SYSTEM - A performance monitoring system, comprising: a server; a storage system; and a management server, wherein the management server is configured to: obtain the gathered time-sequential data from the server; judge whether at least one bottleneck has occurred in the logical resource of a specified one of the plurality of virtual machines at each time of the obtained time-sequential data; judge whether at least one bottleneck causing large influence on the specified one of the plurality of virtual machines has occurred; and notify that at least one large bottleneck has occurred in the specified one of the plurality of virtual machines. | 2010-10-21 |
20100268817 | HIERARCHICAL TREE-BASED PROTECTION SCHEME FOR MESH NETWORKS - In a hierarchical tree-based protection scheme, a node in a mesh network is designated as a root node of a spanning hierarchical protection tree and subsequently invites each adjacent node to become its child within the tree. If the inviting node provides a more capacious protection path to the root node than is currently enjoyed by the invitee, the invitee designates the inviting node as its primary parent and assumes a new tree position. Otherwise, the invitee designates the inviting node as a backup parent. A node assuming a new tree position invites all adjacent nodes except its parent to become its child. The invitations propagate throughout the network until a spanning hierarchical protection tree is formed. Upon a subsequent failure of a straddling link, the tree may be used to re-route data. Further, given a tree link failure, protection switching is quickly achieved at a disconnected node through use of a backup parent as the new primary parent. Dynamic tree reconfiguration in the event of network topology changes may be limited to the network area surrounding the change. | 2010-10-21 |
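The invite/accept rule in 20100268817 (join an inviter only if it offers a more capacious path to the root, then re-invite your own neighbors) can be sketched as a propagation over an adjacency map. Assumptions for illustration: the capacity of a path is the minimum link capacity along it, and `links` is a hypothetical capacity map; the patent itself leaves the capacity metric to the implementation.

```python
from collections import deque

def build_protection_tree(links, root):
    """Form a spanning tree where each node's primary parent is the
    neighbor offering the most capacious (max of min-link-capacity)
    path back to the root.  `links` maps (u, v) -> link capacity."""
    neighbors = {}
    for (u, v), cap in links.items():
        neighbors.setdefault(u, {})[v] = cap
        neighbors.setdefault(v, {})[u] = cap

    parent = {root: None}
    best = {root: float("inf")}   # capacity of best known path to root
    queue = deque([root])
    while queue:
        inviter = queue.popleft()
        for invitee, cap in neighbors[inviter].items():
            offered = min(best[inviter], cap)
            # Accept only if the inviter improves our path to the root;
            # otherwise the inviter would at best be a backup parent.
            if offered > best.get(invitee, 0):
                best[invitee] = offered
                parent[invitee] = inviter
                queue.append(invitee)   # assume new position, re-invite
    return parent, best

links = {("R", "A"): 10, ("R", "B"): 3, ("A", "B"): 8, ("B", "C"): 5}
parent, best = build_protection_tree(links, "R")
```

Here B initially joins the root directly over the capacity-3 link, then switches its primary parent to A when A's invitation offers a capacity-8 path, mirroring the tree-position reassignment the abstract describes.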
20100268818 | SYSTEMS AND METHODS FOR FORENSIC ANALYSIS OF NETWORK BEHAVIOR - Systems and methods monitor and manage computer network traffic and identify a status of normality or consistency of the traffic on a per user, per Internet Protocol (IP) address or MAC address basis. More specifically, the systems and methods determine, with degrees of significance, the abnormality or inconsistency of network traffic from a user, IP address or MAC address based on a comparison of said network traffic to previous network traffic from the same location. Moreover, the systems and methods monitor and manage the network traffic whereby, after an anomaly has occurred, network traffic is tagged as suspicious and thereafter is flagged for forensic study and placed in storage. In addition, the systems and methods report tagged traffic and alert administrators of a breach or violation in the computer network. | 2010-10-21 |
20100268819 | EVENT PROBLEM REPORT BUNDLES IN XML FORMAT - A network device may include logic configured to detect that an event has occurred in the network device, determine an XML document structure based on the detected event, and generate an XML document with the determined structure including information relating to the detected event. | 2010-10-21 |
20100268820 | USER DATA SERVER SYSTEM, METHOD AND APPARATUS - A user data server system includes: a data storage node, which stores user data, registers the user data in a Distributed Hash Table (DHT) network by using a key, and receives and processes user data operation requests; a DHT index node, which creates and maintains DHT routing information according to a DHT algorithm, stores information of data storage nodes where user data is stored according to the key, and searches for information of a data storage node where target user data is stored according to the key; a DHT super maintenance node, which manages and optimizes the DHT network; and a front end node capable of protocol processing and service processing, which obtains a key associated with a target user, obtains information of a data storage node where the target user data is stored by querying the DHT index node via the DHT network according to the key, and performs operations on the user data stored in the data storage node where the target user data is stored. With the technical solution provided by the present invention, a user data server is not centralized, and is highly scalable, highly reliable and cost-effective. | 2010-10-21 |
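The index-node role in 20100268820 (resolve a user key to the data storage node holding that user's data) is commonly realized with consistent hashing over a ring. A minimal sketch, assuming SHA-1 ring positions and a single flat index; the patent does not specify the DHT algorithm, so these are illustrative choices:

```python
import hashlib
from bisect import bisect_right

def ring_position(name):
    """Map a name onto the 160-bit SHA-1 hash ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class DHTIndex:
    """Hypothetical index node: routes a user key to the data storage
    node whose ring position is the first at or after the key's."""
    def __init__(self, storage_nodes):
        self._ring = sorted((ring_position(n), n) for n in storage_nodes)

    def lookup(self, user_key):
        positions = [pos for pos, _ in self._ring]
        i = bisect_right(positions, ring_position(user_key))
        return self._ring[i % len(self._ring)][1]  # wrap around the ring

index = DHTIndex(["node-1", "node-2", "node-3"])
node = index.lookup("user:alice")
# A front-end node would now send the user-data operation to `node`.
```

Because every index node maps the same key to the same ring position, lookups stay consistent without any central coordinator, which is the decentralization property the abstract claims.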
20100268821 | SEQUENCED TRANSMISSION OF DIGITAL CONTENT ITEMS - The disclosure provides a system and method for managing and sequencing the transmission of digital content items from a network-accessible content service to a portable digital content device. The content service includes a cache management subsystem and provides storage for a plurality of playlists which are variously associated with user accounts and which each contain one or more digital content items. The cache management subsystem is configured to sequence transmission of digital content items to a given portable device based on attributes associated with the playlists containing the digital content items to be transmitted to the device. | 2010-10-21 |
20100268822 | SYSTEM AND METHOD FOR DETERMINING A MAXIMUM PACKET DATA UNIT (PDU) PAYLOAD TRANSMISSION SIZE FOR COMMUNICATING IN A MANAGED COMPUTER NETWORK SYSTEM - A system and method for substantially preventing firewall generated communication losses in regard to communications by authorized users in a managed computer network system is provided. The method comprises transmitting one or more status inquiry commands to at least one node in the managed computer network system, wherein the status inquiry command requests a first quantity of objects from the at least one node; receiving a non-zero quantity of objects response from the at least one node; and limiting communications through the firewall in the managed computer network system with the at least one node to a message size substantially equivalent to the received non-zero quantity of objects response from the at least one node, thereby substantially preventing firewall generated communication losses in the managed computer network system. | 2010-10-21 |
20100268823 | BROADBAND WIRELESS NETWORK - Disclosed herein are methods and apparatus for operating and deploying a broadband wireless network having at least one data transmission node and a plurality of CPE units, wherein there is a wireless data link at least in part between the data transmission node and the CPE units, and further wherein the management and configuration of the network is managed centrally and at least one of authorization, authentication, data stream prioritization or queuing is accomplished through the operation of the CPE units. According to one embodiment there is provided a user group manager that provides a user interface for at least one local service provider to manage information about end users served by the local service provider. In another embodiment, management and configuration of the network is managed using a device that communicates with CPE units and the data transmission nodes. The system and method further provides a data transmission node that includes routing capability, wherein the data transmission node is located with at least one CPE unit. In another embodiment, a network supervision and management device holds an original configuration file for each CPE unit wherein each CPE unit further includes a configuration file that includes an address reference to one or more of the supervision and management devices thereby providing for connectivity to the supervision and management devices and capability of redundancy when more than one supervision and management device is referenced. | 2010-10-21 |
20100268824 | SYSTEM AND METHOD FOR CROSS-AUTHORITATIVE CONFIGURATION MANAGEMENT - A system and method for cross-authoritative, user-based network configuration management is provided. Users log-in to a network using any device coupled to the network, and an identity manager may provide the user with a custom computing environment by verifying the user's identity and identifying content, assignments, and other configuration information associated with the user. For instance, the identity manager may retrieve a unique identifier assigned to the user, query one or more authoritative source domains based on the unique identifier, and deliver a computing environment assigned to the user. By seamlessly integrating multiple authoritative sources, administrators can make assignments to users across multiple authoritative source domains, and queries to the sources will always be up-to-date without having to perform synchronization processes. | 2010-10-21 |
20100268825 | SCHEDULING METHOD AND SCHEDULING INFORMATION SYNCHRONIZING METHOD IN WIRELESS AD HOC NETWORK - A decentralized scheduling method in a wireless ad hoc network is provided which includes grouping nodes in the network cluster by cluster, determining a cluster head of each cluster, and sequentially performing scheduling cluster by cluster. Accordingly, it is possible to provide an efficient cluster-based scheduling method which quickly adapts to changes and reduces power consumption. | 2010-10-21 |
20100268826 | METHOD AND APPARATUS FOR USE IN A COMMUNICATIONS NETWORK - A Serving Call Session Control Function (S-CSCF) of an IP Multimedia Subsystem (IMS) determines charging capabilities of an Application Server (AS) of the IMS providing the service for each of a plurality of services being administered to users by the S-CSCF. The S-CSCF cooperates with the AS in the generation of charging information relating to the service. In this way, the S-CSCF is able to cooperate with the AS to avoid an unnecessary duplication of charging information between the S-CSCF and the AS. The S-CSCF is also able to cooperate with the AS to avoid an unnecessary loss of charging information between the S-CSCF and the AS. | 2010-10-21 |
20100268827 | Method And System For Providing Dynamic Hosted Service Management Across Disparate Accounts/Sites - A hosted service provider for the Internet is operated so as to provide dynamic management of hosted services across disparate customer accounts and/or geographically distinct sites. | 2010-10-21 |
20100268828 | METHOD AND APPARATUS FOR TRANSFERRING REMOTE SESSION DATA - Examples of systems and methods are provided for communication and for forwarding display data related to a remote session between a client device and a remote server to a host device. The system may facilitate establishing the remote session with the remote server. The system may facilitate establishing a trusted relationship between the client device and the host device. The system may filter out data related to local graphical user interface (GUI) and selectively forward from the client device to the host device display data related to the remote session established between the client device and the remote server. | 2010-10-21 |
20100268829 | Selecting proxies from among autodiscovered proxies - Network devices include proxies, and where multiple proxies are present on a network, they can probe to determine the existence of other proxies. Where more than two proxies are present and thus different proxy pairings are possible, the proxies are programmed to determine which proxies should form a proxy pair. Marked probe packets are used by proxies to discover each other, and probing is done such that a connection can eventually be formed even if some probe packets fail due to the marking. Asymmetric routing can be detected and proxies configured for connection forwarding as necessary. | 2010-10-21 |
20100268830 | WEIGHTING SOCIAL NETWORK RELATIONSHIPS BASED ON COMMUNICATIONS HISTORY - A method may include receiving or retrieving session information associated with one or more electronic communications that occurred outside of the social network site and included users of the social network site, comparing the session information with information associated with the users of the social network site, selecting one of the users as a user of the social network site and one or more of the users as one or more connections of the selected user of the social network site, based on the comparing, generating a weighted social graph for the selected user of the social network site, based on the session information, generating social network information based on the weighted social graph, and providing the social network information to at least one of the selected user of the social network site or the selected one or more connections of the user of the social network site. | 2010-10-21 |
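The weighting step in 20100268830 can be sketched as counting out-of-site communication sessions per pair of matched users. Assumptions for illustration: sessions are already matched to site users as (initiator, recipient) pairs, and the edge weight is a plain session count; the patent leaves the weighting function unspecified.

```python
from collections import Counter

def weighted_social_graph(user, sessions):
    """Build edge weights for `user` from a communications history.
    `sessions` is an iterable of (initiator, recipient) pairs taken
    from communications that occurred outside the social network site."""
    weights = Counter()
    for a, b in sessions:
        if a == user:
            weights[b] += 1   # user initiated a session to b
        elif b == user:
            weights[a] += 1   # a initiated a session to the user
    return dict(weights)

sessions = [("alice", "bob"), ("carol", "alice"),
            ("alice", "bob"), ("bob", "carol")]
graph = weighted_social_graph("alice", sessions)
```

The resulting weights (here, bob: 2, carol: 1) are what downstream steps would use to rank connections when generating social network information for the selected user.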
20100268831 | Thin Client Session Management - Thin client session management is described. In embodiments, a thin client device senses a usage context for the thin client device, and a process analyses the usage context to automatically select a session for the thin client device to connect to. Embodiments describe how the sensed usage context can indicate a location of the thin client device, movement of the thin client device, swapping of thin client devices or an identity of a user of the thin client device. Embodiments also describe how the thin client can be automatically authorized to access a selected session, based on the usage context. In other embodiments, a thin client device comprises a sensing device that can indicate a usage context for the thin client. Embodiments describe how the sensing device can determine that the thin client device is located in a docking station, and identify the docking station. | 2010-10-21 |
20100268832 | SYSTEMS AND METHODS FOR ESTABLISHING CONNECTIONS BETWEEN DEVICES COMMUNICATING OVER A NETWORK - Systems and methods are described for establishing a connection between a client and a server that are each communicating via a network. The methods and techniques may be used, for example, to establish a media streaming connection between a media player and a placeshifting device when a firewall or other impediment to direct network connections exists. A relay server receives connection requests from the client and from the server via the network. In response to receiving the requests, a first connection is established between the relay server and the client and a second connection between the relay server and the server. Data received by the relay server on each of the first and second connections is relayed to the other of the first and second connections to thereby establish the connection between the client and the server via the relay server. | 2010-10-21 |
20100268833 | COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND COMMUNICATION SESSION CENTRALIZING APPARATUS - A communication system which causes a terminal apparatus to access a server apparatus via a network includes a communication session centralizing apparatus between the network and at least one terminal apparatus. The communication session centralizing apparatus performs, for each user of the terminal apparatus, processing of establishing a session for a communication partner terminal on behalf of the user via a control apparatus of the network using a predetermined signaling protocol to obtain a use permission of the network. | 2010-10-21 |
20100268834 | Method For Embedding Meta-Commands in Normal Network Packets - A method for synchronizing different components of a computer network system using meta-commands embedded in normal network packets. The data communication channel between different components of a computer network system can be used to transport meta-commands piggybacked in normal network packets, without modifying or compromising the validity of the protocol message. Embodiments of the method can be used for embedding test synchronization and control commands into the network packets sent through a device or system under test. The device or system under test can be an edge device, with the data communication channel carrying normal packets containing meta-commands embedded in the packets to synchronize the test control of the test clients and the test servers connected to the edge device. | 2010-10-21 |
20100268835 | Methods and Systems for Substituting Programs in Multiple Program MPEG Transport Streams - Provided are methods and systems for substituting programs within an existing multi program transport stream (MPTS). | 2010-10-21 |
20100268836 | Method and apparatus for delivery of adapted media - A method of transmitting media to a client by an infrastructure device in a packet-switched network includes receiving a media stream at the infrastructure device. The method also includes determining an adaptation strategy according to at least one of one or more pieces of network information associated with the packet-switched network, one or more pieces of client information associated with the client, or one or more policies. The method further includes adapting the media stream according to the adaptation strategy to produce an output media stream. | 2010-10-21 |
20100268837 | METHOD FOR TUNNEL MAPPING - The present invention discloses a method for tunnel mapping involved with the field of the next generation network. The method of the present invention comprises: according to a service data stream resource information request received, selecting, by a transport resource control function entity (TRC-FE), a corresponding label switch path (LSP) tunnel, and after completing allocation of the service data stream, instructing a transport resource enforcement function entity (TRE-FE) to update a stream label mapping table; and completing, by the TRE-FE, the update of the stream label mapping table, and according to mapping information in the table, mapping the service data stream to the LSP tunnel designated. The present invention solves the problem that a mapping between a service data stream and LSP tunnel resource in a NGN based on MPLS-TE cannot be implemented according to current standards, and fills a gap in implementation of resource allocation in a bearer network. | 2010-10-21 |
20100268838 | METHOD AND EQUIPMENT FOR MULTI MEDIA APPLICATION MANAGEMENT USING MULTI STREAMING OF SCTP AND TIMED RELIABILITY OF PR-SCTP - Disclosed is a method for multimedia application management using a multi-streaming function of a Stream Control Transmission Protocol (SCTP) and a timed reliability function of a Partial Reliable (PR)-SCTP. The method includes calculating a number of connection objects required for transmitting at least one multimedia information, generating at least one association and stream of the SCTP to correspond to the number of the calculated connection objects, classifying the multimedia information according to purpose, granting a lifetime value to the classified multimedia information; and transmitting, via an SCTP corresponding to the lifetime value from among the SCTPs, the multimedia information to which the lifetime value is granted. | 2010-10-21 |
20100268839 | METHOD AND APPARATUS FOR PROVIDING AN AUDIOVISUAL STREAM - An audio stream server ( | 2010-10-21 |
20100268840 | Method and System for Data Transmission - A method, program and system for transmitting a data stream to a group of recipient nodes from a source node via an intermediate node over a communication network, wherein the data stream is associated with a first unique identifier to identify the content of the data stream. The method includes the source node generating a second identifier, the second identifier distinct from the first unique identifier, and associating the second identifier with the data stream to identify that the data stream is to be received by the group of recipient nodes; transmitting routing information comprising the second identifier to the intermediate node; transmitting the data stream from the source node to the intermediate node; and responsive to receiving the data stream at the intermediate node, reading the second identifier and routing the data stream to the group of recipient nodes in accordance with the routing information. | 2010-10-21 |
20100268841 | USING HIGHER LAYER INFORMATION TO FACILITATE COEXISTENCE IN WIRELESS NETWORKS - A system composed of a node configured to transmit a first data stream to a first device using a first protocol and a second data stream to a second device using a second protocol. The system is also composed of a controller in communication with the node. The controller is configured to prioritize a first packet of the first data stream prior to transmission of the first packet. The prioritization is based on application layer information of the first packet. If the application layer information of the first packet indicates that the priority of the first packet is lower than the priority of a second packet of the second data stream, the controller causes the node to transmit the second packet. | 2010-10-21 |
20100268842 | SYSTEM AND METHOD FOR PROVIDING STREAMING-BASED PORTABLE APPLICATION - Provided are a system and method for providing a streaming-based portable application, which can add and update a portable application in one click, without separate procedures, by using advantages of application streaming while maintaining advantages of a portable application. In the system, a streaming server stores an application execution code provided at the inside of the system. A client provides a virtualization of an execution code necessary to execute an application process, streams an execution code from the streaming server through a network, and manages application streamed images. | 2010-10-21 |
20100268843 | AUTOMATED REAL-TIME DATA STREAM SWITCHING IN A SHARED VIRTUAL AREA COMMUNICATION ENVIRONMENT - Switching real-time data stream connections between network nodes sharing a virtual area is described. In one aspect, the switching involves storing a virtual area specification. The virtual area specification includes a description of one or more switching rules each defining a respective connection between sources of a respective real-time data stream type and sinks of the real-time data stream type in terms of positions in the virtual area. Real-time data stream connections are established between network nodes associated with respective objects each of which is associated with at least one of a source and a sink of one or more of the real-time data stream types. The real-time data stream connections are established based on the one or more switching rules, the respective sources and sinks associated with the objects, and respective positions of the objects in the virtual area. | 2010-10-21 |
20100268844 | SYSTEM AND METHODS FOR ASYNCHRONOUS SYNCHRONIZATION - Aspects of the invention provide for information to be synchronized in an asynchronous manner among two or more computing devices. | 2010-10-21 |
20100268845 | ROUTING INSTANCES FOR NETWORK SYSTEM MANAGEMENT AND CONTROL - A network system uses a management routing instance to route management information between elements involved in management of the system. The system registers each element in the management routing instance when the element comes on line. Based on the management routing instance, the system creates management forwarding tables. The system then uses the management forwarding tables to route management information between the elements. Multiple systems, for example systems connected by a network, may exchange management routing instance information to allow elements in different systems to communicate management information with each other. | 2010-10-21 |
20100268846 | FORMATTED DATA FILE COMMUNICATION - Methods and systems are provided. A sending device offers to provide a graphic image to a receiving device by way of a network. The receiving device responds by providing desired parameters including an overall file size to the sending device. A data file formatted according to the desired parameters is provided to the receiving device. The sending device or another entity can be the source of the data file. | 2010-10-21 |
20100268847 | Personalized account migration system and method - A method for migrating information, and a migrator for migrating information, are disclosed. The method may include extracting organizational information from at least two service providers, accessing a first at least one of the at least two service providers upon selection of a migration selection interface by the user, receiving of a first plurality of information related to the user from one of the service providers, accessing a second at least one of the at least two service providers, and writing the first plurality of information to the second at least one of the at least two service providers. The migrator includes an importer in communicative connection with at least one migrate-from service provider, a normalizer that receives a first plurality of information from the importer and converts the first plurality to a standard format, a denormalizer that receives the standard format from the normalizer and converts the standard format to a second plurality of information, and an exporter communicatively connected to a migrate-to service provider, which exporter receives the second plurality of information from the denormalizer and sends the second plurality to the migrate-to service provider. | 2010-10-21 |
20100268848 | CONTENT ACCESS FROM A COMMUNICATIONS NETWORK USING A HANDHELD COMPUTER SYSTEM AND METHOD - A handheld computer including a wireless communications link with a wireless server is disclosed. The wireless communications link allows browsing of information provided through the wireless server which is coupled to a communications network. A user of the handheld computer may generate a request for content, for example, by selecting a link to content. The request is communicated to the wireless server which requests the content from the content source. When the content is received by the wireless server, a plug-in mechanism or other type of software program is used to convert the particular content type into a format easily communicated and used by the handheld computer. The handheld computer receives the formatted content, and using a compatible plug-in mechanism or software program, is able to display content using the handheld computer. | 2010-10-21 |
20100268849 | METHOD AND SYSTEM FOR REGISTERING EVENTS IN WIND TURBINES OF A WIND POWER SYSTEM - The invention relates to a method of registering events in a wind power system comprising at least two data processors, wherein the data processors of said wind power system are mutually time synchronized, wherein events are registered in said at least two data processors, and wherein the timing of said events registered in different ones of said at least two data processors is established according to said time synchronization. According to an advantageous embodiment of the invention, events may be registered and preferably analyzed according to a common timing. This makes it possible to establish an analysis where events of different wind turbines are interrelated and where information regarding such interrelation is important or crucial for establishment of control or fault detection based on correctly timed events from different wind turbines. | 2010-10-21 |
20100268850 | Modular I/O System With Automated Commissioning - A modular input/output system has a power supply unit, a central processing unit, and at least a pair of input/output module units. Each input/output unit has a base and an input/output module. The base includes a backplane printed circuit board having an input/output module interface. The base has a pair of connectors connected to the printed circuit board and capable of connecting to other units. The base has a field connection terminal block for connecting to external sensors and controls. The input/output module includes a circuit for interfacing with external sensors and controls and the CPU unit. The modular input/output system allows installation and removal of input/output modules of one of the input/output module units without affecting other input/output modules. The system can have multiple power supply units for powering the I/O modules. | 2010-10-21 |
20100268851 | Management of Redundant Physical Data Paths in a Computing System - A redundancy manager manages commands to peripheral devices in a computer system. These peripheral devices have multiple pathways connecting them to the computer system. The redundancy manager determines the number of independent pathways connected to the peripheral device and presents only one logical device to the operating system, any device driver, and any other command or device processing logic in the command path before the redundancy manager. For each incoming command, the redundancy manager determines which pathways are properly functioning and selects the best pathway for the command based at least partly upon a penalty model, where a path may be temporarily penalized by not including the pathway in the path selection process for a predetermined time. The redundancy manager further reroutes the command to, and resets the device over, an alternate path when the original path is penalized or has otherwise failed. | 2010-10-21 |
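The penalty model in the abstract above can be sketched as a small Python fragment. This is a minimal illustration under assumptions: the class and method names are invented, and "best pathway" is simplified to "first pathway not currently under penalty".

```python
import time

class RedundancyManager:
    """Minimal sketch of the penalty model: a pathway that fails is
    penalized, i.e. left out of path selection, for a fixed interval.
    All names are illustrative, not taken from the patent."""

    def __init__(self, paths, penalty_seconds=30.0):
        self.paths = list(paths)       # independent pathways to one device
        self.penalty_seconds = penalty_seconds
        self.penalized_until = {}      # pathway -> time the penalty expires

    def penalize(self, path, now=None):
        # exclude this pathway from selection for penalty_seconds
        now = time.monotonic() if now is None else now
        self.penalized_until[path] = now + self.penalty_seconds

    def select_path(self, now=None):
        # "best" pathway simplified to: first pathway not under penalty
        now = time.monotonic() if now is None else now
        for path in self.paths:
            if self.penalized_until.get(path, 0.0) <= now:
                return path
        return None                    # every pathway currently penalized
```

Once the penalty interval expires, the pathway automatically re-enters the selection process without any explicit reset.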
20100268852 | Replenishing Data Descriptors in a DMA Injection FIFO Buffer - Methods, apparatus, and products are disclosed for replenishing data descriptors in a Direct Memory Access (‘DMA’) injection first-in-first-out (‘FIFO’) buffer that include: determining, by a messaging module on an origin compute node, whether a number of data descriptors in a DMA injection FIFO buffer exceeds a predetermined threshold, each data descriptor specifying an application message for transmission to a target compute node; queuing, by the messaging module, a plurality of new data descriptors in a pending descriptor queue if the number of the data descriptors in the DMA injection FIFO buffer exceeds the predetermined threshold; establishing, by the messaging module, interrupt criteria that specify when to replenish the injection FIFO buffer with the plurality of new data descriptors in the pending descriptor queue; and injecting, by the messaging module, the plurality of new data descriptors into the injection FIFO buffer in dependence upon the interrupt criteria. | 2010-10-21 |
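The threshold-and-replenish flow in the abstract above can be sketched as follows. This is an illustrative model only: the names, the threshold value, and the interrupt trigger are assumptions, not details from the patent.

```python
from collections import deque

class InjectionFifo:
    """Sketch of the replenishment flow: new data descriptors queue up in
    a pending queue while the injection FIFO is above a threshold, and
    drain into the FIFO when the interrupt criteria are met."""

    def __init__(self, threshold=4):
        self.fifo = deque()      # descriptors awaiting DMA injection
        self.pending = deque()   # descriptors held back by the threshold
        self.threshold = threshold

    def submit(self, descriptor):
        if len(self.fifo) > self.threshold:
            self.pending.append(descriptor)   # defer: FIFO too full
        else:
            self.fifo.append(descriptor)

    def on_interrupt(self):
        # interrupt criteria met: replenish the FIFO from the pending queue
        while self.pending and len(self.fifo) <= self.threshold:
            self.fifo.append(self.pending.popleft())
```

In the patent the interrupt criteria are established by the messaging module; here `on_interrupt` simply stands in for that hardware event.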
20100268853 | APPARATUS AND METHOD FOR COMMUNICATING WITH SEMICONDUCTOR DEVICES OF A SERIAL INTERCONNECTION - A system controller communicates with devices in a serial interconnection. The system controller sends a read command, a device address identifying a target device in the serial interconnection and a memory location. The target device responds to the read command to read data in the location identified by the memory location. Read data is provided as an output signal that is transmitted from a last device in the serial interconnection to a data receiver of the controller. The data receiver establishes acquisition instants relating to clocks in consideration of a total flow-through latency in the serial interconnection. Where each device has a clock synchronizer, a clock signal propagated through the serial interconnection is used for establishing the acquisition instants. The read data is latched in response to the established acquisition instants in consideration of the flow-through latency, so that valid data is latched in the data receiver. | 2010-10-21 |
20100268854 | SYSTEM AND METHOD FOR UTILIZING PERIPHERAL FIRST-IN-FIRST-OUT (FIFO) RESOURCES - A system and method for sharing peripheral first-in-first-out (FIFO) resources is disclosed. In one embodiment, a system for utilizing peripheral FIFO resources includes a processor, a first peripheral FIFO controller and a second peripheral FIFO controller coupled to the processor for controlling buffering of first data and second data associated with the processor respectively. Further, the system includes a merge module coupled to the first peripheral FIFO controller and the second peripheral FIFO controller for merging a first FIFO channel associated with the first peripheral FIFO controller and a second FIFO channel associated with the second peripheral FIFO controller based on an operational state of the first FIFO channel and an operational state of the second FIFO channel respectively. Also, the system includes a first FIFO and a second FIFO coupled to the merge module via the first FIFO channel and the second FIFO channel respectively. | 2010-10-21 |
20100268855 | ETHERNET PORT ON A CONTROLLER FOR MANAGEMENT OF DIRECT-ATTACHED STORAGE SUBSYSTEMS FROM A MANAGEMENT CONSOLE - A system and device for central BIOS-level management of direct-attached storage subsystems is disclosed. A system includes a plurality of DAS subsystems, with each DAS subsystem including a host bus adapter (HBA) having a local area network (LAN) port and a LAN communication module for providing a LAN communication based on an internet protocol (IP) address of the HBA. The system further includes a management console coupled to the plurality of DAS subsystems using the LAN port for managing the plurality of DAS subsystems by directly communicating with the HBA of each DAS subsystem using the IP address of the HBA. The system also includes a network switch for controlling data traffic between the plurality of DAS subsystems and the management console. | 2010-10-21 |
20100268856 | FORMATTING MEMORY IN A PERIPHERAL DEVICE - A system for formatting memory in a peripheral device. The system includes a peripheral device comprising the memory communicatively coupled with a controller. A host computer is communicatively coupled with the peripheral device via a communication path. An interface is communicatively coupled with the controller and the host computer. The controller is configured to receive a first command from the host computer. The controller is further configured to format at least a portion of the memory based on the first command. The host computer sends a second command to the peripheral device via the communication path to complete the format. | 2010-10-21 |
20100268857 | Management of Redundant Physical Data Paths in a Computing System - A redundancy manager manages commands to peripheral devices in a computer system. These peripheral devices have multiple pathways connecting them to the computer system. The redundancy manager determines the number of independent pathways connected to the peripheral device and presents only one logical device to the operating system, any device driver, and any other command or device processing logic in the command path before the redundancy manager. For each incoming command, the redundancy manager determines which pathways are properly functioning and selects the best pathway for the command based at least partly upon a penalty model, where a path may be temporarily penalized by not including the pathway in the path selection process for a predetermined time. The redundancy manager further reroutes the command to, and resets the device over, an alternate path when the original path is penalized or has otherwise failed. | 2010-10-21 |
20100268858 | SATA data connection device with raised reliability - There is disclosed a data connection device, particularly a SATA data connection device with improved plug-in stability and reliability. The SATA data connection device mainly comprises a SATA data connection seat and a SATA component terminal. A seat body of the SATA data connection seat is provided therein with a slot. On each of two short perimeters of the seat body, there is additionally provided a laterally extending support frame having a snap-fit groove. Moreover, a shell layer of the SATA component terminal is additionally provided at the lower edge of each of its two short perimeters with a snap fitting. A snap hook, which may be pressed to tilt, is provided at the bottom end of the snap fitting. When a SATA data connector of the SATA component terminal is inserted into the slot of the SATA data connection seat, the snap hook of the snap fitting is also snapped into the snap-fit groove. Thus, not only may the plug-in stability and reliability of the data connection device be enhanced significantly, but the accuracy of high-speed data transmission may also be assured. | 2010-10-21 |
20100268859 | Server - A server includes a motherboard, a central processing unit (CPU) and a riser card. The CPU is mounted on the motherboard. The riser card is inserted in the motherboard. The riser card includes a first circuit board extending parallel to the CPU, such that the CPU is positioned between the motherboard and the first circuit board. At least one memory is inserted in the first circuit board. | 2010-10-21 |
20100268860 | Methods for Generating Display Signals in an Information Handling System - An information handling system (IHS) is provided for generating display signals associated with an alternative display protocol. The system may include a display protocol receptacle operable to receive a display protocol plug and a display bus switch in communication with the display protocol receptacle. The system may also include a display converter in communication with the IHS. The display converter may include a first end having a display connector associated with an alternative display protocol and a second end having a display protocol plug. Moreover, upon receipt of the display protocol plug by the display protocol receptacle, the display bus switch may output display signals associated with the alternative display protocol. | 2010-10-21 |
20100268861 | USB DRIVE - A universal serial bus (USB) drive includes a control button, a control signal generating circuit, a flash memory, a connection port, a processor, and an indication lamp. The USB drive generates an instruction for removal from a connection device, according to a user operation thereon. The USB drive determines whether the flash memory is in a working state and controls the connection port to disconnect from the connection device, if the flash memory is not in the working state. The indication lamp on the USB drive provides a notification that the connection port is disconnected from the connection device. | 2010-10-21 |
20100268862 | RECONFIGURABLE PROCESSOR AND METHOD OF RECONFIGURING THE SAME - A technology for controlling a reconfigurable processor is provided. The reconfigurable processor dynamically loads configuration data from a peripheral memory to a configuration memory while a program is being executed, in place of loading all compiled configuration data in advance into the configuration memory when booting commences. Accordingly, a reduction in capacity of a configuration memory may be achieved. | 2010-10-21 |
20100268863 | INFORMATION PROCESSING APPARATUS - According to one embodiment, when a USB storage device is connected and a nonvolatile memory stores format information of an HDD, a CD/DVD, an FDD and the USB storage device, the drive letter of the USB storage device is virtually assigned as FDD or HDD on the basis of the format information. | 2010-10-21 |
20100268864 | Logical-to-Physical Address Translation for a Removable Data Storage Device - A method for making memory more reliable involves accessing data stored in a removable storage device by translating a logical memory address provided by a host digital device to a physical memory address in the device. A logical memory address is received from the host digital device. The logical memory address corresponds to a location of data stored on the removable storage device. A physical memory address corresponding to the logical address is determined by accessing a lookup table corresponding to the logical zone that contains the logical address. | 2010-10-21 |
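The zoned lookup described in the abstract above amounts to splitting a logical address into a zone number and a zone-relative offset, then indexing that zone's table. A minimal sketch, in which the zone size and all table contents are invented for illustration:

```python
ZONE_SIZE = 4  # illustrative: logical blocks per logical zone

# One lookup table per logical zone, mapping a zone-relative logical
# block to a physical block. The mappings here are made up.
lookup_tables = {
    0: {0: 17, 1: 3, 2: 8, 3: 25},
    1: {0: 6, 1: 40, 2: 2, 3: 11},
}

def translate(logical_addr):
    """Resolve a host logical address to a device physical address."""
    zone, offset = divmod(logical_addr, ZONE_SIZE)
    return lookup_tables[zone][offset]
```

Keeping one small table per zone means only the table for the zone being accessed needs to be resident at any time.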
20100268865 | Static Wear Leveling - Methods for extending the service life of a data storage device and devices operable to perform those methods are presented. A master lookup table block may comprise lookup table blocks and store an erase count indicator for each lookup table block. Each lookup table block may be associated with a logical zone of a memory and comprise entries. Each entry may be associated with a logical block and comprise an erase count for a physical block corresponding to that logical block. A physical block erasure may be performed on a first physical block in the memory. The physical block erasure may be tracked by incrementally increasing a first erase count. An actual erase count may be determined for the first physical block. The entry for a logical block corresponding to the first physical block may be exchanged with another entry within a different lookup table block when the actual erase count for the first physical block exceeds a threshold. The different lookup table block may have a lower erase count indicator relative to that of the lookup table block comprising the entry for the logical block corresponding to the first physical block. | 2010-10-21 |
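The exchange rule in the abstract above can be sketched as follows. The data layout (a dict per lookup table block with an `indicator` and per-logical-block erase counts) and the choice of which entry to swap with are assumptions made for the example:

```python
def wear_level(tables, hot_zone, hot_logical, threshold):
    """Sketch of static wear leveling: when a block's erase count crosses
    the threshold, exchange its entry with one in the lookup table block
    that has the lowest erase-count indicator. Layout is illustrative."""
    hot_count = tables[hot_zone]["entries"][hot_logical]
    if hot_count <= threshold:
        return False                       # not worn enough to move
    # pick the lookup table block with the lowest erase-count indicator
    cold_zone = min((z for z in tables if z != hot_zone),
                    key=lambda z: tables[z]["indicator"])
    # exchange the hot entry with the least-erased entry in that block
    cold_logical = min(tables[cold_zone]["entries"],
                       key=tables[cold_zone]["entries"].get)
    a, b = tables[hot_zone]["entries"], tables[cold_zone]["entries"]
    a[hot_logical], b[cold_logical] = b[cold_logical], a[hot_logical]
    return True
```

The effect is that frequently erased logical blocks migrate toward physical blocks that have been erased least, evening out wear across the memory.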
20100268866 | SYSTEMS AND METHODS FOR OPERATING A DISK DRIVE - Systems and methods for storing data to a storage device are provided. In embodiments, the storage device may include a disk drive with a solid-state memory for storing certain frequently updated information. In some embodiments, the solid-state memory may be used to store journaling information. | 2010-10-21 |
20100268867 | METHOD AND APPARATUS FOR UPDATING FIRMWARE AS A BACKGROUND TASK - A method comprising storing data in a first memory that includes a first portion that has read-only access during a normal mode of operation; and during an update mode of operation: copying at least one data structure from the first memory to a second memory where it is available for use during the update mode; and updating data in the first portion of the first memory. | 2010-10-21 |
20100268868 | FLASH STORAGE DEVICE AND OPERATING METHOD THEREOF - A flash storage device is provided. In one embodiment, the flash storage device is coupled to a host, and comprises a random access memory and a controller. The random access memory stores a plurality of link tables therein, wherein each of the link tables corresponds to one of a plurality of management units of at least one flash memory, and the link tables store corresponding relationships between logical addresses and physical addresses of the corresponding management units. The controller receives an access logical address from the host, determines an access physical address corresponding to the access logical address according to the link tables stored in the random access memory, and accesses data from the flash memory according to the access physical address. | 2010-10-21 |
20100268869 | MEMORY SYSTEM COMPRISING NONVOLATILE MEMORY DEVICE AND CONTROLLER - A memory system comprises a nonvolatile memory device and a controller. The controller comprises a working memory and is configured to control the nonvolatile memory device. The nonvolatile memory device is configured to store drive data required to access the nonvolatile memory device. When an initialization operation of the memory system is performed, the controller activates an operation standby signal after loading a portion of the drive data stored in the nonvolatile memory device into the working memory. | 2010-10-21 |
20100268870 | DATA STORAGE DEVICE AND DATA STORAGE SYSTEM INCLUDING THE SAME - A data storage device includes a flash memory including a plurality of data blocks and a flash translation layer that divides the plurality of data blocks into a data block of a first group and a data block of a second group, and that records a data signal to a data block of the first group or a data block of the second group which is extended from a data block of the first group. | 2010-10-21 |
20100268871 | NON-VOLATILE MEMORY CONTROLLER PROCESSING NEW REQUEST BEFORE COMPLETING CURRENT OPERATION, SYSTEM INCLUDING SAME, AND METHOD - A non-volatile memory controller, system and method capable of processing a next request as an interrupt before completing a current operation are disclosed. The non-volatile memory system includes a first memory storing meta data loaded from a flash memory; a second memory storing the meta data copied from the first memory; and a flash memory controller copying the meta data from the first memory to the second memory, changing the meta data in the second memory, and then re-copying the changed meta data from the second memory to the first memory during a first-type operation that requires changes in the meta data. | 2010-10-21 |
20100268872 | DATA STORAGE SYSTEM COMPRISING MEMORY CONTROLLER AND NONVOLATILE MEMORY - A data storage system comprises a storage device comprising at least one nonvolatile memory, and a memory controller connected to the storage device through a channel. The memory controller sends part or all of a command, address and data for a next operation to the nonvolatile memory while the nonvolatile memory device is in a busy state. The memory controller then performs a background operation while the nonvolatile memory device remains in the busy state. | 2010-10-21 |
20100268873 | FLASH MEMORY CONTROLLER UTILIZING MULTIPLE VOLTAGES AND A METHOD OF USE - A Flash memory controller is disclosed. The Flash memory controller comprises a host interface, a Flash memory interface, and controller logic coupled between the host interface and the Flash memory interface, the controller logic handling a plurality of voltages. The controller also includes a mechanism for allowing a multiple voltage host to interface with a high voltage or a multiple voltage Flash memory. A multiple voltage Flash memory controller in accordance with the present invention provides the following advantages over conventional Flash memory controllers: (1) a multiple voltage host is allowed to interface with multiple Flash memory components that operate at different voltages in any combination; (2) power consumption efficiency is improved by integrating the programmable voltage regulator and voltage comparator mechanism with the Flash memory controller; (3) external jumper selection is eliminated for power source configuration; and (4) Flash memory controller power source interface pin-outs are simplified. | 2010-10-21 |
20100268874 | METHOD OF CONFIGURING NON-VOLATILE MEMORY FOR A HYBRID DISK DRIVE - A system, method and machine-readable medium are provided to configure a non-volatile memory (NVM) including a plurality of NVM modules, in a system having a hard disk drive (HDD) and an operating system (O/S). In response to a user selection of a hybrid drive mode for the NVM, the plurality of NVM modules are ranked according to speed performance. Boot portions of the O/S are copied to a highly ranked NVM module, or a plurality of highly ranked NVM modules, and the HDD and the highly ranked NVM modules are assigned as a logical hybrid drive of the computer system. Ranking each of the plurality of NVM modules can include carrying out a speed performance test. This approach can provide hybrid disk performance using conventional hardware, or enhance performance of an existing hybrid drive, while taking into account relative performance of available NVM modules. | 2010-10-21 |
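The ranking-and-assignment step in the abstract above can be sketched in a few lines. The function and parameter names are invented, and `benchmark` is a stand-in for the speed performance test the abstract mentions:

```python
def assign_hybrid_drive(modules, benchmark, boot_copies=1):
    """Sketch of hybrid-drive configuration: benchmark each NVM module,
    rank by speed, and pick the highest-ranked module(s) to hold the
    boot portions of the O/S. Names are illustrative."""
    ranked = sorted(modules, key=benchmark, reverse=True)  # fastest first
    return {
        "boot": ranked[:boot_copies],   # boot portions copied here
        "hybrid_members": ranked,       # HDD + these form the logical drive
    }
```

Because the ranking is measured rather than assumed, the same procedure works for heterogeneous NVM modules of differing speeds.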
20100268875 | RAID LEVEL MIGRATION FOR SPANNED ARRAYS - A method for a redundant array of independent disks (RAID) controller for migrating a RAID level in spanned arrays is disclosed. In one embodiment, a method for a RAID controller for migrating a RAID level in spanned arrays includes receiving a command for a RAID level migration from a first RAID level in spanned arrays to a second RAID level. The method further includes initializing a number of pointers which correspond to a number of the spanned arrays in the first RAID level, and transferring at least one data block of the first RAID level in the spanned arrays using the number of pointers to form the second RAID level. | 2010-10-21 |
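The pointer scheme in the abstract above can be sketched as one read pointer per spanned array draining blocks into the new layout. The round-robin interleave is an assumption for illustration; the abstract only says blocks are transferred "using the number of pointers":

```python
def migrate_spanned(spans):
    """Sketch of pointer-based migration: initialize one pointer per
    spanned array, then transfer blocks (round-robin here, as an
    assumption) to form the target RAID level's layout."""
    pointers = [0] * len(spans)          # one pointer per spanned array
    migrated = []
    remaining = sum(len(s) for s in spans)
    while remaining:
        for i, span in enumerate(spans):
            if pointers[i] < len(span):  # this span still has blocks left
                migrated.append(span[pointers[i]])
                pointers[i] += 1
                remaining -= 1
    return migrated
```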
20100268876 | SLIDING-WINDOW MULTI-CLASS STRIPING - A sequence of storage devices of a data store may include one or more stripesets for storing data stripes of different lengths and of different types. Each data stripe may be stored in a prefix or other portion of a stripeset. Each data stripe may be identified by an array of addresses that identify each page of the data stripe on each included storage device. When a first storage device of a stripeset becomes full, the stripeset may be shifted by removing the full storage device from the stripeset, and adding a next storage device of the data store to the stripeset. A class variable may be associated with storage devices of a stripeset to identify the type of data that the stripeset can store. The class variable may be increased (or otherwise modified) when a computer stores data of a different class in the stripeset. | 2010-10-21 |
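The sliding-window shift in the abstract above reduces to removing the full device from the front of the stripeset and appending the next device in the data store's sequence. A minimal sketch, assuming the device sequence is not yet exhausted and with illustrative names:

```python
def shift_stripeset(devices, stripeset):
    """Sketch of the sliding-window shift: drop the (full) first device
    of the stripeset and append the next device of the data store."""
    next_index = devices.index(stripeset[-1]) + 1  # device after the last member
    return stripeset[1:] + [devices[next_index]]
```

Repeated shifts slide the stripeset window along the device sequence, which is what lets stripes of different lengths and classes coexist in one store.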
20100268877 | SECURING DATA IN A DISPERSED STORAGE NETWORK USING SHARED SECRET SLICES - A data element can be encoded into multiple encoded data elements using an encoding algorithm that includes an encoding function and one or more encoder constants. The encoded data elements can be organized into multiple pillars, each having a respective pillar number. Each of the pillars is sent to a different storage unit of a distributed storage network. To recover the original data element, the encoded data elements are retrieved from storage, and the encoder constant is recovered using multiple encoded data elements. Recovering the encoder constant allows the encoding algorithm originally used to encode the data elements to be determined, and used to recover the original data element. The security of the stored data is enhanced, because an encoded data element from a single pillar is insufficient to identify the encoder constant. | 2010-10-21 |
20100268878 | Keeping File Systems or Partitions Private in a Memory Device - Disclosed is a method and apparatus for allowing a user to select, from a plurality of partitions on a memory device, which partitions may be visible to hosts connecting to the memory device. | 2010-10-21 |
20100268879 | SECURE DIGITAL MUSIC ALBUM FORMAT - A Removable Memory Device and method of use are disclosed. Instead of exchanging the data associated with multimedia information from one media carrier to another, the media carrier itself is transferred from one player to another. The media is integrated, easy to use, and can be applied to both low quality and high quality audio environments. A new format for the memory card is provided. The new format includes but is not limited to music, as well as a booklet, cover, text information, video and photo gallery. The new format does not limit the Removable Memory Device/SD card to being only a media carrier but constitutes, rather, a dedicated and controlled interface to Internet contents. At the same time, there can be a range of players created for personal, portable, and car audio, as well as hi-fi and hi-end use. | 2010-10-21 |
20100268880 | Dynamic Runtime Modification of Array Layout for Offset - Disclosed are a method, a system and a computer program product for operating a cache system. The cache system can include multiple cache lines, and a first cache line of the multiple cache lines can include multiple cache cells, and a bus coupled to the multiple cache cells. In one or more embodiments, the bus can include a switch that is operable to receive a first control signal and to split the bus into first and second portions or aggregate the bus into a whole based on the first control signal. When the bus is split, a first cache cell and a second cache cell of the multiple cache cells are coupled to respective first and second portions of the bus. Data from the first and second cache cells can be selected through respective portions of the bus and outputted through a port of the cache system. | 2010-10-21 |
20100268881 | CACHE REGION CONCEPT - A method to associate a storage policy with a cache region is disclosed. In this method, a cache region associated with an application is created. The application runs on virtual machines, where a first virtual machine has a local memory cache that is private to the first virtual machine. The first virtual machine additionally has a shared memory cache that is shared by the first virtual machine and a second virtual machine. Additionally, the cache region is associated with a storage policy. Here, the storage policy specifies that a first copy of an object to be stored in the cache region is to be stored in the local memory cache and that a second copy of the object to be stored in the cache region is to be stored in the shared memory cache. | 2010-10-21 |
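The two-copy storage policy in the abstract above can be sketched with plain dictionaries standing in for the local and shared memory caches. All names are illustrative:

```python
class CacheRegion:
    """Sketch of the storage policy: one copy of each object goes to the
    VM-private local cache, a second copy to the cache shared between
    virtual machines. Structure is illustrative."""

    def __init__(self, local_cache, shared_cache):
        self.local = local_cache     # private to this virtual machine
        self.shared = shared_cache   # visible to all virtual machines

    def put(self, key, obj):
        self.local[key] = obj        # first copy: local memory cache
        self.shared[key] = obj       # second copy: shared memory cache

    def get(self, key):
        # prefer the private copy, fall back to the shared one
        return self.local.get(key, self.shared.get(key))
```

The fallback in `get` is what lets a second virtual machine (or a restarted one, whose local cache is empty) still find the object in the shared cache.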
20100268882 | LOAD REQUEST SCHEDULING IN A CACHE HIERARCHY - A system and method for tracking core load requests and providing arbitration and ordering of requests. When a core interface unit (CIU) receives a load operation from the processor core, a new entry is allocated in a queue of the CIU. In response to allocating the new entry in the queue, the CIU detects contention between the load request and another memory access request. In response to detecting contention, the load request may be suspended until the contention is resolved. Received load requests may be stored in the queue and tracked using a least recently used (LRU) mechanism. The load request may then be processed when the load request resides in a least recently used entry in the load request queue. The CIU may also suspend issuing an instruction unless a read claim (RC) machine is available. In another embodiment, the CIU may issue stored load requests in a specific priority order. | 2010-10-21 |
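The LRU-tracked queue in the abstract above can be sketched as a list ordered from least to most recently used, where a contended request is "touched" to the back and a request is dispatched only once it becomes the LRU entry. All names and the touch-on-contention rule are illustrative assumptions:

```python
class LoadRequestQueue:
    """Sketch of the CIU ordering: requests are tracked LRU-style and a
    load is processed only when it is the least recently used entry."""

    def __init__(self):
        self.entries = []           # front = least recently used

    def allocate(self, load):
        self.entries.append(load)   # new entry is most recently used

    def touch(self, load):
        # contention re-referenced this entry: it becomes most recently used
        self.entries.remove(load)
        self.entries.append(load)

    def dispatch(self):
        # only the LRU entry may be processed
        return self.entries.pop(0) if self.entries else None
```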
20100268883 | Information Handling System with Immediate Scheduling of Load Operations and Fine-Grained Access to Cache Memory - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption. The control logic determines the size requirement of each load operation or store operation. When the cache memory system performs a store operation or load operation, the memory system accesses the portion of a cache line it needs to perform the operation instead of accessing an entire cache line. | 2010-10-21 |
20100268884 | Updating Partial Cache Lines in a Data Processing System - A processing unit for a data processing system includes a processor core having one or more execution units for processing instructions and a register file for storing data accessed in processing of the instructions. The processing unit also includes a multi-level cache hierarchy coupled to and supporting the processor core. The multi-level cache hierarchy includes at least one upper level of cache memory having a lower access latency and at least one lower level of cache memory having a higher access latency. The lower level of cache memory, responsive to receipt of a memory access request that hits only a partial cache line in the lower level cache memory, sources the partial cache line to the at least one upper level cache memory to service the memory access request. The at least one upper level cache memory services the memory access request without caching the partial cache line. | 2010-10-21 |
20100268885 | SPECIFYING AN ACCESS HINT FOR PREFETCHING LIMITED USE DATA IN A CACHE HIERARCHY - A system and method for specifying an access hint for prefetching limited use data. A processing unit receives a data cache block touch (DCBT) instruction having an access hint indicating to the processing unit that a program executing on the data processing system may soon access a cache block addressed within the DCBT instruction. The access hint is contained in a code point stored in a subfield of the DCBT instruction. In response to detecting that the code point is set to a specific value, the data addressed in the DCBT instruction is prefetched into an entry in the lower level cache. The entry may then be updated as a least recently used entry of a plurality of entries in the lower level cache. In response to a new cache block being fetched to the cache, the prefetched cache block is cast out of the cache. | 2010-10-21 |
20100268886 | SPECIFYING AN ACCESS HINT FOR PREFETCHING PARTIAL CACHE BLOCK DATA IN A CACHE HIERARCHY - A system and method for specifying an access hint for prefetching only a subsection of cache block data, for more efficient system interconnect usage by the processor core. A processing unit receives a data cache block touch (DCBT) instruction containing an access hint and identifying a specific size portion of data to be prefetched. Both the access hint and a value corresponding to an amount of data to be prefetched are contained in separate subfields of the DCBT instruction. In response to detecting that the code point is set to a specific value, only the specific size of data identified in a sub-field of the DCBT and addressed in the DCBT instruction is prefetched into an entry in the lower level cache. | 2010-10-21 |
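The two DCBT abstracts above describe subfields of the instruction: a code point carrying the access hint, and (in the partial-block variant) a value for the amount of data to prefetch. The decode step can be sketched as bit-field extraction. The field widths, positions, and the "limited use" code-point value below are invented for illustration; the real DCBT encoding is not given in the abstracts:

```python
# Hypothetical layout: a 4-bit code point above a 4-bit size subfield.
CODE_POINT_MASK, CODE_POINT_SHIFT = 0xF, 4
SIZE_MASK = 0xF
PREFETCH_LIMITED_USE = 0b1010  # hypothetical "limited use" code point

def decode_dcbt(instruction):
    """Extract the access-hint code point and prefetch-size subfields."""
    code_point = (instruction >> CODE_POINT_SHIFT) & CODE_POINT_MASK
    size_units = instruction & SIZE_MASK  # portion of the block to fetch
    return code_point, size_units

def should_prefetch_partial(instruction):
    """Prefetch only when the hint matches and a nonzero size is given."""
    code_point, size_units = decode_dcbt(instruction)
    return code_point == PREFETCH_LIMITED_USE and size_units > 0
```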
20100268887 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH DUAL DISPATCH INTO WRITE/READ DATA FLOW - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides dual dispatch points into the data flow to the dual cache banks of the L2 cache memory. | 2010-10-21 |
20100268888 | PROCESSING A DATA STREAM BY ACCESSING ONE OR MORE HARDWARE REGISTERS - Disclosed are a method, a system, and a program product for processing a data stream by accessing one or more hardware registers of a processor. In one or more embodiments, a first program instruction or subroutine can associate a hardware register of the processor with a data stream. With this association, the hardware register can be used as a stream head which can be used by multiple program instructions to access the data stream. In one or more embodiments, data from the data stream can be fetched automatically as needed and with one or more patterns which may include one or more start positions, one or more lengths, one or more strides, etc. to allow the cache to be populated with sufficient amounts of data to reduce memory latency and/or external memory bandwidth when executing an application which accesses the data stream through the one or more registers. | 2010-10-21 |
20100268889 | COMPILER BASED CACHE ALLOCATION - Techniques are generally described for creating a compiler-determined map for the allocation of memory space within a cache. An example computing system is disclosed having a multicore processor with a plurality of processor cores. At least one cache may be accessible to at least two of the plurality of processor cores. A compiler-determined map may separately allocate a memory space to threads of execution processed by the processor cores. | 2010-10-21 |
20100268890 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH SINGLE DISPATCH INTO WRITE/READ DATA FLOW - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides a single dispatch point into the data flow to the dual cache banks of the L2 cache memory. | 2010-10-21 |
20100268891 | Allocation of memory space to individual processor cores - Techniques are generally described for a multi-core processor with a plurality of processor cores. At least one cache is accessible to at least two of the plurality of processor cores. The multi-core processor can be configured for separately allocating a memory space within the cache to the individual processor cores accessing the cache. | 2010-10-21 |
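The per-core cache allocation the two abstracts above describe can be pictured as way partitioning: the shared cache's ways are divided among the cores that access it, so each core gets a separately allocated space. A minimal sketch, assuming an even split (the function name and split policy are illustrative, not from the patents):

```python
def allocate_ways(total_ways, core_ids):
    """Map each core id to a disjoint set of cache ways (even split assumed)."""
    per_core = total_ways // len(core_ids)
    return {core: list(range(i * per_core, (i + 1) * per_core))
            for i, core in enumerate(core_ids)}

# Two cores sharing an 8-way cache each receive four private ways.
alloc = allocate_ways(total_ways=8, core_ids=["core0", "core1"])
```

A compiler-determined map (as in 20100268889) would compute a table like this at build time per thread rather than per core.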
20100268892 | Data Prefetcher - In an embodiment, a processor comprises a data cache and a prefetch unit coupled to the data cache. The prefetch unit is configured to detect one or more prefetch streams corresponding to load operations that miss the data cache, and comprises a memory configured to store data corresponding to potential prefetch streams. The prefetch unit is configured to confirm a prefetch stream in response to N or more demand accesses to addresses in the prefetch stream, where N is a positive integer greater than one and is dependent on a prefetch pattern being detected. The prefetch unit comprises a plurality of stream engines, each stream engine configured to generate prefetches for a different prefetch stream assigned to that stream engine. The prefetch unit is configured to assign the confirmed prefetch stream to one of the plurality of stream engines. | 2010-10-21 |
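The confirmation rule in the abstract above (promote a potential stream only after N or more demand accesses land on addresses the stream predicts) can be sketched as follows; all class and field names are illustrative assumptions, not taken from the patent:

```python
class PrefetchStreamTable:
    def __init__(self, confirm_threshold=3):
        self.n = confirm_threshold          # the "N" in the abstract
        self.potential = {}                 # stride -> (predicted_next_addr, hits)
        self.confirmed = []                 # streams handed to stream engines

    def record_miss(self, addr, stride):
        """Track a demand access that missed the data cache."""
        next_addr, hits = self.potential.get(stride, (None, 0))
        hits = hits + 1 if addr == next_addr else 1   # restart on a mispredict
        if hits >= self.n:
            # Stream confirmed: assign (next prefetch address, stride) to an engine.
            self.confirmed.append((addr + stride, stride))
            self.potential.pop(stride, None)
        else:
            self.potential[stride] = (addr + stride, hits)

table = PrefetchStreamTable(confirm_threshold=3)
for a in (0x100, 0x140, 0x180):             # three demand misses, stride 0x40
    table.record_miss(a, 0x40)
```

After the third matching miss the stream is confirmed and removed from the potential-stream memory, mirroring the hand-off to a stream engine.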
20100268893 | Data Prefetcher that Adjusts Prefetch Stream Length Based on Confidence - In an embodiment, a processor comprises a data cache and a prefetch unit coupled to the data cache. The prefetch unit is configured to identify a prefetch stream in cache misses from the data cache, and the prefetch unit is configured to issue prefetches predicted by the prefetch stream to prefetch data into the data cache. More particularly, the prefetch unit implements one or more stream engines that generate prefetches for respective prefetch streams. Each stream engine is configured to maintain limit data that indicates a number of prefetches that are permitted to be outstanding beyond a most recent demand access. The stream engine is configured to increase the limit responsive to the number of demand accesses that consume prefetched data at least equaling the limit. | 2010-10-21 |
20100268894 | Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. | 2010-10-21 |
20100268895 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption. | 2010-10-21 |
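The arbitration policy shared by 20100268887, 20100268890, and the abstract above (a load may interrupt an in-flight store, and the store later resumes at the point of interruption) can be modeled as a simple timeline; the function and its parameters are hypothetical stand-ins for the hardware behavior:

```python
def service_requests(store_beats, load_beats, interrupt_at):
    """Return the order in which data beats are serviced when a load
    arrives after `interrupt_at` beats of the store have completed."""
    order = []
    order += [("store", b) for b in store_beats[:interrupt_at]]   # store runs first
    order += [("load", b) for b in load_beats]                    # load preempts
    order += [("store", b) for b in store_beats[interrupt_at:]]   # resume where interrupted
    return order

# A 4-beat store is interrupted after 2 beats by a 2-beat load.
timeline = service_requests(store_beats=[0, 1, 2, 3],
                            load_beats=[0, 1],
                            interrupt_at=2)
```

The point of the policy is load latency: the load is serviced immediately while the store loses no completed work.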
20100268896 | TECHNIQUES FOR CACHE INJECTION IN A PROCESSOR SYSTEM FROM A REMOTE NODE - A technique for performing cache injection in a processor system includes monitoring, by a cache, addresses on a bus. Input/output data associated with an address of a data block stored in the cache is then requested from a remote node, via a network controller. Ownership of the input/output data is acquired by the cache when an address on the bus that is associated with the input/output data corresponds to the address of the data block stored in the cache. | 2010-10-21 |
20100268897 | MEMORY DEVICE AND MEMORY DEVICE CONTROLLER - A memory device controller interposed between a memory device and a host device includes a data communication unit configured to transfer data to and from the memory device in synchronization with a clock signal. The data communication unit supports a single edge synchronization mode in which data is transferred in synchronization with either one of a rising edge and a falling edge of the clock signal, and a double edge synchronization mode in which data is transferred in synchronization with both the rising edge and the falling edge. The data communication unit transfers data in the double edge synchronization mode when data is transferred by the memory device operating as a bus master. | 2010-10-21 |
20100268898 | SCHEDULED RETRIEVAL, STORAGE AND ACCESS OF MEDIA DATA - A system and method automates a scheduled retrieval, storage, and access of media data. Media data is retrieved from an external source and downloaded to an end user media device storage for subsequent playback at the end user media device. Media data is accessible from the end user media device storage based upon criteria including a selection of the end user, rules regulating the media data, and whether a playback time of the media data is sufficient to retrieve additional media data. The system performs regularly scheduled dynamic controls to determine whether additional media data is required for continuous and uninterrupted access of the media data. | 2010-10-21 |
20100268899 | MEMORY CONTROLLER, NONVOLATILE STORAGE DEVICE, DATA PROCESSING DEVICE, NONVOLATILE STORAGE DEVICE SYSTEM, AND METHOD - A memory controller ( | 2010-10-21 |
20100268900 | METHOD FOR TRACKING OF NON-RESIDENT PAGES - Embodiments of the present invention provide methods and systems for efficiently tracking evicted or non-resident pages. For each non-resident page, a first hash value is generated from the page's metadata, such as the page's mapping and offset parameters. This first hash value is then used as an index to point to one of a plurality of circular buffers. Each circular buffer comprises an entry for a clock pointer and entries that uniquely represent non-resident pages. The clock pointer points to the next page that is suitable for replacement and moves through the circular buffer as pages are evicted. In some embodiments, each entry that uniquely represents a non-resident page is a hash value generated from the page's inode data. | 2010-10-21 |
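The two-level hashing scheme above can be sketched compactly: a first hash of (mapping, offset) selects a circular buffer, and a second hash of the page's inode data is stored as the entry, with the clock pointer overwriting the oldest slot. The hash function, buffer count, and buffer size here are assumptions for illustration:

```python
import hashlib

NUM_BUFFERS = 8
BUF_SIZE = 4
buffers = [{"clock": 0, "entries": [None] * BUF_SIZE}
           for _ in range(NUM_BUFFERS)]

def _h(data: bytes) -> int:
    """Illustrative 32-bit hash (the patent does not specify one)."""
    return int.from_bytes(hashlib.sha1(data).digest()[:4], "big")

def record_eviction(mapping: int, offset: int, inode_data: bytes):
    buf = buffers[_h(b"%d:%d" % (mapping, offset)) % NUM_BUFFERS]  # first hash: pick buffer
    buf["entries"][buf["clock"]] = _h(inode_data)                  # second hash: page token
    buf["clock"] = (buf["clock"] + 1) % BUF_SIZE                   # advance clock pointer

def was_evicted(mapping: int, offset: int, inode_data: bytes) -> bool:
    buf = buffers[_h(b"%d:%d" % (mapping, offset)) % NUM_BUFFERS]
    return _h(inode_data) in buf["entries"]

record_eviction(1, 42, b"inode-1")
```

Storing fixed-size hashes instead of full page metadata keeps the non-resident history small and bounded.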
20100268901 | RECONFIGURABLE MEMORY SYSTEM DATA STROBES - In a reconfigurable data strobe-based memory system, data strobes may be re-tasked in different modes of operation. For example, in one mode of operation a differential data strobe may be used as a timing reference for a given set of data signals. In a second mode of operation, one of the components of the differential data strobe may be used as a timing reference for a first portion of the set of data signals and the other component used as a timing reference for a second portion of the set of data signals. Different data mask-related schemes also may be invoked for different modes of operation. For example, in a first mode of operation a memory controller may generate a data mask signal to prevent a portion of a set of data from being written to a memory array. Then, in a second mode of operation the memory controller may invoke a coded value replacement scheme or a data strobe transition inhibition scheme to prevent a portion of a set of data from being written to a memory array. | 2010-10-21 |
20100268902 | ASYNCHRONOUS DISTRIBUTED OBJECT UPLOADING FOR REPLICATED CONTENT ADDRESSABLE STORAGE CLUSTERS - A method is performed by two or more devices of a group of devices in a distributed data replication system. The method includes receiving, at the two or more devices, a group of chunks having a same unique temporary identifier, where the group of chunks comprises an object to be uploaded; creating an entry for the object in a replicated index, where the entry is keyed by the unique temporary identifier, and where the replicated index is replicated at each of the two or more devices; and determining, by an initiating device of the two or more devices, that a union of the group of chunks contains all data of the object. The method also includes calculating a content-based identifier for the object; creating another entry for the object in the replicated index, where the other entry is keyed by the content-based identifier; and updating the replicated index to point from the unique temporary identifier to the content-based identifier. | 2010-10-21 |
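The flow above (chunks keyed by a temporary identifier, a completeness check over their union, then re-keying by a content-based identifier) can be sketched on a single node; the function, its parameters, and the dict-based index are illustrative assumptions:

```python
import hashlib

def assemble(chunks, temp_id, object_size, index):
    """chunks: list of (offset, bytes) sharing temp_id. Returns True once complete."""
    index.setdefault(temp_id, {"chunks": []})["chunks"].extend(chunks)
    covered, data = 0, b""
    for off, payload in sorted(index[temp_id]["chunks"]):
        if off != covered:
            return False                    # union does not yet cover the object
        covered += len(payload)
        data += payload
    if covered < object_size:
        return False
    content_id = hashlib.sha256(data).hexdigest()   # content-based identifier
    index[content_id] = {"data": data}
    index[temp_id] = {"alias": content_id}  # temp id now points to the content id
    return True

index = {}
assemble([(0, b"hel")], "tmp-1", 5, index)          # partial: not yet complete
done = assemble([(3, b"lo")], "tmp-1", 5, index)    # union now covers the object
```

Keying first by a temporary identifier lets chunks arrive asynchronously at different devices before the content hash can be known.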
20100268903 | COMPUTER SYSTEM COMPRISING STORAGE OPERATION PERMISSION MANAGEMENT - The system of the present invention enhances the security of settings and operations in a storage device, and copes with frequent changes in the operational status of work executed within a computer system. When an operating command must be issued to the storage, whether the operation is permitted is determined on the basis of the operational status of the work and a definition of operation permission for each work operation state. | 2010-10-21 |
20100268904 | APPARATUS AND METHODS FOR REGION LOCK MANAGEMENT ASSIST CIRCUIT IN A STORAGE SYSTEM - Apparatus and methods for improved region lock management in a storage controller. A region lock management circuit coupled with a memory is provided for integration in a storage controller. One or more I/O processor circuits of the storage controller transmit requests to the region lock management circuit to request a temporary lock for a region of storage on a volume of the storage system. The region lock management circuit determines whether the requested lock may be granted or whether it conflicts with other presently locked regions. Presently locked regions and regions to be locked are represented by region lock data structures. In one exemplary embodiment, the region lock data structures for each logical volume may be stored as a tree data structure. A tree assist circuit may also be provided to aid the region lock management circuit in managing the region lock tree data structures. | 2010-10-21 |
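The grant-or-conflict decision at the heart of the abstract above is an interval-overlap check over the currently locked regions of a volume. A minimal software model, with a flat list standing in for the tree data structure the abstract mentions (all names illustrative):

```python
class RegionLockManager:
    def __init__(self):
        self.locks = {}                     # volume -> list of (start, end) locked regions

    def try_lock(self, volume, start, length):
        """Grant a lock on [start, start+length) unless it overlaps an existing lock."""
        end = start + length
        for s, e in self.locks.get(volume, []):
            if start < e and s < end:       # intervals overlap -> conflict
                return False
        self.locks.setdefault(volume, []).append((start, end))
        return True

    def unlock(self, volume, start, length):
        self.locks[volume].remove((start, start + length))

mgr = RegionLockManager()
ok1 = mgr.try_lock("vol0", 0, 100)
ok2 = mgr.try_lock("vol0", 50, 10)          # overlaps the first lock -> denied
mgr.unlock("vol0", 0, 100)
ok3 = mgr.try_lock("vol0", 50, 10)          # region free again -> granted
```

The patent's tree-assist circuit exists precisely because this overlap search is on the I/O fast path; a balanced tree makes the check logarithmic rather than linear.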
20100268905 | MEMORY MAPPING SYSTEM, REQUEST CONTROLLER, MULTI-PROCESSING ARRANGEMENT, CENTRAL INTERRUPT REQUEST CONTROLLER, APPARATUS, METHOD FOR CONTROLLING MEMORY ACCESS AND COMPUTER PROGRAM PRODUCT - A memory mapping system is connectable to a multi-processing arrangement. The multi-processing arrangement includes a first processing unit and a second processing unit. The memory mapping system includes a main memory to which the second processing unit does not have write access, the main memory including a first memory section and a second memory section. An associated memory is associated with the second memory section. The associated memory includes a memory section to which the second processing unit has write access. A consistency control unit can maintain consistency between data stored in the associated memory and data stored in the second memory section. | 2010-10-21 |
20100268906 | HIGH BANDWIDTH MEMORY INTERFACE - This invention describes an improved high bandwidth chip-to-chip interface for memory devices, which is capable of operating at higher speeds while maintaining error-free data transmission, consuming less power, and supporting more load. Accordingly, the invention provides a memory subsystem comprising at least two semiconductor devices; a main bus containing a plurality of bus lines for carrying substantially all data and command information needed by the devices, the semiconductor devices including at least one memory device connected in parallel to the bus; the bus lines including respective row command lines and column command lines; a clock generator for coupling to a clock line, the devices including clock inputs for coupling to the clock line; and the devices including programmable delay elements coupled to the clock inputs to delay the clock edges for setting an input data sampling time of the memory device. | 2010-10-21 |
20100268907 | Selecting A Target Number of Pages for Allocation to a Partition - In an embodiment, a target number of discretionary pages for a first partition is calculated as a function of a number of physical page table faults, a number of sampled page faults, a number of shared physical page pool faults, a number of re-page-ins, and a ratio of pages. If the target number of discretionary pages for the first partition is less than a number of the discretionary pages that are allocated to the first partition, a result page is found that is allocated to the first partition and the result page is deallocated from the first partition. If the target number of discretionary pages for the first partition is greater than the number of the discretionary pages that are allocated to the first partition, a free page is allocated to the first partition. | 2010-10-21 |
20100268908 | DATA STORAGE METHOD, DEVICE AND SYSTEM AND MANAGEMENT SERVER - The present invention relates to a data storage method, device and system and a management server. The data storage method includes: forming a data pool from n data storage devices; and, when there is data for storage, polling all the devices in the data pool to select a group of m devices and storing the data onto each of the selected m devices, where m is larger than one and smaller than n. Embodiments of the invention address two problems of existing data storage approaches: a failing node increases the load on, and destabilizes, other nodes; and each node has a low utilization ratio and poor predictability. The embodiments achieve uniform loads on the devices and high reliability of the nodes despite any failing node, and improve the resource utilization ratio and predictability of the nodes. | 2010-10-21 |
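The pool-and-poll placement described above (replicate each stored object onto m of the n devices, chosen by polling the pool so load stays uniform) can be sketched with a simple rotating cursor; the class, its round-robin policy, and all names are illustrative assumptions:

```python
class DataPool:
    def __init__(self, n_devices):
        self.devices = [[] for _ in range(n_devices)]   # each device's stored objects
        self.cursor = 0                                 # polling position in the pool

    def store(self, data, m):
        """Write `data` onto m distinct devices, 1 < m < n."""
        n = len(self.devices)
        assert 1 < m < n
        chosen = [(self.cursor + i) % n for i in range(m)]
        for d in chosen:
            self.devices[d].append(data)
        self.cursor = (self.cursor + m) % n   # rotate so successive writes spread out
        return chosen

pool = DataPool(n_devices=5)
first = pool.store("obj-a", m=2)    # lands on devices 0 and 1
second = pool.store("obj-b", m=2)   # lands on devices 2 and 3
```

Because each object lives on m devices, the loss of any single node leaves m-1 replicas, and the rotation keeps per-device load predictable.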
20100268909 | STORAGE SYSTEM AND CONTROLLING SYSTEM AND METHOD THEREOF - A controlling system is used in a storage system. The storage system includes a host and at least one storage device connected to the host in series. The controlling system includes a detecting unit and a partitioning unit. The detecting unit is operable to detect the number of the at least one storage device. The partitioning unit is operable to partition the at least one storage device and generate a partition table and at least one partition information table. The partition table records partition information of the at least one storage device. Each partition information table is stored in a corresponding storage device and records storage information of the corresponding storage device. | 2010-10-21 |