15th week of 2014 patent application highlights part 51 |
Patent application number | Title | Published |
20140101311 | Method of Determining an Attribute of a Server - A method of determining an operational attribute of a server executed on a first execution platform and providing a service, the method comprising: performing a measurement indicative of an operational attribute of the server, wherein the measurement is performed by a platform observer system executed on said first execution platform; communicating a result of said measurement to an external observer system; wherein the communicating comprises protecting secrecy of the communicated result; verifying, by the external observer system, that the received measurement result is indicative of a measurement performed on said server. | 2014-04-10 |
20140101312 | ACCESS ALLOCATION IN HETEROGENEOUS NETWORKS - Radio resource management and access allocation is provided in heterogeneous networks. Sequential access allocation in heterogeneous networks is facilitated based on multiple scheduling priorities. In addition, clustering and grouping during access allocation is allowed for heterogeneous networks. | 2014-04-10 |
20140101313 | Cloud-Based Dynamic Session License Control - A method for controlling session access within a cloud-based network license zone (NLZ) includes registering one or more virtual machines, modifying a zone-wide session license based upon the registration step and transmitting the modified license to the plurality of virtual machines. The method also includes periodically receiving a network access message from each of the other virtual machines, each network access message including a count of active sessions enumerated by service type currently processed by the other virtual machine, determining a summation of active sessions, the summation based in part on the network access messages and a count of active sessions currently processed by the virtual machine, and enforcing a total count of active sessions, each virtual machine configured to reject new session requests received at the virtual machine when the total count of active sessions exceeds a predetermined number of active sessions as defined in the modified license. | 2014-04-10 |
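The enforcement logic in the abstract above can be illustrated with a small sketch (class and field names here are hypothetical, not the claimed implementation): each virtual machine sums the session counts its peers report periodically plus its own, and rejects new session requests once the licensed zone-wide total is reached.

```python
# Illustrative sketch of zone-wide session-license enforcement
# (hypothetical names; not the patented implementation).

class LicensedVM:
    def __init__(self, vm_id, licensed_max):
        self.vm_id = vm_id
        self.licensed_max = licensed_max   # total sessions allowed zone-wide
        self.local_sessions = 0            # sessions this VM is processing
        self.peer_counts = {}              # vm_id -> last reported count

    def receive_network_access_message(self, peer_id, session_count):
        """Record a peer VM's periodically reported active-session count."""
        self.peer_counts[peer_id] = session_count

    def total_active_sessions(self):
        """Summation of peer-reported counts plus local sessions."""
        return self.local_sessions + sum(self.peer_counts.values())

    def try_accept_session(self):
        """Accept a new session only if the zone-wide total stays in license."""
        if self.total_active_sessions() >= self.licensed_max:
            return False   # reject: license limit reached
        self.local_sessions += 1
        return True
```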
20140101314 | METHOD AND APPARATUS FOR CONNECTING TO SERVER USING TRUSTED IP ADDRESS OF DOMAIN - An apparatus for connecting to an update server includes an update unit configured to connect to the update server over a network using a pre-stored domain name address of the update server and an IP address acquisition unit configured to acquire an IP address of the connected update server. The IP address acquired by the IP address acquisition unit is stored as a trusted IP address in a storage unit. The apparatus further includes a reconnection processing unit configured to fetch the trusted IP address of the update server and try connecting to the update server using the trusted IP address in the case of failure to connect to the update server using the pre-stored domain name address. | 2014-04-10 |
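The reconnection fallback described above is simple to sketch (the connect callable is a hypothetical injected dependency; this is not the patented code): connect by domain name, remember the resolved IP as trusted, and retry with that IP when the domain-based connection fails.

```python
# Illustrative sketch of domain-name connection with trusted-IP fallback
# (the connect function is a hypothetical injected dependency).

class UpdateClient:
    def __init__(self, domain, connect):
        self.domain = domain
        self.connect = connect        # callable(address) -> IP, or raises
        self.trusted_ip = None        # last IP acquired via the domain name

    def connect_to_update_server(self):
        """Try the domain name first; on failure, use the stored trusted IP."""
        try:
            ip = self.connect(self.domain)
            self.trusted_ip = ip      # store acquired IP as trusted
            return ip
        except ConnectionError:
            if self.trusted_ip is None:
                raise                 # no fallback available yet
            return self.connect(self.trusted_ip)
```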
20140101315 | INSERTING USER TARGETED WEB RESOURCES INTO BROWSER NAVIGATION MEMORY - An apparatus for inserting user targeted web resources into browser navigation memory includes a storage device storing machine-readable code and a processor executing the machine-readable code. The machine-readable code includes a determination module determining whether a web resource is a user targeted web resource. The web resource is loaded in a web browser. The machine-readable code includes an insertion module inserting a record of the web resource into navigation memory of the web browser in response to the determination module determining that the web resource is a user targeted web resource. | 2014-04-10 |
20140101316 | APPARATUS AND METHOD FOR PROVISIONING - A provisioning management apparatus for a cloud data center collects cloud resource information including at least resource form information and performance measurement information of a cloud, determines a present resource state of the cloud data center using the collected cloud resource information, calculates a theoretical optimal resource reservation based on the present resource state, configures a resource of the cloud data center when the theoretical optimal resource reservation accepts a user request, and verifies the resource configuration. | 2014-04-10 |
20140101317 | INTEGRATED VPN MANAGEMENT AND CONTROL APPARATUS AND METHOD - Disclosed are an integrated virtual private network (VPN) management and control apparatus and method. The integrated VPN management and control apparatus according to an embodiment of the present invention manages and controls a plurality of VPNs between a client and a cloud center through communication with a cloud management system, and manages and controls connection between a VPN and a VPN edge device according to a VPN setting, change, or deletion request. | 2014-04-10 |
20140101318 | MANAGEMENT OF VIRTUAL APPLIANCES IN CLOUD-BASED NETWORK - Embodiments relate to instantiating and operating a virtual appliance monitor in a network cloud environment. A method includes receiving, by a virtual appliance monitor executing in a network cloud system, appliance state information representing an execution state of a virtual appliance of a set of virtual appliances instantiated in the network cloud system that the virtual appliance monitor is to monitor, wherein the virtual appliance monitor is instantiated by a cloud management server device managing the network cloud system, managing, by the virtual appliance monitor, the set of virtual appliances in view of the received appliance state information, and terminating, by the cloud management server device, the virtual appliance monitor and the set of virtual appliances monitored by the virtual appliance monitor when a subscription period for the virtual appliance monitor and the set of virtual appliances expires. | 2014-04-10 |
20140101319 | METHOD FOR UNIFORM NETWORK ACCESS - According to some embodiments, a registry is displayed. The registry may, for example, indicate resources available from a plurality of remote network access devices via a communications network. Moreover, a personal network address may be associated with each available resource, the personal network address including a destination address portion and an application program identifier portion. A direct communications link may then be established between a first network access device hosting an available resource and a second network access device using the personal network address associated with the resource. | 2014-04-10 |
20140101320 | INFORMATION PROCESSING SYSTEM, CONTROL METHOD, MANAGEMENT APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM - A management apparatus of a sub-system receives, from a main system, management information including information about each resource of processor resource, storage resource, network resource, which are used by a user with the main system. The management apparatus of the sub-system reserves, in the sub-system, the storage resource for storing data of the user held by the main system and the minimum network resource for receiving the data, on the basis of the management information received. The management apparatus of the sub-system uses the reserved network resource to receive the data from the main system, and stores the data to the reserved storage resource. The management apparatus of the sub-system reserves, in the sub-system, the network resource and the processor resource used by the user with the main system, on the basis of the received management information when the user is allowed to use the second system. | 2014-04-10 |
20140101321 | Redirecting of Network Traffic for Application of Stateful Services - Techniques are presented herein for redirection between any number of network devices that are distributed to any number of sites. A first message of a flow is received from a network endpoint at a first network device. A relationship between the endpoint and the first network device is registered in a directory that maps endpoints for network devices. A state for the flow is stored at the first network device. A second message is received for the flow which is indicative of the first endpoint at a second network device. It is determined that the second network device does not store the flow state for the flow. Querying is performed to receive information indicative of the relationship between the endpoint and the first network device. The received information is stored in a cache at the second network device. Services are applied to the second message according to the stored information. | 2014-04-10 |
20140101322 | MANAGING MID-DIALOG SESSION INITIATION PROTOCOL (SIP) MESSAGES - Processing mid-dialog SIP messages by receiving a mid-dialog SIP message from a SIP user agent client, creating a new SIP session, associating the new SIP session with the mid-dialog SIP message, identifying an application that is associated with the mid-dialog SIP message, providing to the application the mid-dialog SIP message in the context of the new SIP session, receiving an acknowledgement from the application that the application will accept the mid-dialog SIP message, and responsive to receiving the acknowledgement, providing to the application the mid-dialog SIP message in the context of the new SIP session. | 2014-04-10 |
20140101323 | MANAGING MID-DIALOG SESSION INITIATION PROTOCOL (SIP) MESSAGES - Processing mid-dialog SIP messages by receiving a mid-dialog SIP message from a SIP user agent client, creating a new SIP session, associating the new SIP session with the mid-dialog SIP message, identifying an application that is associated with the mid-dialog SIP message, providing to the application the mid-dialog SIP message in the context of the new SIP session, receiving an acknowledgement from the application that the application will accept the mid-dialog SIP message, and responsive to receiving the acknowledgement, providing to the application the mid-dialog SIP message in the context of the new SIP session. | 2014-04-10 |
20140101324 | DYNAMIC VIRTUAL PRIVATE NETWORK - Various embodiments establish a virtual private network (VPN) between a remote network and a private network. In one embodiment, a first system in the remote network establishes a connection with a central system through a public network. The central system is situated between the first system and a second system in the private network. The first system receives, from the central system and based on establishing the connection, a set of VPN information associated with at least the second system. The first system disconnects from the central system and establishes a VPN directly with the second system through the public network based on the set of VPN information. | 2014-04-10 |
20140101325 | DYNAMIC VIRTUAL PRIVATE NETWORK - Various embodiments establish a virtual private network (VPN) between a remote network and a private network. In one embodiment, a first system in the remote network establishes a connection with a central system through a public network. The central system is situated between the first system and a second system in the private network. The first system receives, from the central system and based on establishing the connection, a set of VPN information associated with at least the second system. The first system disconnects from the central system and establishes a VPN directly with the second system through the public network based on the set of VPN information. | 2014-04-10 |
20140101326 | DATA CLIENT - Facilitating the distribution of content is disclosed. A request for content is received from a requesting peer. A peer type compatibility criteria is applied to an allocation process that allocates at least one sending peer to deliver the content to the requesting peer. The peer type compatibility criteria ensures that a lightweight peer is paired with a regular peer. | 2014-04-10 |
20140101327 | SERVER DEVICE AND INFORMATION PROCESSING METHOD - There is provided a server device including a streaming processing unit configured to generate a frame image in real time, encode the frame image to generate encoded data, and transmit the encoded data to a client device over a network, the client device being configured to decode the encoded data and output the frame image, and a controller configured to receive information related to an output timing of the frame image in the client device from the client device and control a process timing of the frame image in the streaming processing unit so that a predetermined relationship is maintained between the output timing and the process timing. | 2014-04-10 |
20140101328 | SYSTEM AND METHOD FOR OPTIMIZING A COMMUNICATION SESSION BETWEEN MULTIPLE TERMINALS INVOLVING TRANSCODING OPERATIONS - System and method for optimizing a transcoding session between multiple terminals are disclosed. The method determines properties of the transcoding session, including a number of terminals participating in the transcoding session, media characteristics supported by each terminal, a measure of performance of the transcoding session to be optimized, and optionally a proportion of time involved in the transcoding session for each terminal. Then a cost function characterizing the measure of performance of the transcoding session and depending on the above properties of the transcoding session is built, followed by optimizing the cost function with respect to said measure of performance to determine an optimal measure of performance for the transcoding session and optimal values for the media characteristics for each terminal. In one embodiment, codecs used by multiple terminals and computational complexity of the transcoding session are optimized. A corresponding system for optimizing the transcoding session is also provided. | 2014-04-10 |
20140101329 | APPARATUS, SYSTEM, AND METHOD FOR MULTI-BITRATE CONTENT STREAMING - An apparatus for multi-bitrate content streaming includes a receiving module configured to capture media content, a streamlet module configured to segment the media content and generate a plurality of streamlets, and an encoding module configured to generate a set of streamlets. The system includes the apparatus, wherein the set of streamlets comprises a plurality of streamlets having identical time indices and durations, and each streamlet of the set of streamlets having a unique bitrate, and wherein the encoding module comprises a master module configured to assign an encoding job to one of a plurality of host computing modules in response to an encoding job completion bid. A method includes receiving media content, segmenting the media content and generating a plurality of streamlets, and generating a set of streamlets. | 2014-04-10 |
20140101330 | METHOD AND APPARATUS FOR STREAMING MULTIMEDIA CONTENTS - A method for streaming a multimedia content from at least one sender peer to a receiver peer, comprising: obtaining periodically a target downloading rate of the multimedia content from the at least one sender peer to the receiver peer, according to a playback rate of the multimedia content and a buffer occupancy level of the receiver peer; determining a downloading rate from each of the at least one sender peer to the receiver peer, according to the data transmission situation from each respective sender peer of the at least one sender peer to the receiver peer and the obtained target downloading rate; and streaming the multimedia content from the at least one sender peer to the receiver peer at the respective determined downloading rate. | 2014-04-10 |
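The two steps in the abstract above — deriving a target rate from playback rate and buffer occupancy, then splitting it across sender peers — can be sketched as follows. The scaling heuristic and the proportional split are assumptions for illustration, not the claimed algorithm.

```python
# Illustrative sketch of buffer-driven rate allocation across sender peers
# (heuristics here are assumptions, not the claimed algorithm).

def target_download_rate(playback_rate, buffer_level, target_level):
    """Scale the target rate up when the buffer is below its target level."""
    if buffer_level >= target_level:
        return playback_rate                 # buffer healthy: match playback
    deficit = (target_level - buffer_level) / target_level
    return playback_rate * (1.0 + deficit)   # refill faster when draining

def per_peer_rates(target_rate, peer_throughputs):
    """Split the target rate across peers proportionally to throughput."""
    total = sum(peer_throughputs.values())
    return {peer: target_rate * tp / total
            for peer, tp in peer_throughputs.items()}
```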
20140101331 | Method and System for Managing, Optimizing, and Routing Internet Traffic from a Local Area Network (LAN) to Internet Based Servers - A method and system for optimizing internet traffic from a Local Area Network (LAN) to an internet based server utilizes a specific gamer private network (GPN) for the classified latency sensitive internet data. The method includes the steps of creating a gateway computer or a master-slave computer (device) system within a local area network (LAN), and making this gateway computer control the internet data from any device within the LAN to an outside internet based server. The gateway computer sorts the internet data into various categories, including latency sensitive, bandwidth sensitive, and exclusion, which is neither latency sensitive nor bandwidth sensitive. Based on these classification results, the internet data within the various categories is sent out via the respective routes, so as to achieve smooth and efficient internet data transmission. | 2014-04-10 |
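The gateway's three-way classification could look roughly like this sketch (the port sets, category names, and route labels are illustrative assumptions, not the patented rules):

```python
# Illustrative sketch of the gateway's three-way traffic classification
# (port-based rules and route names are illustrative assumptions).

LATENCY_SENSITIVE_PORTS = {3074, 27015}     # e.g. game traffic
BANDWIDTH_SENSITIVE_PORTS = {80, 443}       # e.g. bulk downloads, video

def classify(dest_port):
    if dest_port in LATENCY_SENSITIVE_PORTS:
        return "latency"
    if dest_port in BANDWIDTH_SENSITIVE_PORTS:
        return "bandwidth"
    return "exclusion"   # neither latency- nor bandwidth-sensitive

def route_for(dest_port):
    """Latency-sensitive data goes via the private network; rest via ISP."""
    return {"latency": "gamer-private-network",
            "bandwidth": "default-isp-route",
            "exclusion": "default-isp-route"}[classify(dest_port)]
```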
20140101332 | METHOD AND SYSTEM FOR ACCESS POINT CONGESTION DETECTION AND REDUCTION - A method and system for detecting and reducing data transfer congestion in a wireless access point includes determining a round-trip-time value for an internet control message protocol (ICMP) packet transmitted from a source computing device to a first computing device of a plurality of computing devices via the wireless access point. A data rate for data transmissions from the source computing device is increased to a value no greater than a peak data rate value if the round-trip-time is less than a first threshold value. The data rate is decreased if the round-trip-time value is greater than a second threshold value. Additionally, the peak data rate value may also be decreased if the round-trip-time value is greater than the second threshold value. | 2014-04-10 |
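The RTT-threshold adjustment described above can be sketched in a few lines (the threshold values and step sizes are assumptions for illustration, not values from the application):

```python
# Illustrative sketch of the RTT-threshold rate adjustment
# (threshold values and step sizes are illustrative assumptions).

def adjust_rate(rtt_ms, rate, peak_rate,
                low_threshold=50.0, high_threshold=150.0, step=100):
    """Raise the rate (capped at peak) on low RTT; lower both on high RTT."""
    if rtt_ms < low_threshold:
        rate = min(rate + step, peak_rate)       # speed up, never past peak
    elif rtt_ms > high_threshold:
        rate = max(rate - step, 0)               # back off under congestion
        peak_rate = max(peak_rate - step, rate)  # also lower the ceiling
    return rate, peak_rate
```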
20140101333 | SYSTEM AND METHOD FOR SUPPORTING MESSAGING IN A FULLY DISTRIBUTED SYSTEM - A system and method can support messaging in a fully distributed system. The fully distributed system includes a plurality of agents. An agent in the plurality of agents operates to determine an address for a message, wherein said address is determined at least partially according to a content of the message. Then, said agent can select a path to transmit the message according to said address, and send the message according to said path directly to said address. | 2014-04-10 |
20140101334 | METHOD FOR CONTINUOUS, FRAME-SPECIFIC CLICK-STREAM RECORDING - A method for tracking a user's movements between network addresses can include, subsequent to a request for a (current) network address from a user, receiving the network address and an identifier for a region associated with the network address. The method can also include locating a record that contains the identifier for the region and a time that immediately precedes the request for the network address from the user. The record may further contain a prior network address. The method can further include generating an entry for a table that includes the identifier for the region, the current network address, and the prior network address. A server computer or a client computer can generate the entry. Improved accountability and improved user profile accuracy can be obtained with the method. A data processing system readable medium can comprise code that includes instructions for carrying out the method. | 2014-04-10 |
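The record-matching step above — find the most recent prior address for the same frame, then emit an entry linking current and prior addresses — can be sketched as follows. The record layout and matching rule are assumptions for illustration, not the claimed method.

```python
# Illustrative sketch of frame-specific click-stream entry generation
# (record layout and matching rule are assumptions, not the claimed method).

def generate_entry(records, frame_id, current_url, request_time):
    """Link the current URL to the prior URL seen in the same frame."""
    prior = None
    # Find the most recent record for this frame before the request time.
    for rec in records:
        if rec["frame_id"] == frame_id and rec["time"] < request_time:
            if prior is None or rec["time"] > prior["time"]:
                prior = rec
    entry = {"frame_id": frame_id,
             "current_url": current_url,
             "prior_url": prior["url"] if prior else None,
             "time": request_time}
    records.append({"frame_id": frame_id, "url": current_url,
                    "time": request_time})
    return entry
```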
20140101335 | Identifying, Translating and Filtering Shared Risk Groups in Communications Networks - A method, apparatus, and computer-readable storage medium for processing shared risk group (SRG) information in communications networks are disclosed. The method includes receiving network information comprising SRG information from a second domain at a first domain, obtaining at least one SRG identifier by processing the SRG information, and processing the at least one SRG identifier, the processing using processing criteria. The apparatus includes a network interface adapted to receive network information comprising shared risk group information, a processor coupled to the network interface and configured to execute one or more processes, and a memory coupled to the processor and adapted to obtain at least one SRG identifier by processing the SRG information and to process the at least one SRG identifier using processing criteria. The computer-readable storage medium is configured to store program instructions that when executed are configured to cause the processor to perform the method. | 2014-04-10 |
20140101336 | SYSTEM AND METHOD FOR IMPLEMENTING A MULTILEVEL DATA CENTER FABRIC IN A NETWORK ENVIRONMENT - A method is provided in one example embodiment and includes determining whether a first network element with which a second network element is attempting to establish an adjacency is a client type element. If the first network element is determined to be a client type element, the method further includes determining whether the first and second network elements are in the same network area. If the first network element is a client type element and the first and second network elements are determined to be in the same network area, the adjacency is established. Subsequent to the establishing, a determination is made whether the first network element includes an inter-area forwarder (IAF). | 2014-04-10 |
20140101337 | SYSTEMS AND METHODS FOR A DIALOG SERVICE INTERFACE SWITCH - Systems and methods for a dialog service interface switch are provided. In at least one embodiment, a system comprises a plurality of networks configured to enable communication transmissions from the mobile communication system to an end node, wherein at least two networks of the plurality of networks transports information through different protocol stacks that implement different protocol suites and a dialog service interface switch coupled to the plurality of networks. The system also comprises an application interface coupled to the dialog service interface switch, wherein the dialog service interface switch comprises a network selector that determines a network of the plurality of networks through which the mobile communication system will communicate and switches between different networks of the plurality of networks, wherein the application interface provides data to at least one application, executing in the application layer, in the same format for each network of the plurality of networks. | 2014-04-10 |
20140101338 | REDIRECTION COMMUNICATION - A method and system of communicating data to or from a remote computer. The remote computer is accessed by a CPU as though it were a local IDE controller attached to a local IDE device. A peripheral device distinct from the CPU provides a set of virtual IDE device registers and an IDE controller to the central processing unit. The peripheral device receives data written to the set of virtual IDE device registers, and transmits the data into a network, addressed for reception by the remote computer. The remote computer receives the data, interprets it, and performs operations upon a mirror set of device data. The remote computer then responds, and transmits its response across the network to the peripheral device. The peripheral device communicates the response to the CPU in a fashion identical to a physical IDE controller attached to a physical IDE device. | 2014-04-10 |
20140101339 | Efficient Scheduling of Read and Write Transactions in Dynamic Memory Controllers - Data-transfer transactions in the read and write directions may be balanced by taking snapshots of the transactions stored in a buffer, and executing transactions in the same direction back-to-back for each snapshot. | 2014-04-10 |
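The snapshot-and-group idea in the abstract above can be sketched directly (the queue representation is an assumption; real memory controllers operate on DRAM command queues): freeze the buffer contents, then issue all same-direction transactions back-to-back to avoid costly read/write bus turnarounds.

```python
# Illustrative sketch of snapshot-based read/write grouping
# (queue representation is an assumption; real controllers work on DRAM
#  command queues).

def schedule_snapshot(buffer):
    """Take a snapshot of pending transactions and execute all reads
    back-to-back, then all writes, avoiding bus-turnaround penalties."""
    snapshot = list(buffer)          # freeze current contents
    del buffer[:len(snapshot)]       # later arrivals wait for next snapshot
    reads = [t for t in snapshot if t[0] == "R"]
    writes = [t for t in snapshot if t[0] == "W"]
    return reads + writes            # same-direction transactions grouped
```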
20140101340 | Efficient Scheduling of Transactions from Multiple Masters - Data-transfer transactions from multiple masters may be balanced by taking snapshots of the transactions stored in a buffer, and executing transactions from each master back-to-back. | 2014-04-10 |
20140101341 | METHOD AND APPARATUS FOR DECREASING PRESENTATION LATENCY - Aspects of the present disclosure describe automatically changing an output mode of an output device from a first output mode to a latency reduction mode. An initiation signal and the output data may be received from a client device platform or a signal distributor. Upon receiving the initiation signal, the output device may change the output mode from the first output mode to the latency reduction mode. Thereafter, the output device may receive an end latency reduction mode signal. The output device may then revert back to the first output mode. It is emphasized that this abstract is provided to comply with the rules requiring an abstract that will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. | 2014-04-10 |
20140101342 | METHOD AND APPARATUS FOR IMPROVING DECREASING PRESENTATION LATENCY - Aspects of the present disclosure describe automatically changing an output mode of an output device from a first output mode to a latency reduction mode. An initiation signal and the output data may be received from a client device platform or a signal distributor. Upon receiving the initiation signal, the output device may change the output mode from the first output mode to the latency reduction mode. Thereafter, the output device may receive an end latency reduction mode signal. The output device may then revert back to the first output mode. It is emphasized that this abstract is provided to comply with the rules requiring an abstract that will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. | 2014-04-10 |
20140101343 | Dynamic Selection of Operating Modes - A dock connects to a user's host device and provides video output to a display. The host device is a computing device that executes one or more applications. One or more controllers are peripheral devices that can be used to control applications on the host device. A service module provides support for additional communication profiles that are more versatile than the communication profiles supported by the operating system on the host device. The service module establishes a unidirectional connection between the host device and the peripheral devices as well as a bidirectional connection. A control scheme identifying an operating mode associated with a peripheral device is retrieved from a server. The peripheral device is configured to send data to the host device in a format recognizable by one or more applications based on the control scheme. | 2014-04-10 |
20140101344 | EXTENDED INPUT/OUTPUT MEASUREMENT WORD FACILITY FOR OBTAINING MEASUREMENT DATA IN AN EMULATED ENVIRONMENT - An Extended Input/output (I/O) measurement word facility is provided. Provision is made for emulation of the Extended I/O measurement word facility. The facility provides for storing measurement data associated with a single I/O operation in an extended measurement word associated with an I/O response block. In a further aspect, the stored data may have a resolution of approximately one-half microsecond. | 2014-04-10 |
20140101345 | UNIVERSAL SERIAL BUS (USB) PLUG-IN EVENT DETECTION SYSTEM AND ASSOCIATED METHOD - Universal serial bus (USB) plug-in event detection systems and methods are disclosed herein. An exemplary USB system includes a USB interface and a USB capacitive-sensing detection module coupled with a data line of the USB interface. The USB capacitive-sensing detection module monitors a change in capacitance on the data line to detect USB plug-in events. The USB capacitive-sensing detection module can detect a USB plug-in event when the USB interface is in a powered-down state. The USB system can be configured to power up the USB interface upon detecting the USB plug-in event. The USB system can further include a USB host. The USB host can be in a standby or hibernation mode (minimum power state) when the USB capacitive-sensing detection module detects the USB plug-in event, and the USB system can be configured to wake-up the USB host from the standby or hibernation mode upon detecting the USB plug-in event. | 2014-04-10 |
20140101346 | REMOTELY CONTROLLABLE ELECTRICAL SOCKETS WITH PLUGGED APPLIANCE DETECTION AND IDENTIFICATION - Embodiments of the present invention provide for a remotely controllable electrical socket. Such sockets may include an electrical conductor for receiving a plug of an electrical device. The plug may be associated with a tag for receiving identifying information that corresponds to the electrical device. Exemplary sockets may further include a tag reader for obtaining identifying information from the tag, a sensor for detecting if the plug is inserted in the outlet, and a communications interface for wirelessly sending information to a computing device regarding the identifying information and whether the plug is inserted in the outlet. The communications interface may also receive operational instructions from the computing device (e.g., to turn the power to the plug/electrical device ON or OFF). | 2014-04-10 |
20140101347 | Isochronous Data Transfer Between Memory-Mapped Domains of a Memory-Mapped Fabric - Techniques for isochronous data transfer between different memory-mapped domains in a distributed system. A method includes configuring an isochronous engine with an isochronous period. The method further includes transferring data over a memory-mapped fabric from a first memory to a second memory during a specified portion of a cycle of the isochronous period. The first memory is comprised in a first device in a first memory-mapped domain of the memory-mapped fabric and the second memory is comprised in a second device in a second memory-mapped domain of the memory-mapped fabric. The method may further comprise translating one or more addresses related to the transferring. The memory-mapped fabric may be a PCI-Express fabric. The transferring may be performed by a DMA controller. A non-transparent bridge may separate the first and the second memory-mapped domains and may perform the translating. | 2014-04-10 |
20140101348 | HARD DISK DRIVE WITH INTEGRATED ETHERNET INTERFACE - An integrated circuit of a hard disk drive includes an Ethernet network interface module configured to transmit and receive data packets via an Ethernet connection. The data packets respectively include packet headers and at least one of small computer system interface (SCSI) commands and SCSI data requests. A processor is configured to process the data packets transmitted and received by the Ethernet network interface module. A hard disk control module is configured to control, based on the at least one of the SCSI commands and the SCSI data requests, writing of data to a hard disk and reading of the data from the hard disk. Each of the hard disk control module, the processor, and the network interface module is located in the integrated circuit. | 2014-04-10 |
20140101349 | CONFIGURABLE SERIAL INTERFACE - Method and system for configuring a serial interface. The system includes one or more input nodes each coupled to a corresponding serial bus. One or more output nodes are coupled to a respective serial bus, each output node having a respective driver. A voltage detection circuit determines the voltage at a configuration node. Mode of serial bus operation is based on the voltage level detected at the configuration node. In at least one mode of serial bus operation, the configuration node is used as a mode select input and power source for at least one output driver. | 2014-04-10 |
20140101350 | METHOD FOR FINDING STARTING BIT OF REFERENCE FRAMES FOR AN ALTERNATING-PARITY REFERENCE CHANNEL - The present invention discloses a method for locating the reference frames of the reference lane on the transmitting data bus. The present invention addresses this objective by disclosing a method whereby the relationship between the size of the reference frame transmitted over the reference lane and the width of the data bus is such that the reference frame is bit-shifted automatically until it is aligned with the data bus. | 2014-04-10 |
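The alignment property described above can be illustrated with a toy model (an assumption for illustration, not the claimed method): if each frame advances the start-bit position by the frame size modulo the bus width, the start bit cycles through positions and eventually realigns with bit 0 of the bus.

```python
# Toy model of frame/bus realignment (an illustrative assumption,
# not the claimed method): the start bit advances by frame_bits mod
# bus_width per frame, so it returns to bus bit 0 after a bounded
# number of frames.

def frames_until_aligned(frame_bits, bus_width):
    """Count frames until the reference frame's start bit hits bus bit 0."""
    position = frame_bits % bus_width    # start bit of the second frame
    count = 1
    while position != 0:
        position = (position + frame_bits) % bus_width
        count += 1
    return count
```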
20140101351 | TWO-WIRE COMMUNICATION PROTOCOL ENGINE - In an example embodiment, a two-wire communication protocol engine manages control and data transmissions in a bi-directional, multi-node bus system where each node is connected over a twisted wire pair bus to another node. Some embodiments include a state machine that allows for synchronized updates of configuration data across the system, a distributed interrupt system, a synchronization pattern based on data coding used in the system, and data scrambling applied to a portion of the data transmitted over the twisted wire pair bus. The multi-node bus system comprises a master node and a plurality of slave nodes. The slave nodes can be powered over the twisted wire pair bus. | 2014-04-10 |
20140101352 | INTERRUPT CONTROLLER, APPARATUS INCLUDING INTERRUPT CONTROLLER, AND CORRESPONDING METHODS FOR PROCESSING INTERRUPT REQUEST EVENT(S) IN SYSTEM INCLUDING PROCESSOR(S) - An interrupt controller coupled to a plurality of processors is provided to route at least one interrupt request event to at least one of the processors. The interrupt controller includes a receiving circuit and a controlling circuit. The receiving circuit receives at least one interrupt input, and the controlling circuit generates the at least one interrupt request event based on the received at least one interrupt input and routes the generated interrupt request event to the at least one of the processors. The plurality of processors includes at least a first processor and a second processor, the first and second processors arranged to process interrupt request event(s), and the controlling circuit is arranged to withdraw/cancel assertion of an interrupt request event that has been transmitted to the first processor. | 2014-04-10 |
20140101353 | MULTI-PROCESSOR DEVICE - The present invention provides a high-performance multi-processor device in which independent buses and external bus interfaces are provided for each group of processors of different architectures when a single chip includes a plurality of multi-processor groups. A multi-processor device of the present invention comprises a plurality of processors including first and second groups of processors of different architectures such as CPUs, SIMD type super-parallel processors, and DSPs; a first bus which is a CPU bus to which the first processor group is coupled; a second bus which is an internal peripheral bus to which the second processor group is coupled, independent of the first bus; a first external bus interface to which the first bus is coupled; and a second external bus interface to which the second bus is coupled, over a single semiconductor chip. | 2014-04-10 |
20140101354 | MEMORY ACCESS CONTROL MODULE AND ASSOCIATED METHODS - First and second data interfaces provide data transfer to and from a plurality of memory banks. The first data interface uses a first bus size and a first clock frequency. The second data interface uses a second bus size and a second clock frequency. The second bus size is an integer multiple of the first bus size. The first clock frequency is an integer multiple of the second clock frequency. A channelizer module segments data from the second data interface into data segments of the first bus size and transmits them to addressed ones of the plurality of memory banks using the first clock frequency. The channelizer module also receives data in accordance with the first bus size and first clock frequency from the plurality of memory banks, combines this data into the second bus size, and transmits the data to the second data interface using the second clock frequency. | 2014-04-10 |
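The channelizer behavior described in the abstract above (segmenting wide-bus data into narrow-bus segments and recombining in the other direction) can be sketched as follows; the bus sizes of 8 and 2 bytes and all names are illustrative assumptions, not taken from the application:

```python
def channelize(data: bytes, wide_bus_bytes: int = 8, narrow_bus_bytes: int = 2):
    """Segment one wide-bus word into narrow-bus segments.

    The wide bus size must be an integer multiple of the narrow bus size,
    mirroring the claim that the second bus size is an integer multiple
    of the first.
    """
    assert wide_bus_bytes % narrow_bus_bytes == 0
    assert len(data) == wide_bus_bytes
    return [data[i:i + narrow_bus_bytes]
            for i in range(0, wide_bus_bytes, narrow_bus_bytes)]

def combine(segments):
    """Recombine narrow-bus segments back into one wide-bus word."""
    return b"".join(segments)

segs = channelize(b"ABCDEFGH")
assert segs == [b"AB", b"CD", b"EF", b"GH"]
assert combine(segs) == b"ABCDEFGH"
```

The corresponding clock relationship (first clock an integer multiple of the second) is omitted here; only the size conversion is modeled.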
20140101355 | VIRTUALIZED COMMUNICATION SOCKETS FOR MULTI-FLOW ACCESS TO MESSAGE CHANNEL INFRASTRUCTURE WITHIN CPU - A message channel optimization method and system enables multi-flow access to the message channel infrastructure within a CPU of a processor-based system. A user (pcode) employs a virtual channel to submit message channel transactions, with the message channel driver processing the transaction “behind the scenes”. The message channel driver thus allows the user to continue processing without having to block other transactions from being processed. Each transaction will be processed, either immediately or at some future time, by the message channel driver. The message channel optimization method and system are useful for tasks involving message channel transactions as well as non-message channel transactions. | 2014-04-10 |
20140101356 | TRANSMISSION DEVICE, TRANSMISSION SYSTEM, AND CONTROL METHOD FOR TRANSMISSION DEVICE - A transmission device includes a plurality of transmitting units that transmit data to an opposing device via different paths, a determining unit that compares a first speed of an operation clock for the opposing device with a second speed of an operation clock for the transmission device, and an inserting unit that inserts, when the first speed is same as the second speed, first difference absorbing data that has a predetermined data length into the data to be transmitted by the transmitting units, that inserts, when the first speed is higher, second difference absorbing data that has a data length smaller than the predetermined data length into the data, and that inserts, when the second speed is higher, third difference absorbing data that has a data length greater than the predetermined data length into the data. | 2014-04-10 |
20140101357 | METHOD AND PROTOCOL FOR HIGH-SPEED DATA CHANNEL DETECTION CONTROL - A system capable of bi-directional data transfer, the system including a host configured to send downstream data to a peripheral and to receive upstream data from the peripheral, a main link coupled to the host and configured to transfer the downstream data from the host to the peripheral, and an auxiliary link coupled to the host and including a first auxiliary link lane for transferring the upstream data from the peripheral to the host in a first mode, and for transferring the downstream data from the host to the peripheral in a second mode, wherein the host is configured to engage in one or more handshake processes with the peripheral to cause the auxiliary link to switch between the first and second modes. | 2014-04-10 |
20140101358 | BYTE SELECTION AND STEERING LOGIC FOR COMBINED BYTE SHIFT AND BYTE PERMUTE VECTOR UNIT - Exemplary embodiments of the present invention disclose a method and system for executing data permute and data shift instructions. In a step, an exemplary embodiment encodes a control index value using the recoding logic into a 1-hot-of-n control for at least one of a plurality of datum positions in the one or more target registers. In another step, an exemplary embodiment conditions the 1-hot-of-n control by a gate-free logic configured for at least one of the plurality of datum positions in the one or more target registers for each of the data permute instructions and the at least one data shift instruction. In another step, an exemplary embodiment selects the 1-hot-of-n control or the conditioned 1-hot-of-n control based on a current instruction mode. In another step, an exemplary embodiment transforms the selected 1-hot-of-n control into a format applicable for the crossbar switch. | 2014-04-10 |
20140101359 | ASYMMETRIC CO-EXISTENT ADDRESS TRANSLATION STRUCTURE FORMATS - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration. This facilitates provision of guest access in virtualized operating systems, and/or the mixing of translation formats to better match the data access patterns being translated. | 2014-04-10 |
20140101360 | ADJUNCT COMPONENT TO PROVIDE FULL VIRTUALIZATION USING PARAVIRTUALIZED HYPERVISORS - A system configuration is provided with a paravirtualizing hypervisor that supports different types of guests, including those that use a single level of translation and those that use a nested level of translation. When an address translation fault occurs during a nested level of translation, an indication of the fault is received by an adjunct component. The adjunct component addresses the address translation fault, at least in part, on behalf of the guest. | 2014-04-10 |
20140101361 | SYSTEM SUPPORTING MULTIPLE PARTITIONS WITH DIFFERING TRANSLATION FORMATS - A system configuration is provided with multiple partitions that supports different types of address translation structure formats. The configuration may include partitions that use a single level of translation and those that use a nested level of translation. Further, differing types of translation structures may be used. The different partitions are supported by a single hypervisor. | 2014-04-10 |
20140101362 | SUPPORTING MULTIPLE TYPES OF GUESTS BY A HYPERVISOR - A system configuration is provided that includes multiple partitions that have differing translation mechanisms associated therewith. For instance, one partition has associated therewith a single level translation mechanism for translating guest virtual addresses to host physical addresses, and another partition has a nested level translation mechanism for translating guest virtual addresses to host physical addresses. The different translation mechanisms and partitions are supported by a single hypervisor. Although the hypervisor is a paravirtualized hypervisor, it provides full virtualization for those partitions using nested level translations. | 2014-04-10 |
20140101363 | SELECTABLE ADDRESS TRANSLATION MECHANISMS WITHIN A PARTITION - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration. For a system configuration that includes partitions, the translation mechanism to be used for a partition or a portion thereof is selectable and may be different for different partitions or even portions within a partition. | 2014-04-10 |
20140101364 | SELECTABLE ADDRESS TRANSLATION MECHANISMS WITHIN A PARTITION - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration. For a system configuration that includes partitions, the translation mechanism to be used for a partition or a portion thereof is selectable and may be different for different partitions or even portions within a partition. | 2014-04-10 |
20140101365 | SUPPORTING MULTIPLE TYPES OF GUESTS BY A HYPERVISOR - A system configuration is provided that includes multiple partitions that have differing translation mechanisms associated therewith. For instance, one partition has associated therewith a single level translation mechanism for translating guest virtual addresses to host physical addresses, and another partition has a nested level translation mechanism for translating guest virtual addresses to host physical addresses. The different translation mechanisms and partitions are supported by a single hypervisor. Although the hypervisor is a paravirtualized hypervisor, it provides full virtualization for those partitions using nested level translations. | 2014-04-10 |
20140101366 | WRITING MEMORY BLOCKS USING CODEWORDS - A generator matrix is provided to generate codewords from messages of write operations. Rather than generate a codeword using the entire generator matrix, some number of bits of the codeword are determined to be, or designated as, stuck bits. One or more submatrices of the generator matrix are determined based on the columns of the generator matrix that correspond to the stuck bits. The submatrices are used to generate the codeword from the message, and only the bits of the codeword that are not the stuck bits are written to a memory block. By designating one or more bits as stuck bits, the operating life of the bits is increased. Some of the submatrices of the generator matrix may be pre-computed for different stuck bit combinations. The pre-computed submatrices may be used to generate the codewords, thereby increasing the performance of write operations. | 2014-04-10 |
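The generator-matrix encoding above can be illustrated over GF(2). This is a simplified stand-in: the application derives submatrices so that the codeword agrees with the stuck values, whereas this sketch merely encodes the message and skips writing the stuck positions. The example matrix and all names are hypothetical:

```python
def gf2_encode(message, G):
    """Codeword bit j is the GF(2) inner product of the message with
    column j of the generator matrix G."""
    n_cols = len(G[0])
    return [sum(m * row[j] for m, row in zip(message, G)) % 2
            for j in range(n_cols)]

def write_skipping_stuck(codeword, stuck, memory):
    """Write only the non-stuck bits of the codeword to the memory block;
    stuck cells keep their current value."""
    for i, bit in enumerate(codeword):
        if i not in stuck:
            memory[i] = bit
    return memory

# Hypothetical 3x4 generator matrix (identity columns plus one parity column).
G = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
codeword = gf2_encode([1, 0, 1], G)
assert codeword == [1, 0, 1, 0]
assert write_skipping_stuck(codeword, {2}, [0, 0, 0, 0]) == [1, 0, 0, 0]
```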
20140101367 | CONTROLLING METHOD FOR CONNECTOR, CONNECTOR AND MEMORY STORAGE DEVICE - A controlling method for a connector is provided, which includes: receiving a first signal stream under a condition that a squelch detector is turned off; determining whether the first signal stream contains a burst signal under a first operating frequency; and, if the first signal stream contains the burst signal, turning on the squelch detector and determining by the squelch detector, under a second operating frequency, whether a second signal stream is a waking signal, wherein the second signal stream is received after receiving the first signal stream and the second operating frequency is greater than the first operating frequency. The controlling method further includes: if the second signal stream is the waking signal, changing an operating state of the connector to an active state. In this way, the power consumption of the connector is reduced. | 2014-04-10 |
20140101368 | Binding microprocessor to memory chips to prevent re-use of microprocessor - A processor is provided that binds itself to a circuit such that the processor cannot be subsequently reused in other circuits. On a first startup of the processor, a memory segment of an external volatile memory device is read to obtain information prior to initialization of the memory segment. An original/initial identifier may be generated from the information read from the memory segment. The original/initial identifier may then be stored in a non-volatile storage of the processor. On subsequent startups of the processor, it verifies that the processor is still coupled to the same external volatile memory device by using the stored identifier. For instance, on a subsequent startup, the processor again reads the same memory segment of the external memory device and generates a new identifier. If the identifier matches the previously stored identifier, then the processor may continue its operations; otherwise the processor is disabled/halted. | 2014-04-10 |
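The bind-on-first-startup check described above can be sketched as follows, assuming the power-up contents of the uninitialized DRAM segment act as a stable, device-unique fingerprint (all names are hypothetical; SHA-256 stands in for whatever identifier derivation the application uses):

```python
import hashlib

def fingerprint(raw_segment: bytes) -> str:
    # Derive an identifier from the memory segment read prior to
    # initialization; assumed stable for a given device.
    return hashlib.sha256(raw_segment).hexdigest()

def first_startup(nv_store: dict, raw_segment: bytes) -> None:
    # Store the original identifier in the processor's non-volatile storage.
    nv_store["bound_id"] = fingerprint(raw_segment)

def later_startup(nv_store: dict, raw_segment: bytes) -> bool:
    # Continue only if still coupled to the same external memory device;
    # otherwise the processor would be disabled/halted.
    return nv_store.get("bound_id") == fingerprint(raw_segment)

nv = {}
segment = bytes(range(32))          # stand-in for the read memory segment
first_startup(nv, segment)
assert later_startup(nv, segment)           # same device: proceed
assert not later_startup(nv, b"\x00" * 32)  # different device: halt
```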
20140101369 | METHODS, DEVICES AND SYSTEMS FOR PHYSICAL-TO-LOGICAL MAPPING IN SOLID STATE DRIVES - A data storage device comprises a plurality of non-volatile memory devices storing physical pages, each stored at a predetermined physical location. A controller may be coupled to the memory devices and configured to access data stored in a plurality of logical pages (L-Pages), each associated with an L-Page number that enables the controller to logically reference data stored in the physical pages. A volatile memory may comprise a logical-to-physical address translation map that enables the controller to determine a physical location, within the physical pages, of data stored in each L-Page. The controller may be configured to maintain, in the memory devices, journals defining physical-to-logical correspondences, each journal covering a predetermined range of physical pages and comprising a plurality of entries that associate one or more physical pages to each L-Page. The controller may read the journals upon startup and rebuild the address translation map from the read journals. | 2014-04-10 |
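The journal-replay rebuild of the logical-to-physical map might look like this minimal sketch; the journal layout (lists of physical-page/L-Page pairs in write order) is an assumption for illustration, not taken from the claims:

```python
def rebuild_l2p(journals):
    """Replay journals in write order; later entries supersede earlier
    ones for the same L-Page, yielding its current physical location."""
    l2p = {}
    for journal in journals:
        for physical_page, l_page in journal:
            l2p[l_page] = physical_page
    return l2p

# L-Page 10 is written to physical page 0, then rewritten to page 2.
journals = [[(0, 10), (1, 11)], [(2, 10)]]
assert rebuild_l2p(journals) == {10: 2, 11: 1}
```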
20140101370 | APPARATUS AND METHOD FOR LOW POWER LOW LATENCY HIGH CAPACITY STORAGE CLASS MEMORY - A method and a storage system are provided for implementing enhanced solid-state storage class memory (eSCM) including a direct attached dual in line memory (DIMM) card containing dynamic random access memory (DRAM), and at least one non-volatile memory, for example, Phase Change memory (PCM), Resistive RAM (ReRAM), Spin-Transfer-Torque RAM (STT-RAM), and NAND flash chips. An eSCM processor controls selectively allocating data among the DRAM, and the at least one non-volatile memory primarily based upon a data set size. | 2014-04-10 |
20140101371 | SYSTEMS AND METHODS FOR NONVOLATILE MEMORY PERFORMANCE THROTTLING - Systems and methods for nonvolatile memory (“NVM”) performance throttling are disclosed. Performance of an NVM system may be throttled to achieve particular data retention requirements. In particular, because higher storage temperatures tend to reduce the amount of time that data may be reliably stored in an NVM system, performance of the NVM system may be throttled to reduce system temperatures and increase data retention time. | 2014-04-10 |
20140101372 | MEMORY SYSTEM AND READ RECLAIM METHOD THEREOF - A memory system includes a nonvolatile memory device including a first memory area formed of memory blocks which store n-bit data per cell and a second memory area formed of memory blocks which store m-bit data per cell, where n and m are different integers, and a memory controller configured to control the nonvolatile memory device. The memory controller is configured to execute a read operation, and to execute a read reclaim operation in which valid data of a target memory block of the second memory area is transferred to one or more memory blocks of the first memory area, the target memory block selected during the read operation. The read reclaim operation is processed as complete when all the valid data of the target memory block is transferred to the one or more memory blocks of the first memory area. | 2014-04-10 |
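The read-reclaim transfer of valid data into the first memory area can be sketched as follows. The block representation and names are hypothetical; completion is reported only when every valid page has been transferred, as in the abstract:

```python
def read_reclaim(target_block, first_area_blocks):
    """Copy every valid page of the target block (second memory area)
    into blocks of the first memory area; the operation is complete only
    when all valid data has been transferred."""
    valid_pages = [p for p in target_block["pages"] if p["valid"]]
    moved = 0
    for page in valid_pages:
        for block in first_area_blocks:
            if len(block["pages"]) < block["capacity"]:
                block["pages"].append(page)
                moved += 1
                break
    return moved == len(valid_pages)

target = {"pages": [{"valid": True}, {"valid": False}, {"valid": True}]}
area1 = [{"pages": [], "capacity": 1}, {"pages": [], "capacity": 4}]
assert read_reclaim(target, area1) is True
assert len(area1[0]["pages"]) == 1 and len(area1[1]["pages"]) == 1
```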
20140101373 | METHOD OF MANAGING DATA STORAGE DEVICE AND DATA STORAGE DEVICE - A method of managing a data storage device including a memory controller and a memory device includes: calculating a first sequential and consecutive write cost (SCWC) according to a garbage collection (GC) write operation policy, a second SCWC according to a slack space recycling (SSR) write operation policy and a third SCWC according to an in-place updating (IPU) write operation policy respectively, in response to a write request in the memory controller; determining a write operation policy which has a minimum cost of the first through third SCWCs; and writing data in a selected segment in the memory device according to the determined write operation policy. | 2014-04-10 |
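Once the three SCWC values have been calculated, selecting the write operation policy reduces to a minimum-cost comparison, sketched here with illustrative placeholder costs:

```python
def choose_write_policy(costs: dict) -> str:
    """Return the write operation policy with the minimum sequential and
    consecutive write cost (SCWC)."""
    return min(costs, key=costs.get)

# Illustrative SCWC values for the three policies named in the abstract.
scwc = {"GC": 3.2, "SSR": 1.5, "IPU": 2.1}
assert choose_write_policy(scwc) == "SSR"
```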
20140101374 | TRACKING A LIFETIME OF WRITE OPERATIONS TO A NON-VOLATILE MEMORY STORAGE - A method, device, and system are disclosed. In one embodiment method begins by incrementing a count of a total number of write operations to a non-volatile memory storage for each write operation to the non-volatile memory storage. The method then receives a request for the total count of lifetime write operations from a requestor. Finally, the method sends the total count of lifetime write operations to the requestor. | 2014-04-10 |
20140101375 | APPARATUS, SYSTEM, AND METHOD FOR ALLOCATING STORAGE - An apparatus, system, and method are disclosed for allocating non-volatile storage. The storage device may present a logical address space, which may exceed a physical storage capacity of the device. The storage device may allocate logical capacity in the logical address space. An allocation request may be allowed when there is sufficient unassigned and/or unallocated logical capacity to satisfy the request. Data may be stored on the non-volatile storage device by requesting physical storage capacity. A physical storage request, such as a storage request or a physical storage reservation, may be allowed when there is sufficient available physical storage capacity to satisfy the request. The device may maintain an index to associate logical identifiers (LIDs) in the logical address space with storage locations on the storage device. This index may be used to make logical capacity allocations and/or to manage physical storage space. | 2014-04-10 |
20140101376 | APPARATUS, SYSTEM, AND METHOD FOR CONDITIONAL AND ATOMIC STORAGE OPERATIONS - An apparatus, system, and method are disclosed for implementing conditional storage operations. Storage clients access and allocate portions of an address space of a non-volatile storage device. A conditional storage request is provided, which causes data to be stored to the non-volatile storage device on the condition that the address space of the device can satisfy the entire request. If only a portion of the request can be satisfied, the conditional storage request may be deferred or fail. An atomic storage request is provided, which may comprise one or more storage operations. The atomic storage request succeeds if all of the one or more storage operations complete successfully. If one or more of the storage operations fails, the atomic storage request is invalidated, which may comprise deallocating logical identifiers of the request and/or invalidating data on the non-volatile storage device pertaining to the request. | 2014-04-10 |
20140101377 | SOLID STATE MEMORY (SSM), COMPUTER SYSTEM INCLUDING AN SSM, AND METHOD OF OPERATING AN SSM - In one aspect, data is stored in a solid state memory which includes first and second memory layers. A first assessment is executed to determine whether received data is hot data or cold data. Received data which is assessed as hot data during the first assessment is stored in the first memory layer, and received data which is first assessed as cold data during the first assessment is stored in the second memory layer. Further, a second assessment is executed to determine whether the data stored in the first memory layer is hot data or cold data. Data which is then assessed as cold data during the second assessment is migrated from the first memory layer to the second memory layer. | 2014-04-10 |
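The two-stage hot/cold assessment and migration could be sketched as below, assuming a write-count heuristic with decay; the threshold and the halving-based decay rule are invented for illustration and are not taken from the application:

```python
class TwoLayerSSM:
    """First assessment on write placement; second assessment migrates
    data that has cooled from the first layer to the second."""

    def __init__(self, hot_threshold: int = 3):
        self.hot_threshold = hot_threshold
        self.writes = {}                   # address -> recent write count
        self.layer1, self.layer2 = {}, {}  # fast layer, capacity layer

    def write(self, addr, data):
        # First assessment: place hot data in layer 1, cold data in layer 2.
        self.writes[addr] = self.writes.get(addr, 0) + 1
        if self.writes[addr] >= self.hot_threshold:
            self.layer1[addr] = data
            self.layer2.pop(addr, None)
        else:
            self.layer2[addr] = data
            self.layer1.pop(addr, None)

    def second_assessment(self):
        # Second assessment: decay counts and demote layer-1 data that is
        # now assessed as cold.
        for addr in list(self.layer1):
            self.writes[addr] //= 2
            if self.writes[addr] < self.hot_threshold:
                self.layer2[addr] = self.layer1.pop(addr)

ssm = TwoLayerSSM(hot_threshold=2)
ssm.write(7, "x")                 # one write: cold, goes to layer 2
assert 7 in ssm.layer2
ssm.write(7, "y")                 # second write: hot, moves to layer 1
assert 7 in ssm.layer1
ssm.second_assessment()           # count decays below threshold: demoted
assert 7 in ssm.layer2
```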
20140101378 | Metadata Rebuild in a Flash Memory Controller Following a Loss of Power - A method of rebuilding metadata in a flash memory controller following a loss of power is provided. The method includes reading logical address information associated with an area of flash memory, and using time stamp information to determine if data stored in the flash memory area are valid. | 2014-04-10 |
20140101379 | Variable Over-Provisioning For Non-Volatile Storage - Dynamically varying Over-Provisioning (OP) enables improvements in lifetime, reliability, and/or performance of a Solid-State Disk (SSD) and/or a flash memory therein. A host coupled to the SSD writes newer data to the SSD. If the newer host data is less random than older host data, then entropy of host data on the SSD decreases. In response, an SSD controller of the SSD dynamically alters allocations of the flash memory, decreasing host allocation and increasing OP allocation. If the newer host data is more random, then the SSD controller dynamically increases the host allocation and decreases the OP allocation. The SSD controller dynamically allocates the OP allocation between host OP and system OP proportionally in accordance with a ratio of bandwidths of host and system data writes to the flash memory. | 2014-04-10 |
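The entropy-driven reallocation can be sketched as follows. Per the abstract, lower-entropy (less random, more compressible) host data leads to a smaller host allocation and a larger OP allocation; the entropy thresholds and allocation shares below are illustrative assumptions only:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def reallocate(total_gib: float, entropy: float,
               low=4.0, high=7.0, min_op=0.07, max_op=0.28):
    """Map lower entropy to a larger OP share (and smaller host share),
    as in the abstract; thresholds and shares are placeholders."""
    t = min(max((entropy - low) / (high - low), 0.0), 1.0)
    op = max_op - t * (max_op - min_op)
    return {"host": total_gib * (1 - op), "op": total_gib * op}

low_e = byte_entropy(b"\x00" * 64)           # highly compressible data
assert low_e == 0.0
alloc = reallocate(100.0, low_e)
assert abs(alloc["op"] - 28.0) < 1e-9        # low entropy -> larger OP
```

The further split of OP between host OP and system OP by write-bandwidth ratio is not modeled here.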
20140101380 | MANAGING BANKS IN A MEMORY SYSTEM - Systems and methods are provided that facilitate memory storage in a memory device. The system contains a memory controller and a memory array communicatively coupled to the memory controller. The memory controller sends commands to the memory array and the memory array writes or retrieves data contained therein based upon the command. The memory controller can monitor multiple banks and manage bank activations. Accordingly, memory access overhead can be reduced and memory devices can be more efficient. | 2014-04-10 |
20140101381 | MANAGING BANKS IN A MEMORY SYSTEM - Systems and methods are provided that facilitate memory storage in a multi-bank memory device. The system contains a memory controller and a memory array communicatively coupled to the memory controller. The memory controller sends commands to the memory array and the memory array updates or retrieves data contained therein based upon the command. If the memory controller detects a pattern of memory requests, the memory controller can issue a preemptive activation request to the memory array. Accordingly, memory access overhead is reduced. | 2014-04-10 |
20140101382 | DATA BUFFER WITH A STROBE-BASED PRIMARY INTERFACE AND A STROBE-LESS SECONDARY INTERFACE - A data buffer with a strobe-based primary interface and a strobe-less secondary interface used on a memory module is described. One memory module includes an address buffer, the data buffer and multiple dynamic random-access memory (DRAM) devices. The address buffer provides a timing reference to the data buffer and to the DRAM devices for one or more transactions between the data buffer and the DRAM devices via the strobe-less secondary interface. | 2014-04-10 |
20140101383 | REGISTER BANK CROSS PATH CONNECTION METHOD IN A MULTI CORE PROCESSOR SYSTEM - Scratch pad register banks are used as shared fast access storage between processors in a multi processor system. Instead of the usual one to one register mapping between the processors and the scratch pad register banks, an any to any mapping is implemented. The utilization of the scratch pad register banks is improved as the any to any mapping of the registers allow the storage of any processor register anywhere in the scratch pad register bank. | 2014-04-10 |
20140101384 | Storage System - Disclosed is a storage system that suppresses the occurrence of bottlenecks in the storage system, efficiently uses the bandwidth of its hardware, and achieves high reliability. A storage system includes a storage apparatus that stores data, a controller that controls data input/output with respect to the storage apparatus, and an interface that couples the storage apparatus and the controller. The storage apparatus has a plurality of physical ports that are coupled to the interface. The controller logically partitions a storage area of the storage apparatus into a plurality of storage areas and provides the plurality of storage areas, or allocates the plurality of physical ports to the logically partitioned storage areas. | 2014-04-10 |
20140101385 | File Management Method and Hierarchy Management File System - There is provided a file management system and a method of creating a hierarchy management file capable of preventing access performance from dropping when a user accesses a file. According to the system and method, a server creates file systems in high-speed and low-speed volumes, and a file-sharing server virtually integrates those file systems into one system as a pseudo file system. Then, the server moves a file to be moved to the file system created in the low-speed volume in advance, not when an access is made to the file. When a user accesses the file after that, the user directly accesses the destination without requiring copying of the file, so that access performance may be prevented from dropping. | 2014-04-10 |
20140101386 | DATA STORAGE DEVICE INCLUDING BUFFER MEMORY - A data storage device includes a data storage medium; a micro control unit (MCU) connected to a host through a first interface method and configured to control the data storage medium in response to a request of the host; and a buffer memory connected to the host through a second interface method, connected to the MCU, and controlled by the MCU and the host, respectively. | 2014-04-10 |
20140101387 | OPPORTUNISTIC CACHE REPLACEMENT POLICY - A cache management system employs a replacement policy in a manner that manages concurrent accesses to cache. The cache management system comprises a cache, a replacement policy storage for storing replacement statuses of cache lines of the cache, and an update module. The update module, comprising access filtering and a concurrent update handling, determines how updates to the replacement policy storage are handled. In a multi-threaded compute environment, a concurrent access to shared cache causes a selective update to the replacement policy storage. | 2014-04-10 |
20140101388 | CONTROLLING PREFETCH AGGRESSIVENESS BASED ON THRASH EVENTS - A method and apparatus for controlling the aggressiveness of a prefetcher based on thrash events is presented. An aggressiveness of a prefetcher for a cache is controlled based upon a number of thrashed cache lines that are replaced by a prefetched cache line and subsequently written back into the cache before the prefetched cache line has been accessed. | 2014-04-10 |
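Throttling prefetch aggressiveness on thrash events might look like the sketch below; the distance-halving policy, the threshold of four events, and the slow recovery on useful prefetches are assumptions for illustration, not details from the claims:

```python
class PrefetchThrottle:
    """Reduce prefetch distance (aggressiveness) when thrash events
    accumulate, and recover it slowly on useful prefetches."""

    def __init__(self, max_distance: int = 8, thrash_threshold: int = 4):
        self.max_distance = max_distance
        self.thrash_threshold = thrash_threshold
        self.distance = max_distance
        self.thrashes = 0

    def on_thrash(self):
        # Thrash event: a line evicted by a prefetch was written back into
        # the cache before the prefetched line was ever accessed.
        self.thrashes += 1
        if self.thrashes >= self.thrash_threshold and self.distance > 1:
            self.distance //= 2      # back off aggressiveness
            self.thrashes = 0

    def on_useful_prefetch(self):
        if self.distance < self.max_distance:
            self.distance += 1       # slowly restore aggressiveness

pf = PrefetchThrottle()
for _ in range(4):
    pf.on_thrash()
assert pf.distance == 4              # halved from the initial 8
```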
20140101389 | CACHE MANAGEMENT - A system includes a data store and a memory cache subsystem. A method for pre-fetching data from the data store for the cache includes determining a performance characteristic of a data store. The method also includes identifying a pre-fetch policy configured to utilize the determined performance characteristic of the data store. The method also includes pre-fetching data stored in the data store by copying data from the data store to the cache according to the pre-fetch policy identified to utilize the determined performance characteristic of the data store. | 2014-04-10 |
20140101390 | Computer Cache System Providing Multi-Line Invalidation Messages - A computer cache system delays cache coherence invalidation messages related to cache lines of a common memory region to collect these messages into a combined message that can be transmitted more efficiently. This delay may be coordinated with a detection of whether the processor is executing a data-race free portion of the program so that the delay system may be used for a variety of types of programs which may have data-race and data-race free sections. | 2014-04-10 |
20140101391 | CONDITIONAL WRITE PROCESSING FOR A CACHE STRUCTURE OF A COUPLING FACILITY - A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure, the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure, a conditional write failure notification is transmitted to the computing system. | 2014-04-10 |
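The conditional-write decision above reduces to a working-set membership test, sketched here with hypothetical names and return values:

```python
def conditional_write(cache: dict, working_set: set, key, value) -> str:
    """Process as an unconditional write if the data is part of the
    working set; otherwise report a conditional write failure."""
    if key in working_set:
        cache[key] = value
        return "written"
    return "conditional-write-failure"

cache, ws = {}, {"a"}
assert conditional_write(cache, ws, "a", 1) == "written"
assert conditional_write(cache, ws, "b", 2) == "conditional-write-failure"
assert cache == {"a": 1}
```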
20140101392 | LATENCY REDUCTION IN READ OPERATIONS FROM DATA STORAGE IN A HOST DEVICE - An apparatus includes a memory and a processor. The processor is configured to send to a storage device a request from an application to retrieve data from the storage device, so as to cause the data to be transferred from the storage device to the memory, to send to the application an acknowledgement that the requested data is available in the memory before the data has been fully transferred from the storage device to the memory, and, when the fetched data is ready in the memory, to provide the data to the application. | 2014-04-10 |
20140101393 | Storage Device and Controlling Method Thereof - A controlling method of a storage device is provided. The storage device is in communication with a handheld electronic device. Firstly, a connection status is provided to the handheld electronic device from the storage device, so that the connection status is shown on the handheld electronic device. The connection status indicates that a first storage unit is connected with the storage device. Then, a specified file of the first storage unit is selected according to the connection status shown on the handheld electronic device. Then, a read command is issued from the storage device to the first storage unit, and the specified file of the first storage unit is read in response to the read command. Afterwards, the specified file is stored into the storage device, and a storing result is provided to the handheld electronic device. | 2014-04-10 |
20140101394 | COMPUTER SYSTEM AND VOLUME MANAGEMENT METHOD FOR THE COMPUTER SYSTEM - The present invention allows distribution of the load generated by a single VOL to multiple processor units, by dividing the VOL into a plurality of smaller fractions called sub-VOLs and distributing their ownership to multiple processor units. The division of a VOL is performed by dividing the control information of the VOL among a plurality of sub-VOLs and (A) assigning VOL ownership to a processor unit for processing the tasks that are related to the complete VOL (e.g. VOL RESERVE command) and (B) assigning ownership of each sub-VOL to different processor units for processing tasks that are specific to that sub-VOL (e.g. Read/Write commands). Thus the load on a single sub-VOL owner processor unit becomes only a fraction of the total load generated by the VOL. The present invention helps in achieving a relatively even distribution of load among processor units. | 2014-04-10 |
20140101395 | SEMICONDUCTOR MEMORY DEVICES INCLUDING A DISCHARGE CIRCUIT - Semiconductor memory devices are provided. Each of the semiconductor memory devices may include first and second memory cells. The first memory cell may be connected to a bit line and a complementary bit line. Moreover, each of the semiconductor memory devices may include a discharge circuit connected to the first memory cell via the bit line and the complementary bit line. The discharge circuit may be configured to discharge the first memory cell during a read or write operation of the second memory cell. | 2014-04-10 |
20140101396 | COUNTER-BASED ENTRY INVALIDATION FOR METADATA PREVIOUS WRITE QUEUE - Embodiments of the invention relate to counter-based entry invalidation for a metadata previous write queue (PWQ). An aspect of the invention includes writing an address into an entry in the metadata PWQ, the address being associated with an instance of metadata received from a pipeline and setting a valid tag associated with the entry in the metadata PWQ to valid. Another aspect of the invention includes initializing a counter to zero and incrementing the counter based on receiving a count signal from the pipeline until the counter is equal to a threshold. Yet another aspect of the invention includes setting the valid tag to invalid based on the counter being equal to the threshold. | 2014-04-10 |
20140101397 | REMOTE REDUNDANT ARRAY OF INEXPENSIVE MEMORY - A method for retrieving stored information from a storage node includes operating a computing device to generate a memory access request comprising a virtual memory address that identifies a first storage node and at least a second storage node based on the virtual memory address. The method further includes operating the computing device to transmit a retrieve request to both the first storage node and the second storage node to retrieve stored information. The first and the second storage nodes are each enabled to store a copy of the stored information, and are included in a plurality of storage nodes that constitute an extended memory. If a first response from the first storage node is received before a second response is received from the second storage node, then the method further includes operating the computing device to receive the stored information from the first storage node. | 2014-04-10 |
20140101398 | REMOTE OFFICE DUPLICATION - Remote office deduplication comprises calculating one or more fingerprints of one or more data blocks, sending the one or more fingerprints to one or more backup servers via a network interface, receiving from the one or more backup servers an indication of which one or more data blocks corresponding to the one or more fingerprints should be sent to the one or more backup servers, and if the indication indicates one or more data blocks to be sent to the one or more backup servers, sending the one or more data blocks to the one or more backup servers via the network interface. | 2014-04-10 |
20140101399 | CONTINUOUS DATA PROTECTION OVER INTERMITTENT CONNECTIONS, SUCH AS CONTINUOUS DATA BACKUP FOR LAPTOPS OR WIRELESS DEVICES - A portable data protection system is described for protecting, transferring or copying data using continuous data protection (CDP) over intermittent or occasional connections between a computer system or mobile device containing the data to be protected, transferred or copied, called a data source, and one or more computer systems that receive the data, called a data target. CDP can be broken down logically into two phases: 1) detecting changes to data on a data source and 2) replicating the changes to a data target. The portable data protection system uses a method that performs the first phase continuously or near continuously on the data source, and the second phase when a connection is available between the data source and the data target. | 2014-04-10 |
20140101400 | STORE PERIPHERAL COMPONENT INTERCONNECT (PCI) FUNCTION CONTROLS INSTRUCTION - An instruction is provided that includes an opcode field to identify a store instruction to store in a designated location current values of operational parameters of an adapter function of an adapter; a first field to identify a location, the contents of which include a function handle identifying a handle of the adapter function for which the store instruction is being performed, and an indication of an address space associated with the adapter function identified by the function handle to which the store instruction applies; and a second field to identify the designated location of where a result of the store instruction is to be stored. Execution of the instruction includes obtaining information from a function information block associated with the adapter function; and copying the information from the function information block into the designated location, based on completion of one or more validity checks with one or more predefined results. | 2014-04-10 |
20140101401 | RESOURCE RECOVERY FOR CHECKPOINT-BASED HIGH-AVAILABILITY IN A VIRTUALIZED ENVIRONMENT - A computer-implemented method provides checkpoint high-availability for an application in a virtualized environment with reduced network demands. An application executes on a primary host machine comprising a first virtual machine. A virtualization module receives a designation from the application of a portion of the memory of the first virtual machine as purgeable memory, wherein the purgeable memory can be reconstructed by the application when the purgeable memory is unavailable. Changes are tracked to a processor state and to a remaining portion that is not purgeable memory, and the changes are periodically forwarded at checkpoints to a secondary host machine. In response to an occurrence of a failure condition on the first virtual machine, the secondary host machine is signaled to continue execution of the application by using the forwarded changes to the remaining portion of the memory and by reconstructing the purgeable memory. | 2014-04-10 |
20140101402 | SYSTEM SUPPORTING MULTIPLE PARTITIONS WITH DIFFERING TRANSLATION FORMATS - A system configuration is provided with multiple partitions that supports different types of address translation structure formats. The configuration may include partitions that use a single level of translation and those that use a nested level of translation. Further, differing types of translation structures may be used. The different partitions are supported by a single hypervisor. | 2014-04-10 |
20140101403 | Application-Managed Translation Cache - Mechanisms are provided, in a data processing system, for accessing a memory location in a physical memory of the data processing system. With these mechanisms, a request is received from an application to access a memory location specified by an effective address in an application address space. A translation is performed, at a user level of execution, of the effective address to a real address table index (RATI) value corresponding to the effective address. At a hardware level of execution, a lookup operation is performed that looks up the RATI value in a real address table data structure maintained by trusted system level hardware of the data processing system, to identify a real address for accessing physical memory. A memory location in physical memory is thereafter accessed based on the identified real address. | 2014-04-10 |
20140101404 | SELECTABLE ADDRESS TRANSLATION MECHANISMS - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration, and the use of a particular translation structure format in translating an address is selectable. | 2014-04-10 |
20140101405 | REDUCING COLD TLB MISSES IN A HETEROGENEOUS COMPUTING SYSTEM - Methods and apparatuses are provided for avoiding cold translation lookaside buffer (TLB) misses in a computer system. A typical system is configured as a heterogeneous computing system having at least one central processing unit (CPU) and one or more graphic processing units (GPUs) that share a common memory address space. Each processing unit (CPU and GPU) has an independent TLB. When offloading a task from a particular CPU to a particular GPU, translation information is sent along with the task assignment. The translation information allows the GPU to load the address translation data into the TLB associated with the one or more GPUs prior to executing the task. Preloading the TLB of the GPUs reduces or avoids cold TLB misses that could otherwise occur without the benefits offered by the present disclosure. | 2014-04-10 |
20140101406 | ADJUNCT COMPONENT TO PROVIDE FULL VIRTUALIZATION USING PARAVIRTUALIZED HYPERVISORS - A system configuration is provided with a paravirtualizing hypervisor that supports different types of guests, including those that use a single level of translation and those that use a nested level of translation. When an address translation fault occurs during a nested level of translation, an indication of the fault is received by an adjunct component. The adjunct component addresses the address translation fault, at least in part, on behalf of the guest. | 2014-04-10 |
20140101407 | SELECTABLE ADDRESS TRANSLATION MECHANISMS - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration, and the use of a particular translation structure format in translating an address is selectable. | 2014-04-10 |
20140101408 | ASYMMETRIC CO-EXISTENT ADDRESS TRANSLATION STRUCTURE FORMATS - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration. This facilitates provision of guest access in virtualized operating systems, and/or the mixing of translation formats to better match the data access patterns being translated. | 2014-04-10 |
20140101409 | 3D MEMORY BASED ADDRESS GENERATOR - Systems and methods are disclosed for reducing memory usage and increasing the throughput in variable-size Fast Fourier Transform (FFT) architectures. In particular, 3D symmetric virtual memory is disclosed to exploit the structure inherent in variable-size FFT computations. Data samples may be written to and read from the 3D symmetric virtual memory in a specific sequence of coordinates that exploits the structure inherent in variable-size FFT computations. Memory locations in the 3D symmetric virtual memory may be mapped to memory addresses in a 1D buffer using an address generation circuit. | 2014-04-10 |
20140101410 | METHOD AND SYSTEM FOR MANAGING HARDWARE RESOURCES TO IMPLEMENT SYSTEM FUNCTIONS USING AN ADAPTIVE COMPUTING ARCHITECTURE - An adaptable integrated circuit is disclosed having a plurality of heterogeneous computational elements coupled to an interconnection network. The interconnection network changes interconnections between the plurality of heterogeneous computational elements in response to configuration information. A first group of computational elements is allocated to form a first version of a functional unit to perform a first function by changing interconnections in the interconnection network between the first group of heterogeneous computational elements. A second group of computational elements is allocated to form a second version of a functional unit to perform the first function by changing interconnections in the interconnection network between the second group of heterogeneous computational elements. One or more of the first or second group of heterogeneous computational elements are reallocated to perform a second function by changing the interconnections between the one or more of the first or second group of heterogeneous computational elements. | 2014-04-10 |