36th week of 2012 patent application highlights part 47 |
Patent application number | Title | Published |
20120226793 | SYSTEM AND METHOD FOR DESCRIBING NETWORK COMPONENTS AND THEIR ASSOCIATIONS - A system and method for describing network components and their associations is provided. The network management layer receives descriptions of network components and places at least a portion of the received description into one of a plurality of sections of an electronic list of network components. Each of the plurality of sections has a standard format. | 2012-09-06 |
20120226794 | SCALABLE QUEUES ON A SCALABLE STRUCTURED STORAGE SYSTEM - A cloud computing platform contains a structured storage subsystem that provides scalable queues. The cloud computing platform monitors message throughput for the scalable queues and automatically increases or decreases subqueues that provide the operational functionality for each scalable queue. A visibility start time and cloud computing platform time are maintained for each message to provide an approximate first-in-first-out order for messages within each subqueue. A message in a subqueue may be available for processing when the current cloud computing platform time is greater than the visibility start time of the message. | 2012-09-06 |
20120226795 | Agile Network Protocol For Secure Communications Using Secure Domain Names - A secure domain name service for a computer network is disclosed that includes a portal connected to a computer network, such as the Internet, and a domain name database connected to the computer network through the portal. The portal authenticates a query for a secure computer network address, and the domain name database stores secure computer network addresses for the computer network. Each secure computer network address is based on a non-standard top-level domain name, such as .scom, .sorg, .snet, .sedu, .smil and .sint. | 2012-09-06 |
20120226796 | SYSTEMS AND METHODS FOR GENERATING OPTIMIZED RESOURCE CONSUMPTION PERIODS FOR MULTIPLE USERS ON COMBINED BASIS - Embodiments relate to systems and methods for generating optimized resource consumption periods for multiple users on a combined basis. A set of aggregate usage history data can record consumption of processor, software, or other resources subscribed to by a set of users, in one cloud or across multiple clouds. An entitlement engine can analyze the usage history data to identify a subscription margin for the subscribed resources, reflecting collective under/over-consumption of cloud resources by the users against subscription limits. The entitlement engine can track the short-term subscription margin for one or multiple resources over hours of a day, or over other intervals. The entitlement engine can thereby generate a set of variable or dynamic consumption periods over which to track the resource consumption, based on trends or conditions demonstrated in that consumption pattern by the set of users on a combined basis. Combined consumption can be metered on smaller, larger, and/or more rapidly changing intervals when the collective consumption rate is rapidly changing. | 2012-09-06 |
20120226797 | Active Load Distribution for Control Plane Traffic Using a Messaging and Presence Protocol - Techniques are provided herein for a device in a network to receive information configured to indicate a control plane traffic load level for two or more server devices that are configured to manage traffic for messaging and presence clients communicating via a messaging and presence protocol. The control plane traffic is associated with the messaging and presence protocol. A determination is made as to when the control plane traffic load level has become unbalanced among the two or more server devices. In response to determining that the control plane traffic load level has become unbalanced, a transfer message is sent to one or more clients comprising information configured to initiate migration of one or more clients from a server device that is relatively overloaded to a server device that is relatively underloaded, in order to balance the control plane traffic load level among the two or more server devices. | 2012-09-06 |
20120226798 | FAST NETWORK DISCOVERY USING SNMP MULTI-CAST - A network management device sends an SNMP multi-cast GET on a network to discover all the network devices on the network or subnet. The network management device builds a Management Information Base (MIB) based on the responses received from the SNMP multi-cast GET. The MIB information is then sent to a Network Management System (NMS). | 2012-09-06 |
20120226799 | Capabilities Based Routing of Virtual Data Center Service Request - Systems and methods are provided for receiving at a provider edge routing device capabilities data representative of capabilities of computing devices disposed in a data center, the capabilities data having been published by an associated local data center edge device, and advertising, by the provider edge routing device, the capabilities data to other provider edge routing devices in communication with one another in a network of provider edge routing devices. The provider edge routing device also receives respective capabilities data from each of the other provider edge routing devices, wherein each of the other provider edge routing devices is associated with a respective local data center via a corresponding data center edge device, and stores all the capabilities data in a directory of capabilities. Thereafter, a request for computing services is received at the provider edge network and the methodology provides for selecting, based on the directory of capabilities, one of the data centers to fulfill the request for computing services to obtain a selected data center, and for routing the request for computing services to the selected data center. | 2012-09-06 |
20120226800 | REGULATING NETWORK BANDWIDTH IN A VIRTUALIZED ENVIRONMENT - In a method for regulating network bandwidth in a virtualized computer environment, a computer having a hypervisor program receives a request from a first virtual client to transmit data. In response, the computer transfers the data from a memory of the first virtual client to a memory of a virtual server. The computer receives an error notification from a shared virtual network adapter of the virtual server, indicative of insufficient network bandwidth available to transmit the data. In response, the computer notifies the first virtual client that insufficient network bandwidth is available to transmit the data. | 2012-09-06 |
20120226801 | Network Appliance with Integrated Local Area Network and Storage Area Network Extension Services - Techniques and a network appliance apparatus are provided herein to extend local area networks (LANs) and storage area networks (SANs) beyond a data center while converging the associated local area network and storage area network host layers. A service flow is received at a device in a network. It is determined if the service flow is associated with storage area network or with local area network traffic. In response to determining that the service flow is storage area network traffic, storage area network extension services are performed with respect to the service flow in order to extend the storage area network on behalf of a remote location. In response to determining that the service flow is local area network traffic, local area network extension services are performed with respect to the service flow in order to extend the local area network on behalf of the remote location. | 2012-09-06 |
20120226802 | Controlling Network Device Behavior - A sender device is able to send packets over a network destined to a receiver device, and the sender device receives response information that is responsive to the packets. A behavior of the sender device with respect to data transmission on plural subflows of a connection is controlled based on the response information. | 2012-09-06 |
20120226803 | METHOD AND SYSTEM FOR PROVIDING STATUS OF A MACHINE - A method for providing machine status information via an enterprise social network is disclosed. The method embodiment includes receiving by a server a status update message from a machine where the status update message includes an identifier of the machine and an indication of a status update of the machine. The server is configured to determine information identifying a first user from a database system, where the first user follows a status of the machine, and to post a notification message in a feed on a web page associated with the first user. In an embodiment, the notification message identifies the machine and includes the status update of the machine. By posting the status update on the first user's web page, the first user is notified of the status of the machine. | 2012-09-06 |
20120226804 | SYSTEMS AND METHODS FOR SCALABLE N-CORE STATS AGGREGATION - The present invention is directed towards systems and methods for aggregating and providing statistics from cores of a multi-core system intermediary between one or more clients and servers. The system may maintain in shared memory a global device number for each core of the multi-core system. The system may provide a thread for each core of the multi-core system to gather data from the corresponding core. A first thread may generate aggregated statistics from a corresponding core by parsing the gathered data from the corresponding core. The first thread may transfer the generated statistics to a statistics log according to a schedule. The system may adaptively reschedule the transfer by monitoring the operation of each computing thread. Responsive to a request from a client, an agent of the client may obtain statistics from the statistics log. | 2012-09-06 |
20120226805 | DYNAMIC BANDWIDTH MANAGER - A dynamic bandwidth manager for determining the bandwidth available to an IP connected client device, the IP connected client device requesting access to multimedia resources from a service provider, the dynamic bandwidth manager comprising: a receiving component for receiving an IP address of an IP connected client requesting access to a resource; a requesting component for locating a nearest managed device to the IP connected client and requesting a current network management data set pertaining to the IP connected client from the located managed device; a calculation component for retrieving a previously stored set of network management data pertaining to the IP connected client and for analysing the network management data sets, in dependence on the current network management data set and the previously stored network management data sets, to calculate the available bandwidth capacity of the IP connected client. | 2012-09-06 |
20120226806 | DYNAMICALLY ENABLING FEATURES OF AN APPLICATION BASED ON USER STATUS - In one embodiment, a first application detects a user's activity. Based on the user's activity detected by the first application, a type of user status is determined from among a plurality of different types of possible user status. One or more features are determined of a feature set of a second application that correspond to the type of user status detected by the first application. The corresponding one or more features of the feature set of the second application are limited to prevent the one or more features from interrupting the user's activity. One or more other features of the feature set of the second application that do not correspond to the type of user status detected by the first application are permitted. | 2012-09-06 |
20120226807 | SYSTEM FOR AND METHOD OF NETWORK ASSET IDENTIFICATION - A method of identifying a new end-user device connected within a network includes monitoring a plurality of remote outlets and detecting a new end-user device upon connection thereof to the network at a first of the remote outlets and determining information about the new end-user device by electronically communicating with some but not all of the remote outlets. A system for performing such a method with a network having a plurality of end-user devices connected thereto is also disclosed. | 2012-09-06 |
20120226808 | SYSTEMS AND METHODS FOR METERING CLOUD RESOURCE CONSUMPTION USING MULTIPLE HIERARCHICAL SUBSCRIPTION PERIODS - Embodiments relate to systems and methods for metering cloud resource consumption using multiple hierarchical subscription periods. A set of aggregate usage history data can record consumption of processor, software, or other resources subscribed to by a set of users, in one cloud or across multiple clouds. An entitlement engine can analyze the usage history data to identify a subscription margin for the subscribed resources, reflecting collective under-consumption of resources by the set of users on a collective basis, over different and/or dynamically updated subscription periods. In aspects, the entitlement engine or other logic can generate multiple hierarchical time periods or layers over which resource consumption can be tracked. For instance, processor usage can be tracked over blocks of two hours or other intervals, but can also be tracked over 24 hour intervals for which additional subscription costs, terms, or factors may apply. In aspects, the consumption of not just one but multiple resources can be tracked over the hierarchical time periods, with cost adjustments being keyed to joint consumption levels of those resources, and/or over different time periods or layers. | 2012-09-06 |
20120226809 | SYSTEM AND METHOD FOR LOADING WEB PAGE USING MULTIPLE PATHS IN MULTIPLE INTERFACE CIRCUMSTANCES - A system and method for loading a web page using multiple paths in multiple interface circumstances are disclosed. The web page loading system providing multiple interfaces may include an allocator to set interfaces for loading resources, for each resource, constituting a web page associated with a Hypertext Transfer Protocol (HTTP) request when the HTTP request is received from a browser. In this instance, the browser may render the web page by respectively loading corresponding resource data through the interfaces set for each resource. | 2012-09-06 |
20120226810 | TECHNIQUES FOR VIRTUALIZATION OF APPLICATION DELIVERY CONTROLLERS - A virtualized application delivery controller (ADC) device operable in a communication network comprises a hardware infrastructure including at least a memory, a plurality of core processors, and a network interface; a plurality of instances of virtual ADCs (vADCs), the plurality of vADCs are executed over the hardware infrastructure, each of the plurality of vADCs utilizes a portion of hardware resources of the hardware infrastructure, the portion of hardware resources are determined by at least one ADC capacity unit allocated for each of the plurality of the vADCs; a management module for at least creating the plurality of instances of the vADCs; and a traffic distributor for distributing incoming traffic to one of the plurality of vADCs and scheduling execution of the plurality of vADCs on the plurality of core processors, wherein each of the plurality of vADCs is independently executed on at least one of the plurality of core processors. | 2012-09-06 |
20120226811 | GRID-ENABLED, SERVICE-ORIENTED ARCHITECTURE FOR ENABLING HIGH-SPEED COMPUTING APPLICATIONS - According to one aspect of the present disclosure, a method and technique for data processing in a distributed computing system having a service-oriented architecture is disclosed. The method includes: receiving, by a workload input interface, workloads associated with an application from one or more clients for execution on the distributed computing system; identifying, by a resource management interface, available service hosts or service instances for computing the workloads received from the one or more clients; responsive to receiving an allocation request for the one or more hosts or service instances by the workload input interface, providing, by the resource management interface, address information of one or more workload output interfaces; and sending, by the one or more workload output interfaces, workloads received from the workload input interface to the one or more service instances. | 2012-09-06 |
20120226812 | Method and system for subscription service in IP multimedia subsystem network - A method for subscription service in an IP multimedia subsystem is disclosed. A Session Border Controller (SBC) establishes IP channels between the SBC and an IMS terminal as well as between the SBC and a Resource List Server (RLS) after receiving a status subscribe request message from the IMS terminal; and the RLS sends the status information and an acknowledgment message to the IMS terminal through the IP channels after finding subscribed status information for the IMS terminal. A system for a subscription service in an IP multimedia subsystem network is further disclosed. The IP channels established in the present disclosure to transmit the subscription information on the RLS not only can transmit a great amount of information, but also have higher efficiency of information transmission, as long as the IMS terminal has a capability of processing IP data packets. | 2012-09-06 |
20120226813 | COMPUTER NETWORK, COMPUTER SYSTEM, COMPUTER-IMPLEMENTED METHOD, AND COMPUTER PROGRAM PRODUCT FOR MANAGING SESSION TOKENS - A computer network for managing session tokens may include a client operable to run a client application; a web server hosting at least one web service; and a session token manager. The session token manager may be operable to receive a check out message along with user credentials from the client application, wherein the user credentials identify a user operating the client application; process the check out message to determine a session token from a pool of session tokens managed for the user; and send a token identifier (token ID) to the client application pointing to the determined session token, wherein the session token can be used by the client application to point to and/or to re-use a previously established session with the web service without re-establishing a new session. | 2012-09-06 |
20120226814 | TOPOLOGY HIDING OF A NETWORK FOR AN ADMINISTRATIVE INTERFACE BETWEEN NETWORKS - An administrative interface is provided between a first network and a second network, where the administrative interface is separate from one or more communications session signaling interfaces between the first network and second network. At least one of authorization, authentication, and accounting messages is communicated over the administrative interface. A module associated with the administrative interface is provided to perform topology hiding of the first network such that topology information of the first network is hidden from the second network. | 2012-09-06 |
20120226815 | SECURE MANAGEMENT OF SIP USER CREDENTIALS - A device may obtain, from a remote device on a network, information regarding loads and Session Initiation Protocol (SIP) devices on which the loads are installed. In addition, the device may access a database storing load compatibility information, identify problematic loads based on the obtained information and the load compatibility information, determine fixes for one or more of the problematic loads, and apply the fixes to the one or more of the problematic loads over the network. | 2012-09-06 |
20120226816 | DATACASTING SYSTEM WITH HIERARCHICAL DELIVERY QUALITY OF SERVICE MANAGEMENT CAPABILITY - Datacasting systems may include one or more compound carousels each managing one or more elementary carousels, and managed by a bandwidth manager. Subsets of compound carousels may be identified, for example, according to priority levels. Bandwidth allocations may be determined for the compound carousels. For example, the bandwidth manager may utilize multiple bandwidth allocation cycles to determine the bandwidth allocations. The multiple bandwidth allocation cycles may form a sequence. Each bandwidth allocation cycle may at least partially allocate an available datacasting bandwidth resource to at least one of the identified subsets of the compound carousels. The allocations may be based at least in part on desired bandwidths determined by the compound carousels and/or one or more bandwidth guidelines of datacast sessions associated with the compound carousels. | 2012-09-06 |
20120226817 | Methods for Transferring Media Sessions Between Local Networks Using an External-Network-Connected UE and Related Devices - An external network-connected UE is provided and configured to transfer a media session stream playing on a first local-network UE so that it starts playing at the same position on a second local-network UE. The external network-connected UE is located outside each of the local networks and is configured to communicate with the virtual control nodes of the local networks, and via a media aggregating node, all of which are configured to transfer the media session stream. | 2012-09-06 |
20120226818 | Publishable Metadata for Content Management and Component Testing - Techniques related to publishable metadata for content management are described that enable selective invocation of new components in a web content management system. Metadata that is published in connection with corresponding content can be configured to include tags or other identifiers that cause a content management system to selectively direct content processing between existing and new components. Switches implemented by the content management system can operate to examine the metadata to determine which processing components are selected for particular content and direct the content to corresponding components. Switches can also be placed in websites to direct page requests from clients to existing or new rendering controls based upon publishable metadata that is associated with a requested page. Thus, the metadata and switches can be employed to perform testing of and load balancing between new and existing components in a live environment. | 2012-09-06 |
20120226819 | LOCAL ADVERTISEMENT INSERTION THROUGH WEB REQUEST REDIRECTION - According to one aspect, the subject matter described herein includes a method for communicating advertisement information. The method includes steps occurring at a packet inspection node. The method also includes monitoring data packets associated with a user. The method further includes detecting a local advertisement request within the data packets. The method further includes redirecting the request to a local advertisement server. | 2012-09-06 |
20120226820 | SYSTEM AND METHOD OF TRAFFIC INSPECTION AND STATEFUL CONNECTION FORWARDING AMONG GEOGRAPHICALLY DISPERSED NETWORK APPLIANCES ORGANIZED AS CLUSTERS - A peering relationship among two or more network appliances is established through an exchange of control messages among the network appliances. The peering relationship defines a cluster of peered network appliances, and at each network appliance of the cluster traffic flow state information for all the network appliances of the cluster is maintained. Network traffic associated with traffic flows of the network appliances of the cluster is managed according to the state information for the traffic flows. This managing of the network traffic may include forwarding among the network appliances of the cluster (i.e., to those of the appliances handling the respective flows) at least some of the network traffic associated with one or more of the traffic flows according to the state information for the one or more traffic flows. The traffic flows may be TCP connections or UDP flows. | 2012-09-06 |
20120226821 | APPARATUS AND METHOD FOR LAYER-2 AND LAYER-3 VPN DISCOVERY - An apparatus and a method for layer-2 and layer-3 VPN discovery are disclosed. The apparatus is incorporated in a network, and the network includes a first carrier network. The first carrier network includes at least two layer-1 provider edge devices. Layer-1 VPN information is created within the first carrier network. BGP next hop information passes within the first carrier network. The BGP next hop information is for a selected one of the following: a layer-2 VPN-based provider edge device, a layer-3 VPN-based provider edge device, and a layer-2 and layer-3 VPN-based provider edge device. The network also includes a second carrier network within which the BGP next hop information is used for VPN discovery. | 2012-09-06 |
20120226822 | METHOD AND APPARATUS FOR ADDRESSING IN A RESOURCE-CONSTRAINED NETWORK - An electronic device may receive a protocol data unit (PDU) comprising a plurality of addressing bits. Data-link-layer processing of the PDU may be based on each of the addressing bits. Network-layer processing of the PDU may be based on a first subset of the plurality of addressing bits. Transport-layer processing of the PDU may be based on a second subset of the plurality of addressing bits. The data-link-layer processing may comprise determining whether the PDU is unicast-addressed or non-unicast-addressed. For a unicast-addressed PDU, the data-link-layer processing may comprise determining whether the PDU is destined for the electronic device based on a comparison of a Target ID field of the PDU and a device ID of the electronic device. For a non-unicast-addressed PDU, the Target ID field may not be present, and whether the PDU is destined for the electronic device may be determined based on other criteria. | 2012-09-06 |
20120226823 | DOCUMENT DISTRIBUTION SYSTEM AND METHOD - A computer implemented method for document distribution, the method comprising steps the computer is programmed to perform, the steps comprising: on a networked computer, receiving a document from a sender and authorization data defining at least one authorized recipient for the document, the sender and the authorized recipient being remote from the networked computer, for at least one of the authorized recipients, selecting at least one respective optimal format among a plurality of file formats, converting the received document into the selected format, and distributing the converted document to the authorized recipient. | 2012-09-06 |
20120226824 | DISTRIBUTED NETWORK PLANNING SYSTEMS AND METHODS - The present disclosure provides distributed domain network planning systems and methods. The network planning systems and methods include a distributed domain network planning system that adapts planning concepts to networks operated by modern distributed control planes, such as ASON/ASTN, GMPLS, etc. The network planning systems and methods operate on a multi-domain network utilizing a control plane and local planning systems associated with each individual domain in the multi-domain network. The network planning systems and methods also operate on a single domain network utilizing a control plane and local planning systems associated with the single domain network. The network planning systems and methods build on a distributed control plane philosophy that the network is the database of record. There is significant operational value to distributing the planning function of a large network using the systems and methods disclosed herein. | 2012-09-06 |
20120226825 | NETWORK ACCESS CONTROL FOR MANY-CORE SYSTEMS - In a processor based system comprising a plurality of logical machines, a logical machine of the system is selected to serve as a host; the host communicates with a policy decision point (PDP) of a network to provision a data channel interconnecting the processor based system and the network and to provision a logical data channel interconnecting each logical machine of the system to the network. | 2012-09-06 |
20120226826 | Integrated Circuit Arrangement for Buffering Service Requests - The present invention discloses an integrated circuit arrangement for buffering service requests. | 2012-09-06 |
20120226827 | Mechanism for Performing SDIO Aggregation and Conveying SDIO Device Status to the Host Software - The subject matter disclosed herein relates to systems and/or devices capable of transmitting data packets over a hardware command interface. In one particular example, multiple data packets may be transmitted between a host device and a peripheral device in a single hardware interface command. | 2012-09-06 |
20120226828 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - An information processing apparatus provided with a unit that acquires identification information indicating a function of an external device connected to a connector, a holding unit that holds a device driver to control the external device, and a control unit to control an assignment of the device driver to the external device in accordance with control information. The holding unit holds a generic device driver to perform a process not dependent on the function of the external device. If the control information indicates a first value, the control unit assigns the generic device driver to the external device. If the control information indicates a second value, the control unit determines whether the holding unit holds a device driver compatible with the indicated function. If it is determined that the holding unit does not hold the device driver, the control unit assigns the generic device driver to the external device. | 2012-09-06 |
20120226829 | ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THE SAME - According to one embodiment, an electronic apparatus comprises at least one equipment connection port and a control unit configured to switch enabling and disabling the equipment connection port before and after an OS is started up. | 2012-09-06 |
20120226830 | MEMORY SYSTEM HAVING HIGH DATA TRANSFER EFFICIENCY AND HOST CONTROLLER - According to one embodiment, the host controller includes a register set used to issue commands and a direct memory access (DMA) unit, and accesses a system memory and a device. First, second, third and fourth descriptors are stored in the system memory. The first descriptor includes a set of a plurality of pointers indicating a plurality of second descriptors. Each of the second descriptors comprises the third descriptor and fourth descriptor. The third descriptor includes a command number, etc. The fourth descriptor includes information indicating addresses and sizes of a plurality of data arranged in the system memory. The DMA unit sets, in the register set, the contents of the third descriptor forming the second descriptor, from the head of the first descriptor as a start point, and transfers data between the system memory and the host controller in accordance with the contents of the fourth descriptor. | 2012-09-06 |
20120226831 | MEMORY SYSTEM AND INTEGRATED MANAGEMENT METHOD FOR PLURALITY OF DMA CHANNELS - Provided are a memory system and an integrated management method for a plurality of direct memory access (DMA) channels. The memory system includes a memory controller exchanging data with a memory and having a plurality of channels physically separated from each other, and a DMA controller having a plurality of DMA channels physically separated from each other and in contact with the plurality of channels of the memory controller, and exchanging data with the memory via the plurality of DMA channels and the memory controller. | 2012-09-06 |
20120226832 | DATA TRANSFER DEVICE, FT SERVER AND DATA TRANSFER METHOD - A data transfer device includes a transfer method setter that sets the transfer method to either a first transfer method or second transfer method that differ from each other, and a transfer controller that causes data to be transferred. The transfer controller causes data to be transferred according to the transfer method that was set by the transfer method setter. The first transfer method, for example, is a transfer method that has a smaller expected writing disabled time than the second transfer method when the probability that writing will occur during the transfer process is high. The second transfer method, for example, is a transfer method that has a smaller expected writing disabled time than the first transfer method when the probability that writing will occur during the transfer process is low. | 2012-09-06 |
20120226833 | INTEGRATED CIRCUIT AND METHOD FOR REDUCING VIOLATIONS OF A TIMING CONSTRAINT - An integrated circuit and a method for reducing violations of a timing constraint. The integrated circuit comprises a shared resource for providing data and a buffer for storing data. A buffer level monitor is coupled to the buffer, for monitoring a monitored level of data in the buffer. A retrieving circuit is coupled to the buffer, for retrieving the data from the buffer, according to a timing constraint. A filling circuit is coupled to the buffer for writing the data to the buffer and coupled to the shared resource for receiving the data from the shared resource when the filling circuit has access to the shared resource. An access-requesting circuit is coupled to the shared resource for receiving the data from the shared resource when the access-requesting circuit has access to the shared resource. An arbiter is coupled to the shared resource, the filling circuit, and the access-requesting circuit, for receiving access requests from the filling circuit and from the access-requesting circuit, and for granting to a selected one thereof access to the shared resource. A controller is coupled to the buffer level monitor and to the access-requesting circuit, for causing the access-requesting circuit to reduce a rate of access requests sent to the arbiter when a condition indicating an anticipated violation of the timing constraint is fulfilled, the condition at least involving the monitored level of data in the buffer. | 2012-09-06 |
20120226834 | METHOD FOR ENABLING SEVERAL VIRTUAL PROCESSING UNITS TO DIRECTLY AND CONCURRENTLY ACCESS A PERIPHERAL UNIT - The present disclosure relates to a method for enabling a virtual processing unit to access a peripheral unit, the virtual processing unit being implemented by a physical processing unit connected to the peripheral unit, the method comprising a step of transmitting to the peripheral unit a request sent by the virtual processing unit to access a service provided by the peripheral unit, the access request comprising at least one parameter and an identifier of the virtual unit, the method comprising steps, executed by the peripheral unit after receiving an access request, of allocating a set of registers to the virtual unit identifier received, storing the parameter received in the register set allocated, and when the peripheral unit is available for processing a request, selecting one of the register sets, and triggering a process in the peripheral unit from the parameters stored in the selected register set. | 2012-09-06 |
20120226835 | PCI Express to PCI Express based low latency interconnect scheme for clustering systems - PCI Express is a bus or I/O interconnect standard used inside computers and embedded systems to enable faster data transfers to and from peripheral devices. The standard is still evolving but has achieved a degree of stability such that other applications can be implemented using PCIe as a basis. A PCIe-based interconnect scheme is proposed to enable switching and interconnection between external systems, such that scalability can be applied to enable data transport between connected systems to form a cluster of systems. These connected systems can be any computing or embedded systems. The scalability of the interconnect allows the cluster to grow the bandwidth between the systems as it becomes necessary, without changing to a different connection architecture. | 2012-09-06 |
20120226836 | SEMICONDUCTOR INTEGRATED CIRCUIT - A semiconductor integrated circuit capable of reducing unnecessary current consumption includes a plurality of bus drive circuits for receiving data input, a common bus coupled to the bus drive circuits, and a bus holder coupled to the common bus. One of the bus drive circuits is selected as the selected bus drive circuit. When a logical value corresponding to the data input to be output is the same as a logical value that has been held by the bus holder and output to the common bus, the selected bus drive circuit stops outputting the logical value corresponding to the data input to the common bus. With this configuration, it is possible to eliminate the unnecessary output of the selected bus drive circuit, and to reduce unnecessary current consumption compared to the conventional semiconductor integrated circuit. | 2012-09-06 |
20120226837 | Method and System of debugging Multicore Bus Transaction Problems - A bus monitoring and debugging system operating independently without impacting the normal operation of the CPU and without adding any overhead to the application being monitored. Users are alerted to timing problems as they occur, and bus statistics that are relevant to providing insight to system operation are automatically captured. Logging of relevant events may be enabled or disabled when a sliding time window expires, or alternatively by external trigger events. | 2012-09-06 |
20120226838 | Method and System for Handling Discarded and Merged Events When Monitoring a System Bus - A bus monitoring and debugging system operating independently without impacting the normal operation of the CPU and without adding any overhead to the application being monitored. The bus is monitored for discarded speculative read and for merged write transactions in order to determine the true bus throughputs. Bus statistics that are relevant to providing insight to system operation are automatically captured. Logging of relevant events may be enabled or disabled when a sliding time window expires, or alternatively by external trigger events. | 2012-09-06 |
20120226839 | Method and System for Monitoring and Debugging Access to a Bus Slave Using One or More Throughput Counters - A bus monitoring and debugging system operating independently without impacting the normal operation of the CPU and without adding any overhead to the application being monitored. Bus transactions to a selected slave are monitored to determine possible conflicts when multiple masters may be addressing the slave. Users are alerted to timing problems as they occur, and bus statistics that are relevant to providing insight to system operation are automatically captured. Logging of relevant events may be enabled or disabled when a sliding time window expires, by a selected address range or alternatively by external trigger events. | 2012-09-06 |
20120226840 | MULTIPLE COMMUNICATION CHANNELS ON MMC OR SD CMD LINE - The claimed subject matter can provide an architecture that interfaces a single slave device such as a UICC smartcard with multiple host controllers. For example, a secondary host can be interfaced between a primary host (e.g., a controller in a cellular phone, a PDA, an MP3 player . . . ) to manage all transactions with the slave device. The secondary host can operate transparently to the primary host and thus does not require any modifications to the primary host. This can be accomplished, e.g., by employing the CMD channel (which is relatively sparsely used by the primary host) to communicate both commands and data with the slave. | 2012-09-06 |
20120226841 | READ STACKING FOR DATA PROCESSOR INTERFACE - A gasket of a data processing device controls the number of released storage locations of a buffer where read and write access requests are stored so that more read access requests can be stored without a corresponding increase in the amount of space at the buffer to store write access requests. An interface of the gasket accepts new access requests from one or more requesting modules only when a number of released storage locations at a buffer associated with the interface (referred to as an outbound buffer) is above a threshold number. As long as the number of stored access requests at the outbound buffer is less than a threshold amount, a buffer location can be immediately released. In addition, the gasket is configured to issue read access requests from the outbound buffer without regard to whether the inbound buffer has space available. | 2012-09-06 |
20120226842 | ENHANCED PRIORITISING AND UNIFYING INTERRUPT CONTROLLER - An enhanced interrupt controller is provided which is able to receive both hardware-generated and software-generated request signals. Data associated with each received interrupt or request signal is stored in a storage unit within the enhanced interrupt controller in an order which depends on the priority level of the data and, for data of the same level of priority, on the chronological order of receipt. The enhanced interrupt controller instructs the processor, with which it is in communication, to read the stored data from the controller in the stored order ensuring that data of higher priority is read before data of lower priority. A method of routing hardware-generated and software-generated signals from an enhanced interrupt controller to a processor is also disclosed. | 2012-09-06 |
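The ordering rule above — priority first, then chronological arrival within a priority level — can be modeled with a heap keyed on a (priority, sequence) pair. This is only an illustrative software model of the patented hardware controller; the `post`/`read` names and the smaller-number-means-higher-priority convention are assumptions:

```python
import heapq
import itertools

class PriorityInterruptQueue:
    """Model of the controller's ordering: higher-priority entries are
    read first; entries of equal priority are read in arrival order."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving arrival order

    def post(self, priority, data):
        # Smaller priority number = higher priority (an assumption).
        heapq.heappush(self._heap, (priority, next(self._seq), data))

    def read(self):
        _, _, data = heapq.heappop(self._heap)
        return data
```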
20120226843 | Method and Computer System for Processing Data in a Memory - The invention discloses a method for processing data in a memory for a computer system. The method comprises receiving a first interrupt for triggering a first job, backing up data corresponding to a second interrupt in the memory when a priority degree of the first interrupt is higher than a priority degree of the second interrupt corresponding to a second job currently being executed by the computer system, executing the first job corresponding to the first interrupt, restoring the data corresponding to the second interrupt to the memory after the first job corresponding to the first interrupt is finished, and continuing to execute the second job corresponding to the second interrupt. | 2012-09-06 |
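The back-up/execute/restore sequence above can be sketched in a few lines, under the assumption that the interrupted job's data lives in a dict standing in for the memory; `run_preempting_job` is a hypothetical name:

```python
def run_preempting_job(memory, high_priority_job):
    """Back up the running (lower-priority) job's data, execute the
    higher-priority job, then restore the data so the interrupted job
    can continue. The dict-as-memory model is an assumption."""
    backup = dict(memory)          # back up the second job's data
    high_priority_job(memory)      # execute the first (higher-priority) job
    memory.clear()
    memory.update(backup)          # restore; the second job then resumes
```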
20120226844 | DUAL PROCESSOR SYSTEM AND METHOD FOR USING THE SAME - A dual processor system comprises a first processor, a second processor, and a dual-ported random access memory (DPRAM). When the first processor stores data to be processed by the second processor to the DPRAM and writes interrupt data to the DPRAM, the DPRAM generates a first information status. The second processor reads the interrupt data once when detecting the first information status, processes the data to be processed when successfully reading the interrupt data once, and reads the interrupt data twice when it has finished processing the data. The DPRAM generates a second information status when the second processor successfully reads the interrupt data twice, and the first processor identifies that the second processor has processed the data when detecting the second information status. | 2012-09-06 |
20120226845 | Hardware interrupt processing circuit - A hardware interrupt processing circuit converts selected hardware interrupts to an interrupt vector having bits corresponding to the selected hardware interrupts. The hardware interrupt processing circuit includes circuit assemblies that correspond to the selected hardware interrupts. Each circuit assembly includes a detector circuit and a persistent capture circuit. The detector circuit is to output a pulse responsive to the corresponding selected hardware interrupt being asserted. The persistent capture circuit is triggered by the detector circuit to output a corresponding bit of the interrupt vector until a ready signal has been asserted. | 2012-09-06 |
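In software terms, the circuit's job is to map the currently asserted, selected interrupts onto bits of a vector. The sketch below is only a functional model of that mapping — the function name and the name-to-bit table are assumptions, and the real invention is a hardware circuit, not code:

```python
def interrupts_to_vector(asserted, bit_positions):
    """Functional model of the conversion: set one vector bit per
    asserted hardware interrupt that is among the selected ones.

    asserted:      iterable of interrupt names currently asserted
    bit_positions: dict mapping selected interrupt names to bit index
    """
    vector = 0
    for name in asserted:
        if name in bit_positions:          # only selected interrupts count
            vector |= 1 << bit_positions[name]
    return vector
```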
20120226846 | HDMI DEVICE AND ASSOCIATED POWER MANAGEMENT METHOD - An HDMI device and an associated power management method are provided for use in the case with an HDMI Ethernet Channel implemented. The HDMI device can acquire an external power source by connecting to another HDMI device through an HDMI interface. Thus, when the internal power source of the HDMI device is disabled, the external power source can be used as a backup power source for the internal Ethernet circuit of the HDMI device. | 2012-09-06 |
20120226847 | MULTI-PROCESSOR DEVICE - The present invention intends to provide a high-performance multi-processor device in which independent buses and external bus interfaces are provided for each group of processors of different architectures, if a single chip includes a plurality of multi-processor groups. A multi-processor device of the present invention comprises a plurality of processors including first and second groups of processors of different architectures such as CPUs, SIMD type super-parallel processors, and DSPs, a first bus which is a CPU bus to which the first processor group is coupled, a second bus which is an internal peripheral bus to which the second processor group is coupled, independent of the first bus, a first external bus interface to which the first bus is coupled, and a second external bus interface to which the second bus is coupled, over a single semiconductor chip. | 2012-09-06 |
20120226848 | INCREASING INPUT OUTPUT HUBS IN CONSTRAINED LINK BASED MULTI-PROCESSOR SYSTEMS - Methods and apparatus relating to increasing Input Output Hubs in constrained link based multi-processor systems are described. In one embodiment, a first input output hub (IOH) and a second IOH are coupled to a link interconnect, and a plurality of processors, coupled to the first and second IOHs, include pre-allocated resources for a single IOH. Other embodiments are also disclosed and claimed. | 2012-09-06 |
20120226849 | VIRTUAL COMPUTER SYSTEM, AREA MANAGEMENT METHOD, AND PROGRAM - A virtual computer system having a plurality of virtual computers, the virtual computer system including: an area assignment unit operable to, when a virtual computer attempts to perform writing to a basic area which is assigned to and shared by the plurality of virtual computers, change an assignment to the virtual computer from the basic area to a copy area to which the basic area is copied and the writing is performed; and an area freeing unit operable to, when a content of the basic area matches a content of at least one copy area, change area assignment to one or more virtual computers, to which have been assigned one or more other areas than one area among the areas whose contents match each other, to the one area, and free the one or more other areas. | 2012-09-06 |
20120226850 | VIRTUAL MEMORY SYSTEM, VIRTUAL MEMORY CONTROLLING METHOD, AND PROGRAM - Disclosed herein is a virtual memory system including a nonvolatile memory allowing random access, having an upper limit to a number of times of rewriting, and including a physical address space accessed via a virtual address; and a virtual memory control section configured to manage the physical address space of the nonvolatile memory in page units, map the physical address space and a virtual address space, and convert an accessed virtual address into a physical address; wherein the virtual memory control section is configured to expand a physical memory capacity allocated to a virtual page in which rewriting occurs. | 2012-09-06 |
20120226851 | ISOLATION DEVICES FOR HIGH PERFORMANCE SOLID STATE DRIVES - Systems and methods are provided for coupling multiple flash devices to a shared bus utilizing isolation switches within a SSD device. The SSD device is operable at a speed of about 400 MT/s or higher with high signal integrity. The SSD device includes a controller, a channel in electrical communication with the controller, a plurality of isolation devices in electrical communication with channel, and a plurality of flash memory devices, wherein each flash memory device is in electrical communication with the channel and controller through the one of the isolation devices. | 2012-09-06 |
20120226852 | CONTROL METHOD AND CONTROLLER FOR DRAM - A DRAM controller including a judging module, a determination module, and a transmission module is provided. The judging module judges an address content difference between a first command and a third command. The determination module determines a plurality of buffering address contents, associated with at least one second command, according to the address content difference. The transmission module then sequentially transmits the first command, the at least one second command, and the third command to the DRAM. | 2012-09-06 |
20120226853 | REDUNDANT ARRAY OF INEXPENSIVE DISKS (RAID) SYSTEM CONFIGURED TO REDUCE REBUILD TIME AND TO PREVENT DATA SPRAWL - A RAID system is provided in which, in the event that a rebuild is to be performed for one of the PDs, a filter driver of the operating system of the computer of the RAID system informs the RAID controller of the RAID system of addresses in the virtual memory that are unused. Unused virtual memory addresses are those which have never been written by the OS as well as those which have been written by the OS and subsequently freed by the OS. The RAID controller translates the unused virtual memory addresses into unused physical addresses. The RAID controller then reconstructs data and parity only for the unused physical addresses in the PD for which the rebuild is being performed. This reduces the amount of data and parity that are rebuilt during a rebuild process and reduces the amount of time that is required to perform the rebuild process. In addition, the RAID system is capable of being configured to prevent or reduce data sprawl. | 2012-09-06 |
20120226854 | STORAGE SYSTEM AND A METHOD OF CONTROL OF A STORAGE SYSTEM - A storage system and a method of control of a storage system including plural storage media, at least one SAS expander physically connected to each of the plural storage media and to a controller via plural parallel data channels, the controller being connected to a host CPU arranged in use to execute input/output operations to transfer data to and read data from the plural storage media, the method including: at the expander, varying the available bandwidth for communication with the plural storage media by varying the available number of the plural parallel data channels thereby providing control of the number of input/output operations executed by the host CPU. | 2012-09-06 |
20120226855 | SHARING A DIRECTORY OF A DISPERSED STORAGE NETWORK - A method begins by a processing module receiving a dispersed storage network (DSN) access request accessing DSN memory and determining state of a shared global DSN directory. When the shared global DSN directory is in a ready-for-modification state, the method continues with the processing module updating state of the shared global DSN directory to a modification state, executing the DSN access request, updating a non-shared local DSN directory and the shared global DSN directory, and changing the state of the shared global DSN directory to the ready-for-modification state. When the shared global DSN directory is in the modification state, the method continues with the processing module executing the DSN access request, generating a shared global DSN directory update request, updating the non-shared local DSN directory, and when the shared global DSN directory is in the ready-for-modification state, coordinating updating of the shared global DSN directory. | 2012-09-06 |
20120226856 | CONTROL METHOD WITH MANAGEMENT SERVER APPARATUS FOR STORAGE DEVICE AND AIR CONDITIONER AND STORAGE SYSTEM - Arrangements reducing power consumption of an air conditioner and a storage device. A control method with a management server apparatus for a plurality of storage devices and an air conditioner includes calculating plural combinations of allocating the work amount to the plurality of storage devices, calculating the heating value of each storage device included in the plurality of storage devices for each of the plural combinations, calculating the quantity of heat conducted to the air conditioner, based on the heating value and positional information of the plurality of storage devices and the air conditioner, calculating the power consumption to cool the quantity of heat conducted to the air conditioner, selecting a combination included in the plural combinations based on the power consumption of the air conditioner, and issuing a move of the data stored in a first storage device to a second storage device, based on the selected combination. | 2012-09-06 |
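The selection step above reduces to picking, among the candidate allocations, the one whose conducted heat is cheapest to cool. A minimal sketch follows; the callables stand in for the patent's heat-conduction and cooling-power models (which it derives from heating values and device positions), and all names are assumptions:

```python
def pick_allocation(combinations, heat_of, cooling_power):
    """Return the work-allocation combination that minimizes the power
    the air conditioner needs to cool the conducted heat.

    heat_of(combo) and cooling_power(heat) are hypothetical stand-ins
    for the heat-conduction and cooling-power models in the abstract.
    """
    return min(combinations, key=lambda combo: cooling_power(heat_of(combo)))
```

For instance, with per-device heating values summed as the heat model and a linear cooling model, the lowest-heat combination wins.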
20120226857 | COMPUTER AND METHOD FOR MANAGING STORAGE APPARATUS - A management computer manages the pool application information that indicates a pool application for a pool and the application condition information that indicates the condition for the pool application. The management computer calculates an excess storage capacity based on a pool usage status for the pool. The management computer specifies a pool application for the pool and the condition for the pool application based on the pool application information and the application condition information. The management computer judges whether the specified condition is satisfied even in the case in which a storage area having a storage capacity equivalent to or less than the calculated excess storage capacity is deleted from the pool. In the case in which the result of the judgment is positive, the management computer defines a capacity equivalent to or less than the excess storage capacity as an unused capacity. | 2012-09-06 |
20120226858 | METHOD FOR MANAGING HIERARCHICAL STORAGE DURING DETECTION OF SENSITIVE INFORMATION, COMPUTER READABLE STORAGE MEDIA AND SYSTEM UTILIZING SAME - Examples of methods, systems, and computer-readable media for detection of sensitive information on hierarchical storage management mainframes are described using multiple techniques. The techniques may include determining if data has been migrated from a first storage medium to a second storage medium, recalling the migrated data from a second storage medium to the first storage medium, reading the migrated data, then remigrating the data to the second storage medium. | 2012-09-06 |
20120226859 | SPATIAL EXTENT MIGRATION FOR TIERED STORAGE ARCHITECTURE - Provided are techniques for migrating a first extent, determining a spatial distance between the first extent and a second extent, determining a ratio of a profiling score of the second extent to the spatial distance, and, in response to determining that the ratio exceeds a threshold, migrating the second extent. | 2012-09-06 |
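The decision rule above is concrete enough to sketch: after the first extent migrates, a neighboring extent follows when its profiling score divided by its spatial distance exceeds the threshold. Modeling extent positions as integers and the function name are assumptions:

```python
def extents_to_migrate(migrated_pos, candidates, threshold):
    """Return positions of extents that should follow the migrated one.

    candidates: list of (position, profiling_score) pairs; the spatial
    distance is modeled as |position - migrated_pos| (an assumption).
    """
    follow = []
    for pos, score in candidates:
        distance = abs(pos - migrated_pos)
        if distance > 0 and score / distance > threshold:
            follow.append(pos)
    return follow
```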
20120226860 | COMPUTER SYSTEM AND DATA MIGRATION METHOD - A path is formed between a host computer and storage apparatuses without depending on the configuration of the host computer and a network and a plurality of volumes having a copy function are migrated between storage apparatuses while keeping the latest data. | 2012-09-06 |
20120226861 | STORAGE CONTROLLER AND METHOD OF CONTROLLING STORAGE CONTROLLER - Provided is a storage controller and method of controlling same which, if part of a storage area of a local memory is used as cache memory, enable an access conflict for access to a parallel bus connected to the local memory to be avoided. | 2012-09-06 |
20120226862 | EVENT TRANSPORT SYSTEM - A method for communicating events from an event source to an event consumer is disclosed herein. In one embodiment, such a method includes monitoring an event generation rate associated with an event source. The method further determines if the event generation rate exceeds a threshold rate. Upon receiving an event from the event source, the method generates a condensed version of the event if the event generation rate exceeds the threshold rate. The method then communicates the condensed version to an event consumer. A corresponding system and computer program product are also disclosed. | 2012-09-06 |
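The rate check and condensation described above can be modeled with a sliding window of event timestamps. This is a sketch: the window length, the injected `now` parameter, and the choice of what a "condensed" event retains are all assumptions not specified in the abstract:

```python
from collections import deque

class EventTransport:
    """Forward full events while the source's generation rate is at or
    below the threshold; forward condensed versions when it exceeds it."""

    def __init__(self, threshold_rate, window=1.0):
        self.threshold = threshold_rate   # events per window second
        self.window = window
        self.times = deque()

    def send(self, event, now):
        self.times.append(now)
        # Drop timestamps that fell out of the sliding window.
        while self.times[0] < now - self.window:
            self.times.popleft()
        rate = len(self.times) / self.window
        if rate > self.threshold:
            return {"id": event["id"]}    # condensed: keep only the id
        return event                      # full event
```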
20120226863 | INFORMATION PROCESSING DEVICE, MEMORY ACCESS CONTROL DEVICE, AND ADDRESS GENERATION METHOD THEREOF - An information processing device according to the present invention includes an operation unit that outputs an access request, a storage unit including a plurality of connection ports and a plurality of memories capable of a simultaneous parallel process that has an access unit of a plurality of word lengths for the connection ports, and a memory access control unit that distributes a plurality of access addresses corresponding to the access request received for each processing cycle from the operation unit, and generates an address in a port including a discontinuous word by one access unit for each of the connection ports. | 2012-09-06 |
20120226864 | TIERED DATA MANAGEMENT METHOD AND SYSTEM FOR HIGH PERFORMANCE DATA MONITORING - A method for managing memory in a system for an application, comprising: assigning a first block (i.e., a big block) of the memory to the application when the application is initiated, the first block having a first size, the first block being assigned to the application until the application is terminated; dividing the first block into second blocks (i.e., intermediate blocks), each second block having a same second size, a second block of the second blocks for containing data for one or more components of a single data structure to be accessed by one thread of the application at a time; and, dividing the second block into third blocks (i.e., small blocks), each third block having a same third size, a third block of the third blocks for containing data for a single component of the single data structure. | 2012-09-06 |
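The three-tier layout above implies simple offset arithmetic. A sketch, under the assumption that each size divides the one above it evenly; the function names and the 1024/256/64 example sizes are illustrative:

```python
def tier_layout(big_size, mid_size, small_size):
    """Number of intermediate blocks in the big block, and of small
    blocks in each intermediate block (sizes must divide evenly)."""
    assert big_size % mid_size == 0 and mid_size % small_size == 0
    return big_size // mid_size, mid_size // small_size

def small_block_offset(mid_index, small_index, mid_size, small_size):
    """Byte offset of a small block within the big block: skip whole
    intermediate blocks, then whole small blocks inside the target."""
    return mid_index * mid_size + small_index * small_size
```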
20120226865 | NETWORK-ON-CHIP SYSTEM INCLUDING ACTIVE MEMORY PROCESSOR - Disclosed is a network-on-chip system including an active memory processor for addressing the increased communication latency caused by multiple processors and memories. The network-on-chip system includes a plurality of processing elements that request an active memory operation for a predetermined operation from a shared memory to reduce access latency of the shared memory, and an active memory processor connected to the processing elements through a network, storing codes for processing custom transactions in response to the active memory operation request, performing an operation on addresses or data stored in a shared cache memory or the shared memory based on the codes, and transmitting the operation result to the processing elements. | 2012-09-06 |
20120226866 | DYNAMIC MIGRATION OF VIRTUAL MACHINES BASED ON WORKLOAD CACHE DEMAND PROFILING - A computer-implemented method comprises obtaining a cache hit ratio for each of a plurality of virtual machines, and identifying, from among the plurality of virtual machines, a first virtual machine having a cache hit ratio that is less than a threshold ratio. The identified first virtual machine is then migrated from the first physical server having a first cache size to a second physical server having a second cache size that is greater than the first cache size. Optionally, a virtual machine having a cache hit ratio that is less than a threshold ratio is identified on a class-specific basis, such as for L1 cache, L2 cache and L3 cache. | 2012-09-06 |
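The identification step above is a straightforward filter on hit ratios. A minimal sketch — the dict input format and function name are assumptions, and the optional class-specific variant would simply repeat this per cache level (L1, L2, L3):

```python
def vms_to_migrate(hit_ratios, threshold):
    """Return VMs whose cache hit ratio is below the threshold; these
    are the candidates for migration to a physical server with a
    larger cache. hit_ratios: dict mapping VM name to hit ratio."""
    return sorted(vm for vm, ratio in hit_ratios.items() if ratio < threshold)
```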
20120226867 | Binary tree based multilevel cache system for multicore processors - A binary tree based multi-level cache system for multi-core processors is described, along with its two possible implementations, the LogN and LogN+1 models, which maintain a true pyramid. | 2012-09-06 |
20120226868 | SYSTEMS AND METHODS FOR PROVIDING DETERMINISTIC EXECUTION - Devices and methods for providing deterministic execution of multithreaded applications are provided. In some embodiments, each thread is provided access to an isolated memory region, such as a private cache. In some embodiments, more than one private cache are synchronized via a modified MOESI coherence protocol. The modified coherence protocol may be configured to refrain from synchronizing the isolated memory regions until the end of an execution quantum. The execution quantum may end when all threads experience a quantum end event such as reaching a threshold instruction count, overflowing the isolated memory region, and/or attempting to access a lock released by a different thread in the same quantum. | 2012-09-06 |
20120226869 | FILE SERVER APPARATUS, MANAGEMENT METHOD OF STORAGE SYSTEM, AND PROGRAM - When a storage capacity of a file server is expanded using an online storage service, elimination of an upper-limit constraint of the file size as a constraint of the online storage service and reduction in the communication cost are realized. A kernel module including logical volumes on the online storage service divides a file into block files at a fixed length and stores and manages the block files to prevent the upper-limit constraint of the file size. When a READ/WRITE request is generated for a mounted file system, only necessary block files are downloaded and used from the online storage service based on an offset value and size information to optimize the communication and realize the communication cost reduction. | 2012-09-06 |
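Mapping a READ/WRITE at a given offset and size to the fixed-length block files it touches is pure arithmetic. A sketch of that mapping — the function name is an assumption; the patent describes the behavior only at the block-file level:

```python
def blocks_for_request(offset, size, block_size):
    """Indices of the fixed-length block files that must be downloaded
    from the online storage service to satisfy a READ/WRITE covering
    bytes [offset, offset + size)."""
    if size <= 0:
        return []
    first = offset // block_size
    last = (offset + size - 1) // block_size   # last byte touched
    return list(range(first, last + 1))
```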
20120226870 | RECOVERY IN SHARED MEMORY ENVIRONMENT - A method for recovery in a shared memory environment is provided in the illustrative embodiments. A core in a multi-core processor is designated as a user level core (ULC), which executes an instruction to modify a memory while executing an application. A second core is designated as an operating system core (OSC), which manages checkpointing of several segments of the shared memory. A set of flags is accessible to a memory controller to manage a shared memory. A flag in the set of flags corresponds to one segment in the segments of the shared memory. A message or instruction for modification of a segment is received. A cache line tracking determination is made whether a cache line used for the modification has already been used for a similar modification. If not, a part of the segment is checkpointed. The modification proceeds after checkpointing. | 2012-09-06 |
20120226871 | MULTIPLE-CLASS PRIORITY-BASED REPLACEMENT POLICY FOR CACHE MEMORY - This invention is a method and system for replacing an entry in a cache memory (replacement policy). The cache is divided into a high-priority class and a low-priority class. Upon a request for information such as data, an instruction, or an address translation, the processor searches the cache. If there is a cache miss, the processor locates the information elsewhere, typically in memory. The found information replaces an existing entry in the cache. The entry selected for replacement (eviction) is chosen from within the low-priority class using a FIFO algorithm. Upon a cache hit, the processor performs a read, write, or execute using or upon the information. If the performed instruction was a “write”, the information is placed into the high-priority class. If the high-priority class is full, an entry within the high-priority class is selected for removal based on a FIFO algorithm, and re-classified into the low-priority class. | 2012-09-06 |
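The policy above is concrete enough for a functional sketch: two FIFO-ordered classes, misses fill the low-priority class, writes promote entries to the high-priority class, and a full high-priority class demotes its oldest entry back to the low-priority class. The class capacities and method names are assumptions:

```python
from collections import OrderedDict

class TwoClassFifoCache:
    """Functional model of the multiple-class priority-based policy."""

    def __init__(self, low_capacity, high_capacity):
        self.low = OrderedDict()    # insertion order doubles as FIFO order
        self.high = OrderedDict()
        self.low_cap = low_capacity
        self.high_cap = high_capacity

    def read(self, key, fetch):
        if key in self.high:
            return self.high[key]
        if key in self.low:
            return self.low[key]
        value = fetch(key)          # miss: locate the information elsewhere
        self._insert_low(key, value)
        return value

    def write(self, key, value):
        # A write promotes the entry into the high-priority class.
        self.low.pop(key, None)
        if key not in self.high and len(self.high) >= self.high_cap:
            old_key, old_val = self.high.popitem(last=False)  # FIFO demotion
            self._insert_low(old_key, old_val)
        self.high[key] = value

    def _insert_low(self, key, value):
        if len(self.low) >= self.low_cap:
            self.low.popitem(last=False)   # FIFO eviction from low class
        self.low[key] = value
```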
20120226872 | PREFETCHING CONTENT OF A DIRECTORY BY EXECUTING A DIRECTORY ACCESS COMMAND - In response to a request to access a directory, a directory access command is invoked and executed, where the executed directory access command accesses the directory and prefetches content of the directory. | 2012-09-06 |
20120226873 | MULTIPROCESSOR ARRANGEMENT HAVING SHARED MEMORY, AND A METHOD OF COMMUNICATION BETWEEN PROCESSORS IN A MULTIPROCESSOR ARRANGEMENT - A multiprocessor arrangement is disclosed, in which a plurality of processors are able to communicate with each other by means of a plurality of time-sliced memory blocks. At least one, and up to all, of the processors may be able to access more than one time-sliced memories. A mesh arrangement of such processors and memories is disclosed, which may be a partial or complete mesh. The mesh may to two-dimensional, or higher dimensional. | 2012-09-06 |
20120226874 | CHARACTERIZATION AND OPTIMIZATION OF TRACKS ON DISKS - Embodiments of the invention relate to characterization and optimization of tracks on a magnetic or optical disk by determining, by a processor, input/output (I/O) characteristics for a plurality of blocks on the disk, wherein the characteristics comprise at least one of a data size, a data type, or an association between the data files, and determining a plurality of parameters affecting operations performed on the disk for placement of the plurality of data clusters. | 2012-09-06 |
20120226875 | STORAGE CONTROL APPARATUS - A storage unit is provided with a plurality of sub storage units configured to include a plurality of hard disk drives, an enclosure, a printed wiring board, a power supply device, and a cable holder. The sub storage units each operate separately. The enclosure is provided in the array of the hard disk drives so that the distance can be shorter between the enclosure and each of the hard disk drives. With the provision of the cable holder, communications cables can both be brought closer to the printed wiring board. With such a configuration, the coupling points among the communications cables, the printed wiring board, and the enclosure can be favorably reduced. The resulting storage control apparatus can be mounted with a larger number of storage devices, thereby being able to maintain good signal quality. | 2012-09-06 |
20120226876 | NETWORK EFFICIENCY FOR CONTINUOUS REMOTE COPY - A method for controlling data for a storage system comprises: receiving a write input/output (I/O) command of a data from a host computer, the write I/O command including an application ID identifying an application operating on the host computer which sends the write I/O command; maintaining a record of a relation between the application ID in the write I/O command and a storage location of the data to be written in a first volume of the storage system; determining, based on the application ID, whether a data transfer function between the first volume and a second storage volume is to be performed on the data beyond writing the data to the storage location in the first volume; and if the data transfer function is to be performed on the data, then performing the data transfer function on the data to the second volume. | 2012-09-06 |
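The per-application decision in 20120226876 reduces to a simple gate on the write path. A hypothetical sketch (dict-backed volumes and an assumed `replicated_apps` set stand in for real storage and policy):

```python
def handle_write(app_id, location, data, primary, secondary, replicated_apps, record):
    """Write to the first volume, record the app-ID/location relation, and
    transfer to the second volume only for applications flagged for copy."""
    primary[location] = data
    record.setdefault(app_id, set()).add(location)   # application ID -> locations
    if app_id in replicated_apps:                    # per-application decision
        secondary[location] = data                   # continuous remote copy
        return True
    return False
```

Writes from applications outside the replication set never cross the network, which is the claimed efficiency gain.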
20120226877 | MAINTAINING MIRROR AND STORAGE SYSTEM COPIES OF VOLUMES AT MULTIPLE REMOTE SITES - Provided is a method for maintaining mirror and storage system copies of volumes at multiple remote sites. A first server maintains a mirror copy relationship between a first storage system at a first site and a second storage system at a second site. The first server performs a first point-in-time copy operation from the first storage system to a first storage system copy, wherein the data for the first storage system copy is consistent as of the determined point-in-time. The first server transmits a command to a second server to create a point-in-time copy of the second storage system. The second server processes mirror data transferred from the first server as part of the mirror copy relationship to determine when to create a second point-in-time copy. The second server performs the second point-in-time copy operation. | 2012-09-06 |
20120226878 | DATA PROCESSING SYSTEM - A data processing system has a plurality of storage systems. In this system, data replication is performed at high speed and efficiency while maintaining data integrity. In addition, when failure has occurred in a configuration element, the time necessary to resume the data replication is reduced. In accordance with an instruction from a first host computer, updating of replication-target data and creation of a journal are performed in a storage system A, and updating of replication data and creation of a journal are performed in a storage system B. A storage system C retrieves a journal from the storage system B asynchronously with the updating, and performs updating of replication data. When failure has occurred in the storage system B, a journal-retrieving end is altered to the storage system, and the replication data is updated in accordance with the retrieved journal. | 2012-09-06 |
20120226879 | FLASHCOPY HANDLING - A technique for handling a FlashCopy® process includes receiving a FlashCopy® instruction for a source disk, performing a FlashCopy® point in time copy of the source disk on to a target disk, creating a map specifying the FlashCopy® point in time copy from the source disk to the target disk, creating a primary fdisk for the source disk, if one does not already exist, and creating a primary fdisk for the target disk, if one does not already exist, or, if one does already exist, converting the existing primary fdisk for the target disk into a secondary fdisk, and creating a new primary fdisk for the target disk. | 2012-09-06 |
20120226880 | APPARATUS, ELECTRONIC DEVICES AND METHODS ASSOCIATED WITH AN OPERATIVE TRANSITION FROM A FIRST INTERFACE TO A SECOND INTERFACE - Subject matter disclosed herein relates to an apparatus comprising memory and a controller, such as a controller which determines block locking states in association with operative transitions between two or more interfaces that share at least one block of memory. The apparatus may support single channel or multi-channel memory access, write protection state logic, or various interface priority schemes. | 2012-09-06 |
20120226881 | Hard Disk Control Method, Hard Disk Control Device and Computer - A hard disk control method, a hard disk control device and a computer are provided. The method includes: detecting the current mode in which the system runs; determining the access frequency of the hard disk in the system when it is detected that the system is currently running in an idle mode; and intercepting the hard disk access commands to be sent to the hard disk, when the access frequency of the hard disk is lower than a predetermined access frequency threshold, to make the hard disk enter a preset power saving mode, and saving the hard disk access commands into a preset memory. | 2012-09-06 |
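The control flow of 20120226881 can be modeled as a small interceptor. This is a hypothetical sketch (the threshold, time window, and queue are illustrative assumptions, not from the claims):

```python
from collections import deque

class DiskPowerManager:
    """When the system is idle and disk access frequency drops below a
    threshold, incoming commands are intercepted into memory instead of
    waking the disk."""

    def __init__(self, threshold_per_sec, window=1.0):
        self.threshold = threshold_per_sec
        self.window = window            # seconds over which frequency is measured
        self.access_times = deque()
        self.queued = []                # the "preset memory" for intercepted commands
        self.power_saving = False

    def access_frequency(self, now):
        # Drop timestamps that have aged out of the measurement window.
        while self.access_times and now - self.access_times[0] > self.window:
            self.access_times.popleft()
        return len(self.access_times) / self.window

    def submit(self, command, now, idle_mode):
        freq = self.access_frequency(now)
        self.access_times.append(now)
        if idle_mode and freq < self.threshold:
            # Below threshold in idle mode: intercept and save in memory,
            # letting the disk stay in its power-saving mode.
            self.power_saving = True
            self.queued.append(command)
            return None
        # Otherwise wake the disk and flush any saved commands ahead of this one.
        self.power_saving = False
        to_send = self.queued + [command]
        self.queued = []
        return to_send
```

Queued commands are replayed in order once the disk wakes, so interception is transparent to callers that can tolerate the added latency.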
20120226882 | STORAGE SYSTEM, STORAGE CONTROL APPARATUS, AND STORAGE CONTROL METHOD - In a storage system a write processing section writes data, of ejection object data, which is stored in a storage apparatus to an ejection portable record medium. A read processing section reads out data, of the ejection object data, which is not stored in the storage apparatus at the time of an ejection request being made from portable record mediums contained in a store section, and stores the data in the storage apparatus as data to be written by the write processing section. An ejection process control section controls timing at which the write processing section begins writing on the basis of an amount of the data, of the ejection object data, which is stored in the storage apparatus and an amount of remaining data, of the ejection object data, which is to be read out by the read processing section from the portable record mediums. | 2012-09-06 |
20120226883 | MANAGEMENT APPARATUS - According to an embodiment, a management apparatus includes: a stream storage configured to store a stream constituted by a plurality of pages; a trace information storage configured to store trace information for each stream; a receiving unit configured to receive a request to write the pages constituting the stream; and a management unit. The management unit refers to the trace information; writes the page into the stream storage when the write rule indicates that the page is to be written in the stream storage; writes the page into the temporary storage when the write rule indicates that the page is to be written in the temporary storage; and writes the page that has been written in the temporary storage into the stream storage in units of extents at a predetermined timing. | 2012-09-06 |
20120226884 | SIGNAL RESTORATION CIRCUIT, LATENCY ADJUSTMENT CIRCUIT, MEMORY CONTROLLER, PROCESSOR, COMPUTER, SIGNAL RESTORATION METHOD, AND LATENCY ADJUSTMENT METHOD - A signal restoration circuit includes a storage configured to store input signals by disposing the input signals in an input order, the input signals being readable from the storage in the disposed order, and a storage controller configured to control delay time from an input of the input signal to an output in the storage based on delay information of the input signal. | 2012-09-06 |
20120226885 | COMPUTER SYSTEM AND CONTROL METHOD THEREFOR - To reduce the number of data copies between volume pools by preventing unevenness of resource usage between the pools, provided is a computer system including: a storage apparatus; and a host computer coupled to the storage apparatus, the storage apparatus including a physical storage device, the storage apparatus holding information associating virtual volumes and pools each including real storage areas of the physical storage device, the storage apparatus allocating, to the virtual volume of a write destination designated by the host computer, the real storage areas included in each of the plurality of pools corresponding to the virtual volume of the write destination, and storing the data therein, the computer system being configured to: determine, based on the information held by the storage apparatus, orders of priority of the volumes of the write destination by the host computers; and hold the determined orders of priority. | 2012-09-06 |
20120226886 | METHODS AND SYSTEMS FOR RELEASING AND RE-ALLOCATING STORAGE SEGMENTS IN A STORAGE VOLUME - Storage segments in a storage volume coupled to a cache memory are released and re-allocated. A processor receives notice to release a segment allocated to the storage volume. A release pending status is assigned to the segment while preparing the segment for release. The storage volume is enabled to re-claim the segment while the segment includes the release pending status. | 2012-09-06 |
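The release-pending lifecycle in 20120226886 is essentially a three-state machine. A minimal sketch (state names are illustrative, not taken from the claims):

```python
class Segment:
    """A storage segment that can be re-claimed while its release is pending."""

    ALLOCATED = "allocated"
    RELEASE_PENDING = "release_pending"
    FREE = "free"

    def __init__(self):
        self.state = Segment.ALLOCATED

    def begin_release(self):
        # Notice received to release the segment: mark it release-pending
        # while it is prepared for release.
        if self.state == Segment.ALLOCATED:
            self.state = Segment.RELEASE_PENDING

    def reclaim(self):
        # The storage volume may re-claim the segment while release is still
        # pending, cancelling the release.
        if self.state == Segment.RELEASE_PENDING:
            self.state = Segment.ALLOCATED
            return True
        return False

    def finish_release(self):
        # Preparation complete and nobody re-claimed it: the segment is freed.
        if self.state == Segment.RELEASE_PENDING:
            self.state = Segment.FREE
```

The intermediate state is what lets the volume grab the segment back cheaply instead of going through a full free-then-allocate cycle.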
20120226887 | LOGICAL ADDRESS TRANSLATION - The present disclosure includes methods for logical address translation, methods for operating memory systems, and memory systems. One such method includes receiving a command associated with an LA, wherein the LA is in a particular range of LAs, and translating the LA to a physical location in memory using an offset corresponding to a number of physical locations skipped when writing data associated with a range of LAs other than the particular range. | 2012-09-06 |
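The offset-based mapping in 20120226887 can be illustrated with a toy model. The range boundaries and offset below are invented for the example; the claim does not specify a concrete layout:

```python
def translate_la(la, particular_range, offset):
    """Translate a logical address (LA) to a physical location.

    LAs inside the particular range are shifted by `offset`, the number of
    physical locations that were skipped while writing data for other LAs.
    LAs outside the range map through unchanged in this toy model."""
    start, end = particular_range
    if start <= la <= end:
        return la + offset
    return la
```

Keeping a single offset per range avoids storing a per-address mapping entry for every skipped location.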
20120226888 | Memory Management Unit With Pre-Filling Capability - Systems and methods for memory management units (MMUs) configured to automatically pre-fill a translation lookaside buffer (TLB) with address translation entries expected to be used in the future, thereby reducing TLB miss rate and improving performance. The TLB may be pre-filled with translation entries, wherein addresses corresponding to the pre-fill may be selected based on predictions. Predictions may be derived from external devices, or based on stride values, wherein the stride values may be a predetermined constant or dynamically altered based on access patterns. Pre-filling the TLB may effectively remove latency involved in determining address translations for TLB misses from the critical path. | 2012-09-06 |
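The stride-based pre-fill idea in 20120226888 can be demonstrated with a dictionary-backed TLB. A minimal sketch, assuming a fixed stride of one page and a dict standing in for the page-table walk:

```python
class PrefillTLB:
    """TLB model that pre-fills the predicted next translation on every access."""

    def __init__(self, page_table, stride=1):
        self.page_table = page_table   # virtual page -> physical frame (the "walk")
        self.stride = stride           # predicted distance to the next access
        self.tlb = {}
        self.misses = 0

    def translate(self, vpage):
        if vpage not in self.tlb:
            self.misses += 1                        # demand miss: walk the table
            self.tlb[vpage] = self.page_table[vpage]
        # Pre-fill the predicted next translation so a strided access
        # pattern hits in the TLB on its next access.
        nxt = vpage + self.stride
        if nxt in self.page_table and nxt not in self.tlb:
            self.tlb[nxt] = self.page_table[nxt]
        return self.tlb[vpage]
```

With a sequential access pattern only the very first access misses; every later translation was pre-filled off the critical path.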
20120226889 | SYSTEM AND METHOD FOR DETERMINING EXACT LOCATION RESULTS USING HASH ENCODING OF MULTI-DIMENSIONED DATA - Aspects of the present invention are directed to systems and methods for optimizing identification of locations within a search area using hash values. A hash value represents location information in a single-dimension format. Computing points around some location includes calculating an identification boundary that surrounds the location of interest based on the location's hash value. The identification boundary is expanded until it exceeds a search area defined by the location and a distance. Points around the location can be identified based on having associated hash values that fall within the identification boundary. Hashing operations let a system reduce the geometric work (i.e., searching inside boundaries) and the processing required by computing straightforward operations on hash quantities (e.g., searching a linear range of geohashes) instead of, for example, point-to-point comparisons. | 2012-09-06 |
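The coarse-filter-then-refine pattern in 20120226889 can be shown with a Z-order (Morton) hash, a simple geohash-like encoding; this is an illustrative stand-in, not the patent's encoding, and the grid coordinates are assumed non-negative integers:

```python
def interleave(x, y, bits=8):
    """Z-order (Morton) hash: interleave the bits of grid coordinates so that
    nearby points tend to fall into nearby hash values."""
    h = 0
    for i in range(bits):
        h |= ((x >> i) & 1) << (2 * i + 1)   # x bits land on odd positions
        h |= ((y >> i) & 1) << (2 * i)       # y bits land on even positions
    return h

def points_near(points, cx, cy, radius):
    """Coarse filter by a linear hash range, then exact distance check."""
    # Hashes of all points in the bounding box lie between the hashes of its
    # lower-left and upper-right corners (Morton codes are monotone per axis).
    lo = interleave(max(cx - radius, 0), max(cy - radius, 0))
    hi = interleave(cx + radius, cy + radius)
    candidates = [(x, y) for x, y in points if lo <= interleave(x, y) <= hi]
    # Refine: keep only points actually within the search radius.
    return [(x, y) for x, y in candidates
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
```

The hash-range scan discards most points with one integer comparison each; the expensive point-to-point distance check runs only on the few survivors.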
20120226890 | ACCELERATOR AND DATA PROCESSING METHOD - The process speed and the power efficiency are improved, while accomplishing downsizing, by configuring an integrated hard-wired logic controller in hard-wired logic, and a function modification is enabled by a patch circuit, without re-designing the integrated hard-wired logic controller itself through high-level synthesis, even when a function modification becomes necessary because of a specification change or a design fault found after production. Costs are reduced because re-designing is unnecessary. Therefore, an accelerator is provided which can improve the process speed and the power efficiency while accomplishing downsizing, and which can remarkably reduce the costs of function modification after production. | 2012-09-06 |
20120226891 | PROCESSOR WITH INCREASED EFFICIENCY VIA CONTROL WORD PREDICTION - Methods and apparatuses are provided for increased efficiency in a processor via control word prediction. The apparatus comprises an operational unit capable of determining whether an instruction will change a first control word to a second control word for processing dependent instructions. Execution units process the dependent instructions using a predicted control word and compare the second control word to the predicted control word. A scheduling unit causes the execution units to reprocess the dependent instructions when the predicted control word does not match the second control word. The method comprises determining that an instruction will change a first control word to a second control word and processing the dependent instructions using a predicted control word. The second control word is compared to the predicted control word and the dependent instructions are reprocessed using the second control word when the predicted control word does not match the second control word. | 2012-09-06 |
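The speculate/compare/replay loop in 20120226891 can be condensed to a few lines. A hypothetical sketch, with callables standing in for dependent instructions and integers for control words:

```python
def run_dependent(dep_ops, predicted_cw, actual_cw):
    """Process dependent instructions with a predicted control word; if the
    actual control word turns out to differ, reprocess them with it.

    Returns (results, reprocessed)."""
    results = [op(predicted_cw) for op in dep_ops]    # speculative execution
    reprocessed = False
    if actual_cw != predicted_cw:
        # Misprediction: the scheduling unit replays the dependent instructions.
        results = [op(actual_cw) for op in dep_ops]
        reprocessed = True
    return results, reprocessed
```

When the prediction is right (the common case if control words change rarely), the dependent instructions never stall waiting for the control-word-changing instruction to retire.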
20120226892 | Method and apparatus for generating efficient code for scout thread to prefetch data values for a main thread - One embodiment of the present invention provides a system that generates code for a scout thread to prefetch data values for a main thread. During operation, the system compiles source code for a program to produce executable code for the program. This compilation process involves performing reuse analysis to identify prefetch candidates which are likely to be touched during execution of the program. Additionally, this compilation process produces executable code for the scout thread which contains prefetch instructions to prefetch the identified prefetch candidates for the main thread. In this way, the scout thread can subsequently be executed in parallel with the main thread in advance of where the main thread is executing to prefetch data items for the main thread. | 2012-09-06 |
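The scout-thread idea in 20120226892 can be demonstrated with two Python functions sharing a cache; a minimal sketch in which a dict stands in for memory, `cache.setdefault` stands in for a prefetch instruction, and the scout is simply run to completion first for determinism:

```python
import threading

def scout(access_sequence, memory, cache, done):
    # The scout thread runs ahead along the same access sequence the main
    # thread will follow, prefetching each value into the shared cache.
    for addr in access_sequence:
        cache.setdefault(addr, memory[addr])
    done.set()

def main_work(access_sequence, memory, cache):
    # The main thread does the real work; prefetched addresses hit the cache.
    hits = 0
    for addr in access_sequence:
        if addr in cache:
            hits += 1                      # value already prefetched by the scout
        else:
            cache[addr] = memory[addr]     # cold miss
    return hits
```

In the real scheme the compiler's reuse analysis picks which loads the scout prefetches, and the two threads run concurrently with the scout kept some distance ahead; this sketch only shows the shared-cache warming effect.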