29th week of 2013 patent application highlights part 55 |
Patent application number | Title | Published |
20130185427 | TRAFFIC SHAPING BASED ON REQUEST RESOURCE USAGE - A current request for a server to perform work for a user profile can be received and processed at the server. It can be determined whether server usage by the profile exhibits a sufficient trend toward a threshold value to warrant performing traffic shaping for the user profile. If so, then a delay time can be calculated based on, or as a function of, server resources used in processing the current request, and a response to the current request can be delayed by the delay time. | 2013-07-18 |
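The shaping flow in the abstract above can be sketched minimally as follows. The linear delay function, the `base_delay` constant, and all parameter names are illustrative assumptions, not taken from the application:

```python
def compute_delay(resources_used, trend, threshold, base_delay=0.01):
    """Delay a response when usage trends toward the threshold.

    If the profile's usage trend has not reached the threshold, no
    shaping is applied; otherwise the delay grows with the resources
    the current request consumed (a simple linear function, chosen
    here purely for illustration).
    """
    if trend < threshold:
        return 0.0
    return base_delay * resources_used

# No shaping while the trend stays under the threshold.
assert compute_delay(resources_used=5, trend=0.2, threshold=0.8) == 0.0
# Delay scales with resource usage once the trend reaches the threshold.
assert abs(compute_delay(resources_used=5, trend=0.9, threshold=0.8) - 0.05) < 1e-9
```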
20130185428 | System and Method for Network Path Validation - In a server device, a method for validating a network path in a network includes receiving a listing of ports from a client device, each port in the listing of ports associated with the server device and receiving a request message from the client device via a first identified port in the listing of ports. The method includes, in response to receiving the request message, opening a subsequent identified port in the listing of ports for communication with the client device and, following opening of the subsequent identified port in the listing of ports, transmitting a response message to the client device via the first identified port. | 2013-07-18 |
20130185429 | Processing Store Visiting Data - The present disclosure introduces a method and a system for processing store visiting data. New visiting data is obtained. A user ID, a store ID, and a visiting time are analyzed from the new visiting data. It is determined whether the user ID and the store ID match one of user IDs and store IDs in static historical visiting data. If there is a match, it is determined that a user corresponding to the new visiting data is a repeated user of the store. Otherwise, it is then determined whether the user ID and the store ID match one of user IDs and store IDs in dynamic historical visiting data. If there is a match, it is also determined that a user corresponding to the new visiting data is a repeated user of the store. | 2013-07-18 |
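The static-then-dynamic matching described above reduces to two set lookups. The representation of history as sets of (user ID, store ID) pairs is an assumption made for this sketch:

```python
def is_repeat_visitor(user_id, store_id, static_history, dynamic_history):
    """Return True if (user_id, store_id) appears in either history.

    static_history and dynamic_history are sets of (user_id, store_id)
    pairs; the static history is checked first, as in the abstract.
    """
    if (user_id, store_id) in static_history:
        return True
    return (user_id, store_id) in dynamic_history

static = {("u1", "s1")}
dynamic = {("u2", "s1")}
assert is_repeat_visitor("u1", "s1", static, dynamic)      # static match
assert is_repeat_visitor("u2", "s1", static, dynamic)      # dynamic match
assert not is_repeat_visitor("u3", "s1", static, dynamic)  # new visitor
```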
20130185430 | MULTI-LEVEL HASH TABLES FOR SOCKET LOOKUPS - Methods, systems, and devices are described for managing socket lookups in an operating system of a device providing high-speed network services using multi-level hash tables. A system includes a listen socket lookup hash table and a connection socket lookup hash table. The listen socket lookup hash table includes a number of buckets configured to store listen socket lookup data for network connections. The connection socket lookup hash table includes a number of buckets configured to store connection socket lookup data for the network connections. The buckets in each of the hash tables may be individually locked. In certain examples, a third table may store binding data based on the data stored in the listen socket lookup hash table and the connection socket lookup hash table. | 2013-07-18 |
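The per-bucket locking scheme above can be illustrated with a small table whose buckets each carry their own lock, so lookups in different buckets never contend. The bucket count and key shapes below are assumptions for the sketch:

```python
import threading

class BucketLockedTable:
    """Hash table with one lock per bucket, so that operations on
    different buckets can proceed concurrently (bucket count of 8
    is illustrative)."""
    def __init__(self, n_buckets=8):
        self.buckets = [dict() for _ in range(n_buckets)]
        self.locks = [threading.Lock() for _ in range(n_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:           # only this bucket is locked
            self.buckets[i][key] = value

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key)

# Separate tables for listen-socket and connection-socket lookups.
listen_table = BucketLockedTable()
conn_table = BucketLockedTable()
listen_table.put(("0.0.0.0", 80), "listener")
conn_table.put(("10.0.0.1", 80, "10.0.0.2", 5000), "conn")
assert listen_table.get(("0.0.0.0", 80)) == "listener"
assert conn_table.get(("10.0.0.1", 80, "10.0.0.2", 5000)) == "conn"
```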
20130185431 | Uniform Definition, Provision, and Access of Software Services on the Cloud - A method for accessing cloud computing services provided by service providers includes uniformly defining, provisioning, and accessing cloud computing services of multiple genres such as single-tenant, multi-tenant and third party cloud services. The method defines cloud services across genres in a standard manner, acquires cloud services across genres, and provides a unified access and view of all cloud services subscribed by a user. The cloud services are acquired across genres by identifying a provisioning mechanism based on the cloud service genre requested by the user, automatically activating necessary task flow for the identified mechanism, and enabling the user to access the requested service by providing access with a unified Identity and Access Management System. | 2013-07-18 |
20130185432 | METHOD AND SYSTEM FOR SHARING ROUTER RESOURCES VIA A MOBILE VIRTUAL ROUTER - An approach is provided for creating a single mobile virtual router (MVR) to share pooled resources. A mobile virtual router is formed to utilize resources of multiple routers of a network, wherein the routers include one or more physical routers, one or more virtual routers, one or more other mobile virtual routers, or a combination thereof. The resources of the single mobile virtual router are dynamically partitioned in response to an operational criterion of the network. | 2013-07-18 |
20130185433 | PERFORMANCE INTERFERENCE MODEL FOR MANAGING CONSOLIDATED WORKLOADS IN QOS-AWARE CLOUDS - The workload profiler and performance interference (WPPI) system uses a test suite of recognized workloads, a resource estimation profiler and influence matrix to characterize un-profiled workloads, and affiliation rules to identify optimal and sub-optimal workload assignments to achieve consumer Quality of Service (QoS) guarantees and/or provider revenue goals. The WPPI system uses a performance interference model to forecast the performance impact to workloads of various consolidation schemes usable to achieve cloud provider and/or cloud consumer goals, and uses the test suite of recognized workloads, the resource estimation profiler and influence matrix, affiliation rules, and performance interference model to perform off-line modeling to determine the initial assignment selections and consolidation strategy to use to deploy the workloads. The WPPI system uses an online consolidation algorithm, the offline models, and online monitoring to determine virtual machine to physical host assignments responsive to real-time conditions to meet cloud provider and/or cloud consumer goals. | 2013-07-18 |
20130185434 | Cloud-based Content Management System - Methods and apparatus, including computer program products, implementing and using techniques for providing content management services in a Cloud computing environment. A content management application and associated content is distributed across a set of servers in a Cloud computing environment. Requests for Cloud content management services are received from requesters that are using the Cloud computing environment. The received requests are analyzed to determine an amount of resources needed for responding to the requests. Based on the results of the analysis and a predetermined set of rules, the content management application is dynamically replicated to additional servers within the Cloud computing environment. Any instance of the content management application is capable of replying to any received request so as to maintain a high throughput of the Cloud content management services. | 2013-07-18 |
20130185435 | EFFICIENTLY RELATING ADJACENT MANAGEMENT APPLICATIONS MANAGING A SHARED INFRASTRUCTURE - A linkage controller analyzes, for a first management application managing at least one common resource with a second management application adjacent to the first management application within a computing environment comprising multiple resources and relationships, a resource and relationship model known by the first management application of a selection of resources and relationships managed by the first management application from among the plurality of resources and relationships in the computing environment. The linkage controller identifies, for the first management application, only a minimal set of resources and relationships within the resource and relationship model providing at least one optimal linkage point between the first management application and the second management application as to the at least one common resource. The linkage controller outputs the minimal set of resources and relationships to the second management application for relating to the first management application. | 2013-07-18 |
20130185436 | Optimizing Allocation Of On-Demand Resources Using Performance Zones - In one embodiment, the present invention can be used to efficiently allocate on-demand resources to a customer of a data center such as a multi-tenant data center having resources dedicated to given customers, as well as on-demand resources that can be flexibly provisioned to customers using a performance zone concept realized via logical switches to present a single logical network to the customer. | 2013-07-18 |
20130185437 | NOC-ORIENTED CONTROL OF A DEMAND COORDINATION NETWORK - An apparatus, including a plurality of devices, a network operations center (NOC), and a plurality of control nodes. Each of the plurality of devices consumes a portion of the resource when turned on, and performs a corresponding function within an acceptable operational margin by cycling on and off. The NOC is disposed external to the facility, and generates a plurality of run time schedules that coordinates run times for each of the plurality of devices to control the peak demand of the resource. Each of the plurality of control nodes is coupled to a corresponding one of the plurality of devices. The plurality of control nodes transmits sensor data and device status to the NOC via the demand coordination network for generation of the plurality of run time schedules, and executes selected ones of the run time schedules to cycle the plurality of devices on and off. | 2013-07-18 |
20130185438 | Policy-Aware Based Method for Deployment of Enterprise Virtual Tenant Networks - A method for policy-aware mapping of an enterprise virtual tenant network includes receiving inputs from a hosting network and tenants; translating resource demand and policies of the tenants into a network topology and bandwidth demand on each link in the network; pre-arranging a physical resource of a physical topology for clustering servers on the network to form an allocation unit before a VTN allocation; allocating resources of the hosting network to satisfy demand of the tenants in response to a VTN demand request; and conducting a policy-aware VTN mapping for enumerating all feasible resource mappings, bounded by a predetermined counter, for outputting optimal mapping with policy-compliant routing paths in the hosting network. | 2013-07-18 |
20130185439 | CLOUD-BASED CONTENT MANAGEMENT SYSTEM - Methods for providing content management services in a Cloud computing environment. A content management application and associated content is distributed across a set of servers in a Cloud computing environment. Requests for Cloud content management services are received from requesters that are using the Cloud computing environment. The received requests are analyzed to determine an amount of resources needed for responding to the requests. Based on the results of the analysis and a predetermined set of rules, the content management application is dynamically replicated to additional servers within the Cloud computing environment. Any instance of the content management application is capable of replying to any received request so as to maintain a high throughput of the Cloud content management services. | 2013-07-18 |
20130185440 | ICE-Based NAT Traversal - An originating P-CSCF node receives a SIP INVITE request from first user equipment (UE) that originates a call to a second UE. If a relay candidate address for the first UE is not present in the SIP INVITE request, the SIP INVITE request is modified to include a first address provided by an originating IMS-AGW node as a relay candidate for the first UE and forwarded to the second UE. The originating P-CSCF node receives a SIP INVITE response message from the second UE in response to the SIP INVITE request. If a relay candidate address for the second UE is not present in the SIP INVITE response, the SIP INVITE response is modified to include a second address provided by an originating IMS-AGW node as a relay candidate for the second UE and forwarded to the first UE. The address information is used by both UEs in ICE operations. | 2013-07-18 |
20130185441 | MOBILE RADIO COMMUNICATION DEVICE AND METHOD OF MANAGING CONNECTIVITY STATUS FOR THE SAME - The present invention provides a method of managing connection status for a channel connecting a server device to a mobile radio communication device including a client/server pair ( | 2013-07-18 |
20130185442 | METHODS AND APPARATUS FOR USER PERSONA MANAGEMENT - Systems and techniques for managing a user persona presented in a communication session. In response to a request from an originating user for a communication session, a persona manager for the originating user is invoked, examining request details and the nature and context of the requested communication session and selecting a persona for the user, selection of the persona being employed to indicate services associated with the communication. Similarly, in response to a request from an originating user for a communication session, a persona manager for the receiving user to whom the request is directed examines details of the request and the nature and context of the communication session and makes decisions relating to persona selection for the receiving user. The decision may involve accepting a persona indicated in the originating user's request, or selecting a different persona and creating a request to be routed to the receiving user. | 2013-07-18 |
20130185443 | METHODS AND SYSTEMS FOR AGGREGATING PRESENCE INFORMATION TO PROVIDE A SIMPLIFIED UNIFIED PRESENCE - Methods and systems for providing simplified presence for a user are described. The user has a plurality of associated communication devices registered with a communications server, and each communication device enables at least one communication service class. The server has a user data entry associating the user with each of the plurality of communication devices. To hide the details of the user-associated devices from third parties, a virtual device is defined and associated with the user. Presence information received at the server from the various devices is aggregated together to create aggregated presence information that indicates at least the service classes available from the user-associated devices based on the received presence information. A virtual device presence document is generated containing the aggregated presence information and is provided to a presence server as presence information associated with the user. | 2013-07-18 |
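The aggregation step above amounts to unioning the service classes reported by each device into one virtual-device document. The dict-based presence representation below is an assumption for this sketch, not the application's actual document format:

```python
def aggregate_presence(device_presence):
    """Union the service classes reported by each registered device
    into a single virtual-device presence document (a plain dict
    here, standing in for the document described in the abstract)."""
    classes = set()
    for services in device_presence.values():
        classes.update(services)
    return {"virtual_device": True, "service_classes": sorted(classes)}

presence = {
    "phone": {"voice", "sms"},
    "desktop": {"video", "chat"},
}
doc = aggregate_presence(presence)
# Third parties see only the aggregated classes, not the devices.
assert doc["service_classes"] == ["chat", "sms", "video", "voice"]
```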
20130185444 | SWITCHING BETWEEN CONNECTIVITY TYPES TO MAINTAIN CONNECTIVITY - Techniques are provided for leveraging narrowband connectivity (such as dial-up communications or other types of low bandwidth communications) to provision or configure broadband connectivity between a broadband access provider and a broadband device, such as a DSL modem or a cable modem. Specifically, because narrowband connectivity does not require advance configuration or provisioning by the host system of connectivity parameters for an access-seeking device, a modem at an access-seeking device may be leveraged to establish a narrowband connection between that device and a host system and to enable an exchange or negotiation of connectivity parameters necessary to enable future broadband connectivity. Thus, once established, the narrowband connection may be used as a conduit for communicating required provisioning information between the broadband-enabling host and the access-seeking device to enable broadband connectivity by the device in the future. | 2013-07-18 |
20130185445 | Method and System for Managing a SIP Server - A method, system and computer program product are described for managing network communications to a Session Initiation Protocol (SIP) server capable of SIP processing using a SIP stack. A data packet is received from a network device. It is determined, from the data packet, whether the network device is a device recognised by the SIP server. Responsive to this determination, and before SIP processing using the SIP stack, it is determined whether the data packet conforms to a permitted configuration. The permitted configuration includes that data of the data packet indicates an unfragmented User Datagram Protocol (UDP) packet and that data indicative of SIP data in the received data packet matches a parsing rule. If the data packet conforms to the permitted configuration, it is passed to the SIP stack, if not it is discarded. | 2013-07-18 |
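The pre-stack check described above can be sketched as a predicate applied before any SIP parsing. The simple prefix-based parsing rule below is an illustrative stand-in for whatever rule a deployment would configure:

```python
def permit_packet(is_udp, is_fragmented, payload, parse_rule):
    """Pass the packet to the SIP stack only if it is an unfragmented
    UDP datagram whose payload satisfies the parsing rule; otherwise
    it is discarded. `parse_rule` is any predicate on the payload."""
    if not is_udp or is_fragmented:
        return False
    return parse_rule(payload)

# Illustrative parsing rule: payload must look like a SIP message.
def starts_like_sip(p):
    return p.startswith(b"INVITE") or p.startswith(b"SIP/2.0")

assert permit_packet(True, False, b"INVITE sip:bob@example.com", starts_like_sip)
assert not permit_packet(True, True, b"INVITE sip:bob@example.com", starts_like_sip)
assert not permit_packet(True, False, b"GET / HTTP/1.1", starts_like_sip)
```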
20130185446 | METHOD AND DEVICE FOR CONNECTING TO VIRTUAL PRIVATE NETWORK ACROSS DOMAINS - Embodiments of the present invention disclose a method for connecting to a VPN across domains. The method includes: receiving, by a PE, a request message for connecting a VDC to a VPN sent by a DCG, determining an RD/RT list corresponding to a VPN User ID in the request message according to the User ID, so as to configure a VPN instance, determining a connection identity at a local end according to a connection identity at a DCG end of an AC in the request message, and binding a logical port in the connection identity at the local end with the VPN instance so that the VDC is connected to the VPN. Accordingly, the present invention further provides a PE and a DCG device for connecting to a VPN across domains. | 2013-07-18 |
20130185447 | SYSTEMS AND METHODS FOR ESTABLISHING A WI-FI DISPLAY (WFD) SESSION - Systems, methods, apparatus, and techniques are provided for establishing an application layer communications session over a layer 2 (L2) communications connection. In particular, a discovery request frame is transmitted from a first device. A discovery response frame is received at the first device, where the discovery response frame is transmitted from a second device in response to having received the discovery request frame. An application layer communications session is established between the first device and the second device while maintaining an existing L2 communications connection between the first device and the second device. | 2013-07-18 |
20130185448 | Systems and Methods for Managing Emulation Sessions - A method and system for managing an emulation session of a computer product. The method and system involves receiving a request from a user device to establish the emulation session; establishing an electronic communication link between the user device and an emulation server for providing the emulation session to the user device; operating at least one server processor, the at least one server processor being in electronic communication with the user device and the emulation server and being separate from the user device processor, to determine emulation session data based on the received request and by monitoring the emulation session; to determine a plurality of emulation session parameters based on the received request; to determine a session action to be applied to the emulation session based on the plurality of emulation session parameters and the emulation session data; and to control the emulation session based on the session action. | 2013-07-18 |
20130185449 | Method and Apparatus for Providing Connectivity in a Network with Multiple Packet Protocols - Methods and systems are provided for routing or forwarding packet data conforming to two different communication protocols simultaneously in a computer network. The first protocol may be a legacy protocol, such as IPv4, with routing being performed in a manner that maintains legacy behavior and functions. Such functions may include network address translation. The second protocol may be a newer protocol, such as IPv6, with the routing or forwarding being performed through reduced complexity bridging that enables simplified connectivity of second protocol devices. The bridging performed typically requires less memory and processing power than traditional routing techniques such as those implemented for the first protocol. Reduced memory and processing power requirements enable the second protocol routing functions to be added to legacy equipment that would not otherwise be able to support routing of the second protocol packet data. | 2013-07-18 |
20130185450 | METHODS AND SYSTEMS FOR CONTENT CONTROL - Methods and system for providing content based on an embedded signal are disclosed. A method can comprise generating a placement signal based on an event, repeatedly embedding the placement signal into a data stream, and transmitting the data stream comprising the repeatedly embedded placement signal. | 2013-07-18 |
20130185451 | SYSTEM FOR MANAGING LOSSLESS FAILOVER IN AN AVB NETWORK - A network communication system includes a listener that receives a plurality of data streams from a talker. The data in the data streams may be identical. The listener identifies one of the data streams as a primary data stream and another of the plurality of data streams as a non-primary data stream. The listener may process data in the primary data stream, and may buffer a minimal amount of data in the non-primary data stream. In the event of a failure or disruption in reception of the primary data stream, the listener may switch over to processing the data in the non-primary data stream. The switch over to processing the non-primary data may be performed in a manner that ensures lossless failover. | 2013-07-18 |
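The lossless switch-over above can be sketched with a listener that buffers a small window of the identical non-primary stream and, on failure, drains it from the last processed sequence number. The window size and sequence-number scheme are assumptions made for this sketch:

```python
from collections import deque

class FailoverListener:
    """Processes the primary stream while buffering a minimal window
    of the identical non-primary stream; on failure it replays only
    the buffered data not yet processed, so no samples are lost."""
    def __init__(self, window=4):
        self.backup = deque(maxlen=window)  # minimal non-primary buffer
        self.processed = []
        self.last_seq = -1

    def on_primary(self, seq, data):
        self.processed.append(data)
        self.last_seq = seq

    def on_backup(self, seq, data):
        self.backup.append((seq, data))

    def fail_over(self):
        for seq, data in self.backup:
            if seq > self.last_seq:         # skip already-processed data
                self.processed.append(data)
                self.last_seq = seq

listener = FailoverListener()
for i in range(3):
    listener.on_primary(i, f"pkt{i}")
    listener.on_backup(i, f"pkt{i}")
listener.on_backup(3, "pkt3")   # primary failed before delivering pkt3
listener.fail_over()
assert listener.processed == ["pkt0", "pkt1", "pkt2", "pkt3"]
```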
20130185452 | HYPERTEXT TRANSFER PROTOCOL LIVE STREAMING - Illustrative embodiments disclose receiving a command to play a selected audio visual media on a client device. The client device determines portions of audio visual media from the selected audio visual media and a sequence identifying each portion of the portions in a particular order for playing the portions. The portions and the sequence are determined according to a policy for playing each portion on the client device. The client device retrieves the portions to play in sequence and plays at least a partially retrieved first portion of the portions of the selected audio visual media on the client device. The first portion is identified based on the particular order in the sequence. | 2013-07-18 |
20130185453 | SYSTEM, METHOD AND APPARATUS FOR PROVIDING MULTIMEDIA SOCIAL STREAMING - The present invention generally relates to social media streaming. In particular, embodiments of the invention relate to an apparatus and a method for providing streaming of one or more forms of content across one or more networks. In a preferred embodiment, the apparatus is a wearable device comprising a computing device configured to process and transmit one or more forms of content (e.g., audio, video) to one or more remote social networks. | 2013-07-18 |
20130185454 | TECHNIQUE FOR OBTAINING, VIA A FIRST NODE, INFORMATION RELATING TO PATH CONGESTION - A method for obtaining, by a first node, information relating to a congestion on a path allowing the routing of packets from said first node destined for a second node in a packet communications network, said congestion potentially degrading said routing. | 2013-07-18 |
20130185455 | SYSTEMS AND METHODS FOR ROUTING NETWORK INFORMATION - A network routing system is described herein. The network routing system comprises a traffic router and a plurality of proxy gateways. The traffic router is configured to receive at least one request for a network object from a requester. The request includes a network address of a target web host. One or more proxy servers from a plurality of proxy servers are assigned to each proxy gateway. In operation, if there is a proxy server having a current connection with the target web host, the traffic router selects the proxy server and forwards the request to a proxy gateway that the proxy server is assigned to. In operation, the proxy gateway receives the request for the network object, converts the request into a translated request based on a protocol type of the proxy server, and sends the translated request to the proxy server. | 2013-07-18 |
20130185456 | METHOD FOR RETRIEVING A DATA POINT OF A SWITCH - A method is disclosed for retrieving a data point (operating data) of a switch via a device which does not recognize the data point of the switch and which can only retrieve such data points of a switch. The device is connected to the switch via a data connection for retrieval. In an embodiment, the switch has a data set in which the data point is described including the use of said data point. The device retrieves the data set during the data connection and extracts at least the description and the use of the data point of the switch from the data set. The device retrieves said data point that is recognized by the device at least for the duration of the data connection and processes said data point using the extracted description and use. | 2013-07-18 |
20130185457 | Command API Using Legacy File I/O Commands for Management of Data Storage - Methods and systems are disclosed that relate to file management operations. One method includes receiving from a first computing device at a target computing device a command string including a command and a path. The command is defined in a set of commands recognizable on the first computing device, and the path is associated with the command and indicates execution on the target computing device, the path including a modifier. The method also includes, in response to receiving the command string, interpreting the command as a second command recognizable on the target computing device but not provided by a set of commands supported by the first computing device. The second command is defined at least in part by the modifier. The method further includes performing the second command at the target computing device. | 2013-07-18 |
20130185458 | COMPRESSION BLOCK INPUT/OUTPUT REDUCTION - Exemplary method, system, and computer program product embodiments for compression block input/output (I/O) reduction are provided. In one embodiment, by way of example only, data blocks are arranged into groups to provide a single I/O. Lists indicating the available block space for the data blocks are organized in advance according to space size. The data blocks required for a single command are allocated as the single I/O. The data blocks are sequentially ordered. Additional system and computer program product embodiments are disclosed and provide related advantages. | 2013-07-18 |
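The size-keyed free lists above can be sketched as a small allocator that hands out one contiguous, sequentially ordered run per command. Treating free space as (start, length) runs is an assumption made for this illustration:

```python
from collections import defaultdict

class BlockAllocator:
    """Keeps free block runs pre-organized by size so that all blocks
    for one command can be handed out together as a single I/O."""
    def __init__(self, free_runs):
        # free_runs: iterable of (start_block, run_length) pairs.
        self.by_size = defaultdict(list)
        for start, length in free_runs:
            self.by_size[length].append(start)

    def allocate(self, n_blocks):
        """Return a contiguous, sequentially ordered run of exactly
        n_blocks, or None if no run of that size is available."""
        starts = self.by_size.get(n_blocks)
        if not starts:
            return None
        start = starts.pop()
        return list(range(start, start + n_blocks))

alloc = BlockAllocator([(100, 4), (200, 8)])
assert alloc.allocate(4) == [100, 101, 102, 103]  # one sequential run
assert alloc.allocate(4) is None                  # no 4-block run left
```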
20130185459 | Method and Apparatus for Performing Device Configuration Rediscovery - A data processing system and computer instructions in a data processing system for identifying device configurations. Unique identification information is identified for a set of devices in the data processing system. The identified unique identification information is compared with previously identified unique identification information. Configuration data is moved to a memory for devices in the set of devices in which a match exists between the identified unique identification information and the previously identified unique identification information for devices. Configuration information is obtained from a device in which configuration information is absent in the memory after configuration data has been moved to the memory for the devices to form a current set of configuration data for the set of devices. | 2013-07-18 |
20130185460 | OPERATING SYSTEM STATE COMMUNICATION - A service processor communication method includes establishing a communication channel between a service processor and a central processor in a computing system, wherein communication on the communication channel is independent of processing by the central processor, monitoring an operating system that is under control by the central processor, defining a source designator associated with state information of the operating system, and that is passed via the communication channel between the service processor and the central processor; and announcing the state information to a resource external to the computing system. | 2013-07-18 |
20130185461 | DEVICE MANAGEMENT APPARATUS, DEVICE MANAGEMENT SYSTEM, AND COMPUTER PROGRAM PRODUCT - A device management apparatus manages a device. The device management apparatus includes a contract information acquiring unit that acquires contract information on quality assurance of the device; a state information acquiring unit that acquires state information indicating a state of the device; a determining unit that determines whether the state of the device satisfies a content of a contract based on the contract information and the state information; and a notifying unit that notifies a determination result. | 2013-07-18 |
20130185462 | USB 3.0 DEVICE AND CONTROL METHOD THEREOF - A control unit of a USB 3.0 device controls the USB 3.0 device that has entered an SS.Disabled state to transition to an Rx.Detect state when a USB 2.0 connection is not established after a predetermined time, in which the USB 2.0 connection is one of an HS (High Speed) connection, an FS (Full Speed) connection, and an LS (Low Speed) connection. This enables quick return to the Rx.Detect state for the USB 3.0 device that entered the SS.Disabled state due to an error in the host. | 2013-07-18 |
20130185463 | ASSIGNMENT OF CONTROL OF PERIPHERALS OF A COMPUTING DEVICE - Techniques for enabling software-assisted assignment of control of peripherals (e.g., assigning ownership of or assigning access to the peripherals) by a computing device. In accordance with techniques described herein, assignment of control of peripherals is aided by input from software facilities that instruct a peripheral management facility regarding assignment of peripherals. Software facilities may instruct the peripheral management facility in different ways. In some cases, a software facility may instruct the peripheral management facility how to assign control of a peripheral in a particular way, while in other cases a software facility may instruct the peripheral management facility how to assign control of a group of peripherals. In other cases, a software facility may not instruct a peripheral management facility how to assign control of peripherals, but may identify one or more groups of peripherals for which control should be assigned as a group. | 2013-07-18 |
20130185464 | ELECTRONIC APPARATUS, DATA TRANSFER CONTROL METHOD, AND PROGRAM - An electronic apparatus includes an information obtaining section that obtains performance information from an external storage device, and a control section that performs data transfer control based on the performance information that is obtained by the information obtaining section. | 2013-07-18 |
20130185465 | Fencing Direct Memory Access Data Transfers In A Parallel Active Messaging Interface Of A Parallel Computer - Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints. | 2013-07-18 |
20130185466 | APPLICATION OF ALTERNATE ALIGN PRIMITIVES DURING SAS RATE MATCHING TO IMPROVE CONTINUOUS ADAPTATION - The present invention is directed to a method which allows for substitution of standard SAS ALIGN primitives with an alternative, more spectrally pure set of SAS ALIGN primitives that allows for enhanced continuous adaptation performance. Two consenting SAS devices which are connected to each other may negotiate for and start communicating using the alternate set of ALIGN primitives, which may allow for improved jitter tolerance and reduced bit error rate. | 2013-07-18 |
20130185467 | MANAGING DATA PATHS BETWEEN COMPUTER APPLICATIONS AND DATA STORAGE DEVICES - Provided is a computer-implemented method of managing data paths between a computer application and a storage device. The I/O (input/output) load data of a computer application is obtained. If the I/O load data of the computer application is above a pre-determined threshold, data paths are provisioned between the computer application and the storage device based on a pre-defined policy applicable to the computer application. | 2013-07-18 |
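The threshold test in the abstract above lends itself to a small sketch. The following Python is purely illustrative — the function name, the policy dictionary keys, and the one-extra-path-per-threshold-multiple rule are invented assumptions, not taken from the patent:

```python
# Hypothetical sketch of threshold-driven data-path provisioning.
# The policy keys ("base_paths", "max_paths") and the scaling rule
# are illustrative assumptions.

def provision_paths(io_load, threshold, policy):
    """Return the number of data paths to provision between an
    application and a storage device, per a simple policy table."""
    if io_load <= threshold:
        return policy.get("base_paths", 1)
    # Above the threshold: add one path per full multiple of the threshold,
    # capped by the policy's maximum.
    extra = int(io_load // threshold)
    return min(policy.get("base_paths", 1) + extra, policy.get("max_paths", 8))

print(provision_paths(50, 100, {"base_paths": 1, "max_paths": 4}))   # below threshold
print(provision_paths(250, 100, {"base_paths": 1, "max_paths": 4}))  # above threshold
```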
20130185468 | SEMICONDUCTOR DEVICE - A semiconductor device according to the present invention includes a first module that issues a first transaction from a first interface unit to be a bus master, a second module that includes a second interface unit to be a bus slave and a third interface unit to be a bus master, and issues a second transaction in response to the first transaction, a third module that receives the second transaction by a fourth interface unit to be a bus slave, a bus master stop request control unit that asserts a bus master stop request and completes an assertion process in response to assertion of a bus master stop acknowledgement, and a code addition unit that adds to the first transaction a compulsory process request code for forcing issuance of the second transaction regardless of the bus master stop request. | 2013-07-18 |
20130185469 | INTERRUPT SIGNAL ACCEPTING APPARATUS AND COMPUTER APPARATUS - An interrupt signal accepting apparatus manages two OSs, relates devices sharing the same interrupt number respectively with an OS caused to perform an interrupt processing and an interrupt priority unique to a device, and manages an interrupt number priority conversion table showing the relation between the interrupt number and the interrupt priority. Each device continuously outputs an interrupt request having the same interrupt number until the interrupt processing is completed. An interrupt controller converts the interrupt number into the interrupt priority in accordance with the interrupt number priority conversion table when there is an interrupt signal from the devices. An interrupt signal control section causes a running OS to perform the interrupt processing to change the interrupt priority in the interrupt number priority conversion table when the converted interrupt priority matches an interrupt priority related to the running OS, and stops the running OS and starts the other OS when the interrupt priorities do not match. | 2013-07-18 |
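The table-driven dispatch in this abstract can be modeled in a few lines. The sketch below is a toy, not the patented mechanism — the table contents, the OS labels, and the function name are invented:

```python
# Toy model of an interrupt-number -> priority conversion table and the
# OS-selection rule: handle on the running OS when the priority matches it,
# otherwise switch to the other OS. Table contents are invented.

# interrupt number -> (owning OS, interrupt priority)
PRIORITY_TABLE = {5: ("os_a", 10), 7: ("os_b", 20)}

def dispatch(interrupt_number, running_os):
    """Return which OS should handle the interrupt."""
    owner, _priority = PRIORITY_TABLE[interrupt_number]
    if owner == running_os:
        return running_os  # priorities match: the running OS handles it
    return owner           # mismatch: stop the running OS, start the other
```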
20130185470 | DETECTION METHOD AND APPARATUS FOR HOT-SWAPPING OF SD CARD - A detection method for hot-swapping of a Secure Digital (SD) card is provided. The detection method includes steps of: transmitting an inquiry command to a card reader at a predetermined frequency; receiving a command return message replied in response to the inquiry command; determining whether the SD card is removed or plugged according to the command return message; and detecting a hot-swapping status of the SD card in real-time to provide an accurate status of the SD card for upper-layer applications. | 2013-07-18 |
20130185471 | DETECTION METHOD AND APPARATUS FOR HOT-SWAPPING OF SD CARD - A detection method for detecting a hot-swapping status of a Secure Digital (SD) card is provided. The detection method includes steps of: transmitting an inquiry command to the SD card at a predetermined frequency when an application requiring the hot-swapping status of the SD card is activated; receiving a current command return message replied in response to the inquiry command, wherein the current command return message includes information indicative of a presence or information indicative of an absence of the SD card; determining the hot-swapping status according to a previous command return message and the current command return message; and returning the determined hot-swapping status to the application. | 2013-07-18 |
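The previous/current comparison at the heart of this abstract is easy to illustrate. This is a hedged sketch — the message strings and function name are assumptions, not the patent's actual message format:

```python
# Illustrative comparison of two consecutive command return messages,
# each assumed to be either "present" or "absent".

def hot_swap_status(previous, current):
    """Derive the hot-swapping event from the previous and current
    command return messages."""
    if previous == current:
        return "unchanged"
    return "plugged" if current == "present" else "removed"

print(hot_swap_status("absent", "present"))  # card was just inserted
```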
20130185472 | TECHNIQUES FOR IMPROVING THROUGHPUT AND PERFORMANCE OF A DISTRIBUTED INTERCONNECT PERIPHERAL BUS - A method for accelerating execution of read operations in a distributed interconnect peripheral bus is provided. The method comprises generating a first number of speculative read requests addressed to an address space related to a last read request served on the bus; sending the speculative read requests to a root component connected to the bus; receiving a second number of read completion messages from the root component of the bus; and sending a read completion message, out of the received read completion messages, to the endpoint component only if the read completion message is respective of a real read request or a valid speculative read request out of the speculative read requests, wherein a real read request is issued by the endpoint component. | 2013-07-18 |
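The two halves of this scheme — generating speculative addresses near the last served read, then forwarding only completions that answer real or still-valid speculative requests — can be sketched as below. Everything here (stride size, list representation, names) is an invented illustration:

```python
# Hypothetical sketch: speculative read generation and completion filtering.
# The 64-byte stride and the address-list representation are assumptions.

def speculative_addresses(last_addr, count, stride=64):
    """Addresses expected to be requested next, issued speculatively."""
    return [last_addr + stride * i for i in range(1, count + 1)]

def filter_completions(completions, real_requests, speculative_requests):
    """Forward a completion only if it answers a real request issued by the
    endpoint or a still-valid speculative request."""
    valid = set(real_requests) | set(speculative_requests)
    return [c for c in completions if c in valid]
```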
20130185473 | Method for Filtering Traffic to a Physically-Tagged Data Cache - Embodiments of a data cache that substantially decrease the number of accesses to a physically-tagged tag array of the data cache are provided. In general, the data cache includes a data array that stores data elements, a physically-tagged tag array, and a virtually-tagged tag array. In one embodiment, the virtually-tagged tag array receives a virtual address. If there is a match for the virtual address in the virtually-tagged tag array, the virtually-tagged tag array outputs, to the data array, a way stored in the virtually-tagged tag array for the virtual address. In addition, in one embodiment, the virtually-tagged tag array disables the physically-tagged tag array. Using the way output by the virtually-tagged tag array, a desired data element in the data array is addressed. | 2013-07-18 |
20130185474 | TECHNIQUES USED BY A VIRTUAL MACHINE IN COMMUNICATION WITH AN EXTERNAL MACHINE AND RELATED VIRTUAL MACHINE SYSTEM - A method used by a virtual machine in communication with an external machine includes providing a single sharing page that is shared between a plurality of virtual machines and a particular virtual machine, wherein the particular virtual machine and the plurality of virtual machines run on a same physical machine; writing into the single sharing page a data packet to be sent by the virtual machine to the external machine; scheduling a page swap between the single sharing page and a blank memory page of the particular virtual machine; and sending, to the external machine, the data packet in the memory page of the particular virtual machine subsequent to the page swap. | 2013-07-18 |
20130185475 | SYSTEMS AND METHODS FOR CACHE PROFILING - A cache module leverages a logical address space and storage metadata of a storage module (e.g., virtual storage module) to cache data of a backing store. The cache module maintains access metadata to track access characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not currently in the cache. The access metadata may be separate from the storage metadata maintained by the storage module. The cache module may calculate a performance metric of the cache based on profiling metadata, which may include portions of the access metadata. The cache module may determine predictive performance metrics of different cache configurations. An optimal cache configuration may be identified based on the predictive performance metrics. | 2013-07-18 |
20130185476 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides. | 2013-07-18 |
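The occupancy bookkeeping and consolidation step recur in several of this week's cache-demotion filings. A toy model, with a fixed stride size and invented names (this is not the patented data structure):

```python
# Toy model of stride occupancy counts and consolidation: valid tracks
# from partially-full source strides are copied into one target stride,
# freeing the sources. Stride size and names are illustrative.

class Stride:
    def __init__(self, size=4):
        self.tracks = [None] * size  # None marks a free slot

    def occupancy(self):
        """Count of valid tracks in the stride."""
        return sum(t is not None for t in self.tracks)

def consolidate(target, sources):
    """Copy valid tracks from the source strides into the target stride,
    emptying the sources (assumes the target has room)."""
    for src in sources:
        for i, track in enumerate(src.tracks):
            if track is not None:
                target.tracks[target.tracks.index(None)] = track
                src.tracks[i] = None
    return target
```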
20130185477 | VARIABLE LATENCY MEMORY DELAY IMPLEMENTATION - A method includes receiving, from a processor, a first read request including a first read request address mapped to a first memory location of a register array and a second read request including a second read request address mapped to a second memory location of the register array. The method includes assigning a first simulated time delay to the first read request and assigning a second simulated time delay to the second read request. The method includes, in response to a first elapsed time being equal to the first simulated time delay, outputting a first read request response including first data. The first elapsed time commences upon receipt of the first read request. The method includes, in response to a second elapsed time being equal to the second simulated time delay, outputting a second read request response including second data. The second elapsed time commences upon receipt of the second read request. | 2013-07-18 |
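Per-request simulated delays naturally reorder completions: a later request with a shorter delay can finish first. A minimal event-queue sketch of that timing model (the tuple layout and function name are assumptions for illustration):

```python
import heapq

# Minimal sketch of serving reads after per-request simulated delays.
# Each request is (arrival_time, simulated_delay, data); responses come
# out ordered by completion time (arrival + delay).

def serve_reads(requests):
    """Return (completion_time, data) pairs in completion order."""
    events = [(arrival + delay, data) for arrival, delay, data in requests]
    heapq.heapify(events)
    return [heapq.heappop(events) for _ in range(len(events))]

# Request B arrives later but has the shorter simulated delay, so it
# completes first.
print(serve_reads([(0, 5, "A"), (1, 2, "B")]))  # → [(3, 'B'), (5, 'A')]
```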
20130185478 | POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted. | 2013-07-18 |
20130185479 | DATA PROTECTING METHOD, MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS - A data protecting method for protecting a sub-directory and at least one pre-stored file in a rewritable non-volatile memory module is provided. The method includes receiving a write command from a host system and determining whether a write address indicated by the write command is an address storing a file description block of the sub-directory. The method also includes, when the write address is the address storing a file description block of the sub-directory, determining whether a portion of data streams corresponding to the write command is the same as a corresponding content recorded in the file description block of the sub-directory. The method further includes, when the portion of data streams corresponding to the write command is not the same as the corresponding content recorded in the file description block of the sub-directory, transmitting a write failure signal to the host system. | 2013-07-18 |
20130185480 | STORAGE BALLOONING - One embodiment of the present invention provides a system for managing storage space in a mobile device. During operation, the system detects a decrease in available disk space in a host file system, wherein an image file for a guest system is stored in the host file system. In response to the detected decrease, the system increases a size of a balloon file in a storage of a guest system. The system then receives an indication of a TRIM or discard communication and intercepts the TRIM or discard communication. Next, the system determines that at least one block is free based on the intercepted TRIM or discard communication. Subsequently, the system frees a physical block corresponding to the at least one block in a storage of the host system and reduces a size of the image file for the guest system in accordance with the intercepted TRIM or discard communication. | 2013-07-18 |
20130185481 | SWITCHING DRIVERS BETWEEN PROCESSORS - Systems, methods, and computer software for operating a device can be used to operate the device in multiple modes. The device can be operated in a first operating mode adapted for processing data, in which a first processor executes a driver for a nonvolatile memory and a second processor performs processing of data stored in files on the nonvolatile memory. An instruction can be received to switch the device to a second operating mode adapted for reading and/or writing files from or to the nonvolatile memory. The driver for the nonvolatile memory can be switched from the first processor to the second processor in response to the instruction, and the driver for the nonvolatile memory can be executed on the second processor after performing the switch. A communications driver can be executed on the first processor in response to the instruction to switch the device to the second operating mode. | 2013-07-18 |
20130185482 | MEMORY SYSTEM USING A STORAGE HAVING FIRMWARE WITH A PLURALITY OF FEATURES - A memory system includes a host including a configuration controller to receive an input command and to output a configuration command corresponding to the input command, and a storage to be driven by firmware including a plurality of features, the storage including an adaptation controller to receive the configuration command from the configuration controller and to determine whether to enable each of the features. | 2013-07-18 |
20130185483 | DATA STORAGE SYSTEM, MEMORY CONTROLLER, NONVOLATILE MEMORY DEVICE, AND METHOD OF OPERATING THE SAME - A nonvolatile memory device includes: first through m-th word lines arranged sequentially and first through m-th pages connected respectively to the first through m-th word lines; a redundant array of inexpensive disks (RAID) controller generating first RAID parity data from first through (m−1)-th data; and an access controller connected to the RAID controller and capable of accessing the nonvolatile memory device, wherein the access controller programs the first through (m−1)-th data to the first through (m−1)-th pages and programs the first RAID parity data to the m-th page. | 2013-07-18 |
20130185484 | FILE PROGRAMMING METHOD AND ASSOCIATED DEVICE FOR NAND FLASH - A file programming method for a flash memory is provided. The method includes steps of: obtaining a section description file and corresponding allocation information and user data while generating burning files, wherein the section description file includes section description information of at least one section and the allocation information includes the number of burning files to be generated; determining a file type corresponding to a section file according to the section description information; and generating the burning files utilizing the user data according to the section description information, the number of burning files to be generated corresponding to the section description information, and the file types corresponding to the section files. | 2013-07-18 |
20130185485 | Non-Volatile Memory Devices Using A Mapping Manager - Provided are storage devices that may include a non-volatile memory. The storage devices may also include a controller configured to perform a read operation on a physical page of the non-volatile memory in response to a read request on a logical page of the non-volatile memory from a host. The controller may include a mapping manager configured to manage a plurality of logical blocks by a logical unit. The mapping manager may include a unit map table including a correlation between the logical unit and a physical unit corresponding to the logical unit. Additionally, the mapping manager may be configured to change a mapping method according to whether the unit map table includes a physical unit corresponding to a logical unit including a logical page requested by the host. Related user devices and electronic devices are also provided. | 2013-07-18 |
20130185486 | STORAGE DEVICE, STORAGE SYSTEM, AND INPUT/OUTPUT CONTROL METHOD PERFORMED IN STORAGE DEVICE - A storage device includes a storage unit including a plurality of regions in which data is stored, the storage unit configured to input and output the data through channels and ways corresponding to the plurality of regions; an interface unit including a multi-entry queue, the multi-entry queue including a plurality of entries in which received commands are entered, the interface unit being configured to transmit data to be written in and read from the storage unit in response to the commands entered in the plurality of entries of the multi-entry queue; and a firmware unit configured to allocate the plurality of entries of the multi-entry queue corresponding to the commands received by the interface unit. | 2013-07-18 |
20130185487 | MEMORY SYSTEM AND MOBILE DEVICE INCLUDING HOST AND FLASH MEMORY-BASED STORAGE DEVICE - A memory system is provided which includes a storage device including a flash memory; and a host configured to request a storage device state and user pattern information via a user interface, to analyze the user pattern information, to set up a parameter of the storage device such that the storage device operates an optimization operation, according to the analyzing result, and to provide a command for the optimization operation to the storage device. | 2013-07-18 |
20130185488 | SYSTEMS AND METHODS FOR COOPERATIVE CACHE MANAGEMENT - A cache module leverages storage metadata to cache data of a backing store on a non-volatile storage device. The cache module maintains access metadata pertaining to access characteristics of logical identifiers in the logical address space, including access characteristics of un-cached logical identifiers (e.g., logical identifiers associated with data that is not stored on the non-volatile storage device). The access metadata may be separate and/or distinct from the storage metadata. The cache module determines whether to admit data into the cache and/or evict data from the cache using the access metadata. A storage module may provide eviction candidates to the cache module. The cache module may select candidates for eviction. The storage module may leverage the eviction candidates to improve the performance of storage recovery and/or grooming operations. | 2013-07-18 |
20130185489 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. | 2013-07-18 |
20130185490 | SEMICONDUCTOR MEMORY SYSTEM HAVING A SNAPSHOT FUNCTION - In a semiconductor memory computer equipped with a flash memory, use of backed-up data is enabled. The semiconductor memory computer includes an address conversion table for detecting physical addresses of at least two pages storing data by designating a logical address from one of logical addresses to be designated by a reading request. The semiconductor memory computer includes a page status register for detecting one page status allocated to each page, and page statuses to be detected include at least the following four statuses: (1) a latest data storage status, (2) a not latest data storage status, (3) an invalid data storage status, and (4) an unwritten status. By using the address conversion table and the page status register, at least two data versions (latest data and past data) can be read for one designated logical address from a host computer. | 2013-07-18 |
20130185491 | MEMORY CONTROLLER AND A METHOD THEREOF - A memory controller includes a mixed buffer and an arbiter. The mixed buffer includes at least one single-port buffer and at least one multi-port buffer for managing data flow between a host and a storage device. The arbiter determines an order of access to the mixed buffer among a plurality of masters. The data to be written or read are partitioned into at least two parts, which are then moved to the single-port buffer and the multi-port buffer, respectively. | 2013-07-18 |
20130185492 | Memory Watch - A method can include receiving memory configuration information that specifies a memory configuration; receiving memory usage information for the memory configuration; analyzing the received memory usage information for a period of time; and, responsive to the analyzing, controlling notification circuitry configured to display a graphical user interface that presents information for physically altering a specified memory configuration. Various other apparatuses, systems, methods, etc., are also disclosed. | 2013-07-18 |
20130185493 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 2013-07-18 |
20130185494 | POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted. | 2013-07-18 |
20130185495 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. | 2013-07-18 |
20130185496 | Vector Processing System - A vector processing system provides high performance vector processing using a System-On-a-Chip (SOC) implementation technique. One or more scalar processors (or cores) operate in conjunction with a vector processor, and the processors collectively share access to a plurality of memory interfaces coupled to Dynamic Random Access read/write Memories (DRAMs). In typical embodiments the vector processor operates as a slave to the scalar processors, executing computationally intensive Single Instruction Multiple Data (SIMD) codes in response to commands received from the scalar processors. The vector processor implements a vector processing Instruction Set Architecture (ISA) including machine state, instruction set, exception model, and memory model. | 2013-07-18 |
20130185497 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 2013-07-18 |
20130185498 | PREVENTION OF DATA LOSS DUE TO ADJACENT TRACK INTERFERENCE - For limiting data loss due to ATI or ATE, an apparatus may include a storage module, a tracking module, and a refresh module. The storage module is configured to store a risk value for a tracked storage division. The risk value indicates a risk level of data loss for the tracked storage division. The tracked storage division is one of a plurality of storage divisions of a data storage device. The tracking module is configured to update the risk value to indicate a higher risk level based on a write to a physically proximal storage division. The physically proximal storage division is within an interference range of the tracked storage division. The tracking module is configured to reset the risk value based on a write to the tracked storage division. The refresh module is configured to refresh the tracked storage division based on the risk value meeting a threshold value. | 2013-07-18 |
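The risk-value bookkeeping described in this abstract — neighbours accumulate risk on each nearby write, a write to the tracked division resets it, and a threshold triggers a refresh — can be sketched as a small class. The interference range, threshold, and class layout here are invented parameters, not the patent's:

```python
# Toy sketch of adjacent-track-interference risk tracking: each write
# resets the written division's risk, bumps neighbours within the
# interference range, and refreshes any neighbour whose risk reaches
# the threshold. Range and threshold values are illustrative.

class AtiTracker:
    def __init__(self, n_divisions, interference_range=1, threshold=3):
        self.risk = [0] * n_divisions
        self.range = interference_range
        self.threshold = threshold
        self.refreshed = []  # divisions refreshed so far, in order

    def write(self, division):
        self.risk[division] = 0  # a write resets the target's own risk
        lo = max(0, division - self.range)
        hi = min(len(self.risk) - 1, division + self.range)
        for d in range(lo, hi + 1):
            if d == division:
                continue
            self.risk[d] += 1  # physically proximal divisions gain risk
            if self.risk[d] >= self.threshold:
                self.refreshed.append(d)  # refresh (rewrite) the division
                self.risk[d] = 0          # ...which resets its risk
```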
20130185499 | FAST EXIT FROM SELF-REFRESH STATE OF A MEMORY DEVICE - A system provides for a signal to indicate when a memory device exits from self-refresh. Thus, substantially at the same time (before or after) the memory device exits self-refresh, an indicator signal can be triggered to indicate normal operation or standard refresh operation and normal memory access of the memory device. A memory controller can access the indicator signal to determine whether the memory device is in self-refresh. Thus, the memory controller can more carefully manage the timing of sending a command to the memory device while reducing the delay time typically associated with detecting a self-refresh condition. | 2013-07-18 |
20130185500 | AUTONOMIC RECLAMATION PROCESSING FOR TAPES - Various embodiments for autonomic reclamation processing for tapes are provided. Instructions are received to perform reclamation processing on a formatted tape. The formatted tape is loaded into a tape drive for buffering active data during reclamation processing and consolidating all of the active data in a capacity-optimized manner on the same formatted tape. The formatted tape comprises metadata denoting active and inactive data blocks for files. The metadata of the formatted tape is read into a table in reclamation memory. The table is sorted and a starting block address is selected. All active files ordered in the table starting at the starting block address are read into the reclamation memory. The files are written from the reclamation memory to the formatted tape from the starting block address, and the table is updated with the new block addresses of the files. The metadata is updated with the updated table. | 2013-07-18 |
20130185501 | CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided are a computer program product, system, and method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. | 2013-07-18 |
20130185502 | DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. | 2013-07-18 |
20130185503 | METHOD FOR METADATA PERSISTENCE - Providing automatic updating of the mapped and unmapped extents in the metadata disk layout for a transaction. A transaction contains mapped and unmapped extent changes. The mapped extent changes can be anywhere in the 10 MB metadata disk area, as well as in the unmapped area. A write journal is added to every migration. For every migration, a transaction is created that contains the mapped and unmapped changes. After a reboot, the write journal is applied. For greater data integrity, a block-level sequence number, block number, and CRC are maintained for the secondary and primary copies. | 2013-07-18 |
20130185504 | DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. | 2013-07-18 |
20130185505 | STORAGE SYSTEM PROVIDING VIRTUAL VOLUMES - Multiple storage area groups into which multiple storage areas provided by multiple storage devices are classified with reference to storage area attributes are managed. The multiple logical volumes to which, in accordance with a write request to at least one address included in multiple addresses in the logical volume, at least one storage area included in the multiple storage areas is allocated are provided. In accordance with the access condition of the at least one address in the logical volume, the data written to the at least one address by the write request is migrated from the at least one storage area included in one of the multiple storage area groups to at least one storage area in another storage area group included in the multiple storage area groups. | 2013-07-18 |
20130185506 | Controlling a Storage System - A method, computer-readable storage medium, and computer system for controlling a storage system, the storage system comprising a plurality of logical storage volumes, the method comprising: monitoring, for each of the logical storage volumes, one or more load parameter values; receiving, for each of the logical storage volumes, one or more load parameter threshold values; comparing, for each of the logical storage volumes, the monitored load parameter values of said logical storage volume with the corresponding one or more load parameter threshold values; and, if at least one of the monitored load parameter values of one of the logical storage volumes violates the load parameter threshold value it is compared with, automatically executing a corrective action. | 2013-07-18 |
20130185507 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 2013-07-18 |
20130185508 | SYSTEMS AND METHODS FOR MANAGING CACHE ADMISSION - A cache layer leverages a logical address space and storage metadata of a storage layer (e.g., virtual storage layer) to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. Time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache. | 2013-07-18 |
20130185509 | COMPUTING MACHINE MIGRATION - Systems and methods for migration between computing machines are disclosed. The source and target machines can be either physical or virtual; the source can also be a machine image. The target machine is connected to a snapshot or image of the source machine file system, and a redo-log file is created on the file system associated with the target machine. The target machine begins operation by reading data directly from the snapshot or image of the source machine file system. Thereafter, all writes are made to the redo-log file, and subsequent reads are made from the redo-log file if it contains data for the requested sector or from the snapshot or image if it does not. The source machine continues to be able to run separately and simultaneously after the target machine begins operation. | 2013-07-18 |
20130185510 | CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided is a method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. | 2013-07-18 |
20130185511 | Hybrid Write-Through/Write-Back Cache Policy Managers, and Related Systems and Methods - Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If none of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-back cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches are active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active. | 2013-07-18 |
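The policy decision in 20130185511 reduces to a check on whether any peer cache is active. A minimal sketch follows; representing cache activity as a list of flags is an assumption for illustration, not the application's mechanism:

```python
def select_policy(cache_active: list, target: int) -> str:
    """Pick the write policy for the cache at index `target`.

    Write-back when this cache runs alone (saves power and bus traffic);
    write-through when any peer is active (keeps parallel caches coherent).
    """
    others_active = any(a for i, a in enumerate(cache_active) if i != target)
    return "write-through" if others_active else "write-back"
```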
20130185512 | MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. | 2013-07-18 |
20130185513 | CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. | 2013-07-18 |
20130185514 | CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. | 2013-07-18 |
20130185515 | Utilizing Negative Feedback from Unexpected Miss Addresses in a Hardware Prefetcher - Systems and methods for populating a cache using a hardware prefetcher are disclosed. A method for prefetching cache entries includes determining an initial stride value based on at least a first and second demand miss address in the cache, verifying the initial stride value based on a third demand miss address in the cache, prefetching a predetermined number of cache entries based on the verified initial stride value, determining an expected next miss address in the cache based on the verified initial stride value and addresses of the prefetched cache entries; and confirming the verified initial stride value based on comparing the expected next miss address to a next demand miss address in the cache. If the verified initial stride value is confirmed, additional cache entries are prefetched. If the verified initial stride value is not confirmed, further prefetching is stalled and an alternate stride value is determined. | 2013-07-18 |
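The determine/verify/confirm flow in 20130185515 can be sketched as a small state machine over demand-miss addresses; class structure, names, and the prefetch degree are illustrative assumptions, not details from the application:

```python
class StridePrefetcher:
    """Toy sketch of stride detection with verification and negative
    feedback from unexpected miss addresses."""

    def __init__(self, degree=4):
        self.degree = degree   # cache entries to prefetch per burst
        self.misses = []       # recent demand-miss addresses
        self.stride = None

    def on_demand_miss(self, addr):
        """Record a demand miss; return addresses to prefetch (may be empty)."""
        self.misses.append(addr)
        if len(self.misses) == 2:
            # initial stride from the first two demand misses
            self.stride = self.misses[1] - self.misses[0]
        elif len(self.misses) >= 3:
            if addr - self.misses[-2] == self.stride:
                # stride verified/confirmed: prefetch `degree` entries ahead
                return [addr + self.stride * i
                        for i in range(1, self.degree + 1)]
            # unexpected miss (negative feedback): stall prefetching and
            # derive an alternate stride from the two most recent misses
            self.misses = self.misses[-2:]
            self.stride = self.misses[1] - self.misses[0]
        return []
```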
20130185516 | Use of Loop and Addressing Mode Instruction Set Semantics to Direct Hardware Prefetching - Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines. | 2013-07-18 |
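The loop-bounded truncation in 20130185516 is simple arithmetic: the prefetch burst is clipped to the remaining loop count. A one-function sketch (names are illustrative):

```python
def lines_to_prefetch(max_loop_count: int, completed: int, selected: int) -> int:
    """Truncate a prefetch burst so it never runs past the hardware loop.

    `remaining` is the difference between the maximum loop count and the
    iterations already completed; the actual number of lines prefetched is
    the selected burst size or `remaining`, whichever is smaller.
    """
    remaining = max_loop_count - completed
    return min(selected, remaining)
```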
20130185517 | TECHNIQUES FOR IMPROVING THROUGHPUT AND PERFORMANCE OF A DISTRIBUTED INTERCONNECT PERIPHERAL BUS CONNECTED TO A HOST CONTROLLER - A method for accelerating execution of read operations in a distributed interconnect peripheral bus, wherein the distributed interconnect peripheral bus is coupled to a host controller connected to a universal serial bus (USB) device. The method comprises synchronizing on at least one ring assigned to the USB device; pre-fetching transfer request blocks (TRBs) maintained in the at least one ring, wherein the TRBs are saved in a host memory; saving the pre-fetched TRBs in an internal cache memory; upon reception of a TRB read request from the host controller, serving the request by transferring the requested TRB from the internal cache memory to the host controller; and sending a TRB read completion message to the host controller. | 2013-07-18 |
20130185518 | DETERMINING DATA CONTENTS TO BE LOADED INTO A READ-AHEAD CACHE IN A STORAGE SYSTEM - Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. Each read request sent from the client agent to the storage system includes positions and size for reading. A read-ahead cache is maintained for each client read session. The read-ahead cache is partitioned into two buffers. Data is loaded into the logical buffers according to the changes of the positions in the read requests of the client read session and loading of new data into the buffers is triggered by the read requests positions exceeding a position threshold in the data covered by the second logical buffer. | 2013-07-18 |
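The two-buffer read-ahead scheme in 20130185518 can be sketched as a sliding window over the data segment, where a read whose position crosses a threshold inside the second buffer triggers loading new data. Buffer sizing, the threshold placement, and all names below are assumptions for illustration:

```python
class ReadAheadCache:
    """Two logical buffers over one client read session."""

    def __init__(self, backend_read, buf_size):
        self.read = backend_read   # backend_read(position, size) -> bytes
        self.size = buf_size
        self.base = 0              # storage position of the first buffer
        self.buf = backend_read(0, 2 * buf_size)  # buffers one and two

    def get(self, pos, length):
        # threshold sits midway through the second logical buffer
        threshold = self.base + self.size + self.size // 2
        if pos + length > threshold:
            # slide forward: the second buffer becomes the first,
            # and a new second buffer is loaded from storage
            self.base += self.size
            self.buf = self.buf[self.size:] + self.read(
                self.base + self.size, self.size)
        off = pos - self.base
        return self.buf[off:off + length]
```

Sequential reads are then served from memory, with storage I/O issued only when the session's position advances past the threshold.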
20130185519 | MANAGING GLOBAL CACHE COHERENCY IN A DISTRIBUTED SHARED CACHING FOR CLUSTERED FILE SYSTEMS - Systems, methods, and computer program products are provided for managing global cache coherency in distributed shared caching for clustered file systems (CFS). The CFS manages access permissions to an entire space of data segments by using a distributed shared memory (DSM) module. In response to receiving a request to access one of the data segments, a calculation operation is performed for obtaining the most recent contents of the data segment. The calculation operation performs one of: providing the most recent contents via communication with a remote DSM module, which obtains the data segment from an associated external cache memory; instructing, by the DSM module, a read of the data segment from storage; and determining that any existing contents of the data segment in the local external cache are the most recent contents. | 2013-07-18 |
20130185520 | Determining Cache Hit/Miss of Aliased Addresses in Virtually-Tagged Cache(s), and Related Systems and Methods - Apparatuses and related systems and methods for determining cache hit/miss of aliased addresses in virtually-tagged cache(s) are disclosed. In one embodiment, a virtual aliasing cache hit/miss detector for a VIVT cache is provided. The detector comprises a TLB configured to receive a first virtual address and a second virtual address from the VIVT cache resulting from an indexed read into the VIVT cache based on the first virtual address. The TLB is further configured to generate first and second physical addresses translated from the first and second virtual addresses, respectively. The detector further comprises a comparator configured to receive the first and second physical addresses and effectuate a generation of an aliased cache hit/miss indicator based on a comparison of the first and second physical addresses. In this manner, the virtual aliasing cache hit/miss detector correctly generates cache hits and cache misses, even in the presence of aliased addressing. | 2013-07-18 |
20130185521 | MULTIPROCESSOR SYSTEM AND SCHEDULING METHOD - A multiprocessor system includes a master processor, at least one slave processor, and a synchronization unit. The master processor has a first flag indicating whether the master processor is in a task activation accepting state and a second flag reflective of a flag of a slave processor, iteratively updates the first flag at a frequency based on the volume of tasks processed by the master processor, and activates a task on the master processor or the slave processor based on the first flag and the second flag. Each slave processor has a third flag indicating whether the slave processor is in the task activation accepting state and iteratively updates the third flag at a frequency based on the volume of tasks processed by the slave processor. Tasks are allocated to the slave processor by the master processor. The synchronization unit synchronizes the third flag and the second flag. | 2013-07-18 |
20130185522 | ALLOCATION AND WRITE POLICY FOR A GLUELESS AREA-EFFICIENT DIRECTORY CACHE FOR HOTLY CONTESTED CACHE LINES - Methods and apparatus relating to allocation and/or write policy for a glueless area-efficient directory cache for hotly contested cache lines are described. In one embodiment, a directory cache stores data corresponding to a caching status of a cache line. The caching status of the cache line is stored for each of a plurality of caching agents in the system. A write-on-allocate policy is used for the directory cache by using a special state (e.g., snoop-all state) that indicates one or more snoops are to be broadcasted to all agents in the system. Other embodiments are also disclosed. | 2013-07-18 |
20130185523 | DECOUPLED METHOD FOR TRACKING INFORMATION FLOW AND COMPUTER SYSTEM THEREOF - A computer system and a method for tracking information flow are provided. The computer system divides an information flow tracking task into two decoupled tasks executed by two procedures. The first procedure emulates execution of instructions and divides the instructions into code blocks according to an instruction executing sequence. The first procedure translates the instructions of the code blocks into information flow codes and transmits them to the second procedure. The first procedure further translates the instructions into dynamic emulation instructions and executes the dynamic emulation instructions to generate addressing results of the dynamic addressing instructions. The second procedure executes the information flow codes according to the addressing results to emulate the instructions of the code blocks. Moreover, the method also tries to reduce the amount of data transmission between the two procedures when the first procedure executes the emulation task. Therefore, the efficiency of tracking information flow is enhanced. | 2013-07-18 |
20130185524 | METHOD AND DEVICE FOR DETECTING A RACE CONDITION - A method for detecting a race condition, comprising storing a seed value to a first global variable D; and detecting a race condition when a second global variable A does not equal a first predefined value V | 2013-07-18 |
20130185525 | SEMICONDUCTOR CHIP AND METHOD OF CONTROLLING MEMORY - Disclosed herein are a semiconductor chip for adaptively processing a plurality of commands to request memory access, and a method of controlling memory. The semiconductor chip includes a storage unit and a control unit. The storage unit stores a memory access request to be currently processed and a plurality of memory access requests received before the memory access request to be currently processed, in received order. The control unit processes the memory access request to be currently processed and the plurality of memory access requests received before it, which have been stored in the storage unit, in received order, except that memory access requests attempting to access the same bank and the same row are successively processed. | 2013-07-18 |
20130185526 | SYSTEM FOR INCREASING UTILIZATION OF STORAGE MEDIA - A storage system creates an abstraction of flash Solid State Device (SSD) media allowing random write operations of arbitrary size by a user while performing large sequential write operations of a uniform size to an SSD array. This reduces the number of random write operations performed in the SSD array and as a result increases performance of the SSD array. A control element determines when blocks from different buffers should be combined together or discarded based on fragmentation and read activity. This optimization scheme increases memory capacity and improves memory utilization and performance. | 2013-07-18 |
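The abstraction in 20130185526 resembles a log-structured writer: random user writes accumulate in a buffer and are flushed to the SSD as one large sequential segment of uniform size, with a mapping table recording where each logical address landed. The sketch below is a simplified illustration under that reading; the class, segment model, and names are assumptions, not details from the application:

```python
class SequentialWriter:
    """Buffer random writes, flush them as one uniform sequential segment."""

    def __init__(self, ssd_append, segment_size):
        self.append = ssd_append       # ssd_append(bytes) -> segment id
        self.segment_size = segment_size
        self.pending = []              # (logical_addr, data) pairs
        self.filled = 0
        self.map = {}                  # logical addr -> (segment, offset)

    def write(self, addr, data):
        """Accept a random write of arbitrary size from the user."""
        self.pending.append((addr, data))
        self.filled += len(data)
        if self.filled >= self.segment_size:
            self.flush()

    def flush(self):
        """Emit one large sequential write and update the address map."""
        if not self.pending:
            return
        seg_id = self.append(b"".join(d for _, d in self.pending))
        offset = 0
        for addr, d in self.pending:
            self.map[addr] = (seg_id, offset)
            offset += len(d)
        self.pending, self.filled = [], 0
```

Reads would consult `map` to locate data inside segments; the application's control element additionally combines or discards blocks across buffers based on fragmentation and read activity, which this sketch omits.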