Patent application number | Description | Published |
20080298309 | PROVIDING ADVANCED COMMUNICATIONS FEATURES - Advanced communications features are provided in a mobile communications network having at least one mobile switching center and at least one mobile station subsystem. The mobile switching center and mobile station subsystem each communicate signaling messages according to a mobile signaling protocol. | 12-04-2008 |
20090086742 | PROVIDING VIRTUAL SERVICES WITH AN ENTERPRISE ACCESS GATEWAY - Systems and methods to virtually and securely extend voice, data, and video services, as well as applications, on communication networks are provided. An access gateway device is used to provide interworking and extension of services from an enterprise network or a hosted enterprise network to a public network such as an IP Multimedia Subsystem (IMS) network. The access gateway device can also enable handoffs between an enterprise access point and the service provider's radio network while maintaining the user's session. The access gateway can also extend services from the enterprise network to the service provider's network and vice versa. | 04-02-2009 |
20090141625 | SYSTEM AND METHOD FOR REDUCING LATENCY IN CALL SETUP AND TEARDOWN - Systems and methods for reducing latency in call setup and teardown are provided. A network device with integrated functionalities and a cache is provided that stores policy information to reduce the amount of signaling that is necessary to setup and teardown sessions. By handling various aspects of the setup and teardown within a network device, latency is reduced and the amount of bandwidth needed for setup signaling is also reduced. | 06-04-2009 |
20090156213 | INTERWORKING GATEWAY FOR MOBILE NODES - Systems and methods are provided that allow inter-working between communication networks for the delivery of service to mobile nodes. A gateway is provided that communicates with a femto cell to extend service to an area that otherwise does not receive coverage from a service provider. The femto cell is a small-scale base station used to provide coverage over a small area (such as a home or business) and to connect to a home or enterprise network. The femto cell provides service for a mobile node, and a gateway permits communication over a broadband network. The gateway integrates the mobile nodes connecting via a femto cell into the service provider's network. The gateway also allows provisioning of services and applications, control of service levels, and provides seamless handoffs to macro base stations and other types of access technologies such as Wi-Fi. | 06-18-2009 |
20100279670 | TRANSFERRING SESSIONS IN A COMMUNICATIONS NETWORK - Systems and methods for transferring a session or components of a session in a communication network are provided. The components of the session include media flows and control over the media flows. The user can initiate a transfer of an existing session with a mobile device such as user equipment (UE) to one or more devices, which may lead to fan-out or fan-in to multiple devices. This can include separating the delivery of media from the control of the delivery. For example, a UE can be designated a controller to control another UE, such as a television (TV). In providing the capability to transfer these sessions, a gateway is used to implement network functions that allow the streaming to be controlled and delivered to the respective UEs. In some embodiments, the gateway can be flexible and its operation modified according to messages it receives. | 11-04-2010 |
20100285797 | INTERWORKING FUNCTION FOR COMMUNICATION NETWORKS - Systems and methods for providing voice communications and data communications are provided, including: receiving an attach message from a mobile node indicating the mobile node can fallback from a first radio access technology to a second radio access technology and including location information identifying a location of the mobile node, the attach message received via the first radio access technology; sending a translated location updated message to a remote switching device; receiving a service request message from the mobile node, the service request message requesting initiation of a voice call, the service request message received via the first radio access technology; and based on the service request message and the indication that the mobile node can fallback to the second radio access technology, setting up the voice call with the mobile node via the second radio access technology. | 11-11-2010 |
20100291897 | SYSTEM AND METHOD FOR FEMTO COVERAGE IN A WIRELESS NETWORK - Systems and methods are disclosed that provide femto-based wireless coverage in a communication network. This can involve providing an interworking function that communicates with a femto base station or femto cell to provide connectivity to the core network. The interworking function can provide service and mobility management where a femto cell (such as a home node B (HNB)) is served concurrently by an IMS core and a legacy core. The interworking function can also provide service through a femto cell to a variety of mobile nodes such as legacy devices and IMS-capable devices. The interworking function also provides the ability for handoffs to occur between the core networks and between a femto cell and a macro cell. | 11-18-2010 |
20120044908 | INTERWORKING GATEWAY FOR MOBILE NODES - Systems and methods are provided that allow inter-working between communication networks for the delivery of service to mobile nodes. A gateway is provided that communicates with a femto cell to extend service to an area that otherwise does not receive coverage from a service provider. The femto cell is a small-scale base station used to provide coverage over a small area (such as a home or business) and to connect to a home or enterprise network. The femto cell provides service for a mobile node, and a gateway permits communication over a broadband network. The gateway integrates the mobile nodes connecting via a femto cell into the service provider's network. The gateway also allows provisioning of services and applications, control of service levels, and provides seamless handoffs to macro base stations and other types of access technologies such as Wi-Fi. | 02-23-2012 |
20120195196 | SYSTEM AND METHOD FOR QoS CONTROL OF IP FLOWS IN MOBILE NETWORKS - Systems and methods for application control that enable the delivery of rich Internet applications, such as HD video streaming, gaming, and web services, over a mobile operator's PLMN are disclosed. The methods define application program interfaces that allow application service providers to deliver rich applications, such as Netflix video service and interactive network gaming, over a wireless mobile network using state-of-the-art web protocols such as HTTP and RTMP. The platform that incorporates these methods interacts with 3GPP/UMTS/LTE/CDMA standard-compliant network devices using the standard network interfaces and presents application-specific control functions. It further identifies extensions to the logical interfaces defined by the corresponding standards (3GPP, 3GPP2, etc.). Additionally, methods and procedures for controlling QoS in the transit network gateways, such as the SGSN, GGSN, or P-GW, while delivering application traffic, are also disclosed. | 08-02-2012 |
20120244861 | PROVIDING LOCATION BASED SERVICES FOR MOBILE DEVICES - Systems and methods are provided that allow the delivery of location based services within a communication network. The location information can be retrieved using information from the mobile node when the mobile node registers in the network. The location information can then be cached or stored in one or more places in the communication network and correlated with the mobile node's addressing information. If a request for location based services is received without location based information, the gateway can use location based information regarding the mobile node to provide location based services. The gateway can enable non-IMS mobile nodes to obtain IMS location based services, or incompatible mobile nodes to obtain location based services. | 09-27-2012 |
20130021933 | RAN Analytics, Control And Tuning Via Multi-Protocol, Multi-Domain, And Multi-RAT Analysis - The present invention identifies methods and procedures for correlating control plane and user plane data, consolidating and abstracting the learned and correlated data in a form convenient for minimizing and exporting to other network devices, such as those in the core network and the access network, or the origin server, CDN devices, or client device. These correlation methods may use control plane information from a plurality of interfaces in the RAN, and user plane information from other interfaces in the RAN or CN. If the device is deployed as an inline proxy, this information may be exported using in-band communication, such as HTTP extension headers in HTTP request or response packets, or another protocol header, such as the IP or GTP-U header field. Alternatively, this information can be exported out-of-band using a separate protocol between the RAN Transit Network Device (RTND) and the receiving device. | 01-24-2013 |
20130143542 | Content And RAN Aware Network Selection In Multiple Wireless Access And Small-Cell Overlay Wireless Access Networks - Methods for steering the access technology selection by a mobile device in an overlay small-cell and macro network, such as UMTS, LTE, CDMA, or Wi-Fi, are disclosed. This selection determination is based on the observed, real-time correlated and estimated network congestion, content awareness, application/service expectations, and other criteria. Methods and procedures are identified to influence network selection, or to control currently selected networks, by propagating real-time correlated and consolidated information on a plurality of radio access technologies to access points, or by modifying the list of alternative radio access technologies available at a location using standards-defined mechanisms and parameters. Additionally, methods are identified for steering content access and delivery through alternative access technologies, based on anticipated network usage from the user's service activation, and on knowledge of the type, state, and resource usage of a plurality of access networks when a mobile device connects to multiple access technologies through in-band or out-of-band mechanisms. | 06-06-2013 |
20130308438 | HIGHLY SCALABLE MODULAR SYSTEM WITH HIGH RELIABILITY AND LOW LATENCY - A computing system for processing network traffic includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on resource information of the plurality of the processing blades, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to receive and maintain the resource information from the plurality of the processing blades and further configured to update the router with the resource information of the plurality of the processing blades. | 11-21-2013 |
20130308439 | HIGHLY SCALABLE MODULAR SYSTEM WITH HIGH RELIABILITY AND LOW LATENCY - A computing system for processing network traffic includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on resource information of the plurality of the processing blades, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to receive and maintain the resource information from the plurality of the processing blades and further configured to update the router with the resource information of the plurality of the processing blades. | 11-21-2013 |
20130308459 | HIGHLY SCALABLE MODULAR SYSTEM WITH HIGH RELIABILITY AND LOW LATENCY - A computing system for processing network traffic includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on resource information of the plurality of the processing blades, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to receive and maintain the resource information from the plurality of the processing blades and further configured to update the router with the resource information of the plurality of the processing blades. | 11-21-2013 |
20140052860 | IP ADDRESS ALLOCATION - Systems and methods are described for IP Address allocation. A computerized method includes receiving at a wireless access gateway a request from a subscriber to connect to a network, allocating a first IP address to the subscriber from a first pool of IP addresses at the wireless access gateway, and assigning a second IP address to the subscriber from a second pool of IP addresses at the wireless access gateway when the subscriber requests a network service. | 02-20-2014 |
20140098762 | APPLICATION AND CONTENT AWARENESS FOR SELF OPTIMIZING NETWORKS - Systems and methods are described for providing application and content awareness for self-optimizing networks. A computerized method includes receiving at a mobile gateway a session request from a mobile device, establishing a session between the mobile device and the mobile gateway, receiving a request from the mobile device at the mobile gateway to access a remote resource, establishing a connection between the mobile device and the remote resource via the mobile gateway, detecting application and content information of a service data flow of the connection, and sending the application and content information of the service data flow to a network server for network optimization. | 04-10-2014 |
20140136660 | EXTENDING MULTICAST/BROADCAST SERVICES TO WIDE AREA NETWORKS - Systems and methods are described for extending multicast/broadcast service to wide area networks. A computerized method includes receiving a multicast/broadcast discovery message from a client, encapsulating the multicast/broadcast discovery message at a gateway, forwarding the encapsulated multicast/broadcast discovery message to a multicast/broadcast server, receiving a multicast/broadcast discovery response message from the multicast/broadcast server with a server IP address, generating a server alias IP address for the multicast/broadcast server at the gateway, replacing the server IP address with the server alias IP address in the multicast/broadcast discovery response message, encapsulating the multicast/broadcast discovery response message at the gateway, and forwarding the encapsulated multicast/broadcast discovery response message to the client. | 05-15-2014 |
20140172947 | CLOUD-BASED VIRTUAL LOCAL NETWORKS - Systems and methods are described for providing cloud-based virtual local networks. A computerized method for providing cloud-based virtual local networks includes receiving at a network gateway a request for a network address from a network switch, communicating with a user device management entity (uDME) server to authorize the network switch, receiving an authorization response from the uDME server for the network switch, receiving a network address pool at the network gateway from the uDME server, and creating at the network gateway a virtual home router containing a virtual home router context that is unique to the virtual home router and associated with the network address pool. | 06-19-2014 |
20140287760 | APPARATUS, SYSTEMS, AND METHODS FOR PROVIDING INTERWORKING GATEWAY - Systems and methods are provided that allow inter-working between communication networks for the delivery of service to mobile nodes. A gateway is provided that communicates with a femto cell to extend service to an area that otherwise does not receive coverage from a service provider. The femto cell is a small-scale base station used to provide coverage over a small area (such as a home or business) and to connect to a home or enterprise network. The femto cell provides service for a mobile node, and a gateway permits communication over a broadband network. The gateway integrates the mobile nodes connecting via a femto cell into the service provider's network. The gateway also allows provisioning of services and applications, control of service levels, and provides seamless handoffs to macro base stations and other types of access technologies such as Wi-Fi. | 09-25-2014 |
20140344449 | IP ADDRESS ALLOCATION FOR WI-FI CLIENTS - Computerized systems and computerized methods are provided for internet protocol (IP) address allocation for Wi-Fi clients in a manner that avoids assigning a public IP address to a device if the device is not first activated to use services provided by the network. A private IP network address is allocated to a device, wherein the private IP network address is only valid for a predetermined period, and only allows the device to activate itself with the network instead of providing the device full access to the network. The device is monitored during the predetermined period so that if the device is activated to use the network during the predetermined period, the computing device assigns a public IP address to the device so that the device can access a full set of services provided by the network. | 11-20-2014 |
20140369354 | SCALABILITY OF PROVIDING PACKET FLOW MANAGEMENT - Systems and methods for managing packet flows in a communication network are provided. Packet information can be cached on different levels and used to avoid external queries. The cache information can also be correlated with other types of information, such as location information, to be able to serve that information quicker than if one or more external queries were to be made. A demux manager can provide routing and session setup, by routing packets that already have a session to the session manager and assigning packets to a session manager if they are not already assigned to a session. The tiered architecture also provides scalability to many users and minimizes delays even during high call volumes because the load can be distributed well across the gateway's resources. | 12-18-2014 |
20150049714 | CENTRALLY MANAGED WI-FI - Described herein are techniques for providing centrally managed Wi-Fi using internet protocol (IP) connections between a central Wi-Fi access gateway and one or more radio nodes. The Wi-Fi access gateway establishes an IP connection with a radio node across a wide area network, wherein the radio node is configured to wirelessly connect to one or more Wi-Fi devices located near the radio node. The Wi-Fi access gateway receives Layer 2 traffic over the IP connection, wherein the Layer 2 traffic is associated with a Wi-Fi device from the one or more Wi-Fi devices connected to the radio node. The Wi-Fi access gateway controls one or more Wi-Fi services for the Wi-Fi device based on the Layer 2 traffic so that the Wi-Fi access gateway can provide centrally managed Wi-Fi for the Wi-Fi device. | 02-19-2015 |
20150117409 | COMBINATION CELLULAR AND WI-FI HARDWARE DEVICE - A combination cellular and Wi-Fi hardware device has an IP interface that is configured to communicate with a first cellular network, a Wi-Fi network, or both. The combination cellular and Wi-Fi hardware device is configured to provide the network functionality (e.g., virtualized network cloud) that is required to facilitate communication and data flow between the IP interface and either a second cellular network or a cloud computing infrastructure network. The IP interface is accessed using a single IP address by a mobile device in wireless communication with the combination cellular and Wi-Fi hardware device. The single IP address is maintained when the mobile device switches between using the first cellular network and the Wi-Fi network to communicate with the combination cellular and Wi-Fi hardware device, and vice versa. | 04-30-2015 |
20150120890 | SYSTEM AND METHOD FOR CONFIGURING A UNIVERSAL DEVICE TO PROVIDE DESIRED NETWORK HARDWARE FUNCTIONALITY - A method and system for automatically configuring and transforming a universal device into a feature specific device to provide desired network hardware functionality for a plurality of different conventionally single purpose devices. The universal device receives data configured for handling by a network hardware device capable of providing desired network hardware functionality. Based on the data received, the universal device identifies the desired network hardware functionality from network hardware functionalities. The universal device selects one or more virtual ports capable of providing the desired network hardware functionality. The universal device automatically configures and transforms itself into a feature specific device to provide the desired network hardware functionality by implementing the selected one or more virtual ports to handle the data received. | 04-30-2015 |
20150237003 | COMPUTERIZED TECHNIQUES FOR NETWORK ADDRESS ASSIGNMENT - Computer-implemented systems, methods, and computer-readable media are provided for assigning an IP address to a client device through an authentication process without needing to receive a dynamic host configuration protocol (DHCP) discover message to trigger the authentication process. In accordance with some embodiments, a message requesting assignment of an IP address to the client device is received, and a determination is made that identification information for the client device is not stored in a storage device. A request for authentication of the client device is then sent in response to the determination. An indication that the server authenticated the client device is received in response to the request, and the network address is assigned to the client device in response to the indication. | 08-20-2015 |
20150237519 | CLOUD CONTROLLER FOR SELF-OPTIMIZED NETWORKS - A management system implemented in a cloud computing environment for automatically managing a plurality of Wi-Fi access points in a network can receive information from each of the plurality of Wi-Fi access points. The system can analyze the received information from each Wi-Fi access point to determine at least one operation condition of at least one Wi-Fi access point, and determine at least one new operation setting for the at least one Wi-Fi access point based on the analyzed information. The system can remotely configure the at least one Wi-Fi access point based on the at least one new operation setting. | 08-20-2015 |
20150237667 | SYSTEM AND METHOD OF PROVIDING ADVANCED SERVICES IN A VIRTUAL CPE DEPLOYMENT - Described herein are techniques for providing a virtual Wi-Fi service using internet protocol (IP) connections between a central Wi-Fi access gateway (WAG) and one or more radio nodes. The WAG establishes an IP connection with a first radio node across a network, where the first radio node is configured to connect to one or more Wi-Fi devices located near the first radio node. The WAG receives network traffic over the IP connection, where the network traffic is associated with a Wi-Fi device from the one or more Wi-Fi devices connected to the first radio node. The WAG provides a virtual Wi-Fi service through the network to the Wi-Fi device based on the network traffic such that the Wi-Fi device connects to the virtual Wi-Fi service as if the virtual Wi-Fi service is a physical device locally connected to the first radio node. | 08-20-2015 |
20150304275 | CARRIER GRADE NAT - Described herein are techniques for providing carrier grade dynamic network address translation (NAT). The disclosed techniques allow for dynamic switching from regular NAT to network address ports translation (NAPT) based on system load. Under the NAPT mode, the disclosed techniques allow the ports of a public IP address to be broken up into contiguous blocks of ports (e.g., of the same size and/or of varying size) such that each block can be assigned to an associated (e.g., different) private IP address. For each new connection from the private IP address, if the port used is the next port sequentially, the NAT device can store an offset from the starting public/private IP address ports. If the port is not the next port sequentially, the network address translation device can associate a new block of public ports to the private IP address. | 10-22-2015 |
20150319092 | CONTENT AWARE WI-FI QoS - Described herein are techniques for providing content aware quality of service (QoS) metadata for a Wi-Fi connection to nodes in a network by incorporating the QoS metadata into a packet header so that the nodes in the network can access the QoS metadata. The Wi-Fi access gateway receives a data packet for an internet protocol (IP) connection with a radio node across the network, wherein the radio node is configured to connect to one or more Wi-Fi devices located near the radio node. The Wi-Fi access gateway classifies underlying data content of the IP connection to determine QoS metadata for the IP connection based on the underlying data content. The Wi-Fi access gateway incorporates the QoS metadata into a packet header of the data packet so that nodes in the network can access the QoS metadata for the IP connection. | 11-05-2015 |
20150327052 | Techniques for Managing Network Access - Computer-implemented systems, methods, and computer-readable media are provided for managing access of a wireless device to a network based on one or more policies. In accordance with some embodiments, a request for authorization to associate a wireless device with a network is received. A policy associated with the network is retrieved from a storage device in response to the request. A determination is then made as to whether information included in the request satisfies a condition in the policy. If the information satisfies the condition, a response is transmitted granting authorization to associate the wireless device with the network. If the information does not satisfy the condition, a response is transmitted denying authorization to associate the wireless device with the network. | 11-12-2015 |
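The carrier grade NAT abstract above (20150304275) describes a concrete allocation scheme: a public IP address's port space is split into contiguous blocks, each block is bound to a private IP address, sequential connections reuse the current block as an offset, and a new block is associated only when the current one is exhausted. A minimal sketch of that bookkeeping follows; the class and method names (`PortBlockNAT`, `allocate_port`) and the fixed block size are illustrative assumptions, not taken from the filing.

```python
class PortBlockNAT:
    """Sketch of port-block NAPT: maps private IPs to contiguous
    blocks of ports on a single public IP, per application 20150304275.
    All names and defaults here are hypothetical."""

    def __init__(self, public_ip, port_start=1024, port_end=65535, block_size=64):
        self.public_ip = public_ip
        self.block_size = block_size
        # Starting ports of free contiguous blocks: 1024, 1088, 1152, ...
        self.free_blocks = list(range(port_start, port_end - block_size + 2, block_size))
        # private_ip -> list of [block_start, next_offset] entries
        self.blocks = {}

    def allocate_port(self, private_ip):
        """Return a (public_ip, public_port) binding for a new connection."""
        for entry in self.blocks.setdefault(private_ip, []):
            block_start, offset = entry
            if offset < self.block_size:
                # Next port sequentially: just advance the stored offset.
                entry[1] = offset + 1
                return (self.public_ip, block_start + offset)
        # Current blocks exhausted: associate a new contiguous block
        # of public ports with this private IP.
        if not self.free_blocks:
            raise RuntimeError("public port space exhausted")
        block_start = self.free_blocks.pop(0)
        self.blocks[private_ip].append([block_start, 1])
        return (self.public_ip, block_start)
```

One practical appeal of this design, which the abstract hints at, is logging volume: the translator only needs to record one entry per block per subscriber rather than one per connection.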
Patent application number | Description | Published |
20080294950 | DOUBLE DRAM BIT STEERING FOR MULTIPLE ERROR CORRECTIONS - A method and system is presented for correcting a data error in a primary Dynamic Random Access Memory (DRAM) in a Dual In-line Memory Module (DIMM). Each DRAM has a left half (for storing bits | 11-27-2008 |
20100268883 | Information Handling System with Immediate Scheduling of Load Operations and Fine-Grained Access to Cache Memory - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption. The control logic determines the size requirement of each load operation or store operation. When the cache memory system performs a store operation or load operation, the memory system accesses the portion of a cache line it needs to perform the operation instead of accessing an entire cache line. | 10-21-2010 |
20100268887 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH DUAL DISPATCH INTO WRITE/READ DATA FLOW - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides dual dispatch points into the data flow to the dual cache banks of the L2 cache memory. | 10-21-2010 |
20100268890 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH SINGLE DISPATCH INTO WRITE/READ DATA FLOW - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides a single dispatch point into the data flow to the dual cache banks of the L2 cache memory. | 10-21-2010 |
20100268895 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption. | 10-21-2010 |
20130262769 | DATA CACHE BLOCK DEALLOCATE REQUESTS - A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made if the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss. | 10-03-2013 |
20130262770 | DATA CACHE BLOCK DEALLOCATE REQUESTS IN A MULTI-LEVEL CACHE HIERARCHY - In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core. | 10-03-2013 |
20130262777 | DATA CACHE BLOCK DEALLOCATE REQUESTS - A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made if the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss. | 10-03-2013 |
20130262778 | DATA CACHE BLOCK DEALLOCATE REQUESTS IN A MULTI-LEVEL CACHE HIERARCHY - In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core. | 10-03-2013 |
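The replacement-order update that the four deallocate abstracts describe can be modeled with an ordered congruence class: on a deallocation request that hits, the line's data is retained but the line is demoted to the least-recently-used position, so the next miss in that congruence class evicts it first. A minimal sketch, with all names assumed:

```python
from collections import OrderedDict

class CongruenceClass:
    """One congruence class with LRU replacement; oldest entry is the victim."""

    def __init__(self, ways=4):
        self.ways = ways
        self.lines = OrderedDict()   # LRU first, MRU last

    def access(self, tag):
        if tag in self.lines:
            self.lines.move_to_end(tag)          # hit: promote to MRU
        else:
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)   # miss: evict LRU victim
            self.lines[tag] = True

    def deallocate(self, tag):
        # Hit: retain the line in the data array, but demote it to LRU so a
        # subsequent miss in this congruence class victimizes it first.
        if tag in self.lines:
            self.lines.move_to_end(tag, last=False)
```

Note the contrast with an invalidating flush: the line stays resident and can still hit until the natural eviction occurs.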
20140164710 | VIRTUAL MACHINES FAILOVER - Disclosed is a computer system ( | 06-12-2014 |
20140165056 | VIRTUAL MACHINE FAILOVER - Disclosed is a computer system ( | 06-12-2014 |
20150052311 | MANAGEMENT OF TRANSACTIONAL MEMORY ACCESS REQUESTS BY A CACHE MEMORY - In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request. | 02-19-2015 |
20150052312 | PROTECTING THE FOOTPRINT OF MEMORY TRANSACTIONS FROM VICTIMIZATION - A processing unit includes a processor core and a cache memory. Entries in the cache memory are grouped in multiple congruence classes. The cache memory includes tracking logic that tracks a transaction footprint including cache line(s) accessed by transactional memory access request(s) of a memory transaction. The cache memory, responsive to receiving a memory access request that specifies a target cache line having a target address that maps to a congruence class, forms a working set of ways in the congruence class containing cache line(s) within the transaction footprint and updates a replacement order of the cache lines in the congruence class. Based on membership of the at least one cache line in the working set, the update promotes at least one cache line that is not the target cache line to a replacement order position in which the at least one cache line is less likely to be replaced. | 02-19-2015 |
20150052313 | PROTECTING THE FOOTPRINT OF MEMORY TRANSACTIONS FROM VICTIMIZATION - A processing unit includes a processor core and a cache memory. Entries in the cache memory are grouped in multiple congruence classes. The cache memory includes tracking logic that tracks a transaction footprint including cache line(s) accessed by transactional memory access request(s) of a memory transaction. The cache memory, responsive to receiving a memory access request that specifies a target cache line having a target address that maps to a congruence class, forms a working set of ways in the congruence class containing cache line(s) within the transaction footprint and updates a replacement order of the cache lines in the congruence class. Based on membership of the at least one cache line in the working set, the update promotes at least one cache line that is not the target cache line to a replacement order position in which the at least one cache line is less likely to be replaced. | 02-19-2015 |
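The footprint-protection mechanism in the two abstracts above can be sketched as a replacement order in which lines belonging to the transaction footprint (the "working set" of ways) are kept toward the most-recently-used end, so a subsequent miss victimizes a non-footprint line instead. All names below are illustrative assumptions:

```python
class TxCongruenceClass:
    """Congruence class whose replacement order protects transactional lines."""

    def __init__(self, ways=4):
        self.order = []          # index 0 = next victim (LRU), last = MRU
        self.footprint = set()   # tags touched by the memory transaction

    def tx_access(self, tag):
        self.footprint.add(tag)  # tracking logic records the footprint
        self.access(tag)

    def access(self, tag):
        if tag in self.order:
            self.order.remove(tag)
        self.order.append(tag)   # target becomes MRU
        # Promote footprint members past non-footprint lines, so the
        # transaction's working set is least likely to be replaced.
        protected = [t for t in self.order if t in self.footprint]
        others = [t for t in self.order if t not in self.footprint]
        self.order = others + protected

    def victim(self):
        return self.order[0]
```

With this policy a miss in the class evicts a non-footprint line whenever one exists, which is the victimization protection the abstracts claim.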
20150052315 | MANAGEMENT OF TRANSACTIONAL MEMORY ACCESS REQUESTS BY A CACHE MEMORY - In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request. | 02-19-2015 |
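The dispatch gating described in the two transactional-request abstracts above reduces to a simple rule: a request from a transaction already known to be failing is neither dispatched for service nor allowed to disturb replacement-order state. A rough sketch with hypothetical names:

```python
class DispatchLogic:
    """Evaluates transactional requests; failing transactions are gated."""

    def __init__(self):
        self.failed_transactions = set()
        self.lru_updates = 0
        self.dispatched = []

    def fail(self, tx_id):
        self.failed_transactions.add(tx_id)

    def evaluate(self, tx_id, addr):
        if tx_id in self.failed_transactions:
            return False                 # refrain: no dispatch, no LRU update
        self.dispatched.append(addr)     # normal path: dispatch for service
        self.lru_updates += 1            # ...and touch replacement state
        return True
```

The point of skipping the replacement-order update is that a doomed transaction's accesses should not pollute the cache's LRU state.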
20150127906 | Techniques for Logging Addresses of High-Availability Data - A technique for operating a high-availability (HA) data processing system includes, in response to receiving an HA logout indication at a cache, initiating a walk of the cache to locate cache lines in the cache that include HA data. In response to determining that a cache line includes HA data, an address of the cache line is logged in a first portion of a buffer in the cache. In response to the first portion of the buffer reaching a determined fill level, contents of the first portion of the buffer are logged to another memory. In response to all cache lines in the cache being walked, the cache walk is terminated. | 05-07-2015 |
20150127908 | Logging Addresses of High-Availability Data Via a Non-Blocking Channel - A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel. | 05-07-2015 |
20150127909 | Logging Addresses of High-Availability Data - A technique for operating a high-availability (HA) data processing system includes, in response to receiving an HA logout indication at a cache, initiating a walk of the cache to locate cache lines in the cache that include HA data. In response to determining that a cache line includes HA data, an address of the cache line is logged in a first portion of a buffer in the cache. In response to the first portion of the buffer reaching a determined fill level, contents of the first portion of the buffer are logged to another memory. In response to all cache lines in the cache being walked, the cache walk is terminated. | 05-07-2015 |
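The cache-walk logging in the two abstracts above follows a buffer-and-spill pattern: walk every line, append the addresses of HA lines to an on-cache buffer, and spill the buffer to another memory whenever it reaches the fill threshold, with a final flush when the walk terminates. An illustrative sketch (function and parameter names assumed):

```python
def ha_cache_walk(cache_lines, buffer_capacity, log_memory):
    """Walk the cache and log addresses of HA lines.

    cache_lines: iterable of (address, is_ha) pairs.
    buffer_capacity: the determined fill level that triggers a spill.
    log_memory: list standing in for the other memory receiving the log.
    """
    buffer = []
    for address, is_ha in cache_lines:
        if is_ha:
            buffer.append(address)
            if len(buffer) >= buffer_capacity:   # fill level reached
                log_memory.extend(buffer)        # spill buffer contents
                buffer.clear()
    log_memory.extend(buffer)                    # final partial flush
```

Buffering amortizes the cost of writes to the other memory instead of logging each HA address individually.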
20150127910 | Techniques for Logging Addresses of High-Availability Data Via a Non-Blocking Channel - A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel. | 05-07-2015 |
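The victim-path logging in the two abstracts above can be sketched with two independent channels: when an unlogged HA line is victimized, its address goes to the dirty address table (DAT) over one channel while the line itself is cast out over a second channel. The structure and field names below are assumptions for illustration; `queue.Queue` merely stands in for the non-blocking channels:

```python
from queue import Queue

def victimize(line, dat_channel: Queue, castout_channel: Queue):
    """Victimize a cache line, logging its address first if it is unlogged HA data."""
    if line["ha"] and not line["logged"]:
        dat_channel.put(line["addr"])   # first channel: address to the DAT
        line["logged"] = True
    castout_channel.put(line)           # second channel: line data to memory
```

Using separate non-blocking channels means the address logging never stalls the castout path, and vice versa.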
20150161053 | BYPASSING A STORE-CONDITIONAL REQUEST AROUND A STORE QUEUE - In response to receipt of a store-conditional (STCX) request of a processor core, the STCX request is buffered in an entry of a store queue for eventual service by a read-claim (RC) machine by reference to a cache array, and the STCX request is concurrently transmitted via a bypass path bypassing the store queue. In response to dispatch logic dispatching the STCX request transmitted via the bypass path to the RC machine for service by reference to the cache array, the entry of the STCX request in the store queue is updated to prohibit selection of the STCX request in the store queue for service. In response to the STCX request transmitted via the bypass path not being dispatched by the dispatch logic, the STCX is thereafter transmitted from the store queue to the dispatch logic and dispatched to the RC machine for service by reference to the cache array. | 06-11-2015 |
20150161054 | BYPASSING A STORE-CONDITIONAL REQUEST AROUND A STORE QUEUE - In response to receipt of a store-conditional (STCX) request of a processor core, the STCX request is buffered in an entry of a store queue for eventual service by a read-claim (RC) machine by reference to a cache array, and the STCX request is concurrently transmitted via a bypass path bypassing the store queue. In response to dispatch logic dispatching the STCX request transmitted via the bypass path to the RC machine for service by reference to the cache array, the entry of the STCX request in the store queue is updated to prohibit selection of the STCX request in the store queue for service. In response to the STCX request transmitted via the bypass path not being dispatched by the dispatch logic, the STCX is thereafter transmitted from the store queue to the dispatch logic and dispatched to the RC machine for service by reference to the cache array. | 06-11-2015 |
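The bypass described in the two STCX abstracts above can be modeled as a race with a tie-breaker: the request is buffered in the store queue and simultaneously offered on a bypass path; if dispatch accepts the bypass copy, the queued entry is marked so it can never be selected again, preventing double service. A sketch with illustrative names:

```python
class StoreQueue:
    """Store queue with a bypass path for store-conditional (STCX) requests."""

    def __init__(self, rc_free):
        self.entries = []        # buffered requests awaiting queue-path service
        self.rc_free = rc_free   # stands in for dispatch accepting the bypass
        self.serviced = []       # requests dispatched to the RC machine

    def receive_stcx(self, req):
        entry = {"req": req, "selectable": True}
        self.entries.append(entry)           # always buffered in the queue
        if self.rc_free:                     # bypass path dispatched first
            self.serviced.append(req)
            entry["selectable"] = False      # prohibit reselection from queue
            return True
        return False                         # will drain via the queue path

    def drain(self):
        for e in self.entries:
            if e["selectable"]:
                self.serviced.append(e["req"])   # queue path to RC machine
                e["selectable"] = False
```

The payoff is latency: when the RC machine is free, the STCX skips the store-queue drain entirely, yet the queued copy remains as the fallback when the bypass dispatch fails.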