Patent application number | Description | Published |
20120287785 | DATA TRAFFIC HANDLING IN A DISTRIBUTED FABRIC PROTOCOL (DFP) SWITCHING NETWORK ARCHITECTURE - A switching network includes an upper tier having a master switch and a lower tier including a plurality of lower tier entities. The master switch, which has a plurality of ports each coupled to a respective lower tier entity, implements on each of the ports a plurality of virtual ports each corresponding to a respective one of a plurality of remote physical interfaces (RPIs) at the lower tier entity coupled to that port. Data traffic communicated between the master switch and RPIs is queued within virtual ports that correspond to the RPIs with which the data traffic is communicated. The master switch applies data handling to the data traffic in accordance with a control policy based at least upon the virtual port in which the data traffic is queued, such that the master switch applies different policies to data traffic queued to two virtual ports on the same port of the master switch. | 11-15-2012 |
20120287936 | EFFICIENT SOFTWARE-BASED PRIVATE VLAN SOLUTION FOR DISTRIBUTED VIRTUAL SWITCHES - Packet processing logic of a host system's virtualization manager detects packets on the ingress or the egress path to/from a virtual port having three bitmap arrays for processing packets within a virtual local area network (VLAN). The logic checks the VLAN identifier (VID) of the packet to determine, based on an offset position within the corresponding bitmap array, whether the port supports the VLAN. Both the ingress array offset position and egress array offset positions correspond to the value of the VID, and are set within the specific bitmap array during configuration of the VLAN on the port. When the VLAN is supported by the port, the logic enables the packet to be processed by the port. Otherwise, the logic discards the packet. A strip bitmap array indicates when a packet's VID should be removed prior to forwarding the packet on the egress of a port (or destination port). | 11-15-2012 |
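The per-port bitmap lookup this abstract describes (ingress array, egress array, and a strip array, each indexed by VLAN ID) can be sketched as follows. This is a minimal illustration, not the patent's implementation; all class and method names are assumptions.

```python
# Illustrative sketch of the three-bitmap VLAN check: a virtual port keeps
# ingress, egress, and strip bitmaps, each indexed by the 12-bit VLAN ID (VID).
VLAN_MAX = 4096  # 12-bit VID space

class VirtualPort:
    def __init__(self):
        self.ingress = bytearray(VLAN_MAX // 8)
        self.egress = bytearray(VLAN_MAX // 8)
        self.strip = bytearray(VLAN_MAX // 8)

    @staticmethod
    def _set(bitmap, vid):
        bitmap[vid // 8] |= 1 << (vid % 8)

    @staticmethod
    def _get(bitmap, vid):
        return bool(bitmap[vid // 8] & (1 << (vid % 8)))

    def configure_vlan(self, vid, strip_on_egress=False):
        # Configuring a VLAN on the port sets the VID's offset position in
        # both direction bitmaps, as the abstract describes.
        self._set(self.ingress, vid)
        self._set(self.egress, vid)
        if strip_on_egress:
            self._set(self.strip, vid)

    def accept_ingress(self, vid):
        return self._get(self.ingress, vid)

    def forward_egress(self, vid):
        # Returns (accept, strip_vid_before_forwarding); unsupported VIDs
        # are discarded.
        if not self._get(self.egress, vid):
            return (False, False)
        return (True, self._get(self.strip, vid))

port = VirtualPort()
port.configure_vlan(100, strip_on_egress=True)
```

The bitmap form makes the membership test a constant-time bit probe per packet, which is why the abstract frames it as an efficient software-only private-VLAN mechanism.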
20120291025 | TECHNIQUES FOR OPERATING VIRTUAL SWITCHES IN A VIRTUALIZED COMPUTING ENVIRONMENT - A technique for operating a virtual switch includes determining network connection requirements for virtual machines controlled by a virtual machine monitor. Resources available, for processing data traffic of the virtual machines, are also determined. Finally, based on the network connection requirements and the resources available, a port of a virtual switch is selected to operate as a virtual Ethernet bridge or a virtual Ethernet port aggregator. | 11-15-2012 |
20120291029 | OPERATING VIRTUAL SWITCHES IN A VIRTUALIZED COMPUTING ENVIRONMENT - A technique for operating a virtual switch includes determining network connection requirements for virtual machines controlled by a virtual machine monitor. Resources available, for processing data traffic of the virtual machines, are also determined. Finally, based on the network connection requirements and the resources available, a port of a virtual switch is selected to operate as a virtual Ethernet bridge or a virtual Ethernet port aggregator. | 11-15-2012 |
20120307684 | METHOD FOR PROVIDING LOCATION INDEPENDENT DYNAMIC PORT MIRRORING ON DISTRIBUTED VIRTUAL SWITCHES - A method for providing location independent dynamic port mirroring on distributed virtual switches is disclosed. A controller is provided to configure one or more virtual switches within a group of physical machines to appear as a set of distributed virtual switches. In response to the receipt of a data packet at a port of a physical machine, a determination is made whether or not the port has a monitor port located on the same physical machine. If it does, a copy of the data packet is sent to the monitor port of that physical machine. If the monitor port is located on a different physical machine, a copy of the data packet, along with an identification (ID) of the port and an ID of the monitor port, is encapsulated, and the encapsulated information is sent to the controller. | 12-06-2012
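The local-versus-remote branching above can be sketched as a small dispatch function. The message shape and callback names are assumptions for illustration only; the encapsulation format is a stand-in dictionary.

```python
# Hypothetical sketch of the mirroring decision: copy locally when the
# monitor port is on the same machine, otherwise encapsulate the copy with
# the port ID and monitor-port ID and hand it to the controller.
def mirror_packet(packet, port_id, mirror_map, local_ports,
                  send_local, send_to_controller):
    """mirror_map: port_id -> monitor_port_id; local_ports: ports on this machine."""
    monitor = mirror_map.get(port_id)
    if monitor is None:
        return "no-mirror"
    if monitor in local_ports:
        # Monitor port is on the same physical machine: copy directly.
        send_local(monitor, packet)
        return "local"
    # Monitor port lives on another machine: encapsulate and send upstream.
    send_to_controller({"src_port": port_id,
                        "monitor_port": monitor,
                        "payload": packet})
    return "remote"

sent = []
result_local = mirror_packet(b"pkt", 1, {1: 2, 3: 9}, {2},
                             lambda p, pkt: sent.append(("local", p)),
                             lambda enc: sent.append(("ctrl", enc["monitor_port"])))
result_remote = mirror_packet(b"pkt", 3, {1: 2, 3: 9}, {2},
                              lambda p, pkt: sent.append(("local", p)),
                              lambda enc: sent.append(("ctrl", enc["monitor_port"])))
```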
20120320749 | DATA TRAFFIC HANDLING IN A DISTRIBUTED FABRIC PROTOCOL (DFP) SWITCHING NETWORK ARCHITECTURE - A switching network includes an upper tier having a master switch and a lower tier including a plurality of lower tier entities. The master switch, which has a plurality of ports each coupled to a respective lower tier entity, implements on each of the ports a plurality of virtual ports each corresponding to a respective one of a plurality of remote physical interfaces (RPIs) at the lower tier entity coupled to that port. Data traffic communicated between the master switch and RPIs is queued within virtual ports that correspond to the RPIs with which the data traffic is communicated. The master switch applies data handling to the data traffic in accordance with a control policy based at least upon the virtual port in which the data traffic is queued, such that the master switch applies different policies to data traffic queued to two virtual ports on the same port of the master switch. | 12-20-2012 |
20130044629 | VIRTUAL NETWORK OVERLAYS AND METHODS OF FORMING THEREOF - Systems are provided for overlaying a virtual network on a physical network in a data center environment. An overlay system is arranged in an overlay virtual network to include an overlay agent and an overlay helper. The overlay agent is implemented in an access switch. The overlay helper is implemented in an end station that is in communication with the access switch. Overlay parameters in compliance with an in-band protocol are transmitted between the overlay agent and the overlay helper. | 02-21-2013 |
20130044631 | METHODS OF FORMING VIRTUAL NETWORK OVERLAYS - Methods are provided for overlaying a virtual network on a physical network in a data center environment. An overlay system is arranged in an overlay virtual network to include an overlay agent and an overlay helper. The overlay agent is implemented in an access switch. The overlay helper is implemented in an end station that is in communication with the access switch. Overlay parameters in compliance with an in-band protocol are transmitted between the overlay agent and the overlay helper. | 02-21-2013 |
20130070761 | SYSTEMS AND METHODS FOR CONTROLLING A NETWORK SWITCH - Systems and methods are provided for controlling a network switch. At least one forwarding element of the distributed switch is positioned at a first location of a network. A control element of the distributed switch is positioned at a second location of the network. The at least one forwarding element is controlled from the control element by establishing a communication between the forwarding element and the control element via the network. | 03-21-2013 |
20130322446 | VIRTUAL ETHERNET PORT AGGREGATION (VEPA)-ENABLED MULTI-TENANT OVERLAY NETWORK - In accordance with one embodiment, a system that may be used for enabling Virtual Ethernet Port Aggregation (VEPA) in an overlay network includes a host server providing a virtual switch, the virtual switch including logic adapted for receiving a packet from a first virtual machine (VM) on the host server, logic adapted for determining that a destination of the packet is a second VM common to the host server, logic adapted for encapsulating the packet with a tunnel header to form an overlay packet, logic adapted for sending the overlay packet via a tunnel to a physical networking element to have inspection services performed thereon, logic adapted for receiving the overlay packet from the physical networking element, logic adapted for de-encapsulating the overlay packet to retrieve a serviced packet, and logic adapted for forwarding the serviced packet to the second VM, wherein the tunnel header includes tenant specific information. | 12-05-2013 |
20140050091 | LOAD BALANCING OVERLAY NETWORK TRAFFIC USING A TEAMED SET OF NETWORK INTERFACE CARDS - A system includes a server including: logic adapted for receiving traffic from a virtual machine (VM), the traffic including at least one packet, logic adapted for hashing at least a portion of the at least one packet according to a hashing algorithm to obtain a hash value, and logic adapted for selecting an uplink based on the hash value; at least one accelerated network interface card (NIC), each accelerated NIC including: network ports including multiple Peripheral Component Interconnect express (PCIe) ports adapted for communicating with the server and a network, each network port including an uplink, logic adapted for encapsulating the at least one packet into an overlay-encapsulated packet, logic adapted for storing a media access control (MAC) address corresponding to the selected uplink as a source MAC (SMAC) address in an outer header of the overlay-encapsulated packet, and logic adapted for sending the overlay-encapsulated packet via the selected uplink. | 02-20-2014 |
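The hash-then-select step above can be sketched as follows. The choice of SHA-256 and the header fields hashed are assumptions for this sketch; the abstract only requires that some hash of the packet deterministically picks an uplink whose MAC becomes the outer source MAC.

```python
# Minimal sketch of hash-based uplink selection over a teamed set of NICs:
# hash inner-header bytes, index into the uplink list, and use that uplink's
# MAC as the outer source MAC of the overlay-encapsulated packet.
import hashlib

def select_uplink(packet_fields, uplinks):
    """packet_fields: bytes drawn from the inner headers.
    uplinks: list of (name, mac) tuples, one per teamed NIC port."""
    digest = hashlib.sha256(packet_fields).digest()
    index = digest[0] % len(uplinks)
    return uplinks[index]

uplinks = [("eth0", "aa:aa:aa:aa:aa:00"), ("eth1", "aa:aa:aa:aa:aa:01")]
name, outer_smac = select_uplink(b"10.0.0.1->10.0.0.2:tcp:443", uplinks)
```

Because the hash is a pure function of the packet fields, all packets of one flow land on the same uplink, which preserves in-order delivery while spreading distinct flows across the team.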
20140207969 | ADDRESS MANAGEMENT IN AN OVERLAY NETWORK ENVIRONMENT - Embodiments of the invention relate to overlay network address management. One embodiment includes an overlay gateway including an overlay network manager associated with a physical network. The overlay network manager prevents duplicate address assignment for overlay domains having a first sharing status and performs address translation for overlay domains having a second sharing status. Address translation is avoided for overlay domains having the first sharing status. | 07-24-2014 |
20140254603 | INTEROPERABILITY FOR DISTRIBUTED OVERLAY VIRTUAL ENVIRONMENTS - Embodiments of the invention relate to providing interoperability between hosts supporting multiple encapsulation. One embodiment includes a method that includes mapping packet encapsulation protocol type information for virtual switches. Each virtual switch is associated with one or more virtual machines (VMs). It is determined whether one or more common encapsulation protocol types exist for a first VM associated with a first virtual switch and a second VM associated with a second virtual switch based on the mapping. A common encapsulation protocol type is selected if it is determined that one or more common encapsulation protocol types exist for the first virtual switch and the second virtual switch. A packet is encapsulated for communication between the first VM and the second VM using the selected common encapsulation protocol type. | 09-11-2014 |
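The mapping-and-selection step above amounts to a set intersection over each switch's supported encapsulation types. The protocol names and the preference-order tie-break below are assumptions, not part of the abstract.

```python
# Sketch of choosing a common encapsulation protocol for two virtual
# switches; returns None when the switches cannot interoperate.
ENCAP_PREFERENCE = ["vxlan", "nvgre", "stt"]  # assumed preference order

def select_common_encap(switch_a_protocols, switch_b_protocols):
    common = set(switch_a_protocols) & set(switch_b_protocols)
    if not common:
        return None  # no common encapsulation: packets cannot be exchanged
    # Pick the most-preferred protocol both sides support.
    for proto in ENCAP_PREFERENCE:
        if proto in common:
            return proto
    return sorted(common)[0]

chosen = select_common_encap(["vxlan", "stt"], ["nvgre", "vxlan"])
```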
20140279885 | DATA REPLICATION FOR A VIRTUAL NETWORKING SYSTEM - Embodiments of the invention provide a method for data replication in a networking system comprising multiple computing nodes. The method comprises maintaining a data set on at least two computing nodes of the system. The method further comprises receiving a data update request for the data set, wherein the data update request includes a data update for the data set. The data set on the at least two computing nodes is updated based on the data update request received. | 09-18-2014 |
20140280949 | LOAD BALANCING FOR A VIRTUAL NETWORKING SYSTEM - Embodiments of the invention provide a method for load balancing a networking system comprising multiple computing nodes. The method comprises maintaining one or more data sets on at least one computing node. The method further comprises receiving, from each computing node, a load information unit for the computing node, wherein the load information unit relates to resource usage on the computing node. For each computing node, the method determines whether the load information for the computing node exceeds a corresponding load threshold for the computing node. A data set on at least one computing node is transferred to another computing node when the load information for the at least one computing node exceeds a corresponding load threshold for the at least one computing node. | 09-18-2014 |
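The per-node threshold check and data-set transfer described above can be sketched as follows. The single-number load model, the "move one data set to the least-loaded node" policy, and all names are illustrative assumptions.

```python
# Hedged sketch of threshold-driven rebalancing: each node reports a load
# figure; a node over its threshold sheds one data set to the least-loaded node.
def rebalance(nodes, thresholds, data_sets):
    """nodes: node -> reported load; thresholds: node -> load limit;
    data_sets: node -> list of data sets held on that node."""
    moves = []
    for node, load in nodes.items():
        if load > thresholds[node] and data_sets.get(node):
            target = min(nodes, key=lambda n: nodes[n])
            if target != node:
                ds = data_sets[node].pop()
                data_sets.setdefault(target, []).append(ds)
                moves.append((ds, node, target))
    return moves

moves = rebalance({"n1": 0.9, "n2": 0.2},
                  {"n1": 0.8, "n2": 0.8},
                  {"n1": ["ds-a", "ds-b"], "n2": []})
```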
20150026102 | DIRECTORY SERVICE DISCOVERY AND/OR LEARNING - In the context of a client sub-system that requires the use of directory services on behalf of a tenant (such as an overlay tenant), learning an identity of a server node, that can provide such directory services by: (i) sending, by the client sub-system to a first server node, a first directory service request for directory service for a first tenant; (ii) receiving, by the client sub-system, a first acknowledgement from a second server node; and (iii) learning, by the client sub-system, that the second server node can provide directory service for the first tenant based upon the first acknowledgement. | 01-22-2015 |
20150095468 | SYNCHRONIZING CONFIGURATIONS AMONGST MULTIPLE DEVICES - A data handling network includes a management system and a plurality of devices in communication with the management system. Each device may operate under various configurations. The management system includes a configuration version table that includes a device identifier and an intended configuration version number. A configuration manager within a device queries the management system with a query that includes a device identifier and a current device operating configuration version number. The management system may interrogate the configuration version table to determine if the current device operating configuration version number is similar to the intended configuration version number. | 04-02-2015 |
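The query flow above (device sends its ID and running version; the management system interrogates its table) can be sketched as a dict-backed lookup. The class and method names are assumptions.

```python
# Minimal sketch of the configuration version table: the management system
# records an intended version per device and answers device queries against it.
class ManagementSystem:
    def __init__(self):
        # device_id -> intended configuration version number
        self.config_version_table = {}

    def set_intended_version(self, device_id, version):
        self.config_version_table[device_id] = version

    def is_current(self, device_id, running_version):
        """Answer a device's query: does its running configuration version
        match the intended version recorded in the table?"""
        intended = self.config_version_table.get(device_id)
        return intended is not None and intended == running_version

mgmt = ManagementSystem()
mgmt.set_intended_version("switch-7", 42)
up_to_date = mgmt.is_current("switch-7", 42)
stale = mgmt.is_current("switch-7", 41)
```

A mismatch (or an unknown device) would be the trigger for pushing the intended configuration down to the device.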
20150100670 | TRANSPORTING MULTI-DESTINATION NETWORKING TRAFFIC BY SENDING REPETITIVE UNICAST - In a distributed network environment, a first virtual machine sends a first virtual machine control information to a first network system. The first network system sends a first control information to a first network control system in response to receiving the first virtual machine control information. The first network control system sends a portion of the first control information to a number of network systems. The first network control system sends a second control information to the first network system. The first virtual machine sends a first packet to the first network system which generates a unicast packet using a portion of the first packet and a portion of the second control information. A second network system receives and processes the unicast packet. The second network system sends a copy of the processed unicast packet to a second virtual machine associated with a second tenant. | 04-09-2015 |
20150100958 | TRAFFIC MIGRATION ACCELERATION FOR OVERLAY VIRTUAL ENVIRONMENTS - Embodiments of the invention relate to providing acceleration for traffic migration for virtual machine (VM) migration in overlay networks. One embodiment includes a method that includes migrating of a VM from a first hypervisor to a second hypervisor. The first hypervisor detects incoming encapsulated traffic sent from a third hypervisor that is targeted for the VM. The first hypervisor indicates to a service of incorrect information in the incoming encapsulated traffic for the VM. The third hypervisor is notified with updated information for the VM. | 04-09-2015 |
20150112955 | MECHANISM FOR COMMUNICATION IN A DISTRIBUTED DATABASE - In a method for providing communication integrity within a distributed database computer system, a first node of a plurality of nodes transmits a change notification to a second node of the plurality of nodes. The second node is a neighbor of the first node. The first node receives at least one change confirmation from the second node. The change confirmation confirms acknowledgment of the change notification by the second node and by a third node of the plurality of nodes. The third node is not a neighbor of the first node. Responsive to receiving the at least one change confirmation, the first node determines that all the plurality of nodes have acknowledged the change notification. | 04-23-2015 |
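The key property above is that a node concludes cluster-wide acknowledgment from confirmations received only from its neighbors, because each confirmation vouches for non-neighbors as well. A much-simplified sketch, with the message shape entirely assumed:

```python
# Sketch of confirmation aggregation: each neighbor's confirmation carries
# the set of nodes (neighbors and non-neighbors alike) known to have
# acknowledged the change; the originator unions them.
def all_acknowledged(cluster_nodes, self_node, confirmations):
    """confirmations: one set of acknowledged node names per neighbor."""
    acked = {self_node}  # the originator trivially acknowledges its own change
    for conf in confirmations:
        acked |= conf
    return acked >= set(cluster_nodes)

# Node "a" neighbors only "b"; one confirmation vouches for "b" and "c".
done = all_acknowledged(["a", "b", "c"], "a", [{"b", "c"}])
partial = all_acknowledged(["a", "b", "c"], "a", [{"b"}])
```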
20150131669 | VIRTUAL NETWORK OVERLAYS - Systems and methods are provided for overlaying a virtual network on a physical network in a data center environment. An overlay system is arranged in an overlay virtual network to include an overlay agent and an overlay helper. The overlay agent is implemented in an access switch. The overlay helper is implemented in an end station that is in communication with the access switch. Overlay parameters in compliance with an in-band protocol are transmitted between the overlay agent and the overlay helper. | 05-14-2015 |
20150281118 | INTEROPERABILITY FOR DISTRIBUTED OVERLAY VIRTUAL ENVIRONMENT - A method includes forwarding a request to a distributed overlay virtual Ethernet (DOVE) connectivity service (DCS) cluster for tunnel information by a source switch. In response to the request for tunnel information, the tunnel information and end point information are received. A common tunnel type supported by the source switch and a destination switch is selected. A packet is encapsulated with the common tunnel type supported by the source switch and the destination switch for a destination virtual machine (VM). | 10-01-2015 |
Patent application number | Description | Published |
20150347013 | Using Sub-Region I/O History to Cache Repeatedly Accessed Sub-Regions in a Non-Volatile Storage Device - Systems, methods and/or devices are used to enable using sub-region I/O history to cache repeatedly accessed sub-regions in a non-volatile storage device. In one aspect, the method includes (1) receiving a plurality of input/output (I/O) requests including read requests and write requests to be performed in a plurality of regions in a logical address space of a host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including, for each sub-region of a plurality of sub-regions of the region: (a) determining whether the sub-region is accessed more than a predetermined threshold number of times during a predetermined time period, and (b) if so, caching, from a storage medium of the storage device to a cache of the storage device, data from the sub-region. | 12-03-2015 |
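The per-sub-region counting and caching trigger described above can be sketched as follows. The sub-region size, the threshold, and the window-reset scheme are illustrative assumptions; real firmware would copy the sub-region's data from the storage medium into the cache rather than just record the index.

```python
# Sketch of sub-region I/O history: count accesses per sub-region within a
# time window and cache any sub-region accessed more than a threshold.
from collections import Counter

SUB_REGION_SIZE = 4096   # assumed sub-region granularity (bytes)
HOT_THRESHOLD = 3        # accesses within the window before caching

class SubRegionTracker:
    def __init__(self):
        self.hits = Counter()
        self.cached = set()

    def record_io(self, lba):
        sub = lba // SUB_REGION_SIZE
        self.hits[sub] += 1
        if self.hits[sub] > HOT_THRESHOLD and sub not in self.cached:
            self.cached.add(sub)  # stand-in for copying the sub-region to cache

    def reset_window(self):
        # Called at the end of each predetermined time period.
        self.hits.clear()

tracker = SubRegionTracker()
for _ in range(5):
    tracker.record_io(8192)      # repeated hits in one sub-region
tracker.record_io(999_999)       # a single hit elsewhere
```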
20150347028 | Real-Time I/O Pattern Recognition to Enhance Performance and Endurance of a Storage Device - Systems, methods and/or devices are used to enable real-time I/O pattern recognition to enhance performance and endurance of a storage device. In one aspect, the method includes (1) at a storage device, receiving from a host a plurality of input/output (I/O) requests, the I/O requests specifying operations to be performed in a plurality of regions in a logical address space of the host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) maintaining a history of I/O request patterns in the region for a predetermined time period, and (b) using the history of I/O request patterns in the region to adjust subsequent I/O processing in the region. | 12-03-2015
20150347029 | Identification of Hot Regions to Enhance Performance and Endurance of a Non-Volatile Storage Device - Systems, methods and/or devices are used to enable identification of hot regions to enhance performance and endurance of a non-volatile storage device. In one aspect, the method includes (1) receiving a plurality of input/output (I/O) requests to be performed in a plurality of regions in a logical address space of a host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) determining whether the region is accessed by the plurality of I/O requests more than a predetermined threshold number of times during a predetermined time period, (b) if so, marking the region with a hot region indicator, and (c) while the region is marked with the hot region indicator, identifying open blocks associated with the region, and marking each of the identified open blocks with a hot block indicator. | 12-03-2015 |
20150347030 | Using History of Unaligned Writes to Cache Data and Avoid Read-Modify-Writes in a Non-Volatile Storage Device - Systems, methods and/or devices are used to enable using history of unaligned writes to cache data and avoid read-modify-writes in a non-volatile storage device. In one aspect, the method includes (1) receiving a plurality of input/output (I/O) requests including read requests and write requests to be performed in a plurality of regions in a logical address space of a host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) determining whether the region has a history of unaligned write requests during a predetermined time period, and (b) if so: (i) determining one or more sub-regions within the region that are accessed more than a predetermined threshold number of times during the predetermined time period, and (ii) caching data from the determined one or more sub-regions. | 12-03-2015 |
20150347040 | Using History of I/O Sizes and I/O Sequences to Trigger Coalesced Writes in a Non-Volatile Storage Device - Systems, methods and/or devices are used to enable using history of I/O sizes and I/O sequences to trigger coalesced writes in a non-volatile storage device. In one aspect, the method includes (1) receiving a plurality of input/output (I/O) requests to be performed in a plurality of regions in a logical address space of a host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) determining whether the region has a history of I/O requests to access data of size less than a predefined small-size threshold during a predetermined time period, (b) determining whether the region has a history of sequential write requests during the predetermined time period, and (c) if both determinations are true, coalescing subsequent write requests to the region. | 12-03-2015 |
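The two-part trigger above (a window of small-sized I/O that is also sequential) can be sketched as a predicate over the logged write requests. The size threshold and the strictly-contiguous definition of "sequential" are assumptions.

```python
# Sketch of the coalescing trigger: coalesce subsequent writes to a region
# only when the window shows both small I/O sizes and sequential writes.
SMALL_SIZE = 4096  # assumed small-I/O threshold (bytes)

def should_coalesce(io_log):
    """io_log: list of (lba, size) write requests observed in the window."""
    if len(io_log) < 2:
        return False
    small = all(size < SMALL_SIZE for _, size in io_log)
    # Sequential here means each write starts where the previous one ended.
    sequential = all(io_log[i][0] + io_log[i][1] == io_log[i + 1][0]
                     for i in range(len(io_log) - 1))
    return small and sequential

coalesce_a = should_coalesce([(0, 512), (512, 512), (1024, 512)])
coalesce_b = should_coalesce([(0, 512), (9000, 512)])
```

Coalescing such runs into one larger write reduces write amplification on the flash medium, which is the endurance benefit the abstract is after.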
20150347296 | Prioritizing Garbage Collection and Block Allocation Based on I/O History for Logical Address Regions - Systems, methods and/or devices are used to enable prioritizing garbage collection and block allocation based on I/O history for logical address regions. In one aspect, the method includes (1) receiving, at a storage device, a plurality of input/output (I/O) requests from a host, the plurality of I/O requests including read requests and write requests to be performed in a plurality of regions in a logical address space of the host, (2) in accordance with the plurality of I/O requests over a predetermined time period, identifying an idle region of the plurality of regions in the logical address space of the host, and (3) in accordance with the identification of the idle region, enabling garbage collection of data storage blocks, in the storage device, that store data in the idle region. | 12-03-2015 |
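The idle-region identification step above can be sketched as follows; the region size and the "zero I/O in the window means idle" rule are illustrative assumptions.

```python
# Sketch of idle-region identification: regions of the host's logical address
# space with no I/O in the window become garbage-collection candidates.
from collections import Counter

REGION_SIZE = 1 << 20  # assumed region granularity (bytes)

def idle_regions(io_lbas, all_regions):
    """io_lbas: logical addresses touched in the window.
    Returns the regions with no I/O, sorted, as GC candidates."""
    touched = Counter(lba // REGION_SIZE for lba in io_lbas)
    return sorted(r for r in all_regions if touched[r] == 0)

candidates = idle_regions([0, 100, (1 << 20) + 5], all_regions=[0, 1, 2, 3])
```

Collecting blocks that back idle regions first means garbage collection is least likely to contend with in-flight host I/O.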
Patent application number | Description | Published |
20080285483 | Client Operation For Network Access - A network traffic device for a managed network can operate as a client host, to receive packets from the managed network and forward them to an uplinked external network, thereby operating as a gateway to the uplink network and performing a network address translation (NAT) function for the managed network relative to the uplinked network. | 11-20-2008 |
20080285575 | System and Method For Remote Monitoring And Control Of Network Devices - A managed network provides unique network addresses that are assigned to nodes such that no two nodes will have the same address in the managed network and such that each node will always have the same network address regardless of changing its location or changing the network to which it is joined. The nodes, communicating together, comprise a mesh network. Remote management and control of the nodes is possible from the host server, which is located outside of the mesh network, even if a node is located behind a firewall or network address translator (NAT), because server management messages are encapsulated within headers so that a persistent connection between the node and the external host server is maintained once the node sends a message to the host. | 11-20-2008 |
20080288614 | Client Addressing And Roaming In A Wireless Network - A managed network receives client device requests for network addresses for communications over the managed network and computes a network address for a client device based on a hardware address of the client device, such as the MAC address of the client device, and returns the network address to the client device along with a predetermined gateway address for communications over the managed network with external networks. The MAC address is hashed to produce the assigned network address, such that the client device will always receive the same network address whenever it accesses the managed network. | 11-20-2008
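The MAC-to-address hashing above can be sketched as follows. The choice of SHA-256 and the mapping into an assumed private /16 range are illustrative; the abstract specifies only that hashing the MAC yields a stable, repeatable address.

```python
# Sketch of deriving a stable network address from a client's hardware (MAC)
# address: the same MAC always hashes to the same address.
import hashlib

def address_for_mac(mac: str) -> str:
    digest = hashlib.sha256(mac.lower().encode()).digest()
    # Map the hash into a fixed /16 so the client gets the same IP every time.
    return f"10.99.{digest[0]}.{digest[1]}"

addr1 = address_for_mac("00:1B:44:11:3A:B7")
addr2 = address_for_mac("00:1b:44:11:3a:b7")  # case-insensitive: same address
```

A real deployment would also need collision handling, since distinct MACs can hash to the same host bits; the abstract's "no two nodes will have the same address" guarantee implies such a mechanism.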
20080294759 | System and Method For Hosted Network Management - A hosted network management solution for communications over a computer network supports data communication across a network in accordance with a network message protocol such that communications are established between a network host and a node device. The node device performs a self-configuring operation, and the network host identifies a network owner associated with the hosted network and maintains a persistent network connection path between the network host and the node device for the exchange of network packet messages. The network host retrieves message data from the network packet messages it receives from the node device and performs network management operations to provide a user management interface to the identified network owner. The hosted network management enables more convenient setup and configuration for the network owner and provides more complete and effective network management tools. | 11-27-2008
20080304427 | Node Self-Configuration And Operation In A Wireless Network - A device performs a self-configure process for operations in a managed network to allocate a network address for the device by determining if the device will operate as a gateway of the managed network, obtaining a network address for communication with external devices outside of the managed network in response to determining that the device will operate as a gateway, scanning for neighbor devices operating in the managed network and maintaining a database of neighbor devices located in the scanning, and selecting a managed network to join based on the database of neighbor devices in response to determining that the device will operate as a node. | 12-11-2008 |
20090086459 | Electronic device with weather-tight housing - An electronic device includes a housing with electrical circuitry that is sealed against penetration by dust, moisture, water, and the like, and permits convenient mounting and reconfiguration during operation. The electronic device can be reconfigured to add or delete a connecting plug and cable without compromising the seal. Mounting brackets are provided for mounting to both horizontal and vertical support structures, depending on orientation of the brackets. | 04-02-2009 |
20120317191 | SYSTEM AND METHOD FOR REMOTE MONITORING AND CONTROL OF NETWORK DEVICES - A managed network provides unique network addresses that are assigned to nodes such that no two nodes will have the same address in the managed network and such that each node will always have the same network address regardless of changing its location or changing the network to which it is joined. The nodes, communicating together, comprise a mesh network. Remote management and control of the nodes is possible from the host server, which is located outside of the mesh network, even if a node is located behind a firewall or network address translator (NAT), because server management messages are encapsulated within headers so that a persistent connection between the node and the external host server is maintained once the node sends a message to the host. | 12-13-2012 |
20130318233 | SYSTEM AND METHOD FOR REMOTE MONITORING AND CONTROL OF NETWORK DEVICES - A managed network provides unique network addresses that are assigned to nodes such that no two nodes will have the same address in the managed network and such that each node will always have the same network address regardless of changing its location or changing the network to which it is joined. The nodes, communicating together, comprise a mesh network. Remote management and control of the nodes is possible from the host server, which is located outside of the mesh network, even if a node is located behind a firewall or network address translator (NAT), because server management messages are encapsulated within headers so that a persistent connection between the node and the external host server is maintained once the node sends a message to the host. | 11-28-2013 |
20140156824 | SYSTEM AND METHOD FOR HOSTED NETWORK MANAGEMENT - A hosted network management solution for communications over a computer network supports data communication across a network in accordance with a network message protocol such that communications are established between a network host and a node device. The node device performs a self-configuring operation, and the network host identifies a network owner associated with the hosted network and maintains a persistent network connection path between the network host and the node device for the exchange of network packet messages. The network host retrieves message data from the network packet messages it receives from the node device and performs network management operations to provide a user management interface to the identified network owner. The hosted network management enables more convenient setup and configuration for the network owner and provides more complete and effective network management tools. | 06-05-2014
Patent application number | Description | Published |
20080307276 | Memory Controller with Loopback Test Interface - In one embodiment, an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect. The memory controller is programmable by the processor into a loopback test mode of operation and, in the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect. The memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules. The memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect. | 12-11-2008 |
20080307286 | Combined Single Error Correction/Device Kill Detection Code - In one embodiment, an apparatus comprises a check/correct circuit coupled to a control circuit. The check/correct circuit is coupled to receive a block of data and corresponding check bits. The block of data is received as N transmissions, each transmission comprising M data bits and L check bits. The check/correct circuit is configured to detect one or more errors in each of a plurality of non-overlapping windows of K bits in the M data bits, responsive to the M data bits and the L check bits. The control circuit is configured to record which of the plurality of windows have had errors detected and, if a given window of the plurality of windows has had errors detected in each of the N transmissions of the block, the control circuit is configured to signal a device failure. Each of K, L, M, and N are integers greater than one. | 12-11-2008 |
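The device-kill detection logic above amounts to intersecting, across the N transmissions of a block, the set of K-bit windows in which errors were detected: a window that errs in every transmission maps to a failed device (e.g. one DRAM chip). A hedged sketch, with the window bookkeeping abstracted away from the actual check-bit decoding:

```python
# Sketch of windowed error tracking: record which K-bit windows had errors in
# each of the N transmissions; a window erring in all N signals device failure.
def detect_device_failure(error_windows_per_transmission, num_windows):
    """error_windows_per_transmission: one set of erring window indices per
    transmission (N sets total). Returns windows that erred every time."""
    failed = set(range(num_windows))
    for windows in error_windows_per_transmission:
        failed &= windows
    return sorted(failed)

# Window 3 errs in all four transmissions of the block -> device failure there.
failures = detect_device_failure([{3}, {3, 5}, {3}, {1, 3}], num_windows=8)
no_failures = detect_device_failure([{3}, {5}], num_windows=8)
```

The intersection is what distinguishes a persistent per-device fault from scattered single-bit errors, which the single-error-correction part of the code handles on its own.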
20110035560 | Memory Controller with Loopback Test Interface - In one embodiment, an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect. The memory controller is programmable by the processor into a loopback test mode of operation and, in the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect. The memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules. The memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect. | 02-10-2011 |
20110040998 | Oversampling-Based Scheme for Synchronous Interface Communication - In one embodiment, an apparatus to synchronously communicate on an interface that has an associated interface clock for a circuit that has an internal clock used internal to the circuit comprises a control circuit coupled to receive the internal clock and the interface clock. The control circuit is configured to sample the interface clock multiple times per clock cycle of the internal clock and to detect a phase difference, to a granularity of the samples, between the internal clock and the interface clock. The apparatus comprises a data path that is configured to transport data between an internal clock domain and an interface clock domain. The data path is configured to provide at least two different timings on the transported data relative to the internal clock. The control circuit is coupled to the data path and is configured to select one of the timings responsive to a detected phase difference. | 02-17-2011 |
20120017135 | Combined Single Error Correction/Device Kill Detection Code - In one embodiment, an apparatus includes a check/correct circuit coupled to a control circuit. The check/correct circuit is coupled to receive a block of data and corresponding check bits. The block of data is received as N transmissions, each transmission including M data bits and L check bits. The check/correct circuit is configured to detect one or more errors in each of a plurality of non-overlapping windows of K bits in the M data bits, responsive to the M data bits and the L check bits. The control circuit is configured to record which of the plurality of windows have had errors detected and, if a given window of the plurality of windows has had errors detected in each of the N transmissions of the block, the control circuit is configured to signal a device failure. Each of K, L, M, and N are integers greater than one. | 01-19-2012 |
20120046930 | Controller and Fabric Performance Testing - In an embodiment, a model may be created using a register-transfer level (RTL) representation (or other cycle-accurate representation) of the controller and the circuitry in the communication fabric to the controller. The request sources may be replaced by transactors, which may generate transactions to test the performance of the fabric and controller. Accordingly, only the designs of the controller and the fabric circuitry may be needed to model performance in this embodiment. In an embodiment, at least some of the transactors may be behavioral transactors that attempt to mimic the operation of corresponding request sources. Other transactors may be statistical distributions, in some embodiments. In an embodiment, the transactors may include a transaction generator (e.g. behavioral or statistical) and a protocol translator configured to convert generated transactions to the communication protocol in use at the point that the transactor is connected to the fabric. | 02-23-2012 |
20120069034 | QoS-aware scheduling - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 03-22-2012 |
20120072677 | Multi-Ported Memory Controller with Ports Associated with Traffic Classes - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 03-22-2012 |
20120072678 | Dynamic QoS upgrading - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 03-22-2012 |
20120072679 | Reordering in the Memory Controller - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 03-22-2012 |
20120072787 | Memory Controller with Loopback Test Interface - In one embodiment, an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect. The memory controller is programmable by the processor into a loopback test mode of operation and, in the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect. The memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules. The memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect. | 03-22-2012 |
20120137078 | Multiple Critical Word Bypassing in a Memory Controller - In one embodiment, a memory controller may be configured to transmit two or more critical words (or beats) corresponding to two or more different read requests prior to returning the remaining beats of the read requests. Such an embodiment may reduce latency to the sources of the memory requests, which may be stalled awaiting the critical words. The remaining words may fill a cache block or other buffer, but may not be required by the sources as quickly as the critical words in order to support higher performance. In some embodiments, once a remaining beat of a block is transmitted, all of the remaining beats may be transmitted contiguously. In other embodiments, additional critical words may be forwarded between remaining beats of a block. | 05-31-2012 |
20120137090 | Programmable Interleave Select in Memory Controller - In one embodiment, a memory controller may be configured to perform a logic operation, such as a hash function, on selected address bits to produce a bit of channel or bank select. The selected address bits for each select bit may differ, and may be programmable in some embodiments. By combining selected address bits to produce the select bits, the distribution of addresses in a set of regular access patterns may be somewhat randomized to the channels/banks. In one implementation, each select bit may have a corresponding programmable bit vector that specifies the address bits to be included for that select bit. Accordingly, any subset of the address bits may be included in any select bit generation. | 05-31-2012 |
20120182889 | Quality of Service (QoS)-Related Fabric Control - In an embodiment, one or more fabric control circuits may be inserted in a communication fabric to control various aspects of the communications by components in the system. The fabric control circuits may be included on the interface of the components to the communication fabric, in some embodiments. In other embodiments that include a hierarchical communication fabric, fabric control circuits may alternatively or additionally be included. The fabric control circuits may be programmable, and thus may provide the ability to tune the communication fabric to meet performance and/or functionality goals. | 07-19-2012 |
20130046938 | QoS-Aware Scheduling - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 02-21-2013 |
20130054901 | PROPORTIONAL MEMORY OPERATION THROTTLING - A memory controller receives memory operations via an interface which may include multiple ports. Each port is coupled to real-time or non-real-time requestors, and the received memory operations are classified as real-time or non-real-time and stored in queues prior to accessing memory. Within the memory controller, pending memory operations from the queues are scheduled for servicing. Logic throttles the scheduling of non-real-time memory operations in response to detecting a number of outstanding memory operations has exceeded a threshold. The throttling is proportional to the number of outstanding memory operations. | 02-28-2013 |
20130054902 | ACCELERATING BLOCKING MEMORY OPERATIONS - A memory controller, system, and method for accelerating blocking memory operations. A memory controller reorders memory operations so as to maximize efficient use of the memory device bus. When data for a newer memory operation is retrieved from memory and ready to be returned to a source device, the newer memory operation can be held up waiting for an older memory operation to be completed. In response, the memory controller forwards a push request for the older memory operation to a memory channel unit. The memory channel unit then sets a push bit of the older memory operation, which expedites the scheduling of the older memory operation. | 02-28-2013 |
20140006743 | QoS-Aware Scheduling | 01-02-2014 |
20140052937 | Dynamic QoS Upgrading - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 02-20-2014 |
20140059297 | SYSTEM CACHE WITH STICKY ALLOCATION - Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group ID. Also, each line may have a corresponding sticky state which indicates if the line should be retained in the cache. The sticky state is determined by an allocation hint provided by the requesting agent. When a cache line is allocated with the sticky state, the line will not be replaced by other cache lines fetched by any other group IDs. | 02-27-2014 |
20140075118 | SYSTEM CACHE WITH QUOTA-BASED CONTROL - Methods and apparatuses for implementing a system cache with quota-based control. Quotas may be assigned on a group ID basis to each group ID that is assigned to use the system cache. The quota does not reserve space in the system cache, but rather may be used in any way within the system cache. The quota may prevent a given group ID from consuming more than a desired amount of the system cache. Once a group ID's quota has been reached, no additional allocation will be permitted for that group ID. The total amount of allocated quota for all group IDs can exceed the size of system cache, such that the system cache can be oversubscribed. The sticky state can be used to prioritize data retention within the system cache when oversubscription is being used. | 03-13-2014 |
20140075125 | SYSTEM CACHE WITH CACHE HINT CONTROL - Methods and apparatuses for utilizing a cache hint mechanism in which a requesting agent can provide hints as to how data corresponding to a request should be cached in a system cache within a memory controller. The way the system cache responds to received requests is determined by the cache hint provided by the originating requesting agent. When a request is accompanied by a de-allocate cache hint, the system cache causes a cache line hit by the request to be de-allocated. For a request with a do not allocate cache hint, the system cache does not allocate a cache line if the request misses in the system cache, and the system cache maintains a given cache line in its current state if the request hits the given cache line. | 03-13-2014 |
20140086070 | Bandwidth Management - In some embodiments, a system includes a shared, high bandwidth resource (e.g. a memory system), multiple agents configured to communicate with the shared resource, and a communication fabric coupling the multiple agents to the shared resource. The communication fabric may be equipped with limiters configured to limit bandwidth from the various agents based on one or more performance metrics measured with respect to the shared, high bandwidth resource. For example, the performance metrics may include one or more of latency, number of outstanding transactions, resource utilization, etc. The limiters may dynamically modify their limit configurations based on the performance metrics. In an embodiment, the system may include multiple thresholds for the performance metrics, and exceeding a given threshold may trigger modification of the limiters in the communication fabric. There may be hysteresis implemented in the system as well in some embodiments, to reduce the frequency of transitions between configurations. | 03-27-2014 |
20140089590 | SYSTEM CACHE WITH COARSE GRAIN POWER MANAGEMENT - Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and individual ways are powered down when cache activity is low. A maximum active way configuration register is set by software and determines the maximum number of ways which are permitted to be active. When searching for a cache line replacement candidate, a linear feedback shift register (LFSR) is used to select from the active ways. This ensures that each active way has an equal chance of getting picked for finding a replacement candidate when one or more of the ways are inactive. | 03-27-2014 |
20140089592 | SYSTEM CACHE WITH SPECULATIVE READ ENGINE - Methods and apparatuses for processing speculative read requests in a system cache within a memory controller. To expedite a speculative read request, the request is sent on parallel paths through the system cache. A first path goes through a speculative read engine to determine if the speculative read request meets the conditions for accessing memory. A second path involves performing a tag lookup to determine if the data referenced by the request is already in the system cache. If the speculative read request meets the conditions for accessing memory, the request is sent to a miss queue where it is held until a confirm or cancel signal is received from the tag lookup mechanism. | 03-27-2014 |
20140089600 | SYSTEM CACHE WITH DATA PENDING STATE - Methods and apparatuses for utilizing a data pending state for cache misses in a system cache. To reduce the size of a miss queue that is searched by subsequent misses, a cache line storage location is allocated in the system cache for a miss and the state of the cache line storage location is set to data pending. A subsequent request that hits to the cache line storage location will detect the data pending state and as a result, the subsequent request will be sent to a replay buffer. When the fill for the original miss comes back from external memory, the state of the cache line storage location is updated to a clean state. Then, the request stored in the replay buffer is reactivated and allowed to complete its access to the cache line storage location. | 03-27-2014 |
20140089602 | SYSTEM CACHE WITH PARTIAL WRITE VALID STATES - Methods and apparatuses for processing partial write requests in a system cache within a memory controller. When a write request that updates a portion of a cache line misses in the system cache, the write request writes the data to the system cache without first reading the corresponding cache line from memory. The system cache includes error correction code bits which are redefined as word mask bits when a cache line is in a partial dirty state. When a read request hits on a partial dirty cache line, the partial data is written to memory using a word mask. Then, the corresponding full cache line is retrieved from memory and stored in the system cache. | 03-27-2014 |
20140095777 | SYSTEM CACHE WITH FINE GRAIN POWER MANAGEMENT - Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple small sections, and each section is supplied with power from a separately controllable power supply. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. Incoming requests are grouped together based on which section of the system cache they target. When enough requests that target a given section have accumulated, the voltage supplied to the given section is increased to a voltage sufficient for access. Then, once the given section has had enough time to ramp up and stabilize at the higher voltage, the waiting requests may access the given section in a burst of operations. | 04-03-2014 |
20140095800 | SYSTEM CACHE WITH STICKY REMOVAL ENGINE - Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID. | 04-03-2014 |
20140197870 | RESET EXTENDER FOR DIVIDED CLOCK DOMAINS - A clock divider may provide a lower speed clock to a logic block portion, but during reset, the clock divider may not operate properly, causing the logic block portion to be reset at a clock frequency greater than the frequency for which that logic was designed. However, an extended reset may be employed in which the clock divider is reset normally first before the logic block portion, allowing that logic to be reset according to the divided clock (e.g., rather than a higher speed clock). An asynchronous reset may also be employed in which one or more clock dividers first emerge from reset before being provided with a (synchronized) high speed clock signal, causing the clock dividers to be in phase with each other. This may enable communication between different areas of an IC that might not otherwise be in proper phase with each other. | 07-17-2014 |
20140237276 | Method and Apparatus for Determining Tunable Parameters to Use in Power and Performance Management - Various method and apparatus embodiments for selecting tunable operating parameters in an integrated circuit (IC) are disclosed. In one embodiment, an IC includes a number of various functional blocks each having a local management circuit. The IC also includes a global management unit coupled to each of the functional blocks having a local management circuit. The management unit is configured to determine the operational state of the IC based on the respective operating states of each of the functional blocks. Responsive to determining the operational state of the IC, the management unit may provide indications of the same to the local management circuit of each of the functional blocks. The local management circuit for each of the functional blocks may select one or more tunable parameters based on the operational state determined by the management unit. | 08-21-2014 |
20140244920 | SCHEME TO ESCALATE REQUESTS WITH ADDRESS CONFLICTS - Techniques for escalating a real time agent's request that has an address conflict with a best effort agent's request. A best effort request can be allocated in a memory controller cache but can progress slowly in the memory system due to its low priority. Therefore, when a real time request has an address conflict with an older best effort request, the best effort request can be escalated if it is still pending when the real time request is received at the memory controller cache. Escalating the best effort request can include setting the push attribute of the best effort request or sending another request with a push attribute to bypass or push the best effort request. | 08-28-2014 |
20140297959 | ADVANCED COARSE-GRAINED CACHE POWER MANAGEMENT - Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and each way is powered independently of the other ways. A target active way count is maintained and the system cache attempts to keep the number of currently active ways equal to the target active way count. The bandwidth and allocation intention of the system cache is monitored. Based on these characteristics, the system cache adjusts the target active way count up or down, which then causes the number of currently active ways to rise or fall in response to the adjustment to the target active way count. | 10-02-2014 |
20140298058 | ADVANCED FINE-GRAINED CACHE POWER MANAGEMENT - Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple sections, and each section is supplied with power from one of two supply voltages. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. The cache utilizes a maximum allowed active section policy to limit the number of sections that are active at any given time to reduce leakage power. Each section includes a corresponding idle timer and break-even timer. The idle timer keeps track of how long the section has been idle and the break-even timer is used to periodically wake the section up from retention mode to check if there is a pending request that targets the section. | 10-02-2014 |
20140317355 | CACHE ALLOCATION SCHEME OPTIMIZED FOR BROWSING APPLICATIONS - Methods and systems for cache allocation schemes optimized for browsing applications. A memory controller includes a memory cache for reducing the number of requests that access off-chip memory. When an idle screen use case is detected, the frame buffer is allocated to the memory cache using a sequential allocation mode. Pixels are allocated to indexes of a given way in a sequential fashion, and then each way is accessed in a sequential fashion. When a given way is being accessed, the other ways of the memory cache are put into retention mode to reduce the leakage power. | 10-23-2014 |
20140337649 | Memory Power Savings in Idle Display Case - In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter power management configuration in the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache and the power management configuration may permit the shutting down/size reduction in the memory cache. | 11-13-2014 |
20150143044 | MECHANISM FOR SHARING PRIVATE CACHES IN A SOC - Systems, processors, and methods for sharing an agent's private cache with other agents within a SoC. Many agents in the SoC have a private cache in addition to the shared caches and memory of the SoC. If an agent's processor is shut down or operating at less than full capacity, the agent's private cache can be shared with other agents. When a requesting agent generates a memory request and the memory request misses in the memory cache, the memory cache can allocate the memory request in a separate agent's cache rather than allocating the memory request in the memory cache. | 05-21-2015 |
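The programmable interleave select scheme described in 20120137090 above — where each channel/bank select bit is the XOR-reduction (parity) of the address bits named by a programmable bit vector — can be sketched in a few lines. This is an illustrative model only; the mask values, widths, and function name below are hypothetical, not taken from the patent.

```python
def select_bits(address: int, bit_vectors: list[int]) -> int:
    """Model of a programmable interleave select.

    Each entry in bit_vectors is a programmable mask naming the address
    bits that are XOR-reduced (parity) to form one select bit, so any
    subset of address bits can contribute to any select bit.
    """
    select = 0
    for i, mask in enumerate(bit_vectors):
        # Parity (XOR-reduction) of the address bits chosen by this mask.
        parity = bin(address & mask).count("1") & 1
        select |= parity << i
    return select


# Hypothetical configuration: two select bits (four channels), each
# hashing a different subset of the low address bits.
masks = [0b0101, 0b0011]
print(select_bits(0b0100, masks))  # address bit 2 feeds only select bit 0
```

Combining several address bits per select bit in this way spreads regular access strides (which would otherwise hammer one channel or bank) more evenly across the channels/banks, which is the stated motivation of the abstract.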