Patent application number | Description | Published |
20080307078 | SYSTEM AND METHOD FOR INTERFACING WITH A MANAGEMENT SYSTEM - Systems and methods that interface with a management system are provided. In one embodiment, a system and a method may provide a command protocol and format for communication between a network interface card (NIC) and a management device such as, for example, an intelligent management device (IMD). An interface may be adapted to allow the management device to merge its traffic with that of the NIC to provide a fully integrated management solution. The fully integrated management solution may be implemented, for example, without additional network connections. | 12-11-2008 |
20090147677 | SYSTEM, METHOD, AND APPARATUS FOR LOAD-BALANCING TO A PLURALITY OF PORTS - A system, method, and apparatus for load-balancing to a plurality of ports is presented herein. A miniport driver is adapted to multiplex and demultiplex traffic workload across the ports. The miniport driver classifies outgoing packet streams and distributes each packet stream to a communication ring, such as an Ethernet ring, for example, associated with at least one of the ports. Additionally, the miniport driver can configure operation of the plurality of ports in one of several modes, including a mode wherein the plurality of ports act as a single logical interface. | 06-11-2009 |
20090254647 | SYSTEM AND METHOD FOR NETWORK INTERFACING - Systems and methods for network interfacing may include a communication data center with a first tier, a second tier and a third tier. The first tier may include a first server with a first single integrated convergent network controller chip. The second tier may include a second server with a second single integrated convergent network controller chip. The third tier may include a third server with a third single integrated convergent network controller chip. The second server may be coupled to the first server via a single fabric with a single connector. The third server may be coupled to the second server via the single fabric with the single connector. The first, second and third servers may each process a plurality of different traffic types concurrently via their respective single integrated convergent network controller chips over the single fabric that is coupled to the single connector. | 10-08-2009 |
20100138584 | Method and System for Addressing a Plurality of Ethernet Controllers Integrated into a Single Chip Which Utilizes a Single Bus Interface - A system for arbitrating access to a shared resource is disclosed and may include a bus interface, a first network controller for handling a first host function associated with a first host process, a second network controller for handling a second host function associated with a second host process, and an arbitrator for granting access to the shared resource for one of the first host process and the second host process. The arbitrator may facilitate a transfer of information to and from the bus interface and the shared resource. The first network controller and the second network controller may be integrated within a single chip. The shared resource may be a nonvolatile memory, a flash memory interface, an EEPROM interface, and/or a Serial Programming Interface (SPI). | 06-03-2010 |
20100250783 | SYSTEM AND METHOD FOR TCP/IP OFFLOAD INDEPENDENT OF BANDWIDTH DELAY PRODUCT - A network interface device may include an offload engine that receives control of state information while a particular connection is offloaded. Control of the state information for the particular connection may be split between the network interface device and a host. At least one connection variable may be updated and provided to the host. | 09-30-2010 |
20110016245 | Method and System for Addressing a Plurality of Ethernet Controllers Integrated into a Single Chip Which Utilizes a Single Bus Interface - A method for processing network data is disclosed and may include receiving data via a single bus interface to which each of a plurality of Ethernet controllers are coupled, where the Ethernet controllers are integrated within a single chip. A particular one of the integrated Ethernet controllers may be identified based on information within the received data. The particular one of the integrated Ethernet controllers may be granted access to a shared resource within the single chip. The access to the shared resource may be granted using at least one semaphore register within the shared resource. The particular one of the integrated Ethernet controllers may be granted access to the single bus interface. The information may include a bus identifier, a bus device identifier and/or a bus function identifier. The shared resource may include a nonvolatile memory (NVM). | 01-20-2011 |
20110040891 | System and Method for TCP Offload - A system for processing packets is disclosed and may include a network interface card (NIC). The NIC may include a TCP enabled Ethernet controller (TEEC). The TEEC may include an internal elastic buffer. The TEEC may process received incoming TCP packets once and may temporarily buffer at least a portion of the incoming TCP packets in the internal elastic buffer. The processing may occur without reassembly or retransmission. The internal elastic buffer may include a receive internal elastic buffer and a transmit internal elastic buffer. The receive internal elastic buffer may temporarily buffer at least a portion of the received incoming TCP packets. The transmit internal elastic buffer may temporarily buffer at least a portion of TCP packets to be transmitted. The TEEC may place at least a portion of the received incoming TCP packet data into at least a portion of a host memory. | 02-17-2011 |
20110185076 | System and Method for Network Interfacing - Systems and methods for network interfacing may include a communication data center with a first tier, a second tier and a third tier. The first tier may include a first server with a first single integrated convergent network controller chip. The second tier may include a second server with a second single integrated convergent network controller chip. The third tier may include a third server with a third single integrated convergent network controller chip. The second server may be coupled to the first server via a single fabric with a single connector. The third server may be coupled to the second server via the single fabric with the single connector. The first, second and third servers may each process a plurality of different traffic types concurrently via their respective single integrated convergent network controller chips over the single fabric that is coupled to the single connector. | 07-28-2011 |
20110246662 | SYSTEM AND METHOD FOR TCP OFFLOAD - Aspects of the invention may comprise receiving an incoming TCP packet at a TEEC and processing at least a portion of the incoming packet once by the TEEC without having to do any reassembly and/or retransmission by the TEEC. At least a portion of the incoming TCP packet may be buffered in at least one internal elastic buffer of the TEEC. The internal elastic buffer may comprise a receive internal elastic buffer and/or a transmit internal elastic buffer. Accordingly, at least a portion of the incoming TCP packet may be buffered in the receive internal elastic buffer. At least a portion of the processed incoming packet may be placed in a portion of a host memory for processing by a host processor or CPU. Furthermore, at least a portion of the processed incoming TCP packet may be DMA transferred to a portion of the host memory. | 10-06-2011 |
20110314171 | SYSTEM AND METHOD FOR PROVIDING POOLING OR DYNAMIC ALLOCATION OF CONNECTION CONTEXT DATA - A method for processing of packetized data is disclosed and includes allocating a plurality of partitions of a single context memory for handling data for a corresponding plurality of network protocol connections. Data for at least one of the plurality of network protocol connections may be processed utilizing a corresponding at least one of the plurality of partitions of the single context memory. The at least one of the plurality of partitions of the single context memory may be de-allocated, when the corresponding at least one of the plurality of network protocol connections is terminated. The data for the at least one of the plurality of network protocol connections may be received. The data may be associated with a single network protocol or with a plurality of network protocols. The data for the at least one of the plurality of network protocol connections includes context data. | 12-22-2011 |
20140129737 | SYSTEM AND METHOD FOR NETWORK INTERFACING IN A MULTIPLE NETWORK ENVIRONMENT - Systems and methods for network interfacing in a multiple network environment are provided. In one embodiment, the system includes, for example, a network connector, a processor, a peripheral component interface (PCI) bridge and a unified driver. The processor may be coupled to the network connector and to the PCI bridge. The processor may be adapted, for example, to process a plurality of different types of network traffic. The unified driver may be coupled to the PCI bridge and may be adapted to provide drivers associated with the plurality of different types of network traffic. | 05-08-2014 |
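Several abstracts above (e.g. 20100138584 and 20110016245) describe multiple Ethernet controllers integrated on one chip sharing a single bus interface and a shared resource such as a nonvolatile memory, with access granted through a semaphore register and controllers identified by bus/device/function information in the received data. The following is a minimal illustrative sketch of that arbitration idea, not code from any of the filings; all class, field, and identifier names (`SharedNVM`, `function_id`, etc.) are assumptions made for the example.

```python
import threading

class SharedNVM:
    """Illustrative shared resource (e.g. nonvolatile memory) guarded by a
    semaphore so only one integrated controller holds it at a time."""

    def __init__(self):
        self._sem = threading.Semaphore(1)  # stands in for the semaphore register
        self._owner = None
        self.data = {}

    def acquire(self, controller_id):
        # Non-blocking grant: a controller either obtains the resource or
        # must retry later, mimicking a semaphore-register test-and-set.
        if self._sem.acquire(blocking=False):
            self._owner = controller_id
            return True
        return False

    def release(self, controller_id):
        # Only the current owner may release the resource.
        if self._owner == controller_id:
            self._owner = None
            self._sem.release()

def route_to_controller(packet, controllers):
    """Pick the target controller from addressing info carried in the data
    (here a hypothetical 'function_id', standing in for a bus/device/function
    identifier on the single bus interface)."""
    return controllers[packet["function_id"]]
```

In use, a second controller's acquire attempt fails while the first holds the resource, and succeeds once the first releases it, which is the essential arbitration behavior the abstracts describe.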
Patent application number | Description | Published |
20090100280 | METHOD AND SYSTEM FOR IMPROVING PCI-E L1 ASPM EXIT LATENCY - The disclosed systems and methods relate to improving PCI Express (PCI-E) L1 Active State Power Management (ASPM) exit latency by speculatively initiating early L1 exit based on a network stimulus. Aspects of the present invention may enable a higher level of performance and responsiveness while supporting the benefits of ASPM. Aspects of the present invention may minimize operational cost by reducing latency in processes that utilize a PCI-E interface. Aspects of the present invention may be embodied in a Network Interface Controller (NIC) or any other device with a PCI-E interface that supports ASPM. | 04-16-2009 |
20090110051 | METHOD AND SYSTEM FOR REDUCING THE IMPACT OF LATENCY ON VIDEO PROCESSING - The disclosed systems and methods relate to reducing the effect of video processing latency in devices that utilize PCI Express Active State Power Management (PCI-E ASPM). Power state transition delay may be reduced by initiating an early L1 exit based on a video processing stimulus. Aspects of the present invention may enable a higher level of performance and responsiveness while supporting the benefits of ASPM. Aspects of the present invention may be embodied in a video processing device that uses a video accelerator with a PCI-E interface. | 04-30-2009 |
20100121978 | SYSTEM AND METHOD FOR INTERFACING WITH A MANAGEMENT SYSTEM - A network controller may split, via a pass-through driver, processing of transmit and/or receive network traffic handled by the network controller. Physical layer (PHY) processing and/or Medium Access Control (MAC) processing of the management traffic may be performed internally via the network controller. The pass-through driver may route at least a portion of management traffic carried via the transmit and/or receive network traffic externally to said network controller for processing. In this regard, the pass-through driver may enable routing of data and/or messages to enable performing the external processing of management traffic. An application processor may be used to perform the external processing of management traffic. | 05-13-2010 |
20100192218 | METHOD AND SYSTEM FOR PACKET FILTERING FOR LOCAL HOST-MANAGEMENT CONTROLLER PASS-THROUGH COMMUNICATION VIA NETWORK CONTROLLER - A network controller in a communication device may be operable to provide pass-through communication of local host-management traffic between a local host and a management controller within the communication device, wherein the local host may be operable to utilize its network processing resources during communication of the local host-management traffic. The network controller may use packet filtering to provide the pass-through communication, wherein the network controller may utilize a plurality of filtering rules during filtering of packets received in the network controller. The filtering rules may specify packet processing and/or forwarding actions by said network controller based on one or more specified conditions. The specified conditions may be based on one or more match criteria, wherein the match criteria may comprise source address, destination address, and/or traffic type data in the received packets. Address learning mechanisms may be used in the network controller to enable configuring and/or performing packet filtering transparently. | 07-29-2010 |
20110035489 | SYSTEM AND METHOD FOR INTERFACING WITH A MANAGEMENT SYSTEM - Systems and methods that interface with a management system are provided. In one embodiment, a system and a method may provide a command protocol and format for communication between a network interface card (NIC) and a management device such as, for example, an intelligent management device (IMD). An interface may be adapted to allow the management device to merge its traffic with that of the NIC to provide a fully integrated management solution. The fully integrated management solution may be implemented, for example, without additional network connections. | 02-10-2011 |
20120213118 | METHOD AND SYSTEM FOR NETWORK INTERFACE CONTROLLER (NIC) ADDRESS RESOLUTION PROTOCOL (ARP) BATCHING - A NIC of a host system may provide batching services to enable reducing and/or optimizing overall system power consumption. Batching services may comprise buffering a received packet within the NIC for an extended period of time, longer than the buffering time during normal handling of received packets, based on a determination that delaying handling of the received packet by the host system is permitted. Delaying handling of received packets may enable at least one component of the host system, such as a processor, utilized during that handling to remain in power saving states. The received packet may comprise a broadcast ARP packet that does not require a response from the host system. Packets buffered in the NIC may be forwarded to the host system when one or more flushing conditions occur. Flushing conditions may comprise reception of unicast packets destined for the host system or broadcast packets requiring a response from the host system. | 08-23-2012 |
20130198538 | Enhanced Buffer-Batch management for Energy Efficient Networking - Various methods and systems are provided for buffer-batch management for energy efficient networking. In one embodiment, among others, a system includes a host device including an interface with a network. A device driver monitors requests to transmit packets from the host device to the network, buffers the packets in memory of the host device when the host device network interface is estimated to be in a low power mode, and initiates transition of the host device network interface to a full power mode based at least in part upon predefined criteria associated with the buffered packets. The host device network interface may begin transmission of the buffered packets when the host device network interface enters the full power mode. The host device network interface may be a network interface controller such as, e.g., an Ethernet controller configured for Energy Efficient Ethernet operation. | 08-01-2013 |
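The ARP-batching and buffer-batch abstracts above (20120213118 and 20130198538) share one idea: hold packets that do not need immediate attention so that a processor or link can stay in a power-saving state, and flush the batch when a wake-worthy event occurs. The sketch below illustrates that policy on the receive side under stated assumptions; the class name, packet fields (`dst`, `requires_response`), and the fixed batch limit are all hypothetical, not taken from the filings.

```python
class BatchingNic:
    """Illustrative receive-side batching: packets that need no host
    response are buffered; a packet addressed to the host (or any packet
    requiring a response) flushes the whole batch to the host."""

    def __init__(self, host_addr, limit=8):
        self.host_addr = host_addr
        self.limit = limit      # assumed cap so the batch cannot grow unbounded
        self.buffer = []

    def receive(self, pkt):
        """Return the list of packets delivered to the host now
        (empty when the packet was merely batched)."""
        needs_host = (pkt.get("dst") == self.host_addr or
                      pkt.get("requires_response", False))
        if needs_host or len(self.buffer) + 1 >= self.limit:
            flushed = self.buffer + [pkt]
            self.buffer = []
            return flushed
        self.buffer.append(pkt)
        return []
```

For example, a broadcast ARP packet that requires no response is simply batched, while a later unicast packet destined for the host flushes both packets up at once, letting the host sleep in between.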
Patent application number | Description | Published |
20090138639 | NETWORK ADAPTER WITH TCP WINDOWING SUPPORT - A network adapter and corresponding method for its use are disclosed. The network adapter has an operational mode that allows a host CPU to offload transmission of a block of data to the adapter. The adapter segments the block into fragments, and builds a data packet for each fragment. The adapter transmits these packets with an adapter-implemented flow control. This flow control uses: a context engine that tracks flow control variables for a “context” established for the block; a context memory for storing the variables; and a receive filter that updates flow control information for the block based on ACK packets received from the remote endpoint receiving the data packets. Because the network adapter implements flow control for data blocks that the network adapter segments, intermediate ACK packets corresponding to that block can be intercepted by the adapter, before they pass to the host, conserving host resources. An added advantage is that the host CPU can offload data blocks larger than the remote endpoint's receive window size, since the adapter can follow the transmit window and transmit packets at appropriate intervals. This further decreases load on the host CPU, decreases latency, and improves bandwidth utilization. | 05-28-2009 |
20110179183 | NETWORK ADAPTER WITH TCP SUPPORT - A network adapter and corresponding method for its use are disclosed. The network adapter has an operational mode that allows a host CPU to offload transmission of a block of data to the adapter. The adapter segments the block into fragments, and builds a data packet for each fragment. The adapter transmits these packets with an adapter-implemented flow control. This flow control uses: a context engine that tracks flow control variables for a “context” established for the block; a context memory for storing the variables; and a receive filter that updates flow control information for the block based on ACK packets received from the remote endpoint receiving the data packets. Because the network adapter implements flow control for data blocks that the network adapter segments, intermediate ACK packets corresponding to that block can be intercepted by the adapter, before they pass to the host, conserving host resources. An added advantage is that the host CPU can offload data blocks larger than the remote endpoint's receive window size, since the adapter can follow the transmit window and transmit packets at appropriate intervals. This further decreases load on the host CPU, decreases latency, and improves bandwidth utilization. | 07-21-2011 |
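The two TCP-windowing abstracts above describe an adapter that segments an offloaded block, tracks flow-control variables in a per-block "context", and intercepts intermediate ACKs so the host can offload blocks larger than the remote receive window. The following is a simplified sketch of that windowing logic, not the adapter's actual implementation; the segment-counted window, `TxContext` name, and `MSS` value are assumptions made for illustration.

```python
MSS = 4  # illustrative segment size in bytes, not a real adapter parameter

def segment(block, mss=MSS):
    """Split an offloaded block into MSS-sized fragments, as the adapter
    does before building one data packet per fragment."""
    return [block[i:i + mss] for i in range(0, len(block), mss)]

class TxContext:
    """Per-block context tracking flow-control state, so intermediate
    ACKs are consumed here instead of reaching the host."""

    def __init__(self, block, window):
        self.segments = segment(block)
        self.window = window   # remote receive window, counted in segments
        self.next_seg = 0      # index of the next segment to transmit
        self.acked = 0         # segments acknowledged by the remote endpoint

    def sendable(self):
        """Return the segments the current window permits in flight."""
        hi = min(self.acked + self.window, len(self.segments))
        out = self.segments[self.next_seg:hi]
        self.next_seg = hi
        return out

    def on_ack(self, n):
        """Receive filter intercepts an ACK covering n segments,
        sliding the window forward without involving the host CPU."""
        self.acked = max(self.acked, n)
```

With a 12-byte block and a 2-segment window, only two of the three segments go out initially; once the intercepted ACK advances the window, the context releases the final segment, which is how the adapter follows the transmit window for blocks larger than the remote window.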