Patent application number | Description | Published |
20080240140 | Network interface with receive classification - A network interface that provides improved processing of received packets in a networked computer by classifying packets as they are received. Further, both the characteristics used by the network interface to classify packets and the processing performed on those packets once classified may be programmed. The network interface contains multiple receive queues and one type of processing that may be performed is assigning packets to queues based on classification. A network stack within an operating system of the networked computer can route packets classified by the network interface to application level destinations with reduced processing. Additionally, the priority with which packets of certain classifications are processed may be used to allocate processing power to certain types of packets. As a specific example, a computer subjected to a particular type of denial of service attack sometimes called a “SYN attack” may lower the priority of processing SYN packets to reduce the effect of such an attack. | 10-02-2008 |
20090265720 | EXTENSIBLE PRIVATE DRIVER INTERFACE - A computer with an extensible framework for facilitating communication between a software component installed on the computer and a device driver that executes functions in response to vendor-specific command objects (e.g., OIDs). The framework defines data structures and a standardized format for defining and implementing private interfaces. After selecting a private interface that is commonly supported by a software component and a driver, a private communication path may be established by an operating system component to facilitate the transfer of command information from the software component to the driver. The private communication path allows commands packaged as OIDs to be routed from software components to intended drivers. By defining private interfaces which route commands from software components to intended drivers, the extensible framework mitigates potential incompatibilities that may arise when drivers created by different vendors include OIDs with the same OID value. | 10-22-2009 |
20090303921 | LOW COST MESH NETWORK CAPABILITY - A wireless device that utilizes a single network interface to simultaneously connect to an infrastructure network and a mesh network. The device has a driver layer with a media access control module for each network type. A multiplexing module and transceiver module within the driver can direct received information associated with one of the networks to an appropriate media access control and then to an appropriate network adapter. For transmitted data, the multiplexing module can receive data from the application layer through an appropriate network adapter and route it to an appropriate media access control module for processing. The processed data can be interleaved by the transceiver for transmission. | 12-10-2009 |
20100118868 | SECURE NETWORK OPTIMIZATIONS WHEN RECEIVING DATA DIRECTLY IN A VIRTUAL MACHINE'S MEMORY ADDRESS SPACE - Techniques are disclosed for increasing the security of a system where incoming network packets are directly placed into the memory space of a virtual machine (VM) operating system (OS) running on the system via direct memory access (DMA). In an embodiment, each packet is split into a first portion, which requires further processing, and a second portion, which may be immediately placed into the VM OS's memory address space. When the host OS running on the system completes processing the first portion, it places it directly before the second portion in the VM OS memory space and indicates to the VM OS that a packet is available. Techniques are further disclosed that mitigate the security risk in such systems related to VLAN ID configuration. | 05-13-2010 |
20100153514 | Non-disruptive, reliable live migration of virtual machines with network data reception directly into virtual machines' memory - Techniques are disclosed for the non-disruptive and reliable live migration of a virtual machine (VM) from a source host to a target host, where network data is placed directly into the VM's memory. When a live migration begins, a network interface card (NIC) of the source stops placing newly received packets into the VM's memory. A virtual server driver (VSP) on the source stores the packets being processed and forces a return of the memory where the packets are stored to the NIC. When the VM has been migrated to the target, and the source VSP has transferred the stored packets to the target host, the VM resumes processing the packets, and when the VM sends messages to the target NIC that the memory associated with a processed packet is free, a VSP on the target intercepts that message, blocking the target NIC from receiving it. | 06-17-2010 |
20100174808 | NETWORK PRESENCE OFFLOADS TO NETWORK INTERFACE - A computing device that has a network interface that performs a subset of possible networking functions while the computing device is in a sleep mode. The subset of functions may be simply implemented on the network interface, yet substantially reduce the frequency with which the computing device has to wake up to perform networking functions. The subset of functions may be selected to maintain a network presence of the computing device while the device is in sleep mode, and may include responding to requests for MAC information, sending keep-alive messages, or exchanging security information that, in accordance with network protocols, has a limited lifetime that would otherwise expire while the computing device is in sleep mode. | 07-08-2010 |
20120030674 | Non-Disruptive, Reliable Live Migration of Virtual Machines with Network Data Reception Directly into Virtual Machines' Memory - Techniques are disclosed for the non-disruptive and reliable live migration of a virtual machine (VM) from a source host to a target host, where network data is placed directly into the VM's memory. When a live migration begins, a network interface card (NIC) of the source stops placing newly received packets into the VM's memory. A virtual server driver (VSP) on the source stores the packets being processed and forces a return of the memory where the packets are stored to the NIC. When the VM has been migrated to the target, and the source VSP has transferred the stored packets to the target host, the VM resumes processing the packets, and when the VM sends messages to the target NIC that the memory associated with a processed packet is free, a VSP on the target intercepts that message, blocking the target NIC from receiving it. | 02-02-2012 |
20130019042 | MECHANISM TO SAVE SYSTEM POWER USING PACKET FILTERING BY NETWORK INTERFACE (Inventors: Osman N. Ertugay, Bellevue, WA; David G. Thaler, Redmond, WA; Mahender Hari, Redmond, WA; Andrew J. Ritz, Sammamish, WA; Alireza Dabagh, Kirkland, WA) - A network interface that connects a computing device to a network may be configured to process incoming packets and determine an action to take with respect to each packet, thus decreasing processing demands on a processor of the computing device. The action may be indicating the packet to an operating system of the computing device immediately, storing the packet in one of one or more queues, or discarding the packet. When the processor is interrupted, multiple packets aggregated on the network interface may be indicated to the operating system all at once to increase the device's power efficiency. Hardware of the network interface may be programmed to process the packets using filter criteria specified by the operating system based on information gathered by the operating system, such as firewall rules. | 01-17-2013 |
20130055270 | PERFORMANCE OF MULTI-PROCESSOR COMPUTER SYSTEMS - Embodiments of the invention may improve the performance of multi-processor systems in processing information received via a network. For example, some embodiments may enable configuration of a system such that information received is distributed among multiple processors for efficient processing. A user may select from among multiple configuration options, each configuration option being associated with a particular mode of processing information received. By selecting a configuration option, the user may specify how information received is processed to capitalize on the system's characteristics, such as by aligning processors on the system with certain NICs. As such, the processor(s) aligned with a NIC may perform networking-related tasks associated with information received by that NIC. If initial alignment causes one or more processors to become over-burdened, processing tasks may be dynamically re-distributed to other processors. | 02-28-2013 |
20130061047 | SECURE AND EFFICIENT OFFLOADING OF NETWORK POLICIES TO NETWORK INTERFACE CARDS - Techniques for efficient and secure implementation of network policies in a network interface controller (NIC) in a host computing device operating a virtualized computing environment. In some embodiments, the NIC may process and forward packets directly to their destinations, bypassing a parent partition of the host computing device. In particular, in some embodiments, the NIC may store network policy information to process and forward packets directly to a virtual machine (VM). If the NIC is unable to process a packet, then the NIC may forward the packet to the parent partition. In some embodiments, the NIC may use an encapsulation protocol to transmit address information in packet headers. In some embodiments, this address information may be communicated by the NIC to the parent partition via a secure channel. The NIC may also obtain, and decrypt, encrypted addresses from the VMs for routing packets, bypassing the parent partition. | 03-07-2013 |
20130067466 | Virtual Switch Extensibility - An extensible virtual switch allows virtual machines to communicate with one another and optionally with other physical devices via a network. The extensible virtual switch includes an extensibility protocol binding, allowing different extensions to be added to the extensible virtual switch. The extensible virtual switch also includes a miniport driver on which the extensions are loaded, tying the lifetimes of the extensions to the lifetime of the extensible virtual switch. | 03-14-2013 |
20130239119 | Dynamic Processor Mapping for Virtual Machine Network Traffic Queues - An algorithm for dynamically adjusting the number of processors servicing Virtual Machine Queues (VMQs) and the mapping of the VMQs to the processors based on network load and processor usage in the system. The algorithm determines the total load on a processor and, depending on whether the total load exceeds or falls below a threshold, moves at least one of the VMQs to a different processor based on certain criteria, such as whether the destination processor is the home processor of the VMQ or whether it shares a common NUMA node with the VMQ. By doing so, better I/O throughput and lower power consumption can be achieved. | 09-12-2013 |
20130343191 | ENSURING PREDICTABLE AND QUANTIFIABLE NETWORKING PERFORMANCE - Techniques for ensuring predictable and quantifiable networking performance. Embodiments of the invention combine a congestion-free network core with a hypervisor-based (i.e., edge-based) throttling design to help ensure quantitative and invariable subscription bandwidth rates. A lightweight shim layer in a hypervisor can adaptively throttle the rate of VM-to-VM traffic flow. A receiving hypervisor can detect congestion and communicate back to sending hypervisors that rates are to be regulated. In response, sending hypervisors can reduce transmission rate to mitigate congestion at the receiving hypervisor. In some embodiments, the principles are extended to any message processors communicating over a congestion-free network. | 12-26-2013 |
20130343399 | OFFLOADING VIRTUAL MACHINE FLOWS TO PHYSICAL QUEUES - The present invention extends to methods, systems, and computer program products for offloading virtual machine flows to physical queues. A computer system executes one or more virtual machines, and programs a physical network device with one or more rules that manage network traffic for the virtual machines. The computer system also programs the network device to manage network traffic using the rules. In particular, the network device is programmed to determine availability of one or more physical queues at the network device that are usable for processing network flows for the virtual machines. The network device is also programmed to identify network flows for the virtual machines, including identifying characteristics of each network flow. The network device is also programmed to, based on the characteristics of the network flows and based on the rules, assign one or more of the network flows to at least one of the physical queues. | 12-26-2013 |
20140233427 | LOW COST MESH NETWORK CAPABILITY - A wireless device that utilizes a single network interface to simultaneously connect to an infrastructure network and a mesh network. The device has a driver layer with a media access control module for each network type. A multiplexing module and transceiver module within the driver can direct received information associated with one of the networks to an appropriate media access control and then to an appropriate network adapter. For transmitted data, the multiplexing module can receive data from the application layer through an appropriate network adapter and route it to an appropriate media access control module for processing. The processed data can be interleaved by the transceiver for transmission. | 08-21-2014 |
20140347998 | ENSURING PREDICTABLE AND QUANTIFIABLE NETWORKING PERFORMANCE - Techniques for ensuring predictable and quantifiable networking performance. Embodiments of the invention combine a congestion-free network core with a hypervisor-based (i.e., edge-based) throttling design to help ensure quantitative and invariable subscription bandwidth rates. A lightweight shim layer in a hypervisor can adaptively throttle the rate of VM-to-VM traffic flow. A receiving hypervisor can detect congestion and communicate back to sending hypervisors that rates are to be regulated. In response, sending hypervisors can reduce transmission rate to mitigate congestion at the receiving hypervisor. In some embodiments, the principles are extended to any message processors communicating over a congestion-free network. | 11-27-2014 |
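The receive-classification idea in 20080240140 can be sketched as an ordered rule table: both the match criteria and the action (here, assignment to a prioritized receive queue) are programmable, and demoting bare SYN packets illustrates the anti-SYN-flood use described in the abstract. The queue names, rule fields, and dictionary packet representation below are illustrative assumptions, not the patent's actual data structures.

```python
from collections import deque

class ReceiveClassifier:
    """Toy model of programmable receive-side packet classification."""

    def __init__(self):
        # Multiple receive queues, as in the abstract; names are assumed.
        self.queues = {"high": deque(), "normal": deque(), "low": deque()}
        self.rules = []  # list of (predicate, queue_name), checked in order

    def add_rule(self, predicate, queue_name):
        self.rules.append((predicate, queue_name))

    def classify(self, packet):
        """Place the packet on the first matching queue; default to normal."""
        for predicate, queue_name in self.rules:
            if predicate(packet):
                self.queues[queue_name].append(packet)
                return queue_name
        self.queues["normal"].append(packet)
        return "normal"

clf = ReceiveClassifier()
# Under a SYN flood, demote bare SYN packets to the low-priority queue.
clf.add_rule(lambda p: p.get("tcp_flags") == {"SYN"}, "low")
# Give an interactive service (port 22 here, arbitrarily) high priority.
clf.add_rule(lambda p: p.get("dst_port") == 22, "high")
```

Because rules are checked in order, a SYN-flood demotion rule placed first wins even for ports that would otherwise be high priority; that ordering choice is part of what the abstract calls programmable processing.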
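The power-saving filter model in 20130019042 distinguishes three per-packet actions: indicate to the OS immediately, aggregate on the NIC until the next interrupt, or discard. A minimal sketch, assuming a rule format of predicate/action pairs (the real filter criteria would be derived from OS state such as firewall rules):

```python
from enum import Enum

class Action(Enum):
    INDICATE = 1   # hand to the OS immediately
    QUEUE = 2      # aggregate on the NIC until the next interrupt
    DISCARD = 3    # drop without waking the processor

class FilteringNic:
    def __init__(self, rules):
        self.rules = rules   # list of (predicate, Action) pairs
        self.pending = []    # packets aggregated on the NIC

    def receive(self, packet):
        for predicate, action in self.rules:
            if predicate(packet):
                break
        else:
            action = Action.QUEUE   # unmatched packets batch for later
        if action is Action.QUEUE:
            self.pending.append(packet)
        return action

    def interrupt(self):
        """On interrupt, indicate all aggregated packets at once."""
        batch, self.pending = self.pending, []
        return batch

# Filter criteria as an OS firewall might program them (illustrative).
nic = FilteringNic([
    (lambda p: p["port"] == 7, Action.DISCARD),    # drop echo probes
    (lambda p: p["port"] == 80, Action.INDICATE),  # wake for HTTP
])
```

The power win in the abstract comes from `interrupt()` delivering many queued packets per processor wake-up instead of one interrupt per packet.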
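The rebalancing loop in 20130239119 can be approximated as: when a processor's total load crosses a threshold, move one VMQ off it, preferring the queue's home processor, then a processor sharing the overloaded one's NUMA node, then the least-loaded processor overall. The threshold value, the load metric, and the tie-break order below are assumptions for illustration.

```python
def rebalance(queue_cpu, cpu_load, queue_load, home_cpu, numa_node,
              high=0.8):
    """Return an updated VMQ->processor map after at most one move."""
    for cpu, load in cpu_load.items():
        if load <= high:
            continue                      # this processor is not overloaded
        victims = [q for q, c in queue_cpu.items() if c == cpu]
        if not victims:
            continue
        # Move the heaviest queue on the overloaded processor.
        q = max(victims, key=lambda q: queue_load[q])
        others = [c for c in cpu_load if c != cpu]

        def rank(c):
            # Best first: home processor, then same NUMA node, then
            # whichever candidate is least loaded (False sorts first).
            return (c != home_cpu[q],
                    numa_node[c] != numa_node[cpu],
                    cpu_load[c])

        target = min(others, key=rank)
        new_map = dict(queue_cpu)
        new_map[q] = target
        return new_map
    return queue_cpu                      # nothing exceeded the threshold
```

Moving only one queue per pass keeps the adjustment gradual, which matches the abstract's goal of reacting to load without thrashing queue-to-processor affinity.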
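The edge-based throttling loop of 20130343191 / 20140347998 reduces to a simple feedback cycle: the receiving hypervisor detects congestion and signals its senders, which cut their VM-to-VM transmission rate. A simplified model, where the queue-depth threshold and the multiplicative backoff factor are illustrative assumptions:

```python
class SendingHypervisor:
    """Edge sender whose shim layer adaptively throttles VM-to-VM flow."""

    def __init__(self, rate_mbps):
        self.rate_mbps = rate_mbps

    def on_congestion(self, factor=0.5):
        # Back off multiplicatively to mitigate congestion at the receiver.
        self.rate_mbps *= factor

class ReceivingHypervisor:
    """Edge receiver that detects congestion and signals senders."""

    def __init__(self, queue_limit=100):
        self.queue_limit = queue_limit
        self.queue_depth = 0

    def on_packet(self, sender):
        self.queue_depth += 1
        if self.queue_depth > self.queue_limit:
            sender.on_congestion()   # feedback to the sending edge
```

Keeping all regulation at the hypervisor edges is what lets the design pair with a congestion-free core: the core never needs per-flow state, because rate control happens entirely between the communicating hypervisors.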