MELLANOX TECHNOLOGIES LTD. Patent applications |
Patent application number | Title | Published |
20150317251 | MAINTAINING A SYSTEM STATE CACHE - Methods, apparatuses and computer software products implement embodiments of the present invention that include storing, to a module memory in each of a plurality of modules having multiple sub-modules, a record containing record entries corresponding respectively to the sub-modules. Upon detecting changes in respective states of the sub-modules of a given module, the corresponding record entries are set in response to the detected changes in the states of the sub-modules of the given module. A cache containing cache entries corresponding respectively to the sub-modules in the plurality of the modules is stored to a controller memory, and the record in each of the modules is polled. Upon detecting that a given record entry of the given module has been set, the current state information with respect to the given sub-module is requested and received from the given module, and a corresponding cache entry in the cache is updated with the current state information. | 11-05-2015 |
20150288624 | LOW-LATENCY PROCESSING IN A NETWORK NODE - A method in a network node that includes a host and an accelerator, includes holding a work queue that stores work elements, a notifications queue that stores notifications of the work elements, and control indices for adding and removing the work elements and the notifications to and from the work queue and the notifications queue, respectively. The notifications queue resides on the accelerator, and at least some of the control indices reside on the host. Messages are exchanged between a network and the network node using the work queue, the notifications queue and the control indices. | 10-08-2015 |
20150277970 | REDUCING PROCESSOR LOADING DURING HOUSEKEEPING OPERATIONS - A method includes, in a processor, receiving first and second operations for periodic execution with respective specified time periods. Respective actual time periods having no common divisor are derived from the specified time periods. The first and second operations are executed periodically with the respective actual time periods. | 10-01-2015 |
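The scheme in 20150277970 can be sketched as follows: derive actual periods that share no common divisor (are coprime), so the two housekeeping operations rarely fall due on the same tick. This is an illustrative sketch under assumed integer tick periods, not the patented implementation; the name `coprime_periods` is hypothetical.

```python
from math import gcd

def coprime_periods(t1, t2):
    """Derive actual periods with no common divisor (gcd of 1) from the
    specified periods, so the two periodic operations rarely coincide."""
    # Keep the first period as specified; nudge the second upward
    # until the two periods are coprime.
    actual2 = t2
    while gcd(t1, actual2) != 1:
        actual2 += 1
    return t1, actual2
```

For example, specified periods of 4 and 6 ticks (which coincide every 12 ticks) become 4 and 7, which coincide only every 28 ticks.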
20150270899 | CONTROL OF COMMUNICATION NETWORK PERFORMANCE BY VARYING ACTIVE OPTICAL CABLE PARAMETERS - A method includes defining a target performance for a communication network that includes multiple network nodes interconnected by Active Optical Cables (AOCs). Respective parameters, which cause the communication network to achieve the target performance, are selected for the AOCs. Commands are sent to the AOCs to set the selected parameters. | 09-24-2015 |
20150263994 | BUFFERING SCHEMES FOR COMMUNICATION OVER LONG HAUL LINKS - A switching apparatus includes multiple ports, each including a respective buffer, and a switch controller. The switch controller is configured to concatenate the buffers of at least an input port and an output port selected from among the multiple ports for buffering traffic of a long-haul link, which is connected to the input port and whose delay exceeds the buffering capacity of the buffer of the input port alone, and to carry out end-to-end flow control for the long-haul link between the output port and the input port. | 09-17-2015 |
20150261720 | ACCESSING REMOTE STORAGE DEVICES USING A LOCAL BUS PROTOCOL - A method for data storage includes configuring a driver program on a host computer to receive commands in accordance with a protocol defined for accessing local storage devices connected to a peripheral component interface bus of the host computer. When the driver program receives, from an application program running on the host computer, a storage access command in accordance with the protocol, specifying a storage transaction, a remote direct memory access (RDMA) operation is performed by a network interface controller (NIC) connected to the host computer so as to execute the storage transaction via a network on a remote storage device. | 09-17-2015 |
20150261434 | STORAGE SYSTEM AND SERVER - A data storage system includes a storage server, including non-volatile memory (NVM) and a server network interface controller (NIC), which couples the storage server to a network. A host computer includes a host central processing unit (CPU), a host memory and a host NIC, which couples the host computer to the network. The host computer runs a driver program that is configured to receive, from processes running on the host computer, commands in accordance with a protocol defined for accessing local storage devices connected to a peripheral component interface bus of the host computer, and upon receiving a storage access command in accordance with the protocol, to initiate a remote direct memory access (RDMA) operation to be performed by the host and server NICs so as to execute on the storage server, via the network, a storage transaction specified by the command. | 09-17-2015 |
20150103667 | DETECTION OF ROOT AND VICTIM NETWORK CONGESTION - A method in a communication network includes defining a root congestion condition for a network switch if the switch creates congestion in the network while switches downstream are congestion free, and a victim congestion condition if the switch creates the congestion as a result of one or more other congested switches downstream. A buffer fill level in a first switch, created by network traffic, is monitored. A binary notification is received from a second switch, which is connected to the first switch. A decision whether the first switch or the second switch is in a root or a victim congestion condition is made, based on both the buffer fill level and the binary notification. A network congestion control procedure is applied based on the decided congestion condition. | 04-16-2015 |
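The decision logic of 20150103667 lends itself to a compact sketch: a switch whose buffer is filling is a congestion root only if downstream switches report no congestion; otherwise it is a victim. This is a simplified model under an assumed single fill threshold, not the patented method; the function name and threshold parameter are hypothetical.

```python
def classify_congestion(buffer_fill, threshold, downstream_congested):
    """Classify a switch's congestion state from its buffer fill level
    and the binary notification received from the downstream switch."""
    if buffer_fill < threshold:
        return "none"       # buffer not filling: no local congestion
    if downstream_congested:
        return "victim"     # congested because of congested switches downstream
    return "root"           # creates congestion while downstream is congestion free
```

A congestion control procedure would then throttle the root's sources, while a victim would simply propagate back-pressure upstream.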
20150098466 | SIMPLIFIED PACKET ROUTING - A method for communication, includes routing unicast data packets among nodes in a network using respective Layer-3 addresses that are uniquely assigned to each of the nodes. Respective Layer-2 unicast addresses are assigned to the nodes in accordance with an algorithmic mapping of the respective Layer-3 addresses. The unicast data packets are forwarded within subnets of the network using the assigned Layer-2 addresses. | 04-09-2015 |
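One way to picture the algorithmic mapping in 20150098466 is deriving a Layer-2 unicast address directly from the node's unique Layer-3 address, so no address-resolution lookup is needed. The sketch below maps an IPv4-style address into a locally administered MAC; the exact mapping and the IPv4 choice are assumptions for illustration, not the patented scheme.

```python
def l3_to_l2(ipv4_addr):
    """Algorithmically derive a unicast MAC address from a unique
    IPv4-style Layer-3 address: a fixed prefix (0x02 sets the
    locally-administered bit) followed by the four address octets."""
    octets = [int(o) for o in ipv4_addr.split(".")]
    mac = [0x02, 0x00] + octets
    return ":".join(f"{b:02x}" for b in mac)
```

Because the mapping is deterministic, any switch in the subnet can compute a node's Layer-2 address from its Layer-3 address without a resolution protocol.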
20150037029 | WAVELENGTH AUTO-NEGOTIATION - An apparatus includes a bank of optical detectors, an input optical filter and a selector. The optical detectors are configured to output respective detection indications in response to detecting a presence of an optical signal. The input optical filter is configured to receive an input optical signal having an input wavelength, and to route the input optical signal to one of the optical detectors in the bank depending on the input wavelength. The selector is configured to select an output wavelength based on the detection indications of the optical detectors, and to cause generation and transmission of an output optical signal at the selected output wavelength. | 02-05-2015 |
20140369651 | INTEGRATED OPTICAL COOLING CORE FOR OPTOELECTRONIC INTERCONNECT MODULES - An apparatus includes one or more optoelectronic transducers, driving circuitry, one or more cooling elements, and a light coupling module. The optoelectronic transducers are configured to convert between optical signals conveyed over optical fibers and respective electrical signals. The driving circuitry is configured to process the electrical signals. The cooling elements are configured to remove heat that is produced at least by the driving circuitry. The light coupling module is configured to couple the optical signals between the optical fibers and the optoelectronic transducers, and additionally serves as a baseplate for the cooling elements. | 12-18-2014 |
20140348468 | TRANSCEIVER SOCKET ADAPTER FOR PASSIVE OPTICAL CABLE - A communication device includes a mechanical shell, which is configured to be inserted into a Small Form-Factor Pluggable (SFP) receptacle and contains a notch configured to hold a ferrule for mating with a connector of a passive optical cable. The mechanical shell includes molded upper and lower covers, which are joined together along an assembly line. A pair of elastic clips are molded integrally with at least one of the upper and lower covers and are configured to receive and hold the connector when mated with the ferrule. Circuitry within the shell includes electrical terminals configured to mate with corresponding terminals of the receptacle. | 11-27-2014 |
20140313669 | LIQUID COOLING SYSTEM FOR MODULAR ELECTRONIC SYSTEMS - A system for cooling an integrated circuit of an electronic device includes a cooling body and a shelf that is positioned relative to the cooling body for the device to be reversibly inserted onto the shelf so that the cooling body is in thermal contact with the integrated circuit. The cooling body is cooled by introducing a fluid therein via an input conduit. The hot fluid is received from the cooling body by an output conduit and is cooled for recycling. The housing of the electronic device includes a rearward gap that admits the cooling body into the housing of the electronic device. Preferably, further cooling is provided by forcing a gas to flow past the output conduit. | 10-23-2014 |
20140294339 | COMPACT OPTICAL FIBER SPLITTERS - An apparatus includes one or more optical waveguides, one or more first micro-lenses, and one or more second micro-lenses. The one or more optical waveguides are formed in a substrate and are configured to convey respective optical signals between first ends and second ends of the optical waveguides. The one or more first micro-lenses are disposed on the respective first ends of the optical waveguides and are configured to couple the optical signals between the first ends and respective first optical elements. The one or more second micro-lenses are disposed on the respective second ends of the optical waveguides and are configured to couple the optical signals between the second ends and respective second optical elements. | 10-02-2014 |
20140281840 | METHODS AND SYSTEMS FOR ERROR-CORRECTION DECODING - Methods and systems for efficient Reed-Solomon (RS) decoding are provided. The RS decoding unit includes both an RS pseudo decoder and an RS decoder. The RS pseudo decoder is configured to correct a small number of errors in a received codeword, while the RS decoder is configured to correct errors that are recoverable by the RS code. The RS pseudo decoder runs in parallel with the RS decoder. Once the RS pseudo decoder successfully decodes the codeword, the RS decoder may stop its processing, thereby reducing the RS decoding latency. | 09-18-2014 |
20140269711 | COMMUNICATION OVER MULTIPLE VIRTUAL LANES USING A SHARED BUFFER - A method for communication includes, in a sender node that sends packets to a receiver node over a physical link, making a decision, for a packet that is associated with a respective virtual link selected from among multiple virtual links, whether the receiver node is to buffer the packet in a dedicated buffer assigned to the respective virtual link or in a shared buffer that is shared among the multiple virtual links. The packet is sent, and the decision is signaled, from the sender node to the receiver node. | 09-18-2014 |
20140269271 | METHODS AND SYSTEMS FOR NETWORK CONGESTION MANAGEMENT - Methods and systems are disclosed for network congestion management. The methods and systems receive a first packet complying with a first network protocol comprising a first congestion indicator representative of a presence or absence of network congestion and further comprising a first set of data associated with a second network protocol, and provide an indication of the presence or absence of network congestion generated based, at least in part, on the first congestion indicator. The methods and systems also receive a first packet complying with a first network protocol comprising a first set of data associated with a second network protocol, and output a second packet complying with the first network protocol comprising a first congestion indicator representative of a presence of network congestion. | 09-18-2014 |
20140258438 | NETWORK INTERFACE CONTROLLER WITH COMPRESSION CAPABILITIES - A method for communication includes receiving in a network interface controller (NIC) from a host processor, which has a local host memory and is connected to the NIC by a local bus, a remote direct memory access (RDMA) compress-and-write command, specifying a source memory buffer in the local host memory and a target memory address. In response to the command, data are read from the specified buffer into the NIC, compressed in the NIC, and conveyed from the NIC to the target memory address. | 09-11-2014 |
20140248794 | TRANSCEIVER RECEPTACLE CAGE - A connector cage includes a bezel, having a plurality of slots formed therein, and a cage structure including upper and lower sides and multiple partitions extending between the upper and lower sides to define receptacles for receiving cable connectors. Multiple tabs protrude out of at least one of the sides in locations at which the tabs fit into the slots in the bezel, and are folded over the slots so as to secure the cage structure to the bezel. The cage may also include multiple snap-on spring subassemblies, each spring subassembly secured to a front end of a respective partition and comprising leaves that bow outward to contact the shells of the connectors that are inserted into the receptacles adjacent to the partition. | 09-04-2014 |
20140247832 | Responding to dynamically-connected transport requests - A method for communication, includes allocating, in a network interface controller (NIC), a single dynamically-connected (DC) initiator context for serving requests from an initiator process running on an initiator host to transmit data to multiple target processes running on one or more target nodes. The NIC transmits a first connect packet directed to a first target process and referencing the DC initiator context so as to open a first dynamic connection with the first target process. The NIC receives over the packet network, in response to the first connect packet, a first acknowledgment packet containing a first session identifier (ID). Following receipt of the first acknowledgment packet, the NIC transmits one or more first data packets containing the first session ID over the first dynamic connection from the NIC to the first target process. Dynamic connections with other target processes may subsequently be handled in similar fashion. | 09-04-2014 |
20140241344 | DIRECT UPDATING OF NETWORK DELAY IN SYNCHRONIZATION PACKETS - A method includes receiving in a network element a packet, which includes a delay field that indicates an overall time delay accumulated by the packet until arriving at the network element. Upon receiving the packet, an interim value is substituted in the delay field. The interim value is indicative of a difference between the overall time delay and an arrival time of the packet at the network element. Before sending the packet from the network element, the overall time delay is updated in the delay field based on the interim value and on a departure time at which the packet is to exit the network element. The packet, including the updated overall time delay, is transmitted from the network element. | 08-28-2014 |
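The arithmetic in 20140241344 is worth spelling out: substituting an interim value (overall delay minus arrival time) lets the egress stage restore the delay field with a single addition of the departure time, which automatically folds in the element's residence time. A minimal sketch, with hypothetical function names and integer timestamps assumed to share one timebase:

```python
def on_arrival(delay_field, arrival_time):
    """On ingress, replace the overall delay with the interim value:
    the difference between the accumulated delay and the arrival time."""
    return delay_field - arrival_time

def on_departure(interim, departure_time):
    """On egress, restore the overall delay; adding the departure time
    to the interim value yields the old delay plus the residence time."""
    return interim + departure_time
```

For a packet arriving with 100 units of accumulated delay at local time 1000 and departing at 1030, the updated field is 130: the original delay plus the 30-unit residence time, computed without storing the arrival timestamp separately.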
20140231956 | INTEGRATED CIRCUIT INDUCTOR - An inductive device is formed in a circuit structure that includes alternating conductive and insulating layers. The device includes, in a plurality of the conductive layers, traces forming a respective pair of interleaved loops and at least one interconnect segment in each of the plurality of the conductive layers. In each layer among the plurality of the conductive layers, at least one loop in the respective pair is closed by jumpers to an interconnect segment formed in another layer above or below the layer. | 08-21-2014 |
20140211808 | SWITCH WITH DUAL-FUNCTION MANAGEMENT PORT - Communication apparatus includes a switch, which includes switching logic, multiple ports for connection to a network, and a management port, and which is configured to assign both a first link-layer address and a second link-layer address to the management port. A host processor includes a memory and a central processing unit (CPU), which is configured to run software implementing a management agent for managing functions of the switch. A network interface controller (NIC) is connected to the management port and is configured to convey incoming management packets, which are directed by the switch to the first link-layer address, to the CPU for processing by the management agent, and to write directly to the memory data contained in incoming remote direct memory access (RDMA) packets, which are directed by the switch to the second link-layer address. | 07-31-2014 |
20140211631 | ADAPTIVE ROUTING USING INTER-SWITCH NOTIFICATIONS - A method includes receiving in a network switch of a communication network communication traffic that originates from a source node and arrives over a route through the communication network traversing one or more preceding network switches, for forwarding to a destination node. In response to detecting in the network switch a compromised ability to forward the communication traffic to the destination node, a notification is sent to the preceding network switches. The notification is to be consumed by the preceding network switches and requests the preceding network switches to modify the route so as not to traverse the network switch. | 07-31-2014 |
20140201260 | EFFICIENT ACCESS TO CONNECTIVITY INFORMATION USING CABLE IDENTIFICATION - Communication apparatus includes a memory and a communication interface, configured to send and receive messages to and from respective management agents in multiple items of communication equipment having ports that are interconnected by cables in a network, each of the cables having a unique identifier. A processor is configured to communicate with the management agents via the communication interface so as to collect physical connectivity information with respect to the cables and the ports, to store the physical connectivity information in the memory, and to provide the physical connectivity information to a user of the apparatus. | 07-17-2014 |
20140186029 | METHODS AND DEVICES FOR ACTIVE OPTICAL CABLE CALIBRATION - Methods and devices for laser driver calibration are disclosed. The methods and devices disclose determining first and second bit error rates for use in calibrating the laser driver. The methods and devices also disclose that if the first bit error rate associated with a first initial value is above a predetermined bit error rate, increasing the first initial value until the first bit error rate is not above the predetermined bit error rate, and if the second bit error rate associated with a second initial value is above a predetermined bit error rate, decreasing the second initial value until the second bit error rate is not above the predetermined bit error rate. In addition, the methods and devices disclose setting a calibrated parameter for the laser driver based, at least in part, on the increased first initial value and the decreased second initial value. | 07-03-2014 |
20140185616 | Network interface controller supporting network virtualization - A network interface device includes a host interface for connection to a host processor having a memory. A network interface is configured to transmit and receive data packets over a data network, which supports multiple tenant networks overlaid on the data network. Processing circuitry is configured to receive, via the host interface, a work item submitted by a virtual machine running on the host processor, and to identify, responsively to the work item, a tenant network over which the virtual machine is authorized to communicate, wherein the work item specifies a message to be sent to a tenant destination address. The processing circuitry generates, in response to the work item, a data packet containing an encapsulation header that is associated with the tenant network, and transmits the data packet over the data network to at least one data network address corresponding to the specified tenant destination address. | 07-03-2014 |
20140185615 | SWITCH FABRIC SUPPORT FOR OVERLAY NETWORK FEATURES - A method for communication in a packet data network including a subnet containing multiple nodes having respective ports. The method includes assigning respective local identifiers to the ports in the subnet, such that each port receives a respective local identifier that is unique within the subnet to serve as an address for traffic within the subnet that is directed to the port. In addition to the local identifiers, respective port identifiers are assigned to the ports, such that at least one of the port identifiers is shared by a plurality of the ports, but not by all the ports, in the subnet. The plurality of the ports are addressed collectively using the at least one of the port identifiers. | 07-03-2014 |
20140177639 | ROUTING CONTROLLED BY SUBNET MANAGERS - A method for communication in a packet data network that includes at least first and second subnets interconnected by multiple routers and having respective first and second subnet managers. The method includes assigning respective local identifiers to ports for addressing of data link traffic within each subnet, such that the first subnet manager assigns the local identifiers in the first subnet, and the second subnet manager assigns the local identifiers in the second subnet. The routers are configured by transmitting and receiving control traffic between the subnet managers and the routers. Data packets are transmitted between network nodes in the first and second subnets via one or more of the configured routers under control of the subnet managers. | 06-26-2014 |
20140169170 | MAINTAINING CONSISTENT QUALITY OF SERVICE BETWEEN SUBNETS - Network apparatus includes a plurality of interfaces, which are coupled to a network so as to receive and transmit data packets having respective link-layer headers and network-layer headers. Each link-layer header includes respective source and destination link-layer addresses and a link-layer priority value. Switching and routing logic is configured, responsively to the network-layer headers, to transfer each data packet from a respective ingress interface to a respective egress interface and to modify the source and destination link-layer addresses of the transferred data packet while copying the link-layer priority value from the ingress interface to the egress interface without modification. | 06-19-2014 |
20140169169 | ROUTING SUPPORT FOR LOSSLESS DATA TRAFFIC - A method for communication in a packet data network including at least first and second subnets interconnected by routers. The method includes defining at least first and second classes of link-layer traffic within the subnets, such that the link-layer traffic in the first class is transmitted among nodes in the network without loss of packets, while at least some of the packets in the second class are dropped in case of network congestion. The routers are configured by transmitting control traffic over the network in the packets of the second class. Data traffic is transmitted between the nodes in the first and second subnets via the configured routers in the packets of the first class. | 06-19-2014 |
20140143455 | Efficient delivery of completion notifications - A computer peripheral device includes a host interface, which is configured to communicate over a bus with a host processor and with a system memory of the host processor. Processing circuitry in the peripheral device is configured to receive and execute work items submitted to the peripheral device by client processes running on the host processor, and responsively to completing execution of the work items, to generate completion reports and to write a plurality of the completion reports to the system memory via the bus together in a single bus transaction. | 05-22-2014 |
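The batching idea in 20140143455 can be modeled in a few lines: completion reports accumulate in the device and are written to system memory together, so one bus transaction carries many reports. The class below is an illustrative model, not the device logic; `write_fn` stands in for the single DMA write, and the batch size is an assumed tunable.

```python
class CompletionBatcher:
    """Accumulate completion reports and flush them in one simulated
    bus transaction, counting how many transactions were issued."""
    def __init__(self, write_fn, batch_size=4):
        self.write_fn = write_fn      # models the single bus write
        self.batch_size = batch_size
        self.pending = []
        self.transactions = 0

    def complete(self, report):
        self.pending.append(report)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.write_fn(list(self.pending))  # one transaction, many reports
            self.transactions += 1
            self.pending.clear()
```

Seven completions with a batch size of three cost only three bus transactions instead of seven, which is the point of the technique.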
20140143454 | Reducing size of completion notifications - A computer peripheral device includes a host interface, which is configured to communicate over a bus with a host processor and with a system memory of the host processor. Processing circuitry in the peripheral device is configured to receive and execute work items submitted to the peripheral device by client processes running on the host processor, and responsively to completing execution of the work items, to write completion reports to the system memory, including first completion reports of a first data size and second completion reports of a second data size, which is smaller than the first data size. | 05-22-2014 |
20140133797 | Flip-chip optical interface with micro-lens array - An apparatus includes an optically opaque substrate, which includes first and second opposite surfaces and has one or more openings traversing through the substrate between the first and second surfaces. One or more optical transducers are attached to the first surface of the substrate so as to emit or detect light via the respective openings. One or more lenses are positioned against the respective openings on the second surface of the substrate, and are configured to couple the light between the optical transducers and respective optical fibers. | 05-15-2014 |
20140129784 | METHODS AND SYSTEMS FOR POLLING MEMORY OUTSIDE A PROCESSOR THREAD - A system and method of monitoring a memory address are disclosed, which may replace a polling operation on a memory by determining a memory address to monitor, notifying a cache controller of the memory address, and causing execution of the polling thread to wait. The cache controller may then monitor the memory address and notify the processor to resume execution of the thread. While the processor is waiting to be notified, it may enter a power-save state or allow more time to be allocated to other threads being executed. | 05-08-2014 |
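The monitor-instead-of-poll idea in 20140129784 can be modeled with an event per watched address: the waiter blocks on the event rather than spinning on the memory location, and the "cache controller" sets the event when the address is written. This is a software analogy with hypothetical names, not the hardware mechanism.

```python
import threading

class MemoryMonitor:
    """Model: a thread blocks on an event instead of polling an address;
    the cache-controller stand-in sets the event on a write."""
    def __init__(self):
        self.memory = {}
        self.watched = {}            # address -> threading.Event

    def monitor(self, addr):
        # Caller waits on the returned event instead of polling memory.
        return self.watched.setdefault(addr, threading.Event())

    def write(self, addr, value):
        self.memory[addr] = value
        if addr in self.watched:
            self.watched[addr].set()   # wake the waiting thread
```

While blocked on the event, the thread consumes no cycles, mirroring the power-save or thread-yield benefit described above.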
20140129741 | PCI-EXPRESS DEVICE SERVING MULTIPLE HOSTS - A method includes establishing in a peripheral device at least first and second communication links with respective first and second hosts. The first communication link is presented to the first host as the only communication link with the peripheral device, and the second communication link is presented to the second host as the only communication link with the peripheral device. The first and second hosts are served simultaneously by the peripheral device over the respective first and second communication links. | 05-08-2014 |
20140122828 | Sharing address translation between CPU and peripheral devices - A method for memory access includes maintaining in a host memory, under control of a host operating system running on a central processing unit (CPU), respective address translation tables for multiple processes executed by the CPU. Upon receiving, in a peripheral device, a work item that is associated with a given process, having a respective address translation table in the host memory, and specifies a virtual memory address, the peripheral device translates the virtual memory address into a physical memory address by accessing the respective address translation table of the given process in the host memory. The work item is executed in the peripheral device by accessing data at the physical memory address in the host memory. | 05-01-2014 |
20140122556 | INTEGER DIVIDER MODULE - A method includes receiving a dividend and a divisor for performing a division operation. Numbers p and n are found, for which the divisor equals 2 | 05-01-2014 |
20140115206 | METHODS AND SYSTEMS FOR RUNNING NETWORK PROTOCOLS OVER PERIPHERAL COMPONENT INTERCONNECT EXPRESS - Methods and devices for running network protocols over Peripheral Component Interconnect Express are disclosed. The methods and devices may receive an electronic signal comprising data. The methods and devices may also determine that the data corresponds to a protocol selected from a set comprising a PCIe protocol and a network protocol. In addition, the methods and devices may also configure a CPU based on the determined protocol. The methods and devices may also receive a second electronic signal comprising second data at a pin or land of the CPU, wherein the pin or land is connected to a PCIe lane and wherein the second data is formatted in accordance with the determined protocol. In addition, the methods and devices may process the second data in accordance with the determined protocol. | 04-24-2014 |
20140095753 | Network interface controller with direct connection to host memory - A network interface device for a host computer includes a network interface, configured to transmit and receive data packets to and from a network. Packet processing logic transfers data to and from the data packets transmitted and received via the network interface by direct memory access (DMA) from and to a system memory of the host computer. A memory controller includes a first memory interface configured to be connected to the system memory and a second memory interface, configured to be connected to a host complex of the host computer. Switching logic alternately couples the first memory interface to the packet processing logic in a DMA configuration and to the second memory interface in a pass-through configuration. | 04-03-2014 |
20140089528 | Use of free pages in handling of page faults - A method for data transfer includes receiving in an input/output (I/O) operation data to be written to a specified virtual address in a host memory. Upon receiving the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. Responsively to detecting that the first page is swapped out, the received data are written to a second, free page in the host memory, and the specified virtual address is remapped to the free page. | 03-27-2014 |
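The free-page trick in 20140089528 avoids stalling I/O on a swapped-out page: instead of waiting for the original page to be swapped in, the incoming data lands in a free physical page and the virtual address is remapped to it. The model below is a deliberately simplified sketch with hypothetical names, not an OS implementation.

```python
class PageTable:
    """Model: on an I/O write to a swapped-out virtual page, write the
    data to a free physical page and remap instead of swapping in."""
    def __init__(self, free_pages):
        self.mapping = {}          # virtual page -> physical page
        self.free = list(free_pages)
        self.resident = set()      # physical pages currently in memory

    def io_write(self, vpage, data, memory):
        phys = self.mapping.get(vpage)
        if phys is None or phys not in self.resident:
            phys = self.free.pop()       # grab a free page instead of waiting
            self.mapping[vpage] = phys   # remap the virtual address to it
            self.resident.add(phys)
        memory[phys] = data
```

The stale on-disk copy of the original page can simply be discarded later, since the remapped page now holds the current data.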
20140089451 | Application-assisted handling of page faults in I/O operations - A method for data transfer includes receiving in an operating system of a host computer an instruction initiated by a user application running on the host processor identifying a page of virtual memory of the host computer that is to be used in receiving data in a message that is to be transmitted over a network to the host computer but has not yet been received by the host computer. In response to the instruction, the page is loaded into the memory, and upon receiving the message, the data are written to the loaded page. | 03-27-2014 |
20140089450 | Look-Ahead Handling of Page Faults in I/O Operations - A method for data transfer includes receiving in an input/output (I/O) operation a first segment of data to be written to a specified virtual address in a host memory. Upon receiving the first segment of the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. At least one second page of the host memory is identified, to which a second segment of the data is expected to be written. Responsively to detecting that the first page is swapped out and to identifying the at least one second page, at least the first and second pages are swapped into the host memory. After swapping at least the first and second pages into the host memory, the data are written to the first and second pages. | 03-27-2014 |
20140075436 | SYSTEM AND METHOD FOR ACCELERATING INPUT/OUTPUT ACCESS OPERATION ON A VIRTUAL MACHINE - A system and method for accelerating input/output (IO) access operations on a virtual machine. The method comprises providing a smart IO device that includes an unrestricted command queue (CQ) and a plurality of restricted CQs, and allowing a guest domain to directly configure and control, through a respective restricted CQ, the IO resources allocated to the guest domain. In preferred embodiments, the allocation of IO resources to each guest domain is performed by a privileged virtual switching element. In some embodiments, the smart IO device is an HCA and the privileged virtual switching element is a Hypervisor. | 03-13-2014 |
20140023084 | REDUCING POWER CONSUMPTION IN A FAT-TREE NETWORK - A method for communication includes configuring a multi-level fat-tree network to include at least three levels of switches, including multiple modules arranged externally in a tree topology. Each module contains a respective group of the switches arranged in an internal tree extending over at least two of the levels of the network. A subset of the modules is selected to be active in carrying the communication traffic. The network is operated so as to convey communication traffic among the switches via the active modules, while the modules that are not in the selected subset remain inactive. | 01-23-2014 |
20140003441 | Responding to dynamically-connected transport requests | 01-02-2014 |
20130315528 | HIGH-SPEED OPTICAL MODULE WITH FLEXIBLE PRINTED CIRCUIT BOARD - An apparatus includes a base substrate, a light rotation module and a flexible printed circuit board (PCB). The light rotation module has a bottom surface mounted on the base substrate and a top surface coupled to one or more optoelectronic transducers, and is configured to direct optical signals between the respective optoelectronic transducers and optical ports on a side perpendicular to the top surface. The flexible printed circuit board (PCB) includes a first end that is attached to the top surface of the light rotation module and has the optoelectronic transducers mounted thereon, a second end attached to the base substrate, and conductive traces disposed between the first and second ends to direct electrical signals between the optoelectronic transducers and the base substrate. | 11-28-2013 |
20130315237 | Prioritized Handling of Incoming Packets by a Network Interface Controller - A network interface controller includes a host interface, which is configured to be coupled to a host processor having a host memory. A network interface is configured to receive data packets from a network, each data packet including a header, which includes header fields, and a payload including data. Packet processing circuitry is configured to process one or more of the header fields and at least a part of the data and to select, responsively at least to the one or more of the header fields, a location in the host memory. The circuitry writes the data to the selected location and upon determining that the processed data satisfies a predefined criterion, asserts an interrupt on the host processor so as to cause the host processor to read the data from the selected location in the host memory. | 11-28-2013 |
20130311746 | SHARED MEMORY ACCESS USING INDEPENDENT MEMORY MAPS - A method includes defining a first mapping, which translates between logical addresses and physical storage locations in a memory with a first mapping unit size, for accessing the memory by a first processing unit. A second mapping is defined, which translates between the logical addresses and the physical storage locations with a second mapping unit size that is different from the first mapping unit size, for accessing the memory by a second processing unit. Data is exchanged between the first and second processing units via the memory, while accessing the memory by the first processing unit using the first mapping and by the second processing unit using the second mapping. | 11-21-2013 |
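The shared-memory scheme above lets two processing units address the same physical storage through translations with different mapping-unit sizes. A minimal Python sketch of such a translator, assuming an illustrative per-unit lookup table (the abstract does not specify the table layout):

```python
def make_mapping(unit_size, unit_to_phys):
    """Build a logical-to-physical translator for one processing unit.

    unit_size     -- this unit's mapping granularity in bytes
    unit_to_phys  -- table mapping a logical unit index to the physical
                     base address of that unit (illustrative layout)
    """
    def translate(logical_addr):
        unit = logical_addr // unit_size    # which mapping unit
        offset = logical_addr % unit_size   # byte offset within the unit
        return unit_to_phys[unit] + offset
    return translate

# Two units sharing one memory: one maps in 4 KB units, the other in 64 KB
# units, yet both resolve the same logical address to the same physical byte.
small = make_mapping(4096, {i: i * 4096 for i in range(32)})
large = make_mapping(65536, {i: i * 65536 for i in range(2)})
```

With consistent tables, `small(70000)` and `large(70000)` agree even though the two sides never share a mapping structure.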
20130294780 | PLANAR OPTICAL INTERFACE AND SPLITTER - An apparatus includes an optical Input/Output (I/O) connector, which has a central axis that is mounted in a plane and which is configured to connect to external optical fibers for transferring input optical signals to the apparatus and output optical signals from the apparatus. A first optical ferrule is mounted perpendicularly to the optical I/O connector in the plane, and is configured to transfer the input optical signals from the optical I/O connector to respective optical detectors. A second optical ferrule is mounted perpendicularly to the optical I/O connector in the plane, and is configured to transfer the output optical signals from respective optical emitters to the optical connector. A light rotation module is configured to bend and transfer the input and output optical signals between the optical I/O connector and the perpendicularly-mounted first and second optical ferrules. | 11-07-2013 |
20130294725 | OPTICAL INTERFACE AND SPLITTER WITH MICRO-LENS ARRAY - An apparatus includes a connector that connects to optical fibers for connecting first and second optical signals to the apparatus. A first optical ferrule is mounted perpendicularly to the connector, and transfers the first optical signals between the connector and first optical transducers mounted on a first substrate, via first holes formed in the first substrate. A second optical ferrule is mounted perpendicularly to the connector, and transfers the second optical signals between the connector and second optical transducers mounted on a second substrate, via second holes formed in the second substrate. A light rotation module bends and transfers the first and second optical signals between the connector and the first and second ferrules. One or more lenses are mounted between the first ferrule and the first holes, so as to couple the first optical signals via the first holes between the first ferrule and the first optical transducers. | 11-07-2013 |
20130250760 | COMMUNICATION LINK WITH INTRA-PACKET FLOW CONTROL - A method for communication includes transmitting a data packet from a first port to a second port over a communication link. After transmission of a first portion of the data packet, the transmission is temporarily suspended, a flow-control message is sent from the first port to the second port over the communication link while the transmission is temporarily suspended, and then the transmission is resumed so as to transmit a second portion of the data packet. | 09-26-2013 |
20130243368 | OPTOELECTRONIC INTERCONNECTS USING L-SHAPED FIXTURE - An apparatus includes an L-shaped fixture, a first semiconductor die and a second semiconductor die. The L-shaped fixture includes first and second perpendicular faces. The first semiconductor die includes an array of optoelectronic transducers and is attached onto the first face. The second semiconductor die, which is mounted parallel to the second face, includes ancillary circuitry connected to the optoelectronic transducers by electronic interconnects configured within the fixture. | 09-19-2013 |
20130241050 | INTEGRATED OPTOELECTRONIC INTERCONNECTS WITH SIDE-MOUNTED TRANSDUCERS - A method for fabricating an optical interconnect includes producing a semiconductor wafer that includes multiple first dies. Each first die includes circuitry disposed over a surface of the wafer and connected to conductive vias arranged in rows. The multiple first dies are diced by cutting the wafer across the rows of the vias, such that, in each first die, the cut vias form respective contact pads on a side face of the first die that is perpendicular to the surface. A second semiconductor die including one or more optoelectronic transducers is attached to the contact pads, so as to connect the transducers to the circuitry. | 09-19-2013 |
20130209025 | INTEGRATED OPTICAL INTERCONNECT - A method for fabricating an integrated optical interconnect includes disposing a layer over a substrate on which at least one optoelectronic transducer has been formed. A groove is formed in the layer in alignment with the optoelectronic transducer. A slanted mirror is formed in the layer at an end of the groove adjacent to the optoelectronic transducer to direct light between the optoelectronic transducer and an optical fiber placed in the groove. | 08-15-2013 |
20130202247 | OPTICAL MODULE FABRICATED ON FOLDED PRINTED CIRCUIT BOARD - An optical interface module includes a single flexible Printed Circuit Board (PCB) including conductive traces. An electrical connector, one or more opto-electronic transducers and ancillary circuitry are disposed on the flexible PCB. The electrical connector is configured to mate with a corresponding connector on a substrate. The opto-electronic transducers are configured to be coupled to optical fibers carrying optical signals. The ancillary circuitry is coupled by the traces to the opto-electronic transducers and the electrical connector so as to convey electrical signals corresponding to the optical signals between the opto-electronic transducers and the electrical connector. | 08-08-2013 |
20130166793 | HOST CHANNEL ADAPTER WITH PATTERN-TYPE DMA - An input/output (I/O) device includes a memory buffer and off-loading hardware. The off-loading hardware is configured to accept from a host a scatter/gather list including one or more entries. The entries include at least a pattern-type entry that specifies a period of a periodic pattern of addresses that are to be accessed in a memory of the host. The off-loading hardware is configured to transfer data between the memory buffer of the I/O device and the memory of the host by accessing the addresses in the memory of the host in accordance with the periodic pattern at intervals indicated in the period. | 06-27-2013 |
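A pattern-type scatter/gather entry as described above compresses a long list of regularly spaced addresses into a single descriptor. The sketch below expands such an entry into the host addresses it covers; the entry fields (base, period, segment length, count) are an assumed illustrative layout, not the patented descriptor format:

```python
def pattern_addresses(base, period, segment_len, num_segments):
    """Expand one pattern-type scatter/gather entry.

    Returns (address, length) pairs for each segment: the pattern starts
    at `base` and repeats every `period` bytes, `num_segments` times.
    All field names are illustrative assumptions.
    """
    return [(base + i * period, segment_len) for i in range(num_segments)]
```

For example, a column of a row-major matrix in host memory becomes one entry instead of one gather element per row.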
20130159568 | Recovering dropped instructions in a network interface controller - A method for operating a peripheral device includes receiving at the peripheral device service orders, which are identified with respective service instances and are submitted to the peripheral device over the bus by software applications running on a host processor, which write copies of the service orders to a memory. The received service orders are queued for execution by the peripheral device. When one or more of the service orders have been dropped from the queue prior to execution, a recovery of a selected service instance is initiated by submitting a read request from the peripheral device to the memory over the bus to receive a copy of any unexecuted service order associated with the service instance. | 06-20-2013 |
20130142039 | Configurable Access Control Lists Using TCAM - A communication apparatus includes a Content-Addressable Memory (CAM) and packet processing circuitry. The packet processing circuitry is configured to store in respective regions of the CAM multiple Access Control Lists (ACLs) that are defined for respective packet types, to classify an input packet to a respective packet type selected from the packet types, to identify a region holding an ACL defined for the selected packet type, and to process the input packet in accordance with the ACL stored in the identified region. | 06-06-2013 |
20130135999 | DESTINATION-BASED CONGESTION CONTROL - A method for communication includes sending communication packets over a network from a first network interface. A notification, which originates from a second network interface and indicates a network congestion encountered by one or more of the packets, is received in the first network interface. A network address of the second network interface is identified based on the notification. A transmission rate of subsequent packets addressed to the network address is regulated responsively to the notification, irrespective of a transport service instance on which the subsequent packets are sent from the first network interface. | 05-30-2013 |
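The key point of the abstract above is that the rate limit is keyed on the congested destination's network address, not on any single transport service instance, so every flow toward that destination is throttled together. A minimal sketch under assumed names (multiplicative rate cut is an illustrative policy, not taken from the patent):

```python
class DestinationRateLimiter:
    """Throttle transmission per destination address, independent of
    which transport service instance carries the packets."""

    def __init__(self, default_rate):
        self.default_rate = default_rate
        self.rates = {}  # destination address -> current allowed rate

    def on_congestion_notification(self, dest_addr, reduction=0.5):
        # The notification identifies the second (congested) interface;
        # cut the rate for all subsequent packets addressed to it.
        current = self.rates.get(dest_addr, self.default_rate)
        self.rates[dest_addr] = current * reduction

    def rate_for(self, dest_addr):
        return self.rates.get(dest_addr, self.default_rate)
```

Traffic to unaffected destinations keeps the default rate.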
20130114599 | PACKET STEERING - A method for steering packets, including receiving a packet and determining parameters to be used in steering the packet to a specific destination, in one or more initial steering stages, based on one or more packet specific attributes. The method further includes determining an identity of the specific destination of the packet in one or more subsequent steering stages, governed by the parameters determined in the one or more initial stages and one or more packet specific attributes, and forwarding the packet to the determined specific destination. | 05-09-2013 |
20130103777 | NETWORK INTERFACE CONTROLLER WITH CIRCULAR RECEIVE BUFFER - A method for communication includes allocating in a memory of a host device a contiguous, cyclical set of buffers for use by a transport service instance on a network interface controller (NIC). First and second indices point respectively to a first buffer in the set to which the NIC is to write and a second buffer in the set from which a client process running on the host device is to read. Upon receiving at the NIC a message directed to the transport service instance and containing data to be pushed to the memory, the data are written to the first buffer that is pointed to by the first index, and the first index is advanced cyclically through the set. The second index is advanced cyclically through the set when the data in the second buffer have been read by the client process. | 04-25-2013 |
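The two cyclically advancing indices above form a classic single-producer (NIC) / single-consumer (client process) ring. A minimal Python sketch of the index discipline, with hypothetical names:

```python
class CircularReceiveBuffer:
    """Contiguous, cyclical set of buffers: the NIC writes at write_index,
    the client process reads at read_index; both advance cyclically."""

    def __init__(self, num_buffers):
        self.buffers = [None] * num_buffers
        self.write_index = 0   # next buffer the NIC pushes data into
        self.read_index = 0    # next buffer the client reads from
        self.count = 0         # buffers currently holding unread data

    def push(self, data):
        """NIC side: store pushed data, then advance the write index."""
        if self.count == len(self.buffers):
            raise BufferError("ring full: client has not kept up")
        self.buffers[self.write_index] = data
        self.write_index = (self.write_index + 1) % len(self.buffers)
        self.count += 1

    def pop(self):
        """Client side: read oldest data, then advance the read index."""
        if self.count == 0:
            return None
        data = self.buffers[self.read_index]
        self.read_index = (self.read_index + 1) % len(self.buffers)
        self.count -= 1
        return data
```

Because only the writer moves `write_index` and only the reader moves `read_index`, the two sides need no shared lock in the single-producer/single-consumer case.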
20130077489 | CREDIT-BASED FLOW CONTROL FOR ETHERNET - A method for communication includes sending a pause frame from a first node to a second node over a communication link between the nodes. In response to the pause frame, one or more data frames are immediately transmitted from the second node to the first node upon receipt of the pause frame at the second node. | 03-28-2013 |
20130077238 | LIQUID COOLING SYSTEM FOR MODULAR ELECTRONIC SYSTEMS - A system for cooling an integrated circuit of an electronic device includes a cooling body and a shelf that is positioned relative to the cooling body for the device to be reversibly inserted onto the shelf so that the cooling body is in thermal contact with the integrated circuit. The cooling body is cooled by introducing a fluid therein via an input conduit. The hot fluid is received from the cooling body by an output conduit and is cooled for recycling. The housing of the electronic device includes a rearward gap that admits the cooling body into the housing of the electronic device. Preferably, further cooling is provided by forcing a gas to flow past the output conduit. | 03-28-2013 |
20130067193 | NETWORK INTERFACE CONTROLLER WITH FLEXIBLE MEMORY HANDLING - An input/output (I/O) device includes a host interface for connection to a host device having a memory, and a network interface, which is configured to transmit and receive, over a network, data packets associated with I/O operations directed to specified virtual addresses in the memory. Processing circuitry is configured to translate the virtual addresses into physical addresses using memory keys provided in conjunction with the I/O operations and to perform the I/O operations by accessing the physical addresses in the memory. At least one of the memory keys is an indirect memory key, which points to multiple direct memory keys, corresponding to multiple respective ranges of the virtual addresses, such that an I/O operation referencing the indirect memory key can cause the processing circuitry to access the memory in at least two of the multiple respective ranges. | 03-14-2013 |
20130042242 | Interrupt Handling in a Virtual Machine Environment - A method for computing includes running a plurality of virtual machines on a computer having one or more cores and a memory. Upon occurrence of an event pertaining to a given virtual machine during a period in which the given virtual machine is unable to receive an interrupt, an interrupt message is written to a pre-assigned interrupt address in the memory. When the given virtual machine is able to receive the interrupt, after writing of the interrupt message, a context of the given virtual machine is copied from the memory to a given core on which the given virtual machine is running, and a hardware interrupt is automatically raised on the given core responsively to the interrupt message in the memory. | 02-14-2013 |
20130042236 | VIRTUALIZATION OF INTERRUPTS - A method for computing includes running a plurality of virtual machines on a computer having one or more cores and a memory. Respective interrupt addresses in the memory are assigned to the virtual machines. Upon occurrence on a device connected to the computer of an event pertaining to a given virtual machine during a period in which the given virtual machine is swapped out of operation, an interrupt message is written from the device to a respective interrupt address that is assigned to the given virtual machine in the memory. Upon activating the given virtual machine on a given core after writing of the interrupt message, a context of the given virtual machine is copied from the memory to the given core, and a hardware interrupt is automatically raised on the given core responsively to the interrupt message in the memory. | 02-14-2013 |
20130028256 | NETWORK ELEMENT WITH SHARED BUFFERS - A method for communication, in a network element that includes multiple ports, includes buffering data packets entering the network element via the ports in input buffers that are respectively associated with the ports. Storage of the data packets is shared among the input buffers by evaluating a condition related to the ports, and, when the condition is met, moving at least one data packet from a first input buffer of a first port to a second input buffer of a second port, different from the first port. Respective output ports, via which the buffered data packets are to exit the network element, are selected from among the ports. The buffered data packets are forwarded to the selected output ports. | 01-31-2013 |
20120314706 | PACKET SWITCHING BASED ON GLOBAL IDENTIFIER - A communication method in a network operating in accordance with a standard that allocates a given number of bits m for layer-2 addressing of nodes in the network. The method includes accepting at a layer-2 switch in the network an assignment to one or more nodes in the network of respective layer-2 extended addresses, each including n=m+k bits, k>0. A given data packet is received at the switch for forwarding. The given data packet includes a layer-2 destination address and a layer-3 destination address in accordance with the standard. The layer-3 destination address includes t bits, t ≥ k. The given data packet is forwarded from the switch to one of the nodes by reading from the given data packet and combining the layer-2 destination address and k bits from the layer-3 destination address so as to reconstruct the n bits of the extended layer-2 address of the one of the nodes. | 12-13-2012 |
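The reconstruction above is a bit-level concatenation: m bits come from the layer-2 destination address and k more from the layer-3 destination address. A sketch of that combination, assuming (for illustration only; the abstract does not fix this) that the extra k bits are the low-order bits of the layer-3 address:

```python
def reconstruct_extended_address(l2_addr, l3_addr, m, k):
    """Rebuild the n = m + k bit extended layer-2 address.

    Assumption (hypothetical): the k extra bits are carried in the
    low-order bits of the layer-3 destination address and become the
    high-order bits of the extended address.
    """
    extra = l3_addr & ((1 << k) - 1)            # k bits from the L3 address
    return (extra << m) | (l2_addr & ((1 << m) - 1))
```

For m = 16 and k = 4, an L2 address `0xABCD` plus L3 low bits `0x5` yields the 20-bit extended address `0x5ABCD`.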
20120311220 | COMPUTER BUS WITH ENHANCED FUNCTIONALITY - A method for computing includes connecting a host device to a peripheral device via a bus that is physically configured in accordance with a predefined standard and includes multiple connection pins that are specified by the standard, including a plurality of ground pins. At least one pin, selected from among the pins on the bus that are specified as the ground pins, is used in order to indicate to the peripheral device that the host device has an extended operational capability. | 12-06-2012 |
20120300669 | TOPOLOGY-BASED CONSOLIDATION OF LINK STATE INFORMATION - A method in a network element that forwards packets to destination nodes includes identifying groups of the destination nodes. Respective performance metrics of multiple different candidate network paths, over which the destination nodes in a given group are reachable from the network element, are estimated jointly for all the destination nodes in the given group. A network path is selected from among the candidate network paths based on the estimated performance metrics. The packets addressed to the destination nodes in the given group are forwarded over the selected network path. | 11-29-2012 |
20120292267 | MOUNTING RAIL WITH INTERNAL POWER CABLE - An adapter kit for mounting an electrical apparatus in a rack includes a pair of rails. The rails are configured to be fitted on respective, opposing outer sides of a case of the apparatus and to slide along corresponding tracks on respective inner sides of the rack. At least one of the rails includes a cable channel configured to contain a cable passing through the cable channel between front and rear faces of the case. | 11-22-2012 |
20120246535 | PROCESSING OF BLOCK AND TRANSACTION SIGNATURES - A network communication device includes a host interface, which is coupled to communicate with a host processor, having a host memory, so as to receive a work request to execute a transaction in which a plurality of data blocks are to be transferred over a packet network. Processing circuitry is configured to process multiple data packets so as to execute the transaction, each data packet in the transaction containing a portion of the data blocks, and the multiple data packets including at least first and last packets, which respectively contain the first and last data blocks of the transaction. The processing circuitry is configured to compute a transaction signature over the data blocks while processing the data packets so that at least the first data block passes out of the network communication device through one of the interfaces before computation of the transaction signature is completed. | 09-27-2012 |
20120207018 | REDUCING POWER CONSUMPTION IN A FAT-TREE NETWORK - A method for communication includes estimating a characteristic of communication traffic to be carried by a fat-tree network. Responsively to the estimated characteristic, a subset of the spine switches in the highest level of the network is selected, according to a predetermined selection order, to be active in carrying the communication traffic. In each of the levels of the spine switches below the highest level, the spine switches to be active are selected based on the selected spine switches in a next-higher level. The network is operated so as to convey the traffic between the leaf switches via the active spine switches, while the spine switches that are not selected remain inactive. | 08-16-2012 |
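Selecting the highest-level spines per the abstract above amounts to taking the shortest prefix of a predetermined switch ordering whose aggregate capacity covers the estimated traffic. A hedged sketch (the sizing rule and parameter names are illustrative assumptions, not the patented criterion):

```python
import math

def select_active_spines(spine_order, estimated_traffic, per_switch_capacity):
    """Return the minimal prefix of a predetermined spine ordering able to
    carry the estimated traffic; the remaining spines stay inactive."""
    needed = max(1, math.ceil(estimated_traffic / per_switch_capacity))
    return spine_order[:min(needed, len(spine_order))]
```

Lower spine levels would then be activated based on which of these selected switches they feed, as the abstract describes.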
20120174102 | SYSTEM AND METHOD FOR ACCELERATING INPUT/OUTPUT ACCESS OPERATION ON A VIRTUAL MACHINE - A system and method for accelerating input/output (IO) access operation on a virtual machine. The method comprises providing a smart IO device that includes an unrestricted command queue (CQ) and a plurality of restricted CQs, and allowing a guest domain to directly configure and control IO resources through a respective restricted CQ, the IO resources being allocated to the guest domain. In preferred embodiments, the allocation of IO resources to each guest domain is performed by a privileged virtual switching element. In some embodiments, the smart IO device is an HCA and the privileged virtual switching element is a Hypervisor. | 07-05-2012 |
20120167119 | Low-latency communications - A method of handling communications by a computer. A system-call communication routine receives a request of an application to perform a socket-related task on a given socket in a blocking mode. The routine then repeatedly alternates between polling one or more input/output (I/O) devices servicing the computer and performing the socket-related task. | 06-28-2012 |
20120082164 | Cell-Based Link-Level Retry Scheme - A method for communication includes receiving a packet at a first node for transmission over a link to a second node. The data in the packet is divided into a sequence of cells of a predetermined data size. The cells have respective sequence numbers. The cells are transmitted in sequence over the link, while storing the transmitted cells in a buffer at the first node. The first node receives acknowledgments indicating the respective sequence numbers of the transmitted cells that were received at the second node. Upon receiving an indication at the first node that a transmitted cell having a given sequence number was not properly received at the second node, the stored cells are retransmitted from the buffer starting from the cell with the given sequence number. | 04-05-2012 |
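The retry scheme above is a go-back-style retransmission keyed on cell sequence numbers: cells are retained at the sender until acknowledged, and a failure indication triggers resending from the failed sequence number onward. A minimal sketch with hypothetical names (the cell size and link API are assumptions):

```python
def split_into_cells(packet, cell_size, start_seq=0):
    """Divide packet data into fixed-size cells tagged with sequence numbers."""
    return [(start_seq + i, packet[off:off + cell_size])
            for i, off in enumerate(range(0, len(packet), cell_size))]

class RetryBuffer:
    """Sender side: retain transmitted cells until they are acknowledged."""

    def __init__(self):
        self.sent = {}  # sequence number -> cell payload

    def transmit(self, cells, link):
        for seq, payload in cells:
            self.sent[seq] = payload      # keep a copy for possible retry
            link.send(seq, payload)

    def on_ack(self, seq):
        self.sent.pop(seq, None)          # acknowledged cells need no retention

    def retransmit_from(self, seq, link):
        """On a loss indication, resend every retained cell whose sequence
        number is at or after the one reported missing."""
        for s in sorted(self.sent):
            if s >= seq:
                link.send(s, self.sent[s])
```

Retrying at cell granularity means only the tail of a long packet is resent, rather than the whole packet.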
20120071011 | ADAPTER FOR HIGH-SPEED ETHERNET - An adapter includes a mechanical frame, which is configured to be inserted into a SFP-type receptacle and contains a socket for receiving a plug of a twisted-pair-type cable. First electrical terminals, held by the mechanical frame, are configured to mate with a connector in the receptacle. Second electrical terminals, held within the socket, are configured to mate with electrical connections of the plug. Circuitry connects the first and second electrical terminals so as to enable interoperation of the plug with the receptacle. | 03-22-2012 |
20110286451 | METHOD, APPARATUS AND COMPUTER PRODUCT FOR SENDING OR RECEIVING DATA OVER MULTIPLE NETWORKS - A substantially transparent failover communication protocol comprises a sender sending data packets to one or more recipients. The sender and recipients may be connectable through two or more networks. The sender sends in some cases duplicate data packets, each addressed differently, such that a recipient may receive one copy. Both the recipients and the sender may perform predetermined actions in response to a network becoming unavailable, such that the data packets may still be received by the recipients. | 11-24-2011 |
20110270917 | NETWORK ADAPTER WITH SHARED DATABASE FOR MESSAGE CONTEXT INFORMATION - A network interface adapter includes a network interface and a client interface, for coupling to a client device so as to receive from the client device work requests to send messages over the network using a plurality of transport service instances. Message processing circuitry, coupled between the network interface and the client interface, includes an execution unit, which generates the messages in response to the work requests and passes the messages to the network interface to be sent over the network. A memory stores records of the messages that have been generated by the execution unit in respective lists according to the transport service instances with which the messages are associated. A completion unit receives the records from the memory and, responsive thereto, reports to the client device upon completion of the messages. | 11-03-2011 |
20110264968 | CABLE WITH FIELD-WRITEABLE MEMORY - A method includes monitoring a use of a cable assembly that includes a communication cable terminated by a termination module. Data indicative of the use is written to a writeable non-volatile memory in the termination module. The use of the cable assembly is acted upon by reading the data from the non-volatile memory. | 10-27-2011 |
20110173352 | Power Reduction on Idle Communication Lanes - A method for communication includes establishing a full-duplex communication link between first and second nodes. The link includes multiple first lanes for conveying first communication traffic in a first link direction and multiple second lanes for conveying second communication traffic in a second link direction. Signals are exchanged between the first and second nodes to indicate a requested change in lane activity in the first link direction. Responsively to the signals, a number of the first lanes that are active is changed so that the first node conveys the first communication traffic to the second node over a first number of the first lanes, while the second node conveys the second communication traffic to the first node over a second number of the second lanes, which is different from the first number. | 07-14-2011 |
20110119673 | CROSS-CHANNEL NETWORK OPERATION OFFLOADING FOR COLLECTIVE OPERATIONS - A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more cross-channel work requests that are derived from an operation to be executed by the node. The NI includes a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the cross-channel work requests via the host interface, and to execute the cross-channel work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation. | 05-19-2011 |
20110116512 | Dynamically-Connected Transport Service - A method of communication includes receiving, in a network interface device, first and second requests from an initiator process running on an initiator host to transmit, respectively, first and second data to first and second target processes running on one or more target nodes, via a packet network. A single dynamically-connected initiator context is allocated for serving both the first and second requests. A first connect packet referencing the dynamically-connected (DC) initiator context is directed to the first target process so as to open a first dynamic connection with the first target process, followed by transmission of the first data over the first dynamic connection. The first dynamic connection is closed after the transmission of the first data, and a second connect packet is transmitted so as to open a second dynamic connection with the second target process, followed by transmission of the second data. | 05-19-2011 |
20110096668 | HIGH-PERFORMANCE ADAPTIVE ROUTING - A method for communication includes routing a first packet, which belongs to a given packet flow, over a first routing path through a communication network. A second packet, which follows the first packet in the given packet flow, is routed using a time-bounded Adaptive Routing (AR) mode, by evaluating a time gap between the first and second packets, routing the second packet over the first routing path if the time gap does not exceed a predefined threshold, and, if the time gap exceeds the predefined threshold, selecting a second routing path through the communication network that is potentially different from the first routing path, and routing the second packet over the second routing path. | 04-28-2011 |
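The time-bounded adaptive routing decision above reduces to a per-flow timestamp check: within the threshold, keep the previous path (preserving packet order); beyond it, it is safe to reselect. A sketch with assumed names and an injected path-selection callback:

```python
import time

class TimeBoundedAdaptiveRouter:
    """Reuse a flow's previous path unless the inter-packet time gap
    exceeds a threshold, in which case a new path may be selected."""

    def __init__(self, threshold_s, select_path):
        self.threshold_s = threshold_s
        self.select_path = select_path   # adaptive path-selection callback
        self.flows = {}                  # flow id -> (last path, last send time)

    def route(self, flow_id, now=None):
        now = time.monotonic() if now is None else now
        entry = self.flows.get(flow_id)
        if entry is not None and now - entry[1] <= self.threshold_s:
            path = entry[0]              # small gap: same path, no reordering risk
        else:
            path = self.select_path()    # large gap: free to adapt
        self.flows[flow_id] = (path, now)
        return path
```

The threshold would typically be chosen to exceed the network's worst-case skew between the two candidate paths, so reselection cannot reorder packets in flight.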
20110083064 | PROCESSING OF BLOCK AND TRANSACTION SIGNATURES - A network communication device includes a host interface, which is coupled to communicate with a host processor, having a host memory, so as to receive a work request to execute a transaction in which a plurality of data blocks are to be transferred over a packet network. Processing circuitry is configured to process multiple data packets so as to execute the transaction, each data packet in the transaction containing a portion of the data blocks, and the multiple data packets including at least first and last packets, which respectively contain the first and last data blocks of the transaction. The processing circuitry is configured to compute a transaction signature over the data blocks while processing the data packets so that at least the first data block passes out of the network communication device through one of the interfaces before computation of the transaction signature is completed. | 04-07-2011 |
20110081807 | ADAPTER FOR PLUGGABLE MODULE - An adapter includes a mechanical frame, which is configured to be inserted into a four-channel Small Form-Factor Pluggable (SFP) receptacle and to receive inside the frame a single-channel SFP cable connector. First electrical terminals, held by the mechanical frame, are configured to mate with respective first pins of the receptacle. Second electrical terminals, held within the mechanical frame, are configured to mate with respective second pins of the connector. Circuitry couples the first and second electrical terminals so as to enable communication between the connector and one channel of the receptacle while terminating the remaining channels of the receptacle. | 04-07-2011 |
20110058571 | DATA SWITCH WITH SHARED PORT BUFFERS - A communication apparatus includes a plurality of switch ports, each switch port including one or more port buffers for buffering data that traverses the switch port. A switch fabric is coupled to transfer the data between the switch ports. A switch control unit is configured to reassign at least one port buffer of a given switch port to buffer a part of the data that does not enter or exit the apparatus via the given switch port, and to cause the switch fabric to forward the part of the data to a destination switch port via the at least one reassigned port buffer. | 03-10-2011 |
20110029847 | PROCESSING OF DATA INTEGRITY FIELD - A network communication device includes a host interface, which is coupled to communicate with a host processor, having a memory, so as to receive a work request to convey one or more data blocks over a network. The work request specifies a memory region of a given data size, and at least one data integrity field (DIF), having a given field size, is associated with the data blocks. Network interface circuitry is configured to execute an input/output (I/O) data transfer operation responsively to the work request so as to transfer to or from the memory a quantity of data that differs from the data size of the memory region by a multiple of the field size, while adding the at least one DIF to the transferred data or removing the at least one DIF from the transferred data. | 02-03-2011 |
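The size arithmetic implied by this abstract can be made concrete. The sketch below assumes the common T10-DIF layout of 512-byte blocks with an 8-byte integrity field per block; the abstract itself does not fix those values:

```python
def wire_quantity(region_size: int, block_size: int = 512,
                  dif_size: int = 8) -> int:
    # One DIF per data block, so the quantity transferred differs from the
    # memory-region size by a multiple of the field size. The 512-byte
    # block and 8-byte DIF are the conventional T10-DIF sizes, assumed here.
    num_blocks = region_size // block_size
    return region_size + num_blocks * dif_size


# An 8-block (4 KiB) region grows by 8 * 8 = 64 bytes on the wire.
assert wire_quantity(4096) == 4160
```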
20110010557 | CONTROL MESSAGE SIGNATURE FOR DEVICE CONTROL - A method of controlling a peripheral device includes generating, in a host processor, a control message for transmission to the peripheral device, and calculating a signature for the control message. The control message and the signature are written to an address in a system memory of the host processor, and the peripheral device is notified of the address, so as to cause the device to read the control message and the signature from the system memory. | 01-13-2011 |
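A host-side/device-side sketch of the write-then-verify flow described above, using SHA-256 as a stand-in for whatever signature function the application contemplates and a dict as the shared system memory:

```python
import hashlib

SIG_LEN = 32  # SHA-256 digest size; an illustrative choice, not the patent's


def host_write(system_memory: dict, address: int, message: bytes) -> None:
    # Host: calculate a signature for the control message and write both
    # to the agreed address in system memory (device notification elided).
    signature = hashlib.sha256(message).digest()
    system_memory[address] = message + signature


def device_read(system_memory: dict, address: int) -> bytes:
    # Device: read the message and signature back and verify before acting.
    blob = system_memory[address]
    message, signature = blob[:-SIG_LEN], blob[-SIG_LEN:]
    if hashlib.sha256(message).digest() != signature:
        raise ValueError("control message failed signature check")
    return message


mem = {}
host_write(mem, 0x2000, b"reset-queue-7")
assert device_read(mem, 0x2000) == b"reset-queue-7"
```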
20100274876 | NETWORK INTERFACE DEVICE WITH MEMORY MANAGEMENT CAPABILITIES - An input/output (I/O) device includes a host interface for connection to a host device having a memory and a network interface, which is configured to receive, over a network, data packets associated with I/O operations directed to specified virtual addresses in the memory. Packet processing hardware is configured to translate the virtual addresses into physical addresses and to perform the I/O operations using the physical addresses, and upon an occurrence of a page fault in translating one of the virtual addresses, to transmit a response packet over the network to a source of the data packets so as to cause the source to refrain from transmitting further data packets while the page fault is serviced. | 10-28-2010 |
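The page-fault handling can be sketched as a translation lookup with back-pressure toward the sender; the response format and field names are invented for illustration:

```python
def process_packet(page_table: dict, packet: dict, responses: list):
    # Translate the packet's virtual address to a physical one; on a page
    # fault, send a response telling the source to pause transmission
    # while the fault is serviced.
    vaddr = packet["vaddr"]
    phys = page_table.get(vaddr)
    if phys is None:
        responses.append({"type": "pause", "vaddr": vaddr})  # back-pressure
        return None
    return phys  # the I/O operation proceeds with the physical address


table = {0x1000: 0x9000}
out = []
assert process_packet(table, {"vaddr": 0x1000}, out) == 0x9000
assert process_packet(table, {"vaddr": 0x2000}, out) is None
assert out == [{"type": "pause", "vaddr": 0x2000}]
```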
20100189206 | Precise Clock Synchronization - A method for clock synchronization includes computing an offset value between a local clock time of a real-time clock circuit and a reference clock time, and loading the offset value into a register that is associated with the real-time clock circuit. The local clock time is then summed with the value in the register so as to give an adjusted value of the local clock time that is synchronized with the reference clock. | 07-29-2010 |
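The offset-register scheme is simple enough to model directly; the class and attribute names below are illustrative, not taken from the application:

```python
class RealTimeClock:
    """Free-running local clock plus an offset register for synchronization."""

    def __init__(self, local_time: float = 0.0):
        self.local_time = local_time   # raw hardware counter value
        self.offset_register = 0.0     # loaded by the sync procedure

    def synchronize(self, reference_time: float) -> None:
        # Compute the offset between the reference clock and the raw local
        # clock, and load it into the register.
        self.offset_register = reference_time - self.local_time

    def read(self) -> float:
        # Adjusted time: the local clock time summed with the register value.
        return self.local_time + self.offset_register


clk = RealTimeClock(local_time=1_000.0)
clk.synchronize(reference_time=1_250.0)  # offset register holds 250.0
clk.local_time += 10.0                   # the raw clock keeps ticking
assert clk.read() == 1_260.0             # adjusted time tracks the reference
```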
20100188140 | Accurate Global Reference Voltage Distribution System With Local Reference Voltages Referred To Local Ground And Locally Supplied Voltage - A system and method for accurately distributing a master reference voltage to a plurality of local circuits within a system. A central master reference voltage is distributed to a plurality of local circuits as a difference in the voltage of a pair of conductors oriented substantially spatially parallel. Local reference voltages are generated based on the master reference voltage and a local voltage source. | 07-29-2010 |
20100138840 | SYSTEM AND METHOD FOR ACCELERATING INPUT/OUTPUT ACCESS OPERATION ON A VIRTUAL MACHINE - A system and method for accelerating input/output (IO) access operations on a virtual machine. The method comprises providing a smart IO device that includes an unrestricted command queue (CQ) and a plurality of restricted CQs, and allowing a guest domain to directly configure and control the IO resources allocated to it through a respective restricted CQ. In preferred embodiments, the allocation of IO resources to each guest domain is performed by a privileged virtual switching element. In some embodiments, the smart IO device is an HCA and the privileged virtual switching element is a Hypervisor. | 06-03-2010 |
20100088437 | INFINIBAND ADAPTIVE CONGESTION CONTROL ADAPTIVE MARKING RATE - A device and a method for optimizing the data transfer rate in an InfiniBand fabric are provided for the case where a varying number of transmitting devices direct data packets at a single receiving device or through a common link. The method, which is implemented in an InfiniBand switch, includes marking packets at a centrally configured marking rate, determining the current number of data flows between the input ports and the output port of the switch, and marking data packets with a Forward Explicit Congestion Notification according to an adaptive marking rate that is derived from the initial marking rate and is inversely proportional to the number of data flows. | 04-08-2010 |
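The adaptive rate described in this abstract, an initial rate scaled down in inverse proportion to the number of contending flows, can be sketched as follows; the deterministic every-Nth-packet marking policy is an assumption for illustration:

```python
def adaptive_marking_rate(configured_rate: float, num_flows: int) -> float:
    # The effective rate falls as more flows contend for the same output
    # port, keeping the aggregate marking rate near the configured value.
    if num_flows < 1:
        raise ValueError("need at least one data flow")
    return configured_rate / num_flows


def mark_fecn(packet_index: int, configured_rate: float,
              num_flows: int) -> bool:
    # Mark roughly every (1/rate)-th packet traversing the congested port.
    interval = round(1.0 / adaptive_marking_rate(configured_rate, num_flows))
    return packet_index % interval == 0


# With a configured rate of 0.1 and 4 flows, each flow is marked at 0.025,
# i.e. every 40th packet.
assert abs(adaptive_marking_rate(0.1, 4) - 0.025) < 1e-12
assert mark_fecn(0, 0.1, 4) and mark_fecn(40, 0.1, 4)
assert not mark_fecn(1, 0.1, 4)
```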
20090302923 | TERMINATED INPUT BUFFER WITH OFFSET CANCELLATION CIRCUIT - A system and method for compensation of offset voltage in a digital differential input buffer driven by a terminated transmission line. Offset compensation currents are injected at the output of the first stage of the input buffer, which has a higher impedance than the terminated transmission line at the input of the buffer. The compensation current is determined by a network of MOS transistors, which saves die space compared to resistors. A pair of voltage multiplexers provides for compensation currents to correct offsets of either polarity. Offset correction currents are determined anew each time the system is powered up, compensating for component aging. The offset correction can also be performed while the input buffer is operating, during periods when the input is quiescent, and/or by adjusting the offset correction according to the duty cycle of the detected input. | 12-10-2009 |
20090201926 | FIBRE CHANNEL PROCESSING BY A HOST CHANNEL ADAPTER - A method for data storage includes mapping a queue pair (QP) of a channel adapter to a specified Fibre Channel (FC) exchange for communication with a storage device. Upon receiving at the channel adapter from a host computer a storage command directed to the storage device, the storage command is executed by transmitting data packets over a switched network from the channel adapter to the storage device using the specified exchange and performing a remote direct memory access (RDMA) operation on the channel adapter using the mapped QP. | 08-13-2009 |
20090182900 | NETWORK ADAPTER WITH SHARED DATABASE FOR MESSAGE CONTEXT INFORMATION - A network interface adapter includes a network interface and a client interface, for coupling to a client device so as to receive from the client device work requests to send messages over the network using a plurality of transport service instances. Message processing circuitry, coupled between the network interface and the client interface, includes an execution unit, which generates the messages in response to the work requests and passes the messages to the network interface to be sent over the network. A memory stores records of the messages that have been generated by the execution unit in respective lists according to the transport service instances with which the messages are associated. A completion unit receives the records from the memory and, responsive thereto, reports to the client device upon completion of the messages. | 07-16-2009 |
20090129392 | MULTIPLE QUEUE PAIR ACCESS WITH A SINGLE DOORBELL - A method for controlling access by processes running on a host device to a communication network includes assigning to each of the processes a respective doorbell address on a network interface adapter that couples the host device to the network, and allocating instances of a communication service on the network, to be provided via the adapter, to the processes. Upon receiving a request, submitted by a given one of the processes to its respective doorbell address, to send data over one of the allocated service instances, the adapter conveys the data over the network using the specified instance, subject to verifying, based on the doorbell address to which the request was submitted, that the specified instance was allocated to the given process. | 05-21-2009 |
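A sketch of the doorbell-based ownership check described above; the page-sized address spacing, the use of queue-pair numbers as service instances, and the data structures are assumptions for illustration:

```python
class NetworkAdapter:
    """Sketch: per-process doorbell pages guarding access to queue pairs."""

    def __init__(self):
        self._next_addr = 0x1000
        self._by_doorbell = {}  # doorbell address -> QP numbers allocated to it
        self.wire = []          # stand-in for the network

    def assign_doorbell(self) -> int:
        # One doorbell page per process; the address identifies the process.
        addr = self._next_addr
        self._next_addr += 0x1000
        self._by_doorbell[addr] = set()
        return addr

    def allocate_qp(self, doorbell_addr: int, qp: int) -> None:
        self._by_doorbell[doorbell_addr].add(qp)

    def ring(self, doorbell_addr: int, qp: int, data: bytes) -> bool:
        # Verify, from the doorbell address alone, that the requested QP
        # was allocated to the requesting process before sending.
        if qp not in self._by_doorbell.get(doorbell_addr, ()):
            return False        # access violation: QP not owned by this process
        self.wire.append((qp, data))
        return True


adapter = NetworkAdapter()
db_a = adapter.assign_doorbell()  # process A's doorbell page
db_b = adapter.assign_doorbell()  # process B's doorbell page
adapter.allocate_qp(db_a, qp=7)

assert adapter.ring(db_a, 7, b"wqe")       # A rings its own QP: accepted
assert not adapter.ring(db_b, 7, b"wqe")   # B targets A's QP: rejected
```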
20090006655 | NETWORK ADAPTER WITH SHARED DATABASE FOR MESSAGE CONTEXT INFORMATION - A network interface adapter includes a network interface and a client interface, for coupling to a client device so as to receive from the client device work requests to send messages over the network using a plurality of transport service instances. Message processing circuitry, coupled between the network interface and the client interface, includes an execution unit, which generates the messages in response to the work requests and passes the messages to the network interface to be sent over the network. A memory stores records of the messages that have been generated by the execution unit in respective lists according to the transport service instances with which the messages are associated. A completion unit receives the records from the memory and, responsive thereto, reports to the client device upon completion of the messages. | 01-01-2009 |
20080219150 | AUTO-NEGOTIATION BY NODES ON AN INFINIBAND FABRIC - A method and system for digital communication wherein nodes exchange messages at a first data rate in order to coordinate testing at a second, higher data rate. After testing is completed, the nodes exchange test results at the first data rate; if conditions are satisfactory for operation at the second data rate, user data are transmitted at the second data rate. Otherwise, user data are transmitted at the first data rate. | 09-11-2008 |
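The decision step at the end of the negotiation reduces to a small function; the names and the boolean test-result representation are illustrative:

```python
def select_data_rate(local_test_passed: bool, remote_test_passed: bool,
                     first_rate: int, second_rate: int) -> int:
    # Both nodes report their high-rate test results at the first (lower)
    # rate; the link operates at the second rate only if conditions were
    # satisfactory at both ends.
    if local_test_passed and remote_test_passed:
        return second_rate
    return first_rate


# E.g. negotiating between hypothetical 10 and 40 Gb/s rates:
assert select_data_rate(True, True, 10, 40) == 40
assert select_data_rate(True, False, 10, 40) == 10
```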