Patent application number | Description | Published |
20100290345 | GMPLS BASED OAM PROVISIONING - A method and network are described herein for provisioning Operations, Administration, and Maintenance (OAM) entities for a connection when setting up the connection between an ingress edge node and an egress edge node. | 11-18-2010 |
20100316058 | SYSTEM AND METHOD OF DEMULTIPLEXING PROVIDER BACKBONE BRIDGING TRAFFIC ENGINEERING INSTANCES - A system and method of demultiplexing Provider Backbone Bridging Traffic Engineering (PBB-TE) service instances. The method is used when monitoring service instances between a first bridge port and a second bridge port by exchanging CFM frames over each service instance. The CFM frame is received by the second bridge port, where the complete ESP 3-tuple is demultiplexed. The CCM frames may be demultiplexed by a Full Traffic Engineering Service Instance Multiplex Entity, which demultiplexes both the source address value and destination address value of the CCM frames. | 12-16-2010 |
20130077475 | Optimizing Endpoint Selection of MRT-FRR Detour Paths - A method is described to be implemented by a node in a network. The method selects an endpoint for a maximally redundant tree-fast reroute (MRT-FRR) detour path to optimize detour path cost or length across the network. The method defines a set of steps including selecting a destination node and next-hop failure for which to calculate detour paths. A clean set of nodes for the network is then calculated, where the clean set consists of the nodes in the network whose ability to reach the destination node is not impacted by the failure of the given next hop. A candidate node for the endpoint of the detour path is selected from the set of clean nodes based on any one of a plurality of configured options, and forwarding of data packets is configured to the selected candidate as the endpoint of the detour path to the destination node. | 03-28-2013 |
20130194973 | In-Service Upgrade of Provider Bridge Networks - A system and method for in-service migration of a Virtual Local Area Network (VLAN) service when a Provider Bridge Metro Ethernet Network (PB MEN) is upgraded to a Provider Backbone Bridge (PBB) MEN or an Internet Protocol/Multi Protocol Label Switching (IP/MPLS) MEN. After the deployment of the new PBB or IP/MPLS technology, a sequence of management actions is performed to configure PBB or IP/MPLS edge nodes to use the new technology as well as the old PB-based technology to support the VLAN service. Both old and new connectivity structures are maintained in the edge nodes during the entire migration process. Customer traffic is then redirected per edge node to the new technology. When each edge node entirely provides the VLAN service under the new technology, the migration is complete. | 08-01-2013 |
20130246635 | Technique for Bundling in Link Aggregation - Methods and apparatus are disclosed for applying multiple Link Aggregation Group (LAG) entities on the same set of physical links, thus making bundling of individual services or conversations possible by the different LAG entities within Link Aggregation. Each LAG entity is configured such that a single physical link is Active and all the other links are Standby. Each LAG entity may be regarded as a “bundle.” Thus the services/conversations are bundled into a LAG entity and are handed off on the Active link during normal operation. If service hand-off is not possible on the Active link (e.g., due to a failure), the LAG entity switches over to a Standby link, so the service/conversation is handed off on the formerly Standby link. Bundling may simplify operations of control and signaling. | 09-19-2013 |
20130254327 | DATA PLANE FOR RESILIENT NETWORK INTERCONNECT - The present disclosure relates to a technique for forwarding frames received by a network interconnect node. | 09-26-2013 |
20130301416 | MULTI-LEVEL BEARER PROFILING IN TRANSPORT NETWORKS - A method is provided for transporting data packets over a telecommunications transport network. The data packets are carried by a plurality of bearers, the bearers each carrying data packets that relate to different ones of a plurality of services. In the method, a multi-level bandwidth profiling scheme is applied to the data packets of each bearer. A series of information rates is assigned to a bearer, and the profiling scheme identifies and marks each data packet of the bearer, based on a desired resource sharing, according to the minimum information rate with which the packet is conformant. The marked data packets are forwarded for transport through the transport network. If there is insufficient bandwidth available in the transport network to transport all data packets, data packets identified by the profiling and marked as only being conformant with a higher information rate are discarded before any data packets marked as being conformant with a lower information rate. | 11-14-2013 |
20140112191 | Technique for Ensuring Congruency in Link Aggregation - The present disclosure provides a technique for ensuring that a service or conversation is carried in a congruent manner on a Link Aggregation Group (LAG). The Service ID (e.g., conversation ID) to link mapping is configured on both sides of the LAG independently of each other. The Service ID to link assignment is stored in a well-defined format, e.g., in an assignment table. A digest is then computed over the assignment table and exchanged between the two sides of the LAG. If there is a mismatch between the digests, the service is transmitted on a predefined and agreed-upon default link if congruency has to be enforced for that particular service. Furthermore, the digest exchange allows verification of the configuration to check whether all services to be handed off are configured on both sides. | 04-24-2014 |
20140314095 | METHOD AND SYSTEM OF UPDATING CONVERSATION ALLOCATION IN LINK AGGREGATION - A method of updating conversation allocation in link aggregation is disclosed. The method starts with verifying that an implementation of a conversation-sensitive link aggregation control protocol (LACP) is operational at a network device of a network for an aggregation port. Then it is determined that operations through enhanced link aggregation control protocol data units (LACPDUs) are possible. The enhanced LACPDUs can be used for updating conversation allocation information, and the determination is based at least partially on a compatibility check between a first set of operational parameters of the network device and a second set of operational parameters of a partner network device. Then a conversation allocation state of an aggregation port of the link aggregation group is updated based on a determination that the conversation allocation state is incorrect, where the conversation allocation state indicates a list of conversations transmitting through the aggregation port. | 10-23-2014 |
20140321284 | Optimised Packet Delivery Across a Transport Network - A method of prioritising packets for delivery over a transport network interconnecting nodes of a mobile network, where a guaranteed minimum information rate over the transport network is specified for the mobile network. The method comprises, for each bearer to be injected into the transport network from the mobile network, specifying a bearer information rate, marking packets up to that rate as conformant with the bearer information rate, and marking packets exceeding that rate as non-conformant. A plurality of traffic type streams from the mobile network are converged, each traffic type stream comprising a plurality of bearers. Packets of the converged traffic type streams are inspected to identify packets marked as conformant and non-conformant, and non-conformant packets, or at least a fraction of them, are re-marked as conformant if the converged rate of conformant packets is less than said minimum information rate. The transport network prioritises the delivery of packets marked as conformant over those marked as non-conformant. | 10-30-2014 |
20140328170 | Enhanced Performance Service-Based Profiling for Transport Networks - A method is presented of transporting data packets over a telecommunications transport network. The data packets are carried by a plurality of bearers. For each of the bearers, independently of the other bearers, bandwidth profiling is applied to the data packets of the bearer to designate as ‘green’ data packets that are conformant with a predetermined maximum Information Rate for the bearer. One or more data packets are buffered for up to a predetermined maximum ‘green’ buffer time; if, during that time, transporting the data packet would not cause the maximum information rate of the bearer to be exceeded, the data packet is designated as a ‘green’ data packet. The data packets are forwarded for transporting over the transport network. If there is insufficient bandwidth available in the transport network to transport all data packets, data packets that are not designated as ‘green’ data packets are discarded, so as not to be transported through the transport network. | 11-06-2014 |
20140341042 | Conditional Routing Technique - A technique for controlling network routing in a network. | 11-20-2014 |
20150023156 | EXTENDED REMOTE LFA FAST REROUTE - A method is implemented by a network element or controller for determining a backup path for a fast reroute process to be utilized in response to a network event invalidating a primary path to a destination node. The method identifies at least one intermediate node that has a backup loop free alternative (LFA) path to a destination node in a network where no path meeting LFA conditions can be found for a point of local repair (PLR). | 01-22-2015 |
20150117218 | Feedback-based Profiling for Transport Networks - A method is provided of transporting data packets over a telecommunications transport network. The data packets are carried by a plurality of bearers, the bearers each carrying data packets that relate to different ones of a plurality of services. In the method, for each bearer independently of the other bearers, bandwidth profiling is applied to the data packets of the bearer to identify and mark the data packets of each of the bearers that are conformant with a determined maximum information rate for the bearer. The data packets are forwarded for transport through the transport network. If there is insufficient bandwidth available in the transport network to transport all data packets, data packets not identified by the profiling as being conformant are discarded, so as not to be transported through the transport network. The data packets of the bearer transported through the transport network are monitored to determine whether there has been any loss of data packets that should have been transported through the transport network, indicating congestion in the transport network. The maximum information rate of the bearer is adjusted based on the monitoring. | 04-30-2015 |
20150163148 | Packet Scheduling in a Communication Network - A method and apparatus for packet scheduling over a communication link in a communication network. A data packet scheduler accords scheduling weights to at least two sets of data packets to be transmitted, and the sending of the sets of data packets is scheduled in accordance with the scheduling weights. When it is determined that a change in available bandwidth over the communication link has occurred, the scheduler dynamically adjusts the scheduling weight for each set of data packets on the basis of the available bandwidth. This ensures more efficient resource sharing control and resource guarantees when the available bandwidth changes. | 06-11-2015 |
20150180766 | Technique for Network Routing - A technique for routing one or more service tunnels in a telecommunications backhaul network. | 06-25-2015 |
20150312055 | METHODS AND ROUTERS FOR CONNECTIVITY SETUP BETWEEN PROVIDER EDGE ROUTERS - A router receives, from a BGP peer, a BGP update message including NLRI (VPN-NLRI) specific to a virtual private network, as well as path information including an address of a next hop for the VPN-NLRI. The router determines that there is no route to the next hop, modifies the filtering information to permit propagation of NLRI including the address of the next hop, and sends the modified filtering information to BGP neighbours. Alternatively, a router determines that such a BGP update message is due to be sent to a BGP peer. The router determines that the filtering information does not allow propagation, to the peer, of NLRI including the address of the next-hop, modifies the filtering information so as to permit propagation, to the BGP peer, of NLRI including the address of the next hop, and sends the BGP update message to the peer. | 10-29-2015 |
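Several of the transport-network abstracts above (20130301416, 20140321284, 20140328170) rest on the same mechanism: per-bearer rate policing that marks packets within an information rate as conformant (‘green’) and leaves the rest eligible for discard. As a purely illustrative sketch, not the patented method, the class name, parameters, and single-rate simplification below are my own; the multi-level scheme of 20130301416 would stack one such marker per information rate:

```python
import time

class TokenBucketMarker:
    """Toy single-rate marker: packets are 'green' (conformant) while the
    bearer stays within its information rate plus a burst allowance."""

    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.capacity = bucket_bytes      # burst allowance in bytes
        self.tokens = bucket_bytes        # bucket starts full
        self.last = time.monotonic()

    def mark(self, packet_len, now=None):
        """Return 'green' if the packet conforms to the rate, else 'red'."""
        now = time.monotonic() if now is None else now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len     # conformant packets consume tokens
            return "green"
        return "red"                      # non-conformant: first to be dropped
```

Under this sketch, a transport node short of bandwidth discards ‘red’ packets before any ‘green’ ones, which is the discard ordering the abstracts describe.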
Patent application number | Description | Published |
20130156189 | Terminating SSL connections without locally-accessible private keys - An Internet infrastructure delivery platform (e.g., operated by a service provider) provides an RSA proxy “service” as an enhancement to the SSL protocol that off-loads the decryption of the encrypted pre-master secret (ePMS) to an external server. Using this service, instead of decrypting the ePMS “locally,” the SSL server proxies (forwards) the ePMS to an RSA proxy server component and receives, in response, the decrypted pre-master secret. In this manner, the decryption key does not need to be stored in association with the SSL server. | 06-20-2013 |
20130185387 | Host/path-based data differencing in an overlay network using a compression and differencing engine - A data differencing technique enables a response from a server to the request of a client to be composed of data differences from previous versions of the requested resource. To this end, data differencing-aware processes are positioned, one at or near the origin server (on the sending side) and the other at the edge closest to the end user (on the receiving side), and these processes maintain object dictionaries. The data differencing-aware processes each execute a compression and differencing engine. Whenever requested objects flow through the sending end, the engine replaces the object data with pointers into the object dictionary. On the receiving end of the connection, when the data arrives, the engine reassembles the data using the same object dictionary. The approach is used for version changes within a same host/path, using the data differencing-aware processes to compress data being sent from the sending peer to the receiving peer. | 07-18-2013 |
20130311433 | Stream-based data deduplication in a multi-tenant shared infrastructure using asynchronous data dictionaries - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. In this approach, data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. Because the compressed objects are treated as just objects, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network (CDN) procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. | 11-21-2013 |
20140189040 | Stream-based data deduplication with cache synchronization - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers. | 07-03-2014 |
20140189069 | MECHANISM FOR DISTINGUISHING BETWEEN CONTENT TO BE SERVED THROUGH FIRST OR SECOND DELIVERY CHANNELS - Described herein are methods, apparatus and systems for selectively delivering content through one of two communication channels, one being origin to client and the other being from or through a CDN to client. Thus a client may choose to request content from a CDN and/or from an origin server. This disclosure sets forth techniques for, among other things, distinguishing which channel to use for a given object, using the CDN-client channel to obtain the performance benefit of doing so, and reverting to the origin-client channel where content may be private, sensitive, corrupted, or otherwise considered to be unsuitable for delivery from and/or through the CDN. | 07-03-2014 |
20140189070 | Stream-based data deduplication using directed cyclic graphs to facilitate on-the-wire compression - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers. | 07-03-2014 |
20140189071 | Stream-based data deduplication with peer node prediction - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers. | 07-03-2014 |
20150052349 | Splicing into an active TLS session without a certificate or private key - An origin server selectively enables an intermediary (e.g., an edge server) to shunt into and out of an active TLS session that is on-going between a client and the origin server. The technique allows for selective pieces of a data stream to be delegated from an origin to the edge server for the transmission (by the edge server) of authentic cached content, but without the edge server having the ability to obtain control of the entire stream or to decrypt arbitrary data after that point. The technique enables an origin to authorize the edge server to inject cached data at certain points in a TLS session, as well as to mathematically and cryptographically revoke any further access to the stream until the origin deems appropriate. | 02-19-2015 |
20150067338 | Providing forward secrecy in a terminating SSL/TLS connection proxy using ephemeral Diffie-Hellman key exchange - An infrastructure delivery platform provides a proxy service as an enhancement to the TLS/SSL protocol to off-load to an external server the generation of a digital signature, the digital signature being generated using a private key that would otherwise have to be maintained on a terminating server. Using this service, instead of digitally signing (using the private key) “locally,” the terminating server proxies given public portions of ephemeral key exchange material to the external server and receives, in response, a signature validating the terminating server is authorized to continue with the key exchange. In this manner, a private key used to generate the digital signature (or, more generally, to facilitate the key exchange) does not need to be stored in association with the terminating server. Rather, that private key is stored only at the external server, and there is no requirement for the pre-master secret to travel (on the wire). | 03-05-2015 |
20150106624 | Providing forward secrecy in a terminating TLS connection proxy - An infrastructure delivery platform provides an RSA proxy service as an enhancement to the TLS/SSL protocol to off-load, from an edge server to an external cryptographic server, the decryption of an encrypted pre-master secret. The technique provides forward secrecy in the event that the edge server is compromised, preferably through the use of a cryptographically strong hash function that is implemented separately at both the edge server and the cryptographic server. To provide the forward secrecy for this particular leg, the edge server selects an ephemeral value and applies a cryptographic hash to the value to compute a server random value, which is then transmitted back to the requesting client. That server random value is later re-generated at the cryptographic server to enable the cryptographic server to compute a master secret. The forward secrecy is enabled by ensuring that the ephemeral value does not travel on the wire. | 04-16-2015 |
20150281204 | Traffic on-boarding for acceleration through out-of-band security authenticators - A traffic on-boarding method is operative at an acceleration server of an overlay network. It begins at the acceleration server when that server receives an assertion generated by an identity provider (IdP), the IdP having generated the assertion upon receiving an authentication request from a service provider (SP), the SP having generated the authentication request upon receiving from a client a request for a protected resource. The acceleration server receives the assertion and forwards it to the SP, which verifies the assertion and returns to the acceleration server a token, together with the protected resource. The acceleration server then returns a response to the requesting client that includes a version of the protected resource that points back to the acceleration server and not the SP. When the acceleration server then receives an additional request from the client, the acceleration server interacts with the service provider using an overlay network optimization. | 10-01-2015 |
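The stream-based deduplication family above (20130311433, 20140189040, 20140189070, 20140189071) shares one idea: the encoder replaces data chunks with fingerprints, and a decoding peer whose dictionary is out of sync simply fetches the missing chunks on demand (“backfilling”) instead of maintaining a library synchronized with the sender's. A toy illustration of that idea follows; the function names, fixed-size chunking, and in-memory dictionaries are assumptions for the sketch, not details from the applications:

```python
import hashlib

CHUNK = 64  # toy fixed-size chunking; real systems typically chunk by content

def encode(data: bytes, sender_dict: dict) -> list:
    """Replace each chunk with its SHA-256 fingerprint, storing the chunk
    in the sender's dictionary so it can be served on a later fetch."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        sender_dict[h] = chunk
        refs.append(h)
    return refs

def decode(refs: list, receiver_dict: dict, fetch) -> bytes:
    """Reassemble the stream from fingerprints; on a dictionary miss,
    backfill the chunk via `fetch` (standing in for a CDN retrieval)."""
    out = bytearray()
    for h in refs:
        if h not in receiver_dict:
            receiver_dict[h] = fetch(h)  # backfill the dictionary on-the-fly
        out += receiver_dict[h]
    return bytes(out)
```

In use, the receiver can start with an empty dictionary: every miss is resolved through `fetch`, so the two dictionaries converge on demand rather than being pre-synchronized.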
Patent application number | Description | Published |
20100051594 | MICRO-ARC ALLOY CLEANING METHOD AND DEVICE - A cleaning device and method utilizes an electric circuit including a power supply electrically attached through a first lead to the alloy structure and a second lead connected to a probe. A protective atmosphere is provided over the surface of the alloy that is to be cleaned. Electric energy supplied by the power source generates an electric arc between the probe and the surface of the structure within the protective atmosphere. The electric interaction created by the electric arc and the surface of the alloy removes built up undesired material. | 03-04-2010 |
20110223317 | DIRECT THERMAL STABILIZATION FOR COATING APPLICATION - A coating system includes a first work piece, a work piece support for holding the first work piece, a plasma-based coating delivery apparatus configured to apply a coating material to the first work piece in a plasma-based vapor stream, and a first electron gun configured to direct a first electron beam at the first work piece while the plasma-based coating delivery apparatus applies the coating to the first work piece for heating the first work piece being coated, wherein the first electron gun is configured to direct the first electron beam at a region of the first work piece facing away from the plasma-based coating delivery apparatus. | 09-15-2011 |
20110223353 | HIGH PRESSURE PRE-OXIDATION FOR DEPOSITION OF THERMAL BARRIER COATING WITH HOOD - An apparatus for coating a work piece includes a process chamber, a coating material supply apparatus located at least partially within the process chamber for delivering a coating material to the work piece, a pre-heater assembly adjoining the process chamber, and a support for holding the work piece. The pre-heater assembly includes a housing that opens to the process chamber and a thermal hood positioned within the housing and configured to reflect thermal energy toward the work piece. The support is movable to selectively move the work piece between a first position within the housing of the pre-heater assembly and a second position within the process chamber and outside the housing of the pre-heater assembly. | 09-15-2011 |
20110223354 | HIGH PRESSURE PRE-OXIDATION FOR DEPOSITION OF THERMAL BARRIER COATING - An apparatus for coating a work piece includes a process chamber, a coating material supply apparatus located at least partially within the process chamber for delivering a coating material to the work piece, a pre-heater assembly adjoining the process chamber, and a support for holding the work piece. The pre-heater assembly includes a housing that opens to the process chamber, a susceptor comprising a ceramic material positioned at least partially within the housing, and a pre-heater electron gun configured to direct an electron beam at the susceptor such that the susceptor radiates heat toward the work piece. The support is movable to selectively move the work piece between a first position within the housing of the pre-heater assembly and a second position within the process chamber and outside the housing of the pre-heater assembly. | 09-15-2011 |
20110223355 | THERMAL STABILIZATION OF COATING MATERIAL VAPOR STREAM - A coating system includes a work piece, a coating delivery apparatus configured to apply a coating material to the work piece in a plasma-based vapor stream, and a first electron gun configured to direct a first electron beam at the plasma-based vapor stream for adding thermal energy to the coating material in the plasma-based vapor stream. | 09-15-2011 |
20110223356 | COATING APPARATUS AND METHOD WITH INDIRECT THERMAL STABILIZATION - An apparatus includes a work piece support for holding and selectively rotating a work piece, a coating delivery apparatus configured to apply a coating material to the work piece, a susceptor positioned adjacent to the work piece support, and a first electron gun configured to direct a first electron beam at the susceptor such that the susceptor radiates heat toward the work piece. | 09-15-2011 |
20110281107 | LAYERED THERMAL BARRIER COATING WITH BLENDED TRANSITION AND METHOD OF APPLICATION - A multilayer coating includes a bond coat layer and a first barrier layer applied on the bond coat layer. The first barrier layer has a compositional gradient ranging from a majority of a first rare earth stabilized zirconia material proximate the bond coat layer to a majority of a second rare earth stabilized zirconia material away from the bond coat layer. The first and second rare earth stabilized zirconia materials are different. | 11-17-2011 |
20120282402 | Coating Methods and Apparatus - An apparatus deposits a coating on a part. The apparatus comprises a chamber and a sting assembly for carrying the part. The sting assembly is shiftable between: an inserted condition where the sting assembly holds the part within the chamber for coating; and a retracted condition where the sting assembly holds the part outside of the chamber. The apparatus comprises a source of the coating material positioned to communicate the coating material to the part in the inserted condition. The apparatus comprises a thermal hood comprising a first member and a second member. The second member is between the first member and the part when the part is in the inserted condition. The second member is carried by the sting assembly so as to retract with the sting assembly as the sting assembly is retracted from the inserted condition to the retracted condition. | 11-08-2012 |
20120328445 | GRIT BLAST FREE THERMAL BARRIER COATING REWORK - A grit blast free method of removing a ceramic thermal barrier layer from a turbine component is described. The method comprises removing the layer in an autoclave with a caustic medium followed by a low pressure water jet wash. The component is dried in a stream of hot dry nitrogen and a new thermal barrier coating is applied before the component reenters product flow. | 12-27-2012 |
20130065048 | LAYERED THERMAL BARRIER COATING WITH BLENDED TRANSITION AND METHOD OF APPLICATION - A multilayer coating includes a bond coat layer, a first barrier layer applied on the bond coat layer, and a second barrier layer applied on the first barrier layer. The first barrier layer has a compositional gradient ranging from a majority of a first rare earth stabilized zirconia material proximate the bond coat layer to a majority of a second rare earth stabilized zirconia material away from the bond coat layer. The first and second rare earth stabilized zirconia materials are different. The second barrier layer has a compositional gradient ranging from a majority of the second rare earth stabilized zirconia material to 100 wt % of a third rare earth stabilized zirconia material away from the first barrier layer. | 03-14-2013 |
20140030446 | METHOD OF APPLICATION FOR LAYERED THERMAL BARRIER COATING WITH BLENDED TRANSITION - A method includes generating a plasma plume with a plasma gun, delivering a plurality of coating materials to the plasma plume with a powder feeder assembly to vaporize the coating materials. The delivery includes delivering a first (bond coat) material from a first powder feeder to the plasma gun, ceasing delivery of the first material, increasing a rate of delivery of a second (rare earth stabilized zirconia) material from a second powder feeder to the plasma plume, increasing a rate of delivery of a third material (a rare earth stabilized zirconia material different from the second material) from a third powder feeder to the plasma plume, decreasing a rate of delivery of the second material, and decreasing a rate of delivery of the third material, and depositing the plurality of coating materials on a work piece to produce a layered coating with blended transitions between coating layers. | 01-30-2014 |
20150152544 | Coating Methods and Apparatus - An apparatus deposits a coating on a part. The apparatus comprises a chamber and a sting assembly for carrying the part. The sting assembly is shiftable between: an inserted condition where the sting assembly holds the part within the chamber for coating; and a retracted condition where the sting assembly holds the part outside of the chamber. The apparatus comprises a source of the coating material positioned to communicate the coating material to the part in the inserted condition. The apparatus comprises a thermal hood comprising a first member and a second member. The second member is between the first member and the part when the part is in the inserted condition. The second member is carried by the sting assembly so as to retract with the sting assembly as the sting assembly is retracted from the inserted condition to the retracted condition. | 06-04-2015 |
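The blended-transition method of 20140030446 produces its gradient layers by ramping one powder feeder's delivery rate down while ramping the next feeder's rate up. As a purely illustrative model, not taken from the application, the linear ramp, constant total rate, and function name below are all assumptions; such a cross-fade between two feeders can be expressed as:

```python
def feed_rates(t, t_start, t_end, max_rate=1.0):
    """Toy linear cross-fade between two powder feeders over [t_start, t_end]:
    feeder A ramps down as feeder B ramps up, keeping total delivery constant,
    which yields a blended compositional transition in the deposited layer."""
    if t <= t_start:
        frac = 0.0                      # before the transition: all feeder A
    elif t >= t_end:
        frac = 1.0                      # after the transition: all feeder B
    else:
        frac = (t - t_start) / (t_end - t_start)
    return (1.0 - frac) * max_rate, frac * max_rate  # (rate_A, rate_B)
```

Chaining several such ramps (bond coat material, then successive rare earth stabilized zirconia materials) mirrors the sequence of increases and decreases the abstract describes.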