Patent application number | Description | Published |
--- | --- | --- |
20090003541 | NETWORK-HOSTED SERVER, A METHOD OF MONITORING A CALL CONNECTED THERETO AND A NETWORK-HOSTED VOICEMAIL SERVER - The present invention provides a network-hosted server, a method of monitoring a call connected to a network-hosted server and a network-hosted voicemail server associated with a called party. In one embodiment, the network-hosted server includes: (1) a call-alerter configured to provide an indication to a called party when a call between a calling party and the network-hosted server is established and (2) a call monitor coupled to the call-alerter and configured to establish a one-way connection of the call between the server and the called party to allow the called party to monitor the call in progress. | 01-01-2009 |
20100217869 | TOPOLOGY AWARE CACHE COOPERATION - A content distribution network (CDN) comprising a hierarchy of content storage nodes (CSNs) or caches having storage space that is allocated between local space for storing locally popular content objects and federated space for storing a portion of the less popular content objects. Local space and federated space are reallocated based upon changes in content object popularity and/or other utility factors. Optionally, parent/child (upstream/downstream) communication paths are used to migrate content between CSNs or caches of the same or different hierarchical levels to avoid utilizing higher-priced top-hierarchical-level communications channels. | 08-26-2010 |
20110078312 | Method and system for monitoring incoming connection requests in a Peer-to-Peer network - The present invention relates to a system and method for controlling peer-to-peer (P2P) traffic in an internet service provider (ISP) network. The system includes an ISP server configured to determine whether to accept or reject an incoming connection request to connect to a requested peer from a requesting peer in a P2P application. According to one embodiment, the ISP server is configured to determine whether to accept or reject the incoming connection request based on current peer connectivity and cost of the incoming connection request. According to another embodiment, the ISP server determines whether to accept or reject the incoming connection request based on preference information available at the ISP server. | 03-31-2011 |
20110173248 | Method for providing on-path content distribution - In one embodiment, the method includes receiving a content request from the end user at a proxy. A modified TCP connection request message is generated such that the modified TCP connection request message includes a content identifier. The content identifier identifies the requested content. The modified TCP connection request message is sent from the proxy towards an origin server associated with the requested content, and a response to the TCP connection request message is received from a network element. A TCP connection is established with the network element. | 07-14-2011 |
20110202651 | PRICE-AWARE NEIGHBORHOOD SELECTION FOR PEER-TO-PEER NETWORKS - A method and apparatus for peer-to-peer file sharing is provided. In some embodiments, the method includes receiving a request for a list of neighbor peers, where the request is made by a requesting peer device, and where the requesting peer device has a local internet service provider (ISP). The method may also include employing a server device to rank each neighbor peer in a plurality of neighbor peers based on whether the respective neighbor peer is external to the local ISP, and if the respective neighbor peer is external to the ISP, further based on a cost metric associated with a next ISP hop from the requesting peer device to the respective neighbor peer. The method may also include generating the list of neighbor peers based on the ranking of the neighbor peers, and enabling transmission of the list of neighbor peers to the requesting peer device. | 08-18-2011 |
20110276718 | Decreasing latency in anonymity networks - According to one embodiment, a method of decreasing latency in an anonymity network includes filtering a list of anonymity routers for a client device based on one of (i) loads of the anonymity routers on the list and (ii) distances of the anonymity routers from the client device. | 11-10-2011 |
20130007187 | TOPOLOGY AWARE CACHE STORAGE - A content distribution network (CDN) comprising a hierarchy of content storage nodes (CSNs) or caches having storage space that is allocated between local space for storing locally popular content objects and federated space for storing a portion of the less popular content objects. Local space and federated space are reallocated based upon changes in content object popularity and/or other utility factors. Optionally, parent/child (upstream/downstream) communication paths are used to migrate content between CSNs or caches of the same or different hierarchical levels to avoid utilizing higher-priced top-hierarchical-level communications channels. | 01-03-2013 |
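The price-aware neighborhood selection entry above (20110202651) ranks candidate neighbor peers so that peers inside the requester's local ISP come first, and external peers are ordered by the cost of the next ISP hop. A minimal sketch of such a ranking follows; the field names (`peer_id`, `isp`, `hop_cost`) and dict representation are illustrative assumptions, not the patented data model.

```python
def rank_neighbors(neighbors, local_isp):
    """Rank candidate neighbor peers for a requesting peer.

    Peers inside the requester's local ISP sort first; peers external to
    that ISP sort by the cost metric of the next ISP hop, cheapest first.
    """
    def key(peer):
        external = peer["isp"] != local_isp
        # Internal peers get cost 0 so they always precede external ones.
        return (external, peer["hop_cost"] if external else 0)
    return sorted(neighbors, key=key)

peers = [
    {"peer_id": "a", "isp": "isp-2", "hop_cost": 5},
    {"peer_id": "b", "isp": "isp-1", "hop_cost": 3},
    {"peer_id": "c", "isp": "isp-3", "hop_cost": 1},
]
ranked = rank_neighbors(peers, local_isp="isp-1")
# "b" (local) first, then "c" and "a" by ascending hop cost.
```

The tuple key exploits Python's ordering of `(bool, number)` pairs: all internal peers (`False`) precede all external ones (`True`), and ties break on cost.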
Patent application number | Description | Published |
--- | --- | --- |
20100070700 | CACHE MANAGEMENT SYSTEM AND METHOD AND CONTENT DISTRIBUTION SYSTEM INCORPORATING THE SAME - A cache management system and method and a content distribution system. In one embodiment, the cache management system includes: (1) a content request receiver configured to receive content requests, (2) a popularity lifetime prediction modeler coupled to the content request receiver and configured to generate popularity lifetime prediction models for content that can be cached based on at least some of the content requests, (3) a database coupled to the popularity lifetime prediction modeler and configured to contain the popularity lifetime prediction models and (4) a popularity lifetime prediction model matcher coupled to the content request receiver and the database and configured to match at least one content request to the popularity lifetime prediction models and control a cache based thereon. | 03-18-2010 |
20100293294 | PEER-TO-PEER COMMUNICATION OPTIMIZATION - A peer-to-peer communication optimizer uses both peer locality and content diversity in a peer group to reduce network usage cost associated with using remote peers in a peer-to-peer system while reducing impact on the download time relative to peer-to-peer protocols operating with locality optimization alone or no localization of peers. The optimizer intercepts control messages in the peer-to-peer system and substitutes peer lists that meet both diversity indicator and network usage cost thresholds. Transparent embodiments operate without requirement to change peer or tracker implementations. Such embodiments include control message redirection, interception, and modification transparent to the client and tracker applications. Other embodiments include proxy designation. Still other embodiments include the use of gateway peers selected as function of diversity of content and network topology. Still other embodiments involve modification to one or more of client and/or tracker software and potentially the use of a standard interface for network topology determination. | 11-18-2010 |
20110153835 | SYSTEM AND METHOD FOR CONTROLLING PEER-TO-PEER CONNECTIONS - The present invention relates to a system and method for controlling peer-to-peer connections in a Peer-to-Peer (P2P) streaming application for individual Internet Service Provider (ISP) networks over a localized overlay. The system may include a tracker local to a first ISP network configured to select edge peers among local peers of the first ISP network. The selected edge peers have external connections to peers outside the first ISP network in order to transfer sub-streams to or from the first ISP network, and the local peers not selected as edge peers have internal connections to other local peers within the first ISP network to transfer the sub-streams over the localized overlay. | 06-23-2011 |
20120327931 | GATEWAYS INTEGRATING NAME-BASED NETWORKS WITH HOST-BASED NETWORKS - A method of retrieving content from a network with a host-based network and a name-based network includes receiving, at a network node, a first message including at least one of a first host-based request and a first name-based interest, and transmitting, from the network node, a second message based on the at least one of the first host-based request and the first name-based interest. | 12-27-2012 |
20140089452 | CONTENT STREAM DELIVERY USING VARIABLE CACHE REPLACEMENT GRANULARITY - A method comprises associating at least one cache replacement granularity value with a given one of a plurality of content streams comprising a number of segments, receiving a request for a given segment of the given content stream in a network element, identifying a given portion of the given content stream which contains the given segment, updating a value corresponding to the given portion of the given content stream, and determining whether to store the given portion of the given content stream in a memory of the network element based at least in part on the updated value corresponding to the given portion. The at least one cache replacement granularity value represents a given number of segments, the given content stream being separable into one or more portions based at least in part on the at least one cache replacement granularity value. | 03-27-2014 |
20140089467 | CONTENT STREAM DELIVERY USING PRE-LOADED SEGMENTS - A method comprises receiving a first request for a first segment of a content stream in a network element from a given one of a plurality of clients, determining in the network element whether the first segment is stored in a memory of the network element, sending a second request for the first segment from the network element to a server responsive to the determining step, receiving a response comprising the first segment in the network element from the server responsive to the second request, and sending the first segment from the network element to the given one of the plurality of clients. The first segment is related to a second segment of the content stream, the relationship being transparent to the network element but being inferable based at least in part on at least one of the first request, the response and one or more prior requests. | 03-27-2014 |
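The variable cache-replacement granularity entry above (20140089452) groups a stream's segments into portions of a fixed number of segments, updates a per-portion value on each segment request, and decides caching per portion rather than per segment. The toy model below illustrates that grouping; the class name, the request-count value, and the fixed caching threshold are illustrative assumptions, not the claimed method.

```python
class PortionCache:
    """Toy model of portion-level caching: segments are grouped into
    portions of `granularity` segments, a request count is kept per
    portion, and a portion is cached once its count reaches `threshold`."""

    def __init__(self, granularity, threshold):
        self.granularity = granularity   # segments per portion
        self.threshold = threshold       # requests before a portion is cached
        self.counts = {}                 # portion index -> request count
        self.stored = set()              # portion indices held in cache

    def on_request(self, segment_index):
        portion = segment_index // self.granularity
        self.counts[portion] = self.counts.get(portion, 0) + 1
        if self.counts[portion] >= self.threshold:
            self.stored.add(portion)
        return portion

cache = PortionCache(granularity=4, threshold=2)
cache.on_request(4)   # segment 4 falls in portion 1; first hit, not cached
cache.on_request(7)   # segment 7 is also portion 1; now cached
# cache.stored -> {1}
```

A coarser granularity amortizes replacement bookkeeping over more segments at the cost of caching in larger units.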
Patent application number | Description | Published |
--- | --- | --- |
20100095331 | Method and apparatus for performing template-based prefix caching for use in video-on-demand applications - A method and apparatus for performing template-based prefix caching advantageously identifies common prefixes (i.e., initial video segments) in video titles, stores common prefixes only once in a prefix cache, and uses these common prefixes when serving requests for video content. This advantageously enables a prefix cache to scale to a large number of video titles, since the cache stores each common prefix only once. A new video title that uses an already existing prefix may be advantageously added without requiring additional storage in the prefix cache. Template-based prefix caching also advantageously reduces the bandwidth required to distribute prefixes when new titles are ingested into the system—if the required template is already available in the prefix cache, prefix-caching is enabled instantly for this title and no additional bandwidth is required to distribute the prefix. | 04-15-2010 |
20110299392 | QUALITY OF SERVICE AWARE RATE THROTTLING OF DELAY TOLERANT TRAFFIC FOR ENERGY EFFICIENT ROUTING - The invention is directed to energy-efficient network processing of delay tolerant data packet traffic. Embodiments of the invention determine if an aggregate of time critical traffic flow rates and minimum rates for meeting QoS requirements of delay tolerant traffic flows exceeds a combined optimal rate of packet processing engines of a network processor. In the affirmative case, embodiments set the processing rate of individual packet processing engines to a minimum rate, such that the cumulative rate of the packet processing engines meets the aggregate rate, and schedule the delay tolerant flows to meet their respective minimum rates. Advantageously, by throttling the processing rate of only delay tolerant traffic, energy consumption of network processors can be reduced while at the same time QoS requirements of the delay tolerant traffic and time critical traffic can be met. | 12-08-2011 |
20110307538 | NETWORK BASED PEER-TO-PEER TRAFFIC OPTIMIZATION - A peer-to-peer accelerator system is disclosed for reducing reverse link bandwidth bottlenecking of peer-to-peer content transfers. The peer-to-peer accelerator system contains a peer-to-peer proxy which resides in the core of the network. When a peer-to-peer bootstrap message from an asymmetrically connected client occurs, the proxy intercepts the message and instantiates an agent which will perform file transfers on the asymmetrically connected client's behalf thereby eliminating the need for the client to effect file content transfers over the reverse link. The peer-to-peer accelerator system is particularly useful for overcoming the bottlenecking and reverse link contention problems of peer-to-peer file transfer systems known in the art. | 12-15-2011 |
20120324102 | QUALITY OF SERVICE AWARE RATE THROTTLING OF DELAY TOLERANT TRAFFIC FOR ENERGY EFFICIENT ROUTING - The invention is directed to energy-efficient network processing of delay tolerant data packet traffic. Embodiments of the invention determine if an aggregate of time critical traffic flow rates and minimum rates for meeting QoS requirements of delay tolerant traffic flows exceeds a combined optimal rate of packet processing engines of a network processor. In the affirmative case, embodiments set the processing rate of individual packet processing engines to a minimum rate, such that the cumulative rate of the packet processing engines meets the aggregate rate, and schedule the delay tolerant flows to meet their respective minimum rates. Advantageously, by throttling the processing rate of only delay tolerant traffic, energy consumption of network processors can be reduced while at the same time QoS requirements of the delay tolerant traffic and time critical traffic can be met. | 12-20-2012 |
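The QoS-aware rate throttling entries above (20110299392 and 20120324102) compare the aggregate of time-critical traffic plus the delay-tolerant flows' QoS minimums against the combined optimal rate of the packet processing engines. The sketch below is one reading of that rule, with uniform engines and Mb/s units as simplifying assumptions; the function and parameter names are illustrative, not from the applications.

```python
def plan_engine_rates(time_critical_rate, delay_tolerant_min_rates,
                      engine_optimal_rate, num_engines):
    """If the aggregate of time-critical traffic plus the minimum rates
    that keep delay-tolerant flows within their QoS bounds exceeds the
    combined optimal rate of the engines, set each engine to the minimum
    per-engine rate whose cumulative total still meets that aggregate;
    otherwise leave engines at their energy-optimal rate."""
    aggregate = time_critical_rate + sum(delay_tolerant_min_rates)
    if aggregate > engine_optimal_rate * num_engines:
        return aggregate / num_engines   # minimum rate covering demand
    return engine_optimal_rate           # demand fits the optimal rate

# 100 Mb/s time-critical plus 60 Mb/s of delay-tolerant minimums on two
# engines with a 50 Mb/s optimal rate each: every engine runs at 80 Mb/s.
rate = plan_engine_rates(100, [30, 30], engine_optimal_rate=50, num_engines=2)
```

Delay-tolerant flows then need only be scheduled at their minimum rates, which is what lets the engines run no faster than demand requires.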