MULTICOMPUTER DATA TRANSFERRING VIA SHARED MEMORY

Subclass of:

709 - Electrical computers and digital processing systems: multicomputer data transferring

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class        Description                               Number of patent applications
709216000    Accessing another computer's memory       55
709214000    Plural shared memories                    39
709215000    Partitioned shared memory                 24
Entries
Document        Title and abstract        Date published
20130073668 SPECULATIVE AND COORDINATED DATA ACCESS IN A HYBRID MEMORY SERVER - A method, accelerator system, and computer program product for prefetching data from a server system in an out-of-order processing environment. A plurality of prefetch requests associated with one or more given data sets residing on the server system are received from an application on the server system. Each prefetch request is stored in a prefetch request queue. A score is assigned to each prefetch request. A set of prefetch requests whose scores are above a given threshold is selected from the prefetch request queue. For each prefetch request in the selected set, a set of data that satisfies that request is prefetched from the server system. 03-21-2013
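As a rough illustration of the scored prefetch-queue selection described in the abstract above, the following Python sketch models a queue that stores requests, assigns each a score, and selects only those above a threshold. The class and field names, the numeric score, and the example threshold are illustrative assumptions, not details taken from the application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PrefetchRequest:
    data_set: str          # identifier of the data set residing on the server system
    score: float = 0.0     # priority score assigned to the request (assumed numeric)

class PrefetchQueue:
    """Minimal model of a prefetch request queue with score-based selection."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.requests: List[PrefetchRequest] = []

    def enqueue(self, request: PrefetchRequest, score: float) -> None:
        request.score = score              # assign a score to each prefetch request
        self.requests.append(request)      # store it in the prefetch request queue

    def select(self) -> List[PrefetchRequest]:
        # select the set of requests whose score is above the given threshold
        selected = [r for r in self.requests if r.score > self.threshold]
        self.requests = [r for r in self.requests if r.score <= self.threshold]
        return selected

# Example: only requests scoring above 0.5 would be prefetched from the server system.
queue = PrefetchQueue(threshold=0.5)
queue.enqueue(PrefetchRequest("dataset-A"), score=0.9)
queue.enqueue(PrefetchRequest("dataset-B"), score=0.2)
to_prefetch = queue.select()   # contains only the "dataset-A" request
```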
20130073667 TECHNIQUES FOR ADMINISTERING AND MONITORING MULTI-TENANT STORAGE - Techniques for managing and monitoring multi-tenant storage in a cloud environment are presented. Storage resources are monitored on a per-tenant basis and as a whole for the cloud environment. New and existing administrative types can be dynamically created and managed within the cloud environment. 03-21-2013
20130073666 DISTRIBUTED CACHE CONTROL TECHNIQUE - A disclosed method includes: receiving an identifier of a user, an identifier of contents associated with the user, and identification data concerning a sensor that read the identifier of the user; reading an identifier of a node associated with the received identification data, or a combination of the received identification data and the received identifier of the user, from a data storage unit storing an identifier of a node that will cache contents to be outputted to a display device provided at a different position from a position of a sensor, in association with identification data concerning the sensor or a combination of identification data concerning the sensor and an identifier of a user; and transmitting the received identifier of the user and an identifier of contents associated with the user to a node whose identifier was read. 03-21-2013
20130031201INTELLIGENT ELECTRONIC DEVICE COMMUNICATION SOLUTIONS FOR NETWORK TOPOLOGIES - Systems and methods for communicating data from an IED on an internal network to a server, a client or device on an external network through a firewall are provided.01-31-2013
20130031200 QUALITY OF SERVICE MANAGEMENT - A method for managing an amount of IO requests transmitted from a host computer to a storage system is described. A current latency value of an IO request most recently removed from an issue queue maintained by the host computer in order to transmit IO requests from the host computer to the storage system is periodically determined. An average latency value is then calculated based on the current latency value, and a size limit of the issue queue is adjusted based in part on the average latency value. Upon receiving an IO request from one of a plurality of client applications running on the host computer, it can then be determined whether the number of pending IO requests in the issue queue has reached the size limit, and the IO request can be transmitted to the issue queue if the number of pending IO requests falls within the size limit. 01-31-2013
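The queue-sizing idea in the abstract above (periodically sample IO latency, maintain an average, and grow or shrink the issue-queue limit) can be sketched as follows. The smoothing factor, the target latency, and the one-step adjustment policy are illustrative assumptions; the application does not specify them.

```python
class IssueQueueController:
    """Illustrative sketch: adjust an issue-queue size limit from average IO latency."""

    def __init__(self, size_limit: int, latency_target_ms: float, alpha: float = 0.2):
        self.size_limit = size_limit
        self.latency_target_ms = latency_target_ms  # assumed target latency
        self.alpha = alpha                          # smoothing factor (assumption)
        self.avg_latency_ms = 0.0
        self.pending = 0

    def on_request_completed(self, current_latency_ms: float) -> None:
        # fold the latency of the request just removed from the queue into the average
        self.avg_latency_ms = (self.alpha * current_latency_ms
                               + (1.0 - self.alpha) * self.avg_latency_ms)
        # adjust the size limit based in part on the average latency value
        if self.avg_latency_ms > self.latency_target_ms:
            self.size_limit = max(1, self.size_limit - 1)
        else:
            self.size_limit += 1
        self.pending = max(0, self.pending - 1)

    def try_issue(self) -> bool:
        # admit the IO request only if pending requests fall within the size limit
        if self.pending < self.size_limit:
            self.pending += 1
            return True
        return False
```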
20130031199TRANSMITTING DATA INCLUDING PIECES OF DATA - A method and system for transmitting data including pieces of data. The method includes the steps of: placing a piece of data on at least one cache memory; and sending a signal indicating a presence of the piece of data on the cache memory to at least one client, where at least one of the steps is carried out by a computer device.01-31-2013
20130031198TAILORING CONTENT TO BE DELIVERED TO MOBILE DEVICE BASED UPON FEATURES OF MOBILE DEVICE - A system and computer program product for delivering tailored specific content to a mobile device. A shim application is provided to the mobile device by a content server after the mobile device visits the content server for the first time. The shim application detects the capabilities of the mobile device, such as the screen size, screen resolution, memory size, browser capabilities, etc. The shim application then includes such information in the header of the requests, such as a request for content, sent from the mobile device to the content server. The content server then generates the requested content in the appropriate format based on the information provided in the header. In this manner, the content server will now be able to ensure that the content provided by the content server for a particular mobile device will be appropriately displayed on the mobile device.01-31-2013
20130031197INTERNET CACHE SUBSCRIPTION FOR WIRELESS MOBILE USERS - A server device may receive an indication that a mobile device has enrolled in a cache subscription service. The server device may receive cache parameters associated with the cache subscription service, where the cache parameters are specific to the mobile device. Content may be retrieved from a network and stored, in a memory associated with the one or more server devices, based on the received cache parameters. The server device may receive, from the mobile device, a request for particular content, determine whether the request for particular content corresponds to content that is stored in the memory, and provide, when determining that the requested particular content corresponds to content that is stored in the memory, the corresponding stored content to the mobile device.01-31-2013
20110202625 Storage device, system and method for data share - The present invention is a storage device for data sharing which comprises a device body with a USB communications interface unit, a memory unit, and a control unit, wherein the memory unit has an executable file/program comprising a group management module used to manage a group/peer list, and the group list has at least a group ID and a peer ID. Accordingly, storage devices with the same group ID can be referred to as “peers” inside the group and mutually share files saved in the respective storage devices when at least two storage devices with the same group ID are separately plugged into computers and complete login on the central server via the Internet. 08-18-2011
20100077055 Remote user interface in a terminal server environment - Methods, apparatus, systems and computer program product for updating a user session in a terminal server environment. Transfer of display data corresponding to an updated user interface can occur via a memory shared between an agent server and an agent client in a terminal server environment. Access to the shared memory can be synchronized via token passing or other operation to prevent simultaneous access to the shared memory. Token sharing and synchronized input/output can be performed using FIFOs, sockets, files, semaphores and the like, allowing communications between the agent server and the agent client to adapt to different operating system architectures. 03-25-2010
20120246258HTTP-BASED SYNCHRONIZATION METHOD AND APPARATUS - An HTTP-based synchronization method includes obtaining a first response sent by a source server or a cache in response to an HTTP request for obtaining a file; determining time when the first response is sent in local time at server, according to a value of a Date field and a value of an Age field in the first response; determining time when the first response is sent in local time at client, according to the client time of an event related to the first response; and determining time offset between the server time and the client time according to the time when the first response is sent in local time at server and the time when the first response is sent in local time at client, and setting up a synchronization relationship between the client time and the server time.09-27-2012
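The time-offset computation described in the abstract above can be illustrated with standard HTTP header arithmetic: the Date field plus the Age field approximates when the response left the server in server time, and the client's receive time is the same event in client time. This is a plausible reading of the abstract, not the claimed algorithm verbatim; the helper name and inputs are assumptions.

```python
from email.utils import parsedate_to_datetime

def estimate_clock_offset(date_header: str, age_header: str,
                          client_receive_epoch: float) -> float:
    """Return an estimate of (server time - client time) in seconds.

    date_header: value of the HTTP 'Date' field, e.g. 'Tue, 15 Nov 1994 08:12:31 GMT'
    age_header:  value of the HTTP 'Age' field in seconds (time spent in caches)
    client_receive_epoch: client-local Unix time when the response arrived
    """
    server_origin_time = parsedate_to_datetime(date_header).timestamp()
    # time the first response was sent, expressed in local time at the server
    sent_in_server_time = server_origin_time + float(age_header)
    # the client-local time of the same event is approximated by its receive time
    sent_in_client_time = client_receive_epoch
    return sent_in_server_time - sent_in_client_time

# offset = estimate_clock_offset("Tue, 15 Nov 1994 08:12:31 GMT", "30", 784887200.0)
```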
20120246257Pre-Caching Web Content For A Mobile Device - A web service for pre-caching web content on a mobile device includes receiving a request from the mobile device for first web content, fetching the first web content, determining second web content to pre-fetch based upon the first web content, fetching the second web content, and causing the second web content to be stored in a content cache on the mobile device responsive to the request for the first web content. Pre-caching web content in this manner provides web content to the mobile device that the user of the mobile device is likely to access. Pre-caching of additional web content prior to receiving an explicit request improves web browsing performance of the mobile device.09-27-2012
20130086198APPLICATION-GUIDED BANDWIDTH-MANAGED CACHING - Methods and systems for populating a cache memory that services a media composition system. Caching priorities are based on a state of the media composition system, such as media currently within a media composition timeline, a composition playback location, media playback history, and temporal location within clips that are included in the composition. Caching may also be informed by descriptive metadata and media search results within a media composition client or a within a media asset management system accessed by the client. Additional caching priorities may be based on a project workflow phase or a client project schedule. Media may be partially written to or read from cache in order to meet media request deadlines. Caches may be local to a media composition system or remote, and may be fixed or portable.04-04-2013
20130080567ENCAPSULATED ACCELERATOR - A data processing system comprising: a host computer system supporting a software entity and a receive queue for the software entity; a network interface device having a controller unit configured to provide a data port for receiving data packets from a network and a data bus interface for connection to a host computer system, the network interface device being connected to the host computer system by means of the data bus interface; and an accelerator module arranged between the controller unit and a network and having a first medium access controller for connection to the network and a second medium access controller coupled to the data port of the controller unit, the accelerator module being configured to: on behalf of the software entity, process incoming data packets received from the network in one or more streams associated with a first set of one or more network endpoints; encapsulate data resulting from said processing in network data packets directed to the software entity; and deliver the network data packets to the data port of the controller unit so as to cause the network data packets to be written to the receive queue of the software entity.03-28-2013
20130080568SYSTEM AND METHOD FOR CACHING INQUIRY DATA ABOUT SEQUENTIAL ACCESS DEVICES - An intermediate device communicatively connected to a host device and a sequential device in a storage area network. The host device is configured to issue different kinds of commands to the sequential device, including an inquiry command. The sequential device is configured to sequentially process requests from the host device. The intermediate device is configured to cache inquiry data about the sequential device itself in a cache memory connected to the intermediate device and service inquiry commands from the host device.03-28-2013
20130086199 SYSTEM AND METHOD FOR MANAGING MESSAGE QUEUES FOR MULTINODE APPLICATIONS IN A TRANSACTIONAL MIDDLEWARE MACHINE ENVIRONMENT - A middleware machine environment can manage message queues for multinode applications. The middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment. The middleware machine environment further includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages. Additionally, different processes on a client operate to use at least one proxy to communicate with the message server. Furthermore, the middleware machine environment can protect message queues for multinode applications using a security token created by the daemon process. 04-04-2013
20130036186 CACHING REMOTE SWITCH INFORMATION IN A FIBRE CHANNEL SWITCH - A network of switches with a distributed name server configuration and caching of remote node device information is disclosed. The network preferably comprises a first switch coupled to a second switch. Each of the switches directly couples to respective node devices. The first switch maintains a name server database about its local node devices, as does the second switch. The second switch further maintains an information cache about remote node devices. The name server preferably notifies other switches of changes to the database, and the cache manager preferably uses the notifications from other switches to maintain the cache. The name server accesses the cache to respond to queries about remote node devices. The cache manager may also aggregate notification messages from other switches when notifying local devices of state changes. Traffic overhead and peak traffic loads may advantageously be reduced. 02-07-2013
20130080566SYSTEM AND METHOD FOR DYNAMIC CACHE DATA DECOMPRESSION IN A TRAFFIC DIRECTOR ENVIRONMENT - Described herein are systems and methods for use with a load balancer or traffic director, and administration thereof, wherein the traffic director is provided as a software-based load balancer that can be used to deliver a fast, reliable, scalable, and secure platform for load-balancing Internet and other traffic to back-end origin servers, such as web servers, application servers, or other resource servers. In accordance with an embodiment, the traffic director can be configured to compress data stored in its cache, and to respond to requests from clients by serving content from origin servers either as compressed data, or by dynamically decompressing the data before serving it, should a particular client prefer to receive a non-compressed variant of the data. In accordance with an embodiment, the traffic director can be configured to make use of hardware-assisted compression primitives, to further improve the performance of its data compression and decompression.03-28-2013
20130080565METHOD AND APPARATUS FOR COLLABORATIVE UPLOAD OF CONTENT - A collaborative cloud DVR system (ccDVR), which includes a cloud storage system and a plurality of participating DVR client devices, acts collaboratively as a single communal entity in which community members authorize each other to upload, remotely store and download licensed content for time shifted viewing, in a manner which rigorously protects legal rights of the content owners while overcoming the potential physical obstacles of limited bandwidth, power failures, incomplete uploads/downloads of content, limited cloud storage capacity, etc. The collaborative cloud DVR community collaboratively shares bandwidth and cloud storage capacity among DVR viewer/users with each owner/user of a DVR client device authorizing his or her individual DVR client device to be utilized by a cloud storage system server and any other owner/user of a DVR client device in the respective service community, and receiving similar permission in return to promote the convenience of cloud storage in an authorized manner.03-28-2013
20130138763SYSTEMS AND METHODS FOR CACHING AND SERVING DYNAMIC CONTENT - A web server and a shared caching server are described for serving dynamic content to users of at least two different types, where the different types of users receive different versions of the dynamic content. A version of the dynamic content includes a validation header, such as an ETag, that stores information indicative of the currency of the dynamic content and information indicative of a user type for which the version of the dynamic content is intended. In response to a user request for the dynamic content, the shared caching server sends a validation request to the web server with the validation header information. The web server determines, based on the user type of the requestor and/or on the currency of the cached dynamic content whether to instruct the shared caching server to send the cached content or to send updated content for serving to the user.05-30-2013
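A minimal sketch of a validation header that encodes both content currency and user type, in the spirit of the abstract above, might look like this. The ETag layout (content fingerprint plus user-type suffix) and the function names are assumptions; the application only requires that both pieces of information be recoverable from the header.

```python
import hashlib

def make_etag(content: bytes, user_type: str) -> str:
    # combine a content fingerprint (currency) with the audience (user type)
    digest = hashlib.sha256(content).hexdigest()[:16]
    return f'W/"{digest}:{user_type}"'

def validate(etag: str, current_content: bytes, requester_type: str) -> str:
    """Decide, as the web server might, whether the cached variant can be served."""
    try:
        inner = etag.removeprefix("W/").strip('"')
        digest, cached_user_type = inner.split(":", 1)
    except ValueError:
        return "send-updated-content"
    fresh = digest == hashlib.sha256(current_content).hexdigest()[:16]
    same_audience = cached_user_type == requester_type
    return "serve-cached-content" if fresh and same_audience else "send-updated-content"

# Example: a cached "member" variant is not served to an "anonymous" requester.
tag = make_etag(b"<html>page v1</html>", "member")
decision = validate(tag, b"<html>page v1</html>", "anonymous")  # "send-updated-content"
```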
20130041974 APPLICATION AND NETWORK-BASED LONG POLL REQUEST DETECTION AND CACHEABILITY ASSESSMENT THEREFOR - Systems and methods for application and network-based long poll request detection and cacheability assessment therefor are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a distributed proxy and cache system, including determining relative timings between a first request initiated by the application, a response received responsive to the first request, and a second request initiated subsequent to the first request, also by the application. The relative timings can be compared to request-response timing characteristics for other applications to determine whether the requests of the application are long poll requests. 02-14-2013
20130041973 Method and System for Sharing Audio and/or Video - The disclosure provides a method for sharing audio and/or video. The method includes the following steps: a first terminal writes audio and/or video from an audio-video providing module into a cache space according to a play request of a second terminal, and transmits the audio and/or video stored in the cache space to the second terminal. 02-14-2013
20130041972Content Delivery Network Routing Using Border Gateway Protocol - An announcement protocol may allow disparate, and previously incompatible, content delivery network caches to exchange information and cache content for one another. Announcement data may be stored by the respective caches, and used to determine whether a cache is able to service an incoming request. URL prefixes may be included in the announcements to identify the content, and longest-match lookups may be used to help determine a secondary option when a first cache determines that it lacks a requested content.02-14-2013
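The longest-match lookup mentioned above is a simple prefix comparison over the announced URL prefixes; a sketch under assumed data structures (a plain prefix-to-cache map) follows.

```python
from typing import Dict, Optional

def longest_prefix_match(url_path: str, announced_prefixes: Dict[str, str]) -> Optional[str]:
    """Return the cache that announced the longest URL prefix matching url_path.

    announced_prefixes maps a URL prefix (e.g. '/video/hd/') to a cache identifier,
    as learned from the announcement protocol; the names here are illustrative.
    """
    best_prefix = None
    for prefix in announced_prefixes:
        if url_path.startswith(prefix):
            if best_prefix is None or len(prefix) > len(best_prefix):
                best_prefix = prefix
    return announced_prefixes[best_prefix] if best_prefix is not None else None

# A cache that lacks '/video/hd/clip42' could fall back to the peer that announced
# the longest matching prefix:
peers = {"/video/": "cache-east", "/video/hd/": "cache-west"}
assert longest_prefix_match("/video/hd/clip42", peers) == "cache-west"
```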
20130041971 TECHNIQUE FOR IMPROVING REPLICATION PERSISTENCE IN A CACHING APPLIANCE STRUCTURE - A method for improving replication persistence in a caching appliance structure can begin when a primary catalog service receives a command to instantiate a data partition. The primary catalog service can manage a collective of caching appliances in a networked computing environment. The data partition can include a primary shard and at least one replica shard. The primary shard of the data partition can be stored within a memory space of a first caching appliance. The at least one replica shard of the data partition can be stored within a non-volatile storage space of a second caching appliance. The first and the second caching appliances can be separate physical devices. The memory space of the second caching appliance that could have been used to store the at least one replica shard can be available for storing primary shards for other data partitions, increasing the capacity of the collective. 02-14-2013
20130041970CLIENT SIDE CACHING - A method for client side caching includes, with a client system, running a proxy caching application designed for execution on a proxy server, with a content presentation application running on the client system, accessing content from a server communicatively coupled to the client system, and with said proxy caching application, transparently caching said content into a cache system of said client system.02-14-2013
20090157840Controlling Shared Access Of A Media Tray - Methods, apparatus, and products for controlling shared access of a media tray are disclosed that include monitoring communications between a virtualized media tray and a computing device currently connected to the virtualized media tray; receiving an access request from a requesting computing device not currently connected to the virtualized media tray; determining, in dependence upon the monitored communications between the virtualized media tray and the computing device currently connected to the virtualized media tray, to switch connection of the virtualized media tray from the computing device currently connected to the virtualized media tray to the requesting computing device; and switching connection of the virtualized media tray from the computing device currently connected to the virtualized media tray to the requesting computing device.06-18-2009
20130046845 STORAGE SYSTEM, CONTROL METHOD FOR STORAGE SYSTEM, AND COMPUTER PROGRAM - A control method for a storage system, whereby a plurality of storage nodes included in the storage system are grouped into a first group composed of storage nodes with a network distance in the storage system within a predetermined distance range, and second groups composed of storage nodes that share position information for the storage nodes that store data. A logical spatial identifier that identifies the second groups is allocated for each of the second groups, to calculate a logical spatial position using a data identifier as an input value for a distributed function, and store data corresponding to the data identifier in the storage node that belongs to the second group to which the identifier corresponding to the calculated position is allocated. 02-21-2013
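The abstract above maps a data identifier to a logical spatial position via a distributed function and stores the data in the corresponding second group. A minimal sketch, assuming the distributed function is a stable hash taken modulo the number of groups (the abstract leaves the function unspecified):

```python
import hashlib
from typing import List

def second_group_for(data_identifier: str, group_identifiers: List[str]) -> str:
    """Map a data identifier to a second-group identifier via a distributed function.

    The 'distributed function' is modelled here as a stable hash modulo the number
    of groups; this is an assumption for illustration only.
    """
    digest = hashlib.md5(data_identifier.encode("utf-8")).hexdigest()
    position = int(digest, 16) % len(group_identifiers)   # logical spatial position
    return group_identifiers[position]

# Data for "object-123" would be stored on a node of the group returned here:
target_group = second_group_for("object-123", ["group-a", "group-b", "group-c"])
```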
20120166571 APPARATUS AND METHOD FOR PROVIDING MOBILE SERVICE IN A MOBILE COMMUNICATION NETWORK - Apparatus, system, and method for providing a mobile service to a mobile node in a mobile communication network. In order to provide the mobile service, a request may be received from a mobile node for connecting to a mobile router. When the mobile node is authorized to access the mobile router, the authorized mobile node may be connected to a file server in the mobile router. Then, a storage service may be provided to the authorized mobile node. 06-28-2012
20130185377 PIPELINE SYSTEMS AND METHOD FOR TRANSFERRING DATA IN A NETWORK ENVIRONMENT - A communications system having a data transfer pipeline apparatus for transferring data in a sequence of N stages from an origination device to a destination device. The apparatus comprises dedicated memory having buffers dedicated for carrying data and a master control for registering and controlling processes associated with the apparatus for participation in the N stage data transfer sequence. The processes include a first stage process for initiating the data transfer and a last Nth stage process for completing data transfer. The first stage process allocates a buffer from a predetermined number of buffers available within the memory for collection, processing, and sending of the data from the origination device to a next stage process. The Nth stage process receives a buffer allocated to the first stage process from the (N−1)th stage and frees the buffer upon processing completion to permit reallocation of the buffer. 07-18-2013
20130103779METHOD AND APPARATUS FOR AUGMENTING SMARTPHONE-CENTRIC IN-CAR INFOTAINMENT SYSTEM USING VEHICLE WIFI/DSRC - A method and system for augmenting smartphone-centric in-car infotainment systems using Wi-Fi or DSRC communications between a vehicle and surrounding infrastructure. One or more smartphones or other electronic devices within a vehicle electronically communicate with the vehicle via a wireless protocol, such as Bluetooth, or a wired connection. The electronic devices run applications which submit requests for internet-based files or data, such as web pages, audio or video files. The vehicle brokers these requests and, using its own external wireless communications systems, such as Wi-Fi or DSRC, retrieves as many of the files or data as possible whenever internet access is available via an external wireless connection. The vehicle then provides the files or data to the requesting electronic devices. A token-based method for prioritizing the requests and rendering the data to the electronic devices is also disclosed.04-25-2013
20120191801Utilizing Removable Virtual Volumes for Sharing Data on Storage Area Network - The present disclosure provides data sharing through virtual removable volumes. A virtual volume of a SAN (storage area network) is presented to clients as a virtual removable volume. A controlling application controls access of clients connected to the SAN to the virtual removable volume. The controlling application allows only one client at a time to access the virtual removable volume. The controlling application allows a first client to mount the virtual removable volume as a removable volume. The controlling application then causes the first client to unmount the virtual removable volume and allows a second client to mount the virtual removable volume as a removable volume. In this way, the first client and second client are able to share data via the virtual removable volume without causing corruption of data and without requiring a shared file system or physical transfer of removable media.07-26-2012
20130060883MULTIMEDIA PLAYBACK CALIBRATION METHODS, DEVICES AND SYSTEMS - A multimedia playback calibration method includes a calibration module operating on a mobile communications device to cause it to: introduce test data at a first end, in the mobile device, of a playback path and receive data, played back by a playback device at a second end of the playback path, at a sensor integral to the mobile device; compare the received data against the test data to determine a characteristic of the playback path; and configure the mobile device to compensate for this characteristic. The mobile device may comprise a handheld casing enclosing a central processing unit, a multimedia player module for initiating playback of at least one data stream on a playback device, communication capability for forwarding the at least one data stream from the mobile device to the playback device along a playback path and the calibration module.03-07-2013
20130060882TRANSMITTING DATA INCLUDING PIECES OF DATA - A method and system for transmitting data including pieces of data. The method includes the steps of: placing a piece of data on at least one cache memory; and sending a signal indicating a presence of the piece of data on the cache memory to at least one client, where at least one of the steps is carried out by a computer device.03-07-2013
20130060881COMMUNICATION DEVICE AND METHOD FOR RECEIVING MEDIA DATA - Communication devices are provided comprising a receiver configured to receive a data stream including data for reconstructing media data at a first quality level; a memory for storing data for reconstructing the media data at a second quality level wherein the first quality level is higher than the second quality level; a determiner configured to determine whether the reception rate of the data included in the data stream fulfills a predetermined criterion; and a processing circuit configured to reconstruct the media data from the data included in the data stream if it has been determined that the reception rate of the data included in the data stream fulfills the predetermined criterion and to reconstruct the media data from the data stored in the memory if it has been determined that the reception rate of the data included in the data stream does not fulfill the predetermined criterion.03-07-2013
20120311064METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR CACHING CALL SESSION CONTROL FUNCTION (CSCF) DATA AT A DIAMETER SIGNALING ROUTER (DSR) - According to one aspect, the subject matter described herein includes a method for caching call session control function (CSCF) data at a Diameter signaling router (DSR). The method includes steps occurring at a DSR network node comprising a communication interface, a processor, and a memory. The steps include receiving, via the communication interface, a Diameter message associated with a network subscriber. The steps also include identifying, by the processor, a CSCF associated with the network subscriber based on the Diameter message. The steps further include storing, in the memory, a record associating the CSCF and the network subscriber.12-06-2012
20090271493System and Apparatus for Managing Social Networking and Loyalty Program Data - A system for managing and sharing social networking and loyalty program data that includes a quick-transfer device. The quick-transfer device can be in any number of forms that are readily portable such as a keychain, wristwatch, accessory for a mobile phone or music player or similar form. The quick-transfer device provides a set of interactive features for managing and transferring the social networking and loyalty program data through a ‘quick-touch’ or ‘quick-click’ transfer mechanism. The loyalty program and social networking data structure is managed by the quick-transfer device. The data structure is accessible to general purpose applications such as web browsers. The quick-transfer device can communicate with other quick-transfer devices as well as computers and external sensors to update and modify the contents of the stored data structure.10-29-2009
20120226765DATA RECEPTION MANAGEMENT APPARATUS, SYSTEMS, AND METHODS - Apparatus, systems, and methods to manage networks may operate to receive a packet into an element of an array contained in a memory while a low resource state exists, and to truncate the array at the element responsive to at least one of an indication that the array is full, or an indication that no more packets are available to be received after receiving at least the packet. The receiving and the truncating may be executed by a processor. Additional apparatus, systems, and methods are disclosed.09-06-2012
20130067019SELECTIVE USE OF SHARED MEMORY FOR REMOTE DESKTOP APPLICATION - A method includes determining if a server supporting an application and a client having remote desktop access to the server are on a same physical computing device. Upon determining that the server and the client are on the same physical computing device, graphics data related to the application is stored from the server to shared memory that is accessible by the server and by the client. Information to enable the client to retrieve the graphics data stored by the server in the shared memory is communicated from the server to the client.03-14-2013
20130067020METHOD AND APPARATUS FOR SERVER SIDE REMOTE DESKTOP RECORDATION AND PLAYBACK - Various methods for server-side recordation and playback of a remote desktop session are provided. One example method may comprise receiving data related to a remote desktop protocol session. The method of this example embodiment may further comprise providing for storage of the data at a location other than the device associated with the remote desktop protocol client of the remote desktop protocol session. Furthermore, the method of this example embodiment may comprise receiving a request to reproduce the remote desktop protocol session. The method of this example embodiment may also comprise retrieving the data from storage. Additionally, the method of this example embodiment may comprise facilitating reproduction of at least a portion of the remote desktop protocol session based at least in part on the retrieved data. Similar and related example methods, apparatuses, systems, and computer program products are also provided.03-14-2013
20110022677Media Fusion Remote Access System - The present invention is a system that receives data in different formats from different devices/applications in the format native to the devices/applications and fuses the data into a common shared audio/video collaborative environment including a composite display showing the data from the different sources in different areas of the display and composite audio. The common environment is presented to users who can be at remote locations. The users are allowed to supply a control input for the different device data sources and the control input is mapped back to the source, thereby controlling the source. The location of the control input on the remote display is mapped to the storage area for that portion of the display and the control data is transmitted to the corresponding device/application. The fusion system converts the data from the different sources/applications into a common format and stores the converted data from the different sources in a shared memory with each source allocated a different area in the memory. A combined window like composite representation of the data is produced and also stored in the memory. The combined representation is transmitted to and can be controlled by the users.01-27-2011
20090234933DATA FORWARDING STORAGE - Methods and apparatus, including computer program products, for data forwarding storage. A network includes a group of interconnected computer system nodes each adapted to receive data and continuously forward the data from computer memory to computer memory without storing on any physical storage device in response to a request to store data from a requesting system and retrieve data being continuously forwarded from computer memory to computer memory in response to a request to retrieve data from the requesting system.09-17-2009
20120233284METHODS AND SYSTEMS FOR CACHING DATA COMMUNICATIONS OVER COMPUTER NETWORKS - A computer-implemented method and system for caching multi-session data communications in a computer network.09-13-2012
20130166670NETWORKED STORAGE SYSTEM AND METHOD INCLUDING PRIVATE DATA NETWORK - A networked storage system includes a source mass storage device coupled to a client via a storage area network (SAN). A target mass storage device is coupled to the source mass storage device via a private data network. The source mass storage device stores source data which is provided to the client via the SAN in response to a request by the client to read the data. If the request is to copy or move the source data, however, the source mass storage device determines an identifier for the target mass storage device and directly provides, based on the identifier, the source data to the target mass storage device via the private data network. The transfer via the private data network bypasses the client and the SAN.06-27-2013
20120102139MANAGING DATA DELIVERY BASED ON DEVICE STATE - Managing power-consuming resources on a first computing device by adjusting data delivery from a plurality of second computing devices based on a state of the first computing device. The state of the first computing device is provided to the second computing devices to alter the data delivery. In some embodiments, the first computing device provides the second computing devices with actions or commands relating to data delivery based on the device state. For example, the second computing devices are instructed to store the data, forward the data, forward only high priority data, or perform other actions. Managing the data delivery from the second computing devices preserves battery life of the first computing device.04-26-2012
20080270565 Method and system for arbitrating computer access to a shared storage medium - A method of arbitrating access to a storage medium that is shared by M first computers, operating on a Windows™ operating system, and N second computers, the method comprising (1) determining if the SCSI PR-flag has been set; (2) if yes, preventing the N second computers from writing to the storage medium; and (3) setting the SCSI MC-flag for each of said M first computers after one of the second computers writes to the storage medium, to notify the M first computers that the contents of the storage medium may have changed. 10-30-2008
20110289179 DYNAMIC CONFIGURATION OF PROCESSING MODULES IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a method of updating configuration data of a network processor having one or more processing modules and a shared memory. A control processor of the network processor writes updated configuration data to the shared memory and sends a configuration update request to a configuration manager. The configuration update request corresponds to the updated configuration data. The configuration manager determines whether the configuration update request corresponds to settings of a given one of the processing modules. If the configuration update request corresponds to settings of a given one of the one or more processing modules, the configuration manager sends one or more configuration operations to a destination one of the processing modules corresponding to the configuration update request and updated configuration data. The destination processing module updates one or more register values corresponding to configuration settings of the processing module with the corresponding updated configuration data. 11-24-2011
20110289181Method and System for Detecting Changes in a Network Using Simple Network Management Protocol Polling - In an embodiment, methods and systems have been provided for detecting changes in a network using improved Simple Network Management Protocol (SNMP) polling that reduces network traffic. Examples of changes in the network include, but are not limited to, configuration and behavioral changes in a network device, and response of network device to a network change. A Network Management Station (NMS) periodically polls Management Information Base (MIB) groups instead of periodically polling individual MIB object instances. The NMS receives the Aggregate Change Identifiers (ACIs) of MIB groups in response to polling, from a SNMP agent. The changes in the received ACIs represent the changes in the MIB groups. A change in an MIB group represents changes in the MIB object instances of the MIB group. The ACIs can be checksum, timestamp, and a combination of number of MIB object instances in a group and checksum of the MIB group.11-24-2011
20110289180DATA CACHING IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory.11-24-2011
20110295968DATA PROCESSING METHOD AND COMPUTER SYSTEM - A technique for increasing the speed of data entry into a distributed processing platform is provided. According to a computer system of the present invention, when data is entered into each node in a distributed manner, the most efficient entry method (a method with the highest processing speed) is selected from among a plurality of entry methods, so that the data is entered into each node with no overlaps in accordance with the selected method.12-01-2011
20120023187Multi-Tenant Universal Storage Manager - In one aspect, a universal storage manager in a multi-tenant computing system receives at least one message requesting a change to a storage infrastructure of the multi-tenant computing system. Thereafter, the universal storage manager associates the requested change with one of a plurality of operations changing the storage infrastructure. Once this association is made, the universal storage manager initiates the associated operation to change the storage infrastructure. Related apparatus, systems, techniques and articles are also described.01-26-2012
20120066336APPARATUS AND METHOD FOR AGGREGATING DISPARATE STORAGE ON CONSUMER ELECTRONICS DEVICES - A method includes determining whether a requesting device includes sufficient available memory to store a media file. The method further includes determining whether a best fit memory block is available in a particular device of a plurality of devices in response to a determination that the requesting device includes insufficient available memory.03-15-2012
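A best-fit choice among devices, as mentioned in the abstract above, can be sketched as picking the device whose available memory block exceeds the file size by the smallest margin. The data layout and the names are illustrative assumptions.

```python
from typing import Dict, Optional

def best_fit_device(file_size: int, free_blocks: Dict[str, int]) -> Optional[str]:
    """Pick the device whose free block fits the media file with the least waste.

    free_blocks maps a device name to its largest available memory block in bytes;
    the structure and the names are illustrative assumptions.
    """
    candidates = {dev: size for dev, size in free_blocks.items() if size >= file_size}
    if not candidates:
        return None  # no device in the plurality of devices can hold the file
    return min(candidates, key=lambda dev: candidates[dev] - file_size)

# best_fit_device(700, {"tv": 512, "tablet": 2048, "set-top": 900}) returns "set-top"
```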
20100153514Non-disruptive, reliable live migration of virtual machines with network data reception directly into virtual machines' memory - Techniques are disclosed for the non-disruptive and reliable live migration of a virtual machine (VM) from a source host to a target host, where network data is placed directly into the VM's memory. When a live migration begins, a network interface card (NIC) of the source stops placing newly received packets into the VM's memory. A virtual server driver (VSP) on the source stores the packets being processed and forces a return of the memory where the packets are stored to the NIC. When the VM has been migrated to the target, and the source VSP has transferred the stored packets to the target host, the VM resumes processing the packets, and when the VM sends messages to the target NIC that the memory associated with a processed packet is free, a VSP on the target intercepts that message, blocking the target NIC from receiving it.06-17-2010
20110270946SERVICE PROVIDING APPARATUS, SERVICE PROVIDING SYSTEM, SERVICE PROVIDING METHOD, AND STORAGE MEDIUM - A service providing apparatus (11-03-2011
20110270945COMPUTER SYSTEM AND CONTROL METHOD FOR THE SAME - A computer system with a plurality of storage systems connected to each other via a network, each storage system including a virtual machine whose data is stored in hierarchized storage areas. When a virtual machine of a first storage system is migrated from the first storage system to a second storage system, the second storage system stores data of the virtual machine of the first storage system as well as data of its own virtual machine, in the hierarchized storage areas in the second storage system.11-03-2011
20110191436Method and System for Protocol Offload in Paravirtualized Systems - Certain aspects of a method and system for protocol offload in paravirtualized systems may be disclosed. Exemplary aspects of the method may include preposting of application buffers to a front-end driver rather than to a NIC in a paravirtualized system. The NIC may be enabled to place the received offloaded data packets into a received data buffer corresponding to a particular GOS. A back-end driver may be enabled to acknowledge the placed offloaded data packets. The back-end driver may be enabled to forward the received data buffer corresponding to the particular GOS to the front-end driver. The front-end driver may be enabled to copy offloaded data packets from a received data buffer corresponding to a particular guest operating system (GOS) to the preposted application buffers.08-04-2011
20100115048 DATA TRANSMISSION SCHEDULER - A method of co-ordinating the time of execution of a plurality of applications all hosted by the same communications device, each application requiring a network connection for completion of a predetermined task, the method comprising, for each task: determining one or more task completion conditions including one or more network conditions for said network connection required to complete said task; retrieving stored data indicating, for a predetermined period of time, one or more network characteristics for an available network connection; processing said task completion conditions to determine if said one or more network characteristics retrieved for said predetermined period of time match said one or more network conditions for said network connection required to complete said task; and in the event of a match between the network characteristics of a connection available for a predetermined period of time and the network conditions required for said network connection to complete said task, scheduling said task for execution in said predetermined period of time; and reducing the predetermined period of time by the duration of the network connection required to complete a scheduled task. 05-06-2010
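The matching step described above (compare a task's required network conditions against the stored characteristics of an available connection window, then schedule the task and shrink the window by the time it consumes) can be sketched as follows; the field names and the bandwidth/duration conditions are assumed simplifications.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NetworkWindow:
    start: float         # epoch seconds when the connection becomes available
    duration: float      # seconds the connection is expected to remain available
    bandwidth_kbps: int  # stored characteristic of the available connection

@dataclass
class Task:
    name: str
    needed_seconds: float     # connection time required to complete the task
    min_bandwidth_kbps: int   # network condition required to complete the task

def schedule(task: Task, windows: List[NetworkWindow]) -> Optional[NetworkWindow]:
    """Match the task's completion conditions against stored network characteristics."""
    for window in windows:
        if (window.bandwidth_kbps >= task.min_bandwidth_kbps
                and window.duration >= task.needed_seconds):
            # schedule the task in this window and reduce it by the time consumed
            window.duration -= task.needed_seconds
            return window
    return None
```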
20090150510SYSTEM AND METHOD FOR USING REMOTE MODULE ON VIOS TO MANAGE BACKUPS TO REMOTE BACKUP SERVERS - A system, method, and program product is provided that receives a backup request at a virtual input/output server (VIOS) from a client of the VIOS. The backup request corresponds to a virtual nonvolatile storage that is used by the client. The VIOS retrieves data from the nonvolatile storage device where the virtual nonvolatile storage is stored. The VIOS transmits the retrieved data to a backup server via a computer network, such as the Internet. In one embodiment, a backup software application runs on the VIOS client and a backup proxy software application runs on the VIOS.06-11-2009
20100268788Remote Asynchronous Data Mover - A distributed data processing system executes multiple tasks within a parallel job, including a first local task on a local node and at least one task executing on a remote node, with a remote memory having real address (RA) locations mapped to one or more of the source effective addresses (EA) and destination EA of a data move operation initiated by a task executing on the local node. On initiation of the data move operation, remote asynchronous data move (RADM) logic identifies that the operation moves data to/from a first EA that is memory mapped to an RA of the remote memory. The local processor/RADM logic initiates a RADM operation that moves a copy of the data directly from/to the first remote memory by completing the RADM operation using the network interface cards (NICs) of the source and destination processing nodes, determined by accessing a data center for the node IDs of remote memory.10-21-2010
20090063652Localized Media Content Delivery - Improved approaches to make data available locally at business establishments are disclosed. In one embodiment, data anticipated to be soon to be requested by patrons of a particular business establishment can be pre-loaded to a local server provided at the particular business establishment. By pre-loading data that is anticipated to be soon to be requested by patrons of the particular business establishment, local network access traffic and congestion at the retail establishment can be reduced. The improved approaches are particularly well suited for media content data that is likely to be requested by patrons at business (e.g., retail) establishments. Advantageously, patrons can get rapid download of media content data associated with one or more media items that the patrons have purchased from an online media store.03-05-2009
20120110111CACHE DEFEAT DETECTION AND CACHING OF CONTENT ADDRESSED BY IDENTIFIERS INTENDED TO DEFEAT CACHE - Systems and methods for cache defeat detection are disclosed. Moreover, systems and methods for caching of content addressed by identifiers intended to defeat cache are further disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of resource management in a wireless network by caching content on a mobile device. The method can include detecting a data request to a content source for which content received is stored as cache elements in a local cache on the mobile device, determining, from an identifier of the data request, that a cache defeating mechanism is used by the content source, and/or retrieving content from the cache elements in the local cache to respond to the data request.05-03-2012
20120110110REQUEST AND RESPONSE CHARACTERISTICS BASED ADAPTATION OF DISTRIBUTED CACHING IN A MOBILE NETWORK - Systems and methods of request and response characteristics based adaptation of distributed caching in a mobile network are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of collecting information about a request or information about the response received for the request, the request being initiated at the mobile device, using the information about the request or the response, determining cacheability of the response, caching the response by storing the response a cache entry in a cache on the mobile device in response to determining the cacheability of the response, and/or serving the response from the cache to satisfy a subsequent request. The response in the cache entry can be verified by an entity physically separate from the mobile device to determine whether the response stored in the local cache still matches a current response at a source which sent the response.05-03-2012
20120110109CACHING ADAPTED FOR MOBILE APPLICATION BEHAVIOR AND NETWORK CONDITIONS - Systems and methods for caching adapted for mobile application behavior and network conditions are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of determining cacheability of content received for a client on a mobile device by tracking requests generated by the client at the mobile device to detect periodicity of the requests generated by the client, tracking responses received for requests generated by the client to detect repeatability in content of the responses, and/or determining whether the content received for the client is cacheable on the mobile device based on one or more of the periodicity in the requests and the repeatability in the content of the responses.05-03-2012
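One plausible reading of the periodicity and repeatability checks described in the abstract above: requests are periodic if their inter-arrival times have low jitter, and responses are repeatable if their payload hashes are identical. The thresholds and the hash-based comparison are assumptions, not details taken from the application.

```python
import hashlib
from statistics import mean, pstdev
from typing import List

def is_cacheable(request_times: List[float], response_bodies: List[bytes],
                 max_jitter_ratio: float = 0.1) -> bool:
    """Judge cacheability from request periodicity and response repeatability.

    request_times:    timestamps (seconds) of the client's requests
    response_bodies:  the corresponding response payloads
    max_jitter_ratio: tolerated deviation of the intervals, an assumed tuning knob
    """
    if len(request_times) < 3 or not response_bodies:
        return False
    intervals = [b - a for a, b in zip(request_times, request_times[1:])]
    periodic = pstdev(intervals) <= max_jitter_ratio * mean(intervals)
    digests = {hashlib.sha256(body).hexdigest() for body in response_bodies}
    repeatable = len(digests) == 1
    return periodic and repeatable

# Roughly one request per minute with identical responses would be judged cacheable:
# is_cacheable([0, 60, 120, 181], [b"ok", b"ok", b"ok", b"ok"]) -> True
```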
20120110108Computer System with Cooperative Cache - A server receives information that identifies which chunks are stored in local caches at client computers and receives a request to evict a chunk from a local cache of a first one of the client computers. The server determines whether the chunk stored at the local cache of the first one of the client computers is globally oldest among the chunks stored in the local caches at the client computers, and authorizes the first one of the client computers to evict the chunk when the chunk is the globally oldest among the chunks stored in the local caches at the client computers.05-03-2012
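The server-side eviction rule described above (a client may evict a chunk only if it is globally the oldest across all client caches) can be sketched as a small bookkeeping class; the reporting interface and the field names are assumptions.

```python
class CooperativeCacheServer:
    """Server-side view of client caches: authorize eviction of the globally oldest chunk only."""

    def __init__(self):
        # chunk_id -> (client_id, last_access_time) for every chunk cached at a client
        self.chunks = {}

    def report(self, client_id: str, chunk_id: str, last_access_time: float) -> None:
        # clients tell the server which chunks they hold and how old each one is
        self.chunks[chunk_id] = (client_id, last_access_time)

    def request_eviction(self, client_id: str, chunk_id: str) -> bool:
        entry = self.chunks.get(chunk_id)
        if entry is None or entry[0] != client_id:
            return False
        oldest_chunk = min(self.chunks, key=lambda c: self.chunks[c][1])
        if oldest_chunk == chunk_id:          # globally oldest among all client caches
            del self.chunks[chunk_id]         # authorize and record the eviction
            return True
        return False                          # keep it; an older chunk exists elsewhere
```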
20100070605Dynamic Load Management of Network Memory - A system for managing network memory comprises a communication interface and a processor. The communication interface receives a status message from another appliance. The status message indicates an activity level of a faster memory and a slower memory associated with the other appliance. The communication interface also receives a data packet. The processor processes the status message to determine the activity level of the faster memory and the slower memory. The processor also processes the data packet to identify any matching data in the other appliance and estimate whether the matching data is stored in the faster memory based on the activity level. Based on the estimate, the processor determines whether to generate an instruction to retrieve the matching data.03-18-2010
20100023596File-system based data store for a workgroup server - A system and method for storing workgroup objects on a file-system based data store in a workgroup server is disclosed. The present invention implements a file-system based workgroup system in which a workgroup object is stored in one or more files. The present invention further includes a workgroup object list comprising object identifiers, each object identifier uniquely mapping to a workgroup object and each object identifier including a property of the workgroup object based on which the workgroup object list is sorted.01-28-2010
20100100604 CACHE CONFIGURATION SYSTEM, MANAGEMENT SERVER AND CACHE CONFIGURATION MANAGEMENT METHOD - A cache configuration management system capable of lightening the workload of estimating a cache capacity in a virtualization apparatus and/or of cache assignment is provided. In a storage system having application servers, storage devices, a virtualization apparatus for letting the storage devices be distinctly recognizable as virtualized storages, and a storage management server, the storage management server predicts a response time of the virtualization apparatus with respect to an application server from the cache configurations and access performances of the virtualization apparatus and storage device, and then evaluates the presence or absence of the assignment to a virtual volume of internal cache and a predictive performance value based on a to-be-assigned capacity, to thereby perform judgment of the cache capacity within the virtualization apparatus and estimation of an optimal cache capacity, thus enabling preparation of an internal cache configuration change plan. 04-22-2010
20100082764COMMUNITY CACHING NETWORKS - A system for sharing data within a network, the system including a first peer device coupled with the network that comprises local cache storage configured to store data comprising at least one entry designated as network accessible cache data and a cache control module operative to control access to the data stored in the local cache storage. The system further includes a second peer device coupled with the first peer device via the network where the second peer device is configured to request network accessible cache data stored in the local cache storage of the first peer device. Furthermore, the cache control module of the first peer device is configured to transmit at least a portion of the requested network accessible cache data to the second peer device in response to the request for network accessible data stored from the second peer device.04-01-2010
20090182836System and method for populating a cache using behavioral adaptive policies - A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies that are based on analysis of clients-filers transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications.07-16-2009
20090182835Non-disruptive storage caching using spliced cache appliances with packet inspection intelligence - A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using spliced cache appliances with packet inspection intelligence. A cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).07-16-2009
20090292789Computer System, Management Server and Configuration Information Acquisition Method - The management server includes an acquisition unit for acquiring the configuration information and performance information of the storage apparatus and the host computer respectively at different timings, and a comparison unit for comparing, when a configuration change of the storage apparatus is commanded externally, a performance value of components in the storage apparatus subject to the configuration change and a performance value of components in a connection relationship with the components. The acquisition unit determines that an unknown component has been added in the storage apparatus when the difference in the performance values compared with the comparison unit is of a certain level or greater, and reacquires configuration information from the storage apparatus.11-26-2009
20090276502Network Switch with Shared Memory - A network switch that incorporates memory that can be shared by computers or processors connected to the network switch is provided. The network switch of the present invention is particularly suitable for use in a computer cluster, such as a Beowulf cluster, in which each computer in the cluster can use the shared memory resident in at least one of the network switches.11-05-2009
20100281132MULTISTAGE ONLINE TRANSACTION SYSTEM, SERVER, MULTISTAGE ONLINE TRANSACTION PROCESSING METHOD AND PROGRAM - Provided is a system in which a plurality of nodes including a plurality of servers are connected with at least one NAS shared among the plurality of nodes. At least one of the nodes includes a shared memory from/to which each server belonging to the same node can read and write data. Each of at least two of the servers belonging to the node having the shared memory includes: a node judging device which judges whether the output destination of output data obtained by processing the input data is the server belonging to the same node as that of the server itself; a data storage memory acquiring device which secures a storage region of the output data on the shared memory if the output destination is the server belonging to the same node; and a data processor which processes the input data and stores the output data to the storage region.11-04-2010
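A minimal sketch of the routing decision described above: output data destined for a server on the same node is placed in the node's shared memory, while everything else falls back to the NAS shared by all nodes. All names, the node-numbering rule, and the data structures are illustrative assumptions.

    # Sketch of the node-judging step: same-node output goes to shared memory,
    # cross-node output goes to the shared NAS. All names are illustrative.

    shared_memory = {}   # region readable/writable by servers on this node
    nas = {}             # stand-in for the NAS shared among all nodes

    def node_of(server_id, servers_per_node=4):
        return server_id // servers_per_node

    def emit(output_key, output_data, src_server, dst_server):
        if node_of(src_server) == node_of(dst_server):
            shared_memory[output_key] = output_data   # secure a region on the shared memory
            return "shared-memory"
        nas[output_key] = output_data                 # otherwise hand off via the shared NAS
        return "nas"

    print(emit("txn-1", b"payload", src_server=1, dst_server=2))  # same node -> shared-memory
    print(emit("txn-2", b"payload", src_server=1, dst_server=6))  # other node -> nas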
20120297012UPDATING MULTIPLE COMPUTING DEVICES - A system includes a server site that includes a memory for storing update data sets that correspond to data sets stored on multiple computing devices of a user. The system also includes a synchronization manager for determining that one computing device associated with the user and another computing device associated with the user are absent one or more data updates stored in the memory at the server site. The synchronization manager is configured to send in parallel, absent establishing a data transfer lock, the one or more data updates to both computing devices of the user for updating the corresponding data stored on each computing device.11-22-2012
20120297011Intelligent Reception of Broadcasted Information Items - A method comprising: receiving a plurality of broadcasted information items in a client device; determining fondness of the information items to the user of the client device according to predefined criteria; and selecting a subset of the information items to be stored in a memory of the client device at least partly based on the determined fondness of the information items.11-22-2012
20120297009METHOD AND SYSTEM FOR CACHING IN MOBILE RAN - A non-transitory computer readable medium and a method that may include receiving, at a first level cache that is coupled to a radio access network (RAN) component, a data entity that comprises an address; wherein each cache of the hierarchical group of caches is coupled to a component of the RAN or to a component of a core network that is coupled between the RAN and the Internet; identifying the data entity as comprising a request to receive information from a requesting entity that is wirelessly coupled to the RAN—if the address belongs to a root cache address range; providing the information, by the first level cache, to the requesting entity if the information is stored in the first level cache; and sending to an intermediate level cache the data entity if the information is not stored in the first level cache.11-22-2012
20120297008CACHING PROVENANCE INFORMATION - Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device, comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server.11-22-2012
20080244031On-Demand Memory Sharing - A method for sharing memory resources in a data network is provided. The method comprises monitoring first memory space available to a first system; transferring data to a second system, in response to determining that the first memory space has fallen below a first threshold level; and transferring instructions to the second system to perform a first operation on the data.10-02-2008
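A minimal sketch of the monitor-and-offload loop outlined above: when free memory on the first system drops below a threshold, data and the operation to perform on it are handed to a second system. The threshold, the memory probe, and the transport stub are illustrative assumptions.

    # Sketch: offload data plus an instruction to a peer when local free memory
    # falls below a threshold. Threshold and transport are illustrative.

    FIRST_THRESHOLD = 256 * 1024 * 1024   # 256 MiB, illustrative

    def free_memory_bytes():
        # Stand-in for a real probe of the memory space available to the first system.
        return 128 * 1024 * 1024

    def transfer_to_second_system(data, operation):
        # Stand-in for sending (data, operation) to the second system over the network.
        print(f"offloading {len(data)} bytes; peer will run {operation!r}")

    def monitor_once(pending_data):
        if free_memory_bytes() < FIRST_THRESHOLD:
            transfer_to_second_system(pending_data, operation="compress")
            return True
        return False

    monitor_once(bytearray(1024))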
20080250116METHOD AND APPARATUS FOR REDUCING POOL STARVATION IN A SHARED MEMORY SWITCH - Reducing pool starvation in a switch is disclosed. The switch includes a plurality of egress ports, and a reserved pool of buffers in a shared memory. The reserved pool of buffers is one of a number of reserved pools of buffers, and the reserved pool of buffers is reserved for one of the egress ports. A shared pool of buffers and a multicast pool of buffers are in the shared memory. The shared pool of buffers is shared by the egress ports.10-09-2008
20080288608Method and System for Correlating Transactions and Messages - A method is presented for correlating related transactions, such as a parent transaction that invokes a child transaction within a distributed data processing system, using a particular format for the correlation tokens. Each transaction is associated with a correlation token containing a hierarchical, three-layer identifier that includes a local transaction identifier and a local system identifier, which are associated with the local system of the child transaction, along with a root transaction identifier, a root system identifier, and a registry identifier. The local transaction identifier is unique within the local system, and the local system identifier is unique within a registry that contains a set of system identifiers. The registry is associated with a domain in which the local systems operate, and multiple domains exist within a transaction space of entities that use these correlation tokens. Correlation token pairs are analyzed to construct a call graph of related transactions.11-20-2008
20120297010Distributed Caching and Cache Analysis - In a distributed caching system, a Web server may receive, from a user device, a request for a Web service. The Web server may parse the request to identify a cookie included in the request and determine whether the cookie includes allocation information. The allocation information may indicate that multiple cache servers temporarily store certain data associated with the Web service. The Web server may request the certain data from the cache servers and then transmit the certain data to the user device. If one of the cache servers fails to respond to the request, the Web server may reallocate the cached data and update the cookie by overwriting the allocation information stored in the cookie.11-22-2012
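A minimal sketch of the flow just described: the Web server reads allocation information from the cookie, fetches the pieces from the listed cache servers, and rewrites the allocation when one of them fails to respond. The cookie layout, server names, and reallocation policy are assumptions, not the application's.

    # Sketch: cookie-driven lookup across several cache servers, with reallocation
    # when a server does not respond. Cookie format and failure handling are assumed.

    import json

    cache_servers = {
        "cache-a": {"user:42:cart": b"items..."},
        "cache-b": {},                       # simulate a server that lost its shard
    }

    def fetch(server, key):
        return cache_servers.get(server, {}).get(key)

    def handle_request(cookie):
        alloc = json.loads(cookie).get("allocation", {})   # key -> cache server
        data, new_alloc = {}, dict(alloc)
        for key, server in alloc.items():
            value = fetch(server, key)
            if value is None:                              # server failed or evicted the key
                server = "cache-a"                         # reallocate (illustrative policy)
                cache_servers[server][key] = b"recomputed"
                value = cache_servers[server][key]
                new_alloc[key] = server
            data[key] = value
        return data, json.dumps({"allocation": new_alloc})  # updated cookie overwrites the old

    print(handle_request(json.dumps({"allocation": {"user:42:cart": "cache-b"}})))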
20080281939DECOUPLED LOGICAL AND PHYSICAL DATA STORAGE WITHIN A DATABASE MANAGEMENT SYSTEM - The subject matter herein relates to database management systems and, more particularly, to decoupled logical and physical data storage within a database management system. Various embodiments provide systems, methods, and software that separate physical storage from logical storage of data. These embodiments include a mapping of logical storage to physical storage to allow data to be moved within the physical storage to increase database responsiveness.11-13-2008
20100138513SYSTEM AND METHOD FOR SELECTIVELY TRANSFERRING BLOCK DATA OVER A NETWORK - A system for sharing block data includes a non-removable device for storing block data (e.g. a hard drive) that is networked with a plurality of computers. Each computer can initiate discovery commands and read/write commands, and transmit these commands over the network to the non-removable storage device. Computer commands are intercepted and processed by a logical algorithm program at the storage device. One function of the logical algorithm program is to instruct each computer to treat the non-removable block storage device as a removable block device. Because the computers treat the storage device as a removable block device, they relinquish control of the device (after use) to other computers on the network. The logical algorithm program also functions to allocate temporary ownership of the block storage device to one of the computers on the network and passes temporary ownership from computer to computer on the network.06-03-2010
20120084384DISTRIBUTED CACHE FOR STATE TRANSFER OPERATIONS - A network arrangement that employs a cache having copies distributed among a plurality of different locations. The cache stores state information for a session with any of the server devices so that it is accessible to at least one other server device. Using this arrangement, when a client device switches from a connection with a first server device to a connection with a second server device, the second server device can retrieve state information from the cache corresponding to the session between the client device and the first server device. The second server device can then use the retrieved state information to accept a session with the client device.04-05-2012
20090327446Software Application Striping - A distributed computing system comprising networking infrastructure and methods of executing an application on the distributed computing system is presented. Interconnected networking nodes offering available computing resources form a network fabric. The computing resources can be allocated from the networking nodes, including available processing cores or memory elements located on the networking nodes. A software application can be stored in a system memory comprising memory elements allocated from the nodes. The software application can be disaggregated into a plurality of executable portions that are striped across the allocated processing cores by assigning each core a portion to execute. When the cores are authenticated with respect to their portions, the cores are allowed to execute the portions by accessing the system memory over the fabric. While executing the software application, the networking nodes having the allocated cores concurrently forward packets through the fabric.12-31-2009
20090327445CONTINUOUS DATA PROTECTION AND REMOTE BLOCK-LEVEL STORAGE FOR A DATA VOLUME - A system and method for writing and reading blocks of a data volume are disclosed. The method provides continuous data protection (CDP) for a data volume by backing up blocks of the data volume in real time to a local CDP log and transmitting the blocks over the Internet for storage in a remote CDP log on a server computer system in response to write requests that change the blocks of the data volume. In response to a read request for a particular block, the method attempts to read the block from the data volume. If the block is not present in the data volume, the method attempts to read the block from the local CDP log. If the block is not present in the local CDP log, the method requests the server computer system to read the block from the remote CDP log and return the block.12-31-2009
20120079056Network Cache Architecture - There is described a method and apparatus for sending data through one or more packet data networks. A stripped-down packet is sent from a packet sending node towards a cache node, the stripped down packet including in its payload a pointer to a payload data segment stored in a file at the cache node. When the stripped-down packet is received at the cache node, the pointer is used to identify the payload data segment from data stored at the cache node. The payload data segment is inserted into the stripped-down packet in place of the pointer so as to generate a full size packet, which is sent from the cache node towards a client.03-29-2012
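A minimal sketch of the substitution performed at the cache node above, assuming a trivial (file name, offset, length) pointer format: the pointer in the stripped-down packet's payload is replaced by the referenced data segment to rebuild a full-size packet. The packet framing is an illustrative assumption.

    # Sketch: expand a stripped-down packet at the cache node by replacing the
    # payload pointer with the data segment it references. Pointer format is assumed.

    cache_files = {"movie.bin": bytes(range(256))}   # data already stored at the cache node

    def make_stripped_packet(header, filename, offset, length):
        pointer = f"{filename}:{offset}:{length}".encode()
        return header + b"|" + pointer               # pointer travels instead of payload

    def expand_at_cache_node(stripped_packet):
        header, pointer = stripped_packet.split(b"|", 1)
        filename, offset, length = pointer.decode().split(":")
        segment = cache_files[filename][int(offset):int(offset) + int(length)]
        return header + b"|" + segment               # full-size packet sent on to the client

    pkt = make_stripped_packet(b"HDR", "movie.bin", 16, 8)
    print(expand_at_cache_node(pkt))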
20120079055REVERSE DNS LOOKUP WITH MODIFIED REVERSE MAPPINGS - In accordance with the invention, embodiments of a DNS server, a DNS proxy process, and an intermediate server (IMS) are described. The DNS server, DNS proxy process, and intermediate server (IMS) described herein utilize a source IP address for a client device, in combination with a destination IP address for a host server, in reverse mapping operations in order to accurately provide a hostname originally requested by the client device.03-29-2012
20120079054Automatic Memory Management for a Home Transcoding Device - A content moving device which enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and assigns a priority level to the content. The content moving device transcodes the candidate content in order of highest priority first and lowest priority last. The content moving device may also use the priority level to manage deletion of the transcoded content from the storage on the content moving device. The lowest priority level content may be deleted first as storage space is needed.03-29-2012
20120096109Hierarchical Pre-fetch Pipelining in a Hybrid Memory Server - A method, hybrid server system, and computer program product, prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system are received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request.04-19-2012
20110231510PROCESSING DATA FLOWS WITH A DATA FLOW PROCESSOR - An apparatus and method to distribute applications and services in and throughout a network and to secure the network includes the functionality of a switch with the ability to apply applications and services to received data according to respective subscriber profiles. Front-end processors, or Network Processor Modules (NPMs), receive and recognize data flows from subscribers, extract profile information for the respective subscribers, utilize flow scheduling techniques to forward the data to applications processors, or Flow Processor Modules (FPMs). The FPMs utilize resident applications to process data received from the NPMs. A Control Processor Module (CPM) facilitates applications processing and maintains connections to the NPMs, FPMs, local and remote storage devices, and a Management Server (MS) module that can monitor the health and maintenance of the various modules.09-22-2011
20090198790METHOD AND SYSTEM FOR AN EFFICIENT DISTRIBUTED CACHE WITH A SHARED CACHE REPOSITORY - Network cache systems are used to improve network performance and reduce network traffic. An improved network cache system that uses a centralized shared cache system is disclosed. Each cache device that shares the centralized shared cache system maintains its own catalog, database or metadata index of the content stored on the centralized shared cache system. When one of the cache devices that shares the centralized shared cache system stores a new content resource to the centralized shared cache system, that cache device transmits a broadcast message to all of the peer cache devices. The other cache devices that receive the broadcast message will then update their own local catalog, database or metadata index of the centralized share cache system with the information about the new content resource.08-06-2009
20090198789VIDEOGAME LOCALIZATION USING LANGUAGE PACKS - A code library, or “language interface pack” library, is provided that can be integrated into a video game to detect new localizations of the video game dynamically, and to locate and load the most appropriate localized resources depending on user preferences and available localized game content. If no localized content is available in the preferred language, a fallback system ensures that the game always receives the location of existing game content in another language.08-06-2009
20110145357STORAGE REPLICATION SYSTEMS AND METHODS - Systems and methods for information storage replication are presented. In one embodiment a storage flow control method includes estimating in a primary data server what an outstanding request backlog trend is for a remote secondary data server; determining a relationship of an outstanding request backlog trend to a threshold; and notifying a client that the primary data server cannot service additional requests if the trend exceeds the threshold. In one embodiment the estimating comprises: sampling a number of outstanding messages at a plurality of fixed time intervals; and determining if there is a trend in the number of outstanding messages over the plurality of fixed time intervals. It is appreciated the estimating can be performed in a variety of ways (e.g., utilizing an average, a moving average, etc.). Determining the trend can include determining if values monotonically increase. The estimating in the primary server can be performed without intruding on operations of the remote secondary data server. The primary data server and the secondary data server can have a variety of configurations (e.g., a mirrored configuration, a RAID5 configuration, etc.).06-16-2011
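A minimal sketch of the estimation step described above: the primary samples its own count of outstanding requests at fixed intervals and flags back-pressure when the samples increase monotonically (one of the trend tests the abstract mentions) and the latest value exceeds a threshold. The window size, sampling values, and threshold are illustrative.

    # Sketch: backlog-trend estimation on the primary, with no intrusion on the
    # secondary. Window size and threshold are illustrative.

    from collections import deque

    class BacklogEstimator:
        def __init__(self, window=5, threshold=100):
            self.samples = deque(maxlen=window)   # outstanding-request counts per interval
            self.threshold = threshold

        def sample(self, outstanding_now):
            self.samples.append(outstanding_now)

        def should_throttle(self):
            s = list(self.samples)
            if len(s) < self.samples.maxlen:
                return False
            monotonically_increasing = all(a < b for a, b in zip(s, s[1:]))
            return monotonically_increasing and s[-1] > self.threshold

    est = BacklogEstimator()
    for count in (40, 70, 90, 110, 130):          # one sample per fixed time interval
        est.sample(count)
    print(est.should_throttle())                  # True -> notify clients: cannot service more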
20080307065Method for starting up file sharing system and file sharing device - The file sharing system of the present invention is capable of starting up a file sharing device and preventing the connection of an external storage medium to an erroneous host using information that is saved in the external storage medium. In cases where the maintenance exchange work for a NAS device is performed, the collection section collects information that is required in order to start up the NAS system section. The saving section stores the collected information in the USB memory as startup information. In cases where the NAS device is returned after the maintenance exchange is complete, the USB memory is attached to the NAS device. The setting section reads the startup information that is stored in the USB memory and sets the communication control section in accordance with an instruction from the startup control section. As a result, the NAS-OS is read from the logical volume in the storage device and the NAS system section starts up.12-11-2008
20100180006Network Access Device with Shared Memory - A technique for providing network access in accordance with at least one layered network access technology comprising layer …07-15-2010
20120246259EXCHANGING STREAMING INFORMATION - Intermediate devices (…)09-27-2012
20100180005CACHE CYCLING - The present invention relates to methods, apparatus, and systems for implementing cache cycling. The system includes a gateway in communication with a satellite. The gateway includes a gateway accelerator module which further includes a proxy server. The proxy server is configured to receive the request for the new copy of the requested content and forward the request. Furthermore, the system includes a content provider in communication with the gateway. The content provider is configured to receive the content request and transmit the new copy of the requested content to the gateway. The gateway is configured to transmit the new copy of the content to the subscriber terminal via the satellite, and the subscriber terminal is further configured to replace the requested content stored in the terminal cache module with the new copy of the requested content. The content stored in the terminal cache module is updated for subsequent requests.07-15-2010
20130219007SYSTEMS AND METHODS THERETO FOR ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING TECHNIQUES - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.08-22-2013
20130219006MULTIPLE MEDIA DEVICES THROUGH A GATEWAY SERVER OR SERVICES TO ACCESS CLOUD COMPUTING SERVICE STORAGE - A system, method, and computer program product are provided for enabling client devices to transparently access cloud computing services, service storage, and related data via a gateway server that connects to an external network such as the internet or a social network. Data requests are transmitted from at least one client device to the gateway server. The gateway server determines if the data request cannot be satisfied by data stored in its memory, and responsively transmits a second data request to the external network and stores data received in response to the second data request in its memory. The gateway server then satisfies the data request using the stored data, which may include a web computing service, an application program interface, streaming data, metadata, and/or media data.08-22-2013
20100185744MANAGEMENT OF A RESERVE FOREVER DEVICE - A host reserves a device controlled by a controller that is coupled to the host. The controller starts a first timer, in response to a completion of input/output (I/O) operations on the device by the host, wherein the host continues to reserve the device after the completion of the I/O operations. The controller sends a notification to the host after an expiry of the first timer, wherein the notification requests the host to determine whether the device should continue to be reserved by the host. The controller starts a second timer, in response to receiving an acknowledgement from the host that the notification has been received by the host, wherein reservation status of the device reserved by the host is determined by the controller on or prior to an expiry of the second timer.07-22-2010
20100191823Data Processing In A Hybrid Computing Environment - Data processing in a hybrid computing environment that includes a host computer, a plurality of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, the host computer having local memory shared remotely with the accelerators, the accelerators having local memory for the plurality of accelerators shared remotely with the host computer, where data processing according to embodiments of the present invention includes performing, by the plurality of accelerators, a local reduction operation with the local shared memory for the accelerators; writing remotely, by one of the plurality of accelerators to the shared memory local to the host computer, a result of the local reduction operation; and reading, by the host computer from shared memory local to the host computer, the result of the local reduction operation.07-29-2010
20100191822Broadcasting Data In A Hybrid Computing Environment - Methods, apparatus, and products for broadcasting data in a hybrid computing environment that includes a host computer, a number of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, the host computer having local memory shared remotely with the accelerators, the accelerators having local memory for the accelerators shared remotely with the host computer, where broadcasting data according to embodiments of the present invention includes: writing, by the host computer remotely to the shared local memory for the accelerators, the data to be broadcast; reading, by each of the accelerators from the shared local memory for the accelerators, the data; and notifying the host computer, by the accelerators, that the accelerators have read the data.07-29-2010
20100161751METHOD AND SYSTEM FOR ACCESSING DATA - A method and system for distributing and accessing data over multiple storage controllers wherein data is broken down into one or more fragments over the multiple storage controllers, each storage controller owning a fragment of the data, receiving a request for data in a first storage controller from one of a plurality of hosts, responding to the host by the first storage controller with the requested data if the first storage controller contains the requested data, forwarding the request to a second storage controller from the first storage controller if the first storage controller does not contain the requested data, responding to the first storage controller from the second storage controller with the requested data, and responding to the host from the first storage controller with the requested data.06-24-2010
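A minimal sketch of the forwarding rule above, assuming a two-controller topology and an in-memory fragment map: a controller answers directly when it owns the requested fragment, and otherwise relays the request to the owning controller and passes the answer back to the host.

    # Sketch: two storage controllers, each owning a fragment of the data; the
    # first controller forwards requests it cannot serve. Topology is illustrative.

    class Controller:
        def __init__(self, name, fragments):
            self.name, self.fragments, self.peer = name, fragments, None

        def read(self, key, forwarded=False):
            if key in self.fragments:
                return self.fragments[key]            # respond directly
            if forwarded:
                raise KeyError(key)                   # neither controller owns the fragment
            return self.peer.read(key, forwarded=True)  # forward, then relay the answer

    c1 = Controller("ctrl-1", {"block-0": b"AAAA"})
    c2 = Controller("ctrl-2", {"block-1": b"BBBB"})
    c1.peer, c2.peer = c2, c1

    print(c1.read("block-0"))   # served locally
    print(c1.read("block-1"))   # forwarded to ctrl-2, then returned to the host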
20090043863SYSTEM USING VIRTUAL REPLICATED TABLES IN A CLUSTER DATABASE MANAGEMENT SYSTEM - A system for improved data sharing within a cluster of nodes having a database management system. The system defines a virtual replicated table as being useable in a hybrid of a shared-cache and shared-nothing architecture. The virtual replicated table is a physically single table sharable among a plurality of cluster nodes for data read operations and not sharable with other cluster nodes for data modification operations. A default owner node is assigned for each virtual replicated table to ensure page validity and provide requested pages to the requesting node.02-12-2009
20100250699METHOD AND APPARATUS FOR REDUCING POOL STARVATION IN A SHARED MEMORY SWITCH - A switch includes a reserved pool of buffers in a shared memory. The reserved pool of buffers is reserved for exclusive use by an egress port. The switch includes pool select logic which selects a free buffer from the reserved pool for storing data received from an ingress port to be forwarded to the egress port. The shared memory also includes a shared pool of buffers. The shared pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the shared pool upon detecting no free buffer in the reserved pool. The shared memory may also include a multicast pool of buffers. The multicast pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the multicast pool upon detecting an IP Multicast data packet received from an ingress port.09-30-2010
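A minimal sketch of the buffer-pool selection order spelled out above: an IP multicast packet draws from the multicast pool, while unicast traffic tries the egress port's reserved pool first and spills into the shared pool only when the reserved pool is empty. Pool sizes and the drop behavior when no buffer is free are illustrative assumptions.

    # Sketch: pool select logic for a shared-memory switch. Sizes are illustrative.

    class PoolSelect:
        def __init__(self, reserved_per_port=4, shared=8, multicast=4, ports=2):
            self.reserved = {p: reserved_per_port for p in range(ports)}  # per egress port
            self.shared = shared                                          # shared by all ports
            self.multicast = multicast                                    # for IP multicast

        def select_buffer(self, egress_port, is_ip_multicast):
            if is_ip_multicast:
                if self.multicast > 0:
                    self.multicast -= 1
                    return "multicast"
            elif self.reserved[egress_port] > 0:
                self.reserved[egress_port] -= 1
                return "reserved"
            elif self.shared > 0:
                self.shared -= 1
                return "shared"                       # fall back only when reserved is empty
            return None                               # no free buffer: drop or back-pressure

    sw = PoolSelect(reserved_per_port=1)
    print(sw.select_buffer(0, is_ip_multicast=False))   # reserved
    print(sw.select_buffer(0, is_ip_multicast=False))   # shared (reserved exhausted)
    print(sw.select_buffer(0, is_ip_multicast=True))    # multicast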
20100235461NETWORK DEVICE AND METHOD OF SHARING EXTERNAL STORAGE DEVICE - When an external storage device is connected to the USB connector of the router, through the process of the OS which has detected this event, it will be determined whether the device is a USB mass storage device; and if it is found to be a USB mass storage device, internal software is started up by using the Hotplug function, and it is further determined whether the file system is recognizable; and if the file system is recognizable, CIFS is configured to allow sharing and enable GUEST access. As a result, no laborious operation is needed to share a memory device such as a hard disk among users on a network.09-16-2010
20110238775Virtualized Data Storage Applications and Optimizations - Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage, but actually store data at the data center. Virtual storage arrays may prioritize storage client and prefetching requests for communication over the WAN and/or SAN based on their associated clients, servers, storage clients, and/or applications. A virtual storage array may transfer large data sets from a data center to a branch location while providing branch location users with immediate access to the data set stored at the data center. Virtual storage arrays may be migrated by disabling a virtual storage array interface at a first branch location and then configuring another branch virtual storage array interface at a second branch location to provide its storage clients with access to storage array data stored at the data center.09-29-2011
20110131289METHOD AND APPARATUS FOR SWITCHING COMMUNICATION CHANNEL IN SHARED MEMORY COMMUNICATION ENVIRONMENT - A method for switching a communication channel in a shared memory communication environment which sets up a TCP/IP (Transmission Control Protocol/Internet Protocol) communication channel and a shared memory communication channel from a first virtual machine to a second virtual machine, the method includes: transmitting a channel switching message to the first virtual machine when the first virtual machine moves to another physical machine; transmitting the channel switching message from the first virtual machine to the second virtual machine; and switching a channel state between the first virtual machine and the second virtual machine.06-02-2011
20090150511NETWORK WITH DISTRIBUTED SHARED MEMORY - A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The network also includes a plurality of local cache managers, each of which are associated with a different portion of the clustered memory cache, and a metadata service operatively coupled with the local cache managers. Also, a plurality of clients are operatively coupled with the metadata service and the local cache managers. In response to a request issuing from any of the clients for a data item present in the clustered memory cache, the metadata service is configured to respond with identification of the local cache manager associated with the portion of the clustered memory cache containing such data item.06-11-2009
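A minimal sketch of the lookup path above: a client asks the metadata service which local cache manager is associated with the portion of the clustered cache holding a data item, then fetches the item from that manager. The data structures and names are illustrative assumptions, not the application's design.

    # Sketch: metadata service maps data items to the local cache manager whose
    # portion of the clustered memory cache holds them. Structures are illustrative.

    class LocalCacheManager:
        def __init__(self, name):
            self.name, self.portion = name, {}        # physical memory on one machine

    class MetadataService:
        def __init__(self, managers):
            self.managers, self.placement = managers, {}   # item key -> manager name

        def locate(self, key):
            return self.managers[self.placement[key]]      # identify the responsible manager

    managers = {"lcm-a": LocalCacheManager("lcm-a"), "lcm-b": LocalCacheManager("lcm-b")}
    meta = MetadataService(managers)
    managers["lcm-b"].portion["page:17"] = b"cached page"
    meta.placement["page:17"] = "lcm-b"

    # Client side: ask the metadata service, then read from the named manager.
    mgr = meta.locate("page:17")
    print(mgr.name, mgr.portion["page:17"])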
20090144388NETWORK WITH DISTRIBUTED SHARED MEMORY - A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The clustered memory cache is accessible by a plurality of clients on the computer network and is configured to perform page caching of data items accessed by the clients. The network also includes a policy engine operatively coupled with the clustered memory cache, where the policy engine is configured to control where data items are cached in the clustered memory cache.06-04-2009
20120143979PROTOCOL STACK USING SHARED MEMORY - There are disclosed processes and systems relating to optimized network traffic generation and reception. Application programs and a protocol stack may share a memory space. The protocol stack may designate available bandwidth for use by an application program. The application programs may store descriptors from which the protocol stack may form payload data for data units.06-07-2012
20100306337SYSTEMS AND METHODS FOR CLONING TARGET MACHINES IN A SOFTWARE PROVISIONING ENVIRONMENT - A provisioning server can provide and interact with a cloner agent on target machines. The cloner agent can execute on a source target machine and copy the contents of storage on the source target machine to a storage location of the provisioning server. Once copied, the provisioning server can provide the cloner agent to destination target machines. The cloner agent can copy the contents of the source target machine, stored at the provisioning server, to the destination target machines.12-02-2010
20090037555Storage system that transfers system information elements - A first storage system that has a first storage device comprises a first interface device that is connected to a second interface device that a second storage system has. A first controller of the first storage system reads system information elements of first system information (information relating to the constitution and control of the first storage system) from a first system area (a storage area that is not provided for the host of the first storage device) and transfers the system information elements or modified system information elements to the second storage system via the first interface device. The system information elements are recorded in a second system area in a second storage device that the second storage system has.02-05-2009
20090037554MIGRATING WORKLOADS USING NETWORKED ATTACHED MEMORY - A network system comprising a plurality of servers communicatively-coupled on a network, a network-attached memory coupled between a first server and a second server of the server plurality, and a memory management logic that executes on selected servers of the server plurality and migrates a virtual machine from the first server to the second server with memory for the virtual machine residing on the network-attached memory.02-05-2009
20100306338WRITING OPERATING DATA INTO A PORTABLE DATA CARRIER - In a method for writing (S…)12-02-2010
20130138761STREAMING AND BULK DATA TRANSFER TRANSFORMATION WITH CONTEXT SWITCHING - In described embodiments, processing of a data stream, such as a packet stream or flow, associated with data streaming is improved by context switching that employs context history. For each data stream that is transformed through processing, a context is maintained that comprises a history and state information that enables the transformation for the data stream. Processing for the data transformation examines currently arriving data and then processes the data based on the context data and previously known context information for the data stream from the history stored in memory.05-30-2013
20130138762FACILITATING COMMUNICATION BETWEEN ISOLATED MEMORY SPACES OF A COMMUNICATIONS ENVIRONMENT - Automatically converting a synchronous data transfer to an asynchronous data transfer. Data to be transferred from a sender to a receiver is initiated using a synchronous data transfer protocol. Responsive to a determination that the data is to be sent asynchronously, the data transfer is automatically converted from the synchronous data transfer to the asynchronous data transfer.05-30-2013
20130144967Scalable Queuing System - A method, an apparatus and an article of manufacture for providing queuing semantics in a distributed queuing service while maintaining service scalability. The method includes supporting at least one of an en-queue and a de-queue operation of one or more queued messages in a non-guaranteed order, maintaining the ordering of the one or more queued messages, and routing an en-queue operation to a persistent queue server and a de-queue operation to a cache manager in the maintained ordering of the one or more queued messages to provide queuing semantics in a distributed queuing service while maintaining service scalability.06-06-2013
20110010428PEER-TO-PEER STREAMING AND API SERVICES FOR PLURAL APPLICATIONS - Embodiments of apparatuses with a universal P2P service platform are disclosed herein. A unified infrastructure is built in such apparatuses and a unified P2P network may be established with such apparatuses. In various embodiments, such an apparatus comprises a P2P operating system (OS) virtual machine (VM) …01-13-2011
20110113115STORAGE SYSTEM WITH A MEMORY BLADE THAT GENERATES A COMPUTATIONAL RESULT FOR A STORAGE DEVICE - One embodiment is a storage system having one or more compute blades to generate and use data and one or more memory blades to generate a computational result. The computational result is generated by a computational function that transforms the data generated and used by the one or more compute blades. One or more storage devices are in communication with and remotely located from the one or more compute blades. The one or more storage devices store and serve the data for the one or more compute blades.05-12-2011
20110040850MESH-MANAGING DATA ACROSS A DISTRIBUTED SET OF DEVICES - Data files, applications and/or corresponding user interfaces may be accessed at a device that collaborates in a mesh. The mesh may include any number or type of devices that collaborate in a network. Data, applications and/or corresponding user interfaces may be stored within a core object that may be shared over the mesh. Information in the core object may be identified with a corresponding user such that a user may use any collaborating device in the mesh to access the information. In one example, the information is stored remotely from a device used to access the information. A remote source may store the desired information or may determine the storage location of the desired information in the mesh and may further provide the desired information to a corresponding user.02-17-2011
20110040848PUSH PULL CACHING FOR SOCIAL NETWORK INFORMATION - Embodiments are directed towards modifying a distribution of writers as either a push writer or a pull writer based on a cost model that decides for a given content reader whether it is more effective for the writer to be a pull writer or a push writer. A cache is maintained for each content reader for caching content items pushed by a push writer in the content writer's push list of writers when the content is generated. At query time, content items are pulled by the content reader based on writers in the content reader's pull list. One embodiment of the cost model employs data about a previous number of requests for content items for a given writer for a number of previous blended display results of content items. When a writer is determined to be popular, mechanisms are proposed for pushing content items to a plurality of content readers.02-17-2011
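A minimal sketch of the kind of decision such a cost model makes; the cost formula and the unit costs are invented for illustration and are not the application's model. A writer is pushed into a reader's cache when the reader has read that writer's items often enough in recent queries that pushing at generation time is cheaper than pulling at query time.

    # Sketch: classify each writer as push or pull for a given reader by comparing
    # estimated costs. The formula and unit costs are illustrative assumptions.

    PUSH_COST_PER_ITEM = 1.0   # cost to write one generated item into the reader's cache
    PULL_COST_PER_READ = 0.2   # cost to fetch this writer's items during one reader query

    def classify_writer(items_generated, queries_reading_this_writer):
        push_cost = items_generated * PUSH_COST_PER_ITEM
        pull_cost = queries_reading_this_writer * PULL_COST_PER_READ
        return "push" if push_cost < pull_cost else "pull"

    # A frequently read, low-volume writer becomes a push writer; a prolific but
    # rarely read writer stays a pull writer.
    print(classify_writer(items_generated=3, queries_reading_this_writer=40))   # push
    print(classify_writer(items_generated=30, queries_reading_this_writer=1))   # pull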
20120173653VIRTUAL MACHINE MIGRATION IN FABRIC ATTACHED MEMORY - A computer program product and computer implemented method are provided for migrating a virtual machine between servers. The virtual machine is initially operated on a first server, wherein the first server accesses the virtual machine image over a network at a memory location within fabric attached memory. The virtual machine is migrated from the first server to a second server by flushing data to the virtual machine image from cache memory associated with the virtual machine on the first server and providing the state and memory location of the virtual machine to the second server. The virtual machine may then operate on the second server, wherein the second server accesses the virtual machine image over the network at the same memory location within the fabric attached memory without copying the virtual machine image.07-05-2012
20100064023HOST DISCOVERY IN MULTI-BLADE SERVER CHASSIS - A method for discovering hosts on a multi-blade server chassis is provided. A switch, operational in the multi-blade server, is queried for first world-wide name (WWN) information of the hosts. The first WWN information is known to the switch. The first WWN information is saved on a redundant array of independent disks (RAID) subsystem of the multi-blade server chassis. A system location for each of the hosts is mapped to the RAID subsystem.03-11-2010
20110179133CONNECTION MANAGER CAPABLE OF SUPPORTING BOTH DISTRIBUTED COMPUTING SESSIONS AND NON DISTRIBUTED COMPUTING SESSIONS - A method is described that involves establishing a connection over a shared memory between a connection manager and a worker node. The shared memory is accessible to multiple worker nodes. Then sending, from the connection manager to the worker node over the connection, a first request containing a method call to a remote object on the worker node. Also sending, from the connection manager to the worker node over the connection, a second request containing a second method call to a second remote object on the worker node.07-21-2011
20110082908DYNAMIC CACHING OF NODES - A replication count of a data element of a node of a cache cluster is defined. The data element has a key-value pair where the node is selected based on a hash of the key and a size of the cache cluster. The data element is replicated to at least one other node of the cache cluster based on the replication count.04-07-2011
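A minimal sketch of the placement rule stated above: the node for a key-value data element is chosen from a hash of the key and the size of the cache cluster, and the element is then copied to further nodes according to its replication count. The particular hash function and the wrap-around replica placement are illustrative choices.

    # Sketch: place a key/value element on hash(key) % cluster_size and replicate it
    # to the following nodes per its replication count. Hash choice is illustrative.

    import hashlib

    def node_for(key, cluster_size):
        digest = hashlib.sha1(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % cluster_size

    def placements(key, cluster_size, replication_count):
        primary = node_for(key, cluster_size)
        # Replicas go to the next nodes, wrapping around the cluster.
        return [(primary + i) % cluster_size for i in range(1 + replication_count)]

    print(placements("session:abc", cluster_size=5, replication_count=2))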
20120303737System and Method for Storing Data in Clusters Located Remotely From Each Other - A system for storing data includes a plurality of clusters located remotely from each other in which the data is stored. Each cluster has a token server that controls access to the data with only one token server responsible for any piece of data. Each cluster has a plurality of Cache appliances. Each cluster has at least one backend file server in which the data is stored. The system includes a communication network through which the servers and appliances communicate with each other. A Cache Appliance cluster in which data is stored in back-end servers within each of a plurality of clusters located remotely from each other. A method for storing data.11-29-2012
20110060806USING IN-THE-CLOUD STORAGE FOR COMPUTER HEALTH DATA - A policy enforcement point (PEP) controls access to a network in accordance with one or more policy statements that specify conditions for compliant devices. The PEP receives current health data from a device seeking to access the network, and stores this health data in local volatile memory. If the health data stored in local volatile memory complies with the policy statements, the device is permitted to access the network. Otherwise, the device is denied access to the network, or permitted only limited access to the network in order to resolve its compliance issues. The PEP occasionally stores the health data in local persistent memory and on an online service (OLS). During reboot, the PEP accesses the OLS to confirm that it has the most recent health data. If more recent health data is available from the OLS, the OLS provides this more recent data to the PEP.03-10-2011
20110040849RELAY DEVICE, MAC ADDRESS SEARCH METHOD - A relay device includes: memories, each memory being operable to store at least a data pair formed of a MAC address and a port number; a search unit to search only amongst ones of the memories having valid data pairs when searching for a port number based upon a MAC address; a data moving unit to move valid data pairs to different locations within the plurality of memories in order to reduce a total number of memories, amongst the plurality thereof, having valid data pairs; and a power supply controller to selectively stop supplying power to ones of the memories storing only invalid data.02-17-2011
20110258283MESSAGE COMMUNICATION TECHNIQUES - A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.10-20-2011
20100281133STORING LOSSY HASHES OF FILE NAMES AND PARENT HANDLES RATHER THAN FULL NAMES USING A COMPACT TABLE FOR NETWORK-ATTACHED-STORAGE (NAS) - Multiple Network Attached Storage (NAS) appliances are pooled together by a virtual NAS translator, forming one common name space visible to clients. Clients send messages to the virtual NAS translator with a file name and a virtual handle of the parent directory that are concatenated to a full file-path name and compressed by a cryptographic hash function to generate a hashed-name key. The hashed-name key is matched to a storage key in a table. The full file-path name is not stored, reducing the table size. A unique entry number is returned to the client as the virtual file handle that is also stored in another table with one or more native file handles, allowing virtual handles to be translated to native handles that the NAS appliance servers use to retrieve files. File movement among NAS servers alters native file handles but not virtual handles, hiding NAS details from clients.11-04-2010
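A minimal sketch of the key construction above, with SHA-256 standing in for whatever cryptographic hash the appliance would actually use: the parent directory's virtual handle and the file name are concatenated into a full path, hashed into a fixed-size key, and only that key plus a unique entry number are stored, never the full path. The table layout and names are assumptions.

    # Sketch: hashed-name keys for a virtual NAS translator. SHA-256 and the table
    # layout are illustrative stand-ins for the appliance's actual choices.

    import hashlib
    from itertools import count

    name_table = {}          # hashed-name key -> virtual handle (entry number)
    handle_table = {}        # virtual handle -> native file handles on NAS servers
    next_entry = count(1)

    def hashed_name_key(parent_virtual_handle, file_name):
        full_path = f"{parent_virtual_handle}/{file_name}".encode()
        return hashlib.sha256(full_path).digest()    # lossy: the full path is not stored

    def lookup_or_create(parent_virtual_handle, file_name, native_handles):
        key = hashed_name_key(parent_virtual_handle, file_name)
        if key not in name_table:
            handle = next(next_entry)                 # unique entry number = virtual handle
            name_table[key] = handle
            handle_table[handle] = native_handles     # where the NAS servers really keep it
        return name_table[key]

    vh = lookup_or_create(parent_virtual_handle=7, file_name="report.pdf",
                          native_handles=["nas2:0xBEEF"])
    print(vh, handle_table[vh])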
20100281131User Interface Between a Flexray Communications Module and a Flexray User, and Method for Transmitting Messages Over Such an Interface - A user interface between a FlexRay communications module, which is connected to a FlexRay communications connection via which messages are transmitted, and which includes a message memory for the temporary storage of messages from the FlexRay communications connection or for the FlexRay communications connection, and a FlexRay user assigned to the FlexRay communications module. In order to make possible a particularly resource-saving and resource-conserving connection of the user to the FlexRay communications module, it is provided that the user interface has a device for the temporary storage of the messages. The device includes at least one message memory which has a first connection to the FlexRay communications module and a second connection to the user. The message memory may be implemented as a dual-ported RAM.11-04-2010
20130159452Memory Server Architecture - A memory server system is provided herein. It includes a first plurality of Field Programmable Gate Arrays (FPGA) application server nodes that are configured to parse the location of the FPGA data server nodes; a second plurality of FPGA data server nodes that are configured as memory controllers, each of the second plurality of FPGA data server nodes being connected to a plurality of RAM memory banks; and a network connection between the first plurality of FPGAs and the second plurality of FPGA processing nodes.06-20-2013
20130159451SEMANTIC CACHE CLOUD SERVICES FOR CONNECTED DEVICES - Technologies are described for semantic cache for connected devices (semantic cache) as a set of next generation cloud services to primarily support the Internet of things scenario: a massive network of devices and device application services inter-communicating, facilitated by cloud-based semantic cache services. The semantic cache may be an instrumented caching reverse proxy with auto-detection of semantic web traffic, public, shadow and private namespace management and control, and real time semantic object temporal versioning, geospatial versioning, semantic contextual versioning and groupings and semantic object transformations.06-20-2013
20090077194Data input terminal, method, and computer readable storage medium storing program thereof - The input unit stores data input by the user in the data storage. The status determiner determines reception status of the screen data to be one of three statuses of “abnormal”, “normal”, and “recovery” from “abnormal” to “normal” on the basis of frame losses. In a case of the “abnormal” status, a transmission controller does not read the input data stored in the data storage. In a case of the “normal” status, the transmission controller reads the input data stored in the data storage, transmits the input data via the transmitter, and deletes the input data stored in the data storage. In a case of the “recovery” status, the transmission confirmer instructs the output unit to output the input data stored in the data storage to ask the user whether to transmit the input data to the server.03-19-2009
20090063653Grid computing space - A method and apparatus for using a tree-structured cluster as a library for a computing grid. In one embodiment, a request for computation is received at a cache node of the cluster. The computation requires data from another cache node of the cluster that is not present in the cache node receiving the request. The other cache nodes of the cluster are polled for the required data. An instance of the required data stored in another cache node of the cluster is replicated to the cache node receiving the computation request.03-05-2009
20100318625METHOD AND APPARATUS FOR STORAGE-SERVICE-PROVIDER-AWARE STORAGE SYSTEM - A storage system includes a virtual volume configured on a storage controller and mapping to a physical storage capacity maintained at a remote location by a storage service provider (SSP). The storage controller receives an I/O command in a block-based protocol specifying a logical block address (LBA). The storage controller correlates the LBA with a file name of a file stored by the SSP, translates the I/O command to an IP-supported protocol, and forwards the translated I/O command with the file name to the SSP for processing. In the case of a write command, the SSP stores the write data using the specified file name. In the case of a read command, the SSP enables download of data from the specified file name. In an alternative embodiment, a NAS head may replace the storage controller for correlating the LBA with a file name and translating the I/O command.12-16-2010
20120150987TRANSMISSION SYSTEM AND APPARATUS, AND METHOD - In a transmission system, a first transmitting apparatus (server node) acquires distribution data, which includes multiple update data sets and attribute information of the update data sets, from a file server via a second network. The first transmitting apparatus (server node) stores the attribute information so as to allow a second transmitting apparatus (client node) connected to the first transmitting apparatus (server node) via a first network to acquire the attribute information and determine necessity of acquisition with respect to each of the update data sets. The first transmitting apparatus (server node) also stores the update data sets to be acquired by the second transmitting apparatus (client node).06-14-2012
20110119344Apparatus And Method For Using Distributed Servers As Mainframe Class Computers - The invention consists of a switch or bank of switches that give hundreds or thousands of servers the ability to share memory efficiently. It supports improving distributed server utilization from 10% on average to 100%. The invention consists of connecting distributed servers via a cross point switch to a back-plane shared random access memory (RAM), thereby achieving a mainframe class computer. The distributed servers may be Windows PCs or Linux standalone computers. They may be clustered or virtualized. This use of cross point switches provides shared memory across servers, improving performance.05-19-2011
20120311067Data Communication Efficiency - To reduce repetitive data transfers, data content of an outgoing message is stored within cache storage of an intermediate node of a data communications network. A token for identifying the cached data content is stored at the intermediate node and the sender. When a subsequent outgoing message is to be routed from a first network node to a target destination via the intermediate node, a process running at the first node checks whether the content of the message matches data cached at the intermediate node. If there is a match, a copy of the token is sent from the first node to the intermediate node instead of the data content. The token is used at the intermediate node to identify the cached data, and the cached data is retrieved from the cache and forwarded to the target destination as an outgoing message.12-06-2012
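A minimal sketch of the exchange described above, with an invented token format (a content hash): the sender remembers a token for content it has already pushed through the intermediate node, and later sends only the token, which the intermediate node expands from its cache before forwarding.

    # Sketch: token-based suppression of repeated payloads through an intermediate
    # node. Token derivation (a content hash) is an illustrative assumption.

    import hashlib

    def token_for(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    class IntermediateNode:
        def __init__(self):
            self.cache = {}                                  # token -> cached data content

        def forward(self, message):
            if "body" in message:                            # full content: cache, then forward
                self.cache[token_for(message["body"])] = message["body"]
                return message["body"]
            return self.cache[message["token"]]              # token only: expand from the cache

    class Sender:
        def __init__(self, intermediate):
            self.intermediate, self.known_tokens = intermediate, set()

        def send(self, content):
            tok = token_for(content)
            if tok in self.known_tokens:                     # repeat: ship the token instead
                return self.intermediate.forward({"token": tok})
            self.known_tokens.add(tok)
            return self.intermediate.forward({"body": content})

    node = IntermediateNode()
    sender = Sender(node)
    print(sender.send(b"quarterly report"))   # first send: the full content travels
    print(sender.send(b"quarterly report"))   # repeat: only the token travels to the node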
20120311066System for the Delivery and Dynamic Presentation of Large Media Assets over Bandwidth Constrained Networks - Media content from a content provider is delivered, based on a predetermined set of constraints, to a local cache of a user device before the media is viewed. A client asset manager process resides in the user device, an asset list resides at the content provider site, and the media assets are located at a remote site.12-06-2012
20120311065ASYNCHRONOUS FILE OPERATIONS IN A SCALABLE MULTI-NODE FILE SYSTEM CACHE FOR A REMOTE CLUSTER FILE SYSTEM - Asynchronous file operations in a scalable multi-node file system cache for a remote cluster file system are provided. One implementation involves maintaining a scalable multi-node file system cache in a local cluster file system, and caching local file data in the cache by fetching file data on demand from the remote cluster file system into the cache over the network. The local file data corresponds to file data in the remote cluster file system. Local file information is asynchronously committed from the cache to the remote cluster file system over the network.12-06-2012
20110010427Quality of Service in Virtual Computing Environments - Methods and apparatus facilitate the management of input/output (I/O) subsystems in virtual I/O servers to provide appropriate quality of service (QoS). A hierarchical QoS scheme based on partitioning of network interfaces and I/O subsystem transaction types is used to classify Virtual I/O communications. This multi-tier QoS method allows virtual I/O servers to be scalable and provide appropriate QoS granularity.01-13-2011
20110320558Network with Distributed Shared Memory - A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The network also includes a plurality of local cache managers, each of which are associated with a different portion of the clustered memory cache, and a metadata service operatively coupled with the local cache managers. Also, a plurality of clients are operatively coupled with the metadata service and the local cache managers. In response to a request issuing from any of the clients for a data item present in the clustered memory cache, the metadata service is configured to respond with identification of the local cache manager associated with the portion of the clustered memory cache containing such data item.12-29-2011
20120047221METHODS AND APPARATUS FACILITATING ACCESS TO STORAGE AMONG MULTIPLE COMPUTERS - Multiple computers in a cluster maintain respective sets of identifiers of neighbor computers in the cluster for each of multiple named resources. A combination of the respective sets of identifiers defines a respective tree formed by the respective sets of identifiers for a respective named resource in the set of named resources. Upon origination and detection of a request at a given computer in the cluster, a given computer forwards the request from the given computer over a network to successive computers in the hierarchical tree leading to the computers relevant to handling the request based on use of identifiers of neighbor computers. Thus, a combination of identifiers of neighbor computers identifies potential paths to related computers in the tree.02-23-2012
20120047220DATA TRANSFER DEVICE AND DATA TRANSFER SYSTEM - According to one embodiment, a data transfer device is provided. The data transfer device is configured to transfer data between a plurality of data transceivers and at least one memory having a first memory area. When one of the data transceivers has acquired an exclusive access right to the first memory area of the memory, the data transfer device stores address information corresponding to the first memory area.02-23-2012
20120005301Sharing an image - Method, server, network and computer program product for sharing an image between a first terminal and a second terminal. An original version of the image is received at the server from the first terminal. Tiles are then received at the server from the first terminal, each tile representing at least a section of the image and including a change made to the image at the first terminal. An image state is maintained at the server identifying which tiles are required for forming a latest version of the image. On determining that the latest version of the image is to be formed at the second terminal, tiles based on the image state for forming the latest version of the image are transmitted from the server to the second terminal.01-05-2012
20120124159CONTENT DELIVERY SYSTEM, CONTENT DELIVERY METHOD AND CONTENT DELIVERY PROGRAM - In order to stably deliver content data over a network, a content delivery system is provided with: a content retention module for storing content data consisting of hierarchically encoded hierarchical data; a cache retention module for caching content data; a hierarchical score determination module for calculating an access requirement frequency for each piece of cached hierarchical data; a hierarchical arrangement determination module for replacing hierarchical data having an access requirement frequency lower than a fixed value with the hierarchical data stored in the content retention module; and a content delivery module for delivering content data in response to requests from a client device.05-17-2012
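A minimal sketch of the cache-maintenance step described above: each cached layer of hierarchically encoded content tracks how often it is required, and layers whose access-requirement frequency drops below a fixed value are swapped out for layers from the content retention module. The threshold, layer names, and replacement choice are illustrative assumptions.

    # Sketch: evict cached hierarchical layers whose access-requirement frequency
    # falls below a fixed value. The threshold and layer names are illustrative.

    FIXED_VALUE = 5   # minimum access-requirement frequency to stay cached (illustrative)

    content_retention = {("video1", "enhancement-2"): b"stored layer"}      # full content store
    cache = {("video1", "base"): b"hot layer", ("video1", "enhancement-1"): b"cold layer"}
    access_frequency = {("video1", "base"): 40, ("video1", "enhancement-1"): 2}

    def rebalance():
        for layer, freq in list(access_frequency.items()):
            if layer in cache and freq < FIXED_VALUE:
                del cache[layer]                             # drop the rarely required layer
                replacement = next(iter(content_retention))  # bring in a layer from retention
                cache[replacement] = content_retention[replacement]

    rebalance()
    print(sorted(cache))   # enhancement-1 replaced by a layer from the retention module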
20120124158FILE TRANSFER PROTOCOL FOR MOBILE COMPUTER - A method is disclosed for communicating using a device having a Palm OS. SMB is preferentially used to communicate with a node, and if use of SMB is not possible, FTP is used, and if use of FTP is not possible, Bluetooth is used. If FTP or Bluetooth is selected as the protocol, file sharing between the device and node that entails a read or write is executed by temporarily copying a file to an internal Palm OS memory of the device, performing the read or write on the file, and then copying the file back to the node to overwrite a previous version of the file at the node. For non-Palm OS file transfer to the internal memory, the file is wrapped in a Palm OS stream in the internal memory for executing reads or writes. For file transfer to an expansion Palm OS memory card, byte-to-byte copying of the file is executed using the FAT of the expansion memory, with the file being transferred through an internal Palm OS memory of the device.05-17-2012
20120131128SYSTEM AND METHOD FOR GENERATING A CONSISTENT USER NAME-SPACE ON NETWORKED DEVICES - Implementing a consistent user name-space on networked computing devices includes various components and methods. When a network connection between a local or host computing device and one or more remote computing devices is present, remote items are represented using the same methodology as items located on the host computing device. To the user, remote and local items are indistinguishable. When the network connection is lost or items located on a remote computer are otherwise unavailable, the unavailable items remain represented on the host computing device. Unavailable items are represented in a way that informs the user that the items may not be fully accessed.05-24-2012
20100250698AUTOMATED TAPE DRIVE SHARING IN A HETEROGENEOUS SERVER AND APPLICATION ENVIRONMENT - A method and system for automatically sharing a tape drive in a heterogeneous computing environment that includes a first computer and second computer. The first computer receives a message that includes a shared tape drive identifier, a source port identifier of the second computer, and a reservation status change for the tape drive. Based on the tape drive identifier, the first computer determines that the tape drive is connected to the first computer. The source port identifier is determined to not identify any host bus adapter installed in the first computer. In response to the first computer determining that the reservation status change indicates a reservation or a release of the tape drive for the second computer, the first computer sets the tape drive offline or online, respectively, in an application executing in the first computer.09-30-2010
20120016950METHOD AND APPARATUS FOR DYNAMICALLY MANAGING BANDWIDTH FOR CLIENTS IN A STORAGE AREA NETWORK - A method for managing bandwidth allocation in a storage area network includes receiving a plurality of Input/Output (I/O) requests from a plurality of client devices, determining a priority of each of the client devices relative to other client devices, and dynamically allocating bandwidth resources to each client device based on the priority assigned to that client device.01-19-2012
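One plausible reading of "dynamically allocating bandwidth based on priority" in the entry above is a weighted proportional share. The sketch below assumes that rule purely for illustration; the function name and the weighting scheme are not taken from the application:

```python
# Sketch: divide total SAN bandwidth among clients proportionally to their
# priorities. The proportional-share rule is an assumption for illustration.

def allocate_bandwidth(total_mbps, priorities):
    """priorities: dict of client_id -> positive priority weight."""
    weight_sum = sum(priorities.values())
    if weight_sum == 0:
        return {client: 0.0 for client in priorities}
    return {client: total_mbps * weight / weight_sum
            for client, weight in priorities.items()}

# usage: client A is twice as important as B and C
print(allocate_bandwidth(1000, {"A": 2, "B": 1, "C": 1}))
# {'A': 500.0, 'B': 250.0, 'C': 250.0}
```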
20120059900PERSISTENT PERSONAL MESSAGING IN A DISTRIBUTED SYSTEM - A persistent personal messaging system provides shared memory space functionality supporting a user changing between a plurality of client devices, even within a loosely coupled, distributed system for persistent personal messaging. A user, irrespective of which messaging client they are using, logs on to the system. The act of logging on places user data, representing the user, into the shared memory space. A “contacts” service agent finds the friends and groups that the user belongs to and notifies other users that the user has logged on. Given the on-line status of other users and groups, a “history” service agent will retrieve previous messages from the shared memory space that formed the user's conversations with users and groups, as if the user had never logged off or switched devices. When the user adds a new message to any conversation, the message is added to the shared memory space.03-08-2012
20120158883INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM - A reading instruction receiving unit (…)06-21-2012
20120158884CONTENT DISTRIBUTION DEVICE, CONTENT DISTRIBUTION METHOD, AND PROGRAM - A content distribution device of the present invention includes: a content holding unit that holds a plurality of distribution contents that can be distributed, the distribution content including a first content that has a first bit rate and a second content that has a second bit rate lower than the first bit rate; a cache holding unit that temporarily holds at least one of the first and second contents; a cache control unit that reads out the first or second content to be distributed from the content holding unit, and stores the read out content in the cache holding unit; and a content distribution unit that reads out and distributes the first or second content that is temporarily held in the cache holding unit, or the first or second content that is held in the content holding unit, in distribution of a specified content specified by a distribution request among the plurality of distribution contents. The content distribution unit reads out and distributes the second content of the specified content that is held in the content holding unit, in a case of the specified content not being stored in the cache holding unit, and an available capacity of the cache holding unit being less than a first available capacity threshold value.06-21-2012
20120158882HIGHLY SCALABLE AND DISTRIBUTED DATA SHARING AND STORAGE - Embodiments of the disclosure relate to storing and sharing data in a scalable distributed storing system using parallel file systems. An exemplary embodiment may comprise a network, a storage node coupled to the network for storing data, a plurality of application nodes in device and system modalities coupled to the network, and a parallel file structure disposed across the storage node and the application nodes to allow data storage, access and sharing through the parallel file structure. Other embodiments may comprise interface nodes for accessing data through various file access protocols, a storage management node for managing and archiving data, and a system management node for managing nodes in the system.06-21-2012
20120209942SYSTEM COMBINING A CDN REVERSE PROXY AND AN EDGE FORWARD PROXY WITH SECURE CONNECTIONS - A proxy system is provided to receive an HTTP request for content accessible over the Internet, comprising: cache storage; a computer system configured to implement a CDN proxy module and an edge forward proxy module, each having access to the cache storage to cache and to retrieve content; a selector to select either the CDN proxy module or the edge forward proxy module depending upon contents of a header of the HTTP request received from the user device; and an HTTP client to forward the request from the CDN proxy or from the edge forward proxy over the Internet to a server to serve the requested content.08-16-2012
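The selection step in the entry above turns on the contents of an HTTP request header. A hedged sketch, assuming (purely for illustration) that requests whose Host header belongs to a configured set of CDN-served domains go to the reverse proxy and everything else goes to the forward proxy; the domain set and handler names are invented:

```python
# Sketch: route a request to a CDN reverse-proxy handler or an edge
# forward-proxy handler based on a request header. The Host-based rule and
# the handler names are assumptions made for illustration only.

CDN_DOMAINS = {"cdn.example.com", "static.example.com"}

def handle_via_cdn_proxy(headers):
    return f"reverse proxy serves {headers['Host']} from the CDN cache"

def handle_via_forward_proxy(headers):
    return f"forward proxy fetches {headers['Host']} on the client's behalf"

def select_proxy(headers):
    host = headers.get("Host", "")
    if host in CDN_DOMAINS:
        return handle_via_cdn_proxy
    return handle_via_forward_proxy

# usage
request_headers = {"Host": "cdn.example.com", "User-Agent": "demo"}
print(select_proxy(request_headers)(request_headers))
```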
20120209944Software Pipelining On A Network On Chip - Memory sharing in a software pipeline on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controlling communications between an IP block and memory, and each network interface controller controlling inter-IP block communications through routers, including segmenting a computer software application into stages of a software pipeline, the software pipeline comprising one or more paths of execution; allocating memory to be shared among at least two stages including creating a smart pointer, the smart pointer including data elements for determining when the shared memory can be deallocated; determining, in dependence upon the data elements for determining when the shared memory can be deallocated, that the shared memory can be deallocated; and deallocating the shared memory.08-16-2012
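The "smart pointer" in the entry above carries data elements used to decide when the shared memory can be deallocated; a count of remaining users is the classic choice, so the sketch below models it that way. The counting scheme and class names are assumptions, not a claim about the NOC design:

```python
# Sketch: a smart pointer whose bookkeeping decides when a shared buffer
# may be deallocated. Reference counting is assumed for illustration.

class SharedBuffer:
    def __init__(self, size):
        self.data = bytearray(size)
        self.deallocated = False

class SmartPointer:
    def __init__(self, buffer, users):
        self.buffer = buffer
        self.remaining_users = users     # pipeline stages still using the buffer

    def release(self):
        """Called by a stage when it has finished with the shared buffer."""
        self.remaining_users -= 1
        if self.remaining_users == 0 and not self.buffer.deallocated:
            self.buffer.deallocated = True   # safe to reclaim the memory
        return self.buffer.deallocated

# usage: two pipeline stages share one buffer
ptr = SmartPointer(SharedBuffer(1024), users=2)
print(ptr.release())   # False - one stage still needs the buffer
print(ptr.release())   # True  - last user released it, buffer reclaimed
```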
20120209943APPARATUS AND METHOD FOR CONTROLLING DISTRIBUTED MEMORY CLUSTER - Provided are an apparatus and method for controlling a distributed memory cluster. A distributed computing system may include a computing node cluster, a distributed memory cluster, and a controlling node. The computing node cluster may include a plurality of computing nodes including first computing nodes that each generates associated data. The distributed memory cluster may be configured to store the associated data of the first computing nodes. The controlling node may be configured to select memory blocks of the associated data for distribution on the distributed memory cluster based on a node selection rule and memory cluster structure information, and to select second computing nodes from the computing node cluster based on a location selection rule and the memory cluster structure information.08-16-2012
20120110112DISTRIBUTED SYSTEM FOR CACHE DEFEAT DETECTION AND CACHING OF CONTENT ADDRESSED BY IDENTIFIERS INTENDED TO DEFEAT CACHE - Systems and methods for cache defeat detection are disclosed. Moreover, systems and methods for caching of content addressed by identifiers intended to defeat cache are further disclosed. In one aspect, embodiments of the present disclosure include a system for optimizing resources in a mobile network by, for example, performing one or more of: identifying a parameter in an identifier used in multiple polling requests to a given content source; detecting that the parameter in the identifier changes for each of the polling requests; determining whether responses received from the given content source are the same for each of the multiple polling requests; and/or caching the responses on the mobile device in response to determining that the responses received for the given content source are the same.05-03-2012
20120072524SYSTEM AND METHOD FOR RECORDING DATA IN A NETWORK ENVIRONMENT - A method is provided in one example embodiment and includes receiving a signal to record a media stream, and recording the media stream in a first file that has a preconfigured length. If the media stream being recorded exceeds the preconfigured length then a second file is used to continue recording the media stream. The second file can have the same preconfigured length as the first file. The method also includes receiving a signal to stop recording the media stream, and storing metadata associated with the media stream in a database. In specific implementations, the metadata can include a unique file name associated with the media stream, a directory name of a disk directory, a first time indicative of when the recording started, and a second time indicative of when the recording ended.03-22-2012
20110066696COMMUNICATION PROTOCOL - One aspect relates to a communication protocol for communicating between one or more entities, such as devices, hosts or any other system capable of communicating over a network. A protocol is provided that allows communication between entities without a priori knowledge of the communication protocol. In such a protocol, for example, information describing a data structure of the communication protocol is transferred between communicating entities. Further, an authentication protocol is provided for providing bidirectional authentication between communicating entities. In one specific example, the entities include a master device and a slave device coupled by a serial link. In another specific example, the communication protocol may be used for performing unbalanced transmission between communicating entities.03-17-2011
20110106907SYSTEM AND METHOD FOR SEQUENTIAL RECORDING AND ARCHIVING LARGE VOLUMES OF VIDEO DATA - The invention relates to a data storage system comprising a plurality of arrays of a server and a number of data recording devices, capable of sequentially recording supplied data at an input rate below a given maximum input rate. The system further comprises a network switch as an interface between the arrays of data recording devices and a network of data capturing devices where there is a variable overall data capturing rate. The servers are each provided with monitoring means for monitoring the input rate of the respective array. The servers are communicatively linked to each other and at least one of the servers is provided for functioning as a controller for controlling at least one other of the servers and assigning part of the stream of captured data to the at least one other server in response to its monitoring means.05-05-2011
20100094950Methods and systems for controlling fragment load on shared links - Controlling fragment load on shared links, including a large number of fractional-storage CDN servers storing erasure-coded fragments encoded with a redundancy factor greater than one from contents, and a large number of assembling devices configured to obtain the fragments from sub-sets of the servers. At least some of the servers share their Internet communication link with other Internet traffic, and the fragment traffic via the shared link is determined by the number of sub-sets in which the servers accessed via the shared link participate. The maximum number of sub-sets in which the servers accessed via the shared link are allowed to participate is approximately a decreasing function of the throughput of the other Internet traffic via the shared link.04-15-2010
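The cap described above is only stated to be "approximately a decreasing function" of the competing traffic's throughput; the exact function is not given. The sketch below picks an arbitrary spare-capacity form just to make the idea concrete, and every parameter is an assumption:

```python
# Sketch: limit how many fragment-retrieval sub-sets a shared link may join,
# decreasing the limit as the other traffic on that link grows. The specific
# formula below is an illustrative assumption, not the one used by the system.

def max_subsets(link_capacity_mbps, other_traffic_mbps,
                per_subset_mbps=2.0, floor=1):
    spare = max(link_capacity_mbps - other_traffic_mbps, 0.0)
    return max(floor, int(spare // per_subset_mbps))

# usage: the allowance shrinks as competing traffic rises
for other in (0, 20, 60, 95):
    print(other, "Mbps of other traffic ->", max_subsets(100, other), "sub-sets")
```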
20100094949Method of Backing Up Library Virtual Private Database Using a Web Browser - A library uses a web server to store library vital product data (VPD) to a user's computer. In certain embodiments, the library uses web type cookies to save library VPD as name-value pairs. After an action, such as a service action, that results in a loss of VPD, the library can automatically retrieve the VPD from the web browser of the user's computer. This approach has several advantages. No user intervention is required to back up or restore the library VPD. Simply using the web user interface of the library accomplishes the necessary connection to the user's computer storage. If the user does not connect to the web browser then it is likely that library VPD is not being changed. No additional hardware or software is required. Additionally, the library already has a web server and the customer already uses web browsers to access the library. No cost, installation, or setup is required. In certain embodiments, library firmware can use the existing operator panel and web user interface for prompting the user through any decisions that may be required, as it relates to backing up or restoring library VPD.04-15-2010
20120221670METHODS, CIRCUITS, DEVICES, SYSTEMS AND ASSOCIATED COMPUTER EXECUTABLE CODE FOR CACHING CONTENT - Disclosed are methods, circuits, devices, systems and associated computer executable code for caching content. According to embodiments, a client device may be connected to the internet or other distributed data network through a gateway network. As initial portions of client requested content enters the gateway network, the requested content may be characterized and compared to content previously cached on a cache integral or otherwise functionally associated with the gateway network. In the event a match is found, a routing logic, mechanism, circuitry or module may replace the content source server with the cache as the source of content being routed to the client device. In the event the comparison does not produce a match, as content enters the network a caching routine running on processing circuitry associated with the gateway network may passively cache the requested content while routing the content to the client device.08-30-2012
20120131127ADVANCED CONTENTION DETECTION - A multiple computer system is disclosed in which n computers (M…)05-24-2012
20120131126Mirroring Solution in Cloud Storage Environment - A system configured to provide access to shared storage includes a first network node configured to provide access to the shared storage to a first plurality of client stations. The first network node includes a first cache memory module configured to store first data corresponding to the first plurality of client stations, and a first cache control module configured to transfer the first data from the first cache memory module to the shared storage. A second network node is configured to provide access to the shared storage to a second plurality of client stations. The second network node includes a second cache memory module configured to store second data corresponding to the second plurality of client stations and store the first data, and a second cache control module configured to transfer the second data from the second cache memory module to the shared storage.05-24-2012
20100082765SYSTEM AND METHOD FOR CHUNK BASED TIERED STORAGE VOLUME MIGRATION - System and method for reducing the costs of moving data between two or more multi-tiered storage devices. Specifically, the system operates by moving only the high tier portion of the data and merely remapping the low tier data to the migration target device, which eliminates a large amount of data movement (low tier) while maintaining the SLA of the high tier data. When a command is received to migrate a thin provisioned volume from a source primary storage device to another target primary storage device, the system does not copy all of the tier…04-01-2010
20120166573CENTRALIZED FEED MANAGER - A method for delivering content from a plurality of sources to a plurality of end servers through a central manager is provided. The method includes receiving the content from the plurality of sources at the central manager, formatting the content to a form usable by the plurality of end servers, creating a transaction generic to the plurality of end servers where the transaction includes a reference to a set of instructions for storing the formatted content, sending the transaction to an end server in the plurality of end servers, and calling the reference to execute the set of instructions, where the set of instructions stores the formatted content into the memory of the end server.06-28-2012
20120166572CACHE SHARING AMONG BRANCH PROXY SERVERS VIA A MASTER PROXY SERVER AT A DATA CENTER - A method for cache sharing among branch proxy servers. A branch proxy server receives a request for accessing a resource at a data center. The branch proxy server creates a cache entry in its cache to store the requested resource if the branch proxy server does not store the requested resource. Upon creating the cache entry, the branch proxy server sends the cache entry to a master proxy server at the data center to transfer ownership of the cache entry if the master proxy server did not store the resource in its cache. When the resource becomes invalid or expired, the master proxy server informs the appropriate branch proxy servers storing the resource to purge the cache entry containing this resource. In this manner, the master proxy server ensures that the cached resource is synchronized across the branch proxy servers storing this resource.06-28-2012
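Stripped to its flow, the entry above says: a branch caches on miss, hands ownership of the new entry to the master, and the master later tells every branch holding the resource to purge it. A compact sketch under those assumptions, with all class names invented:

```python
# Sketch of branch/master proxy cache sharing: a branch caches a resource on
# miss and registers the entry with the master; on invalidation the master
# tells every branch holding that resource to purge it. Names are illustrative.

class MasterProxy:
    def __init__(self):
        self.owners = {}                    # resource -> set of branch proxies

    def register(self, resource, branch):
        self.owners.setdefault(resource, set()).add(branch)

    def invalidate(self, resource):
        for branch in self.owners.pop(resource, set()):
            branch.purge(resource)

class BranchProxy:
    def __init__(self, name, master, origin):
        self.name, self.master, self.origin = name, master, origin
        self.cache = {}

    def request(self, resource):
        if resource not in self.cache:               # cache miss at the branch
            self.cache[resource] = self.origin[resource]
            self.master.register(resource, self)     # hand the entry to the master
        return self.cache[resource]

    def purge(self, resource):
        self.cache.pop(resource, None)

# usage
origin = {"/report": "v1"}
master = MasterProxy()
b1 = BranchProxy("branch-1", master, origin)
b1.request("/report")
master.invalidate("/report")          # resource expired at the data center
print("/report" in b1.cache)          # False - the branch purged its entry
```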
20110179132Provisioning Server Resources in a Cloud Resource - Systems and methods to manage workloads and hardware resources in a data center or cloud. In one embodiment, a method includes a data center having a plurality of servers in a network. The data center provides a virtual machine for each of a plurality of users, each virtual machine to use a portion of hardware resources of the data center. The hardware resources include storage and processing resources distributed onto each of the plurality of servers. The method further includes sending messages amongst the servers, some of the messages being sent from a server including status information regarding a hardware resource utilization status of that server. The method further includes detecting a request from the virtual machine to handle a workload requiring increased use of the hardware resources, and provisioning the servers to temporarily allocate additional resources to the virtual machine, wherein the provisioning is based on status information provided by one or more of the messages.07-21-2011
20120173654METHOD AND APPARATUS FOR IDENTIFYING VIRTUAL CONTENT CANDIDATES TO ENSURE DELIVERY OF VIRTUAL CONTENT - An apparatus and method is provided that ensures virtual content providers such as advertisers that their virtual content will reach every mobile device, every application within each mobile device and/or every user. Such functionality is referred to herein as a “guaranteed reach”. Guaranteed reach parameters including reach type parameters (mobile devices, applications and/or users) are specified in a memory. A server receives a virtual content request and a received target identification uniquely identifying, for example, the requesting device via a network. The server identifies virtual content candidates from the memory by comparing the received target identification to the stored target identification associated with the virtual content. The guaranteed reach parameters may also include frequency-based criteria that guarantee a frequency of impression(s) for particular virtual content and guaranteed priority criteria to ensure the guarantee will be met.07-05-2012
20120215878CONTENT DELIVERY PLATFORM APPARATUSES, METHODS AND SYSTEMS - The CONTENT DELIVERY PLATFORM APPARATUSES, METHODS AND SYSTEMS (“CDP”) transform content seed selections and recommendations via CDP components such as discovery and gurus into events and discovery of other contents for users and revenue for right-holders. In one embodiment, the CDP may provide facilities for obtaining a universally resolvable list of content items on a local client and identifying a non-local item from the list that is absent on the local client. The CDP may generate a local cache request for the identified non-local item having an associated universally resolvable content identifier and transmit the generated local cache request to a universally resolvable content server. The CDP may then receive, in response to the transmitted request, a universally resolvable content item corresponding to the local cache request and may mark the requested item as temporary and locally available upon receiving the content item.08-23-2012
20100299402CONFIGURING CHANNELS FOR SHARING MEDIA - A user interface for sharing media items with others. From a sender's perspective, embodiments of the invention allow for an easy-to-use drag-and-drop technique that is more user-friendly than conventional techniques. From the recipient's perspective, embodiments of the invention allow media items from multiple sources to be aggregated into a single viewport, providing a cohesive and unified approach to media items received from others.11-25-2010
20100049822Network, storage appliance, and method for externalizing an external I/O link between a server and a storage controller integrated within the storage appliance chassis - A network storage appliance is disclosed. The storage appliance includes a port combiner that provides data communication between at least first, second, and third I/O ports; a storage controller that controls storage devices and includes the first I/O port; a server having the second I/O port; and an I/O connector for networking the third I/O port to the port combiner. A single chassis encloses the port combiner, storage controller, and server, and the I/O connector is affixed on the storage appliance. The third I/O port is external to the chassis and is not enclosed therein. In various embodiments, the port combiner comprises a FibreChannel hub comprising a series of loop resiliency circuits, or a FibreChannel, Ethernet, or Infiniband switch. In one embodiment, the port combiner, I/O ports, and server are all comprised in a single blade module for plugging into a backplane of the chassis.02-25-2010
20120179773METHOD AND SYSTEM FOR COMMUNITY DATA CACHING - A cache module (…)07-12-2012
20120179772SYSTEM AND METHOD TO IMPROVE FITNESS TRAINING - A method for creating a personalized exercise routine with at least one user interface used in connection with forming machine-readable instructions protected as private to a user subsequently carrying out the exercise routine on an exercise machine, the method including providing the user with at least one user interface to define the personalized exercise routine; forming machine-readable instructions to control the exercise machine to carry out the exercise routine on the exercise machine, said machine instructions protected as private to the user; storing the personalized exercise routine formed in the machine-readable instructions in a memory device; and user-triggered engaging of the machine-readable instructions to control the exercise machine in carrying out the personalized exercise routine. The method can include associating the exercise routine with a first exercise machine to produce a first set of signals; and subsequently translating the first set of signals into the machine-readable instructions.07-12-2012
20120179771SUPPORTING AUTONOMOUS LIVE PARTITION MOBILITY DURING A CLUSTER SPLIT-BRAINED CONDITION - A method, data processing system, and computer program product autonomously migrate clients serviced by a first VIOS to other VIOSes in the event of a VIOS cluster “split-brain” scenario generating a primary sub-cluster and a secondary sub-cluster, where the first VIOS is in the secondary sub-cluster. The VIOSes in the cluster continually exchange keep-alive information to provide each VIOS with an up-to-date status of other VIOSes within the cluster and to notify the VIOSes when one or more nodes lose connection to or are no longer communicating with other nodes within the cluster, as occurs with a cluster split-brain event/condition. When this event is detected, a first sub-cluster assumes a primary sub-cluster role and one or more clients served by one or more VIOSes within the secondary sub-cluster are autonomously migrated to other VIOSes in the primary sub-cluster, thus minimizing downtime for clients previously served by the unavailable/uncommunicative VIOSes.07-12-2012
20120254340Local Storage Linked to Networked Storage System - Disclosed are various embodiments for storage of files. A portable memory device is configured to couple to a computing device, and a storage management application is stored in the portable memory device, the storage management application being executable by a processor circuit. The storage management application is configured to send a plurality of files for storage in a networked storage system, the networked storage system being remote to the computing device. The storage management application caches a subset of the files on the portable memory device and maintains a local file directory in the portable memory device. The local file directory lists the files stored in the networked storage system in association with an account linked to the portable memory device.10-04-2012
20120221673METHOD FOR PROVIDING VIRTUALIZATION INFORMATION - Virtualization information on a first user terminal is generated and is stored in a data storage device through a mobile communication system. When a user with a second user terminal requests virtualization information while the second user terminal provides a first identification number of the first user terminal, the mobile communication system provides virtualization information corresponding to the identification number to the second user terminal. The second user terminal operates the virtualization information corresponding to the first identification number.08-30-2012
20120221672COMMUNICATION DEVICES, METHODS AND COMPUTER READABLE STORAGE MEDIA - A communication device includes a memory that has a first storage area that stores an identifier of a first communication device, which is in a communication session with the communication device, and a second storage area that stores an identifier of a second communication device, which established a communication session with the communication device. The communication device performs the steps of: notifying the identifier stored in the first storage area to the first communication device, receiving an identifier stored in a first storage area of the first communication device from the first communication device, determining whether the identifier received from the first communication device is stored in the second storage area of the communication device, restricting re-establishment of the communication session with the first communication device when the identifier received from the first communication device is stored in the second storage area of the communication device.08-30-2012
20120084385Network Cache Architecture - There is described a method and apparatus for sending data through one or more packet data networks. A reduced size packet is sent from a packet sending node towards a cache node, the reduced size packet including in its payload a pointer to a payload data segment stored in a file at the cache node. When the reduced size packet is received at the cache node, the pointer is used to identify the payload data segment from data stored at the cache node. The payload data segment is inserted into the reduced size packet in place of the pointer so as to generate a full size packet, which is sent from the cache node towards a client.04-05-2012
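The reduced-size packet described above carries a pointer that the cache node swaps for the actual bytes before forwarding a full-size packet. A toy version of that substitution, assuming (for illustration only) that the pointer encodes a file name plus offset and length:

```python
# Sketch: a sender emits a reduced-size packet whose payload is a pointer
# (file id, offset, length); the cache node replaces the pointer with the
# referenced bytes to rebuild the full-size packet. The pointer layout is an
# assumption for illustration.

CACHE_FILES = {"movie.bin": b"ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def make_reduced_packet(header, file_id, offset, length):
    pointer = f"{file_id}:{offset}:{length}".encode()
    return {"header": header, "payload": pointer, "is_pointer": True}

def expand_at_cache_node(packet):
    if not packet["is_pointer"]:
        return packet                     # already a full-size packet
    file_id, offset, length = packet["payload"].decode().split(":")
    data = CACHE_FILES[file_id][int(offset):int(offset) + int(length)]
    return {"header": packet["header"], "payload": data, "is_pointer": False}

# usage
reduced = make_reduced_packet({"dst": "client-7"}, "movie.bin", 2, 5)
print(expand_at_cache_node(reduced)["payload"])   # b'CDEFG'
```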
20120084383Distributed Data Storage - The present invention relates to a distributed data storage system comprising a plurality of storage nodes. Using unicast and multicast transmission, a server application may write data in the storage system. When writing data, at least two storage nodes are selected based in part on a randomized function, which ensures that data is sufficiently spread to provide efficient and reliable replication of data in case a storage node malfunctions.04-05-2012
20120084382ON-THE-FLY REVERSE MAPPING - In accordance with the invention, embodiments of a DNS server, a DNS proxy process, and an intermediate server (IMS) are described. The DNS server, DNS proxy process, and intermediate server (IMS) described herein utilize a destination IP address for a destination device in on-the-fly reverse mapping operations in order to accurately provide a hostname originally requested by the client device.04-05-2012
20120084381Virtual Desktop Configuration And Operation Techniques - Techniques for configuring and operating a virtual desktop session are disclosed herein. In an exemplary embodiment, an inter-partition communication channel can be established between a virtualization platform and a virtual machine. The inter-partition communication channel can be used to configure a guest operating system to conduct virtual desktop sessions and manage running virtual desktop sessions. In addition to the foregoing, other techniques are described in the claims, the detailed description, and the figures.04-05-2012
20120221671Controlling Shared Memory - In view of the characteristics of distributed applications, the present invention proposes a technical solution for applying a shared memory on an NIC comprising: a shared memory configured to provide shared storage space for a task of a distributed application, and a microcontroller. Furthermore, the present invention provides a computer device that includes the above-mentioned NIC, a method for controlling a read/write operation on a shared memory of a NIC, and a method for invoking the NIC. The use of the technical solution provided in the present invention bypasses the processing of the network protocol stack and avoids the time delay introduced by the network protocol stack. The present invention does not need to perform TCP/IP encapsulation on the data packet, thus greatly saving the additional packet header and packet tail overheads generated by TCP/IP layer data encapsulation.08-30-2012
20120259942Proxy server with byte-based include interpreter - According to this disclosure, a proxy server is enhanced to be able to interpret instructions that specify how to modify an input object to create an output object to serve to a requesting client. Typically the instructions operate on binary data. For example, the instructions can be interpreted in a byte-based interpreter that directs the proxy as to what order, and from which source, to fill an output buffer that is served to the client. The instructions specify what changes to make to a generic input file. This functionality extends the capability of the proxy server in an open-ended fashion and enables it to efficiently create a wide variety of outputs for a given generic input file. The generic input file and/or the instructions may be cached at the proxy. The teachings hereof have applications in, among other things, the delivery of web content, streaming media, and the like.10-11-2012
20120226766SYSTEMS AND METHODS THERETO FOR ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING TECHNIQUES - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.09-06-2012
20100306339P2P CONTENT CACHING SYSTEM AND METHOD - A P2P content caching system, method, and computer program product for a P2P application on a computer network device. The system includes: a content analyzer; and a content manager. The method includes: determining P2P hotspot downloading contents of the P2P application on the computer network device; downloading the determined P2P hotspot downloading contents into a local memory, and requesting a directory server of the P2P application to register a P2P content caching system as a P2P content provider of the downloaded P2P hotspot downloading contents; and providing the downloaded P2P hotspot downloading contents to a P2P participant in response to a request from the P2P participant to the downloaded P2P hotspot downloading contents.12-02-2010
20120239774UNOBTRUSIVE METHODS AND SYSTEMS FOR COLLECTING INFORMATION TRANSMITTED OVER A NETWORK - The present invention relates generally to unobtrusive methods and systems for collecting information transmitted over a network utilizing a data collection system residing between an originator system and a responding system. In one embodiment the Originator System can be a web browser and the Responding System can be a web server. In another embodiment the Originator System can be a local computer and the Responding System can be another computer on the network. Both these and other configurations are considered to be within the domain of this invention. The Data Collection System acts in a hybrid peer-to-peer/client-server manner in responding to the Originating System as a Responding System while acting as an Originating System to the Responding System. This configuration enables real-time acquisition and storage of network traffic information in a completely unobtrusive manner without requiring any server- or client-side code.09-20-2012
20110035460METHOD AND APPARATUS FOR MANAGING SHARED DATA AT A PORTABLE ELECTRONIC DEVICE OF A FIRST ENTITY - A method and apparatus for managing shared data at a portable electronic device of a first entity is provided. A message is received advising that data associated with a second entity is being shared. A request is transmitted to a server for a list of shared folders associated with the second entity, in response to an option to view shared folders associated with the second entity being selected. The list is received. An initialize command is transmitted to the server, the initialize command identifying at least one folder in the list. The data associated with the second entity is received, responsive to the transmitting the initialize command. The data is stored in association with a second entity identifier.02-10-2011
20110035461Protocol adapter for transferring diagnostic signals between in-vehicle networks and a computer - A protocol adapter for simultaneously communicating with one or more remote computers over any one of a plurality of protocols. The adapter includes a motherboard having an integrated CPU, a plurality of interface modules, a plurality of device drivers and a plurality of daughter-board module slots. The protocol adapter further includes at least one daughter-board interface module mounted in one of the plurality of daughter-board slots. The at least one daughter-board module expands the number of protocols of the adapter beyond those protocols being run by the motherboard.02-10-2011
20120331086Clustered Storage Network - A data storage network is provided. The network includes a client connected to the data storage network and a plurality of nodes on the data storage network, wherein each data node has two or more RAID controllers, and wherein a first RAID controller of a first node is configured to receive a data storage request from the client, to generate RAID parity data on a data set received from the client, and to store all of the generated RAID parity data on a single node of the plurality of nodes.12-27-2012
20120324035SHARED NETWORK RESPONSE CACHE - An apparatus and system are disclosed for reducing network traffic using a shared network response cache. A request filter module intercepts a network request to prevent the network request from entering a data network. The network request is sent by a client and is intended for one or more recipients on the data network. A cache check module checks a shared response cache for an entry matching the network request. A local response module sends a local response to the client in response to an entry in the shared response cache matching the network request. The local response satisfies the network request based on information from the matching entry in the shared response cache.12-20-2012
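The three modules in the entry above map naturally onto a small intercept, check, respond pipeline. The sketch below strings them together; all names are placeholders and the network step is a stand-in:

```python
# Sketch of a shared network response cache: intercept a client's request,
# answer locally when a matching entry exists, otherwise let the request go
# out to the data network and remember the response. Names are placeholders.

shared_response_cache = {}        # request key -> cached response

def forward_to_network(request):
    # stand-in for actually sending the request onto the data network
    return f"network answer for {request}"

def handle_request(request):
    if request in shared_response_cache:          # cache check module
        return shared_response_cache[request]     # local response module
    response = forward_to_network(request)        # request really goes out
    shared_response_cache[request] = response
    return response

# usage: the second identical request never enters the data network
print(handle_request("WHO-HAS 192.0.2.10"))
print(handle_request("WHO-HAS 192.0.2.10"))
```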
20120271904Method and Apparatus for Caching in a Networked Environment - In general, methods and apparatus according to the invention mitigate these and other issues by implementing caching techniques described herein. So when one device in a home network downloads and plays a particular content (e.g., a video, song) from a given site, the content is cached within the network such that the same content is available to be re-played on another device without re-downloading the same content from the Internet.10-25-2012
20120271906System and Method for Selectively Caching Hot Content in a Content Delivery System - A method includes altering a request interval threshold when a cache-hit ratio falls below a target, receiving a request for content, providing the content when the content is in the cache, when the content is not in the cache and the time since a previous request for the content is less than the request interval threshold, retrieving and storing the content, and providing the content to the client, when the elapsed time is greater than the request interval threshold, and when another elapsed time since another previous request for the content is less than another request interval threshold, retrieving and storing the content, and providing the content to the client, and when the other elapsed time is greater than the other request interval threshold, rerouting the request to the content server without caching the content.10-25-2012
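Stripped of claim language, the rule above amounts to: cache an object only if it was requested again within the request interval threshold, and loosen that threshold when the measured cache-hit ratio drifts below its target. A sketch of that policy follows; the adjustment step, the data layout and the class name are assumptions for illustration:

```python
# Sketch of hot-content selective caching: an object is cached only when a
# repeat request arrives within `interval_threshold` seconds, and the
# threshold is raised when the hit ratio falls below target.

import time

class HotContentCache:
    def __init__(self, interval_threshold=60.0, target_hit_ratio=0.5):
        self.interval_threshold = interval_threshold
        self.target_hit_ratio = target_hit_ratio
        self.last_request = {}                  # content id -> last request time
        self.cache = {}
        self.hits = self.requests = 0

    def fetch_from_origin(self, content_id):
        return f"<content {content_id}>"        # stand-in for the content server

    def request(self, content_id, now=None):
        now = time.time() if now is None else now
        self.requests += 1
        if content_id in self.cache:
            self.hits += 1
            return self.cache[content_id]
        content = self.fetch_from_origin(content_id)
        previous = self.last_request.get(content_id)
        if previous is not None and now - previous < self.interval_threshold:
            self.cache[content_id] = content    # requested again soon enough: hot
        self.last_request[content_id] = now
        self._adjust_threshold()
        return content

    def _adjust_threshold(self):
        ratio = self.hits / self.requests
        if ratio < self.target_hit_ratio:
            self.interval_threshold *= 1.1      # admit more content into the cache

# usage: a repeat request within the interval makes the content cacheable
c = HotContentCache()
c.request("clip-9", now=0.0)
c.request("clip-9", now=30.0)       # second request 30s later -> cached
print("clip-9" in c.cache)          # True
```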
20120271905PROXY CACHING IN A PHOTOSHARING PEER-TO-PEER NETWORK TO IMPROVE GUEST IMAGE VIEWING PERFORMANCE - The present invention provides a method and system for serving an image stored in the peer computer to a requesting computer in a network photosharing system in which the peer computer is coupled to a photosharing system server. Aspects of the invention include caching a copy of the image in the photosharing server; and in response to the photosharing server receiving a request from the requesting computer to view the image stored in the peer computer, transmitting the cached image from the photosharing server to the requesting computer, thereby avoiding the need to transfer the image from the peer computer to the photosharing server for each request to view the image.10-25-2012
20120271903SHARED RESOURCE AND VIRTUAL RESOURCE MANAGEMENT IN A NETWORKED ENVIRONMENT - Systems and methods for shared resource or virtual resource management in a networked environment are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, includes, creating a virtual memory pool from an aggregation of the physical memory of the devices and/or allocating portions of the virtual memory pool to a given device among the devices. Further, the portions of the virtual memory pool allocated to the given device are in part accessible over a wireless connection for data retrieval and storage by the given device.10-25-2012
20120324038Controlling Shared Memory - In view of the characteristics of distributed applications, the present invention proposes a technical solution for applying a shared memory on an NIC comprising: a shared memory configured to provide shared storage space for a task of a distributed application, and a microcontroller. Furthermore, the present invention provides a computer device that includes the above-mentioned NIC, a method for controlling a read/write operation on a shared memory of a NIC, and a method for invoking the NIC. The use of the technical solution provided in the present invention bypasses the processing of the network protocol stack and avoids the time delay introduced by the network protocol stack. The present invention does not need to perform TCP/IP encapsulation on the data packet, thus greatly saving the additional packet header and packet tail overheads generated by TCP/IP layer data encapsulation.12-20-2012
20120324037FLOW CONTROL METHOD AND APPARATUS FOR ENHANCING THE PERFORMANCE OF WEB BROWSERS OVER BANDWIDTH CONSTRAINED LINKS - Flow control is applied to increase the performance of a browser that pre-fetches Web objects while operating over bandwidth constrained links, allowing the level of concurrency to increase without contention for the limited bandwidth resources. An agent or a gateway is used to speed up the browser's Internet transactions over bandwidth constrained connections to source servers. The browser is assisted in fetching objects in such a way that an object is ready and available locally before the browser requires it, without suffering congestion on any bandwidth constrained link. The seemingly instantaneous availability of objects enables the browser to complete processing one object and request the next without much wait.12-20-2012
20120324036System And Method For Acceleration Of A Secure Transmission Over Satellite - A broadband communication system with improved latency is disclosed. The system employs acceleration of secure web-based communications over a satellite communication network. In accordance with aspects of the invention, secure protocol acceleration is employed such that required protocol signals transmitted from a computer employing a web browser may be intercepted by a remote terminal. To ensure that the browser will continue transmitting data, the remote terminal generates the required acknowledgment and security signals to continue the secure communication, which may then be transmitted back to the computer. Meanwhile, the received protocol signals may be converted by the remote terminal for transmission through the satellite communications system in a format appropriate for that communication medium. Aspects of the invention further include a hub or similar device for communicating with the satellite communications system.12-20-2012
20110238777PIPELINE SYSTEMS AND METHOD FOR TRANSFERRING DATA IN A NETWORK ENVIRONMENT - A communications system having a data transfer pipeline apparatus for transferring data in a sequence of N stages from an origination device to a destination device. The apparatus comprises dedicated memory having buffers dedicated for carrying data and a master control for registering and controlling processes associated with the apparatus for participation in the N stage data transfer sequence. The processes include a first stage process for initiating the data transfer and a last Nth stage process for completing data transfer. The first stage process allocates a buffer from a predetermined number of buffers available within the memory for collection, processing, and sending of the data from the origination device to a next stage process. The Nth stage process receives a buffer allocated to the first stage process from the (N−1)th stage and frees the buffer upon processing completion to permit reallocation of the buffer.09-29-2011
20110238776METHOD AND SYSTEM FOR THE VIRTUALIZED STORAGE OF A DIGITAL DATA SET - This method of virtualized storage of a digital data set (…)09-29-2011
20120278423Method for Transmitting Data by Means of Storage Area Network and System Thereof - In the technical field of data storage and access, the invention relates to the technique of data transmission using a storage area network (SAN) in a magnetic disk storage device environment, including a method for transmitting data over a SAN in such an environment, the method including: determining a logical volume accessible to a server of the magnetic disk storage device; obtaining information on a logical volume accessible to a client of the magnetic disk storage device, which is determined by the client; establishing a corresponding relationship between the logical volume accessible to the server and the logical volume accessible to the client; receiving a request for using the logical volume of the magnetic disk storage device from the client; and informing the client of an available logical volume by utilizing the corresponding relationship so that a data access to the available logical volume is performed by the client over the SAN.11-01-2012
20120278424SYSTEM, A METHOD, AND A COMPUTER PROGRAM PRODUCT FOR COMPUTER COMMUNICATION - A system, a method and a computer program product for transmission over a network, the method includes: receiving, by an intermediate system coupled to the network, a portion of a data structure that is aimed to a recipient computer; generating a stamp that is responsive to a content of a segment of the data structure and is indifferent to transfer information about a transmission of the data structure; wherein the portion may include the segment or equal the segment; determining, by the intermediate system, whether to cache the portion, in response to at least a comparison between the stamp and stamps of cached portions of data structures; selectively caching the portion in response to the determination; and transmitting to the recipient computer either one of the portion of the transmitted data structure and a cached version of the portion of the transmitted data structure.11-01-2012
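The "stamp" above depends only on the segment's content, not on transfer metadata, which is what makes it comparable across transmissions. A content hash is the obvious candidate, so the sketch uses SHA-256; the hash choice and function names are assumptions, since the application does not name a stamp function:

```python
# Sketch: decide whether to cache a transmitted portion by stamping its
# content and comparing the stamp against stamps of already-cached portions.
# SHA-256 as the stamp function is an assumption for illustration.

import hashlib

cached_portions = {}                     # stamp -> cached portion bytes

def stamp(segment: bytes) -> str:
    # depends only on content, never on sender, timestamps or other transfer
    # information, so identical segments always produce matching stamps
    return hashlib.sha256(segment).hexdigest()

def handle_portion(portion: bytes) -> bytes:
    s = stamp(portion)
    if s in cached_portions:
        return cached_portions[s]        # serve/forward the cached copy
    cached_portions[s] = portion         # new content: cache it
    return portion

# usage: the second, identical portion is recognized and not re-cached
handle_portion(b"block of a file")
handle_portion(b"block of a file")
print(len(cached_portions))              # 1
```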
20120089696METHOD AND APPARATUS FOR MANAGING SHARED DATA AT A PORTABLE ELECTRONIC DEVICE OF A FIRST ENTITY - A method and apparatus for managing shared data at a portable electronic device of a first entity is provided. A message is received advising that data associated with a second entity is being shared. A request is transmitted to a server for a list of shared folders associated with the second entity, in response to an option to view shared folders associated with the second entity being selected. The list is received. An initialize command is transmitted to the server, the initialize command identifying at least one folder in the list. The data associated with the second entity is received, responsive to the transmitting the initialize command. The data is stored in association with a second entity identifier.04-12-2012
20120089695ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests.04-12-2012
20120331085LOAD BALANCING BASED UPON DATA USAGE - A method of load balancing can include segmenting data from a plurality of servers into usage patterns determined from accesses to the data. Items of the data can be cached in one or more servers of the plurality of servers according to the usage patterns. Each of the plurality of servers can be designated to cache items of the data of a particular usage pattern. A reference to an item of the data cached in one of the plurality of servers can be updated to specify the server of the plurality of servers within which the item is cached.12-27-2012
20120331087TIMING OF KEEP-ALIVE MESSAGES USED IN A SYSTEM FOR MOBILE NETWORK RESOURCE CONSERVATION AND OPTIMIZATION - Systems and methods for timing of keep-alive messages used in a system for mobile network resource conservation and optimization are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of detecting a rate of content change at the content source and adjusting the timing of keep-alive messages sent to the mobile device based on the rate of content change. The timing of the keep-alive messages can further be determined using different polling rates for the content polls of the multiple applications on the mobile device detected by the local proxy.12-27-2012
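One way to read "adjusting the timing of keep-alive messages based on the rate of content change" is to stretch the keep-alive interval when the source rarely changes and shrink it when it changes often. The sketch below encodes that reading; the bounds and the inverse relationship are assumptions, not values from the disclosure:

```python
# Sketch: derive a keep-alive interval from the observed rate of content
# change at the source. The clamping bounds and the inverse relationship
# are illustrative assumptions.

def keepalive_interval_seconds(changes_per_hour,
                               min_interval=60, max_interval=3600):
    if changes_per_hour <= 0:
        return max_interval              # content never changes: ping rarely
    interval = 3600.0 / changes_per_hour # roughly once per expected change
    return int(min(max(interval, min_interval), max_interval))

# usage: a fast-changing feed keeps the connection warm far more often
print(keepalive_interval_seconds(0.5))   # 3600
print(keepalive_interval_seconds(120))   # 60
```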
20120331084Method and System for Operation of Memory System Having Multiple Storage Devices - Systems and methods for operation of a memory system are disclosed. In some example embodiments, a system for storing or retrieving data in response to one or more signals provided from one or more clients includes a plurality of memcached-type memory devices arranged in a cluster, and a proxy module configured to communicate at least indirectly with each of the memcached-type memory devices and further configured to receive the one or more signals. The proxy module is configured to perform a determination of how to proceed in communicating with the memcached-type memory devices for the purpose of the storing or retrieving of data at or from one or more of the memcached-type memory devices in response to the one or more signals. In additional example embodiments, the proxy module is a centralized proxy and makes selections among the memory devices based upon performing of a memcache selection/fail-over algorithm (MSFOA).12-27-2012
20110289178Host Device and Method For Accessing a Virtual File in a Storage Device by Bypassing a Cache in the Host Device - A host device is provided comprising an interface configured to communicate with a storage device having a public memory area and a private memory area, wherein the public memory area stores a virtual file that is associated with content stored in the private memory area. The host device also comprises a cache, a host application, and a server. The server is configured to receive a request for the virtual file from the host application, send a request to the storage device for the virtual file, receive the content associated with the virtual file from the private memory area of the storage device, wherein the content is received by bypassing the cache, generate a response to the request from the host application, the response including the content, and send the response to the host application. In one embodiment, the server is a hypertext transfer protocol (HTTP) server. In another embodiment, the server can determine if a request is associated with a normal user permission or a super user permission, and send a response to the host application only if it is determined that the request is associated with the normal user permission.11-24-2011
20110320556Techniques For Migrating A Virtual Machine Using Shared Storage - Techniques for providing the ability to live migrate a virtual machine from one physical host to another physical host employ shared storage as the transfer medium for the state of the virtual machine. In addition, the ability for a virtualization module to use second-level paging functionality is employed, paging-out the virtual machine memory content from one physical host to the shared storage. The content of the memory file can be restored on another physical host by employing on-demand paging and optionally low-priority background paging from the shared storage to the other physical host.12-29-2011
20120102140METHOD FOR EFFICIENT UTILISATION OF THE THROUGHPUT CAPACITY OF AN ENB BY USING A CACHE - Method and apparatus for enabling optimisation of the utilisation of the throughput capacity of a first and a second interface of an eNB, where the first and the second interface alternate in having the lowest throughput capacity, and thereby take turns in limiting the combined data throughput over the two interfaces. In the method, data is received over the first interface and then cached in one of the higher layers of the Internet Protocol stack. The output from the cache of data to be sent over the second interface is controlled, based on the available throughput capacity of the second interface. Thereby, the alternating limiting effect of the interfaces is levelled out.04-26-2012
20120102138Multiplexing Users and Enabling Virtualization on a Hybrid System - A method, hybrid server system, and computer program product support multiple users in an out-of-core processing environment. At least one accelerator system in a plurality of accelerator systems is partitioned into a plurality of virtualized accelerator systems. A private client cache is configured on each virtualized accelerator system in the plurality of virtualized accelerator systems. The private client cache of each virtualized accelerator system stores data that is one of accessible by only the private client cache and accessible by other private client caches associated with a common data set. Each user in a plurality of users is assigned to a virtualized accelerator system from the plurality of virtualized accelerator systems.04-26-2012
20120102137CLUSTER CACHE COHERENCY PROTOCOL - Systems, methods, and other embodiments associated with a cluster cache coherency protocol are described. According to one embodiment, an apparatus includes non-transitory storage media configured as a cache associated with a computing machine. The computing machine is a member of a cluster of computing machines that share access to a storage device. A cluster caching logic is associated with the computing machine. The cluster caching logic is configured to communicate with cluster caching logics associated with the other computing machines to determine an operational status of a clique of cluster caching logics performing caching operations on data in the storage device. The cluster caching logic is also configured to selectively enable caching of data from the storage device in the cache based, at least in part, on a membership status of the cluster caching logic in the clique.04-26-2012
20120102136DATA CACHING SYSTEM - Provided herein are systems, uses, and processes relating to network communications. For example, provided herein are systems, uses, and processes for increasing transmission efficiency by removing redundancy from single source multiple destination transfers.04-26-2012
20120102135SEAMLESS TAKEOVER OF A STATEFUL PROTOCOL SESSION IN A VIRTUAL MACHINE ENVIRONMENT - The disclosed technique uses virtual machines in solving a problem of persistent state for storage protocols. The technique provides for seamless, persistent, storage protocol session state management on a server, for higher availability. A first virtual server is operated in an active role in a host system to serve a client, by using a stateful protocol between the first virtual server and the client. A second, substantially identical virtual server is maintained in a passive role. In response to a predetermined event, the second virtual server takes over for the first virtual server, while preserving state for a pending client request sent to the first virtual server in the stateful protocol. The method can further include causing the second virtual server to respond to the request before a timeout which is specific to the stateful protocol can occur.04-26-2012
20120102134CACHE SHARING AMONG BRANCH PROXY SERVERS VIA A MASTER PROXY SERVER AT A DATA CENTER - A method, system and computer program product for cache sharing among branch proxy servers. A branch proxy server receives a request for accessing a resource at a data center. The branch proxy server creates a cache entry in its cache to store the requested resource if the branch proxy server does not store the requested resource. Upon creating the cache entry, the branch proxy server sends the cache entry to a master proxy server at the data center to transfer ownership of the cache entry if the master proxy server did not store the resource in its cache. When the resource becomes invalid or expired, the master proxy server informs the appropriate branch proxy servers storing the resource to purge the cache entry containing this resource. In this manner, the master proxy server ensures that the cached resource is synchronized across the branch proxy servers storing this resource.04-26-2012
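The entry above describes a fairly concrete ownership-transfer and purge protocol. The following is a minimal Python sketch of that flow; the class and method names (MasterProxy, BranchProxy, origin_fetch) are invented for illustration and are not taken from the filing.

```python
"""Sketch of master/branch proxy cache sharing under assumed names."""


class MasterProxy:
    def __init__(self):
        self.cache = {}    # resource_id -> content
        self.owners = {}   # resource_id -> set of branch proxies caching it

    def register_entry(self, resource_id, content, branch):
        """Take ownership of a cache entry created at a branch proxy."""
        if resource_id not in self.cache:
            self.cache[resource_id] = content
        self.owners.setdefault(resource_id, set()).add(branch)

    def invalidate(self, resource_id):
        """Tell every branch proxy that cached the resource to purge it."""
        for branch in self.owners.pop(resource_id, set()):
            branch.purge(resource_id)
        self.cache.pop(resource_id, None)


class BranchProxy:
    def __init__(self, master, origin_fetch):
        self.master = master
        self.origin_fetch = origin_fetch   # callable that fetches from the data center
        self.cache = {}

    def get(self, resource_id):
        if resource_id in self.cache:
            return self.cache[resource_id]
        content = self.origin_fetch(resource_id)
        self.cache[resource_id] = content
        # Transfer ownership of the new entry to the master proxy.
        self.master.register_entry(resource_id, content, self)
        return content

    def purge(self, resource_id):
        self.cache.pop(resource_id, None)


# Example wiring (illustrative):
# master = MasterProxy()
# branch = BranchProxy(master, origin_fetch=lambda rid: b"payload for " + rid.encode())
# branch.get("/reports/q1.pdf")         # miss: fetched, cached, ownership registered
# master.invalidate("/reports/q1.pdf")  # purge propagated to every caching branch
```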
20130013724METHOD, SYSTEM AND APPARATUS FOR DELIVERING WEB CONTENT - According to embodiments described in the specification, a method, system and apparatus for delivering web content are provided. The method comprises maintaining a web page in a memory of a web server identifiable by a network address, the web page including at least one reference to a foreign element maintained at a second web server identifiable by a second network address; identifying the at least one reference; transmitting a request from an interface of the web server for obtaining the second network address; receiving the second network address of the second web server and storing the second network address in the memory in association with an identifier of the web page.01-10-2013
20130013726CACHING IN MOBILE NETWORKS - A method for optimising the distribution of data objects between caches in a cache domain of a resource limited network. User requests for data objects are received at caches in the cache domain. A notification is sent from each cache at which a request is received to a cache manager. The notification reports the user request and identifies the requested data object. At the cache manager, object information including the request frequency of each requested data object and the locations of the caches at which the requests were received are collated and stored, and objects for distribution within the cache domain are identified on the basis of the object information. Instructions are sent from the cache manager to the caches to distribute data objects stored in those caches between themselves. The data objects are distributed between the caches using transmission capacity of the network that would otherwise be unused.01-10-2013
20130013725SYSTEM AND METHOD FOR MANAGING PAGE VARIATIONS IN A PAGE DELIVERY CACHE - Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end.01-10-2013
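A rough sketch of the two-level lookup the abstract describes, with invented names (PageDeliveryCache, page_generator) and an assumed SHA-1 secondary key; the filing does not specify the key format.

```python
"""Sketch of primary/secondary cache lookup for page variations."""
import hashlib


class PageDeliveryCache:
    def __init__(self, page_generator):
        self.primary = {}     # url -> list of attribute names that vary the page
        self.secondary = {}   # hashed (url, attribute values) -> cached page
        self.page_generator = page_generator   # back-end callable(url, request)

    def _secondary_key(self, url, request, attributes):
        values = tuple(request.get(name, "") for name in attributes)
        return hashlib.sha1(repr((url, values)).encode()).hexdigest()

    def serve(self, url, request):
        # Primary lookup: which extra attributes must be consulted for this URL?
        attributes = self.primary.get(url, [])
        key = self._secondary_key(url, request, attributes)
        if key in self.secondary:
            return self.secondary[key]
        # Miss: ask the back-end page generator, then cache this variation
        # together with the attribute list it varies on.
        page, varies_on = self.page_generator(url, request)
        self.primary[url] = varies_on
        self.secondary[self._secondary_key(url, request, varies_on)] = page
        return page
```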
20130018976CACHING EMAIL UNIQUE IDENTIFIERS - Accessing, via an end user device, email messages of an external mail source. A direct access proxy is operative to reconcile the email contents of external email sources with the email contents of user devices through the use of lists of unique email identifiers (UIDs). A Partition Database returns UID lists reflective of the UIDs of email messages previously received from the external email source and forwarded to a network server of the system (forwarded UID lists). A memory cache external to the direct access proxy and its corresponding Partition Database returns forwarded UID lists. The direct access proxy determines the data reliability of the Partition Database and memory cache, and obtains forwarded UID lists from the memory cache when it determines that the memory cache is at least as reliable as the Partition Database.01-17-2013
20130018977DATA SHARING METHODS AND PORTABLE TERMINALS - A data sharing method and a portable terminal are provided. The portable terminal is a first terminal having a first system and a second system which have a capability of operating a shared storage area. The method comprises: starting transmitting a file in the shared storage area to a second terminal by the first system; acquiring uploaded information of the file by the second system, when detecting that the first system fulfills a predetermined condition, during the transmission of the file in the shared storage area to the second terminal by the first system; and continuing the transmission of the file to the second terminal by the second system in accordance with the uploaded information. In the transmission of shared data according to the embodiments of the present disclosure, due to the two-system hybrid architecture of the terminal, one of the two systems may continue the transmission of the shared data if the transmission is interrupted by shutdown or fault of the other system, thereby improving the user experience in transmitting the shared data.01-17-2013
20130024538FAST SEQUENTIAL MESSAGE STORE - A broker may be used as an intermediary to exchange messages between producers and consumers. The broker may store and dispatch messages from a physical queue stored in a persistent memory. More specifically, the broker may enqueue messages to the physical queue that are received from producers and may dispatch messages from the physical queue to interested consumers. The broker may further utilize one or more logical queues stored in transient memory to track the status of the messages stored in persistent memory. As messages are dispatched to and acknowledged by interested consumers, the broker deletes acknowledged messages from the physical queue. The messages deleted are those preceding a physical ACKlevel pointer that specifies the first non-acknowledged message in the physical queue. The physical ACKlevel pointer is advanced in the physical queue based on the relative position of corresponding logical ACKlevel pointers maintained by the logical queues.01-24-2013
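The physical/logical ACKlevel bookkeeping described above can be illustrated with a short Python sketch; the Broker class, the in-memory deque standing in for the persistent physical queue, and the per-consumer acknowledgment rule are assumptions for illustration.

```python
"""Sketch of physical queue trimming driven by logical ACKlevel pointers."""
from collections import deque


class Broker:
    def __init__(self, consumer_ids):
        self.physical = deque()   # (seq, message); persistent storage in the filing
        self.next_seq = 0
        # One logical ACKlevel per consumer: first sequence number not yet acknowledged.
        self.logical_ack = {cid: 0 for cid in consumer_ids}

    def enqueue(self, message):
        self.physical.append((self.next_seq, message))
        self.next_seq += 1

    def acknowledge(self, consumer_id, seq):
        # The consumer acknowledges everything up to and including seq.
        self.logical_ack[consumer_id] = max(self.logical_ack[consumer_id], seq + 1)
        self._advance_physical_acklevel()

    def _advance_physical_acklevel(self):
        # The physical ACKlevel is the lowest logical ACKlevel across consumers;
        # every message before it has been acknowledged by all interested
        # consumers and can be deleted from the physical queue.
        physical_acklevel = min(self.logical_ack.values())
        while self.physical and self.physical[0][0] < physical_acklevel:
            self.physical.popleft()
```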
20130173736COMMUNICATIONS SYSTEM PROVIDING ENHANCED TRUSTED SERVICE MANAGER (TSM) VERIFICATION FEATURES AND RELATED METHODS - A trusted service manager (TSM) server may include at least one communications device capable of communicating with at least one application server, a verification database server, and at least one mobile communications device. The TSM server may further include a processor coupled with the at least one communications device and capable of registering the at least one application server with the verification database server, receiving a request from the at least one application server to access the memory of the mobile communications device, cooperating with the verification database server to verify the at least one application server based upon the access request and based upon registering of the at least one application server, and writing application data from the at least one application server to the memory of the at least one mobile communications device based upon verifying the at least one application server.07-04-2013
20130173737METHOD AND APPARATUS FOR FLEXIBLE CACHING OF DELIVERED MEDIA - Various methods are described for selecting an access method for flexible caching in DASH. One example method may comprise causing a request for at least one of a primary representation for a segment or an alternative representation for the segment to be transmitted to a caching proxy. The method of this example embodiment may further comprise causing the caching proxy to respond with at least one of the primary representation or the alternate representation based on the caching status at a caching proxy. In some example embodiments, the caching proxy is configured to determine whether the request enables an alternative representation to be included in a response. Furthermore, the method of this example embodiment may comprise receiving at least one of the primary representation and the alternative representation for the segment from the caching proxy. Similar and related example methods, apparatuses, and computer program products are also provided.07-04-2013
20130173738Administering Globally Accessible Memory Space In A Distributed Computing System - In a distributed computing system that includes compute nodes that include computer memory, globally accessible memory space is administered by: for each compute node: mapping a memory region of a predefined size beginning at a predefined address; executing one or more memory management operations within the memory region, including, for each memory management operation executed within the memory region: executing the operation collectively by all compute nodes, where the operation includes a specification of one or more parameters and the parameters are the same across all compute nodes; receiving, by each compute node from a deterministic memory management module in response to the memory management operation, a return value, where the return value is the same across all compute nodes; entering, by each compute node after local completion of the memory management operation, a barrier; and when all compute nodes have entered the barrier, resuming execution.07-04-2013
20130173739Reverse Mapping Method and Apparatus for Form Filing - In the presently preferred embodiment of the invention, every time a user submits a form the client software tries to match the submitted information with the stored profile of that user. If a match is discovered, the program tags the field of the recognized data with a corresponding type. The resulting profile can be used after that to help all subsequent users to fill the same form.07-04-2013
20080228897LAYERING SERIAL ATTACHED SMALL COMPUTER SYSTEM INTERFACE (SAS) OVER ETHERNET - Disclosed are embodiments of a storage area network (SAN), a network interface card and a method of managing data transfers. These embodiments overcome the distance limitation of the Serial Attached Small Computer System Interface (SAS) physical layer so that SAS storage protocol may be used for communication between host systems and storage controllers. Host systems and storage controllers are connected via an Ethernet interface (e.g., a legacy Ethernet or enhanced Ethernet for datacenter (EED) fabric). SAS storage protocol is layered over this Ethernet interface, providing commands and transport protocol for information exchange. Since the Ethernet interface has its own physical layer, the SAS physical layer is unnecessary and, thus, so is the SAS distance limitation. If legacy Ethernet is used, over-provisioning is used to avoid packet drops, or alternatively, TCP/IP is supported in order to recover from packet drops. If EED is used, congestion management as well as priority of service functions are provided by the EED protocols.09-18-2008
20110246600MEMORY SHARING APPARATUS - A memory sharing apparatus includes a server, a host and a client. The server includes a shared page which is an entity of a shared memory, a share setting page which is data in which an index value of each shared page is collected, and a grant table in which a page frame number of each share setting page and the index value are stored so as to correspond to each other. The host includes a database in which the index value in the grant table is managed. The client includes the shared page and a shared page area to which the shared page is mapped, and a share setting page area to which the share setting page is mapped.10-06-2011
20130179531NETWORK COMMUNICATIONS APPARATUS, METHOD, AND MEDIUM - The present invention provides a novel network communications apparatus that includes a LAN interface that transmits and receives data via a network, a plurality of memory resources to transfer data to an application, an analyzing unit that divides data to be sent and received data into a control part and a content part and analyzes the control part, a storage unit that stores rules to determine the resources to be used and the transfer control method in accordance with characteristics of the data to be sent and the received data, and a controller that transfers the content data to the application in accordance with a result of analyzing the control part of the data to be sent and the received data and applying the rule.07-11-2013
20130179533DATA STORAGE CONTROL SYSTEM, DATA STORAGE CONTROL METHOD, AND DATA STORAGE CONTROL PROGRAM - A reduction in network load as well as an increase in speed of response through caching and an increase in communication efficiency through buffering are both achieved. A data storage control system that temporarily stores and controls data exchanged between a user terminal 07-11-2013
20130179532COMPUTER SYSTEM AND SYSTEM SWITCH CONTROL METHOD FOR COMPUTER SYSTEM - Disclosed is a computer system provided with an I/O processing unit comprising a buffer and a control unit, wherein the buffer is located between the first computer and a storage apparatus and between a second computer and the storage apparatus and temporarily stores an I/O output from a first computer, and the control unit outputs data stored in the buffer to the storage apparatus, and wherein, a management computer functions to store the I/O output of the first computer in the buffer at a predetermined time, to separate a first storage unit and a second storage unit which are mirror volumes, to connect the buffer and the second storage unit, to connect the second computer and the first storage unit, to output data stored in the buffer to the second storage unit, and to activate the second computer using the first storage unit.07-11-2013
20130179530ENVIRONMENT CONSTRUCTION APPARATUS AND METHOD, ENVIRONMENT REGISTRATION APPARATUS AND METHOD, ENVIRONMENT SWITCHING APPARATUS AND METHOD - An environment construction apparatus that carries out, in a second system, acquiring a connection permission data of a first storage in a first system that was set in a second storage of the second system; and extracting identification data of a first server in the first system based on the connection permission data of the first storage of the first system, and assigning the extracted identification data of the first server in the first system as identification data stored in a connection section of a second server in the second system.07-11-2013
20120254341METHOD AND SYSTEM FOR DYNAMIC DISTRIBUTED DATA CACHING - A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to a peer member selected by the master member to become the new master member.10-04-2012
20080215701Modified machine architecture with advanced synchronization - A multiple computer environment is disclosed in which an application program executes simultaneously on a plurality of computers (M09-04-2008
20130138760APPLICATION-DRIVEN SHARED DEVICE QUEUE POLLING - Methods and systems for application-driven polling of shared device queues are provided. One or more applications running in non-virtualized or virtualized computing environments may be adapted to enable methods for polling shared device queues. Applications adapted to operate in a polling mode may transmit a request to initiate polling of shared device queues, wherein operating in the polling mode disables corresponding device interrupts. Applications adapted to operate in a polling mode may be regulated by one or more predefined threshold limitations.05-30-2013
20130091237Aligned Data Storage for Network Attached Media Streaming Systems - Described embodiments provide a server for transferring data packets of streaming data sessions between devices. A redundant array of inexpensive disks (RAID) array having one or more stripe sector units (SSU) stores media files corresponding to the one or more data sessions. The RAID control module receives a request to perform the write operation to the RAID array beginning at a starting data storage address (DSA) and pads the data of the write operation if the amount of data is less than a full SSU of data, such that the padded data of the write operation is a full SSU of data. The RAID control module stores the full SSU of data beginning at a starting data storage address (DSA) that is aligned with a second SSU boundary, without performing a read-modify-write operation.04-11-2013
20130179529Optimizing Multi-Hit Caching for Long Tail Content - Some embodiments provide an optimized multi-hit caching technique that minimizes the performance impact associated with caching of long-tail content while retaining much of the efficiency and minimal overhead associated with first hit caching in determining when to cache content. The optimized multi-hit caching utilizes a modified bloom filter implementation that performs flushing and state rolling to delete indices representing stale content from a bit array used to track hit counts without affecting identification of other content that may be represented with indices overlapping with those representing the stale content. Specifically, a copy of the bit array is stored prior to flushing the bit array so as to avoid losing track of previously requested and cached content when flushing the bit array, and the flushing is performed to remove the bit indices representing stale content from the bit array and to minimize the possibility of a false positive.07-11-2013
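A hedged sketch of the flush-and-roll bloom filter idea, caching only on a second hit; the array size, hash construction, and flush policy below are assumptions rather than the filing's parameters.

```python
"""Sketch of two-hit caching with a rolling pair of bloom filter bit arrays."""
import hashlib


class RollingBloom:
    def __init__(self, num_bits=1 << 20, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.current = bytearray(num_bits // 8)
        self.previous = bytearray(num_bits // 8)   # copy kept across a flush

    def _indices(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    @staticmethod
    def _test(array, idx):
        return array[idx // 8] & (1 << (idx % 8))

    def seen_before(self, key):
        """Record the request and report whether the key was already present
        in either the current or the rolled-over bit array."""
        hit = True
        for idx in self._indices(key):
            if not (self._test(self.current, idx) or self._test(self.previous, idx)):
                hit = False
            self.current[idx // 8] |= 1 << (idx % 8)
        return hit

    def flush(self):
        """State rolling: keep the old array so recently requested content is
        not forgotten, while stale indices eventually age out."""
        self.previous = self.current
        self.current = bytearray(self.num_bits // 8)


# Content is cached only on its second hit, keeping one-hit-wonder long-tail
# content out of the cache:
bloom, cache = RollingBloom(), {}


def maybe_cache(url, fetch):
    if url in cache:
        return cache[url]
    body = fetch(url)
    if bloom.seen_before(url):   # second (or later) request: admit to cache
        cache[url] = body
    return body
```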
20130097275CLOUD-BASED STORAGE DEPROVISIONING - A device creates a first cloud storage container in a first region of cloud storage, clears a delete flag associated with the first cloud storage container, and stores a first data object in the first cloud storage container in the first region of cloud storage. The device receives a request to delete the first cloud storage container, sets a delete flag associated with the first cloud storage container based on the request to delete the first cloud storage container, and deletes the first cloud storage container if the request to delete has not been rescinded prior to expiration of a time period.04-18-2013
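The delete-flag-plus-grace-period behavior can be sketched as follows; the grace period length and the method names are assumptions.

```python
"""Sketch of a rescindable, time-delayed container deletion."""
import time


class CloudContainer:
    def __init__(self, name, grace_seconds=7 * 24 * 3600):
        self.name = name
        self.objects = {}
        self.grace_seconds = grace_seconds
        self.delete_requested_at = None   # delete flag is clear on creation

    def request_delete(self):
        self.delete_requested_at = time.time()   # set the delete flag

    def rescind_delete(self):
        self.delete_requested_at = None           # clear the delete flag

    def reap_if_expired(self, now=None):
        """Actually delete the container only if the request was not rescinded
        before the grace period elapsed."""
        now = time.time() if now is None else now
        if (self.delete_requested_at is not None
                and now - self.delete_requested_at >= self.grace_seconds):
            self.objects.clear()
            return True
        return False
```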
20130103781MOBILE COMMUNICATION DEVICE - A mobile communication device is mounted on a vehicle, and has a reception unit for receiving a distributed cache that is data having divided information, a distributed cache restoration unit for restoring the distributed cache into original information, a data dividing unit for producing the distributed cache by dividing the information, and a transmission unit for transmitting the distributed cache.04-25-2013
20130103780DELAYED PUBLISHING IN PROCESS CONTROL SYSTEMS - Techniques for delaying the publication of data to a network by a device in a process control system or plant include obtaining, at the device, data to be published to the network; storing the obtained data and a corresponding timestamp in a cache; triggering a publication of cached data; and, based on the trigger, publishing the oldest cached data to the network during the publishing timeslot assigned to the device. The cached data may correspond to a sample rate of the device and may include multiple instances of data obtained over time. The device includes a network interface, a cache, and a publisher, and the device may be configured to operate in the delayed publishing mode, or to operate in an immediate publishing mode in which currently obtained data that has not been cached is published to the network during the publishing time slot assigned to the device.04-25-2013
20130103778METHOD AND APPARATUS TO CHANGE TIERS - Systems and methods directed to changing tiers for a storage area that utilizes thin provisioning. Systems and methods check the area subject to a tier change command and change the tier based on the tier specified in the tier change command, and the tier presently associated with the targeted storage area. The pages of the systems and methods may be further restricted to one file per page.04-25-2013
20130103782APPARATUS AND METHOD FOR CACHING OF COMPRESSED CONTENT IN A CONTENT DELIVERY NETWORK - A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides “on-the-fly” compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags.04-25-2013
20130124667SYSTEM AND METHOD FOR MANAGING DEDICATED CACHES - A client-based computer system configured to communicate with a remote server through a network and to provide access to content or services provided by the server is provided. The system includes a processor, a storage device, a client-side cache dedicated to a set of resources specified by a configuration, and a caching manager to automatically manage the cache as directed by the configuration. The client-side cache is directed by the configuration to transparently intercept a request for one of the resources from a client application to the server, and to automatically determine when to send the request to and provide a response from the server over the network to appear to the client application as though the client application sent the request to and received the response from the server.05-16-2013
20130132504ADAPTIVE NETWORK CONTENT DELIVERY SYSTEM - A method and apparatus stores media content in a variety of storage devices, with at least a portion of the storage devices having different performance characteristics. The system can deliver media to a large number of clients while maintaining a high level of viewing experience for each client by automatically adapting the bit rate of a media being delivered to a client using the client's last mile bit rate variation. The system provides clients with smooth viewing of video without buffering stops. The client does not need a custom video content player to communicate with the system.05-23-2013
20130132503COMPUTER SYSTEM AND NETWORK INTERFACE SUPPORTING CLASS OF SERVICE QUEUES - A data processing system adapted for high-speed network communications, a method for managing a network interface and a network interface for such system, are provided, in which processing of packets received over the network is achieved by embedded logic at the network interface level. Incoming packets on the network interface are parsed and classified as they are stored in a buffer memory. Functional logic coupled to the buffer memory on the network interface is enabled to access any data field within a packet in a single cycle, using pointers and packet classification information produced by the parsing and classifying step. Results of operations on the data fields in the packets are available before the packets are transferred out of the buffer memory. A data processing system, a method for management of a network interface and a network interface are also provided by the present invention that include an embedded firewall at the network interface level of the system, which protects against inside and outside attacks on the security of the data processing system. Furthermore, a data processing system, a method for management of a network interface and a network interface are provided by the present invention that support class of service management for packets incoming from the network, by applying priority rules at the network interface level of the system.05-23-2013
20130179528USE OF MULTICORE PROCESSORS FOR NETWORK COMMUNICATION IN CONTROL SYSTEMS - Various embodiments of the present invention relate to use of one or more multicore processors for network communication (e.g., Ethernet-based communication) in control systems (e.g., vehicle control systems, medical control systems, hospital control systems, instrumentation control systems, test instrument control systems, energy control systems and/or industrial control systems). In one example, one or more systems may be provided with regard to use of multicore processor(s) for network communication (e.g., Ethernet-based communication) in control systems. In another example, one or more methods may be provided with regard to use of multicore processor(s) for network communication (e.g., Ethernet-based communication) in control systems.07-11-2013
20130151645METHOD AND APPARATUS FOR PRE-FETCHING PLACE PAGE DATA FOR SUBSEQUENT DISPLAY ON A MOBILE COMPUTING DEVICE - A computer-implemented method and system for pre-fetching place page data from a remote mapping system for display on a client computing device is disclosed. User preference data collected from various data sources including applications executing on the client device, online or local user profiles, and other sources may be analyzed to generate a request for place page data from the remote mapping system. The user preference data may indicate a map feature such as a place of business, park, or historic landmark having the characteristics of both a user's preferred geographic location and the user's personal interests. For example, where the user indicates a geographic preference for “Boston” and a personal interest for “home brewing” the system and method may request place page data for all home brewing or craft beer-related map features near Boston.06-13-2013
20130151646STORAGE TRAFFIC COMMUNICATION VIA A SWITCH FABRIC IN ACCORDANCE WITH A VLAN - A plurality of SMP modules and an IOP module communicate storage traffic via respective corresponding I/O controllers coupled to respective physical ports of a switch fabric by addressing cells to physical port addresses corresponding to the physical ports. One of the SMPs executes initiator software to partially manage the storage traffic and the IOP executes target software to partially manage the storage traffic. Storage controllers are coupled to the IOP, enabling communication with storage devices, such as disk drives, tape drives, and/or networks of same. Respective network identification registers are included in each of the I/O controller corresponding to the SMP executing the initiator software and the I/O controller corresponding to the IOP. Transport of the storage traffic in accordance with a particular VLAN is enabled by writing a same particular value into each of the network identification registers.06-13-2013
20130151647METHOD FOR REWRITING PROGRAM, REPROGRAM APPARATUS, AND ELECTRONIC CONTROL UNIT - A reprogram apparatus does not transmit a reprogram data set as it is. The reprogram data set has a plurality of unit blocks and is used for rewriting a program in a memory of a subject electronic control unit (ECU). A consecutive range having at least the predetermined number of consecutive specified unit blocks is extracted. Range size information indicating a range size of the extracted consecutive range is transmitted to the subject ECU. The reprogram data set excluding the specified unit blocks included in the consecutive range is transmitted to the subject ECU on a unit-block-by-unit-block basis. The subject ECU restores the data corresponding to the consecutive range containing the specified unit blocks, which are not received from the reprogram apparatus, based on the range size information received. The reprogram data set is thereby restored. Rewriting of the program is executed using the reprogram data set restored.06-13-2013
20130151650SYSTEMS AND METHODS FOR GENERATING AND MANAGING COOKIE SIGNATURES FOR PREVENTION OF HTTP DENIAL OF SERVICE IN A MULTI-CORE SYSTEM - The present application is directed towards systems and methods for generating and maintaining cookie consistency for security protection across a plurality of cores in a multi-core system. A packet processing engine executing on one core designated as a primary packet processing engine generates and maintains a global random seed. The global random seed may be used as an initial seed for creation of cookie signatures by each of a plurality of packet processing engines executing on a plurality of cores of the multi-core system using a deterministic pseudo-random number generation function such that each core creates an identical set of cookie signatures.06-13-2013
20130151649MOBILE DEVICE HAVING CONTENT CACHING MECHANISMS INTEGRATED WITH A NETWORK OPERATOR FOR TRAFFIC ALLEVIATION IN A WIRELESS NETWORK AND METHODS THEREFOR - Mobile device having content caching mechanisms integrated with a network operator for traffic alleviation in a wireless network and methods therefor are disclosed. One embodiment includes a method of integration of content caching with a network operator for traffic alleviation in a wireless network, which may be embodied on a mobile device, including determining whether a cache element stored in a local cache on the mobile device for an application poll on the mobile device is valid and forwarding the application poll to an external entity to service the application poll in response to determining that the cache element is no longer valid. The external entity is in part managed by the network operator of the wireless network and can be, in part or in whole, a component of an infrastructure of the network operator or external to an infrastructure of the network operator.06-13-2013
20130151648FLEXIBLE AND DYNAMIC INTEGRATION SCHEMAS OF A TRAFFIC MANAGEMENT SYSTEM WITH VARIOUS NETWORK OPERATORS FOR NETWORK TRAFFIC ALLEVIATION - Flexible and dynamic integration schemas of a traffic management system with various network operators for network traffic alleviation are disclosed. One embodiment includes a method of integration of content caching with a network operator for traffic alleviation in a wireless network, including detecting, by an operator proxy of the network operator, a poll from an application on a mobile device which would have been served using a cache element from a local cache on the mobile device, after the cache element stored in the local cache has been invalidated, and forwarding the poll from the application on the mobile device to a proxy server. Whether the poll is sent to a service provider of the application directly by the proxy server, or by the proxy server through the operator proxy, is configurable or reconfigurable.06-13-2013
20120284356WIRELESS TRAFFIC MANAGEMENT SYSTEM CACHE OPTIMIZATION USING HTTP HEADERS - Wireless traffic management system cache optimization using HTTP headers is disclosed. In one embodiment, the method can include, for example: storing the web content from a web server as cached elements in a local cache on the mobile device and retrieving the cached elements from the local cache to respond to a request made at the mobile device, regardless of expiration indicated in headers of the web content that is cached. The cached elements can be retrieved from the local cache and used to respond to the request at the mobile device even if the expiration in the headers has been exceeded, using a tag that is used by a proxy server remote from the mobile device to determine if the cached elements for the web content on the local proxy are still valid.11-08-2012
20130185376EFFICIENT STATE TRACKING FOR CLUSTERS - Exemplary system and computer program product embodiments for efficient state tracking for clusters are provided. In one embodiment, by way of example only, in a distributed shared memory architecture, an asynchronous calculation of deltas and the views is performed while concurrently receiving client requests and concurrently tracking the client request times. The results of the asynchronous calculation may be applied to each of the client requests that are competing for data of the same concurrency during a certain period with currently executing client requests. Additional system and computer program product embodiments are disclosed and provide related advantages.07-18-2013
20130185378CACHED HASH TABLE FOR NETWORKING - Systems, methods, and devices are provided for managing hash table lookups. In certain network devices, a hash table having multiple buckets may be allocated for network socket lookups. Network socket information for multiple open network socket connections may be distributed among the buckets of the hash table. For each of the buckets of the hash table, at least a subset of the network socket information that is most likely to be used may be identified, and the identified subset of most likely to be used network socket information may be promoted at each bucket to a position having a faster lookup time than a remaining subset of the network socket information at that bucket.07-18-2013
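The per-bucket promotion can be sketched as a move-to-front list inside each bucket; the bucket count and the "most likely to be used" heuristic (here, simple recency) are assumptions.

```python
"""Sketch of a socket lookup table with per-bucket move-to-front promotion."""


class SocketTable:
    def __init__(self, num_buckets=1024):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # key could be a (src_ip, src_port, dst_ip, dst_port) tuple.
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, socket_info):
        self._bucket(key).append((key, socket_info))

    def lookup(self, key):
        bucket = self._bucket(key)
        for i, (k, info) in enumerate(bucket):
            if k == key:
                if i:
                    # Promote the hit toward the head of the bucket so the
                    # sockets most likely to be used again resolve fastest.
                    bucket.insert(0, bucket.pop(i))
                return info
        return None
```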
20130185379EFFICIENT STATE TRACKING FOR CLUSTERS - Exemplary method, system, and computer program product embodiments for efficient state tracking for clusters are provided. In one embodiment, by way of example only, in a distributed shared memory architecture, an asynchronous calculation of deltas and the views is performed while concurrently receiving client requests and concurrently tracking the client request times. The results of the asynchronous calculation may be applied to each of the client requests that are competing for data of the same concurrency during a certain period with currently executing client requests. Additional system and computer program product embodiments are disclosed and provide related advantages.07-18-2013
20110314120SYSTEM AND METHOD FOR PERFORMING MULTISTREAM STORAGE OPERATIONS - Systems and methods for performing storage operations over multi-stream data paths are provided. Prior to performing a storage operation, it may be determined whether multi-streaming resources are available to perform a multi-stream storage operation. Availability of multi-streaming resources may be related to network pathways capable of supporting multi-stream storage operations, existing network load related to other storage operations being or to be performed, availability of components capable of supporting multi-stream storage operation, and other factors. If system resources to support multi-stream storage operations are not available, the system may optionally perform a traditional storage operation that does not incorporate multiple data streams.12-22-2011
20110314119MASSIVELY SCALABLE MULTILAYERED LOAD BALANCING BASED ON INTEGRATED CONTROL AND DATA PLANE - Method and system for load balancing in providing a service. A request for a service, represented by a single IP address, is first received by a router in the network. The router accesses information received from one or more advertising routers in the network. Each of the advertising routers advertises, via the single IP address, the service provided by at least one server in a server pool associated with the advertising router. The advertisement includes metrics indicating a health condition of the associated server pool. The router selects a target router based on, at least in part, the metrics of the server pools associated with the advertising routers to achieve a first level load balancing and forwards the request for the service to the target router. A local server load balancer (SLB) connected with the target router then identifies a target server from the associated server pool to provide the requested service thereby to achieve a second level load balancing.12-22-2011
20130191489Media Content Streaming Using Stream Message Fragments - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for media content streaming can include transacting access information associated with a media stream and transacting one or more fragments associated with the media stream to facilitate a delivery of media content associated with the media stream. Access information can include fragment sequencing information to facilitate individual retrieval of fragments associated with the media stream using a uniform resource identifier via a processing device configured to cache content. A fragment can include one or more stream messages. A stream message can include a message header and a corresponding media data sample. The message header can include a message stream identifier, a message type identifier, a timestamp, and a message length value.07-25-2013
20130191487METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR RECEIVING DIGITAL DATA FILES - A method, apparatus and computer program product are provided to efficiently receive digital imaging data files, regardless of their size. For a respective data packet of a digital imaging data file, the method may determine whether that portion of the digital imaging data file that has been received satisfies the first threshold. If the first threshold is not satisfied, the method may receive the respective data packet using memory, such as by appending the data packet to a linked list. However, if the first threshold is satisfied, the method may receive the respective data packet and subsequent data packet(s) of the digital imaging data file using file storage. The receipt of the respective data packet using file storage is slower than the receipt of the respective data packet using memory.07-25-2013
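A sketch of the threshold-driven switch from in-memory receipt to file storage; the threshold value, the Python list standing in for the linked list, and the temporary-file spill are assumptions.

```python
"""Sketch of memory-first packet receipt with a spill to file storage."""
import tempfile


class ImageReceiver:
    def __init__(self, memory_threshold=4 * 1024 * 1024):
        self.memory_threshold = memory_threshold
        self.chunks = []          # stands in for the linked list of packets
        self.received = 0
        self.spill_file = None    # file storage once the threshold is crossed

    def on_packet(self, data: bytes):
        self.received += len(data)
        if self.spill_file is None and self.received <= self.memory_threshold:
            self.chunks.append(data)   # fast path: keep packets in memory
            return
        if self.spill_file is None:
            # Threshold exceeded: move what was buffered to file storage and
            # continue receiving the remaining packets there.
            self.spill_file = tempfile.NamedTemporaryFile(delete=False)
            for chunk in self.chunks:
                self.spill_file.write(chunk)
            self.chunks = []
        self.spill_file.write(data)

    def finish(self):
        """Return the in-memory bytes, or the path of the spill file."""
        if self.spill_file is None:
            return b"".join(self.chunks)
        self.spill_file.close()
        return self.spill_file.name
```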
20130191490SENDING DATA OF READ REQUESTS TO A CLIENT IN A NETWORKED CLIENT-SERVER ARCHITECTURE - Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. The storage system sends to the client agent the requested data by partitioning the returned data into segments for each read request. The storage system sends each segment in a separate network message.07-25-2013
20130191491System and Method for Optimizing Secured Internet Small Computer System Interface Storage Area Networks - A network device includes a port coupled to a device, another port coupled to another device, and an access control list with an access control entry that causes the network device to permit log in frames to be forwarded from the first device to the second device. The network device receives a frame addressed to the second device and determines the frame type. If the frame type is a log in frame, then the frame is forwarded to the second device and another access control entry is added to the access control list. The second access control entry causes the network device to permit data frames to be forwarded from the first device to the second device. If not, then the frame is dropped based upon the first access control entry.07-25-2013
20130191488SYSTEM AND METHOD FOR EFFICIENT DELIVERY OF MULTI-UNICAST COMMUNICATION TRAFFIC - Disclosed is a system and method for the delivery of multi-unicast communication traffic. A multimedia router is adapted to analyze and identify contents which it handles and one or more access nodes are adapted to receive one or more of the identified contents, cache contents based on said identification; and use cached contents as substitutes for redundant traffic, received by the same access node.07-25-2013
20110320557INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information-processing apparatus includes a communication unit that transmits a first command to register in a memory a service provided by an application using a first communicative method. The communication unit transmits a second command to register in the memory a service indicator of the service using a second communicative method different from the first communicative method.12-29-2011
20120290676System and Method for Managing Information Retrievals for Integrated Digital and Analog Archives on a Global Basis - A system and method for managing information retrievals from all of an enterprise's archives across all operating locations. The archives include both digital and analog archives. A single "virtual archive" is provided which links all of the archives of the enterprise, regardless of the location or configuration of the archive. The virtual archive allows for data aggregation (regardless of location) so that a user can have data from multiple physical locations on a single screen in a single view. A single, consistent and user friendly interface is provided through which users are able to access multiple applications through a single sign-on and password. Logical tables are used to direct information retrieval requests to the physical archives. The retrieved information is reformatted and repackaged to resolve any incompatibility between the format of the stored information and the distribution media.11-15-2012
20120030306RAPID MOVEMENT SYSTEM FOR VIRTUAL DEVICES IN A COMPUTING SYSTEM, MANAGEMENT DEVICE, AND METHOD AND PROGRAM THEREFOR - In a virtualized computer system having at least two computers connected via a network, the service suspension period while a virtual device is dynamically migrated from a first computer to a second computer is shortened.02-02-2012
20120030305METHOD AND SYSTEM FOR DELIVERING EMBEDDED OBJECTS IN A WEBPAGE TO A USER AGENT USING A NETWORK DEVICE - A method and system for delivering embedded objects in a webpage to a user agent using a network device is described. In one embodiment, a method for delivering embedded objects in a webpage to a user agent using a network device is described. The method for delivering embedded objects in a webpage to a user agent using a network device involves intercepting a webpage at a network device, where the webpage is transmitted from a web server and is destined to a user agent, scanning the webpage at the network device to discover links that are embedded in the webpage, obtaining an object that is identified by one of the links at the network device, and transmitting the object from the network device to the user agent as soon as the object is obtained at the network device. Other embodiments are also described.02-02-2012
20130198314METHOD OF OPTIMIZATION OF CACHE MEMORY MANAGEMENT AND CORRESPONDING APPARATUS - In order to optimize cache memory management, the invention proposes a method and corresponding apparatus that comprise the application of different cache memory management policies according to data origin, and possibly data type, and the use of increasing levels of exclusion from adding data to the cache, the exclusion levels being increasingly restrictive with regard to adding data to the cache as the cache memory fill level increases. The method and device allow, among other things, keeping important information in cache memory and reducing the time spent swapping information in and out of cache memory.08-01-2013
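One way to picture origin-dependent admission with increasingly restrictive exclusion levels is the small table-driven check below; the origins, ranks, and fill-level thresholds are invented for illustration, not values from the filing.

```python
"""Sketch of fill-level-dependent cache admission control by data origin."""

# Lower rank = more important to keep in cache.
ORIGIN_RANK = {"broadcast": 0, "local_network": 1, "internet": 2}

# (fill-level threshold, highest origin rank still admitted below it)
EXCLUSION_LEVELS = [(0.50, 2), (0.75, 1), (0.90, 0), (1.00, -1)]


def admit_to_cache(origin: str, fill_level: float) -> bool:
    """Return True if data from `origin` may be added at the given fill level."""
    rank = ORIGIN_RANK.get(origin, 2)
    for threshold, max_rank in EXCLUSION_LEVELS:
        if fill_level < threshold:
            return rank <= max_rank
    return False   # cache effectively full: nothing is admitted


# e.g. admit_to_cache("internet", 0.3) -> True, admit_to_cache("internet", 0.8) -> False
```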
20130198315Method and System For Network Latency Virtualization In A Cloud Transport Environment - A cache device is disposed on a connection path between a user computer executing a software application and a network. The application exchanges data with a further computer via the network. The cache device includes a cache memory and a processor. The cache device is configured to measure, by the processor, a first latency between the user computer and the further computer. The cache device is further configured to determine an acceptable latency range based on the latency and a requirement of the software application. The cache device is further configured to measure a second latency between the user computer and the further computer. The cache device is further configured to store, in the cache memory, a set of data transmitted from the user computer to the further computer, if the second latency is not within the acceptable latency range.08-01-2013
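A minimal sketch of the latency-driven caching decision; deriving the acceptable range as baseline latency plus an application tolerance is an assumption about how the range might be computed.

```python
"""Sketch of caching triggered by latency falling outside an acceptable range."""


class LatencyAwareCache:
    def __init__(self, app_tolerance_ms):
        self.app_tolerance_ms = app_tolerance_ms
        self.acceptable_max_ms = None
        self.cache = {}

    def calibrate(self, measured_latency_ms):
        # First measurement: derive the acceptable latency range from the
        # baseline latency and the application's requirement.
        self.acceptable_max_ms = measured_latency_ms + self.app_tolerance_ms

    def on_transmit(self, key, payload, measured_latency_ms):
        # Second measurement: if the path has degraded beyond the acceptable
        # range, keep a local copy of the transmitted data in the cache.
        if measured_latency_ms > self.acceptable_max_ms:
            self.cache[key] = payload
        return payload
```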
20130198316SECURE RESOURCE NAME RESOLUTION USING A CACHE - Techniques for securing name resolution technologies and for ensuring that name resolution technologies can function in modern networks that have a plurality of overlay networks accessible via a single network interface. In accordance with some of the principles described herein, a set of resolution parameters may be implemented by a user to be used during a name resolution process. In some implementations, when an identifier is obtained for a network resource, the identifier may be stored in a cache with resolution parameters that were used in obtaining the identifier. When a new name resolution request is received, the cache may be examined to determine whether a corresponding second identifier is in the cache, and whether resolution parameters used to retrieve the second identifier in the cache match the resolution parameters for the new resolution request. If so, the second identifier may be returned from the cache.08-01-2013
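The cache-hit condition (same name and same resolution parameters) can be sketched as below; the parameter fields shown are illustrative, not the filing's.

```python
"""Sketch of a name resolution cache keyed on both the name and the parameters."""
from dataclasses import dataclass


@dataclass(frozen=True)
class ResolutionParams:
    network: str          # e.g. which overlay network the lookup should use
    require_dnssec: bool


class SecureResolverCache:
    def __init__(self, resolver):
        self.resolver = resolver   # callable(name, params) -> identifier
        self.entries = {}          # name -> (params, identifier)

    def resolve(self, name, params: ResolutionParams):
        cached = self.entries.get(name)
        # Only reuse the cached identifier if it was obtained under the same
        # resolution parameters as the new request.
        if cached is not None and cached[0] == params:
            return cached[1]
        identifier = self.resolver(name, params)
        self.entries[name] = (params, identifier)
        return identifier
```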
20130198313USING ENTITY TAGS (ETAGS) IN A HIERARCHICAL HTTP PROXY CACHE TO REDUCE NETWORK TRAFFIC - Disclosed is a program for validating a web cache independent of an origin server. A computer in between a client computer and the origin server computer receives a request for a resource and an entity tag (ETag) corresponding to the request. The computer forwards the request to the origin server and subsequently receives the resource. The computer generates an ETag for the received resource and compares the generated ETag to the ETag corresponding to the request. If the ETags match, the computer sends an indication toward the client computer that the resource has not been modified.08-01-2013
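A sketch of proxy-side ETag validation that does not depend on the origin server issuing tags; using a SHA-1 hash of the body as the generated ETag is an assumption.

```python
"""Sketch of an intermediate proxy generating and comparing ETags itself."""
import hashlib


def make_etag(body: bytes) -> str:
    return '"' + hashlib.sha1(body).hexdigest() + '"'


def handle_conditional_get(client_etag, fetch_from_origin):
    """Return a (status, body) pair the proxy would send toward the client."""
    body = fetch_from_origin()   # the proxy still fetches from the origin server
    etag = make_etag(body)       # ...but generates the ETag independently
    if client_etag == etag:
        return 304, b""          # resource unchanged: save downstream traffic
    return 200, body
```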
20120066335DYNAMIC ADDRESS MAPPING - To permit communications between devices using different communication protocols, a mapping device is connected to one or more communication networks, and stores associations between communication addresses as dynamic address mappings. A dynamic address mapping is associated with an initiator address (from which the communication is initiated) and a recipient address (to which a communication is initially addressed) and minimally contains a final address (to which a communication is finally routed). A new dynamic address mapping can be created in response to a request, typically from a communication initiator. Communications from the initiator address to the recipient address are routed to the final address, with appropriate format conversion if the protocol of the final address is different to that of the initiator address. A reply address may also be stored in a dynamic address mapping for return communications, and a reply mapping may be automatically generated to map the reply address to the initiator address.03-15-2012
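The mapping table keyed by initiator and recipient addresses, with an automatically generated reply mapping, might look like the following; the field and method names, and the conversion callable, are invented for illustration.

```python
"""Sketch of a dynamic address mapping table with reply mappings."""


class MappingDevice:
    def __init__(self):
        self.mappings = {}   # (initiator, recipient) -> final address

    def add_mapping(self, initiator, recipient, final, reply=None):
        self.mappings[(initiator, recipient)] = final
        if reply is not None:
            # Automatically generated reply mapping for return communications.
            self.mappings[(final, reply)] = initiator

    def route(self, source, destination, message, convert):
        # Route to the final address if a mapping exists, otherwise pass through.
        final = self.mappings.get((source, destination), destination)
        # Convert the message format when the final address uses another protocol.
        return final, convert(message, target=final)
```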
20130204960ALLOCATION AND BALANCING OF STORAGE RESOURCES - A method and technique for allocation and balancing of storage resources includes: determining, for each of a plurality of storage controllers, an input/output (I/O) latency value based on an I/O latency associated with each storage volume controlled by a respective storage controller; determining network bandwidth utilization and network latency values corresponding to each storage controller; responsive to receiving a request to allocate a new storage volume, selecting a storage controller having a desired I/O latency value; determining whether the network bandwidth utilization and network latency values for the selected storage controller are below respective network bandwidth utilization and network latency value thresholds; and responsive to determining that the network bandwidth utilization and network latency values for the selected storage controller are below the respective thresholds, allocating the new storage volume to the selected storage controller.08-08-2013
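The selection step can be sketched as below; the threshold values, the averaging of per-volume latencies into a controller I/O latency value, and trying the next-best controller on failure are assumptions.

```python
"""Sketch of storage controller selection by I/O latency and network metrics."""
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class StorageController:
    name: str
    volume_latencies_ms: List[float] = field(default_factory=list)
    net_utilization: float = 0.0     # fraction of network bandwidth in use
    net_latency_ms: float = 0.0

    @property
    def io_latency_ms(self) -> float:
        vols = self.volume_latencies_ms
        return sum(vols) / len(vols) if vols else 0.0


def allocate_volume(controllers, max_utilization=0.8,
                    max_net_latency_ms=5.0) -> Optional[StorageController]:
    """Pick the controller with the lowest I/O latency whose network bandwidth
    utilization and network latency are below the thresholds."""
    for ctrl in sorted(controllers, key=lambda c: c.io_latency_ms):
        if (ctrl.net_utilization < max_utilization
                and ctrl.net_latency_ms < max_net_latency_ms):
            return ctrl
    return None
```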
20130204959SYSTEMS AND METHODS OF REAL-TIME DATA SUBSCRIPTION AND REPORTING FOR TELECOMMUNICATIONS SYSTEMS AND DEVICES - Systems and methods of performing real-time data subscription and reporting for telecommunications systems and devices. The systems and methods employ a real-time data aggregation component that can manage subscription requests for real-time data objects stored on the telecommunications systems and devices from one or more users over a network, dynamically start and stop such subscription requests, cache the requested real-time data objects, and supply the real-time data to the respective users. By employing the real-time data aggregation component to handle such subscription requests for data from one or more users, the systems and methods can supply such data, including real-time data, to the respective users, while reducing the overhead on the telecommunications systems and devices and increasing overall system performance.08-08-2013
20120072526METHOD AND NODE FOR DISTRIBUTING ELECTRONIC CONTENT IN A CONTENT DISTRIBUTION NETWORK - The present invention relates to a method and node for efficiently distributing electronic content in a content distribution network (CDN) comprising a plurality of cache nodes.03-22-2012
20120072525Extending Caching Network Functionality To An Existing Streaming Media Server - A content delivery network (CDN) includes multiple cluster sites, including sites with streaming media servers, caching servers and storage devices accessible to the caching servers for storing streaming content. Interface software is configured to initiate retrieval, by a caching server, of electronic streaming resources from the one or more storage devices in response to requests for the electronic streaming resource received by the streaming media server.03-22-2012
20120303736Method And Apparatus For Achieving Data Security In A Distributed Cloud Computing Environment - A distributed cloud storage system includes a cloud storage broker logically residing between a client platform and a plurality of remote cloud storage platforms. The cloud storage broker mediates execution of a cloud storage process that involves dividing a data item into multiple portions and allocating the portions to multiple selected cloud storage platforms according to first and second rules defining a key known only to the cloud storage broker or to the client. At some later time when it is desired to retrieve the data item, the key is retrieved from storage and the rules are executed in a reverse fashion to retrieve and reassemble the data item.11-29-2012
20120096110Registering, Transferring, and Acting on Event Metadata - A technique and associated mechanism is described for registering event metadata at a first site, transferring the event metadata to a second site using a portable module, and processing the event metadata at the second site. A user can register the event metadata at the first site in the course of consuming broadcast content. Namely, when the user encounters an interesting portion of the broadcast content, the user activates an input mechanism, resulting in the storage of event metadata associated with the interesting portion on the portable module. The second site can upload the event metadata from the portable module and, in response, provide content associated with the event metadata, including recommended content associated with the event metadata.04-19-2012
20120096108MANAGING APPLICATION INTERACTIONS USING DISTRIBUTED MODALITY COMPONENTS - A method for managing multimodal interactions can include the step of registering a multitude of modality components with a modality component server, wherein each modality component handles an interface modality for an application. The modality component can be connected to a device. A user interaction can be conveyed from the device to the modality component for processing. Results from the user interaction can be placed on a shared memory area of the modality component server.04-19-2012
20120096107HOME APPLIANCE MANAGING SYSTEM - The home appliance managing system includes a plurality of central managing devices and a center server. The center server is connected to the plurality of the central managing devices, and stores plural data used at home appliances. When the central managing device stores predetermined data requested by the home appliance, the central managing device sends the predetermined data to the home appliance. When the central managing device does not store the predetermined data, the central managing device requests the predetermined data from the center server. The center server sends the predetermined data to the central managing device in response to the request from the central managing device. The central managing device sends the predetermined data received from the center server to the home appliance and stores the same data. The center server selects the cache data from the plural data on the basis of the data previously sent to the central managing device, and sends the cache data to the central managing device. The central managing device stores the cache data received from the center server.04-19-2012
20120096106Extending a content delivery network (CDN) into a mobile or wireline network - A content delivery network (CDN) comprises a set of edge servers, and a domain name service (DNS) that is authoritative for content provider domains served by the CDN. The CDN is extended into one or more mobile or wireline networks that cannot or do not otherwise support fully-managed CDN edge servers. In particular, an “Extender” is deployed in the mobile or wireline network, preferably as a passive web caching proxy that is beyond the edge of the CDN but that serves CDN-provisioned content under the control of the CDN. The Extender may also be used to transparently cache and serve non-CDN content. An information channel is established between the Extender and the CDN to facilitate the Extender functionality.04-19-2012
20130212209INFORMATION PROCESSING APPARATUS, SWITCH, STORAGE SYSTEM, AND STORAGE SYSTEM CONTROL METHOD - In an information processing apparatus, a data controller performs transmission and reception of data with a storage apparatus having a storage region allocated to the information processing apparatus by a physical port or a virtual port set at the physical port. The physical port transmits and receives data by communicating with the storage apparatus. A management controller calculates a use rate based on a storage capacity of the allocated storage region of the storage apparatus and an amount of use of the storage region and determines whether to perform allocation based on the calculated use rate. When determining to perform the allocation, the management controller allocates an unallocated storage region allocated to none of information processing apparatuses to the information processing apparatus, and also sets a virtual port and connects the information processing apparatus to the allocated storage region by the virtual port.08-15-2013
20130212207ARCHITECTURE AND METHOD FOR REMOTE MEMORY SYSTEM DIAGNOSTIC AND OPTIMIZATION - A smart memory system preferably includes a memory including one or more memory chips and a smart memory controller. The smart memory controller includes a transmitter communicatively coupled to the cloud. The transmitter securely transmits a product identification (ID) associated with the memory to the cloud. A cloud-based data center receives and stores the product ID and related information associated with the memory. A smart memory tester receives a product specific test program from the cloud-based data center. The smart memory tester may remotely test the memory via the cloud in accordance with the product specific test program. The information stored in the cloud-based data center can be accessed anywhere in the world by authorized personnel. Repair solutions can be remotely determined based on the test results and the diagnostic information. The repair solutions are transmitted to the smart memory controller, which repairs the memory.08-15-2013
20130212208PARTIAL OBJECT CACHING - A method of providing media at multiple bit rates using partial object caching may include receiving, from a first user device, a first request for a media object encoded at a first bit rate; providing the first portion of the media object to the first user device; and caching, in a partial object cache, the first portion of the media object. The method may additionally include receiving, from a second user device, a subsequent request for the media object encoded at the first bit rate; providing the first portion of the media object as retrieved from the partial object cache; and receiving a request for the media object encoded at a second bit rate. The method may further include modifying the request for the media object encoded at the second bit rate to instead request a second portion of the media object at the second bit rate.08-15-2013
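A hedged sketch of serving a cached first portion while rewriting the remainder of the request, possibly at a different bit rate; the head size and the byte-range style fetch interface are assumptions about how this could work.

```python
"""Sketch of partial object caching with request rewriting across bit rates."""


class PartialObjectCache:
    def __init__(self, head_bytes):
        self.head_bytes = head_bytes
        self.heads = {}   # object_id -> (cached_bitrate, first portion bytes)

    def serve(self, object_id, requested_bitrate, fetch):
        cached = self.heads.get(object_id)
        if cached is None:
            # First request: fetch and cache only the first portion of the object.
            head = fetch(object_id, requested_bitrate,
                         start=0, length=self.head_bytes)
            cached = (requested_bitrate, head)
            self.heads[object_id] = cached
        cached_bitrate, head = cached
        # Rewrite the request so only the second portion is fetched, at the bit
        # rate the client asked for, while the cached first portion is reused.
        tail = fetch(object_id, requested_bitrate, start=len(head), length=None)
        return head + tail
```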
