8th week of 2014 patent application highlights part 53 |
Patent application number | Title | Published |
20140052825 | ENHANCEMENT OF UPLOAD AND/OR DOWNLOAD PERFORMANCE BASED ON CLIENT AND/OR SERVER FEEDBACK INFORMATION - Systems and methods for providing enhancement of upload and/or download performance based on client and/or server feedback information are disclosed. In an embodiment, the disclosed method detects that a data transfer event is about to occur and based on a set of characteristics associated with the data transfer event, selects a host from a group of hosts as a pathway for transferring data associated with the data transfer event to optimize data transfer performance. The group of hosts can include a server providing cloud-based collaboration and/or storage services, one or more content delivery network servers and/or geographically distributed edge servers. | 2014-02-20 |
20140052826 | TECHNIQUES FOR PERFORMING PROCESSING FOR DATABASE - Embodiments relate to a method, system and program product for performing data processing. The system includes a plurality of computer servers configured to perform data processing, a client in processing communication with the computer servers and enabled to request data processing from any of the servers, and a storing component included in the client for storing information relating to requested data to be processed. A processing component is included in each computer server for applying a control lock to data being processed. A reprocessing request component is included in the client for enabling a new server to take over processing of requested data upon failure of the previously processing computer server. The new computer server obtains information relating to requested data from the storing component and information relating to the control lock from the processing component such that the new computer server commences processing at a processing point exactly prior to the failure. | 2014-02-20 |
20140052827 | RELAY COMMUNICATION SYSTEM - A center terminal includes a target terminal list storing unit that registers a target terminal and an operator list storing unit that registers an operator ID and a password. Each of the maintenance terminal and the target terminal includes a center terminal information storing unit that registers a center terminal. The center terminal creates a connection job that associates a predetermined target terminal with the operator ID and registers the connection job. One of the maintenance terminals receives the specific operator ID and the password from a connected client terminal and sends the center terminal an inquiry as to whether or not the maintenance terminal can log in to the center terminal; if the logging-in is granted, the maintenance terminal is configured to receive the connection job assigned to the operator ID from the center terminal and notify the operator. If the maintenance terminal receives a selection of the connection job from the operator, the maintenance terminal sends an inquiry to the center terminal as to whether or not the maintenance terminal can execute the connection job. If the connection job is allowed to be executed, the maintenance terminal can be connected to the target terminal included in the connection job. | 2014-02-20 |
20140052828 | SYSTEM AND METHOD FOR PROVIDING STREAMING DATA TO A MOBILE DEVICE - Various embodiments for a system and method for providing streaming data to a device are provided herein. In one example, a method comprises receiving a request for streaming data from a mobile device, receiving settings with regard to the delivery of the streaming data to the mobile device, retrieving the streaming data from a source of the streaming data, reformatting the streaming data for the mobile device according to the settings and sending the reformatted streaming data to the mobile device in accordance with the settings. | 2014-02-20 |
20140052829 | SYSTEM AND METHOD FOR EFFECTIVELY TRANSMITTING CONTENT ITEMS TO ELECTRONIC DEVICES - A system and method for effectively transmitting content items to electronic devices includes a content server that is configured to access and store various types of content information. A recommendation engine of the content server analyzes network statistics and client profiles to identify appropriate content items for device users of the electronic devices. A transmitter receives the targeted content items from the content server, and responsively provides the content items to the electronic devices by broadcasting the content items over a unidirectional telecommunications link. | 2014-02-20 |
20140052830 | DHCP COMMUNICATIONS CONFIGURATION SYSTEM - A Dynamic Host Configuration Protocol (DHCP) communications configuration system includes a client information handling system (IHS) coupled to a controller over a network. The client IHS creates a plurality of DHCP discover messages that include capability data that describes at least one hardware resource on the client IHS, and sends the plurality of DHCP discover messages over the network. The controller receives the plurality of DHCP discover messages and processes the capability data to determine configuration data for the client IHS, creates a plurality of DHCP offer messages including the configuration data for the client IHS, and sends the plurality of DHCP offer messages over the network to the client IHS. The client IHS then uses the configuration data to configure the client IHS. | 2014-02-20 |
20140052831 | MULTICAST SOURCE IN GROUP ADDRESS MAPPING - The present disclosure provides a source specific multicast service that maps multicast group addresses to corresponding source addresses. A boundary routing element can be configured to determine whether a received join request includes a mapped group address. If the join request does not include a mapped group address, boundary routing element can be configured to perform normal join request processing of the join request. If the join request includes a mapped group address, the boundary routing element can be configured to generate a corresponding source address using the mapped group address. The boundary routing element can also be configured to perform alternative join request processing as if the join request were an SSM join request that specified both a source address and a multicast group address. | 2014-02-20 |
20140052832 | WIRELESS COMMUNICATION NETWORK SENSOR INFORMATION FOR CONTROL OF INDUSTRIAL EQUIPMENT IN HARSH ENVIRONMENTS - In certain embodiments, a system includes a master node device. The master node device includes communication circuitry configured to facilitate communication with a welding power supply unit via a long-range communication link, and to facilitate wireless communication with one or more welding-related devices via a short-range wireless communication network. The master node device also includes control circuitry configured to receive sensor data from one or more sensors within a physical vicinity of the short-range wireless communication network, and to route the sensor data to final destinations for the one or more sensors. | 2014-02-20 |
20140052833 | NETWORK ELEMENT CONFIGURATION MANAGEMENT - A method and apparatus ( | 2014-02-20 |
20140052834 | PORTABLE UNIVERSAL PERSONAL STORAGE, ENTERTAINMENT, AND COMMUNICATION DEVICE - A method for synchronizing configuration states of a portable device across a plurality of computing platforms comprises associating a plurality of computing device platforms in a plurality of computing device types with a plurality of synchronization protocols; identifying a type of first computing device via a network; identifying a synchronization protocol associated with the computing device platform in the identified computing device; sending a configuration state from the portable device to the first computing device according to the identified synchronization protocol, and updating the configuration state according to user input on the first computing device; receiving an updated configuration state from the first computing device; translating the updated configuration state to a data format used by a second computing device platform in a second computing device; and storing the updated configuration state and the translated updated configuration state on the portable device. | 2014-02-20 |
20140052835 | CUSTOM ERROR PAGE ENABLED VIA NETWORKED COMPUTING SERVICE - An approach is provided for queuing clients when a web page is temporarily unavailable. The approach includes providing a computer infrastructure operable to: maintain a queue of clients requesting the web page; receive an indication of an availability number from a host of the web page; and release one or more of the clients from the queue equal to the availability number indicated by the host, based on the receiving the indication of the availability number. | 2014-02-20 |
20140052836 | NETWORK SWITCHING SYSTEM USING SOFTWARE DEFINED NETWORKING APPLICATIONS - A network switching system includes a storage device including a plurality of application-provided flow-based rules provided by a plurality of applications. A packet processor is coupled to the storage device and includes a flow-based handler that is operable to receive a packet, determine that the packet is associated with a flow session, and associate a plurality of the application-provided flow-based rules with the packet based on the association of the packet with the flow session. The packet processor also includes a flow-based rule processing engine that is operable to determine a priority for the plurality of application-provided flow-based rules and apply at least one of the plurality of application-provided flow-based rules to the packet according to the priority. The system allows a plurality of SDN applications to operate in a network switching system independently and without knowledge of each other. | 2014-02-20 |
20140052837 | PRIORITIZED BRANCH COMPUTING SYSTEM - A branch device may be operable to: request to initiate access to a cloud computing application; map or link service level agreement information associated with the cloud computing application to performance and uptime specifications associated with a policy engine; and communicate with a first computational node that runs a first instance of the cloud computing application. Also, the branch device may be operable to: compare the performance data and the uptime data retrieved from the first computational node against the specifications, respectively; direct a request to the first instance, where the performance data and the uptime data at least satisfy the specifications, respectively; and direct a request to a second instance of the cloud computing application running on a second computational node, where the performance data and the uptime data do not satisfy the specifications, respectively. | 2014-02-20 |
20140052838 | SCRIPTING FOR IMPLEMENTING POLICY-BASED TRAFFIC STEERING AND MANAGEMENT - Methods, systems, and devices are described for managing network communications. A traffic manager module may receive a script over a management plane of a packet core, interpret the script to identify a traffic management policy; and dynamically modify at least one aspect of a proxy connection over a bearer plane of the packet core at the traffic manager module based on the identified traffic management policy. | 2014-02-20 |
20140052839 | SYSTEM AND METHOD FOR PROVISIONING USER ACCESS TO WEB SITE COMPONENTS IN A PORTAL FRAMEWORK - A site in a portal management framework may have a set of site objects given a single identity. The site may be created in the portal management framework by a console object. The portal management framework may have at least one portal providing a gateway for access to the site. Sets of users granted administrative privileges with respect to a site object may further grant and delegate administrative privileges to other sets of users to perform administration type operations on site objects over which they have administrative privileges. Server consoles may be provided for performing administration on object(s) in the portal management framework. Site consoles may be provided for performing administration on object(s) with respect to each site. | 2014-02-20 |
20140052840 | VERSATILE APPLICATION CONFIGURATION FOR DEPLOYABLE COMPUTING ENVIRONMENTS - Within a computing environment, an application may run in a variety of contexts, e.g., as a natively executable application, as a client-side interpretable application embedded in a web browser, or as a server-side application that communicates with the user through a web interface presented on a device. The application may also access resources of the computing environment stored on multiple devices. The configuration of the application to operate equivalently in these diverse environments may be facilitated by representing the application within an object hierarchy representing the computing environment. The application may be configured to operate on the objects of the object hierarchy regardless of the location of the stored objects, to execute on any device, and to execute upon a standard set of application programming interfaces. The configuration of the application in this manner promotes the versatility of the application in operating equivalently in different programming contexts. | 2014-02-20 |
20140052841 | COMPUTER PROGRAM, METHOD, AND INFORMATION PROCESSING APPARATUS FOR ANALYZING PERFORMANCE OF COMPUTER SYSTEM - In an information processing apparatus, a comparing unit determines whether the response time of each transaction falls within an acceptable time range that is specified previously. For each time window, a first calculation unit calculates a load of processes executed in parallel by the servers in a specified tier, based on transaction data of individual transactions. Further, a second calculation unit calculates a total progress quantity in each time window, based on the transaction data of transactions whose respective response times are determined to fall within the acceptable time range. A determination unit determines a specific load value as a threshold at which the total progress quantity begins to decrease in spite of an increase of the load. | 2014-02-20 |
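The determination step in 20140052841 reduces to finding the load value at which total progress begins to decrease even though load keeps rising. A minimal sketch, assuming the per-window (load, total progress) pairs have already been computed:

```python
def saturation_threshold(samples):
    """
    samples: list of (load, total_progress) pairs, one per time window.
    Returns the load value at which total progress first begins to
    decrease in spite of an increase of the load, or None if progress
    never degrades across the observed windows.
    """
    ordered = sorted(samples)  # ascending by load
    for (l0, p0), (l1, p1) in zip(ordered, ordered[1:]):
        if l1 > l0 and p1 < p0:
            return l0
    return None

windows = [(2, 10), (4, 18), (6, 24), (8, 21), (10, 15)]
assert saturation_threshold(windows) == 6  # progress drops beyond load 6
```

A real implementation would smooth the samples first; this sketch flags the first raw inversion.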
20140052842 | MEASURING PROBLEMS FROM SOCIAL MEDIA DISCUSSIONS - Embodiments of the present invention provide a system, method, and program product to measure problems from a social media discussion. In exemplary embodiments, a computer extracts one or more problems from the social media discussion. The computer extracts one or more severity indicators and one or more complexity indicators from the social media discussion. The computer clusters the one or more problems into one or more sets of unique problems in a manner that related problems are clustered together into the one or more unique problems. The computer determines an overall severity and an overall complexity of the sets of unique problems. | 2014-02-20 |
20140052843 | Auto Management of a Virtual Device Context Enabled Network Infrastructure - In some embodiments, a virtual device context (vDC) domain may be advertised to other network devices. If at least a partition of each device is determined to belong to the same vDC domain, the network interface communicating with the at least one device may be activated. | 2014-02-20 |
20140052844 | MANAGEMENT OF A VIRTUAL MACHINE IN A STORAGE AREA NETWORK ENVIRONMENT - A computer-implemented method for management of a virtual machine in a storage area network (SAN) environment. A plurality of SAN devices for the virtual machine are discovered by a management server. Performance statistics for the plurality of SAN devices are monitored at the management server. Health of the virtual machine is determined based at least in part on the performance statistics for the plurality of SAN devices at the management server. | 2014-02-20 |
20140052845 | DISCOVERY OF STORAGE AREA NETWORK DEVICES FOR A VIRTUAL MACHINE - A computer-implemented method for discovering a plurality of storage area network (SAN) devices for a virtual machine. At a SAN device of the plurality of SAN devices, physically adjacent SAN devices connected to the SAN device are discovered. The physically adjacent SAN devices connected to the SAN device are registered at a name server. | 2014-02-20 |
20140052846 | ADAPTIVE VIDEO STREAMING OVER A CONTENT DELIVERY NETWORK - A system and method provides adaptively streaming a video over a content delivery network. A client sends a streaming request for a first portion of the video to a computer server, where the video has multiple video chunks, and each video chunk has one or more streaming parameters (e.g., priority and bitrate). The computer server retrieves the requested portion of the video and streams the first portion of the video over a content delivery network. The client monitors the video chunks received from the computer server and determines the video quality of the next portion of the video based on the monitoring. Responsive to the condition of the content delivery network being able to support streaming the next portion of the video with higher quality, the client updates the default video quality and requests the next portion of the video with the updated default video quality. | 2014-02-20 |
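The client-side quality decision in 20140052846 can be sketched as picking the highest chunk bitrate the monitored throughput supports. The bitrate ladder and the headroom factor below are illustrative assumptions, not values from the application.

```python
BITRATES_KBPS = [400, 800, 1600, 3200]  # illustrative bitrate ladder

def next_chunk_bitrate(measured_throughput_kbps, headroom=0.8):
    """
    Pick the highest available bitrate that fits within the measured
    network throughput, discounted by a safety headroom so transient
    dips do not stall playback.
    """
    budget = measured_throughput_kbps * headroom
    candidates = [b for b in BITRATES_KBPS if b <= budget]
    # Fall back to the lowest rung when even that exceeds the budget.
    return candidates[-1] if candidates else BITRATES_KBPS[0]

assert next_chunk_bitrate(2500) == 1600  # 2500 * 0.8 = 2000 -> 1600
assert next_chunk_bitrate(600) == 400    # 600 * 0.8 = 480 -> 400
```

The client would call this before requesting each portion, matching the abstract's "updates the default video quality" step.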
20140052847 | SYSTEM AND METHOD FOR NETWORK CAPACITY PLANNING - A method and system for network capacity planning are provided. The method includes: collecting utilization data related to a plurality of network resources on the network; determining a peak period for each of the network resources based on the utilization data; determining at least one key performance indicator (KPI) over the peak period for each of the network resources; aggregating each of the KPIs for each of the plurality of network resources; and outputting the aggregated KPIs. The system includes a data source module configured to collect utilization data related to a plurality of network resources; a peak period module configured to determine a peak period for each of the network resources based on the utilization data; a peak KPI module configured to determine at least one KPI over the peak period for each of the plurality of network resources; a KPI aggregation module configured to aggregate the KPIs for each of the network resources; and a processor module configured to output the aggregated KPIs. | 2014-02-20 |
20140052848 | LATENCY VIRTUALIZATION - A technique involves placing a data acceleration engine between an end user device and a host device. The host device provides data associated with a client application to the data acceleration engine, which provides the data to the end user device. If the data acceleration engine is on the host device, content from a datastore is served to the data acceleration engine as if the data acceleration engine were a client running the client application locally; therefore, latency normally associated with a network between the content datastore and the client device is eliminated. If the data acceleration engine is on the end user device and has received at least some data in advance of a relevant query, responses to the query also do not have latency associated with a network. The data acceleration engine can be implemented as a series of data acceleration engines between end user and host devices. | 2014-02-20 |
20140052849 | Sensor-based Detection and Remediation System - The invention comprises a method and system of deploying and managing sensor agents to provide services to networks and devices within a network. The invention dynamically deploys, initiates, and controls sensor agents that scan networks. Data obtained during the scan are returned to an analysis system for evaluation. Results are displayed to a user through a graphical interface or stored in a database. Results may also be used by the analysis system to remediate anomalies and provide graphical network information. Typically, a plurality of sensor agents are used to gather data in the aggregate and provide a more complete analysis on the operation and security of a network. | 2014-02-20 |
20140052850 | Datacenter Capacity Planning and Management - The present invention relates to the field of facility management, and more specifically, to methods and systems for datacenter capacity monitoring and planning. Embodiments of the present invention utilize various environmental variables to help execute and plan move/add/change work orders within a datacenter while remaining within desired guard bands. | 2014-02-20 |
20140052851 | SYSTEMS AND METHODS FOR DISCOVERING SOURCES OF ONLINE CONTENT - To determine an association between elements associated with a unified display on a screen, a request associated with the unified display is received from a browser, and a response to the request is identified as a first element associated with the unified display. A second element is identified as being spawned from the first element, if a parameter associated with the first element, which can be an event, a source, or both, is determined to be associated with the second element also. In that case, the second element is determined to be associated with the first element via the parameter. | 2014-02-20 |
20140052852 | VALIDATING NETWORK TRAFFIC POLICY - At least one inline probe is employed to test compliance of a network element with a network traffic policy. The testing capability of the probe is handled by specialized software or hardware. The inline probe hardware can be implemented in network elements such as routers or transceivers. The inline probes can be discovered, registered, and controlled by a dedicated controller disposed at a remote location. | 2014-02-20 |
20140052853 | Unmoderated Remote User Testing and Card Sorting - A computer-implemented method for performing unmoderated remote usability testing of an executable software module. The method includes identifying a multitude of participants, each of the multitude of participants being equipped with a data processing unit adapted to receive a multitude of responses from the multitude of participants. Each of the multitude of responses may be associated with using the executable software module. The method further includes connecting the multitude of participants with a server, automatically presenting at least one of a multitude of tasks associated with at least one usability metric of the executable software module to at least one of the multitude of participants, and gathering the at least one of the multitude of responses related to the at least one of the multitude of tasks. | 2014-02-20 |
20140052854 | CONTENT DELIVERY WITH LIMITED FREE SERVICE BASED ON PARAMETERIZED BEHAVIORAL MODEL - A user is allocated a first period of limited free access to content; the user's activity during the first period is monitored; and the user is assigned to a particular cohort based on the user's activity during the first period. Each cohort prescribes one or more conditions governing additional free access by users assigned to that cohort. The user is allocated a usage allowance based on the conditions prescribed in the cohort to which the user was assigned; and the user's actual usage is enforced according to the conditions prescribed in that cohort. After a time period prescribed by that cohort, the user is given an opportunity to become a subscriber to the system. If the user does not become a subscriber then the user's usage allowance is adjusted based on conditions prescribed in that cohort. | 2014-02-20 |
20140052855 | METHOD FOR PARSING AN INFORMATION STRING TO EXTRACT REQUESTED INFORMATION RELATED TO A DEVICE COUPLED TO A NETWORK IN A MULTI-PROTOCOL REMOTE MONITORING SYSTEM - A method, system, and computer program product for parsing an information string to extract requested information related to a remotely monitored device communicatively coupled to a network, including accessing the device using an HTTP protocol to obtain an information string associated with the device; determining, based on a type of the requested information, data extraction information for optimally extracting the requested information from the device; parsing the information string according to the data extraction information to identify substrings within the information string; and determining the requested information based on the information string, identified substrings, and the data extraction information. | 2014-02-20 |
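The parsing flow in 20140052855 selects data extraction information by request type and then applies it to the device's information string. A minimal sketch using regular expressions as the extraction rules; the rule table, pattern syntax, and sample status string are all illustrative assumptions.

```python
import re

# Hypothetical extraction table keyed by the type of requested
# information; the application leaves the concrete rule format open.
EXTRACTION_RULES = {
    "toner_level": r"TONER=(\d+)%",
    "page_count":  r"PAGES=(\d+)",
}

def extract(info_string, requested):
    """Parse the device information string with the rule chosen for the
    requested information type; return the matched substring or None."""
    match = re.search(EXTRACTION_RULES[requested], info_string)
    return match.group(1) if match else None

status = "MODEL=LP-200;TONER=37%;PAGES=10482"  # sample string from a device
assert extract(status, "toner_level") == "37"
assert extract(status, "page_count") == "10482"
```

In the claimed system the information string would be fetched over HTTP from the monitored device before this step.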
20140052856 | NAMING OF DISTRIBUTED BUSINESS TRANSACTIONS - The present technology monitors a web application provided by one or more services. A service may be provided by applications. The monitoring system provides end-to-end business transaction visibility, identifies performance issues quickly and has dynamical scaling capability across monitored systems including cloud systems, virtual systems and physical infrastructures. In instances, a request may be received from a remote application. The request may be associated with a distributed transaction. Data associated with the request may be detected. A distributed transaction identifier may be generated for a distributed transaction based on the data associated with the request. | 2014-02-20 |
20140052857 | CORRELATION OF DISTRIBUTED BUSINESS TRANSACTIONS - The present technology monitors a web application provided by one or more services. A service may be provided by applications. The monitoring system provides end-to-end business transaction visibility, identifies performance issues quickly and has dynamical scaling capability across monitored systems including cloud systems, virtual systems and physical infrastructures. A first parameter may be received from a first computer by a server. A second parameter may be received from a second computer by the server. A distributed application processed on the first computer and the second computer may be correlated based on the first parameter and the second parameter. | 2014-02-20 |
20140052858 | POLICY DESCRIPTION ASSISTANCE SYSTEM AND POLICY DESCRIPTION ASSISTANCE METHOD - An object of the present invention is to assist in describing policies so that errors when describing policies are reduced. The present invention includes: referring to a parameter information storage unit that stores a plurality of parameters for a monitoring target system, and displaying the plurality of parameters on a screen of a display device; referring to a policy information storage unit that stores a plurality of policies in which a condition including at least one parameter of the plurality of parameters and a process executed when the condition is satisfied are described, displaying the plurality of policies on the screen, and adding or modifying a policy in the policy information storage unit according to a user input; dynamically determining association between the plurality of parameters stored in the parameter information storage unit and the plurality of policies stored in the policy information storage unit; and displaying the association between the plurality of parameters and the plurality of policies on the screen in an identifiable manner based on the determination results. | 2014-02-20 |
20140052859 | UPDATING A CURRENTLY UTILIZED DEVICE - In one example, a system includes an authentication server that is configured to receive an authentication request for a primary application, provide time-based authentication credentials for the primary application, receive an updated authentication request for the primary application, wherein the updated authentication request includes a client device identifier (ID) corresponding to a client device from which the authentication request is received, and transmit the client device ID; the system may further include a push server that is configured to receive the transmitted client device ID, and push an update to the client device having the client device ID. | 2014-02-20 |
20140052860 | IP ADDRESS ALLOCATION - Systems and methods are described for IP Address allocation. A computerized method includes receiving at a wireless access gateway a request from a subscriber to connect to a network, allocating a first IP address to the subscriber from a first pool of IP addresses at the wireless access gateway, and assigning a second IP address to the subscriber from a second pool of IP addresses at the wireless access gateway when the subscriber requests a network service. | 2014-02-20 |
20140052861 | APPARATUS, METHOD AND ARTICLE TO FACILITATE MATCHING OF CLIENTS IN A NETWORKED ENVIRONMENT - Information related to apparently successful matches between two entities is collected, and culled based on a later indication that the match failed. Matches between two entities may be generated based on comparative information with other entities who appear to share some characteristics or preferences. Matches may be based on actual actions, in contrast to expressed preferences. Actual actions may be taken into account in addition to expressed preferences. Generation of matches may take into account geographical and/or temporal proximity and/or likelihood of receiving a response, in addition to other attributes of an entity. Matching algorithms may be updated based on entity input. Potential matches may be presented to third party entities for evaluation. | 2014-02-20 |
20140052862 | EFFICIENT SERVICE DISCOVERY FOR PEER-TO-PEER NETWORKING DEVICES - Techniques for discovering and/or advertising services are described herein. A first bitmask is received from a remote device over a wireless network, the first bitmask having one or more bits that have a predetermined logical value. Each bit represents a particular service provided by the remote device. A logical operation is performed between the first bitmask and a second bitmask locally generated within a local device, where the second bitmask represents a service being searched by the local device. It is determined whether the remote device is potentially capable of providing the service being searched by the local device based on a result of the logical operation. | 2014-02-20 |
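The service-matching logic in 20140052862 reduces to a bitwise AND between the remote device's advertised bitmask and a locally generated one. A minimal sketch; the service-to-bit assignments below are illustrative, since the application does not fix a concrete mapping.

```python
# Hypothetical service-to-bit assignments (one bit per service).
SERVICES = {
    "printing":    1 << 0,
    "file_share":  1 << 1,
    "screen_cast": 1 << 2,
}

def advertise(provided_services):
    """Build the bitmask a device broadcasts for the services it provides."""
    mask = 0
    for name in provided_services:
        mask |= SERVICES[name]
    return mask

def may_provide(remote_mask, wanted_service):
    """True if the remote device is potentially capable of providing the
    service being searched, per the AND of the two bitmasks."""
    return (remote_mask & SERVICES[wanted_service]) != 0

remote = advertise(["printing", "screen_cast"])
assert may_provide(remote, "screen_cast")
assert not may_provide(remote, "file_share")
```

A set bit only indicates *potential* capability; a follow-up exchange would confirm the exact service, which is why the abstract hedges with "potentially capable."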
20140052863 | Node Address Allocation - There is provided a method for allocating node addresses in a computer network architecture and a computer network architecture for performing such a method. The computer network architecture comprises at least one master node and at least one slave node serially connected downstream of the master node. Each slave node includes a switch for connecting an upstream transmit line with a downstream transmit line at the slave node. When the switch is open, the master node and any upstream slave nodes are not connected via the transmit line to any downstream slave nodes. When the switch is closed, the master node and any upstream slave nodes are connected via the transmit line to any downstream slave nodes. | 2014-02-20 |
20140052864 | SYSTEMS AND METHODS FOR ESTABLISHING A CLOUD BRIDGE BETWEEN VIRTUAL STORAGE RESOURCES - Methods and systems for establishing a cloud bridge between two virtual storage resources and for transmitting data from a first virtual storage resource to a second virtual storage resource. The system can include a first virtual storage resource or cloud, and a storage delivery management service that executes on a computer and within the first virtual storage resource. The storage delivery management service can receive user credentials of a user that identify a storage adapter. Upon receiving the user credentials, the storage delivery management service can invoke the storage adapter, which executes an interface that identifies a second virtual storage resource and includes an interface translation file. The storage delivery management service accesses the second virtual storage resource and establishes a cloud bridge with the second virtual storage resource using information obtained from the second virtual storage resource and information translated by the storage adapter using the interface translation file. | 2014-02-20 |
20140052865 | MANAGING CONTENT DELIVERY NETWORK SERVICE PROVIDERS - A system, method, and computer readable medium for managing CDN service providers are provided. A network storage provider storing one or more resources on behalf of a content provider obtains client computing device requests for content. The network storage provider processes the client computing device requests and determines whether a subsequent request for the resource should be directed to a CDN service provider as a function of information updated or processed by the network storage provider's storage component. | 2014-02-20 |
20140052866 | System and Method for Providing Dynamic Roll-Back Reservations in Time - Systems, methods and computer-readable media are disclosed for providing a dynamic roll-back reservation mask in a compute environment. The method of managing compute resources within a compute environment includes, based on an agreement between a compute resource provider and a customer, creating a roll-back reservation mask for compute resources which slides ahead of the current time by a period of time. Within the roll-back reservation mask, the method specifies a subset of consumers and compute resource requests which can access compute resources associated with the roll-back reservation mask and, based on received data, the method dynamically modifies at least one of (1) the period of time the roll-back reservation mask slides ahead of the current time and (2) the compute resources associated with the roll-back reservation mask. | 2014-02-20 |
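The sliding window in 20140052866 can be pictured as a reservation that always extends a fixed period ahead of the current time and admits only a designated subset of consumers. A hedged sketch, where the class shape and access rule are illustrative assumptions:

```python
# Hypothetical model of a roll-back reservation mask sliding ahead of "now".
# The consumer-set and window semantics are assumptions for illustration.

class RollBackMask:
    def __init__(self, period, consumers):
        self.period = period             # how far ahead of the current time it slides
        self.consumers = set(consumers)  # subset allowed to use the masked resources

    def window(self, now):
        """The reserved window always starts at now and extends `period` ahead."""
        return (now, now + self.period)

    def can_access(self, consumer, start, now):
        """Grant access only to listed consumers requesting inside the window."""
        lo, hi = self.window(now)
        return consumer in self.consumers and lo <= start <= hi

mask = RollBackMask(period=60, consumers={"batch-queue"})
print(mask.can_access("batch-queue", start=130, now=100))  # True
print(mask.can_access("batch-queue", start=200, now=100))  # False: beyond window
print(mask.can_access("other", start=130, now=100))        # False: not in subset
```

Dynamically modifying the mask, as the abstract describes, would amount to updating `period` or the resource set at run time.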
20140052867 | POLICY ENGINE FOR CLOUD PLATFORM - A policy engine is situated within the communications path of a cloud computing environment and a user of the cloud computing environment to comply with an organization's policies for deploying web applications in the cloud computing environment. The policy engine intercepts communications packets to the cloud computing environment from a user, such as a web application developer, for example, in preparation for deploying a web application in the cloud computing environment. The policy engine identifies commands corresponding to the communications packets and directs the communications packets to appropriate rules engines corresponding to such commands in order to execute rules to comply with an organization's policies. Upon completion of execution of the rules, the communications packets are forwarded to the cloud computing environment if they comply with the policies. | 2014-02-20 |
20140052868 | COBROWSING MACROS - Methods and systems of conducting co-browsing sessions may involve joining a co-browsing session with another peer device, receiving a plurality of web requests in a particular sequence from a macro, and transmitting the plurality of web requests in the particular sequence to a server associated with the co-browsing session. In one example, the particular sequence defines a navigation path to a requested resource. | 2014-02-20 |
20140052869 | Operation Mode of Processor - A computing device is configured to detect a communication protocol between the computing device and a second computing device, to identify operating parameters associated with the communication protocol, and to modify a mode of operation of a processor based on the operating parameters. | 2014-02-20 |
20140052870 | NAT TRAVERSAL FOR MEDIA CONFERENCING - Methods for establishing a direct peer-to-peer (“P2P”) connection between two computers are disclosed. In particular, the methods are designed to work in cases where one or both of the computers are connected to a private network, such private networks being interconnected via a public network, such as the Internet. The connections between the private network and the public network are facilitated by network address translation (“NAT”). | 2014-02-20 |
20140052871 | Methods And Apparatus For Management Of Data Privacy - Systems and techniques for managing user data privacy are described. Upon identification of a user device as a candidate for performing data collection relating to network performance experienced by the device, a network operator on whose behalf the data collection is to be performed is identified, and user consent information associated with the user device is examined to determine if a user of the device has given consent for data collection on behalf of the network operator. If the user has given consent, the user device is configured for data collection. | 2014-02-20 |
20140052872 | SYSTEM AND METHOD FOR IMPROVED CONTENT STREAMING - A system and methods for improved streaming of content. After streaming of a content item from a wireless device (e.g., a smart phone, a tablet computer) commences to a presentation device (e.g., a media receiver, a television), the presentation device determines that it can stream the content item from an alternative source, such as a web server, data server or other content repository residing on the Internet or other network. The presentation device initiates the alternative streaming and notifies the wireless device that it may stop streaming. The wireless device may continue to provide control inputs to allow a user to pause, play, fast forward or otherwise control the presentation, and may or may not present the content item locally. If the presentation device must cease streaming of the content item from the alternative source, it notifies the wireless device, which resumes streaming. | 2014-02-20 |
20140052873 | SPECULATIVE PRE-AUTHORIZATION OF ENCRYPTED DATA STREAMS - Techniques are disclosed for improving user experience of multimedia streaming over computer networks. For example, a method for presenting multimedia content may generally include receiving a request to stream a media title. In response to the request, unencrypted content for the media title is streamed to a client. While streaming the unencrypted content, a digital rights management (DRM) license to access encrypted content for the media title is requested. After receiving the DRM license, the client switches from streaming the unencrypted content for the media title to streaming encrypted content for the media title. The switching from streaming the unencrypted content to streaming the encrypted content does not interrupt playback of the media title. | 2014-02-20 |
20140052874 | METHOD AND APPARATUS FOR RECOVERING MEMORY OF USER PLANE BUFFER - The present invention discloses a method and an apparatus for recovering a memory of a user plane buffer and relates to the communication field. The method and apparatus are used to recover the memory of the user plane buffer immediately and quickly. The method for recovering a memory of a user plane buffer includes: monitoring memory usage of a buffer in real time; when the memory usage of the buffer is greater than or equal to a preset threshold, releasing the memory of the buffer, where the preset threshold is smaller than a memory capacity of the buffer. The solution of the present invention is applicable to any scenario where the memory of the buffer needs to be recovered. | 2014-02-20 |
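The rule in 20140052874 is simple to state: release the buffer's memory whenever monitored usage reaches a preset threshold that is below capacity. A minimal sketch, where the queue-based buffer model and packet counting are assumptions for illustration:

```python
# Hypothetical model of threshold-triggered user-plane buffer recovery.
# The deque-based buffer and integer "packets" are illustrative assumptions.
from collections import deque

class UserPlaneBuffer:
    def __init__(self, capacity, threshold):
        assert threshold < capacity  # abstract requires threshold < memory capacity
        self.capacity = capacity
        self.threshold = threshold
        self.packets = deque()
        self.recoveries = 0

    def enqueue(self, packet):
        self.packets.append(packet)
        if len(self.packets) >= self.threshold:  # usage monitored on every write
            self.packets.clear()                 # release the buffer's memory
            self.recoveries += 1

buf = UserPlaneBuffer(capacity=8, threshold=4)
for i in range(10):
    buf.enqueue(i)
print(buf.recoveries, len(buf.packets))  # 2 2
```

Recovering at the threshold rather than at full capacity is what makes the release "immediate": the buffer never reaches an exhausted state before memory is reclaimed.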
20140052875 | Service Migration - At a migration server separate from a first server, a plurality of incoming internet protocol, IP, packets directed at an IP address associated with an IP service are received. On the basis of one or more source characteristics associated with IP packets in the plurality, it is determined that a first subset of packets in the plurality originated from one or more client devices which have not been migrated to a second server and that a second subset in the plurality originated from one or more client devices which have been migrated to the second server. IP packets determined to be in the first subset are forwarded to a first physical address associated with the first server for processing at the first server. IP packets determined to be in the second subset are processed at the second server. | 2014-02-20 |
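The routing decision in 20140052875 partitions incoming packets by source into a migrated subset and a not-yet-migrated subset. A hedged sketch, where the packet representation and the set of migrated sources are assumptions for illustration:

```python
# Hypothetical sketch of source-based packet classification during migration.
# The dict packets and IP strings below are illustrative assumptions.

MIGRATED_SOURCES = {"10.0.0.7", "10.0.0.9"}  # clients already moved to server 2

def route(packet):
    """Forward un-migrated clients' packets to the first server's physical
    address; process migrated clients' packets locally at the second server."""
    if packet["src"] in MIGRATED_SOURCES:
        return "process-at-second-server"
    return "forward-to-first-server"

print(route({"src": "10.0.0.7"}))    # process-at-second-server
print(route({"src": "10.0.0.1"}))    # forward-to-first-server
```

Because the migration server sits in front of both servers, clients keep using one IP address while the backend shifts underneath them.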
20140052876 | METHOD AND DEVICE FOR STORING AND SENDING MAC ADDRESS ENTRY, AND SYSTEM - Embodiments of the present disclosure provide a method and a device for storing and sending a MAC address entry, and a system. The method includes: sending, by a PE, a first packet to an RR, so that the RR determines a MAC address entry required by the PE according to the first packet, where the RR pre-stores a MAC address table, and the MAC address table includes the required MAC address entry; and receiving, by the PE, a packet which includes the required MAC address entry and is sent by the RR, and storing the required MAC address entry. Through the embodiments of the present disclosure, it may be implemented that the PE stores the MAC address entry according to need. | 2014-02-20 |
20140052877 | Method and apparatus for tenant programmable logical network for multi-tenancy cloud datacenters - Conventional network technology is based on processing metadata in the head part of a network packet (e.g., addresses and context tags). In cloud computing, resources have dynamic properties of on-demand elasticity, trans-datacenter distribution, location motion, and tenant-defining arbitrary network topology. Conventional static networks can no longer satisfy these dynamic properties of IT provisioning. Provided is a network virtualization technology called "NVI". The NVI technology achieves de-coupling between a logical network and the underlying physical network provided through cloud resources. Network control can be implemented on vNICs of VMs in the network. On NVI, a cloud tenant can construct a firewalled logical and virtual private network to protect rental IT infrastructure in global trans-datacenter distributions. The de-coupling enables the virtual network construction job to be completed as high-level language programming tasks (SDN), achieving cloud properties of automatic, fast, dynamically changing, unlimited-size-scalable, and tenant-defining arbitrary topology network provisioning. | 2014-02-20 |
20140052878 | COMMAND INTERFACE SYSTEMS AND METHODS - Apparatus, systems, and methods are disclosed that operate within a memory to execute internal commands, to suspend the execution of commands during a transfer period, and to execute external commands following the transfer period. Additional apparatus, systems, and methods are disclosed. | 2014-02-20 |
20140052879 | PROCESSOR, INFORMATION PROCESSING APPARATUS, AND INTERRUPT CONTROL METHOD - An input/output interface unit includes a plurality of ports connected to different external units, and adds predetermined identification information unique to each of the ports to an interrupt request received from each of the external units via the ports. An interrupt control unit stores information on the interrupt request received by the input/output interface unit in a vector storage unit based on the identification information. Each of cores executes a process corresponding to the interrupt request stored in the vector storage unit based on the identification information. | 2014-02-20 |
20140052880 | EXTERNAL CD MODULE WITH USB INTERFACE AND OPERATION METHOD THEREOF - The present invention provides an external CD module for connecting to a remote host device, comprising: a CD mechanism for accessing data on a CD; a controller, which includes a FAT file system and is connected to the CD mechanism via a control interface and a data interface to access and convert the data into a file-based format; and a USB interface responsive to the controller, wherein the controller is configured to provide the converted data to the remote host device using the MSC communication protocol via the USB interface. | 2014-02-20 |
20140052881 | SYSTEMS AND METHODS FOR CONCATENATING MULTIPLE DEVICES - Systems and methods are provided. In one embodiment, a system includes a serial peripheral interface (SPI) bus and a master device communicatively coupled to the SPI bus. The system further includes a first slave device communicatively coupled to the SPI bus. The system additionally includes a second slave device communicatively coupled to the SPI bus and to the first slave device, wherein the first and the second slave devices are communicatively coupled in parallel to the SPI bus, wherein the first and the second slave devices are communicatively coupled to each other by using a first chain line, and wherein the master device is configured to communicate with the first and with the second slave devices over the SPI bus. | 2014-02-20 |
20140052882 | Latency Sensitive Software Interrupt and Thread Scheduling - Various embodiments provide an ability to schedule latency-sensitive tasks based, at least in part, upon one or more processor-core usage metrics. Some embodiments gather information associated with whether one or more processor cores are in a heavily loaded state. Alternately or additionally, some embodiments gather information identifying latency-sensitive tasks. Task(s) can be (re)assigned to different processor core(s) for execution when it has been determined that an originally assigned processor core has exceeded a usage threshold. | 2014-02-20 |
20140052883 | EXPANSION MODULE AND CONTROL METHOD THEREOF - An expansion module suitable for providing expansion functions to a mobile electronic device is provided. The expansion module includes a cloud device and a first expansion device. The cloud device includes a first expansion bus interface and a first network interface, wherein the cloud device provides the network function through the first network interface, and provides at least one first peripheral device through the first expansion bus interface. The first expansion device includes at least one second peripheral device, a second expansion bus interface, a third expansion bus interface and a second network interface, wherein the first expansion device provides the network function through the second network interface. The mobile electronic device detects the first network interface and the second network interface, such that the expansion module provides the network function to the mobile electronic device for use through the first network interface or the second network interface. | 2014-02-20 |
20140052884 | MOBILE DEVICE CASE WITH WIRELESS HIGH DEFINITION TRANSMITTER - The subject matter disclosed herein relates to a mobile device case to receive video signals from a physically connected mobile device and wirelessly transmit high definition video signals based, at least in part, on the video signals. | 2014-02-20 |
20140052885 | PARALLEL COMPUTER SYSTEM, DATA TRANSFER DEVICE, AND METHOD FOR CONTROLLING PARALLEL COMPUTER SYSTEM - A switch includes a plurality of ports and a combination determining unit that determines a central processing unit (CPU) to be paired with one of the ports. The port includes: an arbitration circuit that selects the CPU to be paired therewith when receiving an arbitration request from the CPU to be paired in a predetermined state, and selects one of the CPUs from which the arbitration request has been received in other cases to return transmission permission; and a data transfer unit that transfers the received data from the selected CPU to another CPU. The CPU includes: a request transmission unit that transmits the arbitration request to the ports; and a data transmission unit that transmits data to the paired port when the arbitration request is transmitted to the paired port in the predetermined state, and transmits data to the ports that have returned transmission permission in other cases. | 2014-02-20 |
20140052886 | All-to-All Comparisons on Architectures Having Limited Storage Space - Mechanisms for performing all-to-all comparisons on architectures having limited storage space are provided. The mechanisms determine a number of data elements to be included in each set of data elements to be sent to each processing element of a data processing system, and perform a comparison operation on at least one set of data elements. The comparison operation comprises sending a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element and sending a second request to main memory for transfer of a second set of data elements into the local memory. A pairwise comparison computation of the all-to-all comparison of data elements operation is performed at approximately the same time as the second set of data elements is being transferred from main memory to the local memory. | 2014-02-20 |
20140052887 | APPARATUSES FOR OPERATING, DURING RESPECTIVE POWER MODES, TRANSISTORS OF MULTIPLE PROCESSORS AT CORRESPONDING DUTY CYCLES - A device includes a first processor and a second processor. The first processor is configured to operate in accordance with a first power mode. The first processor includes a first transistor. The first processor is configured to, while operating in accordance with the first power mode, switch the first transistor at a first duty cycle. The second processor is configured to operate in accordance with a second power mode. The second processor includes a second transistor. The second processor is configured to, while operating in accordance with the second power mode, switch the second transistor at a second duty cycle. The second duty cycle is greater than the first duty cycle. The second processor consumes less power while operating in accordance with the second power mode than the first processor consumes while operating in accordance with the first power mode. | 2014-02-20 |
20140052888 | Method and apparatus for providing two way control and data communications to and from transportation refrigeration units (TRUs) - An RS-485 bus is directly connected to transportation refrigeration unit sensors for transmitting information from the sensors to a remote location and for controlling sensor parameters from the remote location. The GENSET associated with the transportation refrigeration unit also utilizes the RS-485 bus, in which the bus is directly connected to the GENSET sensors for bi-directional communication therewith. | 2014-02-20 |
20140052889 | Flexibly Integrating Endpoint Logic Into Varied Platforms - In one embodiment, the present invention is directed to an integrated endpoint having a virtual port coupled between an upstream fabric and an integrated device fabric that includes a multi-function logic to handle various functions for one or more intellectual property (IP) blocks coupled to the integrated device fabric. The integrated device fabric has a primary channel to communicate data and command information between the IP block and the upstream fabric and a sideband channel to communicate sideband information between the IP block and the multi-function logic. Other embodiments are described and claimed. | 2014-02-20 |
20140052890 | High Speed Data Transmission - A data reception circuit removes reliance on stacked transistors providing analog logic processing. A first trigger element outputs an up signal in response to receiving an indication of receipt of a data signal by a receiving device without consideration of an output signal from the receiving device. A second trigger element outputs a down signal in response to receiving an indication of receipt of a data signal by a receiving device without consideration of an output signal from the receiving device. Switches control provision of signals to a received signal line for the receiving device in response to the outputs of the trigger elements. A blocking feedback circuit provides a blocking signal for the receiving device to effect blocking the receiving device from sending data to the sending device when the receiving device is receiving data from the sending device. | 2014-02-20 |
20140052891 | SYSTEM AND METHOD FOR MANAGING PERSISTENCE WITH A MULTI-LEVEL MEMORY HIERARCHY INCLUDING NON-VOLATILE MEMORY - An apparatus and method for implementing non-volatile store (nvstore) and non-volatile flush (nvflush) instructions. For example, a method according to one embodiment comprises: executing a set of non-volatile store instructions indicating data to be persisted to a non-volatile memory (NVM) of a multi-level system memory hierarchy; generating an entry in an NVM store queue prior to storing the data to the NVM, each entry indicating that the data associated therewith has not yet been persisted to non-volatile memory; executing a non-volatile flush instruction at a time when the data associated with each entry in the non-volatile store queue should be persisted to non-volatile memory; and removing the entries from the NVM store queue as the data associated with each entry is written to non-volatile memory. | 2014-02-20 |
20140052892 | METHODS AND APPARATUS FOR PROVIDING ACCELERATION OF VIRTUAL MACHINES IN VIRTUAL ENVIRONMENTS - A host server computer system that includes a hypervisor within a virtual space architecture running at least one virtualization, acceleration and management server and at least one virtual machine, at least one virtual disk that is read from and written to by the virtual machine, a cache agent residing in the virtual machine, wherein the cache agent intercepts read or write commands made by the virtual machine to the virtual disk, and a solid state drive. The solid state drive includes a non-volatile memory storage device, a cache device and a memory device driver providing a cache primitives application programming interface to the cache agent and a control interface to the virtualization, acceleration and management server. | 2014-02-20 |
20140052893 | FILE DELETION FOR NON-VOLATILE MEMORY - A device includes non-volatile memory and a controller. The controller receives a write request including data and a logical address associated with a file. The controller stores the data at a first data storage segment having a first physical address and associates the first physical address with the logical address and a file identifier for the file. The controller receives a second write request including data and the logical address associated with the file. The controller stores the data at a second data storage segment having a second physical address and associates the second physical address with the logical address and the file identifier. When a file delete request for the file is received, the controller identifies the first physical address and the second physical address using the file identifier and erases the information stored at the first data storage segment and the second data storage segment based upon the file identifier. | 2014-02-20 |
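Keying the mapping on a file identifier, as 20140052893 describes, lets the controller find every physical segment a file ever occupied, including stale copies at the same logical address. A hedged sketch, where the dict-based tables stand in for the controller's real structures:

```python
# Hypothetical model of (file id, logical address) -> physical address tracking
# so a whole file's segments can be located and erased on delete.
# The dict tables and byte payloads are illustrative assumptions.

class Controller:
    def __init__(self):
        self.next_phys = 0
        self.mapping = {}    # (file_id, logical_addr) -> list of physical addrs
        self.storage = {}    # physical_addr -> data

    def write(self, file_id, logical_addr, data):
        phys = self.next_phys            # always program a fresh segment
        self.next_phys += 1
        self.storage[phys] = data
        # keep every physical address ever associated with this file/logical pair
        self.mapping.setdefault((file_id, logical_addr), []).append(phys)

    def delete_file(self, file_id):
        """Use the file identifier to locate all physical segments, including
        superseded copies, and erase them."""
        for (fid, laddr), phys_list in list(self.mapping.items()):
            if fid == file_id:
                for phys in phys_list:
                    del self.storage[phys]
                del self.mapping[(fid, laddr)]

c = Controller()
c.write("f1", 0, b"v1")
c.write("f1", 0, b"v2")   # same logical address, new physical segment
c.delete_file("f1")
print(len(c.storage))      # 0
```

Erasing the stale first copy matters for flash: without the file identifier, the delete would only reach the segment currently mapped to the logical address.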
20140052894 | MEMORY CONTROLLER FOR MEMORY WITH MIXED CELL ARRAY AND METHOD OF CONTROLLING THE MEMORY - A memory controller, a system including the memory controller, and a method of controlling the memory are disclosed. The memory controller receives requests for memory and content-sensitively allocates memory space in a mixed cell memory. The memory controller allocates sufficient space including performance memory storing a single bit per cell and dense memory storing more than one bit per cell. Some or all of the memory may be selectable by the memory controller as either Single Level per Cell (SLC) or Multiple Level per Cell (MLC). | 2014-02-20 |
20140052895 | MEMORY WITH MIXED CELL ARRAY AND SYSTEM INCLUDING THE MEMORY - A memory system, a system including the memory system, and a method of reducing memory system power consumption are disclosed. The memory system includes multiple memory units allocable to one of a number of processor units, e.g., processors or processor cores. A memory controller receives requests for memory from the processor units and allocates sufficient space from the memory to each requesting processor unit. Allocated memory can include some Single Level per Cell (SLC) memory units storing a single bit per cell and other memory units storing more than one bit per cell. Thus, two processor units may be assigned identical memory space, while one may be assigned half as many cells as the other, or fewer. | 2014-02-20 |
20140052896 | SYSTEM AND METHOD FOR EMULATING AN EEPROM IN A NON-VOLATILE MEMORY DEVICE - The invention relates to an electronic memory system, and more specifically, to a system for emulating an electrically erasable programmable read only memory in a non-volatile memory device, and a method of emulating an electrically erasable programmable read only memory in a non-volatile memory device. According to an embodiment, a system for emulating an electrically erasable programmable read only memory is provided, the system including a Flash memory, wherein the Flash memory is configurable into a first region and a second region, wherein the first region is adapted to store a first class of data and the second region is adapted to store a second, different class of data. | 2014-02-20 |
20140052897 | DYNAMIC FORMATION OF GARBAGE COLLECTION UNITS IN A MEMORY - Method and apparatus for managing data in a memory, such as but not limited to a flash memory. In accordance with some embodiments, a memory is provided with a plurality of addressable data storage blocks which are arranged into a first set of garbage collection units (GCUs). The blocks are rearranged into a different, second set of GCUs responsive to parametric performance of the blocks. | 2014-02-20 |
20140052898 | METHOD FOR MAPPING MANAGEMENT - A method for mapping management is disclosed. The steps of the method comprise: sending data from a host; programming the host data to a non-volatile storage device; updating a mapping address to a Physical Entry to Logical (PE2L) mapping table stored in a SRAM; updating a Physical Entry (PE) status table; checking if the PE2L mapping table is full; if no, looping to the step of programming the non-volatile storage device; if yes, removing invalid entries in the PE2L mapping table, updating the PE status table, and then running the next step; transferring part of the PE2L mapping table to a Logical to Physical (L2P) mapping table stored in the non-volatile storage device; and programming the L2P mapping table to the non-volatile storage device and looping to the step of removing invalid entries in the PE2L mapping table and updating the PE status table. | 2014-02-20 |
20140052899 | MEMORY ADDRESS TRANSLATION METHOD FOR FLASH STORAGE SYSTEM - A memory address translation method for a flash storage system is disclosed. There are two levels of mapping tables to reduce the overhead of mapping table management. In the level-one mapping table, each entry contains two kinds of information: one is the validity of the entry, called the Valid Mark, and the other is the location of the level-two mapping table. The level-one mapping table is always located in RAM and never saved into flash memory. In the level-two mapping table, each entry contains two kinds of information: one is the validity of the entry and the other is the physical location of the data in flash memory. The physical addresses of both the data and the level-two mapping table are dynamically determined. A level-two mapping table is loaded into RAM when it needs to be referenced, and is saved into flash memory periodically if its content has been updated. | 2014-02-20 |
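The two-level translation in 20140052899 walks a RAM-resident level-one table to find a level-two table, then reads the physical address from it, with a Valid Mark checked at each level. A minimal sketch, where the table sizes and entry layout are illustrative assumptions:

```python
# Hypothetical two-level logical-to-physical translation with Valid Marks.
# Entry layout, table sizes, and the store dict are assumptions for illustration.

L2_ENTRIES = 4   # logical pages covered by one level-two table (assumed)

# Level one lives only in RAM: index -> (valid_mark, level-two table location).
level_one = {}
# Stand-in for flash holding level-two tables:
# location -> list of (valid_mark, physical_addr) entries.
level_two_store = {}

def translate(logical_addr):
    """Walk level one, then the referenced level-two table; a cleared
    Valid Mark at either level means the address is unmapped."""
    l1_idx = logical_addr // L2_ENTRIES
    entry = level_one.get(l1_idx)
    if entry is None or not entry[0]:          # level-one Valid Mark unset
        return None
    l2_table = level_two_store[entry[1]]       # "loaded into RAM" on demand
    valid, phys = l2_table[logical_addr % L2_ENTRIES]
    return phys if valid else None

# Populate one level-two table at assumed location 7, referenced by level one.
level_two_store[7] = [(True, 100), (True, 101), (False, 0), (True, 103)]
level_one[0] = (True, 7)

print(translate(1))   # 101
print(translate(2))   # None
print(translate(9))   # None
```

Because level two is loaded on demand and flushed periodically, only the small level-one table must stay resident, which is the overhead reduction the abstract claims.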
20140052900 | MEMORY CONTROLLER FOR MEMORY WITH MIXED CELL ARRAY AND METHOD OF CONTROLLING THE MEMORY - A memory controller, a system including the memory controller, and a method of controlling the memory are disclosed. The memory controller receives requests for memory and content-sensitively allocates memory space in a mixed cell memory. The memory controller allocates sufficient space including performance memory storing a single bit per cell and dense memory storing more than one bit per cell. Some or all of the memory may be selectable by the memory controller as either Single Level per Cell (SLC) or Multiple Level per Cell (MLC). | 2014-02-20 |
20140052901 | MEMORY WITH MIXED CELL ARRAY AND SYSTEM INCLUDING THE MEMORY - A memory system, a system including the memory system, and a method of reducing memory system power consumption are disclosed. The memory system includes multiple memory units allocable to one of a number of processor units, e.g., processors or processor cores. A memory controller receives requests for memory from the processor units and allocates sufficient space from the memory to each requesting processor unit. Allocated memory can include some Single Level per Cell (SLC) memory units storing a single bit per cell and other memory units storing more than one bit per cell. Thus, two processor units may be assigned identical memory space, while one may be assigned half as many cells as the other, or fewer. | 2014-02-20 |
20140052902 | ELECTRONIC DEVICE AND METHOD OF GENERATING VIRTUAL UNIVERSAL SERIAL BUS FLASH DEVICE - An electronic device is connected to several USB devices. The electronic device divides a memory of each USB device into a plurality of memory blocks. Corresponding memory blocks of each USB device are combined into sectors. All the sectors form a virtual USB flash device. When data needs to be written to the virtual USB flash device, the data is divided into data blocks according to the size of each memory block. The data blocks are stored into the memory blocks of each sector. When the data needs to be read from the virtual USB flash device, the data blocks are read from the memory blocks of the corresponding sectors. The electronic device combines the data blocks into the complete data. | 2014-02-20 |
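The block-splitting scheme in 20140052902 resembles striping: data is cut into block-sized chunks and spread across the member devices, then recombined in order on read. A toy sketch, where the block size and list-backed "devices" are assumptions for illustration:

```python
# Hypothetical sketch of striping data blocks across several USB devices.
# The 4-byte block size and list-backed devices are illustrative assumptions.

BLOCK = 4  # bytes per memory block (assumed)

def write_virtual(devices, data):
    """Split data into BLOCK-sized chunks; chunk i goes to device i % n,
    mirroring 'data blocks stored into the memory blocks of each sector'."""
    chunks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    for i, chunk in enumerate(chunks):
        devices[i % len(devices)].append(chunk)
    return len(chunks)

def read_virtual(devices, n_chunks):
    """Read the chunks back in write order and recombine them."""
    out = []
    counters = [0] * len(devices)
    for i in range(n_chunks):
        dev = i % len(devices)
        out.append(devices[dev][counters[dev]])
        counters[dev] += 1
    return b"".join(out)

devs = [[], [], []]
n = write_virtual(devs, b"hello virtual usb")
print(read_virtual(devs, n))  # b'hello virtual usb'
```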
20140052903 | MEMORY SYSTEM AND BUS SWITCH - A memory system includes a nonvolatile memory having a plurality of nonvolatile memory chips incorporated therein, a control circuit that controls the nonvolatile memory, an MPU that controls the control circuit, and an interface circuit that communicates with a host, all of which are mounted on a board of the memory system, and the memory system further includes a bus switch that switches connection of a signal line between the control circuit and the nonvolatile memory chips. | 2014-02-20 |
20140052904 | SYSTEMS AND METHODS FOR RECOVERING ADDRESSING DATA - A memory includes first memory configured to store first data indicating relationships between logical addresses and respective physical addresses, wherein the physical addresses are arranged in a plurality of different groups, respective statuses of each of the plurality of different groups, and an activity log indicating when any of the respective statuses has changed. A second memory is configured to store second data in memory locations corresponding to the physical addresses and, in response to a respective status of one of the plurality of groups changing, store a portion of the first data corresponding to the one of the plurality of groups. A recovery module is configured to update, in response to the activity log indicating that the respective status of the one of the plurality of groups has changed, the first data with the portion of the first data corresponding to the one of the plurality of groups. | 2014-02-20 |
20140052905 | Cache Coherent Handshake Protocol for In-Order and Out-of-Order Networks - Disclosed herein is a processing network element (NE) comprising at least one receiver configured to receive a plurality of memory request messages from a plurality of memory nodes, wherein each memory request designates a source node, a destination node, and a memory location, and a plurality of response messages to the memory requests from the plurality of memory nodes, wherein each memory response designates a source node, a destination node, and a memory location, at least one transmitter configured to transmit the memory requests and memory responses to the plurality of memory nodes, and a controller coupled to the receiver and the transmitter and configured to enforce ordering such that memory requests and memory responses designating the same memory location and the same source node/destination node pair are transmitted by the transmitter in the same order received by the receiver. | 2014-02-20 |
20140052906 | MEMORY CONTROLLER RESPONSIVE TO LATENCY-SENSITIVE APPLICATIONS AND MIXED-GRANULARITY ACCESS REQUESTS - A multi-channel memory controller. | 2014-02-20 |
20140052907 | DATA DEDUPLICATION IN A REMOVABLE STORAGE DEVICE - An apparatus and associated methodology contemplate a data storage system having a removable storage device operably transferring data between the data storage system and another device via execution of a plurality of input/output (I/O) commands. A commonality factoring (CF) module executing computer instructions stored in memory assigns a CF tag to a data pattern in the transferred data. A deduplication module executing computer instructions stored in memory determines if the data pattern corresponding to the CF tag is previously stored in the removable storage device. | 2014-02-20 |
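The commonality-factoring deduplication described in 20140052907 assigns a CF tag to each data pattern and stores a pattern only if its tag has not been seen before. In this sketch the CF tag is simply a SHA-256 digest — an assumption on our part; the patent does not specify the tagging function.

```python
import hashlib

# Hedged sketch of CF-tag deduplication; the digest-as-tag choice is ours.
class DedupStore:
    def __init__(self):
        self.patterns = {}   # CF tag -> stored data pattern
        self.refs = []       # logical write stream as a sequence of CF tags

    def write(self, data: bytes) -> bool:
        tag = hashlib.sha256(data).hexdigest()
        duplicate = tag in self.patterns
        if not duplicate:
            # Pattern not previously stored on the device: store it once.
            self.patterns[tag] = data
        self.refs.append(tag)
        return duplicate     # True if the pattern was already present

    def read(self, index: int) -> bytes:
        # Reads resolve the tag back to the single stored copy.
        return self.patterns[self.refs[index]]
```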
20140052908 | METHODS AND STRUCTURE FOR NORMALIZING STORAGE PERFORMANCE ACROSS A PLURALITY OF LOGICAL VOLUMES - Methods and structure are disclosed for normalizing storage performance across a plurality of logical volumes. One embodiment is a storage controller. The storage controller is adapted to couple with a plurality of host systems and a storage device. The storage controller is adapted to receive one or more requests to create logical volumes for the plurality of host systems, and adapted to identify a plurality of performance zones for storage areas of the storage device. The performance zones exhibit different performance criteria for one or more of: reading data from the storage device and writing data to the storage device. The storage controller is further adapted to allocate storage from each of the plurality of performance zones for each of the plurality of logical volumes such that the performance criteria for accessing the storage device is distributed substantially uniformly across the plurality of logical volumes. | 2014-02-20 |
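The allocation idea in 20140052908 — give every logical volume a share of every performance zone so no volume lands entirely in a slow region of the device — reduces to striping each zone's extents across the volumes. The round-robin policy below is our illustrative construction, not the patent's.

```python
# Illustrative round-robin allocation of zone extents across volumes.
def allocate(zones, volumes, extents_per_zone):
    """zones: list of zone names; volumes: list of volume names.
    Returns a dict: volume -> list of (zone, extent index) allocations."""
    alloc = {v: [] for v in volumes}
    for zone in zones:
        for i in range(extents_per_zone):
            # Stripe each zone's extents across all volumes in turn,
            # so every volume draws from every performance zone.
            alloc[volumes[i % len(volumes)]].append((zone, i))
    return alloc
```

With two zones ("fast" outer tracks, "slow" inner tracks, say) and two volumes, each volume receives half of each zone, so aggregate access performance is roughly uniform across volumes.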
20140052909 | CONTROL SYSTEM AND METHOD FOR STORAGE CONFIGURATION - A control system for storage configuration of a first computer includes a switch apparatus and a storage module. The switch apparatus determines whether a second computer or a hard disk drive (HDD) is connected to a first interface of the switch apparatus. The second computer accesses the storage module of the first computer in response to the storage module being idle. The HDD is added to the storage of the first computer to expand the storage space of the first computer in response to the HDD being connected to the first interface. | 2014-02-20 |
20140052910 | STORAGE CONTROL DEVICE, STORAGE DEVICE, STORAGE SYSTEM, STORAGE CONTROL METHOD, AND PROGRAM FOR THE SAME - A storage control device controls a storage device that includes a first disk in an active state and a second disk in a standby state. The storage control device includes a communication unit and a control unit. The communication unit transmits a read-out request or a write request to the storage device and receives a response to the read-out request or the write request from the storage device. The control unit controls the communication unit so that the communication unit transmits, to the storage device, a rotation start command instructing the second disk to start rotating, when the time until a response is received to a read-out request or a write request transmitted to the first disk in the active state is longer than a predetermined threshold. | 2014-02-20 |
20140052911 | HYBRID CACHING SYSTEM - A system operable to: receive a request for an application unit from a first device; generate a key for the application unit; look up segment cache indices corresponding to the application unit, according to the key; and determine whether the segment cache indices are available. Where the segment cache indices are available, the system may retrieve a segment cache using the segment cache indices; and then retrieve the application unit using the retrieved segment cache. Otherwise, where the segment cache indices are not available, the system may communicate the request to a second device to receive a response from the second device including the segment indices. Further, the system may receive the response from the second device; store a segment index sequence for the application unit in an application optimizer cache based on the response; and retrieve the application unit via the segment index sequence. | 2014-02-20 |
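The lookup flow in 20140052911 — derive a key, check a local index cache, reassemble the application unit from cached segments on a hit, and fall back to a second device (caching the returned index sequence) on a miss — can be condensed as below. Every structure and name is an illustrative assumption.

```python
# Condensed sketch of the hybrid cache lookup flow; names are ours.
def fetch(app_unit_id, index_cache, segment_store, remote):
    key = ("key", app_unit_id)              # stand-in for key generation
    indices = index_cache.get(key)
    if indices is not None:
        # Hit: reassemble the unit from locally cached segments.
        return b"".join(segment_store[i] for i in indices)
    # Miss: ask the second device for the segment index sequence,
    # remember it in the optimizer cache, then assemble the unit.
    indices = remote(app_unit_id)
    index_cache[key] = indices
    return b"".join(segment_store[i] for i in indices)
```

On the second request for the same unit the remote round trip is skipped, which is the point of caching the index sequence rather than the unit itself.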
20140052912 | MEMORY DEVICE WITH A LOGICAL-TO-PHYSICAL BANK MAPPING CACHE - A memory device with a logical-to-physical (LTP) bank mapping cache that supports multiple read and write accesses is described herein. The memory device allows for at least one read operation and one write operation to be received during the same clock cycle. In the event that the incoming write operation is not blocked by the at least one read operation, data for that incoming write operation may be stored in the physical memory bank corresponding to a logical memory bank that is associated with the incoming write operation. In the event that the incoming write operation is blocked by the at least one read operation, then data for that incoming write operation may be stored in an unmapped physical bank that is not associated with any logical memory bank. | 2014-02-20 |
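The blocked-write path in 20140052912 can be modeled with a small mapping table: a write that collides with a concurrent read on its target bank is diverted to an unmapped spare physical bank, which then becomes the logical bank's new home, while the old physical bank becomes the spare. This toy model is our own reading, not the claimed circuit.

```python
# Rough model of diverting a blocked write to an unmapped spare bank.
class MappedBanks:
    def __init__(self, n_logical):
        # Physical banks 0..n_logical-1 are mapped 1:1 to logical banks;
        # physical bank n_logical starts out as the unmapped spare.
        self.map = {l: l for l in range(n_logical)}
        self.spare = n_logical
        self.data = {}

    def write(self, logical, value, blocked_by_read=False):
        if blocked_by_read:
            # Write to the spare bank and swap it into the mapping.
            self.data[self.spare] = value
            self.map[logical], self.spare = self.spare, self.map[logical]
        else:
            self.data[self.map[logical]] = value

    def read(self, logical):
        return self.data.get(self.map[logical])
```

The swap keeps the logical view consistent: subsequent reads of the logical bank are steered to whichever physical bank last absorbed the write.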
20140052913 | MULTI-PORTED MEMORY WITH MULTIPLE ACCESS SUPPORT - A multi-ported memory that supports multiple read and write accesses is described herein. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for at least one read operation and at least one write operation to be received during the same clock cycle. In the event that an incoming write operation is blocked by the at least one read operation, data for that incoming write operation may be stored in a cache included in the multi-port memory. That cache is accessible to both write operations and read operations. In the event that the incoming write operation is not blocked by the at least one read operation, data for that incoming write operation is stored in the memory bank targeted by that incoming write operation. | 2014-02-20 |
20140052914 | MULTI-PORTED MEMORY WITH MULTIPLE ACCESS SUPPORT - A multi-ported memory that supports multiple read and write accesses is described. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for read operation(s) and write operation(s) to be received during the same clock cycle. In the event that an incoming write operation is blocked by read operation(s), data for that write operation may be stored in one of a plurality of cache banks included in the multi-port memory. The cache banks are accessible to both write and read operations. In the event that the write operation is not blocked by read operation(s), a determination is made as to whether data for that incoming write operation is stored in the memory bank targeted by that incoming write operation or in one of the cache banks. | 2014-02-20 |
20140052915 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a plurality of cache memories, a plurality of processors configured to respectively access the plurality of cache memories, and a memory, in which each of the plurality of processors executes a program to function as a cache processing unit configured to perform cache processing including at least one of transfer to the memory and discard with respect to all the pieces of data stored in the cache memory. | 2014-02-20 |
20140052916 | Reduced Scalable Cache Directory - A processing network comprising a cache configured to store copies of memory data as a plurality of cache lines, a cache controller configured to receive data requests from a plurality of cache agents, and designate at least one of the cache agents as an owner of a first of the cache lines, and a directory configured to store cache ownership designations of the first cache line, and wherein the directory is encoded to support substantially simultaneous ownership of the first cache line by a plurality but less than all of the cache agents. Also disclosed is a method comprising receiving coherent transactions from a plurality of cache agents, and storing ownership designations of a plurality of cache lines by the cache agents in a directory, wherein the directory is configured to support storage of substantially simultaneous ownership designations for a plurality but less than all of the cache agents. | 2014-02-20 |
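The "reduced" directory in 20140052916 is encoded to track simultaneous ownership by a plurality, but fewer than all, of the cache agents — the classic space saving of a limited-pointer directory. The sketch below illustrates the capacity rule; the owner limit and the evict-oldest policy are our assumptions.

```python
# Sketch of a limited-owner (reduced) cache directory.
class LimitedDirectory:
    def __init__(self, max_owners):
        self.max_owners = max_owners   # K < number of cache agents
        self.owners = {}               # cache line -> list of owning agents

    def add_owner(self, line, agent):
        holders = self.owners.setdefault(line, [])
        if agent in holders:
            return []                  # already an owner; nothing to do
        evicted = []
        if len(holders) == self.max_owners:
            # Directory is full for this line: invalidate one holder
            # (oldest-first here) to make room for the new owner.
            evicted.append(holders.pop(0))
        holders.append(agent)
        return evicted                 # agents whose copies must be invalidated
```

Storing at most K owner pointers per line keeps the directory's size proportional to K rather than to the total agent count, at the cost of occasional forced invalidations when a line is more widely shared than K.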
20140052917 | USING A SHARED LAST-LEVEL TLB TO REDUCE ADDRESS-TRANSLATION LATENCY - The disclosed embodiments provide techniques for reducing address-translation latency and the serialization latency of combined TLB and data cache misses in a coherent shared-memory system. For instance, the last-level TLB structures of two or more multiprocessor nodes can be configured to act together as either a distributed shared last-level TLB or a directory-based shared last-level TLB. Such TLB-sharing techniques increase the total amount of useful translations that are cached by the system, thereby reducing the number of page-table walks and improving performance. Furthermore, a coherent shared-memory system with a shared last-level TLB can be further configured to fuse TLB and cache misses such that some of the latency of data coherence operations is overlapped with address translation and data cache access latencies, thereby further improving the performance of memory operations. | 2014-02-20 |
20140052918 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR MANAGING CACHE MISS REQUESTS - A system, method, and computer program product are provided for managing miss requests. In use, a miss request is received at a unified miss handler from one of a plurality of distributed local caches. Additionally, the miss request is managed, utilizing the unified miss handler. | 2014-02-20 |
20140052919 | SYSTEM TRANSLATION LOOK-ASIDE BUFFER INTEGRATED IN AN INTERCONNECT - System TLBs are integrated within an interconnect and share a transport network to connect to a shared walker port. Transactions are able to pass STLB allocation information through a second initiator side interconnect, in a way that interconnects can be cascaded, so as to allow initiators to control a shared STLB within the first interconnect. Within the first interconnect, multiple STLBs share an intermediate-level translation cache that improves performance when there is locality between requests to the two STLBs. | 2014-02-20 |
20140052920 | DOMAIN STATE - Method and apparatus to efficiently maintain cache coherency by reading/writing a domain state field associated with a tag entry within a cache tag directory. A value may be assigned to a domain state field of a tag entry in a cache tag directory. The cache tag directory may belong to a hierarchy of cache tag directories. | 2014-02-20 |
20140052921 | STORE-EXCLUSIVE INSTRUCTION CONFLICT RESOLUTION - A data processing system includes a plurality of transaction masters ( | 2014-02-20 |
20140052922 | RANDOM ACCESS OF A CACHE PORTION USING AN ACCESS MODULE - A data processing system having a first processor, a second processor, a local memory of the second processor, and a built-in self-test (BIST) controller of the second processor which can be randomly enabled to perform memory accesses on the local memory of the second processor and which includes a random value generator is provided. The system can perform a method including executing a secure code sequence by the first processor and performing, by the BIST controller of the second processor, BIST memory accesses to the local memory of the second processor in response to the random value generator. Performing the BIST memory accesses is performed concurrently with executing the secure code sequence. | 2014-02-20 |
20140052923 | PROCESSOR AND CONTROL METHOD FOR PROCESSOR - A processor includes a plurality of nodes arranged two dimensionally in the X-axis direction and in the Y-axis direction, and each of the nodes includes a processor core and a distributed shared cache memory. The processor also includes a first connecting unit and a second connecting unit. The first connecting unit connects adjacent nodes in the X-axis direction among the nodes, in a ring shape. The second connecting unit connects adjacent nodes in the Y-axis direction among the nodes, in a ring shape. The cache memories included in the respective nodes are divided into banks in the Y-axis direction. Coherency of the cache memories in the X-axis direction is controlled by a snoop system. The cache memories are shared by the nodes. | 2014-02-20 |
20140052924 | Selective Memory Scrubbing Based on Data Type - A method for minimizing soft error rates within caches by controlling a memory scrubbing rate selectively for a cache memory at an individual bank level. More specifically, the disclosure relates to maintaining a predetermined sequence and process of storing all modified information of a cache in a subset of ways of the cache, based upon for example, a state of a modified indication within status information of a cache line. A cache controller includes a memory scrubbing controller which is programmed to scrub the subset of the ways with the modified information at a smaller interval (i.e., more frequently) compared to the rest of the ways with clean information (i.e., information where the information stored within the main memory is coherent with the information stored within the cache). | 2014-02-20 |
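The selective-scrubbing policy of 20140052924 — scrub ways holding modified lines at a shorter interval than ways holding clean lines, since a soft error in a modified line destroys the only valid copy — can be expressed as a tiny scheduler. The concrete intervals below are illustrative, not the patent's values.

```python
# Toy scrub scheduler: dirty ways are visited more often than clean ways.
def ways_to_scrub(tick, dirty_ways, clean_ways,
                  dirty_interval=2, clean_interval=8):
    """Return the cache ways due for scrubbing at this tick."""
    due = []
    if tick % dirty_interval == 0:
        due += dirty_ways   # modified data: scrub frequently
    if tick % clean_interval == 0:
        due += clean_ways   # clean data: main memory holds a coherent copy
    return due
```

Keeping all modified lines confined to a known subset of ways (as the abstract describes) is what makes this split possible: the scrubber can concentrate its bandwidth on that subset instead of sweeping the whole cache at the worst-case rate.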