43rd week of 2014 patent application highlights part 65 |
Patent application number | Title | Published |
20140317264 | SYSTEM AND METHOD FOR DETERMINING WHETHER A COMPUTER DEVICE IS COMPATIBLE WITH A COMPUTER NETWORK - A system and method are provided for allowing an administrator to automatically determine whether networked computer devices are configured to use governance software that allocates resources in, or controls or restricts the access of other network devices to, certain portions of the networked storage based upon IT governance protocols, network efficiency and economics. To do this, a company server having governance software stored thereon polls a range of device addresses (e.g., IP addresses) specified by the network administrator or stored on a DNS server with a message formatted using protocols such as WebDAV, SMB/CIFS, FTP, etc., and specific to the governance software. If the device responds to the message, the address of the device, along with an indication that the device is compatible with the governance software, is stored in memory. | 2014-10-23 |
20140317265 | HARDWARE LEVEL GENERATED INTERRUPTS INDICATING LOAD BALANCING STATUS FOR A NODE IN A VIRTUALIZED COMPUTING ENVIRONMENT - A computing node includes at least one hardware layer comprising a plurality of hardware resources and at least one virtualization layer operative to manage at least one virtual machine defined by at least one resource from among the plurality of hardware resources. The computing node includes load balancing interrupt logic configured in the hardware layer of the node. The load balancing interrupt logic is operative to compare at least one resource utilization level of the plurality of hardware resources by the at least one virtual machine with at least one threshold. The load balancing interrupt logic is operative to generate at least one load balancing interrupt indicating at least one load balancing status of the computing node based on the comparison of the at least one resource utilization level with the at least one threshold. | 2014-10-23 |
20140317266 | Identification of Consumers Based on a Unique Device ID - Machines, systems and methods for identification of a consumer are provided. The method comprises capturing a unique identifier (ID) associated with a computing device, wherein the computing device is configured to access content stored on one or more content servers; and associating the unique ID with tracking data associated with the computing device, wherein, when the computing device submits a request to a content server to access content, in response to retrieving at least one of the unique ID or the tracking data of the computing device, the computing device is identified and content pages accessed by the computing device are tracked by a machine that is aware of the association between the unique ID and the tracking data for the computing device. | 2014-10-23 |
20140317267 | High-Density Server Management Controller - The described embodiments include a system management controller for managing servers on a plurality of sled devices. The system management controller includes a processing mechanism and a plurality of internal interfaces coupled to the processing mechanism. Each internal interface in the system management controller is coupled to at least one embedded management controller on each of the sled devices, the at least one embedded management controller on each sled device facilitating communications between the processing mechanism in the system management controller and a corresponding server on the sled device. In these embodiments, the processing mechanism in the system management controller is configured to manage one or more operations of the servers on the plurality of sled devices. | 2014-10-23 |
20140317268 | Automatic detection of optimal devices in a wireless personal network - The proposed embodiment provides a method and system for automatically detecting an optimal device over a network. The method includes receiving parameters associated with devices in the network, prioritizing the received parameters based on one or more rules, and detecting an optimal device based on the assigned priorities of the parameters associated with the devices. | 2014-10-23 |
20140317269 | Installation and Enforcement of Dynamic and Static PCC Rules in Tunneling Scenarios - A Policy and Charging Enforcement Function (PCEF) device of a network having a Policy and Charging Rules Function (PCRF) device. The PCEF device includes a processing unit that detects a tunneled packet and the packet's Internet Protocol version type and determines whether activation of PCC rules in accordance with the IP version type of the tunneled packet is required from the PCRF device. The PCEF device includes a network interface unit in communication with the processing unit and the network that requests from the PCRF device required activation of PCC rules and identifies the IP version type of the tunneled packet to the PCRF device with the request and receives from the PCRF device the PCC rules activation. The processing unit enforces the PCC rules on the tunneled packet. Methods of handling and enforcing rules at a PCEF device of a network and at a PCRF device are also disclosed. | 2014-10-23 |
20140317270 | SYSTEMS, METHODS, AND APPARATUS TO IDENTIFY MEDIA DEVICES - Systems, methods, and apparatus to identify media devices are disclosed. An example method includes determining an internet protocol address of a requesting device of a received network communication. A first lookup is performed to identify a media access control address of the requesting device based on the internet protocol address. Data identifying the network communication is stored in association with the media access control address. | 2014-10-23 |
20140317271 | METHOD AND NODE APPARATUS FOR COLLECTING INFORMATION IN CONTENT NETWORK BASED ON INFORMATION-CENTRIC NETWORKING - In a content network over which a plurality of smart nodes is coupled, each smart node receives advertisement messages broadcasted by adjacent smart nodes. The advertisement message may be one of a link state advertisement (LSA) message including link state information indicative of a link that is a network interface, a server state advertisement (SSA) message including server state information indicative of a data storage state and a processing state of a processing unit of the smart node, and a content state advertisement (CSA) message including content state information indicative of content stored in the smart node. Each smart node updates its own database based on the information included in the received advertisement message. | 2014-10-23 |
20140317272 | METHOD OF COLLECTING INFORMATION, CONTENT NETWORK MANAGEMENT SYSTEM, AND NODE APPARATUS USING MANAGEMENT INTERFACE IN CONTENT NETWORK BASED ON INFORMATION-CENTRIC NETWORKING - In a content network over which a plurality of smart nodes is coupled, a content network management system receives information response messages including pieces of management interface base (MIB) information from smart nodes. Next, the content network management system classifies the pieces of MIB information included in the received response messages into server resource information, topology information, and network resource information, and stores and manages the pieces of information. | 2014-10-23 |
20140317273 | DATACENTER BORDER-ISSUED ANALYTICS FOR MONITORING FEDERATED SERVICES - Technologies are generally described for providing datacenter border-issued analytics for monitoring federated services. In some examples, a deployment manager, which manages placement of application deployment instances across a federation and thus already knows which datacenter each instance is in, may register a package trigger with a gateway at each datacenter when an application is placed in each datacenter. The datacenter gateway(s) may then search through data packets for registered package properties such as content of a packet header that indicates it is a monitoring packet, and inject additional data according to instructions from the deployment manager. For example, the deployment manager may instruct the gateway(s) to inject a datacenter identifier or a network location identifier to each monitoring data packet. The additional data may be customer-defined and the modified monitoring data including the additional data may be sent to a monitoring system to be analyzed. | 2014-10-23 |
20140317274 | MONITORING USER ACTIVITY ON A MOBILE DEVICE - Monitoring user activity on a mobile device is described. In one aspect, video content is received and played to a user of the mobile device. The monitoring activity detects an interruption of playback of the video content and determines an event associated with the interruption. The event is stored in the mobile device and communicated to a remote device. | 2014-10-23 |
20140317275 | CONTENT DISPLAY MONITOR - The invention can enable monitoring of the display of content by a computer system. Moreover, the invention can enable monitoring of the displayed content to produce monitoring information from which conclusions may be deduced regarding the observation of the displayed content by an observer. The invention can also enable monitoring of the display at a content display site of content that is provided by a content provider site over a network to the content display site. Additionally, the invention can enable the expeditious provision of updated and/or tailored content over a network from a content provider site to a content display site so that the content provider's current and appropriately tailored content is always displayed at the content display site. Aspects of the invention related to transfer of content over a network are generally applicable to any type of network. However, it is contemplated that the invention can be particularly useful with a computer network, including private computer networks (e.g., America Online™) and public computer networks (e.g., the Internet). In particular, the invention can be advantageously used with computer networks or portions of computer networks over which video and/or audio content are transferred from one network site to another network site for observation, such as the World Wide Web portion of the Internet. | 2014-10-23 |
20140317276 | APPLICATION BASED DATA TRAFFIC ROUTING USING NETWORK TUNNELING - Various implementations described herein relate to routing network data traffic using network tunnels. In some implementations, one or more tunnels are established between a remote gateway device and a central gateway system. The remote gateway device can receive data traffic from one or more client devices and analyze the data traffic. Based at least in part on the resulting analysis, the remote gateway device identifies an application or an application type associated with the data traffic. The remote gateway device can select one or more select tunnels, from the one or more tunnels, based at least in part on the identification of the application or the application type associated with the data traffic. Eventually, the remote gateway device can route the data traffic to the central gateway system using the one or more select tunnels. | 2014-10-23 |
20140317277 | NETWORK INFRASTRUCTURE MANAGEMENT - In one embodiment, the present invention is a network infrastructure management system that allows monitoring and controlling network devices while dynamically discovering them on demand during the process. New management protocols can be dynamically added to the system or built on demand without refactoring the existing algorithms. | 2014-10-23 |
20140317278 | Traffic Analysis for HTTP User Agent Based Device Category Mapping - A traffic analysis system monitors data traffic in a communication network. In the data traffic, flows are detected which are based on the Hypertext Transfer Protocol (HTTP). For each of the flows, a data record is created. The data record comprises at least a User Agent identifier from a message header of an HTTP message of the flow and a device identifier of a user equipment transmitting the flow. The data records are analyzed to determine a mapping of at least one User Agent identifier in the data records to a corresponding device category. | 2014-10-23 |
20140317279 | IDENTIFICATION OF THE PATHS TAKEN THROUGH A NETWORK OF INTERCONNECTED DEVICES - The invention relates to a computer implemented method of identifying in a network of interconnected devices a path through the network from a source device to a final destination device, the path comprising a connected sequence of devices, the method comprising at a monitor computer connected to the network: identifying a first device connected to the source device; transmitting a first query to the first device, the query including a destination identifier and requesting identification of an egress port for messages addressed to the destination identified by the destination identifier when the query is received at the first device; receiving a result message identifying the egress port and identifying a second device connected to the first device based on a network topology accessible by the monitor computer; and addressing a next query to the second device and receiving a next result message identifying an egress port from the second device; and identifying from the network topology a third device connected to the second device, wherein the path is identified to include the first, second and third devices. | 2014-10-23 |
20140317280 | User Bandwidth Notification Model - A user bandwidth notification method is disclosed, where the method includes: detecting, by a network side, available bandwidth of a user and a bandwidth requirement of a service currently used by the user; comparing, by the network side, the available bandwidth of the user and the bandwidth requirement of the service currently used by the user; and notifying, by the network side, the user of a bandwidth condition according to a comparison result. By using the present invention, the user can explicitly know that current poor service experience is caused by a mismatch between a user bandwidth condition and a current service bandwidth requirement and may further take an appropriate measure for handling and solving the problem. | 2014-10-23 |
20140317281 | SYSTEM AND METHOD FOR THE APPLICATION OF PSYCHROMETRIC CHARTS TO DATA CENTERS - A system and method of displaying the temperature and relative humidity data of sensors on a psychrometric chart. The system and method operate to display an environmental envelope on the psychrometric chart in order to compare the data of the sensors to the environmental envelope of the psychrometric chart, in order to ensure safe operating conditions for data center equipment. | 2014-10-23 |
20140317282 | TECHNIQUES FOR MEASURING ABOVE-THE-FOLD PAGE RENDERING - Techniques for measuring above-the-fold (ATF) page rendering are provided. Visible objects for an ATF portion of a browser page are identified. A start and end time for each visible object is recorded. Furthermore, a total elapsed time to finish loading each of the visible objects to the ATF portion of a browser is determined. | 2014-10-23 |
20140317283 | FORECASTING CAPACITY AVAILABLE FOR PROCESSING WORKLOADS IN A NETWORKED COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach for forecasting a capacity available for processing a workload in a networked computing environment (e.g., a cloud computing environment). Specifically, aspects of the present invention provide service availability for cloud subscribers by forecasting the capacity available for running or scheduled applications in a networked computing environment. In one embodiment, capacity data may be collected and analyzed in real-time from a set of cloud service providers and/or peer cloud-based systems. In order to further increase forecast accuracy, historical data and forecast output may be post-processed. Data may be post-processed in a substantially continuous manner so as to assess the accuracy of previous forecasts. By factoring in actual capacity data collected after a forecast, and taking into account application requirements as well as other factors, substantially continuous calibration of the algorithm can occur so as to improve the accuracy of future forecasts and enable functioning in a self-learning (e.g., heuristic) mode. | 2014-10-23 |
20140317284 | MANAGING DATA USAGE OF A COMPUTING DEVICE - Example embodiments disclosed herein relate to managing data usage of a computing device. In example embodiments, an executable component of a computing device is managed, wherein the executable component is to communicate data with a network interface of the computing device. | 2014-10-23 |
20140317285 | INFORMATION PROCESSING SYSTEM - A system that displays the content of games in accordance with the impact on each of the games resulting from network states including an execution unit that executes at least one application from among multiple applications, a sending unit that sends to a terminal the result of the execution, a communication information acquisition unit that acquires communication information representing the communication states between the sending unit and the terminal, a suitability information storage unit that stores suitability information representing the suitability of executing each of the applications in relation to application identification information identifying each of the applications and to the scope of the communication information, a suitability information acquisition unit that acquires the suitability information based on the acquired communication information, and a display that generates display information for displaying part or all of the content of the multiple applications in accordance with the acquired suitability information. | 2014-10-23 |
20140317286 | MONITORING COMPUTER AND METHOD - The aim is to achieve a balance between reducing the disk capacity required to maintain measurement data and retaining the measurement data necessary to analyze events. A monitoring computer stores measurement data about a monitoring target computer at a plurality of points in time in a storage device, specifies an event, which has occurred at the monitoring target computer, and event occurrence time based on the measurement data, and selects part of the measurement data at the plurality of points in time as a deletion target in consideration of the measurement data which should not be deleted, based on a capacity of the storage device or a predetermined retention period of the measurement data, and a deletion exception period calculated from the event occurrence time. | 2014-10-23 |
20140317287 | PROCESSING EVENT DATA STREAMS - A computer ( | 2014-10-23 |
20140317288 | DETERMINATION OF A QUALITY INDUCED TERMINATION RATE OF COMMUNICATION SESSIONS - This invention relates to methods and an apparatus for detecting quality induced terminations of media streams of real-time communication sessions within a packet-switched network. To enable the determination of the quality of live media streams passing by the tapping points in a network, and to use this information to determine media transmissions being aborted due to bad quality, the invention evaluates quality data records of a terminated media stream to detect whether the media stream was terminated due to bad quality or another reason. Advantageously, a threshold number of quality data records that were generated for the media stream just before termination thereof are considered in the evaluation. In case each of this threshold number of quality data records yields a bad quality of the media stream, the termination of the media stream may be judged or assumed to be induced due to bad quality. | 2014-10-23 |
20140317289 | DYNAMICALLY AFFINITIZING USERS TO A VERSION OF A WEBSITE - Systems and methods for providing users access to a particular version of an electronic resource (e.g., a website, web resource or the like), where versions of such electronic resources are stored across a set of servers, are disclosed. In one embodiment, user requests may be received, either requesting a particular version or as unversioned requests. A version control module (for example, a load balancer) may receive these requests and assign a request to a first server according to different metrics, e.g., regarding version control rules and/or effective load balancing considerations. If the initially assigned server is not able to handle the request, the request may be proxied to another server, according to different metrics. If no server can handle the request (after a certain number of proxied requests), the request may be returned to the user as not handled. | 2014-10-23 |
20140317290 | DEVICE-TO-DEVICE TAPPING SERVICE LAYER - Embodiments of the present disclosure describe device, methods, computer-readable media and system configurations for providing a device-to-device (“D2D”) tapping service (“DTS”) layer. In various embodiments, a DTS layer of a communication stack of a computing device may receive, from an application executing within an application layer of the communication stack, a request for a resource. In various embodiments, the DTS layer may determine whether the resource is available locally on the computing device. In various embodiments, the DTS layer may issue a domain name system (“DNS”) request through a network layer of the communication stack to facilitate transparent access by the application to the resource on a remote computing device, where it is determined that the resource is unavailable locally on the computing device. Other embodiments may be described and/or claimed. | 2014-10-23 |
20140317291 | SYSTEM AND METHOD FOR INTEGRATING TWO APPLICATION SERVER CONTAINERS IN A TELECOMMUNICATION NETWORK - A system for integrating a first application server container with a second application server container in a telecommunication network. The system comprises a bidirectional messaging bus having a first end and a second end; a first messaging adapter conforming to the first application server container, the first messaging adapter communicably connected to the first end of the messaging bus and deployed in the first application server container; and a second messaging adapter conforming to the second application server container, the second messaging adapter communicably connected to the second end of the messaging bus and deployed in the second application server container. | 2014-10-23 |
20140317292 | Distributed Multiple-tier Task Allocation - Described is a system and methods for multiple tier distribution of task portions for distributed processing. Essentially, a task is divided into portions by a first computer and a task portion transferred to a second participatory computer on the network, whereupon an allocated task portion is again portioned by the second computer into subtask portions, and a subtask portion transferred by the second computer to a third participatory computer on the network, whereby distributed processing transpires, and results collated as required. | 2014-10-23 |
20140317293 | APP STORE PORTAL PROVIDING POINT-AND-CLICK DEPLOYMENT OF THIRD-PARTY VIRTUALIZED NETWORK FUNCTIONS - In one embodiment, a method comprises receiving by an apparatus, via a wide area network, a request for deployment of a selected one of available virtualized network services advertised by the apparatus, the request identifying a host service provider to deploy the one virtualized network service; identifying, by the apparatus, virtualized network functions required by the host service provider for implementation of the one virtualized network service, each virtualized network function having a corresponding and distinct virtualized container specifying attributes for defining execution of the corresponding virtualized network function within one or more physical machines of the host service provider; and sending to the host service provider, by the apparatus, a service container specifying instructions for deploying the one virtualized network service, the service container including instructions for deploying the virtualized network functions as interdependent for implementation of the one virtualized network service by the host service provider. | 2014-10-23 |
20140317294 | BANDWIDTH ALLOCATION FOR SHARED NETWORK INFRASTRUCTURE - Methods and systems are provided for adaptive management of local networks (e.g., in-premises networks, which may access or be connected to cable or satellite networks). A network device (e.g., a gateway device) may be configured to function as a network manager in a local network, to manage internal connections and/or communications within the local network. The managing may comprise assessing effects of the internal connections and/or communications on external connections and/or communications with one or more devices and/or networks external to the local network; and setting and/or adjusting based on the assessed effects, one or more communication parameters associated with each one of the internal connections and/or communications. The effects of the internal connections and/or communications may result from utilizing one or more physical mediums that are shared with and/or are commonly used by the external connections and/or communications with one or more devices and/or networks external to the local network. | 2014-10-23 |
20140317295 | Allocating a Pool of Shared Bandwidth - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for allocating a pool of shared Internet bandwidth. One of the methods includes providing a first communications channel having a first bandwidth, the first bandwidth being shared by a first group of first users, providing a second communications channel having a second bandwidth different than the first bandwidth, the second bandwidth being shared by a second group of second users, detecting that at least one first data connection for a particular first user in the first group has satisfied a first predetermined condition, and moving, based on the detecting, the at least one first data connection for the particular first user from the first communications channel to the second communications channel. | 2014-10-23 |
20140317296 | ALLOCATING INTERNET PROTOCOL (IP) ADDRESSES TO NODES IN COMMUNICATIONS NETWORKS WHICH USE INTEGRATED IS-IS - Previously it has only been possible to allocate unique internet protocol (IP) addresses to nodes in open systems interconnection (OSI) communications networks such as those using integrated IS-IS, by manual configuration. This is time consuming and expensive because an operator must travel to the site of the node. By exploiting features of the OSI routing protocol the present invention enables IP addresses to be automatically allocated to the new network nodes. This is particularly advantageous for new intermediate systems such as optical multiplexers with integral routers. Once an IP address has been allocated, the node can be managed by a remote management system or operator using internet protocol methods. | 2014-10-23 |
20140317297 | COMPUTER SYSTEM AND MANAGEMENT METHOD FOR THE COMPUTER SYSTEM AND PROGRAM - Even when a configuration in which instances of plural kinds of storage management software having equivalent functions are arranged to cooperatively manage a large-scale storage system is adopted, the aim is to prevent management inoperability and configuration information inconsistency and to enable the same management operations and information references as those performed when all management target objects are managed by a single instance. In the present invention, a representative management computer serving as a representative among the management computers is determined. The representative management computer collects, from storage apparatuses and host computers, information concerning the management target objects and configuration summary information including a relation type among the objects and determines, on the basis of the configuration summary information, management target objects which each of the management computers should take charge of. | 2014-10-23 |
20140317298 | ASSET SHARING WITHIN AN ENTERPRISE USING A PEER-TO-PEER NETWORK - An approach for sharing an asset in a peer-to-peer network is provided. After determining a locally stored first list does not include meta data specifying the asset, a new node is identified. In response to receiving a subscription from the new node, a second list locally stored at the new node is received. The second list includes the meta data and an identification of a source node that has the asset. The first list is updated to include the meta data and the identification of the source node. The updated first list is searched and in response, the meta data and the identification of the source node are detected. Based on the detected meta data and identification, the source node is identified. A request to retrieve the asset is sent to the source node, and in response, the asset is received. | 2014-10-23 |
20140317299 | USING TEMPLATES TO CONFIGURE CLOUD RESOURCES - The present invention extends to methods, systems, and computer program products for using templates to configure cloud resources. Embodiments of the invention include encapsulating cloud configuration information in an importable/exportable node template. Node templates can also be used to bind groups of nodes to different cloud subscriptions and cloud service accounts. Accordingly, managing the configuration of cloud based resources can be facilitated through an interface at a (e.g., high performance) computing component. Templates can also specify a schedule for starting/stopping instances running within a resource cloud. | 2014-10-23 |
20140317300 | Methods and Network Nodes for Controlling Resources of a Service Session as Well as Corresponding System and Computer Program - The present invention relates to methods and network nodes for controlling resources of a service session in a communication network, as well as to a corresponding system and computer program, to improve the handling of resources in the network, and particularly to optimize signaling in the network. The method for controlling resources for a service session by a policy and charging system in a communication network comprises the steps of obtaining, at a first network node, a request including service session data indicating the type of service; determining, based on the service session data obtained at said first network node, a resource type to be assigned to said service; and sending to a second network node an indication of said resource type assigned to said service, according to which resource type it is determined when a service session associated with said service is terminated. | 2014-10-23 |
20140317301 | SYSTEMS AND METHODS FOR ESTABLISHING TELECOMMUNICATION CONNECTION BETWEEN A REQUESTER AND AN INTERPRETER - A representative telecommunication system that establishes communication between an interpreter and a requester is disclosed herein comprising a plurality of computing devices associated with at least one interpreter and at least one requester; a network that interconnects the plurality of computing devices; and a match server that is interconnected to the plurality of computing devices by way of the network. The match server includes a processing device, and memory including a match manager which has instructions that are executed by the processing device. The instructions include the following logics: establish connection between the match server and the computing device associated with the interpreter; assess a request for an interpreter having at least one language interpretation and for an availability of the interpreter; and establish a telecommunication connection between the plurality of the computing devices associated with the interpreter and requester based on the connection established between the match server and the computing device associated with the interpreter, and the assessment of the request for the interpreter having the at least one language interpretation and for an availability of the interpreter. | 2014-10-23 |
20140317302 | VIRTUAL COLLABORATION SESSION ACCESS - Methods are provided that include receiving a request to couple a first client device to a communication session, wherein the request includes user identification information. The method may include determining a number of client devices coupled to the communication session and comparing the number of client devices coupled to the communication session to a maximum number of client devices to determine whether the maximum number of client devices are coupled to the communication session. The method may also include when the maximum number of client devices are coupled to the communication session, determining whether a user associated with the first client device is a preferred user based on at least the user identification information and when the user is the preferred user, coupling the client device associated with the preferred user to the communication session. | 2014-10-23 |
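The admission logic described in the abstract above can be sketched briefly: when a session is at its maximum client count, only a "preferred" user may still be coupled to it. The class and names below (`Session`, `max_clients`, `preferred_users`) are illustrative assumptions, not the patented implementation.

```python
class Session:
    """Toy model of collaboration-session admission with a preferred-user override."""

    def __init__(self, max_clients, preferred_users=()):
        self.max_clients = max_clients
        self.preferred = set(preferred_users)   # user IDs allowed to exceed capacity
        self.clients = []                       # user IDs currently coupled

    def request_join(self, user_id):
        """Couple a client unless the session is full and the user is not preferred."""
        if len(self.clients) < self.max_clients:
            self.clients.append(user_id)
            return True
        # Session is at capacity: admit only preferred users.
        if user_id in self.preferred:
            self.clients.append(user_id)
            return True
        return False
```

For example, with `max_clients=2` and `preferred_users={"host"}`, two ordinary users fill the session, a third ordinary user is rejected, but `"host"` is still admitted.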
20140317303 | APPLICATION LAUNCHING IN CONJUNCTION WITH AN ACCESSORY - An application can be launched in response to a launch request from an accessory. For example, the mobile computing device can determine whether it is in a state that allows launching of an application and/or can determine whether the application or application type requested in the launch command is available for launching. In response to the request, and if the mobile computing device is capable, the mobile computing device can launch the application. The mobile computing device can also send a positive acknowledgment message to the accessory indicating that the application may be launched. An open communication session message may also be sent to the accessory. In response thereto the accessory can open a communication session and interoperate with the application. | 2014-10-23 |
20140317304 | RUNTIME TUPLE ATTRIBUTE COMPRESSION - A method, system, and computer program product for initializing a stream computing application are disclosed. The method may include receiving a plurality of tuples to be processed by one or more processing elements operating on one or more computer processors. Each processing element may have one or more stream operators. The method may also include determining a first attribute to be processed at a first stream operator that is configured to transmit a tuple having the first attribute along an execution path including at least one intervening stream operator to a second stream operator. The method may include compressing the first attribute when the first attribute is to be next processed by the second stream operator. | 2014-10-23 |
20140317305 | COMPILE-TIME TUPLE ATTRIBUTE COMPRESSION - A method, system, and computer program product for initializing a stream computing application are disclosed. The method may include, during a compiling of code, determining whether an attribute of a tuple to be processed at a first stream operator is to be next processed at a second stream operator. The first stream operator may be configured to transmit the tuple along an execution path to the second stream operator. The execution path includes one or more intervening stream operators between the first and second stream operators. The method may invoke a compression condition when the first attribute of the tuple to be processed at the first stream operator is to be next processed at the second stream operator. | 2014-10-23 |
20140317306 | Fragment Interface Into Dynamic Adaptive Streaming Over Hypertext Transfer Protocol Presentations - A method of Dynamic Adaptive Streaming over Hypertext Transfer Protocol (HTTP) (DASH) comprising accessing a DASH media presentation at a given time of a period on a media timeline of the DASH media presentation, and determining one or more parameters to express a state of the DASH media presentation, wherein the parameters comprise a temporal parameter that indicates the given time, and wherein the given time is relative to a start of the period. | 2014-10-23 |
20140317307 | Period Labeling in Dynamic Adaptive Streaming Over Hypertext Transfer Protocol - A method of Dynamic Adaptive Streaming over Hypertext Transfer Protocol (HTTP) (DASH) comprising receiving an asset that comprises a media presentation described in a media presentation description (MPD), wherein the media presentation comprises one or more periods, and wherein each period comprises at least one adaptation set, and identifying the asset on a period level using one or more asset identifiers specified in the MPD. | 2014-10-23 |
20140317308 | Media Quality Information Signaling In Dynamic Adaptive Video Streaming Over Hypertext Transfer Protocol - A media representation adaptation method comprising obtaining a media presentation description (MPD) that comprises instructions for retrieving a plurality of media segments and their quality information, sending a quality information request, receiving the quality information that comprises a plurality of quality segments, selecting a media segment based on the quality information, sending a media segment request that requests the media segment, and receiving the media segment. A computer program product that when executed by a processor causes a network device to obtain an MPD that comprises instructions for retrieving a media content stream and quality information, determine a quality level threshold, request quality information associated with the media content stream, receive the quality information, select a media segment with a corresponding quality segment that is greater than the quality level threshold, send a media segment request that requests the media segment, and receive the media segment. | 2014-10-23 |
20140317309 | SYSTEM AND DEVICES FACILITATING DYNAMIC NETWORK LINK ACCELERATION - A peer to peer dynamic network acceleration method and apparatus provide enhanced communications directly between two or more enhanced devices, such as enhanced clients. The enhanced clients may comprise a front-end, a back-end, or both. In general, the front-end and back-end of the enhanced clients work in concert to translate data into an enhanced protocol for communication between the enhanced clients. The enhanced protocol may provide acceleration, security, error correction, and other benefits. Data from various applications may be seamlessly translated between a first protocol and the enhanced protocol, such that the applications need not be modified to use the enhanced protocol. The enhanced clients may automatically detect one another to establish an enhanced communications channel automatically. | 2014-10-23 |
20140317310 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing device receives a user ID, adds domain information to the received user ID after a user is successfully authenticated based on the received user ID, sets the domain information and the user ID as a part of path information of a folder to which image data is sent, and transmits the image data to the folder indicated by the path information. | 2014-10-23 |
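The path construction described in the abstract above, where the domain information and user ID become part of the destination folder path, can be sketched in a few lines. The base path, separator, and example values are assumptions for illustration only.

```python
def build_destination_path(base, domain, user_id):
    """Set the domain information and user ID as part of the folder path
    to which image data is sent, per the scheme the abstract describes."""
    return f"{base}/{domain}/{user_id}"
```

For example, an authenticated user `u123` in domain `sales` on a hypothetical file server would be sent to `smb://fileserver/sales/u123`.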
20140317311 | COMMUNICATION ROUTING PLANS THAT ARE BASED ON COMMUNICATION DEVICE CONTACT LISTS - In a communication system, a processing system receives contact lists with contact items from wireless communication devices and processes the contact lists to identify a redundant contact item. In response, the processing system transfers a notification to a master wireless communication device. The processing system receives a response from the master wireless communication device and processes the response to generate and transfer a routing instruction. A routing system receives the routing instruction. The routing system receives a message for the master wireless communication device, and in response, selects another wireless communication device for the message based on the routing instruction and transfers the message to the other wireless communication device. | 2014-10-23 |
20140317312 | System and Methods for Dynamic Network Address Modification - The invention presented herein permits split-routing to occur without any changes, modifications, or configuration of the requesting host, network stacks, network architectures and routing and forwarding behavior. The invention is carried out by way of a Module that intercepts the normal and standard DHCP communication between a requesting device and a DHCP server, and substitutes the elements within the server response with the Module's own predefined elements. These substitute elements leverage the behavior of standard protocols to gain desired device network behavior. | 2014-10-23 |
20140317313 | NAT SUB-TOPOLOGY MANAGEMENT SERVER - In a network where network address translation (NAT) has been introduced, a problem occurs in which, when an IP host operating in a network is automatically categorized with automatic IP host discovery using an ARP cache, a plurality of IP hosts with the same IP address are recognized as one IP host by NAT. To resolve this problem, a network management server specifies network sub-topology on the basis of topology information, public addresses translated by NAT, and IP host corresponding relationships. | 2014-10-23 |
20140317314 | METHOD AND APPARATUS FOR ASSIGNING A LOGICAL ADDRESS IN A COMMUNICATION SYSTEM - A method and a system for assigning a unique logical address to a mobile station in a cloud cell are provided. The method includes selecting, by the master base station, a unique logical address from an associated set of addresses, wherein the set of addresses is a subset of a common address space, and assigning the unique logical address to the mobile station so that the mobile station and each of the plurality of base stations communicate in the cloud cell using the assigned unique logical address. | 2014-10-23 |
20140317315 | COMPUTING INFRASTRUCTURE - An affordable, highly trustworthy, survivable and available, operationally efficient distributed supercomputing infrastructure for processing, sharing and protecting both structured and unstructured information. A primary objective of the SHADOWS infrastructure is to establish a highly survivable, essentially maintenance-free shared platform for extremely high-performance computing (i.e., supercomputing)—with “high performance” defined both in terms of total throughput and in terms of very low latency (although not every problem or customer necessarily requires very low latency)—while achieving unprecedented levels of affordability. At its simplest, the idea is to use distributed “teams” of nodes in a self-healing network as the basis for managing and coordinating both the work to be accomplished and the resources available to do the work. The SHADOWS concept of “teams” is responsible for its ability to “self-heal” and “adapt” its distributed resources in an “organic” manner. Furthermore, the “teams” themselves are at the heart of decision-making, processing, and storage in the SHADOWS infrastructure. Everything that's important is handled under the auspices and stewardship of a team. | 2014-10-23 |
20140317316 | MINIMIZING LATENCY FROM PERIPHERAL DEVICES TO COMPUTE ENGINES - Methods, systems, and computer program products are provided for minimizing latency in an implementation where a peripheral device is used as a capture device and a compute device such as a GPU processes the captured data in a computing environment. In embodiments, a peripheral device and GPU are tightly integrated and communicate at a hardware/firmware level. Peripheral device firmware can determine and store compute instructions specifically for the GPU in a command queue. The compute instructions in the command queue are understood and consumed by firmware of the GPU. The compute instructions include but are not limited to generating low latency visual feedback for presentation to a display screen, and detecting the presence of gestures to be converted to OS messages that can be utilized by any application. | 2014-10-23 |
20140317317 | ASSIGNING PRIORITIES TO DATA FOR HYBRID DRIVES - A hybrid drive includes multiple parts: a performance part (e.g., a flash memory device) and a base part (e.g., a magnetic or other rotational disk drive). A drive access system, which is typically part of an operating system of a computing device, issues input/output (I/O) commands to the hybrid drive to store data to and retrieve data from the hybrid drive. The drive access system assigns, based on various available information, a priority level to groups of data identified by logical block addresses (LBAs). With each I/O command, the drive access system includes an indication of the priority level of the LBA(s) associated with the I/O command. The hybrid drive determines, based on the priority level indications received from the drive access system, which LBAs are stored on which part or parts of the hybrid drive. | 2014-10-23 |
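The tagging step described in the hybrid-drive abstract above, where the drive access system attaches a priority level for the associated LBAs to each I/O command, can be sketched as follows. The priority table, LBA grouping granularity, and command dictionary format are assumptions for illustration, not the patented design.

```python
PRIORITY = {"low": 0, "normal": 1, "high": 2}
LBAS_PER_GROUP = 2048  # assumed grouping granularity (1 MiB of 512-byte blocks)

class DriveAccessSystem:
    """Toy model: assigns priority levels to LBA groups and tags I/O commands."""

    def __init__(self):
        self.lba_priority = {}  # LBA group index -> priority level

    def assign_priority(self, lba_group, level):
        self.lba_priority[lba_group] = PRIORITY[level]

    def build_io_command(self, op, lba, length):
        # Include the priority indication for the LBA(s) with the I/O command.
        group = lba // LBAS_PER_GROUP
        prio = self.lba_priority.get(group, PRIORITY["normal"])
        return {"op": op, "lba": lba, "len": length, "priority": prio}
```

The hybrid drive could then use the `priority` field of each command to decide whether the data belongs on the flash part or the rotational part.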
20140317318 | DEVICE AND METHOD FOR UPDATING FIRMWARE OF A PERIPHERAL DEVICE - An electronic device executes a software application that includes instructions for updating firmware of a peripheral device and one or more firmware images. The electronic device executes the firmware update instructions to initiate the firmware update of the peripheral device and transfers a firmware image from the software application to the peripheral device according to a response from the peripheral device. The software application sends information to the peripheral device for verifying the transferred firmware image and causes the peripheral device to use the transferred firmware image upon successful verification. | 2014-10-23 |
20140317319 | DISPLAY DEVICE, PROJECTOR, DISPLAY SYSTEM, AND METHOD OF SWITCHING DEVICE - A display device is capable of switching a function of an indication body in accordance with the need of the user in the case in which the indication body is made to function as a pointing device. The display device is provided with a function device having a first interface. The configuration information of the first interface is stored in the storage section, and is supplied to a host device by a supply section via the function device. A change section is capable of changing the configuration information in accordance with the operation of the user received by a reception section. | 2014-10-23 |
20140317320 | UNIVERSAL SERIAL BUS DEVICES SUPPORTING SUPER SPEED AND NON-SUPER SPEED CONNECTIONS FOR COMMUNICATION WITH A HOST DEVICE AND METHODS USING THE SAME - A Universal Serial Bus (USB) device supporting super speed and non-super speed connections for communication with a host device includes a plurality of endpoints (EPs), a non-super speed connection port, a super speed connection port and a configuration unit. The non-super speed connection port and the super speed connection port are connected to the host device. The configuration unit is arranged for dividing the EPs into first and second groups of EPs according to a bandwidth requirement, determining whether a super speed connection with the host device is successfully established and configuring the first group of EPs to operate at a super speed and configuring the second group of EPs to operate at a non-super speed when the super speed connection with the host device is successfully established such that the USB device communicates with the host device at both the super speed and the non-super speed. | 2014-10-23 |
20140317321 | SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD - A signal processing device includes an operation control unit configured to control a timing of an operation process executed by an operation unit; and a transfer control unit configured to control a timing of transferring data that is a target of the operation process, such that the data that is the target of the operation process is loaded by the operation unit according to the timing of the operation process controlled by the operation control unit. | 2014-10-23 |
20140317322 | METHOD FOR OPERATING A COMMUNICATION SYSTEM - A method for transmitting frames containing data between users of a ring-shaped communication system which has a master and at least one slave as users. Each user has at least one interrupt register, and one field of the at least one interrupt register is associated with an interrupt request and includes a value for an interrupt bit. An interrupt request which includes the interrupt bit is transmitted to the master by a slave in a frame designed as an empty frame. In addition, the empty frame has a toggle bit for all slaves which indicates the state of an interrupt request. | 2014-10-23 |
20140317323 | Method and Apparatus for Arbitration with Multiple Source Paths - A method and apparatus for arbitration. In one embodiment, a point in a network includes first and second arbiters. Arbitration of transactions associated with an address within a first range are conducted in the first arbiter, while arbitration of transactions associated with an address within a second range are conducted in the second arbiter. Each transaction is one of a number of different transaction types having a respective priority level. A measurement circuit is coupled to receive information from the first and second arbiters each cycle indicating the type of transactions that won their respective arbitrations. The measurement circuit may update a number of credits associated with the types of winning transactions. The updated number of credits may be provided to both the first and second arbiters, and may be used as a basis for arbitration in the next cycle. | 2014-10-23 |
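The credit mechanism described in the arbitration abstract above can be sketched as follows: each cycle the measurement circuit observes which transaction types won arbitration and debits their credits, and the arbiters prefer the candidate type with the most remaining credits next cycle. The initial credit values, transaction type names, and refresh policy are assumptions for illustration only.

```python
INITIAL_CREDITS = {"read": 4, "write": 4, "snoop": 4}  # assumed starting credits

class MeasurementCircuit:
    """Toy model of per-transaction-type credit tracking for two arbiters."""

    def __init__(self):
        self.credits = dict(INITIAL_CREDITS)

    def record_winners(self, winner_types):
        """Debit one credit per winning transaction type observed this cycle."""
        for t in winner_types:
            if self.credits[t] > 0:
                self.credits[t] -= 1

    def refresh(self):
        """Replenish credits, e.g., at the end of an arbitration window."""
        self.credits = dict(INITIAL_CREDITS)

    def arbitrate(self, candidate_types):
        """Prefer the candidate type holding the most remaining credits."""
        return max(candidate_types, key=lambda t: self.credits[t])
```

After `"read"` wins a few cycles, its credits drop and a competing `"write"` transaction is favored, which is the starvation-avoidance effect the abstract's credit updates suggest.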
20140317324 | INTERRUPT CONTROL SYSTEM AND METHOD - An interrupt control system includes a plurality of interrupt sources and a processor. Each interrupt source when activated includes a flag bit. The processor includes a parallel port with multiple pins and a decoding module. The different interrupt sources are connected to different pins of the parallel port. The parallel port thus receives different codes when different interrupt sources generate an interrupt. The decoding module decodes the code received by the parallel port to establish the interrupt source which has generated the interrupt. | 2014-10-23 |
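The decoding step described in the interrupt-control abstract above is essentially a bitmask lookup: each interrupt source drives one pin of the parallel port, so the value read from the port identifies which sources are active. The pin-to-source assignments below are assumptions for illustration.

```python
# Assumed example mapping of parallel-port pins to interrupt sources.
PIN_TO_SOURCE = {0: "uart", 1: "timer", 2: "gpio", 3: "dma"}

def decode_interrupt(port_value, pin_count=8):
    """Return the list of interrupt sources whose flag bits are set in the
    code received by the parallel port."""
    active = []
    for pin in range(pin_count):
        if port_value & (1 << pin):
            active.append(PIN_TO_SOURCE.get(pin, f"pin{pin}"))
    return active
```

For instance, a port value of `0b0101` indicates that the sources on pins 0 and 2 both raised an interrupt.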
20140317325 | WARNING TRACK INTERRUPTION FACILITY - A program (e.g., an operating system) is provided a warning that it has a grace period in which to perform a function, such as cleanup (e.g., complete, stop and/or move a dispatchable unit). The program is being warned, in one example, that it is losing access to its shared resources. For instance, in a virtual environment, a guest program is warned that it is about to lose its central processing unit resources, and therefore, it is to perform a function, such as cleanup. | 2014-10-23 |
20140317326 | INHIBITION DEVICE, METHOD FOR CONTROLLING INHIBITION DEVICE, DEVICE UNDER CONTROL, ELECTRONIC EQUIPMENT, AND COMPUTER READABLE STORAGE MEDIUM - An inhibition device includes: a location information obtaining section that obtains, from a computing device, information on a touch location; an operation determining section that determines, in accordance with the information on the touch location, whether or not an operation of a user is an operation for causing the computing device to execute a predetermined process; and an inhibition information transmitting section that transmits inhibition information. | 2014-10-23 |
20140317327 | SOFTWARE DEBOUNCING AND NOISE FILTERING MODULES FOR INTERRUPTS - Systems and methods for debouncing a signal line within a computer device are provided. The mechanical nature of physical buttons and switches oftentimes present irregular or noisy signals on a signal line when depressed by a user. Thus, noise and/or irregular waveforms may be present on a signal line that is monitored to produce interrupt signals, when deemed valid and genuine. In many embodiments given herein, debounce modules and techniques set a debounce interval timer and/or a noise filtering interval timer in which debounce modules and/or techniques may note whether the signal line is still asserted (e.g., possibly a genuine interrupt signal) during the debounce interval timer and stable (e.g., no further interrupts have fired) during the noise filtering interval timer. | 2014-10-23 |
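The two-timer scheme described in the debouncing abstract above can be sketched in software: an edge counts as a genuine interrupt only if the line is still asserted when the debounce interval expires and no deassertion edges fire during the noise-filtering interval that follows. The timing constants and the sampled-edge interface are assumptions for illustration, not the patented modules.

```python
DEBOUNCE_MS = 20      # assumed debounce interval
NOISE_FILTER_MS = 10  # assumed noise-filtering interval

def is_genuine_interrupt(samples, debounce_ms=DEBOUNCE_MS, noise_ms=NOISE_FILTER_MS):
    """samples: list of (timestamp_ms, line_asserted) pairs, starting at the
    initial edge. Returns True if the edge survives both intervals."""
    if not samples or not samples[0][1]:
        return False
    t0 = samples[0][0]
    # 1) The line must still be asserted once the debounce interval elapses.
    still_asserted = any(t >= t0 + debounce_ms and asserted
                         for t, asserted in samples)
    if not still_asserted:
        return False
    # 2) No deassertion edges during the noise-filtering window that follows.
    window_start = t0 + debounce_ms
    for t, asserted in samples:
        if window_start <= t < window_start + noise_ms and not asserted:
            return False
    return True
```

A clean press that stays asserted past both windows is accepted; a bouncy press that drops out during the noise-filtering window is rejected.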
20140317328 | SERIAL ATTACHED SCSI EXPANDER AND INTERFACE EXPANDING DEVICE WITH THE SAME - A serial attached SCSI (SAS) expander includes a first port used to connect to a port of a computer, a second port used to electrically connect to corresponding hard disks, several third ports each used to connect to a corresponding adjacent SAS expander, a firmware, and a transmission management module. The firmware stores a register value reflecting data traffic of the SAS expander. The transmission management module obtains the register value from the firmware, and determines whether the data traffic of the SAS expander is greater than a predetermined value according to the register value. In addition, it distributes a part of the data to at least one adjacent SAS expander and then transmits that part of the data via the at least one adjacent SAS expander, when it determines the data traffic of the SAS expander is greater than the predetermined value. | 2014-10-23 |
20140317329 | Docking Connector Platform For Mobile Electronic Devices - Docking platforms formed in one of the largest-surface-area surfaces of mobile electronic devices. Such a docking platform may comprise a docking accessory cavity having a docking connection system comprising one or more docking connectors formed within the cavity, and optionally two or more electrical contacts within the cavity, the contacts electrically connected to electronics within the electronic device and constructed and arranged to allow electrical connection to detachable docking accessories. The docking connection system is operable to form detachable attachments to multiple independent docking accessories simultaneously. The cavities of the docking platforms are shaped to accommodate a broad range of docking accessories that are specially adapted to sit in a generally flush manner with the back surface of the mobile electronic device while attached to the docking connectors. One type of accessory forms an assembly with an expandable accordion attached to the docking platform. | 2014-10-23 |
20140317330 | TWO WIRE SERIAL VOLTAGE IDENTIFICATION PROTOCOL - In one embodiment a system comprises an integrated circuit, a plurality of voltage regulators; and a data bus coupled to the integrated circuit and the plurality of voltage regulators. In some embodiments the integrated circuit comprises logic to embed a timing signal on the data bus. Other embodiments may be described. | 2014-10-23 |
20140317331 | EXTERNAL ELECTRONIC DEVICE AND INTERFACE CONTROLLER AND EXTERNAL ELECTRONIC DEVICE CONTROL METHOD - An interface controller, coupling a device main body of an external electronic device to a host, is disclosed, which transmits a termination-on signal to the host prior to a mechanically stable state of a device main body of the external electronic device. When the device main body has not reached the mechanically stable state yet, the interface controller responds to the host with default link information in a delayed manner. The default link information is contained in the interface controller. When the device main body reaches the mechanically stable state, the interface controller transmits specific link information retrieved from the device main body to the host. | 2014-10-23 |
20140317332 | SEMICONDUCTOR DEVICE FOR PERFORMING TEST AND REPAIR OPERATIONS - A semiconductor device may include: a storage unit configured to store program codes provided through control of a processor core; and a control unit configured to perform a control operation on a semiconductor memory device according to the program codes. | 2014-10-23 |
20140317333 | Direct Memory Access Controller with Hybrid Scatter-Gather Functionality - A direct memory access (DMA) controller stores a set of DMA instructions in a list, where each entry in the list includes a bit field that identifies the type of the entry. Based on the bit field, the DMA controller determines whether each DMA instruction is a buffer pointer or a jump pointer. If a DMA instruction is identified as a buffer pointer, the DMA controller transfers data to or from the location specified by the buffer pointer. If a DMA instruction is identified as a jump pointer, the DMA controller jumps to the location in the list specified by the jump pointer. A subset of the list of DMA instructions may be cached, and the DMA controller executes the cache entries sequentially. If a jump pointer is encountered in the cache, the DMA controller flushes the cache and reloads it from main memory based on the jump pointer. | 2014-10-23 |
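The list-walking behavior described in the DMA abstract above can be modeled simply: each entry carries a type bit that says whether it is a buffer pointer (perform a transfer) or a jump pointer (continue from another index in the list). The entry representation and the step limit are assumptions for illustration; the caching and cache-flush behavior the abstract also describes is omitted.

```python
BUFFER, JUMP = 0, 1  # assumed encodings of the entry-type bit field

def run_dma(entries, start=0, max_steps=100):
    """Walk a hybrid scatter-gather list, returning the buffers 'transferred'.
    entries: list of (type_bit, value); value is a buffer name for BUFFER
    entries or a list index for JUMP entries."""
    transferred = []
    i, steps = start, 0
    while i < len(entries) and steps < max_steps:
        steps += 1
        kind, value = entries[i]
        if kind == BUFFER:
            transferred.append(value)  # model the data transfer for this buffer
            i += 1                     # proceed to the next sequential entry
        else:
            i = value                  # JUMP: continue from the named index
    return transferred
```

With a jump pointer at index 1 targeting index 3, the entry at index 2 is skipped, mirroring how a jump pointer redirects the controller within the list.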
20140317334 | STORAGE OF GATE TRAINING PARAMETERS FOR DEVICES UTILIZING RANDOM ACCESS MEMORY - Methods and structure are provided for maintaining gate training parameters for Random Access Memory. The system comprises a memory controller and a management unit. The management unit is able to initialize the system after the system returns from an unpowered state by accessing a non-volatile memory to retrieve timing intervals for electrical impulses sent between the memory controller and a Random Access Memory. The timing intervals previously enabled communication between the memory controller and the Random Access Memory. The management unit is further able to initialize the system after the system returns from an unpowered state by calibrating the memory controller to enable communication with the Random Access Memory based on the retrieved timing intervals. | 2014-10-23 |
20140317335 | DATA STORAGE DEVICE, STORAGE CONTROLLER, AND DATA STORAGE CONTROL METHOD - According to one embodiment, a data storage device includes a first storage medium, a second nonvolatile storage medium, and a controller. The controller allows write data requested to be written from a host device to be recorded into the first storage medium which is a cache memory of the second storage medium according to a first recording method and allows read data that is read from the second storage medium to be recorded into the first storage medium according to a second recording method that provides lower reliability but larger memory capacity than the first recording method. | 2014-10-23 |
20140317336 | LOCAL DIRECT STORAGE CLASS MEMORY ACCESS - A queued, byte-addressed system and method for accessing flash memory and other non-volatile storage class memory, and potentially other types of non-volatile memory (NVM) storage systems. In a host device, e.g., a standalone or networked computer, attached NVM device storage is integrated into a switching fabric, wherein the NVM device appears as an industry-standard OFED™ RDMA verbs provider. The verbs provider enables communicating with a ‘local storage peer’ using the existing OpenFabrics RDMA host functionality. User applications issue RDMA Read/Write directives to the ‘local peer’ (seen as persistent storage) in NVM, enabling NVM memory access at byte granularity. The queued, byte-addressed system and method provides for zero-copy NVM access. The method enables operations that establish application-private Queue Pairs to provide asynchronous NVM memory access operations at byte-level granularity. | 2014-10-23 |
20140317337 | METADATA MANAGEMENT AND SUPPORT FOR PHASE CHANGE MEMORY WITH SWITCH (PCMS) - Methods and apparatus related to management and/or support of metadata for PCMS (Phase Change Memory with Switch) devices are described. In one embodiment, a PCMS controller allows access to a PCMS device based on metadata. The metadata may be used to provide efficiency, endurance, error correction, etc. as discussed in the disclosure. Other embodiments are also disclosed and claimed. | 2014-10-23 |
20140317338 | MEMORY DEVICE AND MEMORY SYSTEM INCLUDING THE SAME, AND OPERATION METHOD OF MEMORY DEVICE - A memory device includes a memory cell array having a plurality of memory cells, a storage unit suitable for storing a fail address corresponding to a fail memory cell in the memory cell array, an available storage capacity determination unit suitable for generating available capacity information indicating an available storage capacity in the storage unit, and an output circuit suitable for outputting the available capacity information. | 2014-10-23 |
20140317339 | DATA ACCESS SYSTEM, DATA ACCESSING DEVICE, AND DATA ACCESSING CONTROLLER - A data access system, device and controller are provided. The data access system includes a plurality of storage units and first controllers, a second controller, and a host. The first controller is utilized to parallel access the storage units, and each first controller includes a plurality of first storage unit controllers, a buffer and a multiplexer. The first storage unit controllers are coupled one-to-one with the storage units. The multiplexer is coupled to the first storage unit controllers and the buffer. The second controller is coupled to the first controllers. The second controller includes a plurality of second storage unit controllers which are coupled one-to-one with the first controllers. The host is coupled to the second controller, and accesses the storage units through the second controller and the first controllers. | 2014-10-23 |
20140317340 | STORAGE SYSTEM AND STORAGE CONTROL METHOD - A storage system includes: a storage device including a recording medium that stores data and a device controller that executes addition processing involving a change of state of the data with respect to the data; and a storage controller that controls input and output of data for the storage device. The storage controller transmits, to the storage device, determination information that can be utilized by the device controller for determining whether or not to execute the addition processing along with input-output processing relating to input-output target data. The device controller controls execution of the addition processing with respect to the input-output target data based on the determination information transmitted from the storage controller. | 2014-10-23 |
20140317341 | SYSTEM AND APPARATUS FOR FLASH MEMORY DATA MANAGEMENT - The system and apparatus for managing flash memory data includes a host transmitting data, wherein when the data transmitted from the host have a first time transmission trait and the address for the data indicates a temporary address, temporary data are retrieved from the temporary address to an external buffer. A writing command is then executed and the temporary data having a destination address are written to a flash memory buffer. When the flash memory buffer is not full, the buffer data are written into a temporary block of the flash memory. The writing of buffer data into the temporary block includes using an address changing command, or executing a writing command to rewrite the external buffer data to the flash memory buffer so that the data are written into the temporary block. | 2014-10-23 |
20140317342 | MICROCOMPUTER AND STORING APPARATUS - In a microcomputer provided with a program storing device for storing instruction codes and a micro-processor for reading and executing the instruction codes stored in the program storing device, the program storing device has plural memories for storing instruction codes, an output unit for receiving plural pieces of data output from the plural memories and selecting and outputting one of the plural pieces of data received from the plural memories, a selecting unit for receiving address data sent from the micro-processor to select one of the plural memories, an activating unit for activating the memory selected by the selecting unit, and a controlling unit for controlling the output unit to output data of the memory activated by the activating unit. | 2014-10-23 |
20140317343 | CONFIGURATION OF DATA STROBES - Disclosed embodiments may include a circuit having a plurality of data terminals, no more than two pairs of differential data strobe terminals associated with the plurality of data terminals, and digital logic circuitry. The digital logic circuitry may be coupled to the data terminals and configured to use the no more than two pairs of differential data strobe terminals concurrently with the plurality of data terminals to transfer data. Other embodiments may be disclosed. | 2014-10-23 |
20140317344 | SEMICONDUCTOR DEVICE - A semiconductor device may include a storage unit configured to store a number of times a first command has been provided to a memory cell array, a control unit configured to generate a second command operable to activate at least one word line in the memory cell array based on a comparison of the number stored at the storage unit with a threshold value, when the first command is received, and a selection unit configured to select one of the first command and the second command based on a result of the comparison and transmit the selected command to the memory cell array. | 2014-10-23 |
20140317345 | METHOD AND APPARATUS FOR AN EFFICIENT HARDWARE IMPLEMENTATION OF DICTIONARY BASED LOSSLESS COMPRESSION - A method, computer readable medium, and apparatus for implementing compression are disclosed. For example, the method receives a first portion of an input data at a first register; determines a first address based upon the first portion of the input data; reads the first address in a memory to determine if a value stored in the first address is zero; stores a code for the first address of the memory in the first register if the value of the first address is zero; receives a second portion of the input data at a second register; determines a second address based upon the second portion of the input data in the memory; obtains the code from the first register if the second address and the first address are the same; and writes the code from the first register in the first address of the memory. | 2014-10-23 |
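The dictionary lookup the abstract describes can be pictured in software roughly as follows. This is a minimal sketch, not the patented hardware design: the class name, the hash used to form an address, and the table size are all illustrative assumptions.

```python
# Sketch of a dictionary-based compression step: hash an input chunk to a
# table address; a zero slot means "new string", so a fresh code is stored
# there; a repeat of the same chunk hashes to the same slot and reuses the code.
class DictCompressor:
    def __init__(self, table_size=256):
        self.table = [0] * table_size      # zero means "slot unused"
        self.next_code = 1                 # 0 is reserved for "empty"

    def _address(self, chunk):
        # Assumed hash: sum of bytes modulo table size (not from the filing).
        return sum(chunk) % len(self.table)

    def encode(self, chunk):
        addr = self._address(chunk)
        if self.table[addr] == 0:          # first sighting: install a code
            self.table[addr] = self.next_code
            self.next_code += 1
        return self.table[addr]            # repeated chunks reuse the code
```

Note that a real design would also store the chunk alongside the code to detect hash collisions, which this sketch omits.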
20140317346 | REDUNDANT ARRAY OF INDEPENDENT DISKS SYSTEMS THAT UTILIZE SPANS WITH DIFFERENT STORAGE DEVICE COUNTS FOR A LOGICAL VOLUME - Methods and structure are provided for defining span sizes for Redundant Array of Independent Disks (RAID) systems. One embodiment is a RAID controller that includes a control system and a span manager. The control system is able to identify storage devices coupled with the controller and is able to receive input requesting the creation of a RAID logical volume. The span manager is able to define multiple RAID spans to implement the volume, each span comprising one or more of the coupled storage devices, at least one of the spans including a different number of drives than at least one other span. | 2014-10-23 |
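The span manager's key point, spans of unequal drive counts within one logical volume, can be sketched in a few lines. The function name and the way devices are partitioned are assumptions for illustration only.

```python
# Sketch: carve RAID spans of (possibly different) sizes out of the
# list of storage devices coupled to the controller.
def define_spans(devices, span_sizes):
    """span_sizes may differ, e.g. [4, 3], giving spans of 4 and 3 drives."""
    spans, start = [], 0
    for size in span_sizes:
        spans.append(devices[start:start + size])
        start += size
    return spans
```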
20140317347 | SYSTEM AND METHOD FOR LOGICAL REMOVAL OF PHYSICAL HEADS IN A HARD DISK DRIVE (HDD) - A hard disk drive (HDD) provides for the logical removal of defective physical heads. The HDD includes one or more disks organized into a plurality of regions, each region having a plurality of physical block addresses (PBAs). A number of physical heads are used to read and write information to the disks. A controller is configured to translate logical block addresses (LBAs) received from an external system to PBAs used to access the one or more disks, wherein the controller is configured to logically remove a defective physical head from service by dynamically re-assigning LBAs to each of the plurality of regions while preventing LBAs from being assigned to regions associated with the defective physical head. | 2014-10-23 |
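One way to picture the dynamic LBA re-assignment is below. The head-to-region mapping and the dense re-numbering are illustrative assumptions, not the HDD's actual translation scheme.

```python
# Sketch: each region is serviced by one physical head; logically removing a
# defective head rebuilds the LBA -> region map over healthy regions only,
# so no LBA is ever assigned to a region behind the bad head.
def build_lba_map(num_regions, heads_per_region, defective_heads):
    """heads_per_region[r] is the head that services region r."""
    healthy = [r for r in range(num_regions)
               if heads_per_region[r] not in defective_heads]
    # LBAs are re-assigned densely over the healthy regions.
    return {lba: region for lba, region in enumerate(healthy)}
```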
20140317348 | CONTROL SYSTEM, CONTROL APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM RECORDING CONTROL PROGRAM THEREON - A control system includes: a superordinate apparatus that includes a multi-path driver controlling an access path; and a second control unit that transmits a control signal used for an instruction for setting an access path to a first control unit that is newly connected to be communicable with the superordinate apparatus to the superordinate apparatus. The multi-path driver sets the access path to the first control unit based on the control signal supplied from the second control unit, thereby autonomously setting the access path in a case where a control unit is additionally installed. | 2014-10-23 |
20140317349 | Distributed Storage Time Synchronization Based On Retrieval Delay - A method begins with a processing module receiving a data retrieval request and obtaining a real-time indicator corresponding to when the data retrieval request was received. The method continues with the processing module determining a time-based data access policy based on the data retrieval request and the real-time indicator and accessing a plurality of dispersed storage (DS) units in accordance with the time-based data access policy to retrieve encoded data slices. The method continues with the processing module decoding the encoded data slices in accordance with an error coding dispersal storage function once a threshold number of the encoded data slices has been retrieved. | 2014-10-23 |
20140317350 | PORTABLE STORAGE DEVICES FOR ELECTRONIC DEVICES - A portable storage device … | 2014-10-23 |
20140317351 | METHOD AND APPARATUS FOR PREVENTING NON-TEMPORAL ENTRIES FROM POLLUTING SMALL STRUCTURES USING A TRANSIENT BUFFER - A method for preventing non-temporal entries from entering small critical structures is disclosed. The method comprises transferring a first entry from a higher level memory structure to an intermediate buffer. It further comprises determining a second entry to be evicted from the intermediate buffer and a corresponding value associated with the second entry. Subsequently, responsive to a determination that the second entry is frequently accessed, the method comprises installing the second entry into a lower level memory structure. Finally, the method comprises installing the first entry into a slot previously occupied by the second entry in the intermediate buffer. | 2014-10-23 |
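A software analogue of the transient-buffer idea is sketched below. The capacity, the promotion threshold, and the FIFO eviction order are assumptions for illustration; the filing does not specify them.

```python
from collections import OrderedDict

# Sketch of a transient buffer: an entry evicted from the buffer is promoted
# into the small lower-level structure only if its access count has crossed a
# threshold; rarely-touched (non-temporal) entries are simply dropped, so they
# never pollute the small structure.
class TransientBuffer:
    def __init__(self, capacity, promote_threshold=2):
        self.buf = OrderedDict()           # entry -> access count
        self.capacity = capacity
        self.threshold = promote_threshold
        self.lower_level = set()           # the small structure being protected

    def install(self, entry):
        if entry in self.buf:
            self.buf[entry] += 1           # repeated access
            return
        if len(self.buf) >= self.capacity:
            victim, count = self.buf.popitem(last=False)   # evict the oldest
            if count >= self.threshold:                    # frequently accessed
                self.lower_level.add(victim)               # promote it
        self.buf[entry] = 1                # first entry takes the freed slot
```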
20140317352 | MEMORY OBJECT REFERENCE COUNT MANAGEMENT WITH IMPROVED SCALABILITY - Generally, this disclosure provides systems, devices, methods and computer readable media for memory object reference count management with improved scalability based on transactional reference count elision. The device may include a hardware transactional memory processor configured to maintain a read-set associated with a transaction and to abort the transaction in response to a modification of contents of the read-set by an entity external to the transaction; and a code module configured to: enter the transaction; locate the memory object; read the reference count associated with the memory object, such that the reference count is added to the read-set associated with the transaction; access the memory object; and commit the transaction. | 2014-10-23 |
20140317353 | Method and Apparatus for Managing Write Back Cache - A network services processor includes an input/output bridge that avoids unnecessary updates to memory when cache blocks storing processed packet data are no longer required. The input/output bridge monitors requests to free buffers in memory received from cores and IO units in the network services processor. Instead of writing the cache block back to the buffer in memory that will be freed, the input/output bridge issues don't write back commands to a cache controller to clear the dirty bit for the selected cache block, thus avoiding wasteful write-backs from cache to memory. After the dirty bit is cleared, the buffer in memory is freed, that is, made available for allocation to store data for another packet. | 2014-10-23 |
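The don't-write-back mechanism reduces to one observation: clear the dirty bit before freeing the buffer and the eviction becomes silent. A minimal sketch, with class and method names invented for illustration:

```python
# Sketch: when a packet buffer is about to be freed, clearing the cache
# block's dirty bit first means the stale data is never written back to memory.
class CacheBlock:
    def __init__(self, addr):
        self.addr = addr
        self.dirty = False

class WriteBackCache:
    def __init__(self):
        self.blocks = {}            # addr -> CacheBlock
        self.writebacks = []        # addresses actually flushed to memory

    def write(self, addr):
        block = self.blocks.setdefault(addr, CacheBlock(addr))
        block.dirty = True

    def dont_write_back(self, addr):
        # The "don't write back" command: clear dirty so eviction is silent.
        if addr in self.blocks:
            self.blocks[addr].dirty = False

    def evict(self, addr):
        block = self.blocks.pop(addr, None)
        if block is not None and block.dirty:
            self.writebacks.append(addr)   # a wasteful write-back
```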
20140317354 | ELECTRONIC DEVICE, DATA CACHING SYSTEM AND METHOD - A data caching system applied in an electronic device is provided. The electronic device includes a processor, a cache, and a main storage. Data stored in the cache is assigned a weight value representing the number of times the data has been read. The data caching system includes a receiving module that receives requests for reading data from the processor. A reading module reads data according to the reading requests and determines whether the requested data is stored in the cache. A weight value calculating module calculates the weight value of the requested data that is stored in the cache, adding one to the weight value each time the requested data is read. If the cache is full, data whose weight value is equal to zero is randomly selected to be cleared from the cache to release space. | 2014-10-23 |
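The weight-value bookkeeping is simple enough to sketch directly. The class name and the read interface are assumptions; only the weight rule (start at zero, increment on each re-read, randomly clear a zero-weight item when full) follows the abstract.

```python
import random

# Sketch of the weight-value scheme: each cached item carries a count of how
# many times it has been re-read; when the cache is full, a randomly chosen
# item whose weight is still zero is cleared to make room.
class WeightedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.weights = {}                  # key -> times re-read since caching

    def read(self, key):
        if key in self.weights:
            self.weights[key] += 1         # hit: bump the weight
            return True
        if len(self.weights) >= self.capacity:
            zero = [k for k, w in self.weights.items() if w == 0]
            if zero:                       # clear a random zero-weight item
                del self.weights[random.choice(zero)]
        if len(self.weights) < self.capacity:
            self.weights[key] = 0          # newly cached, not re-read yet
        return False                       # miss: served from main storage
```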
20140317355 | CACHE ALLOCATION SCHEME OPTIMIZED FOR BROWSING APPLICATIONS - Methods and systems for cache allocation schemes optimized for browsing applications. A memory controller includes a memory cache for reducing the number of requests that access off-chip memory. When an idle screen use case is detected, the frame buffer is allocated to the memory cache using a sequential allocation mode. Pixels are allocated to indexes of a given way in a sequential fashion, and then each way is accessed in a sequential fashion. When a given way is being accessed, the other ways of the memory cache are put into retention mode to reduce the leakage power. | 2014-10-23 |
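The sequential allocation mode amounts to a pixel-to-(way, index) mapping that fills and reads one way end-to-end before touching the next, so the idle ways can sit in retention. The mapping below is an illustrative assumption, not the controller's actual indexing.

```python
# Sketch of sequential frame-buffer allocation: consecutive pixels land in
# consecutive indexes of the same way; only after a way is exhausted does
# the next way become active, letting the others stay in low-leakage retention.
def pixel_to_slot(pixel, num_ways, indexes_per_way):
    way = (pixel // indexes_per_way) % num_ways
    index = pixel % indexes_per_way
    return way, index

def active_way(pixel, num_ways, indexes_per_way):
    # Only this way needs full power while the frame is scanned sequentially.
    return pixel_to_slot(pixel, num_ways, indexes_per_way)[0]
```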
20140317356 | MERGING DEMAND LOAD REQUESTS WITH PREFETCH LOAD REQUESTS - A processor includes a processing unit, a cache memory, and a central request queue. The central request queue is operable to receive a prefetch load request for a cache line to be loaded into the cache memory, receive a demand load request for the cache line from the processing unit, merge the prefetch load request and the demand load request to generate a promoted load request specifying the processing unit as a requestor, receive the cache line associated with the promoted load request, and forward the cache line to the processing unit. | 2014-10-23 |
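The merge step can be modeled as a lookup in the central request queue keyed by cache-line address. Class and field names here are invented for illustration; only the merge-and-promote behavior follows the abstract.

```python
# Sketch of the central request queue: a demand load that matches an
# in-flight prefetch for the same cache line is merged into one "promoted"
# request attributed to the demanding processing unit.
class CentralRequestQueue:
    def __init__(self):
        self.pending = {}                  # line address -> request record

    def prefetch(self, line):
        self.pending.setdefault(line, {"line": line, "requestor": None,
                                       "promoted": False})

    def demand(self, line, unit):
        req = self.pending.get(line)
        if req is not None:                # merge with the in-flight prefetch
            req["requestor"] = unit
            req["promoted"] = True
            return req
        req = {"line": line, "requestor": unit, "promoted": False}
        self.pending[line] = req
        return req

    def fill(self, line):
        # The returned cache line is forwarded to the requestor, if any.
        req = self.pending.pop(line)
        return req["requestor"]
```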
20140317357 | PROMOTING TRANSACTIONS HITTING CRITICAL BEAT OF CACHE LINE LOAD REQUESTS - A processor includes a cache memory, a first core including an instruction execution unit, and a memory bus coupling the cache memory to the first core. The memory bus is operable to receive a first portion of a cache line of data for the cache memory, the first core is operable to identify a plurality of data requests targeting the cache line and the first portion and select one of the identified plurality of data requests for execution, and the memory bus is operable to forward the first portion to the instruction execution unit and to the cache memory in parallel. | 2014-10-23 |
20140317358 | GLOBAL MAINTENANCE COMMAND PROTOCOL IN A CACHE COHERENT SYSTEM - A system may include a command queue controller coupled to a number of clusters of cores, where each cluster includes a cache shared amongst the cores. An originating core of one of the clusters may detect a global maintenance command and send the global maintenance command to the command queue controller. The command queue controller may broadcast the global maintenance command to the clusters including the originating core's cluster. Each of the cores of the clusters may execute the global maintenance command. Each cluster may send an acknowledgement to the command queue controller upon completed execution of the global maintenance command by each core of the cluster. The command queue controller may also send, upon receiving an acknowledgement from each cluster, a final acknowledgement to the originating core's cluster. | 2014-10-23 |
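The broadcast-and-acknowledge flow can be condensed into one function. Cores are modeled as callables and cluster names as dictionary keys; both are illustrative assumptions.

```python
# Sketch of the broadcast/acknowledge flow: the command queue controller
# broadcasts the command to every cluster (including the originator's),
# collects a cluster-level acknowledgement once each cluster's cores have
# run it, and returns the final acknowledgement to the originating cluster.
def run_global_maintenance(clusters, originating_cluster, command):
    acks = set()
    for name, cores in clusters.items():   # broadcast to all clusters
        for core in cores:
            core(command)                  # each core executes the command
        acks.add(name)                     # cluster acknowledges completion
    # Final acknowledgement goes to the originating core's cluster.
    return originating_cluster if acks == set(clusters) else None
```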
20140317359 | CLUSTERED FILE SYSTEM CACHING - A method for accessing data stored in a distributed caching storage system containing a home cluster and a secondary cluster is provided. A first copy of a file is stored on the home cluster and a second copy of the file is stored on the secondary cluster. The second copy of the file is associated with an inode data structure having a consistency attribute. When an input/output request directed to the file is received, the file is marked as being in an inconsistent state by updating the inode's consistency attribute. The first copy and the second copy of the file are updated according to the received input/output request, and it is determined whether the first copy and the second copy were updated successfully. Maintaining the inode's consistency attribute indicates the inconsistent state of the file. | 2014-10-23 |
20140317360 | MEMORY ACCESS CONTROL - Memory access circuitry for controlling access to a memory comprising multiple memory units arranged in parallel with each other. The memory access circuitry comprises: two access units, each configured to select one of the multiple memory units in response to a received memory access request and to control and track subsequent accesses to the selected memory unit, the multiple memory units comprising at least three memory units; and arbitration circuitry configured to receive the memory access requests from a system and to select and forward the memory access requests to one of the two access units, the arbitration circuitry being configured to forward a plurality of memory access requests for accessing one memory unit to a first of the two access units, to direct a plurality of memory access requests for accessing a further memory unit to a second of the two access units, and to subsequently direct a plurality of memory access requests for accessing a yet further memory unit to one of the first or second access units. The two access units comprise storing circuitry to store requests in a queue prior to transmitting the requests to the respective memory unit, and tracking circuitry to track requests sent to the respective memory units and to determine when to transmit subsequent requests from the queue. Control circuitry is configured to set a state of each of the two access units, the state being one of active, prepare and dormant: the access unit in the active state is operable to transmit both access and activate requests to the respective memory unit (the activate request preparing the access in the respective memory unit, the access request accessing the data); the access unit in the prepare state is operable to transmit the activate requests but not the access requests; and the access unit in the dormant state is operable not to transmit any access or activate requests. The control circuitry is configured to switch the states of the two access units periodically and to set not more than one of the access units to the active state at the same time. | 2014-10-23 |
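The three access-unit states form a small permission table, sketched below. The class and the string-valued state names are assumptions made for illustration; the permitted-request rules follow the abstract.

```python
# Sketch of the three access-unit states: the active unit may send both
# activate requests (which prepare the access in the memory unit) and access
# requests (which access the data); the prepare unit may send activate
# requests only; the dormant unit sends nothing.
class AccessUnit:
    def __init__(self, state="dormant"):
        self.state = state                 # "active", "prepare" or "dormant"

    def can_send(self, request):
        if self.state == "active":
            return True                    # both access and activate
        if self.state == "prepare":
            return request == "activate"   # opens the row, no data access
        return False                       # dormant: nothing is sent
```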
20140317361 | SPECULATIVE MEMORY CONTROLLER - A method and a system are provided for controlling memory accesses. Memory access requests including at least a first speculative memory access request and a first non-speculative memory access request are received and a memory access request is selected from the memory access requests. A memory access command is generated to process the selected memory access request. | 2014-10-23 |
20140317362 | INTERFACE CONTROL APPARATUS, DATA STORAGE APPARATUS AND INTERFACE CONTROL METHOD - According to one embodiment, an interface control apparatus includes an interface, a table, a command processor, and a controller. The interface transmits and receives information to and from a host. The table holds management information for managing an address in a memory space in the host. The command processor carries out a command process of accessing the memory space in the host using the management information. The controller releases the management information corresponding to the command process from the table in response to completion of the command process. | 2014-10-23 |
20140317363 | SYSTEM AND METHOD FOR OPTIMIZING MEMORY USAGE IN A UNIVERSAL CONTROLLING DEVICE - A method for optimizing memory usage in a device having a universal controlling application includes receiving into the device data for use in configuring the universal controlling application. The data is used to identify, from within a library of command code sets stored in a memory of the device, a command code set that is appropriate for use in commanding functional operations of an appliance, and to cause a non-identified one or more of the command code sets of the library to be discarded, thereby creating freed space in the memory of the device. | 2014-10-23 |