Patent application number | Description | Published |
20100074125 | DISCOVERING COMMUNICATION RULES IN A NETWORK TRACE - The claimed subject matter provides a system and/or a method that facilitates managing a network by mining a communication rule. An analysis engine can employ a packet trace within a network in order to provide timing information, wherein the network includes at least one of a host, a protocol, or an application. A traffic evaluator can extract a communication rule for the network based upon an activity matrix generated from the timing information, in which the activity matrix includes a row for each time window of the packet trace and a column for each flow in the packet trace. | 03-25-2010 |
20110087799 | Flyways in Data Centers - Described is a technology by which additional network communications capacity is provided to an oversubscribed base network where needed, through the use of dynamically provisioned communications links referred to as flyways. A controller detects a need for additional network communications capacity between two network machines, e.g., between two racks of servers with top-of-rack switches. The controller configures flyway mechanisms (e.g., one per rack) to carry at least some of the network traffic between the machines of the racks and thereby provide the additional network communications capacity. The flyway mechanisms may be based on any wireless or wired technologies, including 60 GHz technology, optical links, 802.11n or wired commodity switches. | 04-14-2011 |
20110087924 | Diagnosing Abnormalities Without Application-Specific Knowledge - Methods, articles, and systems for determining a probable cause of a component's abnormal behavior are described. To determine the probable cause, a computing device computes, for one or more pairs of components having dependency relationships, a likelihood that behavior of one component of a pair is impacting behavior of the other component of the pair. This computing is based on joint historical behavior of the pair of components. The computing device then determines that one of a plurality of components is a probable cause of the abnormal behavior based on the computed likelihoods. | 04-14-2011 |
20110246897 | INTERACTIVE VISUALIZATION TO ENHANCE AUTOMATED FAULT DIAGNOSIS IN NETWORKS - Described is a visual analytics system for network diagnostics. The visual analytics system obtains network diagnostic-related information from a diagnostic system. The visual analytics system includes an interactive user interface that displays representations of network components, including network machines and zero or more links between those components (e.g., as appropriate based upon selection or dynamic conditions). The user interface includes a main network view that displays representations of network components, a diagnostics view that displays suggested diagnosis results obtained from the diagnostic system, and a performance counter view that displays performance counter data. User interaction with one of the views correspondingly changes the displays in the other views. The system allows effective exploration of multiple levels of detail (e.g., variable, component, edge, and network levels) via flexible navigation across these levels from the top, the bottom, or anywhere in the middle, while retaining context. | 10-06-2011 |
20120167101 | SYSTEM AND METHOD FOR PROACTIVE TASK SCHEDULING - The described implementations relate to distributed computing. One implementation provides a system that can include an outlier detection component that is configured to identify an outlier task from a plurality of tasks based on runtimes of the plurality of tasks. The system can also include a cause evaluation component that is configured to evaluate a cause of the outlier task. For example, the cause of the outlier task can be an amount of data processed by the outlier task, contention for resources used to execute the outlier task, or a communication link with congested bandwidth that is used by the outlier task to input or output data. The system can also include one or more processing devices configured to execute one or more of the components. | 06-28-2012 |
20120311127 | Flyway Generation in Data Centers - The subject disclosure is directed towards configuring and controlling wireless flyways (e.g., communication links between server racks provisioned on demand in a data center) to operate efficiently and without interfering with one another. Control and flyway selection may be based upon steered antenna directionality, channel, location in the data center, transmit power, and measured and/or predicted (estimated) network traffic. Flyways also may be used to route indirect traffic to reduce traffic on a bottleneck (e.g., wired) link. A payload may be sent over a wireless flyway with acknowledgment via a wired backchannel so that wireless communication is in one direction. The lack of interference and communication in one direction facilitates flyway operation without a backoff function and/or without clear channel assessment. | 12-06-2012 |
20130003538 | PERFORMANCE ISOLATION FOR CLOUDS - Traffic in a cloud is controlled by the nodes participating in the cloud. Each tenant of the cloud is assigned a ratio (a relative weight). On any given node, the current transmission rate of the node is allocated among the tenants of the node, or more specifically, among their execution units (e.g., virtual machines) on the node. Thus each tenant receives a predefined portion of the transmission capacity of the node. The transmission capacity can vary as conditions on the network change. For example, if congestion occurs, the transmission capacity may be decreased. Nonetheless, each tenant receives, according to its ratio, the same relative portion of the overall transmission capacity. | 01-03-2013 |
20130212277 | COMPUTING CLUSTER WITH LATENCY CONTROL - A computing cluster operated according to a resource allocation policy based on a predictive model of completion time. The predictive model may be applied in a resource control loop that iteratively updates resources assigned to an executing job. At each iteration, the amount of resources allocated to the job may be updated based on the predictive model so that the job will be scheduled to complete execution at a target completion time. The target completion time may be derived from a utility function determined for the job. The utility function, in turn, may be derived from a service level agreement with service guarantees and penalties for late completion of a job. Allocating resources in this way may maximize utility for an operator of the computing cluster while minimizing disruption to other jobs that may be concurrently executing. | 08-15-2013 |
20130343191 | ENSURING PREDICTABLE AND QUANTIFIABLE NETWORKING PERFORMANCE - Ensuring predictable and quantifiable networking performance. Embodiments of the invention combine a congestion free network core with a hypervisor based (i.e., edge-based) throttling design to help ensure quantitative and invariable subscription bandwidth rates. A lightweight shim layer in a hypervisor can adaptively throttle the rate of VM-to-VM traffic flow. A receiving hypervisor can detect congestion and communicate back to sending hypervisors that rates are to be regulated. In response, sending hypervisors can reduce transmission rate to mitigate congestion at the receiving hypervisor. In some embodiments, the principles are extended to any message processors communicating over a congestion free network. | 12-26-2013 |
20130343399 | OFFLOADING VIRTUAL MACHINE FLOWS TO PHYSICAL QUEUES - The present invention extends to methods, systems, and computer program products for offloading virtual machine flows to physical queues. A computer system executes one or more virtual machines, and programs a physical network device with one or more rules that manage network traffic for the virtual machines. The computer system also programs the network device to manage network traffic using the rules. In particular, the network device is programmed to determine availability of one or more physical queues at the network device that are usable for processing network flows for the virtual machines. The network device is also programmed to identify network flows for the virtual machines, including identifying characteristics of each network flow. The network device is also programmed to, based on the characteristics of the network flows and based on the rules, assign one or more of the network flows to at least one of the physical queues. | 12-26-2013 |
20130346465 | APPLICATION ENHANCEMENT USING EDGE DATA CENTER - A management service that receives requests for the cloud computing environment to host applications, and improves application performance using an edge server. In response to a request, the management service allocates the application to run on an origin data center, evaluates the application based on at least one of application properties designated by the application's code author or provider, or the application's observed performance, and uses an edge server to improve performance of the application in response to that evaluation. For instance, a portion of application code may be offloaded to run on the edge data center, a portion of application data may be cached at the edge data center, or the edge server may add functionality to the application. | 12-26-2013 |
20130346558 | DELIVERY CONTROLLER BETWEEN CLOUD AND ENTERPRISE - A delivery controller for use in an enterprise environment that communicates with a cloud computing environment that is providing a service for the enterprise. As the cloud service processing progresses, some cloud service data is transferred from the cloud computing environment to the enterprise environment, and vice versa. The cloud service data may be exchanged over any one of a number of different types of communication channels. The delivery controller selects which communication channel to use to transfer specific data, depending on enterprise policy. Such policy might consider any business goals of the enterprise, and may be applied at the application level. | 12-26-2013 |
20130346968 | Automated controlling of host over network - The provisioning of a host computing system by a controller located over a wide area network. The host computing system has power-on code in persistent memory that automatically executes upon powering up, and causes the host to notify the controller of the host address. In a first level of bootstrapping, the controller instructs the host to download a maintenance operating system. The host responds by downloading and installing a maintenance operating system, enabling further bootstrapping. The persistent memory may further have security data, such as a public key, that allows the host computing system to securely identify the source of the download instructions (and subsequent instructions) as originating from the controller. A second level of bootstrapping may accomplish the configuring of the host with a hypervisor and a host agent. A third level of bootstrapping may accomplish the provisioning of virtual machines on the host. | 12-26-2013 |
20130346988 | PARALLEL DATA COMPUTING OPTIMIZATION - Statistics collected during the parallel distributed execution of the tasks of a job may be used to optimize the performance of those tasks or similar recurring tasks. An execution plan for a job is initially generated, in which the execution plan includes tasks. Statistics regarding operations performed in the tasks are collected while the tasks are executed via parallel distributed execution. Another execution plan is then generated for another recurring job, in which the additional execution plan has at least one task in common with the execution plan for the original job. The additional execution plan is subsequently optimized based at least on the statistics to produce an optimized execution plan. | 12-26-2013 |
20140082048 | NETWORK SERVICES PROVIDED IN CLOUD COMPUTING ENVIRONMENT - A cloud computing environment providing a network service for a client computing entity. The network service is not an application level service, but rather a service that operates at or below the network layer in the protocol stack. For instance, the network service might be a network endpoint service such as a network address service (such as DNS) or a dynamic network service (such as DHCP), or a network traffic service such as a firewall service or a secure tunneling service (such as VPN). The service might also provide a pipeline of network services for network level traffic to and from the client computing entity. The cloud environment uses policy to determine which of a plurality of communication channels to use when exchanging cloud service data for the network service. | 03-20-2014 |
20140195689 | SWAN: ACHIEVING HIGH UTILIZATION IN NETWORKS - Greater network utilization is implemented through dynamic network reconfiguration and allocation of network services and resources based on the data to be transferred and the consumer transferring it. A hierarchical system is utilized whereby requests from lower layers are aggregated before being provided to upper layers, and allocations received from upper layers are distributed to lower layers. To maximize network utilization, paths through the network are reconfigured by identifying specific types of packets that are to be flagged in a specific manner, and then by further identifying specific routing rules to be applied in the transmission of such packets. Network reconfiguration is performed on an incremental basis to avoid overloading a path, and capacity can be reserved along one or more paths to prevent such overloading. Background data is agnostic as to specific transmission times and is utilized to prevent overloading due to reconfiguration. | 07-10-2014 |
20140278047 | ENRICHING DRIVING EXPERIENCE WITH CLOUD ASSISTANCE - Described is a technology by which driver safety technology such as collision detection is implemented via mobile device (e.g., smartphone) sensors and a cloud service that processes data received from vehicles associated with the devices. Trajectory-related data is received at the cloud service and used to predict collisions between vehicles and/or lane departures of vehicles. To operate the service in real-time with low latency, also described is dividing driving areas into grids, e.g., based upon traffic density, having parallel grid servers each responsible for only vehicles in or approaching its own grid, and other parallel/distributed mechanisms of the cloud service. | 09-18-2014 |
20140347998 | ENSURING PREDICTABLE AND QUANTIFIABLE NETWORKING PERFORMANCE - Ensuring predictable and quantifiable networking performance. Embodiments of the invention combine a congestion free network core with a hypervisor based (i.e., edge-based) throttling design to help ensure quantitative and invariable subscription bandwidth rates. A lightweight shim layer in a hypervisor can adaptively throttle the rate of VM-to-VM traffic flow. A receiving hypervisor can detect congestion and communicate back to sending hypervisors that rates are to be regulated. In response, sending hypervisors can reduce transmission rate to mitigate congestion at the receiving hypervisor. In some embodiments, the principles are extended to any message processors communicating over a congestion free network. | 11-27-2014 |
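Several entries above describe mechanisms concrete enough to sketch. The performance-isolation entry (20130003538) allocates a node's current transmission capacity among tenants according to per-tenant ratios, so each tenant keeps the same relative share even as capacity varies. Below is a minimal Python sketch of that weighted split; the function and parameter names are illustrative, not taken from the patent:

```python
def allocate_capacity(node_capacity_mbps, tenant_ratios):
    """Split a node's current transmission capacity among tenants by ratio.

    tenant_ratios maps tenant id -> relative weight. Each tenant receives
    the same *relative* share regardless of how node_capacity_mbps varies
    (e.g., when congestion lowers the node's effective capacity).
    """
    total_weight = sum(tenant_ratios.values())
    return {tenant: node_capacity_mbps * weight / total_weight
            for tenant, weight in tenant_ratios.items()}
```

For example, with weights {"a": 2, "b": 1, "c": 1}, tenant "a" always receives half of whatever capacity the node currently has, whether that is 1000 Mbps or, under congestion, 600 Mbps.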
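The proactive task scheduling entry (20120167101) identifies outlier tasks from the runtimes of a plurality of tasks. The patent abstract does not specify the detection statistic, so the sketch below assumes a simple robust test (median absolute deviation); names and the threshold are illustrative:

```python
import statistics

def find_outlier_tasks(runtimes, threshold=3.0):
    """Flag tasks whose runtime deviates from the median runtime by more
    than `threshold` times the median absolute deviation (MAD).

    runtimes maps task id -> runtime in seconds. Returns the ids of
    candidate outlier tasks, whose causes (data skew, resource contention,
    congested links) a cause-evaluation step would then examine.
    """
    med = statistics.median(runtimes.values())
    mad = statistics.median(abs(r - med) for r in runtimes.values())
    if mad == 0:  # all runtimes (nearly) identical: nothing stands out
        return []
    return [task for task, r in runtimes.items()
            if abs(r - med) / mad > threshold]
```

A task running 95 s among peers running 10-12 s would be flagged, after which the scheduler could evaluate whether its input size, contention, or a congested link explains the delay.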
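The two networking-performance entries (20130343191 and 20140347998) describe a sending hypervisor's shim layer that cuts its VM-to-VM transmission rate when a receiving hypervisor signals congestion. The exact control law is not given in the abstracts; the Python sketch below assumes an AIMD-style rule (halve on congestion, ramp back additively toward the subscribed rate) purely for illustration:

```python
class SendingShim:
    """Toy model of the edge-based throttling loop in a sending hypervisor.

    On congestion feedback from the receiver, the transmission rate is
    halved; otherwise it increases by a fixed step, capped at the tenant's
    subscribed rate. This AIMD rule is an assumption, not the patented law.
    """
    def __init__(self, subscribed_rate_mbps, step_mbps=10):
        self.subscribed = subscribed_rate_mbps
        self.step = step_mbps
        self.rate = float(subscribed_rate_mbps)

    def on_feedback(self, congested):
        """Update and return the rate given one round of receiver feedback."""
        if congested:
            self.rate = max(self.step, self.rate / 2)   # multiplicative decrease
        else:
            self.rate = min(self.subscribed, self.rate + self.step)  # additive increase
        return self.rate
```

Starting at a subscribed 100 Mbps, one congestion signal drops the rate to 50 Mbps; each subsequent congestion-free round then adds 10 Mbps until the subscribed rate is restored.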