Load balancing

Subclass of:

718 - Electrical computers and digital processing systems: virtual machine task or process management or task management/control

718100000 - TASK MANAGEMENT OR CONTROL

718102000 - Process scheduling

Entries
Document - Title - Date
20130031562 - MECHANISM FOR FACILITATING DYNAMIC LOAD BALANCING AT APPLICATION SERVERS IN AN ON-DEMAND SERVICES ENVIRONMENT - In accordance with embodiments, there are provided mechanisms and methods for facilitating dynamic load balancing at application servers in an on-demand services environment. In one embodiment and by way of example, a method includes polling a plurality of application servers for status, receiving status from each of the plurality of application servers, assigning a priority level to each of the plurality of application servers based on its corresponding status, and facilitating load balancing at the plurality of application servers based on their corresponding priority levels. (01-31-2013)
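
The polling-and-priority flow in 20130031562 is concrete enough to sketch: poll each application server, map its reported status to a priority level, and weight routing by those levels. Below is a minimal Python sketch of that idea; the status fields, thresholds, and weighted-choice routing are illustrative assumptions, not details from the application.

```python
import random

def poll_status(server):
    # Illustrative: in practice this would be an RPC/HTTP health check.
    return {"cpu": server["cpu"], "queue": server["queue"]}

def priority_level(status):
    # Assumed thresholds: lighter load -> higher priority level.
    load = 0.7 * status["cpu"] + 0.3 * min(status["queue"] / 100.0, 1.0)
    if load < 0.3:
        return 3          # prefer strongly
    if load < 0.7:
        return 2
    return 1              # route here only as a last resort

def pick_server(servers):
    levels = [priority_level(poll_status(s)) for s in servers]
    # Weight routing decisions by the assigned priority levels.
    return random.choices(servers, weights=levels, k=1)[0]

servers = [{"name": "app1", "cpu": 0.2, "queue": 5},
           {"name": "app2", "cpu": 0.9, "queue": 80}]
print(pick_server(servers)["name"])  # usually "app1"
```
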
20120174118 - STORAGE APPARATUS AND LOAD DISTRIBUTION METHOD - A storage apparatus having plural control processors that interpret and process requests sent from a host computer includes a distribution judgment unit for judging, after a control processor receives a request sent from the host computer, whether or not to allocate processing relevant to the request from the control processor that received the request to another control processor, and a control processor selection unit for selecting an allocation target control processor if the distribution judgment unit judges to allocate the processing to another control processor. (07-05-2012)
20120174117 - MEMORY-AWARE SCHEDULING FOR NUMA ARCHITECTURES - A topology reader may determine a topology of a Non-Uniform Memory Access (NUMA) architecture including a number of, and connections between, a plurality of sockets, each socket including one or more cores and at least one memory configured to execute a plurality of threads of a software application. A core list generator may generate, for each designated core of the NUMA architecture, and based on the topology, a proximity list listing non-designated cores in an order corresponding to a proximity of the non-designated cores to the designated core. A core selector may determine, at a target core and during the execution of the plurality of threads, that the target core is executing an insufficient number of the plurality of threads, and may select a source core at the target core, according to the proximity list associated therewith, for subsequent transfer of a transferred thread from the selected source core to the target core for execution thereon. (07-05-2012)
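
A short Python sketch of the proximity-list idea in 20120174117: build, for each core, a list of the other cores ordered by topological distance, and let an underloaded target core pick the nearest core with surplus threads as its transfer source. The two-socket topology and the thread-count threshold are illustrative assumptions.

```python
# Hypothetical topology: socket id -> core ids; cores on the same
# socket are "closer" than cores one socket hop away.
TOPOLOGY = {0: [0, 1], 1: [2, 3]}

def socket_of(core):
    return next(s for s, cores in TOPOLOGY.items() if core in cores)

def proximity_list(core):
    # Order every other core by distance (same socket first).
    others = [c for cores in TOPOLOGY.values() for c in cores if c != core]
    return sorted(others, key=lambda c: abs(socket_of(c) - socket_of(core)))

def select_source(target, run_queues, min_threads=1):
    # If the target core is underloaded, take work from the nearest
    # core that has surplus threads.
    if len(run_queues[target]) >= min_threads:
        return None
    for candidate in proximity_list(target):
        if len(run_queues[candidate]) > min_threads:
            return candidate
    return None

queues = {0: [], 1: ["t1", "t2", "t3"], 2: ["t4"], 3: []}
print(proximity_list(0))         # [1, 2, 3]
print(select_source(0, queues))  # 1 (same socket, has surplus)
```
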
20130031563 - STORAGE SYSTEM - The storage system includes a progress status detection unit that detects respective progress statuses representing proportions of the amounts of processing performed by respective processing units to the amount of processing performed by the entire storage system, each of the processing units being implemented in the storage system and performing a predetermined task; a target value setting unit that sets target values of processing states of the processing units, based on the detected progress statuses of the respective processing units and ideal values of the progress statuses which are preset for the respective processing units; and a processing operation controlling unit that controls the processing states of the processing units such that the processing states of the processing units meet the set target values. (01-31-2013)
20130047166 - Systems and Methods for Distributing an Aging Burden Among Processor Cores - Systems and methods are presented for reducing the impact of high load and aging on processor cores in a processor. A Power Management Unit (PMU) can monitor aging, temperature, and increased load on the processor cores. The PMU instructs the processor to take action such that aging, temperature, and/or increased load are approximately evenly distributed across the processor cores, so that the processor can continue to efficiently process instructions. (02-21-2013)
20130047165 - Context-Aware Request Dispatching in Clustered Environments - The present disclosure involves systems, software, and computer implemented methods for providing context-aware request dispatching in a clustered environment. One process includes operations for receiving an event at a first computer node. The contents of the event are analyzed to determine a target process instance for handling the event. A target computer node hosting the target process instance is determined, and the event is sent to the target computer node for handling by the target process instance. (02-21-2013)
20090193428 - Systems and Methods for Server Load Balancing - In one embodiment a system and a method relate to generating a server load balancing algorithm configured to distribute workload across multiple application servers, publishing the server load balancing algorithm to switches of the network, and the switches applying the server load balancing algorithm to received network packets to determine how to distribute the network packets among the multiple application servers. (07-30-2009)
20100011371 - Performance of unary bulk IO operations on virtual disks by interleaving - A method and system are provided for executing a unary bulk input/output operation on a virtual disk using interleaving. The performance improvement due to the method is expected to increase as more information about the configuration of the virtual disk and its implementation are taken into account. Performance factors considered may include contention among tasks implementing the parallel process, load on the storage system from other processes, performance characteristics of components of the storage system, and the virtualization relationships (e.g., mirroring, striping, and concatenation) among physical and virtual storage devices within the virtual configuration. (01-14-2010)
20080216087 - AFFINITY DISPATCHING LOAD BALANCER WITH PRECISE CPU CONSUMPTION DATA - A system for distributing a plurality of tasks over a plurality of nodes in a network includes: a plurality of processors for executing tasks; a plurality of nodes comprising processors; a task dispatcher; and a load balancer. The task dispatcher receives as input the plurality of tasks; calculates a task processor consumption value for the tasks; calculates a node processor consumption value for the nodes; calculates a target node processor consumption value for the nodes; and then calculates a load index value as a difference between the calculated node processor consumption for a node i and the target node processor consumption value for the node i. The balancer distributes the tasks among the nodes to balance the processor workload among the nodes according to the calculated load index value of each node, such that the calculated load index value of each node is substantially zero. (09-04-2008)
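
The load-index computation in 20080216087 (node consumption minus target consumption, driven toward zero) can be sketched directly. This is a greedy illustration under assumed per-task cost estimates, not the application's dispatcher:

```python
def balance(tasks, nodes):
    """Greedy sketch: assign each task (with an estimated CPU cost)
    to the node whose load index (consumption - target) is lowest,
    driving every index toward zero. Costs are illustrative inputs."""
    total = sum(tasks.values())
    target = total / len(nodes)          # even target per node
    consumption = {n: 0.0 for n in nodes}
    assignment = {}
    # Placing the largest tasks first gives a tighter balance.
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = min(nodes, key=lambda n: consumption[n] - target)
        assignment[task] = node
        consumption[node] += cost
    return assignment, {n: consumption[n] - target for n in nodes}

tasks = {"a": 5.0, "b": 3.0, "c": 2.0, "d": 2.0}
assignment, load_index = balance(tasks, ["node1", "node2"])
print(assignment)   # e.g. {'a': 'node1', 'b': 'node2', ...}
print(load_index)   # indices close to zero on both nodes
```
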
20090049450 - METHOD AND SYSTEM FOR COMPONENT LOAD BALANCING - A system for balancing component load. In response to receiving a request, data is updated to reflect a current number of pending requests. In response to analyzing the updated data, it is determined whether throttling is necessary. In response to determining that throttling is not necessary, a corresponding request to the received request is created and a flag is set in the corresponding request. Then, the corresponding request is sent to one of a plurality of lower level components of an input/output stack of an operating system for processing based on the analyzed data to balance component load in the input/output stack of the operating system. (02-19-2009)
20130086593 - AUTOMATED WORKLOAD PERFORMANCE AND AVAILABILITY OPTIMIZATION BASED ON HARDWARE AFFINITY - A method, apparatus, and program product deploy a workload on a host within a computer system having a plurality of hosts. Different hosts may be physically located in proximity to different resources, such as storage and network I/O modules, and therefore exhibit different latency when accessing the resources required by the workload. Eligible hosts within the system are evaluated for their capacity to take on a given workload, then scored on the basis of their proximity to the resources required by the workload. The workload is deployed on a host having sufficient capacity to run it, as well as a high affinity score. (04-04-2013)
20130081048 - POWER CONTROL APPARATUS, POWER CONTROL METHOD, AND COMPUTER PRODUCT - A power control apparatus includes a processor that causes thermal fluid analysis of the amount of increase in power consumption for cooling a plurality of servers, where the increase in power consumption is consequent to an increase in the volume of tasks at each server among the servers. Based on analysis results obtained by the thermal fluid analysis, the processor selects from among the servers a server to execute a task and causes the selected server to execute the task. (03-28-2013)
20100043010 - DATA PROCESSING METHOD, CLUSTER SYSTEM, AND DATA PROCESSING PROGRAM - Provided is a data processing system which includes: a first computer for receiving a processing request for a task processing, executing the processing, and holding data used therein; and a second computer for holding a duplicate of the data held in the first computer, halting the first computer if the first computer is determined to be halted, and receiving and processing the processing request. The first computer receives at least an update request as the processing request including request identification information to which unique numbers assigned to the individual processing requests in an ascending order are allocated, updates the held data, and transmits the update request including the request identification information to the second computer. The second computer stores a transmitted reference request and the update request as the processing requests, and processes the processing requests in an ascending order of the unique numbers included in the individual processing requests. (02-18-2010)
20100095304 - INFORMATION PROCESSING DEVICE AND LOAD ARBITRATION CONTROL METHOD - The information processing device in the simultaneous multi-threading system is operated in an inter-thread performance load arbitration control method, and includes: an instruction input control unit for sharing among threads control of inputting an instruction in an arithmetic unit for acquiring the instruction from memory and performing an operation on the basis of the instruction; a commit stack entry provided for each thread for holding information obtained by decoding the instruction; an instruction completion order control unit for updating the memory and a general purpose register depending on an arithmetic result obtained by the arithmetic unit in an order of the instructions input from the instruction input control unit; and a performance load balance analysis unit for detecting the information registered in the commit stack entry and controlling the instruction input control unit. (04-15-2010)
20080271038 - SYSTEM AND METHOD FOR EVALUATING A PATTERN OF RESOURCE DEMANDS OF A WORKLOAD - A method comprises receiving, by pattern evaluation logic, a plurality of occurrences of a prospective pattern of resource demands in a representative workload. The method further comprises evaluating, by the pattern evaluation logic, the received occurrences of the prospective pattern of resource demands, and determining, by the pattern evaluation logic, based on the evaluation of the received occurrences of the prospective pattern of resource demands, how representative the prospective pattern is of resource demands of the representative workload. (10-30-2008)
20090328056 - Entitlement model - Some embodiments of an entitlement model have been presented. In one embodiment, a centralized server distributes copies of an operating system from a software vendor to a set of virtual guests of a virtual host running on a physical computing machine. The centralized server and the physical computing machine are coupled to each other within an internal network of a customer of the software vendor, whereas the centralized server has access to the software vendor external to the internal network of the customer. The centralized server may interact with a hypervisor of the physical computing machine to determine what type of license of the operating system the virtual host has and a number of copies of the operating system requested by the virtual guests. (12-31-2009)
20120192200 - Load Balancing in Heterogeneous Computing Environments - Load balancing may be achieved in heterogeneous computing environments by first evaluating the operating environment and workload within that environment. Then, if energy usage is a constraint, energy usage per task for each device may be evaluated for the identified workload and operating environments. Work is scheduled on the device that maximizes the performance metric of the heterogeneous computing environment. (07-26-2012)
20090094613 - METHOD OF MANAGING WORKLOADS IN A DISTRIBUTED PROCESSING SYSTEM - An embodiment of the present invention is a method for generating a simulated processor load on a system of CPUs, and introducing a controlled workload into the system that is spread evenly across the available CPU resources and may be arranged to consume a precise, controllable portion of the resources. (04-09-2009)
20090094612 - Method and System for Automated Processor Reallocation and Optimization Between Logical Partitions - A method and system for reallocating processors in a logically partitioned environment. The present invention comprises a Performance Enhancement Program (PEP) and a Reallocation Program (RP). The PEP allows an administrator to designate several parameters and identify donor and recipient candidates. The RP compiles the performance data for the processors and calculates a composite parameter. For each processor in the donor candidate pool, the RP compares the composite parameter to the donor load threshold to determine if the processor is a donor. For each processor in the recipient candidate pool, the RP compares the composite parameter to the recipient load threshold to determine if the processor is a recipient. The RP then allocates the processors from the donors to the recipients. The RP continues to monitor and update the workload statistics based on either a moving window or a discrete window sampling system. (04-09-2009)
20090094611 - Method and Apparatus for Load Distribution in Multiprocessor Servers - A method and arrangement for handling incoming requests for multimedia services in an application server having a plurality of processors. A service request is received from a user, requiring the handling of user-specific data. The identity of the user or other consistent user-related parameter is extracted from the received service request. Then, a scheduling algorithm is applied using the extracted identity or other user-related parameter as input, for selecting a processor associated with the user and that stores user-specific data for the user locally. Thereafter, the service request is transferred to the selected processor in order to be processed by handling the user-specific data. (04-09-2009)
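
A sketch of the user-affinity dispatch described in 20090094611: derive a stable value from a consistent user-related parameter so every request from the same user reaches the processor that holds that user's data locally. Hashing is one natural instance of the scheduling algorithm, not something the abstract specifies, and the request shape is assumed; a production system would likely use consistent hashing so the mapping survives changes to the processor pool.

```python
import hashlib

def extract_user(request):
    # Illustrative: pull a consistent user-related parameter
    # out of the service request.
    return request["user_id"]

def select_processor(request, num_processors):
    # Stable hash (sha256, not Python's randomized hash()): the same
    # user always maps to the same processor, which is where that
    # user's data is stored locally.
    digest = hashlib.sha256(extract_user(request).encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_processors

req = {"user_id": "alice@example.com", "payload": "..."}
print(select_processor(req, 8))  # same output for every request from alice
```
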
20090094610 - Scalable Resources In A Virtualized Load Balancer - In one embodiment, a load balancing system may include a first physical device that provides a resource. The first physical device may have a first virtual device running actively thereon. The first virtual device may have the resource allocated to it on the physical device. The first physical device may also have a virtual server load balancer running actively thereon. The server load balancer may be adapted to balance a workload associated with the resource between the first virtual device and a second virtual device. The second virtual device may be running in active mode on a second physical device, and in standby mode on the first physical device. The first virtual device may be in standby mode on the second physical device. (04-09-2009)
20130061238 - OPTIMIZING THE DEPLOYMENT OF A WORKLOAD ON A DISTRIBUTED PROCESSING SYSTEM - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes. (03-07-2013)
20130061237 - Switching Tasks Between Heterogeneous Cores - The present disclosure describes techniques for switching tasks between heterogeneous cores. In some aspects it is determined that a task being executed by a first core of a processor can be executed by a second core of a processor, the second core having an instruction set that is different from that of the first core, and execution of the task is switched from the first core to the second core effective to decrease an amount of energy consumed by the processor. (03-07-2013)
20090271798 - Method and Apparatus for Load Balancing in Network Based Telephony Application - Techniques are disclosed for load balancing in networks such as those networks handling telephony applications. By way of example, a method for directing requests associated with calls to servers in a system comprised of a network routing calls between a plurality of nodes, wherein a node participates in a call as a caller or a receiver and wherein a load balancer sends requests associated with calls to a plurality of servers, comprises the following steps. A request associated with a node belonging to a group including a plurality of nodes is received. A server is selected to receive the request. A subsequent request is received. A determination is made whether or not the subsequent request is associated with a node belonging to the group. The subsequent request is sent to the server based on determining that the subsequent request is associated with a node belonging to the group. By way of another example, a method for balancing requests among servers in a client server environment, wherein a load balancer sends requests associated with a client to a plurality of servers, comprises the following steps. Information is maintained regarding a weighted number of requests assigned to each server. The load balancer receives a request from a client. A server s… (10-29-2009)
20090031321 - BUSINESS PROCESS MANAGEMENT SYSTEM, METHOD THEREOF, PROCESS MANAGEMENT COMPUTER AND PROGRAM THEREOF - When the load of a service execution computer increases, a business process management computer determines the condition of the service call step that is calling a service execution unit of that computer. If that condition is a bottleneck, it determines the condition of the service call steps in other processes that are calling the same service execution unit. If no condition other than the bottleneck is found, it decides that resources should be added for the service execution computer; if there is a condition whose throughput can be limited, it decides that the throughput should be limited. In a process configured with a plurality of service call steps, this provides a means of making an adequate resource addition possible when resource insufficiency occurs. (01-29-2009)
20110023049 - OPTIMIZING WORKFLOW EXECUTION AGAINST A HETEROGENEOUS GRID COMPUTING TOPOLOGY - Optimizing workflow execution by the intelligent dispatching of workflow tasks against a grid computing system or infrastructure. For some embodiments, a grid task dispatcher may be configured to dispatch tasks in a manner that takes into account information about an entire workflow, rather than just an individual task. Utilizing information about the tasks (task metadata), such a workflow-scoped task dispatcher may more optimally assign work to compute resources available on the grid, leading to a decrease in workflow execution time and more efficient use of grid computing resources. (01-27-2011)
20090025007 - Method and apparatus for managing virtual ports on storage systems - A storage system is configured to create and manage virtual ports on physical ports. The storage system can transfer associations between virtual ports and physical ports when a failure occurs in a physical port or a link connected to the physical port so that a host can access volumes under the virtual ports through another physical port. The storage system can also change associations between virtual ports and physical ports by taking into account the relative loads on the physical ports. When a virtual machine is migrated from one host computer to another, the loads on the physical ports in the storage system can be used to determine whether load balancing should take place. Additionally, the storage system can transfer virtual ports to a remote storage system that will take over the virtual ports, so that a virtual machine can be migrated to a remote location. (01-22-2009)
20090013328 - CONTENT SWITCHING PROGRAM, CONTENT SWITCHING METHOD, AND CONTENT MANAGEMENT APPARATUS - A computer-readable storage medium on which is recorded a content switching program used to direct a device for transmitting a content corresponding to data to a requester in response to a data acquire request from the requester to perform a content switching process, the process comprising: a load acquiring step of acquiring a load on the device; a content selecting step of selecting, on the basis of the acquired load, one of a plurality of contents that can be a content to be transmitted and each of which has a different volume; and a storage location changing step of changing a storage location of the content to be transmitted into a storage location of the selected content. (01-08-2009)
20090013327 - CUSTOMER INFORMATION CONTROL SYSTEM WORKLOAD MANAGEMENT BASED UPON TARGET PROCESSORS REQUESTING WORK FROM ROUTERS - The invention provides for customer information control system (CICS) workload management in performance of computer processing tasks based upon "target" processors requesting work from "routers", by providing for a target process(or) to first initiate a request to a router seeking distribution of processing task(s) before a new task is assigned by the router to that target for completion. (01-08-2009)
20130167153 - INFORMATION PROCESSING SYSTEM FOR DATA TRANSFER - A disclosed method includes: determining whether a value of a load caused by a transfer processing to transmit data received from first processing apparatuses to second processing apparatuses in response to a request from the second processing apparatuses exceeds a threshold; upon determining that the value of the load exceeds the threshold, counting, for each first processing apparatus, the number of second processing apparatuses that request data transmitted by the first processing apparatus; identifying a first processing apparatus that is a transmission source of data transferred in the transfer processing to be allocated to another transfer apparatus of plural transfer apparatuses, based on the counted number; and transmitting a change request requesting that the transfer processing of data transmitted by the identified first processing apparatus is to be allocated to the another transfer apparatus, to a management apparatus managing allocation of the transfer processing for the plural transfer apparatuses. (06-27-2013)
20130167154 - ENERGY EFFICIENT JOB SCHEDULING IN HETEROGENEOUS CHIP MULTIPROCESSORS BASED ON DYNAMIC PROGRAM BEHAVIOR - Methods for efficient job scheduling in a heterogeneous chip multiprocessor that include logic comparisons of performance metrics to determine if programs should be moved from an advanced core to a simple core or vice versa. (06-27-2013)
20090055835 - System and Method for Managing License Capacity in a Telecommunication Network - According to teachings herein, a telecommunication network manages licensed transaction capacity for a licensed service provided by the network, based on dynamically adjusting the allocation of licensed capacity across multiple traffic processors providing the service. Reallocation of licensed capacity is performed with respect to the actual traffic loads at the traffic processors. For example, licensed capacity at a lightly loaded traffic processor is decreased and licensed capacity is correspondingly increased at a heavily loaded traffic processor. This dynamic redistribution of licensed capacity to reflect variations in the distribution of traffic loads across the traffic processors provides for more efficient utilization of the licensed transaction capacity. (02-26-2009)
20110035754 - WORKLOAD MANAGEMENT FOR HETEROGENEOUS HOSTS IN A COMPUTING SYSTEM ENVIRONMENT - Methods and apparatus involve managing workload migration to host devices in a data center having heterogeneously arranged computing platforms. Fully virtualized images include drivers compatible with varieties of host devices. The images also include an agent that detects a platform type of a specific host device upon deployment. If the specific host is a physical platform type, the agent provisions native drivers. If the specific host is a virtual platform type, the agent also detects a hypervisor. The agent then provisions front-end drivers that are most compatible with the detected hypervisor. Upon decommissioning of the image, the image is returned to its pristine state and saved for later re-use. In other embodiments, detection methods of the agent are disclosed, as are computing systems, data centers, and computer program products, to name a few. (02-10-2011)
20120291044 - Routing Workloads Based on Relative Queue Lengths of Dispatchers - Mechanisms for distributing workload items to a plurality of dispatchers are provided. Each dispatcher is associated with a different computing system of a plurality of computing systems, and workload items comprise workload items of a plurality of different workload types. A capacity value for each combination of workload type and computing system is obtained. For each combination of workload type and computing system, a queue length of a dispatcher associated with the corresponding computing system is obtained. For each combination of workload type and computing system, a dispatcher's relative share of incoming workloads is computed based on the queue length for the dispatcher associated with the computing system. In addition, incoming workload items are routed to a dispatcher, in the plurality of dispatchers, based on the calculated dispatcher's relative share for the dispatcher. (11-15-2012)
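
The queue-length-relative routing in 20120291044 can be illustrated as: compute each dispatcher's share from its remaining queue headroom, then route each incoming item toward the dispatcher with the largest current share. The capacity numbers and the max-share tie-break are illustrative assumptions, not the application's exact formula.

```python
def relative_shares(dispatchers):
    """Sketch: a dispatcher with more spare headroom (capacity minus
    queue length) gets a proportionally larger share of incoming work."""
    headroom = {name: max(d["capacity"] - d["queue_len"], 0)
                for name, d in dispatchers.items()}
    total = sum(headroom.values()) or 1   # avoid division by zero
    return {name: h / total for name, h in headroom.items()}

def route(item, dispatchers):
    shares = relative_shares(dispatchers)
    # Send to the dispatcher with the largest current share.
    best = max(shares, key=shares.get)
    dispatchers[best]["queue_len"] += 1
    return best

ds = {"sysA": {"capacity": 100, "queue_len": 90},
      "sysB": {"capacity": 50, "queue_len": 10}}
print(relative_shares(ds))   # sysB gets the larger share
print(route("job-1", ds))    # 'sysB'
```
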
20100269118 - SPECULATIVE POPCOUNT DATA CREATION - A method and a data processing system by which population count (popcount) operations are efficiently performed without incurring the latency and loss of critical processing cycles and bandwidth of real time processing. The method comprises: identifying data to be stored to memory for which a popcount may need to be determined; speculatively performing a popcount operation on the data as a background process of the processor while the data is being stored to memory; storing the data to a first memory location; and storing a value of the popcount generated by the popcount operation within a second memory location. The method further comprises: determining a size of data; determining a granular level at which the popcount operation on the data will be performed; and reserving a size of said second memory location that is sufficiently large to hold the value of the popcount. (10-21-2010)
20100083274 - HARDWARE THROUGHPUT SATURATION DETECTION - Improved hardware throughput can be achieved when a hardware device is saturated with IO jobs. Throughput can be estimated based on the quantifiable characteristics of incoming IO jobs. When IO jobs are received, a time cost for each job can be estimated and stored in memory. The estimates can be used to calculate the total time cost of in-flight IO jobs, and a determination can be made as to whether the hardware device is saturated based on completion times for IO jobs. Over time the time cost estimates for IO jobs can be revised based on a comparison between the estimated time cost for an IO job and the actual time cost for the IO job using aggregate IO job completion sequences. (04-01-2010)
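
A compact sketch of the saturation-detection idea in 20100083274: keep a per-class time-cost estimate, sum the estimates of in-flight IO jobs, flag saturation when the sum exceeds a budget, and revise estimates from observed completions. The budget and the smoothing factor are illustrative assumptions.

```python
class SaturationDetector:
    """Sketch: estimate a time cost per IO job class, track total
    in-flight cost, and call the device saturated once that total
    covers an assumed concurrency budget."""

    def __init__(self, budget_ms=100.0, alpha=0.2):
        self.cost_ms = {}          # job class -> estimated cost (ms)
        self.in_flight = 0.0       # summed estimated cost of pending jobs
        self.budget_ms = budget_ms
        self.alpha = alpha         # smoothing factor for revisions

    def issue(self, job_class, default_ms=5.0):
        cost = self.cost_ms.setdefault(job_class, default_ms)
        self.in_flight += cost

    def complete(self, job_class, actual_ms):
        est = self.cost_ms[job_class]
        self.in_flight -= est
        # Revise the estimate toward the observed time cost.
        self.cost_ms[job_class] = est + self.alpha * (actual_ms - est)

    @property
    def saturated(self):
        return self.in_flight >= self.budget_ms

det = SaturationDetector()
for _ in range(25):
    det.issue("4k-read")
print(det.saturated)            # True: 25 * 5 ms >= 100 ms budget
det.complete("4k-read", actual_ms=8.0)
print(det.cost_ms["4k-read"])   # estimate drifts toward 8 ms
```
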
20110289508 - METHODS AND SYSTEMS FOR EFFICIENT API INTEGRATED LOGIN IN A MULTI-TENANT DATABASE ENVIRONMENT - Methods and systems for efficient API integrated login in a multi-tenant database environment and for decreasing latency delays during an API login request authentication, including receiving a plurality of API login requests at a load balancer of a datacenter, where each of the plurality of API login requests specifies a user identifier (userID) and/or an organizational identifier (orgID), fanning the plurality of API login requests across a plurality of redundant instances executing within the datacenter, assigning each API login request to one of the plurality of redundant instances for authentication, and, for each of the respective plurality of API login requests, performing a recursive query algorithm at the assigned redundant instance, at one or more recursive redundant instances within the datacenter, and at a remote recursive redundant instance executing in a second datacenter, as necessary, until the login request is authenticated or determined to be invalid. (11-24-2011)
20090089792 - METHOD AND SYSTEM FOR MANAGING THERMAL ASYMMETRIES IN A MULTI-CORE PROCESSOR - In general, the invention relates to a system that includes a multi-core processor and a dispatcher operatively connected to the multi-core processor. The dispatcher is configured to receive a first plurality of threads during a first period of time, dispatch the first plurality of threads only to a first core of the plurality of cores, receive a second plurality of threads during a second period of time, dispatch the second plurality of threads only to a second core of the plurality of cores, and migrate to the second core any of the first plurality of threads that are still executing on the first core after the first period of time has elapsed. The duration of the first period of time and the duration of the second period of time are determined using a thread migration schedule, and the thread migration schedule is determined using at least one thermal characteristic of the multi-core processor. (04-02-2009)
20090037925 - SMART STUB OR ENTERPRISE JAVA BEAN IN A DISTRIBUTED PROCESSING SYSTEM - A clustered enterprise distributed processing system. The distributed processing system includes a first and a second computer coupled to a communication medium. The first computer includes a virtual machine (JVM) and a kernel software layer for transferring messages, including a remote virtual machine (RJVM). The second computer includes a JVM and a kernel software layer having a RJVM. Messages are passed from a RJVM to the JVM in one computer to the JVM and RJVM in the second computer. Messages may be forwarded through an intermediate server or rerouted after a network reconfiguration. Each computer includes a Smart stub having a replica handler, including a load balancing software component and a failover software component. Each computer includes a duplicated service naming tree for storing a pool of Smart stubs at a node. (02-05-2009)
20090064170 - COMMUNICATION APPARATUS AND METHOD FOR CONTROLLING COMMUNICATION APPARATUS - A communication apparatus includes a control unit including a controller configured to control the communication apparatus, a first communication unit configured to perform communication under control of the controller, and a second communication unit including a subcontrol unit and configured to perform communication under control of the subcontrol unit, wherein a load condition of the controller is determined, and one of the first communication unit and the second communication unit is selected to perform communication processing based on the determined load condition. (03-05-2009)
20100146518 - All-To-All Comparisons on Architectures Having Limited Storage Space - Mechanisms for performing all-to-all comparisons on architectures having limited storage space are provided. The mechanisms determine a number of data elements to be included in each set of data elements to be sent to each processing element of a data processing system, and perform a comparison operation on at least one set of data elements. The comparison operation comprises sending a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element and sending a second request to main memory for transfer of a second set of data elements into the local memory. A pairwise comparison computation of the all-to-all comparison of data elements operation is performed at approximately the same time as the second set of data elements is being transferred from main memory to the local memory. (06-10-2010)
20110191783 - Techniques for managing processor resource for a multi-processor server executing multiple operating systems - A multiprocessor server system executes a plurality of multiprocessor or single-processor operating systems, each using a plurality of storage adapters and a plurality of network adapters. Each operating system maintains load information about all its processors and shares the information with other operating systems. Upon changes in the processor load of the operating systems, processors are dynamically reassigned among operating systems to improve performance if the maximum load of the storage adapters and network adapters of the reassignment target operating system is not already reached. Processor reassignment includes dynamically shutting down and restarting operating systems to allow the reassignment of the processors used by single-processor operating systems. Furthermore, the process scheduler of multi-processor operating systems keeps some processors idle under light processor load conditions in order to allow the immediate reassignment of processors to heavily loaded operating systems. (08-04-2011)
20090144745 - Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - Apparatus for evaluating the performance of DMA-based algorithmic tasks on a target multi-core processing system includes a memory and at least one processor coupled to the memory. The processor is operative: to input a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; to evaluate performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and to provide results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system. (06-04-2009)
20090144744 - Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - A method for evaluating performance of DMA-based algorithmic tasks on a target multi-core processing system includes the steps of: inputting a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; evaluating performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and providing results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system. (06-04-2009)
20130219406 - COMPUTER SYSTEM, JOB EXECUTION MANAGEMENT METHOD, AND PROGRAM - In a computer system of the present invention, whether or not master data has been updated is managed for each division key as master data management information. If the master data has been updated, a job is re-executed, but when the job is re-executed, data is divided using only a division key corresponding to updated master data, and thereby a sub-job which is a re-execution target is localized with the division key unit so as to re-execute a job (refer to FIG. …). (08-22-2013)
20100169893 - Computing Resource Management Systems and Methods - An information handling system may include a first subsystem operable to receive data associated with computing resources from at least one computing resource provider. The system may further include a second subsystem in communication with the first subsystem, the second subsystem operable to provide the computing resources to at least one computing resource customer, wherein the at least one computing resource provider receives compensation paid by the at least one computing resource customer for completion of a workload. A method for managing a computing resource within an information handling system may include receiving data associated with the computing resource from at least one computing resource provider and providing the computing resources to at least one computing resource customer. The at least one computing resource provider may receive compensation paid by the at least one resource customer for completion of a workload. (07-01-2010)
20100169892 - Processing Acceleration on Multi-Core Processor Platforms - Embodiments disclosed herein include an accelerator module that modifies a single application to run on multiple processing cores of a single CPU. In one aspect, the application performs a task that includes some parallel operations and some serial operations. The parallel tasks may be run on different cores concurrently. In addition, serial tasks may be broken up to execute among different cores simultaneously without errors. In a particular embodiment, an FFMPEG decoding application is modified by the accelerator module to execute on multiple cores and perform video decoding in real time or faster than real time. (07-01-2010)
20110219383 - PROCESSING MODEL-BASED COMMANDS FOR DISTRIBUTED APPLICATIONS - The present invention extends to methods, systems, and computer program products for processing model-based commands for distributed applications. Embodiments facilitate execution of model-based commands, including software lifecycle commands, using model-based workflow instances. Data related to command execution is stored in a shared repository such that command processors can understand their status in relationship to workflow instances. Further, since the repository is shared, command execution can be distributed and balanced across a plurality of different executive services. Embodiments also include model-based error handling and error recovery mechanisms. (09-08-2011)
20100031267 - Distribution Data Structures for Locality-Guided Work Stealing - A data structure, the distribution, may be provided to track the desired and/or actual location of computations and data that range over a multidimensional rectangular index space in a parallel computing system. Examples of such iteration spaces include multidimensional arrays and counted loop nests. These distribution data structures may be used in conjunction with locality-guided work stealing and may provide a structured way to track load balancing decisions so they can be reproduced in related computations, thus maintaining locality of reference. They may allow computations to be tied to array layout, and may allow iteration over subspaces of an index space in a manner consistent with the layout of the space itself. Distributions may provide a mechanism to describe computations in a manner that is oblivious to precise machine size or structure. Programming language constructs and/or library functions may support the implementation and use of these distribution data structures. (02-04-2010)
20120233626 - SYSTEMS AND METHODS FOR TRANSPARENTLY OPTIMIZING WORKLOADS - Systems, methods, and media for transparently optimizing a workload of a containment abstraction are provided herein. Methods may include monitoring a workload of the containment abstraction, the containment abstraction being at least partially hardware bound, the workload corresponding to resource utilization of the containment abstraction, converting the containment abstraction from being at least partially hardware bound to being entirely central processing unit (CPU) bound by placing the containment abstraction in a memory store, based upon the workload, and allocating the workload of the containment abstraction across at least a portion of a data center to optimize the workload of the containment abstraction. (09-13-2012)
20120233625 - TECHNIQUES FOR WORKLOAD COORDINATION - Techniques for workload coordination are provided. An automated discovery service identifies resources with hardware and software specific dependencies for a workload. The dependencies are made generic, and the workload and its configuration with the generic dependencies are packaged. At a target location, the packaged workload is presented and the generic dependencies automatically resolved with new hardware and software dependencies of the target location. The workload is then automatically populated in the target location. (09-13-2012)
20100251256 - Scheduling Data Analysis Operations In A Computer System - A technique includes receiving identifiers from a plurality of nodes. Each identifier identifies an associated data object, and at least some of the data objects are replicated on different nodes. The technique includes scheduling analysis of the data objects on the nodes based at least in part on a distribution of replicas of the data objects among the nodes and modeled performances of the nodes. (09-30-2010)
20120110594 - LOAD BALANCING WHEN ASSIGNING OPERATIONS IN A PROCESSOR - A method and apparatus for assigning operations in a processor are provided. An incoming instruction is received. The incoming instruction is capable of being processed: only by a first processing unit (PU), only by a second PU, or by either the first or second PU. The processing of the first and second PUs is load balanced by assigning the received instructions capable of being processed by either PU based on a metric representing differential loads placed on the first and the second PUs. (05-03-2012)
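
The dual-PU assignment in 20120110594 reduces to: fixed instructions go to their only capable unit, while flexible instructions go wherever the differential-load metric points. A toy Python sketch with a unit-cost counter standing in for the metric (the instruction format and cost model are illustrative assumptions):

```python
def assign(instruction, load):
    """Sketch: 'units' names which PUs can process the instruction.
    Flexible instructions go to whichever unit currently carries the
    smaller load; here the load metric is just a running cost counter."""
    units = instruction["units"]
    if len(units) == 1:
        choice = units[0]            # no freedom: fixed assignment
    else:
        # Differential load: positive means PU1 is busier than PU2.
        choice = "PU2" if load["PU1"] - load["PU2"] > 0 else "PU1"
    load[choice] += instruction.get("cost", 1)
    return choice

load = {"PU1": 0, "PU2": 0}
stream = [{"units": ["PU1"]},
          {"units": ["PU1", "PU2"]},   # flexible
          {"units": ["PU2"]},
          {"units": ["PU1", "PU2"]}]   # flexible
print([assign(i, load) for i in stream])  # ['PU1', 'PU2', 'PU2', 'PU1']
print(load)                               # both units end up balanced
```
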
20130219405 - APPARATUS AND METHOD FOR MANAGING DATA STREAM DISTRIBUTED PARALLEL PROCESSING SERVICE - Disclosed herein are an apparatus and method for managing a data stream distributed parallel processing service. The apparatus includes a service management unit, a Quality of Service (QoS) monitoring unit, and a scheduling unit. The service management unit registers a plurality of tasks constituting the data stream distributed parallel processing service. The QoS monitoring unit gathers information about the load of the plurality of tasks and information about the load of a plurality of nodes constituting a cluster which provides the data stream distributed parallel processing service. The scheduling unit arranges the plurality of tasks by distributing the plurality of tasks among the plurality of nodes based on the information about the load of the plurality of tasks and the information about the load of the plurality of nodes. (08-22-2013)
20110197198 - LOAD AND BACKUP ASSIGNMENT BALANCING IN HIGH AVAILABILITY SYSTEMS - Among other things, embodiments described herein enable systems, e.g., Availability Management Forum (AMF) systems, having service units to operate with balanced loads both before and after the failure of one of the service units. A configuration can be generated which provides for distributed backup roles and balanced active loads. When a failure of a service unit occurs, the active loads previously handled by that service unit are substantially evenly picked up as active loads by the remaining service units. (08-11-2011)
20120240129 - RANKING SERVICE UNITS TO PROVIDE AND PROTECT HIGHLY AVAILABLE SERVICES USING N+M REDUNDANCY MODELS - Among other things, embodiments described herein enable systems, e.g., Availability Management Forum (AMF) systems, having service units to operate with balanced loads both before and after the failure of one of the service units. A method is described for balancing standby workload assignments and active workload assignments for a group of service units in a system which employs an N+M redundancy model, wherein N service units are active service units and M service units are standby service units. An active workload that the N active service units need to handle is calculated, and each of the N active service units in the group is provided with an active workload assignment based on the calculated active workload. Standby workload assignments are distributed among the M standby service units substantially equally. (09-20-2012)
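
A sketch of an N+M layout in the spirit of 20120240129: round-robin active assignments over the N active service units and standby assignments over the M standby units, so one unit's failure spreads across several standbys instead of landing on one. The workload granularity and naming are illustrative assumptions.

```python
from itertools import cycle

def nm_assignments(workloads, actives, standbys):
    """Sketch of an N+M layout: actives receive near-equal shares of
    the workload list, and each assignment's standby role is spread
    round-robin across the M standby units, so no single standby
    inherits the whole load of a failed active unit."""
    active_of, standby_of = {}, {}
    act = cycle(actives)
    stb = cycle(standbys)
    for w in workloads:
        active_of[w] = next(act)
        standby_of[w] = next(stb)
    return active_of, standby_of

workloads = [f"wl{i}" for i in range(6)]
active_of, standby_of = nm_assignments(workloads,
                                       ["SU1", "SU2", "SU3"],   # N = 3
                                       ["SU4", "SU5"])          # M = 2
print(active_of)   # 2 active assignments per active unit
print(standby_of)  # 3 standby assignments per standby unit
# If SU1 fails, its workloads (wl0, wl3) fail over to SU4 and SU5,
# one each, keeping the post-failure load balanced.
```
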
20080263562 - Management Information System for Allocating Contractors with Requestors - A management information system, computer implemented method and computer product for allocating contractors and requestors. A networked server is provided which includes a processor, a memory coupled to the processor and a database operatively stored in the memory. The database comprises a first database component operative to maintain a plurality of contract service records, each contract service record being associated with a contractor and including contractor data representing available contract services and a contractor locality; a second database component is operative to maintain a plurality of requestor records, each requestor record being associated with an individual requestor and including requestor data representing a requested contract service and a requestor locality. A database engine is operatively loaded into the memory and includes instructions executable by the processor to determine a suggested contractor/requestor allocation in dependence on a correspondence of at least the data representing a contract service and locality among the contract service records and requestor records, output a suggested contractor/requestor allocation in a tiered order of preference of contractors, and send notices to the identified contractors. (10-23-2008)
20080271037 - METHOD AND APPARATUS FOR LOAD BALANCE SERVER MANAGEMENT - A computer implemented method, apparatus, and computer usable program code for balancing management loads. In response to receiving a notification from a hardware control point indicating that a new manageable data processing system has been discovered, loads are analyzed for a plurality of hardware control points to form an analysis. One of the plurality of hardware control points is selected using the analysis to form a selected hardware control point. A message is sent to the selected hardware control point to manage the new manageable data processing system, wherein the selected hardware control point manages the new manageable data processing system. (10-30-2008)
20120240130 - VIRTUAL WORLD SUBGROUP DETERMINATION AND SEGMENTATION FOR PERFORMANCE SCALABILITY - A system and method for decreasing server load and, more particularly, for decreasing server load by automatically determining subgroups based on object interactions and computational expenditures. The system includes a plurality of servers; a subgroup optimization module configured to segment a plurality of objects into optimal subgroups; and a server transfer module configured to apportion one or more of the optimal subgroups between the plurality of servers based on a load of each of the plurality of servers. The method includes determining a relationship amongst a plurality of objects; segmenting the objects into optimized subgroups based on the relationships; and apportioning the optimized subgroups amongst a plurality of servers based on server load. (09-20-2012)
20090089793 - Method and Apparatus for Performing Load Balancing for a Control Plane of a Mobile Communication Network - The invention includes a method and apparatus for providing load balancing of control traffic received by a mobility home agent implemented using multiple control elements. A method includes receiving, from a node, a control message intended for the network element, performing a load-balancing operation to select one of the control elements to handle the control message, and propagating the control message toward the selected one of the control elements. The load-balancing operation is performed using at least two load-balancing metrics comprising a first metric and a second metric. The load-balancing operation is performed in a manner for maintaining a context between the node from which the control message is received and the selected one of the control elements, such that subsequent control messages received from the node are propagated to the selected one of the control elements. (04-02-2009)
20090089794 - APPARATUS, SYSTEM, AND METHOD FOR CROSS-SYSTEM PROXY-BASED TASK OFFLOADING - An apparatus, system, and method are disclosed for offloading data processing. An offload task … (04-02-2009)
20080209434 - Distribution of data and task instances in grid environments - A partition analyzer may be configured to designate a data partition within a database of a grid network, and to perform a mapping of the data partition to a task of an application, the application to be at least partially executed within the grid network. A provisioning manager may be configured to determine a task instance of the task, and to determine the data partition, based on the mapping, where the data partition may be stored at an initial node of the grid network. A processing node of the grid network having processing resources required to execute the task instance and a data node of the grid network having memory resources required to store the data partition may be determined. The task instance may be deployed to the processing node, and the data partition may be re-located from the initial node to the data node, based on the comparison. (08-28-2008)
20090288096 - LOAD BALANCING FOR IMAGE PROCESSING USING MULTIPLE PROCESSORS - A method and system for load balancing the work of NP processors (NP≧3) configured to generate each image of multiple images in a display area of a display device. The process for each image includes: dividing the display area logically into NP initial segments ordered along an axis of the display area; assigning each processor to a corresponding initial segment; assigning a thickness to each initial segment; simultaneously computing an average work function per pixel for each initial segment; generating a cumulative work function from the average work function per pixel for each initial segment; partitioning a work function domain of the cumulative work function into NP sub-domains; determining NP final segments of the display area by using the cumulative work function to inversely map boundaries of the sub-domains onto the axis; assigning each processor to a final segment; and displaying and/or storing the NP final segments. (11-19-2009)
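
The cumulative-work partition in 20090288096 is easy to sketch along one axis: accumulate per-row work estimates, split the cumulative range into NP equal sub-domains, and inverse-map the sub-domain boundaries back to row indices. The per-row cost list below is an illustrative stand-in for the patent's per-segment average work function computed in parallel.

```python
import itertools

def final_segments(row_work, num_procs):
    """Sketch: row_work[i] is an estimated cost for scanline i.
    Partition the total work into num_procs equal sub-domains and
    inverse-map each boundary to the first row reaching it."""
    cum = list(itertools.accumulate(row_work))
    total = cum[-1]
    bounds, start = [], 0
    for p in range(1, num_procs):
        target = total * p / num_procs
        # Inverse map: first row where cumulative work reaches target.
        end = next(i for i, c in enumerate(cum) if c >= target)
        bounds.append((start, end))
        start = end + 1
    bounds.append((start, len(row_work) - 1))
    return bounds

# Bottom rows are 4x as expensive: the cheap top segment grows wider.
work = [1] * 8 + [4] * 8
print(final_segments(work, 2))  # [(0, 10), (11, 15)]: 20 work units each
```
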
20090288095 - Method and System for Optimizing a Job Scheduler in an Operating System - A workload scheduler determines how to submit jobs to several scheduler agents across multiple systems. The scheduler engine determines the systems to which it is able to submit jobs. A job is received and analyzed to determine systems to which the job can be submitted. The scheduler engine determines which system will receive the job by evaluating the next system in line and determining if the job can be sent to that system and if that system is currently in a healthy state. The scheduler engine sends the job to the selected system. The scheduler agents inform the scheduler engine when the job is submitted and when it is executed. Once a time period has expired, the engine evaluates the health of each of the systems based on the number of jobs submitted and executed by each system. (11-19-2009)
20080276247 - Method for the Real-Time Analysis of a System - The invention relates to a method for the real-time analysis of a system, especially a technical system, which is to process tasks (τ). A job that is defined by processing of a task (τ) generates system expenses. In order to create a particularly quick and accurate method, an approximation of the method is cancelled when it is considered that an interval (I, I…). (11-06-2008)
20080216086 - METHOD OF ANALYZING PERFORMANCE IN A STORAGE SYSTEM - A method of balancing a load in a computer system having at least one storage system and a management computer, each of the storage systems having physical disks and a disk controller, the load balancing method including the steps of: setting at least one of the physical disks as a parity group; providing a storage area of the set parity group as at least one logical volume to the host computer; calculating a logical volume migration time when a utilization ratio of the parity group becomes equal to or larger than a threshold; and choosing, as a data migration source volume, one of the logical volumes included in the parity group that has the utilization ratio equal to or larger than the threshold, by referring to the calculated logical volume migration time, the data migration source volume being the logical volume from which data migrates. (09-04-2008)
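
A sketch of the migration-source choice in 20080216086: once a parity group's utilization crosses the threshold, estimate each logical volume's migration time and consult it when picking the source volume. The abstract does not say which volume wins, so choosing the fastest-to-move volume, and the size-over-bandwidth time model, are illustrative assumptions.

```python
def choose_migration_source(parity_group, bandwidth_mb_s=200.0,
                            threshold=0.8):
    """Sketch: if the parity group's utilization ratio is at or over
    the threshold, estimate each logical volume's migration time and
    pick a source volume by referring to those times (here, the
    volume that can be moved fastest)."""
    if parity_group["utilization"] < threshold:
        return None                      # no balancing needed yet

    def migration_time(vol):
        # Assumed model: volume size divided by available bandwidth.
        return vol["size_mb"] / bandwidth_mb_s

    return min(parity_group["volumes"], key=migration_time)

group = {"utilization": 0.92,
         "volumes": [{"name": "lv0", "size_mb": 80_000},
                     {"name": "lv1", "size_mb": 20_000}]}
print(choose_migration_source(group)["name"])  # 'lv1', fastest to move
```
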
20130219407 - OPTIMIZED JOB SCHEDULING AND EXECUTION IN A DISTRIBUTED COMPUTING GRID - A disclosed example involves determining whether there is at least one valid combination of nodes and links from the network of nodes with capability and capacity over time to complete a computer-executable job by a deadline. A total cost combination of nodes and links is selected from among the at least one valid combination of nodes and links with the capability and capacity over time to complete the computer-executable job by the deadline. The computer-executable job is scheduled to be executed on at least one selected node. The scheduling is based on compiled instructions comprising the computer-executable job. At least some of the link capacity of at least one of the links connected to the at least one selected node is reserved, to match a job transport capacity requirement of the computer-executable job. (08-22-2013)
20100146517 - SYSTEM AND METHOD FOR A RATE CONTROL TECHNIQUE FOR A LIGHTWEIGHT DIRECTORY ACCESS PROTOCOL OVER MQSERIES (LOM) SERVER - A system and method for controlling rates for a Lightweight Directory Access Protocol (LDAP) over MQSeries (LoM) server. The system comprises a health metrics engine configured to calculate an actual delay value, at least one LoM server configured to asynchronously obtain the actual delay value from the health metrics engine and place the delay value between one or more requests, and an LDAP master configured to accept the one or more requests and send information in the one or more requests to an LDAP replica. (06-10-2010)
20090265713 - PROACTIVE CORRECTION ALERTS - Computerized methods and systems for creating and documenting protocol orders in a molecular diagnostic laboratory environment are provided. Utilizing the methods and systems described herein, protocol statements may require values to be entered in association therewith prior to permitting access to subsequent protocol orders. Accordingly, more accurate test runs and, consequently, more accurate test results may be achieved. Additionally, as values associated with protocol statements are electronically captured, in accordance with embodiments hereof, such values may be searched to evaluate trends or identify protocol orders and/or results that may be affected by a later discovered error or the like. (10-22-2009)
20090064167 - System and Method for Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history. (03-05-2009)
20080288952 - PROCESSING APPARATUS AND DEVICE CONTROL UNIT - A processing apparatus including a plurality of task-processing devices includes a calculation control unit and a device control unit configured to cause the task-processing devices to perform tasks of at least one kind in parallel in accordance with control performed by the calculation control unit. The device control unit sends a command for starting task processing to each of the task-processing devices in accordance with the task group generated by and sent from the calculation control unit. The task-processing devices each execute a task issued from the device control unit, and when the task is complete, each provide a notification that the task is complete to the device control unit. The device control unit provides, in the case in which all tasks included in the task group are complete, a notification that the task group is complete to the calculation control unit. (11-20-2008)
20130191845 - LOAD CONTROL DEVICE AND LOAD CONTROL METHOD - A load control device controlling a load of an executed program includes an arithmetic processing unit configured to execute the program, a load detection unit configured to detect a load factor of the arithmetic processing unit, a load-difference detection unit configured to obtain a difference between a predetermined load factor and the load factor detected by the load detection unit, and a load controller configured to control, for a predetermined time, the start or stop of the program executed by the arithmetic processing unit so that the arithmetic processing unit has the predetermined load factor on the basis of the difference detected by the load-difference detection unit. (07-25-2013)
20100146516 - Distributed Task System and Distributed Task Management Method - A distributed task system has a task transaction server and at least one task server. Instead of being merely passively called by the task transaction server for executing a task, the task server performs self-balancing according to task execution conditions and operation conditions of the task server. The task transaction server receives task requests from the task server, records the execution conditions, and provides feedback to the task server, and the task server executes the task according to the received feedback and the operation conditions of the task server. The task transaction server may determine whether the task server can execute the task according to the execution conditions of the task, and feed that back to the task server. A self-balancing unit of the task server may further determine whether the task server is busy, and if not busy, trigger a task execution unit of the task server to execute the task. (06-10-2010)
20120198470COMPACT NODE ORDERED APPLICATION PLACEMENT IN A MULTIPROCESSOR COMPUTER - A multiprocessor computer system comprises a plurality of nodes, wherein the nodes are ordered using a snaking dimension-ordered numbering. An application placement module is operable to place an application in nodes with preference given to nodes ordered near one another.08-02-2012
20090165012SYSTEM AND METHOD FOR EMBEDDED LOAD BALANCING IN A MULTIFUNCTION PERIPHERAL (MFP) - The invention relates to multifunction peripherals (MFPs). More particularly, the invention relates to an embedded load balancer in a multifunction peripheral. An MFP with an embedded load balancer may determine that another suitable device is more capable of handling a job request, and, subsequently, may transfer the job request to the other device.06-25-2009
20090178053DISTRIBUTED SCHEMES FOR DEPLOYING AN APPLICATION IN A LARGE PARALLEL SYSTEM - Embodiments of the invention provide a method for deploying and running an application on a massively parallel computer system, while minimizing the costs associated with latency, bandwidth, and limited memory resources. The executable code of a program may be divided into multiple code fragments and distributed to different compute nodes of a parallel computing system. During program execution, one compute node may fetch code fragments from other compute nodes as necessary.07-09-2009
20090165014Method and apparatus for migrating task in multicore platform - Provided are a method and apparatus for migrating a task in a multi-core platform including a plurality of cores. The method includes transmitting codes of the task that is being performed in a first core among the plurality of cores to a second core among the plurality of cores, the transmitting of the codes being performed while performing the task at the first core, and resuming performing of the task in the second core based on the transmitted codes.06-25-2009
20090019450APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT FOR TASK MANAGEMENT - A task management apparatus comprises a plurality of processors, and correspondingly stores a plurality of tasks to be assigned to the processors within a predetermined period of time, and temporal groups, each of which is assigned to the plurality of the tasks. The task management apparatus assigns one of the tasks to one of the processors. After having assigned the task, the task management apparatus assigns, to the one of the processors that has finished processing the assigned task, the other tasks that are in correspondence with the same temporal group as the temporal group with which the assigned task is in correspondence, before assigning the tasks that are not in correspondence with the temporal group.01-15-2009
20090178052LATENCY-AWARE THREAD SCHEDULING IN NON-UNIFORM CACHE ARCHITECTURE SYSTEMS - A system and method for latency-aware thread scheduling in non-uniform cache architecture are provided. Instructions may be provided to the hardware specifying in which banks to store data. Information as to which banks store which data may also be provided, for example, by the hardware. This information may be used to schedule threads on one or more cores. A selected bank in cache memory may be reserved strictly for selected data.07-09-2009
20090144746ADJUSTING WORKLOAD TO ACCOMMODATE SPECULATIVE THREAD START-UP COST - Methods and apparatus provide for a workload adjuster to estimate the startup cost of one or more non-main threads of loop execution and to estimate the amount of workload to be migrated between different threads. Upon deciding to parallelize the execution of a loop, the workload adjuster creates a scheduling policy with a workload for a main thread and workloads for respective non-main threads. The scheduling policy distributes iterations of a parallelized loop to the workload of the main thread and iterations of the parallelized loop to the workloads of the non-main threads. The workload adjuster evaluates a start-up cost of the workload of a non-main thread and, based on the start-up cost, migrates a portion of the workload for that non-main thread to the main thread's workload.06-04-2009
20110225594Method and Apparatus for Determining Resources Consumed by Tasks - In a computer system comprising a plurality of computing devices wherein the plurality of computing devices processes a plurality of tasks and each task has a task type, a method for determining overheads associated with task types comprises the following steps. Overheads are estimated for a plurality of task types. One of the plurality of computing devices is selected to execute one of the plurality of tasks, wherein the selection comprises estimating load on at least a portion of the plurality of computing devices from tasks assigned to at least a portion of the plurality of computing devices and the estimates of overheads of the plurality of task types. One or more of the estimates of overheads of the plurality of task types are varied.09-15-2011
20090199199Backup procedure with transparent load balancing - In an embodiment of the invention, an apparatus and method provides a backup procedure with transparent load balancing. The apparatus and method perform acts including: performing a preamble phase in order to determine if a file will be backed up from an agent to a portal; and applying a chunking policy on the file, wherein the chunking policy comprises performing chunking of the file on an agent, performing chunking of the file on the portal, or transmitting the file to the portal without chunking.08-06-2009
20090199200Mechanisms to Order Global Shared Memory Operations - A method and data processing system for performing fence operations within a global shared memory (GSM) environment having a local task executing on a processor and providing GSM commands for processing by a host fabric interface (HFI) window that is allocated to the task. The HFI window has one or more registers for use during local fence operations. A first register tracks a first count of task-issued GSM commands, and a second register tracks a second count of GSM operations being processed by the HFI. The processing logic detects a locally-issued fence operation, and responds by performing a series of operations, including: automatically stopping the task from issuing additional GSM commands; monitoring for completion of all the task-issued GSM commands at the HFI; and triggering a resumption of issuance of GSM commands by the task when the completion of all previous task-issued GSM commands is registered by the HFI.08-06-2009
20090083751INFORMATION PROCESSING APPARATUS, PARALLEL PROCESSING OPTIMIZATION METHOD, AND PROGRAM - According to one embodiment, an information processing apparatus includes a plurality of execution units and a scheduler which controls assignment of a plurality of basic modules of a program to the plurality of execution units. The scheduler detects a parallel degree representing a parallelization ratio in parallel processing of a program by the plurality of execution units, and detects a load associated with control of assigning the plurality of basic modules in the parallel processing of the program by the plurality of execution units. And then, the scheduler combines two or more basic modules which are successively executed according to a paralleled execution description in order to assign two or more basic modules as a module to a single execution unit, when a value of the parallel degree exceeds a predetermined value and a value of the load exceeds a predetermined value.03-26-2009
20080263563METHOD AND APPARATUS FOR ONLINE SAMPLE INTERVAL DETERMINATION - In one embodiment, functional system elements are added to an autonomic manager to enable automatic online sample interval selection. In another embodiment, a method for determining the sample interval by continually characterizing the system workload behavior includes monitoring the system data and analyzing the degree to which the workload is stationary. This makes the online optimization method less sensitive to system noise and capable of being adapted to handle different workloads. The effectiveness of the autonomic optimizer is thereby improved, making it easier to manage a wide range of systems.10-23-2008
20110231860LOAD DISTRIBUTION SYSTEM - A load distribution system for allocating a job to one of a plurality of arithmetic devices includes a temperature data acquirer, a candidate selector, and a job allocator. The temperature data acquirer acquires temperature data indicating temperature of each of the plurality of arithmetic devices. The candidate selector selects at least one of the plurality of arithmetic devices as a candidate for a device to which the job is to be allocated. The job allocator allocates the job to the selected candidate.09-22-2011
20090070771METHOD AND SYSTEM FOR EVALUATING VIRTUALIZED ENVIRONMENTS - A system and method are provided for incorporating compatibility analytics and virtualization rule sets into a transformational physical to virtual (P2V) analysis for designing a virtual environment from an existing physical environment and for ongoing management of the virtual environment to refine the virtualization design to accommodate changing requirements and a changing environment.03-12-2009
20090064169System and Method for Sensor Scheduling - A system for sensor scheduling includes a plurality of sensors operable to perform one or more tasks and a processor operable to receive one or more missions and one or more environmental conditions associated with a respective mission. Each mission may include one or more tasks to be performed by one or more of the plurality of sensors. The processor is further operable to select one or more of the plurality of sensors to perform a respective task associated with the respective mission. The processor may also schedule the respective task to be performed by the selected one or more sensors. The scheduling is based at least on a task value that is determined based on an options pricing model. The options pricing model is based at least on the importance of the respective task to the success of the respective mission and one or more scheduling demands.03-05-2009
20090254918Mechanism for Performance Optimization of Hypertext Preprocessor (PHP) Page Processing Via Processor Pinning - A method, system, and computer program product for optimizing “Hypertext Preprocessor” (PHP) processes by identifying the PHP pages which are active on a server and forwarding requests for specific pages to a processor which has recently processed that page. A request processing optimization (RPO) utility assigns an initial request received at the server for a PHP page based on a number of factors which may include a relative usage level of a processor within a pool of available processors on a server. The RPO utility assigns a request to additional processors based on: (1) a threshold frequency of page requests; and (2) a resource intensive factor of a page request measured by average response time of the page request. The assignment of PHP pages to a particular processor(s) enhances cache performance since the requisite code for a specific PHP page is loaded into the processor's cache.10-08-2009
20090210881PROCESS PLACEMENT IN A PROCESSOR ARRAY - There is provided a method for placing a plurality of processes onto respective processor elements in a processor array, the method comprising (i) assigning each of the plurality of processes to a respective processor element to generate a first placement; (ii) evaluating a cost function for the first placement to determine an initial value for the cost function, the result of the evaluation of the cost function indicating the suitability of a placement, wherein the cost function comprises a bandwidth utilisation of a bus interconnecting the processor elements in the processor array; (iii) reassigning one or more of the processes to respective different ones of the processor elements to generate a second placement; (iv) evaluating the cost function for the second placement to determine a modified value for the cost function; and (v) accepting or rejecting the reassignments of the one or more processes based on a comparison between the modified value and the initial value.08-20-2009
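The assign/evaluate/reassign/accept-or-reject loop described above is, in effect, a local search over placements. The sketch below illustrates steps (i) through (v) with an assumed cost function (pairwise traffic weighted by distance between assigned elements, standing in for bus bandwidth utilisation); it is an illustration, not the patented method itself:

import random

def bus_cost(placement, traffic):
    # Stand-in cost function: traffic between processes weighted by the
    # distance between their assigned processor elements.
    n = len(placement)
    return sum(traffic[a][b] * abs(placement[a] - placement[b])
               for a in range(n) for b in range(n))

def place(n_procs, n_elems, traffic, iters=1000):
    placement = [i % n_elems for i in range(n_procs)]  # (i) first placement
    cost = bus_cost(placement, traffic)                # (ii) initial value
    for _ in range(iters):
        p = random.randrange(n_procs)
        old = placement[p]
        placement[p] = random.randrange(n_elems)       # (iii) reassign one process
        new_cost = bus_cost(placement, traffic)        # (iv) modified value
        if new_cost <= cost:
            cost = new_cost                            # (v) accept
        else:
            placement[p] = old                         # (v) reject
    return placement, cost

traffic = [[0, 5, 1], [5, 0, 2], [1, 2, 0]]
print(place(3, 2, traffic, iters=200))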
20090222837WORK FLOW MANAGEMENT SYSTEM AND WORK FLOW MANAGEMENT METHOD - A work flow management method is executed by a system including a person-in-charge terminal and a management device, wherein the person-in-charge terminal receives an input of information containing a value requested for decision through an operation by a person in charge, transmits the input information as application information to the management device, and receives result information of an examination about the application information from the management device, and the management device notifies a decider of the application information received from the person-in-charge terminal, receives an input of the result information containing a range in which at least the application information can take a value when approving the application information through the operation of the decider, and transmits the result information to the person-in-charge terminal.09-03-2009
20080244611PRODUCT, METHOD AND SYSTEM FOR IMPROVED COMPUTER DATA PROCESSING CAPACITY PLANNING USING DEPENDENCY RELATIONSHIPS FROM A CONFIGURATION MANAGEMENT DATABASE - The invention discloses a computer data processing capacity planning system that utilizes known workload planning information along with hardware and/or software configuration information from the actual operating environment to accurately estimate the production system capacity available for use in carrying out one or more processing task(s).10-02-2008
20080282254Geographic Resiliency and Load Balancing for SIP Application Services - A mechanism for achieving resiliency and load balancing for SIP application services and, in particular, in geographic distributed sites. A method performs a distribution of SIP requests among SIP servers, where at least two sites with a load balancer in each site is configured. The method includes receiving a SIP request by a first load balancer in a first site; determining whether the SIP request should be redirected to a second site; and redirecting the SIP request to an address of a second load balancer in the second site. The invention also includes a SIP proxy including a receiving unit receiving SIP requests; a load balancing unit distributing SIP requests between SIP entities; and a health monitoring unit verifying availability of the SIP entities. The SIP proxy may further be configured with a proximity measuring unit determining a proximity to a SIP entity.11-13-2008
20090260016SYSTEM AND/OR METHOD FOR BULK LOADING OF RECORDS INTO AN ORDERED DISTRIBUTED DATABASE - In a large-scale transaction such as the bulk loading of new records into an ordered, distributed database, a transaction limit such as an insert limit may be chosen, partitions on overfull storage servers may be designated to be moved to underfull storage servers, and the move assignments may be based, at least in part, on the degree to which a storage server is underfull and the move and insertion costs of the partitions to be moved.10-15-2009
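A greedy reading of the scheme above: pick partitions on overfull servers and assign them to the most underfull servers until loads fall within a limit. The following sketch folds move and insertion costs into partition size and treats the insert limit as a tolerance around the mean load; both simplifications are assumptions:

def rebalance(servers, insert_limit):
    # servers: name -> list of partition sizes.
    loads = {s: sum(p) for s, p in servers.items()}
    mean = sum(loads.values()) / len(loads)
    moves = []
    for src in sorted(loads, key=loads.get, reverse=True):  # most overfull first
        while loads[src] > mean + insert_limit and servers[src]:
            part = min(servers[src])            # cheapest partition to move
            dst = min(loads, key=loads.get)     # most underfull server
            if dst == src:
                break
            servers[src].remove(part)
            servers[dst].append(part)
            loads[src] -= part
            loads[dst] += part
            moves.append((part, src, dst))
    return moves

servers = {"s1": [40, 30, 20], "s2": [10], "s3": [5]}
print(rebalance(servers, insert_limit=10))  # [(20, 's1', 's3'), (30, 's1', 's2')]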
20100153966TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER USING LOCAL JOB TABLES - A technique for operating a high performance computing cluster includes monitoring workloads of multiple processors. The high performance computing cluster includes multiple nodes that each include two or more of the multiple processors. Workload information for the multiple processors is periodically updated in respective local job tables maintained in each of the multiple nodes. Based on the workload information in the respective local job tables, one or more threads are periodically moved to a different one of the multiple processors.06-17-2010
20100153965TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER BASED ON INTER-THREAD COMMUNICATIONS - A technique for operating a high performance computing (HPC) cluster includes monitoring communication between threads assigned to multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more of the threads are moved to a different one of the multiple processors based on the communication between the threads.06-17-2010
20100153962Method and system for controlling distribution of work items to threads in a server - A system and method are presented to control distribution of work items to threads in a server. The system and method include a permit dispenser that keeps track of permits, and a plurality of thread pools each including a queue with a configurable size, being configured with a desired concurrency and a size of the queue that is equal to a total number of work items to be executed by pool threads in the thread pool. The number of permits specifies a total number of threads available for executing the work items in the server. Each pool thread executes a work item in the thread pool, determines whether a thread surplus or a thread deficit exists, and shrinks or grows the thread pool respectively.06-17-2010
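The permit dispenser above amounts to a global thread budget shared by all pools, with each pool growing toward its desired concurrency only while permits remain. A minimal sketch of that accounting (class and method names are invented for illustration):

import threading

class PermitDispenser:
    # One permit corresponds to one thread available server-wide.
    def __init__(self, total_permits):
        self._sem = threading.Semaphore(total_permits)
    def acquire(self):
        return self._sem.acquire(blocking=False)
    def release(self):
        self._sem.release()

class ThreadPool:
    def __init__(self, dispenser, desired_concurrency):
        self.dispenser = dispenser
        self.desired = desired_concurrency
        self.active = 0
    def adjust(self, queued_items):
        # Thread deficit: grow while work is queued and permits remain.
        while self.active < min(self.desired, queued_items) and self.dispenser.acquire():
            self.active += 1
        # Thread surplus: shrink and hand permits back to the dispenser.
        while self.active > queued_items:
            self.active -= 1
            self.dispenser.release()

dispenser = PermitDispenser(total_permits=4)
pool = ThreadPool(dispenser, desired_concurrency=3)
pool.adjust(queued_items=5)  # grows to 3 active threads
pool.adjust(queued_items=1)  # shrinks to 1, returning 2 permits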
20090260017WORKFLOW EXECUTION DEVICE AND WORKFLOW EXECUTION METHOD - A workflow execution device is provided. The latest update date information, relating to the date when an update is performed, is added to each step of a workflow definition file. When executing each step of the workflow definition file, processing execution date information, relating to the date when the execution is performed, is added to the data processed. Subsequently, when executing each step of the workflow definition file, the final processing date of the data is determined from the processing execution date information added to the data processed. In a case where the update date determined from the latest update date information prior to the execution is later than the final processing date of the data to be processed in the execution, the processing in the execution is cancelled.10-15-2009
20100153964LOAD BALANCING OF ADAPTERS ON A MULTI-ADAPTER NODE - Load balancing of adapters on a multi-adapter node of a communications environment. A task executing on the node selects an adapter resource unit to be used as its primary port for communications. The selection is based on the task's identifier, and facilitates a balancing of the load among the adapter resource units. Using the task's identifier, an index is generated that is used to select a particular adapter resource unit from a list of adapter resource units assigned to the task. The generation of the index is efficient and predictable.06-17-2010
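The abstract above asks only that the index be generated efficiently and predictably from the task identifier; simple modular indexing into the assigned adapter list satisfies both properties. Modulo is an assumption here, not a detail taken from the patent:

def primary_adapter(task_id, adapters):
    # Deterministic, O(1) mapping from task identifier to adapter resource unit.
    return adapters[task_id % len(adapters)]

adapters = ["hfi0", "hfi1", "hfi2", "hfi3"]
print([primary_adapter(t, adapters) for t in range(6)])
# ['hfi0', 'hfi1', 'hfi2', 'hfi3', 'hfi0', 'hfi1'] -- consecutive task ids
# spread evenly across the adapter resource units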
20100162261Method and System for Load Balancing in a Distributed Computer System - In an embodiment, a distributed computer system comprises a plurality of computers connected substantially in a logical ring architecture. The computers are configured with a synchronized clock operation. At least one predetermined token designated with any one of a busy or an idle status circulates through the logical ring, wherein the computers are configured to check the status and give away or receive a predetermined job for completion, based on one or more predetermined conditions. Further, any deadlock generated is released by preempting the jobs based on predetermined criteria.06-24-2010
20100186020SYSTEM AND METHOD OF MULTITHREADED PROCESSING ACROSS MULTIPLE SERVERS - In one embodiment the present invention includes a computer implemented system and method of multithreaded processing on multiple servers. Jobs may be received in a jobs table for execution. Each of a plurality of servers may associate a thread for executing a particular job type. As a job is received in the jobs table, the associated thread on each server may access the jobs table and pick up the job if the job type for the job is associated with the thread. Jobs may include sequential and parallel tasks to be performed. Sequential job tasks may be performed by one associated thread on one server, while parallel job tasks may be performed by each associated thread on each server. In one embodiment, a metadata table is used to coordinate multithreaded processing across multiple servers.07-22-2010
20100262975AUTOMATED WORKLOAD SELECTION - A job submission method that presents a set of algorithms that provide automated workload selection to a batch processing system that has the ability to receive and run jobs on various computing resources simultaneously is provided. If all machines in the batch system are running jobs, a queue containing the extra jobs for execution results. For compute intensive workloads, such as chip design, an automated workload selection system software layer submits jobs to the batch processing system. This keeps the batch processing system continually full of useful work. The job submission system provides for organizing workloads, assigning relative ratios between workloads, associating arbitrary workload validation algorithms with a workload or parent workload, associating arbitrary selection algorithms with a workload or workload group, defining high priority workloads that preserve fairness, and balancing the workload selection based on current status of the batch system, validation status, and the workload ratios.10-14-2010
20100262974Optimized Virtual Machine Migration Mechanism - A virtual machine management system may perform a three phase migration analysis to move virtual machines off of less efficient hosts to more efficient hosts. In many cases, the migration may allow inefficient host devices to be powered down and may reduce overall energy costs to a datacenter or other user. The migration analysis may involve performing a first consolidation, a load balancing, and a second consolidation when consolidating virtual machines and freeing host devices. The migration analysis may also involve performing a first load balancing, a consolidation, and a second load balancing when expanding capacity.10-14-2010
20100192158Modeling Computer System Throughput - A method of determining an estimated data throughput capacity for a computer system includes the steps of creating a first model of data throughput of a central processing subsystem in the computer system as a function of latency of a memory subsystem of the computer system; creating a second model of the latency in the memory subsystem as a function of bandwidth demand of the memory subsystem; and finding a point of intersection of the first and second models. The point of intersection corresponds to a possible operating point for said computer system.07-29-2010
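The intersection of the two models can be found numerically once both are expressed as functions. The curve shapes and coefficients below are invented purely to show the fixed-point calculation; the patent does not specify functional forms:

def throughput_given_latency(lat_ns):
    # Model 1 (assumed form): CPU throughput falls as memory latency rises.
    return 2000.0 / (1.0 + lat_ns / 100.0)  # MB/s

def latency_given_demand(demand_mbs):
    # Model 2 (assumed form): memory latency grows with bandwidth demand.
    return 80.0 + 0.05 * demand_mbs  # ns

# Iterate toward the point where the two curves agree -- the candidate
# operating point for the modeled system.
demand = 1000.0
for _ in range(50):
    demand = throughput_given_latency(latency_given_demand(demand))
print(round(demand, 1), "MB/s at", round(latency_given_demand(demand), 1), "ns")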
20090007133Balancing of Load in a Network Processor - According to an aspect of the present invention, a scheduler balances the load on the microengines comprising one or more threads allocated to execute a corresponding microblock. The scheduler determines the load on each microengine at regular time intervals. The scheduler balances the load of a heavily loaded microengine by distributing the corresponding load among one or more lightly loaded microengines.01-01-2009
20090077562Client Affinity in Distributed Load Balancing Systems - Aspects of the subject matter described herein relate to client affinity in distributed load balancing systems. In aspects, a request from a requester is sent to each server of a cluster. Each server determines whether it has affinity to the requester. If so, the server responds to the request. Otherwise, if the request would normally be load balanced to the server, the server queries the other servers in the cluster to determine whether any of them have affinity to the requester. If one of them does, the server drops the request and allows the other server to respond to the request; otherwise, the server responds to the request.03-19-2009
20100162260Data Processing Apparatus - A method of providing a service application on a data processing apparatus comprising an interconnect and a plurality of data processing nodes, the method comprising the steps of: registering a service class at the interconnect, the service class having an associated service descriptor; generating a service object at a data processing node, the service object comprising an instance of the service class; and storing subscription information at the interconnect to permit messages to be routed to the service object in accordance with the service descriptor.06-24-2010
20100205609USING TIME STAMPS TO FACILITATE LOAD REORDERING - Some embodiments of the present invention provide a system that supports load reordering in a processor. The system maintains at least one counter value for each thread which is used to assign time stamps for the thread. While performing a load for the thread, the system reads a time stamp from a cache line to which the load is directed. Next, if the counter value is equal to the time stamp, the system performs the load. Otherwise, if the counter value is greater than the time stamp, the system performs the load and increases the time stamp to be greater than or equal to the counter. Finally, if the load is a speculative load, which is speculatively performed earlier than an older load in program order, and the counter value is less than the time stamp, the system fails speculative execution for the thread.08-12-2010
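The three cases in the abstract translate directly into a comparison between the thread's counter and the cache line's time stamp. A sketch with the speculative case modeled as a flag; the behavior of a non-speculative load whose counter is below the stamp is not spelled out in the abstract, so the fallback below is an assumption:

def perform_load(counter, line, speculative):
    if counter == line["stamp"]:
        return "load"                      # counter equals stamp: just load
    if counter > line["stamp"]:
        line["stamp"] = counter            # load and raise stamp to >= counter
        return "load"
    # counter < stamp: a speculative load performed too early must fail.
    return "fail-speculation" if speculative else "load"

line = {"stamp": 5}
print(perform_load(7, line, speculative=False))  # "load"; stamp is now 7
print(perform_load(3, line, speculative=True))   # "fail-speculation"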
20100251259System And Method For Recruitment And Management Of Processors For High Performance Parallel Processing Using Multiple Distributed Networked Heterogeneous Computing Elements - A parallel processing computer is described that has several processing devices of several different processing device types each communicating over a computer network. The computer has at least one conversion device in communication with the processing devices, the conversion device being a processing device having conversion code for translating at least some task allocation and other messages from a format understood by the conversion device into a format understood for execution by a particular type of the several types of the processing devices. The computer also has at least one access device in communication with the at least one conversion device, the access device having program code for allocating tasks to processing devices and generating task allocation messages to processing devices. The computer network in an embodiment involves portions of the cellular telephone network as well as part of the internet.09-30-2010
20100251258RECORDING MEDIUM HAVING LOAD BALANCING PROGRAM RECORDED THEREON, LOAD BALANCING APPARATUS AND METHOD THEREOF - A load balancing method for servers including: allocating a job to one or more servers, respectively, having a load lower than a first reference value; upon detection of a first server having a load that is higher than the first reference value and is lower than a second reference value, reducing a load of a second server having the lowest load among the servers by load balancing; and upon detection of any server having a load that is higher than the second reference value, reallocating a job of that server to another server having the lowest load among the servers.09-30-2010
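The method above defines three load bands around the two reference values. A compact sketch of the per-server classification, with the corrective action for each band noted in comments:

def classify(load, ref1, ref2):
    if load < ref1:
        return "accept-jobs"           # below the first reference: may take work
    if load < ref2:
        return "shed-to-least-loaded"  # between references: relieve via the
                                       # least-loaded server
    return "reallocate-jobs"           # above the second reference: move its
                                       # jobs to the least-loaded server

for load in (0.3, 0.6, 0.9):
    print(load, classify(load, ref1=0.5, ref2=0.8))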
20100251257METHOD AND SYSTEM TO PERFORM LOAD BALANCING OF A TASK-BASED MULTI-THREADED APPLICATION - A method and system to balance the load of a task-based multi-threaded application on a platform. When the work required by the multi-threaded application is represented as a task with a computational requirement that is proportional to the amount of the work, embodiments of the invention control the recursive binary task division of the task using auxiliary partitions to create subtasks of balanced loads to enhance resource utilization and to improve application performance. The task is binary partitioned recursively into a plurality of subtasks until the plurality of subtasks is equal to the plurality of resources available on the platform to execute the subtasks.09-30-2010
20100153963Workload management in a parallel database system - Embodiments of the present invention are directed to a workload management service component of a parallel database-management system that monitors usage of computational resources in the parallel database-management system and that provides a query-processing-task-management interface and a query-execution engine that receives query-processing requests associated with one of a number of services from host computers and accesses the workload-management-services component to determine whether to immediately launch execution of query-processing tasks corresponding to the received query-processing requests or to place the query-processing requests on wait queues for subsequent execution based on the current usage of computational resources within the parallel database-management system.06-17-2010
20100235845SUB-TASK PROCESSOR DISTRIBUTION SCHEDULING - A method for processing of processor executable tasks and a processor readable medium having embodied therein processor executable instructions for implementing the method are disclosed. A system for distributing processing work amongst a plurality of distributed processors is also disclosed. A task generated with a local node is divided into one or more sub-tasks. An optimum number of nodes x on which to process the sub-tasks is determined. If x is greater than one, a determination is made to either (1) execute the task at the local node with the processor unit, (2) distribute the task among two or more local node processors, (3) distribute the task to one or more of the distributed nodes accessible to the local node over a LAN, or (4) distribute the task to one or more of the distributed nodes that are accessible to the local node over a WAN.09-16-2010
20100211958AUTOMATED RESOURCE LOAD BALANCING IN A COMPUTING SYSTEM - A method for automated resource load balancing in a computing system includes partitioning a plurality of physical resources to create a plurality of dedicated resource sets. A plurality of separate environments are created on the computing system. Each created separate environment is associated with at least one dedicated resource set. The method further includes establishing a user policy that includes a utilization threshold, and for each separate environment, monitoring the utilization of the associated at least one dedicated resource set. The physical resources associated with a particular separate environment are automatically changed based on the monitored utilization for the particular separate environment, and in accordance with the user policy. This provides automated resource load balancing in the computing system.08-19-2010
20100095303Balancing A Data Processing Load Among A Plurality Of Compute Nodes In A Parallel Computer - Methods, apparatus, and products are disclosed for balancing a data processing load among a plurality of compute nodes in a parallel computer that include: partitioning application data for processing on the plurality of compute nodes into data chunks; receiving, by each compute node, at least one of the data chunks for processing; estimating, by each compute node, processing time involved in processing the data chunks received by that compute node for processing; and redistributing, by at least one of the compute nodes to at least one of the other compute nodes, a portion of the data chunks received by that compute node in dependence upon the processing time estimated by that compute node.04-15-2010
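Redistribution in the abstract is driven by each node's own processing-time estimate. The sketch below evens out the largest gap by moving work from the slowest node to the fastest; halving the gap is an assumed policy, since the abstract leaves the redistribution rule open:

def redistribute(estimates):
    # estimates: node -> estimated seconds to finish its current data chunks.
    slow = max(estimates, key=estimates.get)
    fast = min(estimates, key=estimates.get)
    transfer = (estimates[slow] - estimates[fast]) / 2  # even the two nodes out
    estimates[slow] -= transfer
    estimates[fast] += transfer
    return slow, fast, transfer

nodes = {"n0": 12.0, "n1": 4.0, "n2": 7.5}
print(redistribute(nodes))  # ('n0', 'n1', 4.0): n0 and n1 both end at 8.0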
20090328055SYSTEMS AND METHODS FOR THREAD ASSIGNMENT AND CORE TURN-OFF FOR INTEGRATED CIRCUIT ENERGY EFFICIENCY AND HIGH-PERFORMANCE - A system and method for improving efficiency of a multi-core architecture includes, in accordance with a workload, determining a number of cores to shut down based upon a metric that combines parameters to represent operational efficiency. Threads of the workload are reassigned to cores remaining active by assigning threads based on priority constraints and thread execution history to improve the operational efficiency of the multi-core architecture.12-31-2009
20090328054ADAPTING MESSAGE DELIVERY ASSIGNMENTS WITH HASHING AND MAPPING TECHNIQUES - A system for efficiently distributing messages to a server farm uses a hashing function and a map-based function, or combinations thereof, to distribute messages associated with a processing request. In one implementation, for example, the hashing function has inputs of an identifier for each message in a processing request, and a list of available servers. Upon identifying that any of the servers is unavailable, or will soon be unavailable, the load balancing server uses an alternate map-based assignment function for new requests, and inputs each assignment into a server map. The load balancing server can then use the map or the hashing function, as appropriate, to direct messages to an operating server. Upon receiving an updated list of available servers, the load balancing server can switch back to the hashing function after the map is depleted, and use the updated server list as an argument.12-31-2009
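A rough shape of the hybrid scheme above in Python: hash-based assignment while the server list is stable, an explicit server map while it is changing, and a return to hashing once the map drains. Class and method names are invented, and the actual hashing and map policies are not specified at this level in the patent:

import hashlib

class HybridBalancer:
    def __init__(self, servers):
        self.servers = servers      # current list of available servers
        self.degraded = False       # set when a server is (about to be) unavailable
        self.server_map = {}        # message id -> assigned server

    def assign(self, msg_id):
        if self.degraded:
            # Map-based assignment: record the choice so routing stays
            # consistent while the server list is in flux.
            counts = {s: 0 for s in self.servers}
            for assigned in self.server_map.values():
                if assigned in counts:
                    counts[assigned] += 1
            server = min(counts, key=counts.get)
            self.server_map[msg_id] = server
            return server
        # Hash-based assignment from the message id and the server list.
        digest = hashlib.sha256(msg_id.encode()).digest()
        return self.servers[digest[0] % len(self.servers)]

    def route(self, msg_id):
        # Prefer the map while entries remain; afterwards fall back to hashing.
        return self.server_map.pop(msg_id, None) or self.assign(msg_id)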
20090276787PERFORMING DYNAMIC SIMULATIONS WITHIN VIRTUALIZED ENVIRONMENT - A method, apparatus, and article of manufacture for simulating workloads experienced by multiple partitions in a virtualized system are provided. A master workload driver initiates, coordinates and regulates one or more workload drivers that execute one or more workload simulation tasks in a logical partition. Further, each workload driver may be configured to report a measure of performance regarding the workload to the master workload driver, where results of many workload drivers may be correlated and analyzed. A configuration file specifies the characteristics of each simulation. Further, the rate and nature of workloads may be adjusted dynamically during a given simulation to model performance under the different real-world computational loads that may be experienced by the virtualized system.11-05-2009
20110067033AUTOMATED VOLTAGE CONTROL FOR SERVER OUTAGES - Information regarding a scheduled outage for a server associated with a cluster of servers is received at a voltage regulation system (VRS) for the cluster of servers. A work load increase is determined for each remaining server within the cluster of servers due to the scheduled outage for the server. A voltage adjustment is calculated for each remaining server based upon the determined work load increase for each remaining server. Voltage for each remaining server is automatically adjusted based upon the calculated voltage adjustment.03-17-2011
20090249352Resource Utilization Monitor - Load-balancing threads among a plurality of processing units. The method may include a first processing unit executing a plurality of software threads using a respective plurality of hardware strands. The plurality of hardware strands may share at least one hardware resource within the first processing unit. The method may further include monitoring the at least one hardware resource for each respective hardware strand. Monitoring may include, for each respective hardware resource of the at least one hardware resource: maintaining information regarding the respective hardware strand requesting to use the respective hardware resource but failing to do so because the respective hardware resource is in use; comparing the information to a threshold; and generating an interrupt if the information exceeds the threshold. One or more load-balancing operations may be performed in response to the interrupt.10-01-2009
20090320039REDUCING INSTABILITY OF A JOB WITHIN A HETEROGENEOUS STREAM PROCESSING APPLICATION - Embodiments of the invention provide a method for reducing instability in a heterogeneous job plan of a stream processing application. In one embodiment, a job manager may be configured to select a job plan with the objective of minimizing the potential instability of the job plan. Each job plan may provide a directed graph connecting processing elements (both native and non-native). That is, each job plan illustrates data flow through the stream application framework. The job plan may be selected from multiple available job plans, or may be generated by replacing processing elements of a given job plan. Further, the job plan may be selected on the basis of other objectives in addition to an objective of minimizing the potential instability of the job plan, such as minimizing cost, minimizing execution time, minimizing resource usage, etc.12-24-2009
20090320038REDUCING INSTABILITY WITHIN A HETEROGENEOUS STREAM PROCESSING APPLICATION - Embodiments of the invention provide a method for reducing instability in a heterogeneous job plan of a stream processing application. In one embodiment, a job manager may be configured to select a job plan with the objective of minimizing the potential instability of the job plan. Each job plan may provide a directed graph connecting processing elements (both native and non-native). That is, each job plan illustrates data flow through the stream application framework. The job plan may be selected from multiple available job plans, or may be generated by replacing processing elements of a given job plan. Further, the job plan may be selected on the basis of other objectives in addition to an objective of minimizing the potential instability of the job plan, such as minimizing cost, minimizing execution time, minimizing resource usage, etc.12-24-2009
20090165013DATA PROCESSING METHOD AND SYSTEM - In response to the activation of the data processing system, a request for processing is accepted in parallel with loading a series of data (a data body) from an external storage into a main memory, independently of whether the processing of individual data is requested or not; if target data of the request for processing is not yet loaded into the main memory, the apparent system starting time is reduced by executing the processing corresponding to the request after the target data is loaded into the main memory.06-25-2009
20090113442METHOD, SYSTEM AND COMPUTER PROGRAM FOR DISTRIBUTING A PLURALITY OF JOBS TO A PLURALITY OF COMPUTERS - Method and system for providing a mechanism for determining an optimal workload distribution, from a plurality of candidate workload distributions, each of which has been determined to optimize a particular aspect of a workload-scheduling problem. More particularly, the preferred embodiment determines a workload distribution based on resource selection policies. From this workload distribution, the preferred embodiment optionally determines a workload distribution based on job priorities. From either or both of the above parameters, the preferred embodiment determines a workload distribution based on a total prioritized weight parameter. The preferred embodiment also determines a workload distribution which attempts to match the previously determined candidate workload distributions to a goal distribution. Similarly, the preferred embodiment calculates a further workload distribution which attempts to maximize job throughput.04-30-2009
20100223622Non-Uniform Memory Access (NUMA) Enhancements for Shared Logical Partitions - In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition.09-02-2010
20090144743Mailbox Configuration Mechanism - An email configuration system may use a topology database to determine if a change request results in a valid configuration. The topology database may contain a definition of an enterprise email system, including forests, servers, and individual mailboxes. If a valid configuration is found, a change request may be scheduled and implemented. The email configuration system may store the change request so that a change may be undone at a later time. Changes may be implemented to the enterprise mail system by changing the topology definition and running an analysis of the current topology and a desired topology.06-04-2009
20110107344MULTI-CORE APPARATUS AND LOAD BALANCING METHOD THEREOF - A multi-core apparatus and method for balancing load in the multi-core apparatus. The multi-core apparatus includes a first core that sends a save request including a context of a task, when a task is switched from an active state to a sleep state, a second core that receives an execution request and executes a task corresponding to the execution request, and a load balancer that receives the save request transmitted by the first core, and sends the execution request to the second core.05-05-2011
20100223621Statistical tracking for global server load balancing - Server load-balancing operation-related data, such as data associated with a system configured for global server load balancing (GSLB) that orders IP addresses into a list based on a set of performance metrics, is tracked. Such operation-related data includes inbound source IP addresses (e.g., the address of the originator of a DNS request), the requested host and zone, identification of the selected “best” IP addresses resulting from application of a GSLB algorithm and the selection metric used to decide on an IP address as the “best” one. Furthermore, the data includes a count of the selected “best” IP addresses selected via application of the GSLB algorithm, and for each of these IP addresses, the list of deciding performance metrics, along with a count of the number of times each of these metrics in the list was used as a deciding factor in selection of this IP address as the best one. This tracking feature allows better understanding of GSLB policy decisions (such as those associated with performance, maintenance, and troubleshooting) and intelligent deployment of large-scale resilient GSLB networks.09-02-2010
20100223623METHODS AND SYSTEMS FOR WORKFLOW MANAGEMENT - Systems and methods are described for workflow management and, in particular, for workflow management with respect to filming. In response to a filming permit request, a workflow computer system examines workloads associated with permit coordinators. Optionally, the examination takes into account coordinator performance in attempting to balance workloads. The permit request is routed to a selected permit coordinator who is tasked with resolving permit issues. In addition, the permit request is routed to approving entities associated with the permit workflow. Optionally, conflicts with other permits are identified. Substantially real-time workflow status updates are provided to the requester and/or coordinator. The workflow computer system automatically identifies to the coordinator deficiencies associated with the permit that are to be resolved.09-02-2010
20080271039SYSTEMS AND METHODS FOR PROVIDING CAPACITY MANAGEMENT OF RESOURCE POOLS FOR SERVICING WORKLOADS - A method comprises receiving, by a capacity management tool, a capacity management operation request that specifies a resource pool-level operation desired for managing capacity of a resource pool that services workloads. The capacity management tool determines, in response to the received request, one or more actions to perform in the resource pool for performing the requested capacity management operation in compliance with defined operational parameters of the workloads. The method further comprises performing the determined one or more actions for performing the requested capacity management operation.10-30-2008
20110099553SYSTEMS AND METHODS FOR AFFINITY DRIVEN DISTRIBUTED SCHEDULING OF PARALLEL COMPUTATIONS - Embodiments of the invention provide efficient scheduling of parallel computations for higher productivity and performance. Embodiments of the invention provide various methods effective for affinity driven and distributed scheduling of multi-place parallel computations with physical deadlock freedom.04-28-2011
20110119678ISOLATING WORKLOAD PARTITION SPACE - A method, system, and computer usable program product for isolating a workload partition space are provided in the illustrative embodiments. A boot process of a workload partition in a data processing system is started using a scratch file system, the scratch file system being in a global space. A portion of a storage device containing a file system for the workload partition is exported to the workload partition, the portion forming an exported disk. The partially booted up workload partition may discover the exported disk. The exporting causes an association between the global space and the exported disk to either not form, or sever. The exporting places the exported disk in a workload partition space associated with the workload partition. The boot process is transitioned to stop using the scratch file system and start using the data in the exported disk for continuing the boot process.05-19-2011
20090037924PERFORMANCE OF A STORAGE SYSTEM - A method for operating a storage system, including storing data redundantly in the system and measuring respective queue lengths of input/output requests to operational elements of the system. The queue lengths are compared to an average queue length to determine respective performances of the operational elements of the storage system. In response to the average queue length and a permitted deviation from the average, an under-performing operational element among the operational elements is identified. An indication of the under-performing operational element is provided to host interfaces in the storage system. One of the host interfaces receives requests for specified items of the data directed to the under-performing element, and in response to the indication, some of the requests are diverted from the under-performing operational element to one or more other operational elements of the storage system that are configured to provide the specified items of the data.02-05-2009
20130132971SYSTEM, METHOD AND PROGRAM PRODUCT FOR STREAMLINED VIRTUAL MACHINE DESKTOP DISPLAY - A shared resource system, method of updating client displays and computer program products therefor. At least one client device locally displays activity with resources shared with the client device. A management system on provider computers that is providing resources shared by the client devices selectively generates prioritized display updates. The management system provides updates to respective client devices according to update priority. Updates may also be ordered for network load balancing.05-23-2013
20130132972THERMALLY DRIVEN WORKLOAD SCHEDULING IN A HETEROGENEOUS MULTI-PROCESSOR SYSTEM ON A CHIP - Various embodiments of methods and systems for thermally aware scheduling of workloads in a portable computing device that contains a heterogeneous, multi-processor system on a chip (“SoC”) are disclosed. Because individual processing components in a heterogeneous, multi-processor SoC may exhibit different processing efficiencies at a given temperature, and because more than one of the processing components may be capable of processing a given block of code, thermally aware workload scheduling techniques that compare performance curves of the individual processing components at their measured operating temperatures can be leveraged to optimize quality of service (“QoS”) by allocating workloads in real time, or near real time, to the processing components best positioned to efficiently process the block of code.05-23-2013
20130132973SYSTEM AND METHOD OF DYNAMICALLY CONTROLLING A PROCESSOR - A method of executing a dynamic clock and voltage scaling (DCVS) algorithm in a central processing unit (CPU) is disclosed and may include monitoring CPU activity and determining whether a workload is designated as a special workload when the workload is added to the CPU activity.05-23-2013
20100306781DETERMINING AN IMBALANCE AMONG COMPUTER-COMPONENT USAGE - The present invention is directed to determining an imbalance among computer-component usage. Based on a performance value (e.g. utilization value, response time, queuing delay, Input/Output operations, bytes transferred, work threads used, connections made, etc) that describes a respective computer component among a set of computer components, and an average performance value of the set, a component value of each computer component in the set can be determined. Each component value quantifies a contribution of the usage of a respective computer component toward an imbalanced assignment of computer operations. Component values are information rich and comparisons of component values suggest levels of over-utilization and under-utilization of the computer components. Based on the component values of a set of computer components, decisions can be made as to what portion of computer operations should be reassigned to enable computer operations to be executed in a more balanced manner by the set of computer components.12-02-2010
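Component values as described above can be computed directly from the performance values and their set average; signed deviation from the average is one plausible formulation consistent with the abstract, though the exact formula is not stated there:

def component_values(perf):
    # perf: component -> performance value (e.g. utilization).
    avg = sum(perf.values()) / len(perf)
    # Positive values suggest over-utilization, negative under-utilization.
    return {c: v - avg for c, v in perf.items()}

vals = component_values({"hba0": 0.92, "hba1": 0.41, "hba2": 0.55})
print(vals)  # hba0 sits ~0.29 above the average: a candidate to shed
             # operations toward hba1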
20130139176SCHEDULING FOR REAL-TIME AND QUALITY OF SERVICE SUPPORT ON MULTICORE SYSTEMS - In a first embodiment of the present invention, a method of assigning tasks in a multicore electronic device is provided, the method comprising: receiving a set of tasks; ordering the tasks in non-increasing order of a utilization value of each task; partitioning the ordered tasks using a schedulability-centric algorithm; repartitioning the partitioned ordered tasks by reordering the partitioned ordered tasks in non-decreasing order of the utilization value of each task and partitioning the partitioned reordered tasks using a load-balancing-centric algorithm; and assigning the repartitioned tasks to one or more cores of the multicore electronic device based on results of the repartitioning.05-30-2013
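One concrete reading of the two passes above: a first-fit pass over tasks in non-increasing utilization order (schedulability-centric), then a worst-fit pass in non-decreasing order (load-balancing-centric). The specific bin-packing rules and the unit-capacity assumption are illustrative choices, not taken from the claim language:

def partition(utils, cores, decreasing, worst_fit):
    bins = [[] for _ in range(cores)]
    loads = [0.0] * cores
    for u in sorted(utils, reverse=decreasing):
        if worst_fit:
            i = loads.index(min(loads))  # load-balancing-centric: emptiest core
        else:
            # Schedulability-centric: first core with capacity (assumed 1.0).
            i = next(k for k, l in enumerate(loads) if l + u <= 1.0)
        bins[i].append(u)
        loads[i] += u
    return bins

tasks = [0.6, 0.5, 0.4, 0.3, 0.2, 0.2]
first = partition(tasks, 3, decreasing=True, worst_fit=False)   # first pass
final = partition([u for b in first for u in b], 3,
                  decreasing=False, worst_fit=True)             # repartition
print(final)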
20100333105PRECOMPUTATION FOR DATA CENTER LOAD BALANCING - Pre-computing a portion of forecasted workloads may enable load-balancing of data center workload, which may ultimately reduce capital and operational costs associated with data centers. Computing tasks performed by the data centers may be analyzed to identify computing tasks that are eligible for pre-computing, and may be performed prior to an actual data request from a user or entity. In some aspects, the pre-computing tasks may be performed during a low-volume workload period prior to a high-volume workload period to reduce peaks that typically occur in data center workloads that do not utilize pre-computation. Statistical modeling methods can be used to make predictions about the tasks that can be expected to maximally contribute to bottlenecks at data centers and to guide the speculative computing.12-30-2010
20100333104Service-Based Endpoint Discovery for Client-Side Load Balancing - A server farm includes a plurality of server devices. The plurality of server devices includes a plurality of topology service endpoints and a plurality of target service endpoints. A client computing system sends a topology service request to one of the topology service endpoints. In response, the topology service endpoint sends target service endpoint Uniform Resource Identifiers (URIs) to the client computing system. When a client application at the client computing system needs to send a target service request to one of the target service endpoints, the client computing system applies a load balancing algorithm to select one of the target service endpoint URIs. The client computing system then sends a target service request to the target service endpoint identified by the selected one of the target service endpoint URIs. In this way, the client computing system may use a load balancing algorithm appropriate for the client application.12-30-2010
20110029983SYSTEMS AND METHODS FOR DATA AWARE WORKFLOW CHANGE MANAGEMENT - A method includes providing a baseline workflow as an electronic representation of an actual workflow, the baseline workflow including baseline tasks, data items, and baseline data scopes, and providing a fragment workflow as an electronic representation of an actual fragment workflow, the fragment workflow including at least one fragment task, and at least one fragment data scope. A baseline data scope is identified as an affected data scope based on a structural change operation, the baseline workflow and the fragment workflow, and the affected data scope is compared to the at least one fragment data scope to identify at least one change operation. The fragment and baseline workflows are integrated based on the structural change operation to provide an integrated workflow, and the at least one data scope change operation is executed to provide at least one integrated data scope in the integrated workflow.02-03-2011
20110029982NETWORK BALANCING PROCEDURE THAT INCLUDES REDISTRIBUTING FLOWS ON ARCS INCIDENT ON A BATCH OF VERTICES - A representation of a flow network having vertices connected by arcs is provided. The vertices include a first set of vertices that provide flow to a second set of vertices over arcs connecting the first set and second set of vertices. A balancing procedure in the network is performed that includes redistributing flows on arcs incident on the second set of vertices. The balancing procedure includes selecting a batch of the vertices in the second set, and redistributing flows on arcs incident on the selected batch of vertices. The selecting and redistributing are repeated for other batches of vertices in the second set.02-03-2011
20130191844MANAGEMENT OF THREADS WITHIN A COMPUTING ENVIRONMENT - Threads of a computing environment are managed to improve system performance. Threads are migrated between processors to take advantage of single thread processing mode, when possible. As an example, inactive threads are migrated from one or more processors, potentially freeing up one or more processors to execute an active thread. Active threads are migrated from one processor to another to transform multiple threading mode processors to single thread mode processors.07-25-2013
20110041136METHOD AND SYSTEM FOR DISTRIBUTED COMPUTATION - A system for processing a computational task is presented. The system includes a plurality of nodes operationally coupled to one another via one or more networks. The plurality of nodes includes a base node including a processing subsystem configured to receive the computational task, select a subset of available nodes from the plurality of nodes based upon a present status, processing capability, distance, network throughput, range, resources, features, or combinations thereof of the plurality of nodes, divide the computational task into a plurality of sub-tasks, distribute the plurality of sub-tasks among the subset of available nodes based upon a number of nodes in the subset of available nodes, completion time period allowed for the plurality of sub-tasks, a distribution criteria, level of security required for the completion of the plurality of sub-tasks, resources available with the subset of available nodes, processing capability of the subset of available nodes, range of the subset of available nodes, features in the subset of available nodes, reliability of the subset of available nodes, trust in the subset of available nodes, the current load on the subset of available nodes, domain of the plurality of sub-tasks, or combinations thereof, receive sub-solutions corresponding to the plurality of sub-tasks from the subset of available nodes in a desired time period, and reassemble the sub-solutions to determine a solution corresponding to the computational task.02-17-2011
20090064166System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A system and method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors.03-05-2009
20110055845Technique for balancing loads in server clusters - In a network arrangement where a client requests a service from a server system, e.g., through the Internet, a multiple-load balancer is used for balancing loads in two or more server clusters in the server system to completely identify a sequence of servers for processing the service request. Each server in the resulting sequence belongs to a different server cluster. The service request is sent to the first server in the sequence, along with information for routing the request through the sequence of servers.03-03-2011
20110119679METHOD AND SYSTEM OF AN I/O STACK FOR CONTROLLING FLOWS OF WORKLOAD SPECIFIC I/O REQUESTS - A method and system of a host device hosting multiple workloads for controlling flows of I/O requests directed to a storage device is disclosed. In one embodiment, a type of a response from the storage device reacting to an I/O request issued by an I/O stack layer of the host device is determined. Then, a workload associated with the I/O request is identified among the multiple workloads based on the response to the I/O request. Further, a maximum queue depth assigned to the workload is adjusted based on the type of the response, where the maximum queue depth is a maximum number of I/O requests from the workload which are concurrently issuable by the I/O stack layer.05-19-2011
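The abstract above does not state how the maximum queue depth is adjusted, only that the adjustment depends on the type of the storage device's response. An additive-increase/multiplicative-decrease rule is one common choice for this kind of flow control and is assumed in this hypothetical sketch; the response-type strings are also assumptions:

```python
def adjust_max_queue_depth(current_depth, response, floor=1, ceiling=256):
    """Adjust a workload's maximum queue depth based on the type of the
    storage device's response: back off sharply on congestion, probe
    upward gently on success (an illustrative AIMD policy)."""
    if response == "QUEUE_FULL":          # device signalled congestion
        return max(floor, current_depth // 2)
    if response == "OK":                  # request completed normally
        return min(ceiling, current_depth + 1)
    return current_depth                  # other responses: no change

depth = 32
depth = adjust_max_queue_depth(depth, "QUEUE_FULL")  # -> 16
depth = adjust_max_queue_depth(depth, "OK")          # -> 17
```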
20090320041COMPUTER PROGRAM AND METHOD FOR BALANCING PROCESSING LOAD IN STORAGE SYSTEM, AND APPARATUS FOR MANAGING STORAGE DEVICES - In a distributed storage system, client terminals make access to virtual storage areas provided as logical segments of a storage volume. Those logical segments are associated with physical segments that serve as real data storage areas. A management data storage unit stores management data describing the association between such logical segments and physical segments. Upon receipt of access requests directed to a specific access range, a segment identification unit consults the management data to identify logical segments in the specified access range and their associated physical segments. A remapping unit subdivides the identified logical segments and physical segments into logical sub-segments and physical sub-segments, respectively, and remaps the logical sub-segments to the physical sub-segments according to a predetermined remapping algorithm. A data access unit executes the access requests based on the remapped logical sub-segments and physical sub-segments.12-24-2009
20100031266SYSTEM AND METHOD FOR DETERMINING A NUMBER OF THREADS TO MAXIMIZE UTILIZATION OF A SYSTEM - A system and associated method for determining a number of threads to maximize system utilization. The method begins with determining a first value which corresponds to the current system utilization. Next the method determines a second value which corresponds to the current number of threads in the system. Next the method determines a third value which corresponds to the number of processor cores in the system. Next the method receives a fourth value from an end user which corresponds to the optimal system utilization the end user wishes to achieve. Next the method determines a fifth value which corresponds to the number of threads necessary to achieve the optimal system utilization value received from the end user. Finally, the method sends the fifth value to all running applications.02-04-2010
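Assuming utilization grows roughly linearly with the number of threads until the cores saturate (a simplification the abstract does not spell out), the fifth value can be estimated by proportional scaling, as in this sketch:

```python
def threads_for_target_utilization(current_util, current_threads,
                                   cores, target_util):
    """Estimate the thread count needed to reach a target utilization,
    assuming utilization scales roughly linearly with thread count
    (a simplifying assumption)."""
    if current_util <= 0 or current_threads <= 0:
        return cores  # no usable signal yet: start with one thread per core
    return max(1, round(current_threads * target_util / current_util))

# 4 threads driving 40% utilization, user wants 80% -> about 8 threads.
print(threads_for_target_utilization(0.40, 4, 8, 0.80))
```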
20090313635SYSTEM AND/OR METHOD FOR BALANCING ALLOCATION OF DATA AMONG REDUCE PROCESSES BY REALLOCATION - The subject matter disclosed herein relates to a system and/or method for allocating data among reduce processes.12-17-2009
20110088041Hardware support for thread scheduling on multi-core processors - A method, device, and system are disclosed. In one embodiment the method includes scheduling a thread to run on a first core of a multi-core processor. The determination as to which core the thread is scheduled on uses one or more processes. These processes may include ranking all of the cores specific to a workload of the thread, establishing a current utilization of each core of the multi-core processor, and calculating an inter-core migration cost for the thread.04-14-2011
20100131961PACKAGE REVIEW PROCESS WORKFLOW - A workflow module automates and monitors a package review process. A package review module receives a package created by a contributor to be reviewed for compliance with a set of guidelines. The workflow module initiates, monitors, and manages a plurality of package review tasks to be performed on the package. A user interface module provides a user interface for creating a package and a user interface for reviewing a package. The workflow module automates review tasks, interfaces with external servers performing review tasks, gathers review task results, determines whether to send a notification regarding the status of a review task, sends notifications regarding the status of a review task, and stores successfully reviewed packages in a repository.05-27-2010
20100131959Proactive application workload management - A method is provided for continuous optimization of allocation of computing resources for a horizontally scalable application which has a cyclical load pattern wherein each cycle may be subdivided into a number of time slots. A computing resource allocation application pre-allocates computing resources at the beginning of a time slot based on a predicted computing resource consumption during that slot. During the servicing of the workload, a measuring application measures actual consumption of computing resources. On completion of servicing, the measuring application updates the predicted computing resource consumption profile, allowing optimal allocation of resources. Unneeded computing resources may be released, or may be marked as releasable, for use upon request by other applications, including applications having the same or lower priority than the original application. Methods, computer systems, and computer programs according to the invention, available as a download or on a computer-readable medium for installation, are provided.05-27-2010
20090106767WORKLOAD PERIODICITY ANALYZER FOR AUTONOMIC DATABASE COMPONENTS - A computer data processing system and an article of manufacture for determining database workload periodicity. The computer data processing system includes a module for converting database activity samples spanning a time period from the time domain to the frequency domain, the converting resulting in a frequency spectrum, a module for identifying fundamental peaks of the frequency spectrum, and a module for allocating database resources based on at least one of the fundamental peaks.04-23-2009
20090320040Preserving hardware thread cache affinity via procrastination - A method, device, system, and computer readable medium are disclosed. In one embodiment the method includes managing one or more threads attempting to steal task work from one or more other threads. The method will block a thread from stealing a mailed task that is also residing in another thread's task pool. The blocking occurs when the mailed task was mailed to an idle third thread. Additionally, some tasks are deferred instead of immediately spawned.12-24-2009
20120192201Dynamic Work Partitioning on Heterogeneous Processing Devices - A method, system and article of manufacture for balancing a workload on heterogeneous processing devices. The method comprises accessing a memory storage of a processor of one type by a dequeuing entity associated with a processor of a different type, identifying a task from a plurality of tasks within the memory that can be processed by the processor of the different type, synchronizing a plurality of dequeuing entities capable of accessing the memory storage, and dequeuing the task from the memory storage.07-26-2012
20090313634Dynamically selecting an optimal path to a remote node - In a multi-cell system, a dynamic adjustment of a workload of a data path between multiple cells of the system may be preferred to eliminate system latencies during operation of the system. The dynamic adjustment may include monitoring a workload, or an amount of data traffic, of a data path and determining if the monitored workload of the data path exceeds a predetermined workload threshold. If the workload threshold is exceeded, the dynamic adjustment of the workload of the data path may include transferring a portion of data from the monitored data path to another data path that is also connected to the same cells as the monitored data path. The transfer of data may be to a previously-existing data path that has capacity for the data, to a newly-created data path, or to both a previously-existing data path and a new data path.12-17-2009
20090313636Executing An Application On A Parallel Computer - Methods, apparatus, and products are disclosed for executing an application on a parallel computer that include: executing, by a current compute node, a current task of the application, including producing results; determining, by the current compute node in dependence upon current network characteristics and application characteristics, whether to transfer the results to a next compute node for further processing by a next task on the next compute node or to execute the next task for further processing of the results on the current compute node; transferring, by the current compute node, the results to the next compute node for further processing by the next task on the next compute node if the determination specifies transferring the results to the next node; and executing, by the current compute node, the next task for further processing of the results if the determination specifies executing the next task on the current compute node.12-17-2009
20090217288Routing Workloads Based on Relative Queue Lengths of Dispatchers - Mechanisms for distributing workload items to a plurality of dispatchers are provided. Each dispatcher is associated with a different computing system of a plurality of computing systems, and workload items comprise workload items of a plurality of different workload types. A capacity value for each combination of workload type and computing system is obtained. For each combination of workload type and computing system, a queue length of a dispatcher associated with the corresponding computing system is obtained. For each combination of workload type and computing system, a dispatcher's relative share of incoming workloads is computed based on the queue length for the dispatcher associated with the computing system. In addition, incoming workload items are routed to a dispatcher, in the plurality of dispatchers, based on that dispatcher's calculated relative share.08-27-2009
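One plausible reading of the queue-length-based share computation uses inverse queue length as the weighting; the abstract does not give the exact formula, so this sketch is an assumption throughout:

```python
def relative_shares(queue_lengths):
    """Relative share of incoming work per dispatcher for one workload
    type: shorter queues receive larger shares (inverse-length weighting
    is an illustrative choice; the abstract does not fix the formula)."""
    weights = {d: 1.0 / (qlen + 1) for d, qlen in queue_lengths.items()}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

# Queue lengths of the dispatchers for one workload type.
print(relative_shares({"sysA": 2, "sysB": 8, "sysC": 0}))
```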
20090217287FEDERATION OF COMPOSITE APPLICATIONS - A predetermined business task of a composite application can be fulfilled. The composite application can include a set of components. The composite application is instantiated by a template means and a predefined collaborative context module controls the interaction of the set of components during the runtime of the composite application. A set of components fulfilling individual services on individual different server systems is leveraged by the composite application. During the instantiation of the composite application from a template, the referenced components (as types) are instantiated leading to runtime instances of these components. The interaction of the different components is controlled on individual different server systems utilizing a primary context module. The primary context module communicates with an appropriate collaborative module implemented locally on the respective set of servers, where the local context modules act as secondary context modules in relation to the primary context modules. For each of the secondary context modules, local components communicate to control the interaction of components.08-27-2009
20090217286Adjunct Processor Load Balancing - Managing the workload across one or more partitions of a plurality of partitions of a computing environment. One or more processors are identified in a partition to be managed by a quality weight defined according to characteristics of each corresponding processor. A load of each identified processor is measured depending on the requests already allocated to be processed by each corresponding processor. Each identified processor has a performance factor determined based on the measured load and the quality weight; the performance factor is a measurement of processor load. When a new request to be allocated to the partition is identified, the processor with the lowest performance factor is selected from the partition. The new request is allocated to the selected processor.08-27-2009
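A minimal sketch of the selection step, assuming the performance factor is the measured load divided by the quality weight; the abstract says only that the factor is based on both quantities, so this formula and the field names are assumptions:

```python
def select_processor(processors):
    """Pick the processor with the lowest performance factor; here the
    factor is measured load divided by quality weight, so a faster
    (higher-weight) processor tolerates more load (assumed formula)."""
    return min(processors, key=lambda p: p["load"] / p["quality_weight"])

procs = [
    {"name": "cp0", "load": 12, "quality_weight": 1.0},   # factor 12.0
    {"name": "cp1", "load": 18, "quality_weight": 2.0},   # factor 9.0
]
print(select_processor(procs)["name"])  # cp1 receives the new request
```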
20100058352System and Method for Dynamic Resource Provisioning for Job Placement - A method for dynamic resource provisioning for job placement includes receiving a request to perform a job on an unspecified computer device. One or more job criteria for performing the job are determined. Each job criteria defines a required operational characteristic needed for a computer device to perform the job. A list of available computer devices is provided. The list includes a plurality of computer devices currently provisioned to perform computer operations. A list of suitable computer devices for performing the job is determined from the list of available computer devices by comparing operational characteristics for each available computer device with the job criteria. The list of suitable computer devices includes one or more computer devices having operational characteristics that satisfy the job criteria. From the list of suitable computer devices, a least active computer device is determined, and the job is forwarded to the least active computer device.03-04-2010
20110154356METHODS AND APPARATUS TO BENCHMARK SOFTWARE AND HARDWARE - Example methods, apparatus and articles of manufacture to benchmark hardware and software are disclosed. A disclosed example method includes initiating a first thread to execute a set of instructions on a processor, initiating a second thread to execute the set of instructions on the processor, determining a first duration for the execution of the first thread, determining a second duration for the execution of the second thread, and determining a thread fairness value for the computer system based on the first duration and the second duration.06-23-2011
20110154358METHOD AND SYSTEM TO AUTOMATICALLY OPTIMIZE EXECUTION OF JOBS WHEN DISPATCHING THEM OVER A NETWORK OF COMPUTERS - A computer implemented method, system, and/or computer program product selects a target computer to execute a job. For each computer in a system, a statistical mean of last job duration values is computed from historical records for all computers that have executed the job. Multiple pools of computers are selected based on a statistical mean of last job duration values. A ratio for each pool from the multiple pools is computed. This ratio is a ratio of the quantity of current executions of the job in a particular pool compared to a total of current job executions of the job in all of the multiple pools of computers. A particular pool of computers, which has a computed ratio that is closest to a preselected ratio, is selected. A target computer is selected from the particular pool of computers to execute a next iteration of the job.06-23-2011
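The pool-selection step described above reduces to a nearest-ratio search. The data structure and names in this sketch are hypothetical; only the "ratio closest to a preselected ratio" rule comes from the abstract:

```python
def pick_pool(current_runs, preselected_ratio):
    """Select the pool whose share of current executions of the job is
    closest to a preselected ratio. `current_runs` maps pool id to the
    number of current executions of the job in that pool."""
    total = sum(current_runs.values()) or 1
    return min(current_runs,
               key=lambda p: abs(current_runs[p] / total - preselected_ratio))

# Pools grouped beforehand by statistical mean of last job durations.
print(pick_pool({"fast": 6, "medium": 3, "slow": 1}, preselected_ratio=0.30))
# -> "medium" (3/10 = 0.30 exactly); a target computer is then chosen in it.
```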
20120304192LIFELINE-BASED GLOBAL LOAD BALANCING - Work-stealing is efficiently extended to distributed memory using low degree, low-diameter, fully-connected directed lifeline graphs. These lifeline graphs include k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, that node quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. Each quiesced node reactivates when work arrives from a lifeline, itself sharing this work with its incoming lifelines that are activated. Termination occurs when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination is detected automatically using the finish construct.11-29-2012
20110083135VIRTUAL COMPUTER SYSTEMS AND COMPUTER VIRTUALIZATION PROGRAMS - Disclosed are a virtual computer system and method, wherein computer resources are automatically and optimally allocated to logical partitions according to loads to be accomplished by operating systems in the logical partitions and setting information based on a knowledge of workloads that run on the operating systems. Load measuring modules are installed on the operating systems in order to measure the loads to be accomplished by the operating systems. A manager designates the knowledge concerning the workloads on the operating systems through a user interface. An adaptive control module determines the allocation ratios of the computer resources relative to the logical partitions according to the loads and the settings, and issues an allocation varying instruction to a hypervisor, thereby instructing it to vary the allocations.04-07-2011
20110078700TASK DISPATCHING IN MULTIPLE PROCESSOR SYSTEMS - A method and system is disclosed for dispatching tasks to multiple processors that all share a shared memory. A composite queue size for multiple work queues each having an associated processor is determined. A queue availability flag is stored in shared memory for each processor work queue and is set based upon the composite queue size and the size of the work queue for that processor. Each queue availability flag indicates availability or unavailability of the work queue to accept new tasks. A task is placed in a selected work queue based on that work queue having an associated queue availability flag indicating availability to accept new tasks. The data associated with task dispatching is maintained so as to increase the likelihood that valid copies of the data remain present in each processor's local cache without requiring updating due to their being changed by other processors.03-31-2011
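A sketch of the flag-refresh logic, assuming a queue is marked available while its size stays at or below its share of the composite queue size; the abstract says only that the flag is set based on both sizes, so the precise rule here is an assumption:

```python
def refresh_availability_flags(queue_sizes, threshold_fraction=1.0):
    """Recompute per-queue availability flags from the composite queue
    size: a queue stays available while its size is at or below its
    share of the composite (one plausible rule)."""
    composite = sum(queue_sizes)
    budget = threshold_fraction * composite / len(queue_sizes)
    return [size <= budget for size in queue_sizes]

flags = refresh_availability_flags([3, 9, 4, 4])  # composite 20, budget 5
print(flags)  # [True, False, True, True]
# A new task is then placed on any queue whose flag indicates availability.
```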
20110072440PARALLEL PROCESSING SYSTEM AND METHOD - A parallel processing system determines whether to drive all or only some of its processors to process input data, based on the capacity or time required for processing the input data. The system also temporarily stores the data processed and output by the respective processors, and controls the stored data to be output at a calculated output time that is based on the traffic processing time for the input data.03-24-2011
20110016473MANAGING SERVICES FOR WORKLOADS IN VIRTUAL COMPUTING ENVIRONMENTS - Methods and apparatus involve managing computing services for workloads. A storage of services available to the workloads is maintained as virgin or golden computing images. By way of a predetermined policy, it is identified which of those services are necessary to support the workloads during use. Thereafter, the identified services are packaged together for deployment as virtual machines on a hardware platform to service the workloads. In certain embodiments, services include considerations for workload and service security, quality of service, deployment sequence, storage management, and hardware requirements necessary to support virtualization, to name a few. Meta data in open virtual machine formats (OVF) are also useful in defining these services. Computer program products and computing arrangements are also disclosed.01-20-2011
20110258634Method for Monitoring Operating Experiences of Images to Improve Workload Optimization in Cloud Computing Environments - An embodiment of the invention includes a method for workload optimization in a network (e.g., cloud computing environment). Usage of resources in the network is monitored in order to maintain a metadata catalog of operating experiences of the resources. A request for a resource in the network is received; and, resources that are available in the network are identified. Units that are included in the resources are also identified. The metadata catalog is queried for operating experiences associated with the requested resource. The requested resource is provisioned by the host system based on the operating experiences associated with the resource. This includes assembling the units that are included in the requested resource and/or automatically allocating workloads of the computing modules based on the cataloging of the workloads in the metadata catalog. The metadata catalog is updated with an operating experience associated with the provisioning of the requested resource.10-20-2011
20100070978VDI Storage Overcommit And Rebalancing - A method for managing storage for a desktop pool is described. The desktop pool includes a plurality of virtual machines (VMs), each VM having at least one virtual disk represented as a virtual disk image file on one of a plurality of datastores associated with the desktop pool. To identify a target datastore for a VM, a weight of each datastore is calculated. The weight may be a function of a virtual capacity of the datastore and the sum of maximum sizes of all the virtual disk image files on the datastore. The virtual capacity is a product of the data storage capacity of the datastore and an overcommit factor assigned to the datastore. The target datastore is selected as the datastore having the highest weight. The VM may then be moved to or created on the target datastore.03-18-2010
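If the weight is taken to be the virtual capacity minus the summed maximum image sizes (the abstract says only that it may be a function of the two, so the difference is an assumed choice), the target-selection step reduces to a few lines:

```python
def datastore_weight(capacity_gb, overcommit, max_image_sizes_gb):
    """Weight of a datastore: virtual capacity (physical capacity times
    its overcommit factor) minus the summed maximum sizes of the virtual
    disk image files already on it (assumed formula)."""
    return capacity_gb * overcommit - sum(max_image_sizes_gb)

weights = {
    "ds1": datastore_weight(1000, 2.0, [400, 300, 500]),  # 2000 - 1200 = 800
    "ds2": datastore_weight(500, 3.0, [200, 100]),        # 1500 - 300 = 1200
}
print(max(weights, key=weights.get))  # ds2 is the target datastore
```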
20090241124ONLINE MULTIPROCESSOR SYSTEM RELIABILITY DEFECT TESTING - A multiprocessor system comprising a plurality of processors is disclosed. The plurality of processors includes a first processor including a first monitor on-chip and a second processor including a second monitor on-chip. The first monitor on-chip is configured to measure load on the second processor and the second monitor on-chip is configured to measure load on the first processor. The first monitor on-chip is configured to cause the second monitor on-chip to perform a self-test on the second processor if the load on the second processor is below a second processor load threshold value, and the second monitor on-chip is configured to cause the first monitor on-chip to perform a self-test on the first processor if the load on the first processor is below a first processor load threshold value.09-24-2009
20080320487SCHEDULING TASKS ACROSS MULTIPLE PROCESSOR UNITS OF DIFFERING CAPACITY - A mechanism is provided for scheduling tasks across multiple processor units of differing capacity. In a multiple processor unit system with processor units of disparate speeds, it is advantageous to have the most processing-intensive tasks run on the processor units with the highest capacity. All tasks are initially scheduled on the lowest capacity processor units. Because processor units with higher capacity are more likely to have idle time, these higher capacity processor units may pull one or more tasks onto themselves from the same or lower capacity processor units. A processor unit will attempt to pull tasks that utilize a larger percentage of the timeslice. When a higher capacity processor unit is overloaded or near capacity, the higher capacity processor unit may push tasks to processor units with the same or lower capacity. A processor unit will attempt to push tasks that utilize a smaller percentage of the timeslice.12-25-2008
20080320488CONTROL DEVICE AND CONTROL METHOD FOR REDUCED POWER CONSUMPTION IN NETWORK DEVICE - This invention provides a data transfer control device for carrying out data transfer using a plurality of transfer resources. The data transfer control device comprises a transfer resource management portion that sets each of the plurality of transfer resources to either a transfer-enabled state, whereby data transfer is enabled, or one of a plurality of standby states, on the basis of a load on the data transfer control device, and that manages the plurality of transfer resources so as to assume the set operating status; and a load distribution portion that distributes the data to transfer resources that have been set to the transfer-enabled state. The plurality of standby states are states in which data transfer is disabled and which mutually differ at a minimum in terms of at least one of power consumption level and transition time to the transfer-enabled state.12-25-2008
20120204188PROCESSOR THREAD LOAD BALANCING MANAGER - A processor thread load balancing manager employs an operating system of an information handling system (IHS) that determines a process tree of data sharing threads in an application that the IHS executes. The load balancing manager assigns a home processor to each thread of the executing application process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application within the home processor of a processor group. If ready or run queues of a prospective poaching processor are empty, the load balancing manager may move or poach a thread or threads from the home processor ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the information handling system (IHS).08-09-2012
20120204187Hybrid Cloud Workload Management - A method, apparatus, and computer program product for managing a workload in a hybrid cloud. It is determined whether first data processing resources processing a portion of a workload are overloaded. Responsive to a determination that the first data processing resources are overloaded, second data processing resources are automatically provisioned and the portion of the workload is automatically moved to the second data processing resources for processing. The second data processing resources are data processing resources that are provided as a service on the hybrid cloud. Processing of a first portion of a workload being processed on first data processing resources of a hybrid cloud is monitored simultaneously with monitoring processing of a second portion of the workload being processed on second data processing resources of the hybrid cloud. The workload may be allocated automatically between the first portion and the second portion responsive to the simultaneous monitoring.08-09-2012
20110161979MIXED OPERATING PERFORMANCE MODE LPAR CONFIGURATION - Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.06-30-2011
20110161980Load Balancing Web Service by Rejecting Connections - A load balancer allocates requests to a pool of web servers configured to have low queue capacities. If the queue capacity of a web server is reached, the web server responds to an additional request with a rejection notification to the load balancer, which enables the load balancer to quickly send the rejected request to another web server. Each web server self-monitors its rejection rate. If the rejection rate exceeds a threshold, the number of processes concurrently running on the web server is increased. If the rejection rate falls below a threshold, the number of processes concurrently running on the web server is decreased.06-30-2011
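The self-monitoring loop on each web server can be sketched as a simple threshold rule; the thresholds, step size, and bounds below are illustrative choices, not values from the patent:

```python
def adjust_process_count(rejections, requests, nprocs,
                         high=0.05, low=0.01, nmin=1, nmax=64):
    """Grow the number of concurrently running worker processes when the
    rejection rate is high, shrink it when the rate is low (illustrative
    thresholds and step size)."""
    rate = rejections / requests if requests else 0.0
    if rate > high:
        return min(nmax, nprocs + 1)
    if rate < low:
        return max(nmin, nprocs - 1)
    return nprocs

print(adjust_process_count(rejections=30, requests=400, nprocs=8))  # -> 9
```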
20080301697MULTIPLE TASK MANAGEMENT BETWEEN PROCESSORS - A system for multiple task management between processors includes a first processing device for executing tasks. A respective storage element is provided for storing one or more commands from each of the tasks. A command dispatcher is provided for selectively transferring a command from one of the storage elements to a command queue provided within a second processing device.12-04-2008
20080301696Controlling workload of a computer system through only external monitoring - Control of the workload, flow control, and concurrency control of a computer system is provided through the use of only external performance monitors. Data collected by external performance monitors are used to build a simple, black box model of the computer system, comprising two resources: a virtual bottleneck resource and a delay resource representing all non-bottleneck resources combined. The service times of the two resource types are two parameters of the black box model. The two parameters are evaluated based on historical data collected by the external performance monitors. The workload capacity that avoids saturation of the bottleneck resource is then determined and used as a control variable by a flow controller to limit the workload on the computer system. The workload may include a mix of traffic classes. In such a case, data is collected, parameters are evaluated and control variables are determined for each of the traffic classes.12-04-2008
20080301695Managing a Plurality of Processors as Devices - A computer system's multiple processors are managed as devices. The operating system accesses the multiple processors using processor device modules loaded into the operating system to facilitate a communication between an application requesting access to a processor and the processor. A device-like access is determined for accessing each one of the processors similar to device-like access for other devices in the system such as disk drives, printers, etc. An application seeking access to a processor issues device-oriented instructions for processing data, and in addition, the application provides the processor with the data to be processed. The processor processes the data according to the instructions provided by the application.12-04-2008
20120311603STORAGE APPARATUS AND STORAGE APPARATUS MANAGEMENT METHOD - The overall processing performance of a storage apparatus is improved by migrating MPPK ownership with suitable timing.12-06-2012
20120311602STORAGE APPARATUS AND STORAGE APPARATUS MANAGEMENT METHOD - The overall processing function of a storage apparatus is improved by suitably migrating ownership.12-06-2012
20110047555DECENTRALIZED LOAD DISTRIBUTION IN AN EVENT-DRIVEN SYSTEM - A computer-implemented method, computer program product and computer readable storage medium directed to decentralized load distribution in an event-driven system. Included are receiving a data flow to be processed by a plurality of tasks at a plurality of nodes in the event-driven system having stateful and stateless event processing components, wherein the plurality of tasks are selected from the group consisting of hierarchical tasks (a task that is dependent on the output of another task), nonhierarchical tasks (a task that is not dependent on the output of another task) and mixtures thereof. Tasks are considered for migration to distribute the system load of processing tasks. The target node, to which the at least one target task is migrated, is chosen wherein the target node meets predetermined criteria in terms of load distribution quality. The computer-implemented method, computer program product and computer readable storage medium of the present invention may also include migrating tasks to target nodes to reduce cooling costs and selecting at least one node to go into quiescent mode.02-24-2011
20110047554DECENTRALIZED LOAD DISTRIBUTION TO REDUCE POWER AND/OR COOLING COSTS IN AN EVENT-DRIVEN SYSTEM - A computer-implemented method, computer program product and computer readable storage medium directed to decentralized load placement in an event-driven system so as to minimize energy and cooling related costs. Included are receiving a data flow to be processed by a plurality of tasks at a plurality of nodes in the event-driven system having stateful and stateless event processing components, wherein the plurality of tasks are selected from the group consisting of hierarchical tasks (a task that is dependent on the output of another task), nonhierarchical tasks (a task that is not dependent on the output of another task) and mixtures thereof. Nodes are considered for quiescing whose current tasks can migrate to other nodes while meeting load distribution and energy efficiency parameters and the expected duration of the quiesce provides benefits commensurate with the costs of quiesce and later restart. Additionally, tasks are considered for migrating to neighbor nodes to distribute the system load of processing the tasks and reduce cooling costs.02-24-2011
20110023048INTELLIGENT DATA PLACEMENT AND MANAGEMENT IN VIRTUAL COMPUTING ENVIRONMENTS - Methods and apparatus involve intelligently pre-placing data for local consumption by workloads in a virtual computing environment. Access patterns of the data by the workload are first identified. Based thereon, select data portions are migrated from a first storage location farther away from the workload to a second storage location closer to the workload. Migration also occurs at a time when needed by the workload during use. In this manner, bandwidth for data transmission is minimized. Latency effects created by consumption of remotely stored data are overcome as well. In various embodiments, a data vending service and proxy are situated between a home repository of the data and the workload. Together they serve to manage and migrate the data as needed. Data recognition patterns are disclosed as is apportionment of the whole of the data into convenient migration packets. De/Encryption, (de)compression, computing systems and computer program products are other embodiments.01-27-2011
20110265096MANAGING RESOURCES IN A MULTIPROCESSING COMPUTER SYSTEM - Embodiments of the invention relate to multiprocessing systems. An aspect of the invention concerns a multiprocessing system that comprises a hardware control component for selecting a hardware management action responsive to a hardware policy and a virtualization component for presenting virtual hardware resources to a software task execution environment. The system may further comprise a software workload management component for controlling at least one running software task and routing at least one new software task using the virtual hardware resources; and a communication component for signaling the software workload management component to perform a software management action in compliance with the hardware management action. The hardware policy may be a hardware power management policy, and the software management action may comprise quiescing the at least one running software task or routing the new software tasks to a different software task execution environment.10-27-2011
20110265095Resource Affinity via Dynamic Reconfiguration for Multi-Queue Network Adapters - A mechanism is provided for providing resource affinity for multi-queue network adapters via dynamic reconfiguration. A device driver allocates an initial queue pair within a memory. The device driver determines whether workload of the data processing system has risen above a predetermined high threshold. Responsive to the workload rising above the predetermined high threshold, the device driver allocates and initializes an additional queue pair in the memory. The device driver programs a receive side scaling (RSS) mechanism in a network adapter to allow for dynamic insertion of an additional processing engine associated with the additional queue pair. The device driver enables transmit tuple hashing to the additional queue pair.10-27-2011
20110093862WORKLOAD-DISTRIBUTING DATA REPLICATION SYSTEM - A method for more effectively distributing the I/O workload in a data replication system is disclosed herein. In selected embodiments, such a method may include generating an I/O request and identifying a storage resource group associated with the I/O request. In the event the I/O request is associated with a first storage resource group, the I/O request may be directed to a first storage device and a copy of the I/O request may be mirrored from the first storage device to a second storage device. Alternatively, in the event the I/O request is associated with a second storage resource group, the I/O request may be directed to a second storage device and a copy of the I/O request may be mirrored from the second storage device to the first storage device. A corresponding system, apparatus, and computer program product are also disclosed and claimed herein.04-21-2011
20100293552Altering Access to a Fibre Channel Fabric - A mechanism is provided for altering access to a network. A virtual I/O server controller in a virtual I/O server operating system receives an indication that an identified communications adapter requires attention. The virtual I/O server controller issues a set of calls to a set of N_port identification virtualization server adapters coupled to the identified communications adapter. Each of the set of calls indicates to each of the set of N_port identification virtualization server adapters a request to move a set of clients from their assigned port on the identified communications adapter to an available port on a failover communications adapter. The set of N_port identification virtualization server adapters moves the set of clients from the identified communications adapter to the failover communications adapter.11-18-2010
20110126209Distributed Multi-Core Memory Initialization - In a system having a plurality of processing nodes, a control node divides a task into a plurality of sub-tasks, and assigns the sub-tasks to one or more additional processing nodes which execute the assigned sub-tasks and return the results to the control node, thereby enabling a plurality of processing nodes to efficiently and quickly perform memory initialization and test of all assigned sub-tasks.05-26-2011
20110138396METHOD AND SYSTEM FOR DATA DISTRIBUTION IN HIGH PERFORMANCE COMPUTING CLUSTER - The present invention discloses a method and system for data distribution in a High-Performance Computing cluster, the High-Performance Computing cluster comprising a Management node and M computation nodes where M is an integer greater than or equal to 2, the Management node distributing the specified data to the M computation nodes, the method comprising steps of: dividing the M computation nodes into m layers where m is an integer greater than or equal to 2; dividing the specified data into k shares where k is an integer greater than or equal to 2; distributing, by the Management node, the k shares of data to a first layer of computation nodes as sub-nodes thereof, each of the first layer of computation nodes obtaining at least one share of data therein; distributing, by each of the computation nodes, the at least one share of data distributed by a parent node thereof to sub-computation nodes thereof; and requesting, by each of the computation nodes, the remaining specified data from other computation nodes, to thereby obtain all the specified data. The method and system enable data to be distributed rapidly to various computation nodes in the High-Performance Computing cluster.06-09-2011
20110138395THERMAL MANAGEMENT IN MULTI-CORE PROCESSOR - Techniques described herein generally relate to multi-core processors including two or more processor cores. Example embodiments may set forth devices, methods, and computer programs related to thermal management in the multi-core processor. Some example methods may include retrieving a first temperature reading for the first processor core during a scheduling interval, retrieving a second temperature reading for the second processor core also during the scheduling interval, and assigning a first task to the first processor core to be executed based on a comparison of the first temperature reading and the second temperature reading retrieved during the scheduling interval.06-09-2011
20120266181SCALABLE PACKET PROCESSING SYSTEMS AND METHODS - A data processing architecture includes multiple processors connected in series between a load balancer and reorder logic. The load balancer is configured to receive data and distribute the data across the processors. Appropriate ones of the processors are configured to process the data. The reorder logic is configured to receive the data processed by the processors, reorder the data, and output the reordered data.10-18-2012
20090300642FILE INPUT/OUTPUT SCHEDULER - Handling of input or output (I/O) to or from a media device may be implemented in a system having a memory, a processor unit with a main processor and an auxiliary processor having an associated local memory, and the media device. An incoming I/O request received from an application running on the processor unit may be serviced according to a schedule. A set of processor executable instructions configured to implement I/O handling may include media filter layers. I/O handling may alternatively comprise: receiving an incoming I/O request from an application running on a main processor; inserting the request into a schedule embodied in the main memory; and implementing the request according to the schedule and one or more filters, at least one of which is implemented by an auxiliary processor.12-03-2009
20100023950WORKFLOW PROCESSING APPARATUS - A workflow processing apparatus receives interface information of a function provided by a device on a network from the device on the network and sends, during the processing of a workflow, input information based on the interface information of the function provided by the device on the network and a program for controlling the function provided by the device on the network to the device on the network.01-28-2010
20090150898MULTITHREADING FRAMEWORK SUPPORTING DYNAMIC LOAD BALANCING AND MULTITHREAD PROCESSING METHOD USING THE SAME - A multithreading framework supporting dynamic load balancing, the multithreading framework being used to perform multi-thread programming, the multithreading framework includes a job scheduler for performing parallel processing by redefining a processing order of one or more unit jobs, transmitted from a predetermined application, based on unit job information included in the respective unit jobs, and transmitting the unit jobs to a thread pool based on the redefined processing order, a device enumerator for detecting a device in which the predetermined application is executed and defining resources used inside the application, a resource manager for managing the resources related to the predetermined application executed using the job scheduler or the device enumerator, and a plug-in manager for managing a plurality of modules which perform various types of functions related to the predetermined application in a plug-in manner, and providing such plug-in modules to the job scheduler.06-11-2009
20110154357Storage Management In A Data Processing System - The invention relates to a method for storage management in a data processing system having a plurality of storage devices with different performance attributes and a workload. The workload is associated with respective sets of data blocks to be stored in said plurality of storage devices. The method comprises the steps of dynamically determining performance requirements of the workload and dynamically determining performance attributes of the storage devices. The method further comprises the step of allocating data blocks to the storage devices depending on the performance requirements of the associated workload and the performance attributes of the storage devices.06-23-2011
20110307903SOFT PARTITIONS AND LOAD BALANCING - A method and system are provided for load balancing and partial task-processor binding. The method may provide for migrating at least one first task partially bound to and executing on at least one first processor. In accordance with the method, if at least one first condition is true, then the at least one first task may be migrated to at least one second processor such that the at least one second processor executes the at least one first task. Moreover, in accordance with the method, if at least one second condition is true, the at least one first task may be migrated back to the at least one first processor such that the at least one first processor executes the at least one first task.12-15-2011
20090172693Assigning work to a processing entity according to non-linear representations of loadings - To perform load balancing across plural processing entities, load level indications associated with plural processing entities are received. The load level indications are representations based on applying a concave function to the loadings of the plural processing entities. A processing entity is selected from among the plural processing entities to assign work according to the load level indications.07-02-2009
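As a concrete example of a concave representation (the claim covers concave functions generally; the square root and the quantization step here are just one illustrative choice):

```python
import math

def load_level_indication(loading, levels=16):
    """Report load through a concave map (square root) quantized to a few
    levels. The concave map stretches the low-load range, so lightly
    loaded entities -- the interesting ones for assignment -- are easier
    to tell apart."""
    loading = min(max(loading, 0.0), 1.0)
    return round(math.sqrt(loading) * (levels - 1))

indications = {pe: load_level_indication(l)
               for pe, l in {"pe0": 0.10, "pe1": 0.50, "pe2": 0.95}.items()}
print(min(indications, key=indications.get))  # work goes to pe0
```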
20120042322Hybrid Program Balancing - A method for balancing loads in a system having multiple processing elements (800) includes executing a plurality of load balancing algorithms in a dry run on load data from the system (810, 820, 830, 840), recording the results of each of the load balancing algorithms (815, 825, 835, 845), evaluating the results of each of the load balancing algorithms (850), selecting a load balancing algorithm providing the best results (855) and implementing the results of the selected algorithm on the system (860).02-16-2012
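A minimal sketch of the dry-run-then-select flow described above, with a trivial imbalance metric and a single stand-in candidate algorithm; every name and the metric itself are hypothetical:

```python
def hybrid_balance(load_data, algorithms, imbalance_metric):
    """Dry-run each candidate balancing algorithm on a copy of the load
    data, record and score each result, and return the best plan."""
    best_name, best_plan, best_score = None, None, float("inf")
    for name, algo in algorithms.items():
        plan = algo(list(load_data))      # dry run on a copy of the data
        score = imbalance_metric(plan)    # evaluate the recorded result
        if score < best_score:
            best_name, best_plan, best_score = name, plan, score
    return best_name, best_plan           # caller implements best_plan

def spread(plan):
    # Imbalance metric: difference between heaviest and lightest bin.
    totals = [sum(bin_) for bin_ in plan]
    return max(totals) - min(totals)

def round_robin(loads, bins=2):
    # Stand-in candidate algorithm: deal loads out across bins in turn.
    plan = [[] for _ in range(bins)]
    for i, load in enumerate(loads):
        plan[i % bins].append(load)
    return plan

name, plan = hybrid_balance([5, 9, 1, 7], {"rr": round_robin}, spread)
print(name, plan)  # rr [[5, 1], [9, 7]]
```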
20080320486Business Process Automation - A system is described by which business processes within and between organizations and/or individuals may be automated using standards-based, service-oriented business process automation architectures based on XML and Web Services Standards. An execution framework for the business processes is also described. Further aspects include a decomposition methodology for deconstructing business process specifications into business flows, business rules and business states. The business flows (FIG. …)12-25-2008
20110321058Adaptive Demand-Driven Load Balancing - The present disclosure involves systems, software, and computer implemented methods for providing adaptive demand-driven load balancing for processing jobs in business applications. One process includes operations for identifying a workload for distribution among a plurality of work processes. A subset of the workload is assigned to a plurality of work processes for processing of the subset of the workload based on an application-dependent algorithm. An indication of availability is received from one of the plurality of work processes, and a new subset of the workload is assigned to the work process.12-29-2011
20120005686Annotating HTML Segments With Functional Labels - A method and apparatus is described for assigning functional labels to segments of web pages in an application-independent way. In the approach described herein, one of a generic set of functional labels is automatically assigned to each segment of a web page, where the generic functional labels may be topic-independent and application-independent. Applications with different needs can determine which segments of the web page to process based on which functional labels correspond to the types of information needed by each application. Thus, the work of classifying the function of each segment of a web page is separated from the work of selecting which segments satisfy the need of a particular application. The work of classification can be performed in an application-independent way, relieving every application developer of the burden of having to create their own classifiers.01-05-2012
20080320489LOAD BALANCING - In a preferred embodiment, the present invention provides a method of load balancing in a data processing system comprising a plurality of physical CPUs and a plurality of virtual CPUs, the method comprising: mapping one or more virtual CPUs to each of said physical CPUs; and dynamically adapting the mapping depending on the load of said physical CPUs.12-25-2008
20120011519PARALLEL CHECKPOINTING FOR MIGRATION OF WORKLOAD PARTITIONS - A method includes receiving a command for migration of a workload partition having multiple processes from a source machine to a target machine. The method includes executing, for each of the multiple processes at least partially in parallel, an operation to create checkpoint data. The operation to create the checkpoint data provides an estimation of a size of the checkpoint data that is needed for migration, wherein the operation to create the checkpoint data is independent of storing the checkpoint data in the file. The method includes allocating areas within the file for storage of the checkpoint data for each of the multiple processes. The method includes storing the checkpoint data, for each of the multiple processes at least partially in parallel, into the areas allocated within the file based on offsets in the file for each of the multiple processes.01-12-2012
20090100437TEMPERATURE-AWARE AND ENERGY-AWARE SCHEDULING IN A COMPUTER SYSTEM - A computer system to schedule loads across a set of processor cores is described. During operation, the computer system receives a process to be executed. Next, the computer system obtains one or more thermodynamic process characteristics associated with the process and one or more thermodynamic processor-core characteristics associated with operation of the set of processor cores. Then, the computer system schedules the process to be executed by at least one of the processor cores based on the one or more thermodynamic process characteristics and the one or more thermodynamic processor-core characteristics.04-16-2009
20120117571LOAD BALANCER AND FIREWALL SELF-PROVISIONING SYSTEM - A method and system may receive a request to configure a computing resource, such as a load balancer or firewall, based on configuration information received from a user via a web portal. The configuration information may be stored and a subsequent request to commit the stored configuration information may be received. One or more jobs may be queued in a jobs database based on the request to commit the configuration information. The one or more jobs may be dequeued by a workflow engine and executed to configure the computing resource.05-10-2012
20120017220Systems and Methods for Distributing Validation Computations - In one embodiment, a method includes statically analyzing a validation toolkit environment. The method may also include, identifying a plurality of computational threads that do not share data structures with each other based on analysis of the validation toolkit environment. The method may additionally include calculating computational requirements of the computational threads. The method may further include distributing the threads among a plurality of resources such that the aggregate computational requirements of the computational threads are approximately evenly balanced among the plurality of resources.01-19-2012
20120023504NETWORK OPTIMIZATION - A method for handling communication data involves identifying available resources for applying compression tasks and estimating a throughput reduction value to be achieved by applying each of a plurality of different compression tasks to a plurality of media items. A cost of applying the plurality of different compression tasks to the plurality of media items is estimated. The method further includes finding an optimization solution that maximizes the throughput reduction value over possible pairs of the compression tasks and the media items, while keeping the cost of the tasks of the solution within the identified available resources, and providing instructions to apply compression tasks according to the optimization solution.01-26-2012
20120060171Scheduling a Parallel Job in a System of Virtual Containers - Methods and apparatus are provided for scheduling parallel jobs in a system of virtual containers. At least one parallel job is assigned to a plurality of containers competing for a total capacity of a larger container, wherein the at least one parallel job comprises a plurality of tasks. The assignment method comprises determining a current utilization and a potential free capacity for each of the plurality of competing containers; and assigning the tasks to one of the plurality of containers based on the potential free capacities and at least one predefined scheduling policy. The predefined scheduling policy may comprise, for example, one or more of load balancing, server consolidation, maximizing the current utilizations, minimizing a response time of the parallel job and satisfying quality of service requirements. The load balancing can be achieved, for example, by assigning a task to a container having a highest potential free capacity.03-08-2012
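Under one possible model of "potential free capacity" (bounded by the container's own cap and by the share of the larger container's total capacity that competing containers are not using; the patent may define it differently), the greedy "highest potential free capacity first" assignment looks like this sketch:

```python
def potential_free_capacity(i, caps, utils, total):
    """Capacity container i could still claim: limited by its own cap and
    by the part of the larger container's total capacity that competing
    containers are not currently using (assumed model)."""
    others = sum(u for j, u in enumerate(utils) if j != i)
    return min(caps[i], total - others) - utils[i]

def assign_tasks(task_demands, caps, utils, total):
    """Greedy placement: each task of the parallel job goes to the
    container that currently has the highest potential free capacity."""
    placement = []
    for demand in task_demands:
        target = max(range(len(caps)),
                     key=lambda c: potential_free_capacity(c, caps, utils, total))
        placement.append(target)
        utils[target] += demand  # the task now consumes part of that container
    return placement

print(assign_tasks([1, 1, 2], caps=[4, 6], utils=[1.0, 2.0], total=8))
```

Other policies from the list above (consolidation, response-time minimization) would swap in a different key function at the `max` step.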
20120159510HANDLING AND REPORTING OF OBJECT STATE TRANSITIONS ON A MULTIPROCESS ARCHITECTURE - Techniques are described for managing states of an object using a finite-state machine. The states may be used to indicate whether an object has been added, removed, requested or updated. Embodiments of the invention generally include dividing a process into at least two threads where a first thread changes the state of the object while the second thread performs the processing of the data found in the object. While the second thread is processing the data, the first thread may receive additional updates and change the states of the objects to inform the second thread that it should process the additional updates when the second thread becomes idle.06-21-2012
20120072919MOBILE DEVICE AND METHOD FOR EXPOSING AND MANAGING A SET OF PERFORMANCE SCALING ALGORITHMS - A mobile device, a method for managing and exposing a set of performance scaling algorithms on the device, and a computer program product are disclosed. The mobile device includes a multiple-core processor communicatively coupled to a non-volatile memory. The non-volatile memory includes a set of programs defined by a respective combination of a performance scaling algorithm and a set of parameters; a startup program that, when executed by the multiple-core processor, identifies at least one member of the set of programs suitable for monitoring operation of the mobile device and scaling the performance of an identified processor core; and an application programming interface that exposes the set of programs.03-22-2012
20110078701Method and arrangement for distributing the computing load in data processing systems during execution of block-based computing instructions, as well as a corresponding computer program and a corresponding computer-readable storage medium - The invention is directed to a method and an arrangement for distributing the computing load in data processing systems while executing block-based computing instructions, as well as a corresponding computer program and a corresponding computer-readable storage medium, which can be used to uniformly distribute the computing load in processors for periodically occurring computing operations. The block-based computing instructions are hereby divided into blocks, wherein a block requires a number of time-sequential incoming input values, wherein the number can be predetermined for each block. A particular area of application is the field of digital processing of multimedia signals, such as in particular audio signals, video signals, and the like.03-31-2011
20120079501Application Load Adaptive Processing Resource Allocation - The invention provides hardware-automated systems and methods for efficiently sharing a multi-core data processing system among a number of application software programs, by dynamically reallocating processing cores of the system among the application programs in an application processing load adaptive manner. The invention enables maximizing the whole system data processing throughput, while providing deterministic minimum system access levels for each of the applications. With invented techniques, each application on a shared multi-core computing system dynamically gets a maximized number of cores that it can utilize in parallel, so long as all applications on the system still get at least up to their entitled number of cores whenever their actual processing load so demands. The invention provides inherent security and isolation between applications, as each application resides in its dedicated system memory segments, and can safely use the shared processing system as if it was the sole application running on it.03-29-2012
20120079500PROCESSOR USAGE ACCOUNTING USING WORK-RATE MEASUREMENTS - Accounting charges are assigned to workloads by measuring a relative use of computing resources by the workloads, then scaling the results using a determined work-rate for the corresponding workload. Usage metrics for the individual resources may be selectable for the resources being measured, and the work-rates may be determined from an analytical model or from an empirical model that determines work-rates from an indication of processor throughput. Under single-workload conditions on a platform, or other suitable conditions, a workload type may be used to select the particular usage metrics applied for the various resources.03-29-2012
20120079499LOAD BALANCING DATA ACCESS IN VIRTUALIZED STORAGE NODES - Systems and methods of load balancing data access in virtualized storage nodes are disclosed. An embodiment of a method includes receiving a data access request from a client for data on a plurality of the virtualized storage nodes. The method also includes connecting the client to one of the plurality of virtualized storage nodes having data for the data access request. The method also includes reconnecting the client to another one of the plurality of virtualized storage nodes to continue accessing data in the data access request.03-29-2012
20090133031INFORMATION SYSTEM, LOAD CONTROL METHOD, LOAD CONTROL PROGRAM AND RECORDING MEDIUM - A load control server, computer program product, and method for controlling bottlenecks in an information system that includes application servers and a database server. Each application server executes at least one application program for processing a transaction received by each application server. The database server accesses a database based on a request received from an application server. A processing time required for each application program to process the transaction is monitored. A bottleneck relating to usage of at least one resource is detected. Each resource is a resource of at least one application server, a resource related to input to the transaction, a resource of the database server, or a resource related to the transaction. The detecting responds to the monitoring determining that the processing time for processing the transaction by at least one application server is not within a predesignated permissible processing time range. The detected bottleneck is removed.05-21-2009
20120222042MANAGEMENT OF HETEROGENEOUS WORKLOADS - Systems and methods for managing a system of heterogeneous workloads are provided. Work that enters the system is separated into a plurality of heterogeneous workloads. A plurality of high-level quality of service goals is gathered. At least one of the plurality of high-level quality of service goals corresponds to each of the plurality of heterogeneous workloads. A plurality of control functions are determined that are provided by virtualizations on one or more containers in which one or more of the plurality of heterogeneous workloads run. An expected utility of a plurality of settings of at least one of the plurality of control functions is determined in response to the plurality of high-level quality of service goals. At least one of the plurality of control functions is exercised in response to the expected utility to effect changes in the behavior of the system.08-30-2012
20120131594SYSTEMS AND METHODS FOR GENERATING DYNAMICALLY CONFIGURABLE SUBSCRIPTION PARAMETERS FOR TEMPORARY MIGRATION OF PREDICTIVE USER WORKLOADS IN CLOUD NETWORK - Embodiments relate to systems and methods for generating dynamically configurable subscription parameters for the temporary migration of predictive user workloads in a cloud network. Aspects relate to platforms and techniques for analyzing overnight or other off-peak or temporary deployments of user workloads to underutilized host clouds. A cloud management system can capture usage history data for a user operating in a default deployment, such as a premise/cloud mix. A deployment engine can determine the resources required for the user's workload pattern, and examine corresponding resources available in a set of other geographically-dispersed host clouds. The host clouds can comprise clouds based in different time zones, so that cloud capacity during U.S. West Coast evening time or European overnight hours can be packaged and offered to U.S. East Coast users at reduced rates. The deployment engine can generate different sets of dynamic subscription terms or parameters to be offered to the user, such as different costs or service levels at staggered off-peak periods.05-24-2012
20120131593SYSTEM AND METHOD FOR COMPUTING WORKLOAD METADATA GENERATION, ANALYSIS, AND UTILIZATION - A method for managing computing resources includes generating a first workload metadata for a first workload, generating a second workload metadata for a second workload, and comparing the first workload and the second workload metadata against resource metadata. The method includes, based upon the comparison of workload metadata against resource metadata, identifying a potential conflict in resource requirements between the first workload and the computing resources available to the processing entity, and assigning the second workload for execution by one of the processing entities. The metadata characterize computing resources required by the associated workload. The first workload metadata is initially prioritized over the second workload metadata. The workloads are to be executed by processing entities. The resource metadata is available to the processing entities. The potential conflict in resource requirements does not exist between the resource requirements of the second workload and the computing resources of the processing entity.05-24-2012
20120131595PARALLEL COLLISION DETECTION METHOD USING LOAD BALANCING AND PARALLEL DISTANCE COMPUTATION METHOD USING LOAD BALANCING - Disclosed herein is a parallel collision detection method using load balancing in order to detect collision between two objects of a polygon soup. The parallel collision detection method is processed in parallel using a plurality of threads. The parallel collision detection method includes traversing a Bounding Volume Traversal Tree (BVTT) using Bounding Volume Hierarchies (BVHs) related to the polygon soup in a depth-first search manner or a breadth-first search manner; recursively traversing the child nodes of an internal node (a parent node) when the currently traversed node is an internal node and the two Bounding Volumes (BVs) in the corresponding node overlap, and stopping traversal of the node when the currently traversed node is an internal node and the two Bounding Volumes (BVs) do not overlap; and storing collision primitives in a leaf node when the currently traversed node is a leaf node and collision primitives in the leaf node overlap.05-24-2012
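The traversal rule in the entry above maps to a compact recursion. The sketch below assumes axis-aligned bounding boxes and a simple node layout, both assumptions of this example rather than details from the patent; a load-balanced parallel version would hand independent BVTT subtrees of this recursion to worker threads.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class BVHNode:
        lo: Tuple[float, float, float]          # AABB minimum corner
        hi: Tuple[float, float, float]          # AABB maximum corner
        left: Optional["BVHNode"] = None
        right: Optional["BVHNode"] = None
        primitive: Optional[int] = None         # set only on leaf nodes

    def overlap(a: BVHNode, b: BVHNode) -> bool:
        return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

    def traverse(a: BVHNode, b: BVHNode, out: list) -> None:
        if not overlap(a, b):
            return                              # BVs disjoint: prune this BVTT node
        if a.primitive is not None and b.primitive is not None:
            out.append((a.primitive, b.primitive))   # leaf pair: store primitives
            return
        if a.primitive is None:                 # internal node: recurse on children
            traverse(a.left, b, out); traverse(a.right, b, out)
        else:
            traverse(a, b.left, out); traverse(a, b.right, out)

    pairs = []
    traverse(BVHNode((0, 0, 0), (1, 1, 1), primitive=1),
             BVHNode((0.5, 0, 0), (1.5, 1, 1), primitive=2), pairs)
    print(pairs)                                # -> [(1, 2)]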
20090064164METHOD OF VIRTUALIZATION AND OS-LEVEL THERMAL MANAGEMENT AND MULTITHREADED PROCESSOR WITH VIRTUALIZATION AND OS-LEVEL THERMAL MANAGEMENT - A program product and method of managing task execution on an integrated circuit chip such as a chip-level multiprocessor (CMP) with Simultaneous MultiThreading (SMT). Multiple chip operating units or cores have chip sensors (temperature sensors or counters) for monitoring temperature in the units. Task execution is monitored for hot tasks and especially for hot spots. Task execution is balanced, thermally, to minimize hot spots. Thermal balancing may include Simultaneous MultiThreading (SMT) heat balancing, chip-level multiprocessor (CMP) heat balancing, deferring execution of identified hot tasks, migrating identified hot tasks from a current core to a colder core, user-specified core-hopping, and SMT hardware threading.03-05-2009
20120216214MIXED OPERATING PERFORMANCE MODE LPAR CONFIGURATION - Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.08-23-2012
20110185366Load-balancing of processes based on inertia - A process is selected for movement from a current node to a new node, based on an inertia of the process. The inertia is a quantified measure of the impact resulting from the process being inaccessible while being moved. The inertia can take into account the number of current external connections to the process; the larger the number of current external connections is, the greater the inertia. The inertia can take into account the extent to which the process accepts external connections; the greater the extent to which the process accepts external connections is, the greater the inertia. The inertia can take into account the desired availability of the process; the greater the desired availability is, the greater the inertia. The inertia can take into account a specified quality of service of the process; the higher the specified quality of service is, the greater the inertia.07-28-2011
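The inertia measure described above can be pictured as a weighted score over the listed factors. The sketch below is a minimal illustration; the weights, field names, and additive form are assumptions of this example, since the abstract names the inputs but not a formula. The process with the lowest inertia is the cheapest to move.

    # Inertia grows with external connections, the rate at which the process
    # accepts new external connections, its desired availability, and its
    # quality-of-service level; the weights are illustrative assumptions.
    def inertia(p, w_conn=1.0, w_accept=1.0, w_avail=1.0, w_qos=1.0):
        return (w_conn * p["external_connections"]
                + w_accept * p["accept_rate"]
                + w_avail * p["desired_availability"]
                + w_qos * p["qos_level"])

    def pick_process_to_move(processes):
        """Choose the process whose move impacts the system least."""
        return min(processes, key=inertia)

    procs = [
        {"name": "db", "external_connections": 40, "accept_rate": 0.9,
         "desired_availability": 0.999, "qos_level": 3},
        {"name": "batch", "external_connections": 2, "accept_rate": 0.1,
         "desired_availability": 0.9, "qos_level": 1},
    ]
    print(pick_process_to_move(procs)["name"])   # -> "batch"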
20100175070VIRTUAL MACHINE MANAGING DEVICE, VIRTUAL MACHINE MANAGING METHOD, AND VIRTUAL MACHINE MANAGING PROGRAM - An object of the present invention is to suppress a variation in virtual machine startup times when multiple virtual machines are started in a computer system having multiple virtual machine providing servers. Execution server distribution unit 07-08-2010
20120222041TECHNIQUES FOR CLOUD BURSTING - Techniques for automated and controlled cloud migration or bursting are provided. A schema for a first cloud in a first cloud processing environment is used to evaluate metrics against thresholds defined in the schema. When a threshold is reached other metrics for other clouds in second cloud processing environments are evaluated and a second cloud processing environment is selected. Next, a second cloud is cloned in the selected second cloud processing environment for the first cloud and traffic associated with the first cloud is automatically migrated to the cloned second cloud.08-30-2012
20100050182PARALLEL PROCESSING SYSTEM - A system for processing a user application having a plurality of functions identified for parallel execution. The system includes a client coupled to a plurality of compute engines. The client executes both the user application and a compute engine management module. Each of the compute engines is configured to execute a requested function of the plurality of functions in response to a compute request. If, during execution of the user application by the client, the compute engine management module detects a function call to one of the functions identified for parallel execution, the module selects a compute engine and sends a compute request to the selected compute engine requesting that it execute the function called. The selected compute engine calculates a result of the requested function and sends the result to the compute engine management module, which receives the result and provides it to the user application.02-25-2010
20120180066VIRTUAL TAPE LIBRARY CLUSTER - Various embodiments for managing a virtual tape library cluster are provided. A virtual tape library system is enhanced by representing virtual tape resources in cluster nodes with a unique serial number. A least utilized cluster node is determined. One of the virtual tape resources represented within the least utilized cluster node is selected.07-12-2012
20090064168System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks By Modifying Tasks - A system and method are provided for providing hardware based dynamic load balancing of message passing interface (MPI) tasks by modifying tasks. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. Thus, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors.03-05-2009
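The history-driven rebalancing in the entry above (and in the related entries 20120266180 and 20090064165 later in this list) can be pictured in software, even though the patent places the controller in hardware. In the sketch below the per-rank synchronization history, the 10% shift fraction, and all names are illustrative assumptions: ranks that consistently arrive late at barriers shed work to ranks that arrive early.

    from collections import defaultdict

    class LoadBalancer:
        def __init__(self):
            self.history = defaultdict(list)   # rank -> arrival times at barriers

        def record_sync(self, rank, t):
            self.history[rank].append(t)

        def rebalance(self, workloads, shift=0.10):
            """Move a fraction of work from the slowest rank to the fastest."""
            avg = {r: sum(ts) / len(ts) for r, ts in self.history.items()}
            slowest = max(avg, key=avg.get)
            fastest = min(avg, key=avg.get)
            moved = workloads[slowest] * shift
            workloads[slowest] -= moved
            workloads[fastest] += moved
            return workloads

    lb = LoadBalancer()
    for rank, times in {0: [1.0, 1.1], 1: [2.0, 2.2]}.items():
        for t in times:
            lb.record_sync(rank, t)
    print(lb.rebalance({0: 100.0, 1: 100.0}))   # shifts work from rank 1 to rank 0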
20120185869MULTIMEDIA PRE-PROCESSING APPARATUS AND METHOD FOR VIRTUAL MACHINE IN MULTICORE DEVICE - A multimedia data preprocessing apparatus for a virtual machine is provided. The multimedia data preprocessing apparatus includes a detection unit configured to detect multimedia data included in an application, a generation unit configured to generate a thread for processing the detected multimedia data, and an allocation unit configured to allocate the generated thread to an idle core.07-19-2012
20120185868WORKLOAD PLACEMENT ON AN OPTIMAL PLATFORM IN A NETWORKED COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach for optimizing workload placement in a networked computing environment (e.g., a cloud computing environment). Specifically, under embodiments of the present invention, a workload placement technique is applied to determine an optimal platform for handling an identified workload. The workload placement technique can comprise one or more of the following: a shadow placement technique whereby the workload is placed on multiple similar platforms substantially contemporaneously; a simultaneous placement technique whereby the workload is placed on multiple different platforms substantially contemporaneously; and/or a single platform placement technique whereby the workload is placed on a single platform at a given time. Once an optimal platform is identified, a workload timing method may be applied to determine when the workload should be placed thereon. The workload timing method can comprise one or more of the following: a time-based method whereby the workload is placed on the optimal platform at a predetermined time or time interval; and/or an event-based method whereby the workload is placed on the optimal platform based on an occurrence of one or more events external to the workload itself (e.g., a certain CPU or memory consumption, etc.). Once the workload is placed on the optimal platform, optimization data can be gathered for future assessments.07-19-2012
20120185867Optimizing The Deployment Of A Workload On A Distributed Processing System - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling, during operations on the distributed processing system, attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; and deploying the workload on the selected nodes.07-19-2012
20120185870All-to-All Comparisons on Architectures Having Limited Storage Space - Mechanisms for performing all-to-all comparisons on architectures having limited storage space are provided. The mechanisms determine a number of data elements to be included in each set of data elements to be sent to each processing element of a data processing system, and perform a comparison operation on at least one set of data elements. The comparison operation comprises sending a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element and sending a second request to main memory for transfer of a second set of data elements into the local memory. A pairwise comparison computation of the all-to-all comparison of data elements operation is performed at approximately the same time as the second set of data elements is being transferred from main memory to the local memory.07-19-2012
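The overlap of transfer and computation described above is the classic double-buffering pattern. The sketch below simulates it in Python under assumptions of this example: a list slice stands in for a DMA transfer from main memory to the limited local store, and a thread stands in for the asynchronous transfer engine, so the next chunk arrives while the current chunk is being compared.

    import threading

    def fetch(dataset, start, count):
        """Stands in for a DMA transfer from main memory to the local store."""
        return dataset[start:start + count]

    def _prefetch(out, dataset, start, count):
        out["data"] = fetch(dataset, start, count)

    def all_to_all(resident, dataset, chunk):
        results, pos = [], 0
        current = fetch(dataset, pos, chunk)
        while current:
            nxt = {}
            t = threading.Thread(target=_prefetch,
                                 args=(nxt, dataset, pos + chunk, chunk))
            t.start()                          # overlap the next transfer...
            for a in resident:                 # ...with the pairwise compares
                for b in current:
                    results.append((a, b, a == b))
            t.join()
            pos, current = pos + chunk, nxt["data"]
        return results

    print(len(all_to_all([1, 2, 3], list(range(10)), chunk=4)))   # -> 30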
20120084789System and Method for Optimizing the Evaluation of Task Dependency Graphs - One embodiment of the present invention is a technique for optimizing a task graph that specifies multiple tasks and the dependencies between the specified tasks. When optimizing the task graph, the optimization engine performs multiple iterations of runtime optimization operations on the task graph. At each iteration, an optimized task graph is generated based on a different task aggregation topology. The optimized task graph is then compiled and executed. Runtime statistics related to the execution are collected, and, in subsequent iterations, the task graph is further optimized based on the collected statistics. Once the optimization process is complete, the most optimal task graph topology that was identified during the process is used to generate an optimized task graph for execution.04-05-2012
20120084788COMPLEX EVENT DISTRIBUTING APPARATUS, COMPLEX EVENT DISTRIBUTING METHOD, AND COMPLEX EVENT DISTRIBUTING PROGRAM - A server calculates correlations between complex event processing processes performed by virtual machines (VMs) so as to detect events from streams using condition expressions for identifying the events. The server obtains the load status of each of the VMs. The server then detects a VM having a processing load exceeding a predetermined level based on the load status thus obtained. When a VM having a processing load exceeding a predetermined level is detected, the server distributes the complex event processing processes to the respective VMs based on the calculated correlations between the complex event processing processes.04-05-2012
20090019449Load balancing method and apparatus in symmetric multi-processor system - Provided are a load balancing method and a load balancing apparatus in a symmetric multi-processor system. The load balancing method includes selecting, based on the load across a plurality of processors, at least two processors from among the plurality of processors, migrating a predetermined task stored in a run queue of a first processor to a migration queue of a second processor, and migrating the predetermined task stored in the migration queue of the second processor to a run queue of the second processor. Accordingly, a run queue of a processor is not blocked while migrating a task, an immediate response of the run queue is possible, and the waiting time of the scheduler is reduced. Consequently, the scheduler can speedily perform context switching, and thus the performance of the entire operating system is improved.01-15-2009
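The two-step migration above is what keeps the run queues responsive: a task first moves into the target's migration queue, and the target later drains that queue into its own run queue, so no run-queue lock is held across the hand-off. The sketch below is a minimal illustration; the locking granularity and class layout are assumptions of this example.

    from collections import deque
    from threading import Lock

    class Processor:
        def __init__(self, name):
            self.name = name
            self.run_queue = deque()
            self.migration_queue = deque()
            self.run_lock = Lock()
            self.mig_lock = Lock()

    def migrate(src: Processor, dst: Processor):
        """Step 1: move one task from src's run queue to dst's migration queue."""
        with src.run_lock:
            if not src.run_queue:
                return
            task = src.run_queue.popleft()
        with dst.mig_lock:
            dst.migration_queue.append(task)

    def drain(dst: Processor):
        """Step 2: dst pulls migrated tasks into its own run queue."""
        with dst.mig_lock:
            tasks = list(dst.migration_queue)
            dst.migration_queue.clear()
        with dst.run_lock:
            dst.run_queue.extend(tasks)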
20130174177LOAD-AWARE LOAD-BALANCING CLUSTER - A load-aware load-balancing cluster includes a switch having a plurality of ports, and a plurality of servers connected to at least some of the plurality of ports of the switch. Each server is addressable by the same virtual Internet Protocol (VIP) address. Each server in the cluster has a mechanism constructed and adapted to determine the particular server's own measured load; convert the particular server's own measured load to a corresponding own particular load category of a plurality of load categories; provide the particular server's own particular load category to other servers of the plurality of servers; obtain load category information from other servers of the plurality of servers; and maintain, as an indication of server load of each of the plurality of servers, the particular server's own particular load category and the load category information from the other servers.07-04-2013
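The key idea above is that servers exchange coarse load categories rather than raw measurements. A minimal sketch follows; the bucket thresholds, category names, and gossip style are assumptions of this example.

    # Bucket a raw load measurement (0..1) into a coarse category; peers see
    # only the category, and request routing favors the lowest category.
    CATEGORIES = [(0.25, "light"), (0.50, "moderate"),
                  (0.75, "busy"), (1.01, "overloaded")]
    ORDER = {name: i for i, (_, name) in enumerate(CATEGORIES)}

    def categorize(load: float) -> str:
        for threshold, name in CATEGORIES:
            if load < threshold:
                return name
        return CATEGORIES[-1][1]

    class Server:
        def __init__(self, name, load):
            self.name, self.category = name, categorize(load)
            self.peer_categories = {}        # peer name -> advertised category

        def advertise(self, peers):
            for p in peers:
                p.peer_categories[self.name] = self.category

    def choose(servers):
        return min(servers, key=lambda s: ORDER[s.category])

    servers = [Server("s1", 0.30), Server("s2", 0.80)]
    for s in servers:
        s.advertise([p for p in servers if p is not s])
    print(choose(servers).name)              # -> "s1"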
20130174178AUTOMATED TEST CYCLE ESTIMATION SYSTEM AND METHOD - A system and method are disclosed to estimate both the time and the number of resources required to execute a test suite, or a subset of a test suite, in parallel, with the objective of providing a balanced workload distribution. The present invention partitions the test suite for parallelization, given the dependencies that exist between test cases and the test execution times.07-04-2013
20130174176WORKLOAD MANAGEMENT IN A DATA STORAGE SYSTEM - According to certain aspects, the presently disclosed subject matter includes a method, system and apparatus for managing a plurality of disk drives in a storage system. The workload of at least one disk drive among the plurality of disk drives is monitored, wherein the monitoring comprises receiving data indicative of a temperature of the at least one disk drive. If the measured temperature matches a predefined criterion, modification of the workload distribution across the plurality of disk drives is enabled, in order to reduce the workload of the at least one disk drive.07-04-2013
20120266180Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history.10-18-2012
20110131585DATA PROCESSING SYSTEM - A data processing apparatus is constructed with an input device for inputting an instruction for causing a job processor to perform a job, an analyzing unit for analyzing the instruction inputted by the input device, a discriminating unit for discriminating the processing ability of the job processor which performs the job based on the instruction inputted by the input device, and a controller for controlling the supply of the instruction inputted by the input device to the job processor in accordance with the result of the analysis by the analyzing unit and the result of the discrimination by the discriminating unit. The job processor performs a job to transmit input data to another apparatus, and the input device inputs an instruction including a designation of destinations to which the job processor transmits data.06-02-2011
20120266179DYNAMIC MAPPING OF LOGICAL CORES - A processor that dynamically remaps logical cores to physical cores is disclosed. In one embodiment, the processor includes a plurality of physical cores, and is configured to store a mapping of logical cores to the plurality of physical cores. The processor further includes an assignment unit configured to remap the logical cores to the plurality of physical cores subsequent to a boot process of the processor. In some embodiments, the assignment unit is configured to remap the logical cores in response to receiving an indication that one or more of the plurality of physical cores have entered an idle state. The processor may be configured to load a first of the plurality of physical cores with an execution state of a second of the plurality of physical cores upon the first physical core exiting an idle state.10-18-2012
20120324471CONTROL DEVICE, MANAGEMENT DEVICE, DATA PROCESSING METHOD OF CONTROL DEVICE, AND PROGRAM - A virtual server for measuring performance 12-20-2012
20120278813LOAD BALANCING - Efforts to avoid time-outs during execution of an application in a managed execution environment may be implemented by monitoring memory allocation.11-01-2012
20120331479LOAD BALANCING DEVICE FOR BIOMETRIC AUTHENTICATION SYSTEM - A load balancing device is provided that allocates biometric authentication requests of users, received from client terminals, to one of a plurality of authentication servers, where input biometric authentication data is compared with registration-target biometric authentication data. The device stores, for each of the authentication servers, the process time of any authentication request being processed, and, when a biometric authentication request is received from a client terminal, allocates the request to an authentication server whose process time is short, estimating the check process time on the basis of the quality of the input biometric data and the quality of the registration-target biometric data and referring to the process time stored in the storage unit for each authentication server.12-27-2012
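The allocation rule above combines a quality-based estimate of the comparison time with each server's outstanding work. A minimal sketch follows; the time model (lower quality means a longer expected match) and all names are assumptions of this example.

    def estimate_check_time(input_quality: float, template_quality: float,
                            base_ms: float = 50.0) -> float:
        # Qualities in (0..1]; lower quality -> longer expected comparison.
        return base_ms / (input_quality * template_quality)

    def allocate(servers, input_quality, template_quality):
        """servers: dict of name -> outstanding process time (ms).
        Pick the least-loaded server and charge it the estimated time."""
        target = min(servers, key=servers.get)
        servers[target] += estimate_check_time(input_quality, template_quality)
        return target

    servers = {"auth1": 120.0, "auth2": 40.0}
    print(allocate(servers, 0.8, 0.9))   # -> "auth2"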
20100229180Information processing system - An information processing system includes a first system and a second system. The first system and the second system each includes: hardware; a compensation section configured to provide execution environments for execution of a process using the hardware of the system to which the compensation section belongs; and a processing section configured to execute a predetermined process in the execution environments provided by the compensation section. The hardware of the first system and the hardware of the second system are different in nature from each other. The compensation section of one of the first system and the second system compensates for the differences between the hardware of the first system and the hardware of the second system to provide the processing section of the other with the execution environments which are not affected by the differences between the hardware of the first system and the hardware of the second system.09-09-2010
20110321057MULTITHREADED PHYSICS ENGINE WITH PREDICTIVE LOAD BALANCING - A circuit arrangement and method utilize predictive load balancing to allocate the workload among hardware threads in a multithreaded physics engine. The predictive load balancing is based at least in part upon the detection of predicted future collisions between objects in a scene, such that the reallocation of respective loads of a plurality of hardware threads may be initiated prior to detection of the actual collisions, thereby increasing the likelihood that hardware threads will be optimally allocated when the actual collisions occur.12-29-2011
20110321056DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects metrics of the system, nodes, application, jobs and processing units that will be used to determine how to best allocate the jobs on the system. A job optimizer analyzes the collected metrics to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where the processing units are over utilizing the resources on the node.12-29-2011
20120102501ADAPTIVE QUEUING METHODOLOGY FOR SYSTEM TASK MANAGEMENT - A task management methodology for a system having multiple processors and task queues adapts the queuing topology by monitoring queue pressure and adjusting the queue topology from a selection of at least two different queue topologies. The queue pressure may be periodically monitored and queues with different granularities selected. The methodology reduces contention when there is high pressure on the queues while also reducing the overhead of managing queues when there is less pressure on them.04-26-2012
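One natural reading of the topology switch above is a move between a single shared queue (cheap to manage, contended under load) and per-processor queues (more overhead, less contention). The sketch below illustrates that reading; the thresholds, the pressure metric, and the two-topology choice are assumptions of this example.

    from collections import deque

    class AdaptiveQueues:
        def __init__(self, n_procs, high=0.75, low=0.25):
            self.n, self.high, self.low = n_procs, high, low
            self.topology = "shared"
            self.queues = [deque()]

        def _repartition(self, n_queues):
            tasks = [t for q in self.queues for t in q]
            self.queues = [deque() for _ in range(n_queues)]
            for i, t in enumerate(tasks):      # redistribute round-robin
                self.queues[i % n_queues].append(t)

        def adapt(self, pressure: float):
            """Switch granularity when measured pressure crosses a threshold."""
            if self.topology == "shared" and pressure > self.high:
                self.topology = "per-processor"
                self._repartition(self.n)
            elif self.topology == "per-processor" and pressure < self.low:
                self.topology = "shared"
                self._repartition(1)

    q = AdaptiveQueues(n_procs=4)
    q.adapt(0.9)
    print(q.topology, len(q.queues))           # -> per-processor 4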
20100131960Systems and Methods for GSLB Based on SSL VPN Users - The present invention provides a system and a method for global server load balancing of a plurality of sites based on a number of Secure Socket Layer Virtual Private Network (SSL VPN) users. The SSL VPN users may access servers at each of the plurality of sites. A global server load balancing (GSLB) virtual server may receive a request to access a server. The GSLB virtual server may load balance a plurality of sites, each of which may further comprise a load balancing virtual server that load balances users accessing servers via SSL VPN sessions. The GSLB virtual server may receive, from a first load balancing virtual server at a first site, a first number of current SSL VPN users accessing servers from the first site via SSL VPN sessions. The GSLB virtual server may also receive, from a second load balancing virtual server at a second site, a second number of current SSL VPN users accessing servers from the second site via SSL VPN sessions. The GSLB virtual server may determine to forward the request to either the first load balancing virtual server of the first site or the second load balancing virtual server of the second site by load balancing SSL VPN users across the plurality of sites based on the first number of current SSL VPN users and the second number of current SSL VPN users.05-27-2010
20120151494METHOD FOR DETERMINING A NUMBER OF THREADS TO MAXIMIZE UTILIZATION OF A SYSTEM - A method for determining a number of threads to maximize system utilization. The method begins with determining a first value which corresponds to the current system utilization. Next the method determines a second value which corresponds to the current number of threads in the system. Next the method determines a third value which corresponds to the number of processor cores in the system. Next the method receives a fourth value from an end user which corresponds to the optimal system utilization the end user wishes to achieve. Next the method determines a fifth value which corresponds to the number of threads necessary to achieve the optimal system utilization value received from the end user. Finally, the method sends the fifth value to all running applications.06-14-2012
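The five values in the entry above suggest a simple scaling calculation. The sketch below is one plausible model, an assumption of this example rather than the patent's formula: if utilization scales roughly linearly with thread count, the thread count needed for a target utilization follows by proportion, bounded to avoid pathological answers.

    def threads_for_target(current_util, current_threads, n_cores, target_util):
        """current_util: first value; current_threads: second value;
        n_cores: third value; target_util: fourth value (from the end user).
        Returns the fifth value: threads needed for the target utilization."""
        if current_util <= 0:
            return current_threads
        needed = round(current_threads * target_util / current_util)
        # Never drop below one thread; cap growth at a small multiple of the
        # core count to avoid gross oversubscription (both bounds assumed).
        return max(1, min(needed, 4 * n_cores))

    # e.g. 30% utilization with 6 threads on 8 cores, targeting 90%:
    print(threads_for_target(0.30, 6, 8, 0.90))   # -> 18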
20130024871Thread Management in Parallel Processes - A method and system are provided for thread management in parallel processes in a multi-core or multi-node system. The method includes receiving monitored hardware metrics information from the multiple cores or multiple nodes on which processes are executed, receiving monitored process and thread information; and globally monitoring the processing across the multiple cores or multiple nodes. The method further includes analyzing the monitored information to minimize imbalances between the multiple cores and/or to improve core or node exploitation and dynamically adjusting the number of threads per process based on the analysis.01-24-2013
20130024872Scheduling a Parallel Job in a System of Virtual Containers - Methods and apparatus are provided for scheduling parallel jobs in a system of virtual containers. At least one parallel job is assigned to a plurality of containers competing for a total capacity of a larger container, wherein the at least one parallel job comprises a plurality of tasks. The assignment method comprises determining a current utilization and a potential free capacity for each of the plurality of competing containers; and assigning the tasks to one of the plurality of containers based on the potential free capacities and at least one predefined scheduling policy. The predefined scheduling policy may comprise, for example, one or more of load balancing, server consolidation, maximizing the current utilizations, minimizing a response time of the parallel job and satisfying quality of service requirements. The load balancing can be achieved, for example, by assigning a task to a container having a highest potential free capacity.01-24-2013
20080250421Data Processing System And Method - A method of forming a cluster from a plurality of potential clusters that share a common node, the method comprising determining a criticality factor of each potential cluster by combining criticality factors of the nodes of each potential cluster; and forming the cluster from the potential cluster with the highest criticality factor.10-09-2008
20080235705Methods and Apparatus for Global Systems Management - Techniques for globally managing systems are provided. One or more measurable effects of at least one hypothetical action to achieve a management goal are determined at a first system manager. The one or more measurable effects are sent from the first system manager to a second system manager. At the second system manager, one or more procedural actions to achieve the management goal are determined in response to the one or more received measurable effects. The one or more procedural actions are executed to achieve the management goal.09-25-2008
20080235704Plug-and-play load balancer architecture for multiprocessor systems - One embodiment relates to a multiprocessor system with a modular load balancer. The multiprocessor system includes a plurality of processors, a memory system, and a communication system interconnecting the processors and the memory system. A kernel comprising instructions that are executable by the processors is provided in the memory system, and a scheduler is provided in the kernel. Load balancing routines are provided in the scheduler, the load balancing routines including interfaces for a plurality of balancer operations. At least one balancer plug-in module is provided outside the scheduler, the balancer plug-in module including the plurality of balancer operations. Other embodiments, aspects, and features are also disclosed.09-25-2008
20110247006Apparatus and method of dynamically distributing load in multiple cores - Provided are an apparatus and method of dynamically distributing the load occurring in multiple cores, which may determine a corresponding core to perform each of the functions constituting an application program, thereby enhancing the overall processing rate.10-06-2011
20110247005Methods and Apparatus for Resource Capacity Evaluation in a System of Virtual Containers - Methods and apparatus are provided for evaluating potential resource capacity in a system where there is elasticity and competition between a plurality of containers. A dynamic potential capacity is determined for at least one container in a plurality of containers competing for a total capacity of a larger container. A current utilization by each of the plurality of competing containers is obtained, and an equilibrium capacity is determined for each of the competing containers. The equilibrium capacity indicates a capacity that the corresponding container is entitled to. The dynamic potential capacity is determined based on the total capacity, a comparison of one or more of the current utilizations to one or more of the corresponding equilibrium capacities and a relative resource weight of each of the plurality of competing containers. The dynamic potential capacity is optionally recalculated when the set of plurality of containers is changed or after the assignment of each work element.10-06-2011
20080222647Method and system for load balancing of computing resources - A load balancing method incorporates temporarily inactive machines as part of the resources capable of executing tasks during periods of heavy processing requests, to alleviate some of the processing load on other computing resources. This method determines which computing resources are available and prioritizes these resources for access by the load balancing process. A snapshot of the resource configuration is made and secured, along with all data on the system, such that no contamination occurs between data resident on that machine and any data placed on that machine as part of the load balancing activities. After a predetermined period of time or a predetermined event, the availability of the temporary resources for load balancing activities ends. At this point, the original configuration and data are restored to the computing resource such that no trace of the resource's use in load balancing activities is detectable by the user.09-11-2008
20080222646PREEMPTIVE NEURAL NETWORK DATABASE LOAD BALANCER - A preemptive neural network database load balancer configured to observe, learn and predict the resource utilization of given incoming tasks. Allows for efficient execution and use of system resources. Preemptively assigns incoming tasks to particular servers based on predicted CPU, memory, disk and network utilization for the incoming tasks. Directs write-based tasks to a master server and utilizes slave servers to handle read-based tasks. Read-based tasks are analyzed with a neural network to learn and predict the amount of resources that tasks will utilize. Tasks are assigned to a database server based on the predicted utilization of the incoming task and the predicted and observed resource utilization on each database server. The predicted resource utilization may be updated over time as the number of records, lookups, images, PDFs, fields, BLOBs and the width of fields in the database change over time.09-11-2008
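The routing policy above separates cleanly from the predictor. The sketch below shows the policy only; the trivial predict_cost stub stands in for the patent's neural network, and its cost model, along with all other names, is an assumption of this example.

    def predict_cost(query: str) -> dict:
        """Stand-in for the neural-network predictor: returns predicted
        resource use (cpu, memory, disk, network) for a query."""
        weight = 1.0 + 0.1 * len(query)
        return {"cpu": weight, "memory": weight, "disk": weight, "network": weight}

    def route(query: str, master, slaves):
        """master/slaves: dicts with a 'load' dict of current resource use.
        Writes go to the master; reads go to the slave whose predicted
        post-assignment utilization is lowest."""
        if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return master
        cost = predict_cost(query)
        def projected(s):                      # predicted load after assignment
            return sum(s["load"][r] + cost[r] for r in cost)
        return min(slaves, key=projected)

    slaves = [{"name": "s1", "load": {"cpu": 5, "memory": 5, "disk": 5, "network": 5}},
              {"name": "s2", "load": {"cpu": 1, "memory": 1, "disk": 1, "network": 1}}]
    print(route("SELECT * FROM t", {"name": "master"}, slaves)["name"])  # -> "s2"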
20080216088Coordinating service performance and application placement management - Apparatus, systems and methods for a service and/or business for coordinating the tasks of performance management and application placement management in a dynamic fashion. An example process is dynamic in the face of fluctuations in the request load to the distributed computer system and the periodic adjustments to the placement of applications onto servers in said distributed computer system. There are two opposite functional flows in said process: a demand estimation function and a capacity adjustment function. The coordination system involves two subsystems, a demand estimator and a capacity adjuster, along with appropriate interfaces to the performance manager and the application placement manager. As a result, the application placement process reacts more quickly to demand fluctuations, performance guarantees are better met by rearranging the resources allocated to the various classes of service, and the management system works in an unsupervised mode, thus reducing manual administration costs and human errors.09-04-2008
20130091509OFF-LOADING OF PROCESSING FROM A PROCESSOR BLADE TO STORAGE BLADES - A processor blade determines whether a selected processing task is to be off-loaded to a storage blade for processing. The selected processing task is off-loaded to the storage blade via a planar bus communication path, in response to determining that the selected processing task is to be off-loaded to the storage blade. The off-loaded selected processing task is processed in the storage blade. The storage blade communicates the results of the processing of the off-loaded selected processing task to the processor blade.04-11-2013
20130104143RUN-TIME ALLOCATION OF FUNCTIONS TO A HARDWARE ACCELERATOR - An accelerator work allocation mechanism determines at run-time which functions to allocate to a hardware accelerator based on a defined accelerator policy, and based on an analysis performed at run-time. The analysis includes reading the accelerator policy, and determining whether a particular function satisfies the accelerator policy. If so, the function is allocated to the hardware accelerator. If not, the function is allocated to the processor.04-25-2013
20130125133System and Method for Load Balancing of Fully Strict Thread-Level Parallel Programs - A system and method for executing fully strict thread-level parallel programs and performing load balancing between concurrently executing threads may allow threads to efficiently distribute work among themselves. A parent function of a thread may spawn children on one or more processors, pushing a stack frame onto a deque, then may sync by determining whether its children remain in the deque. If not, and/or if not all stolen children have returned, the thread may abandon its stack as an orphan, acquire an empty stack, and begin stealing work from other threads. Stealing work may include identifying an element in a deque of another thread, removing the element from the deque, and executing the associated child function. If this is the last child of a parent on the other thread's orphan stack, the thread may release its stack, adopt the orphan stack of the other thread, and continue its execution.05-16-2013
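The deque discipline above is the heart of this style of work stealing: the owner pushes and pops at one end, thieves steal from the other. The sketch below illustrates that discipline in Python; the single-lock deque and the worker loop are simplifying assumptions of this example, and the orphan-stack adoption described in the abstract is omitted for brevity.

    import collections, threading

    class WorkDeque:
        def __init__(self):
            self._dq = collections.deque()
            self._lock = threading.Lock()

        def push_bottom(self, frame):          # owner spawns a child
            with self._lock:
                self._dq.append(frame)

        def pop_bottom(self):                  # owner syncs: take own child back
            with self._lock:
                return self._dq.pop() if self._dq else None

        def steal_top(self):                   # thief takes the oldest frame
            with self._lock:
                return self._dq.popleft() if self._dq else None

    def worker(my_dq, all_dqs):
        while True:
            frame = my_dq.pop_bottom()
            if frame is None:                  # local deque empty: go stealing
                victims = [d for d in all_dqs if d is not my_dq]
                frame = next((f for f in map(WorkDeque.steal_top, victims) if f),
                             None)
            if frame is None:
                return                         # no work anywhere: quit
            frame()                            # execute the child function

    dqs = [WorkDeque(), WorkDeque()]
    dqs[0].push_bottom(lambda: print("child ran"))
    worker(dqs[1], dqs)                        # thread 1 steals thread 0's child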
20110276982Load Balancer and Load Balancing System - In a system including a load balancer to select a virtual server to which a request is to be transferred, the load balancer includes a function to monitor resource use states of physical and virtual servers and a function to predict packet loss occurring in a virtual switch. The number of requests processible by each virtual server is calculated based on the resource amount available to the virtual server and the packet loss rate of the virtual switch, to thereby select a virtual server capable of processing a larger number of requests.11-10-2011
20090064165Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors.03-05-2009
20080201720System and Method for Load-Balancing in a Resource Infrastructure Running Application Programs - The idea of the present invention is to provide a challenge-response mechanism to acquire work scope split range information from the application's Work Scope Split component of the over-utilized resource. By using the work scope split range information, the provisioning system is able to add a new resource, install a new application on that new resource, configure the applications of the new and the over-utilized resources, and reconfigure the load-balancer in accordance with the work scope split range information. The present invention adds scalability to complex and stateful application programs and allows dynamic provisioning of resources for these application programs.08-21-2008
20080201719System and method for balancing information loads - A method and system is provided for routing data in a system. The method includes determining an initial fixed distribution pattern, determining a queue parameter based on at least a current amount of system use and a maximum potential system use, determining a time parameter based on the time that a message in the application has been waiting for its processing, determining a load parameter based on at least the time parameter and the queue parameter, and modifying the distribution pattern based on at least the load parameter.08-21-2008
20080201718Method, an apparatus and a system for managing a distributed compression system - Some embodiments of the invention relate to a method of managing a distributed compression system comprised of a plurality of compression modules. According to some embodiments of the invention, a method of managing a distributed compression system comprised of a plurality of compression modules may include implementing a load balancing distribution scheme in respect of a plurality of currently active compression modules, providing a reference key for each of a plurality of data units which are intended to be compressed, the reference key of each data unit being based upon at least a portion of the content of the data unit, and applying the load balancing distribution scheme in respect of the reference key of each of the plurality of data units so as to designate for each data unit a compression module from amongst the plurality of compression modules to which the data unit is to be assigned, thereby giving rise to a substantially balanced distribution of the data units across the plurality of currently active compression modules.08-21-2008
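A content-derived reference key gives a useful property the abstract relies on: identical content always maps to the same module, while a good hash spreads distinct units roughly evenly. The sketch below illustrates one way to realize such a scheme; the SHA-1-based key and the modulo distribution are assumptions of this example, not the patent's specific scheme.

    import hashlib

    def reference_key(data: bytes) -> int:
        """Derive a reference key from the data unit's content."""
        return int.from_bytes(hashlib.sha1(data).digest()[:8], "big")

    def assign(data_units, active_modules):
        """Map each data unit to a module index by its content-based key."""
        placement = {}
        for unit in data_units:
            placement[unit] = reference_key(unit) % len(active_modules)
        return placement

    units = [b"block-a", b"block-b", b"block-a"]     # duplicate content
    print(assign(units, active_modules=[0, 1, 2]))   # duplicates share a module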
20080201717OPTIMIZATION AND/OR SCHEDULING FRAMEWORK FOR A PERIODIC DATA COMMUNICATION SYSTEM HAVING MULTIPLE BUSES AND HARDWARE APPLICATION MODULES - Periodic communication of data packets between modules over a bus, in time frames having a plurality of frame rates including a base frame rate, is scheduled by determining a first load schedule for data packets of the base frame and half base frame rates using constraint logic programming techniques, by determining a second load schedule for data packets of other frame rates using mixed integer linear programming techniques, and by scheduling produce and consume loads for each of the modules based on the first and second load schedules.08-21-2008
20130139175PROCESS MAPPING PARALLEL COMPUTING - A method of mapping processes to processors in a parallel computing environment where a parallel application is to be run on a cluster of nodes wherein at least one of the nodes has multiple processors sharing a common memory, the method comprising using compiler based communication analysis to map Message Passing Interface processes to processors on the nodes, whereby at least some more heavily communicating processes are mapped to processors within nodes. Other methods, apparatus, and computer readable media are also provided.05-30-2013
20100299675SYSTEM AND METHOD FOR ESTIMATING COMBINED WORKLOADS OF SYSTEMS WITH UNCORRELATED AND NON-DETERMINISTIC WORKLOAD PATTERNS - It has been found that a more reasonable estimation of combined workloads can be achieved by enabling specification of the confidence level at which to estimate the workload values. A method, computer readable medium and system are provided for estimating combined system workloads. The method comprises obtaining a set of quantile-based workload data pertaining to a plurality of systems and normalizing the quantile-based workload data to compensate for relative measures between data pertaining to different ones of the plurality of systems. A confidence interval may then be determined and used to determine a contention probability specifying a degree of predicted workload contention between the plurality of systems according to at least one probabilistic model. The contention probability may then be used to combine workloads for the plurality of systems, and a result indicative of one or more combined workloads is then provided.11-25-2010
20120284733Scheduling for Parallel Processing of Regionally-Constrained Placement Problem - Scheduling of parallel processing for regionally-constrained object placement selects between different balancing schemes. For a small number of movebounds, computations are assigned by balancing the placeable objects. For a small number of objects per movebound, computations are assigned by balancing the movebounds. If there are large numbers of movebounds and objects per movebound, both objects and movebounds are balanced amongst the processors. For object balancing, movebounds are assigned to a processor until an amortized number of objects for the processor exceeds a first limit above an ideal number, or the next movebound would raise the amortized number of objects above a second, greater limit. For object and movebound balancing, movebounds are sorted into descending order, then assigned in the descending order to host processors in successive rounds while reversing the processor order after each round. The invention provides a schedule in polynomial-time while retaining high quality of results.11-08-2012
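The third balancing scheme above, assigning sorted movebounds to processors in rounds while reversing the processor order after each round, is a "snake" deal. The sketch below illustrates it; the data layout is an assumption of this example.

    def snake_assign(movebounds, n_procs):
        """movebounds: list of (name, n_objects) pairs.
        Deal movebounds, sorted by object count descending, to processors in
        successive rounds, reversing the processor order after each round so
        that both movebound counts and total object counts stay balanced."""
        buckets = [[] for _ in range(n_procs)]
        order = list(range(n_procs))
        ordered = sorted(movebounds, key=lambda m: m[1], reverse=True)
        for i in range(0, len(ordered), n_procs):
            for proc, mb in zip(order, ordered[i:i + n_procs]):
                buckets[proc].append(mb)
            order.reverse()                    # reverse after each round
        return buckets

    mbs = [("m%d" % i, n) for i, n in enumerate([90, 70, 60, 40, 30, 10])]
    for p, b in enumerate(snake_assign(mbs, 2)):
        print(p, b, "total objects:", sum(n for _, n in b))
    # processor totals come out 160 vs 140, far closer than a plain deal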
20130160024Dynamic Load Balancing for Complex Event Processing - Disclosed herein are methods, systems, and computer readable storage media for performing load balancing actions in a complex event processing system. Static statistics of a complex event processing node, dynamic statistics of the complex event processing node, and project statistics for projects executing on the complex event processing node are aggregated. A determination is made as to whether the aggregated statistics satisfy a condition. A load balancing action may be performed, based on the determination.06-20-2013
20130185731DYNAMIC DISTRIBUTION OF NODES ON A MULTI-NODE COMPUTER SYSTEM - I/O nodes are dynamically distributed on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job and then dynamically adjusts the I/O node distribution based on the I/O performance of the executing job.07-18-2013
20130191843SYSTEM AND METHOD FOR JOB SCHEDULING OPTIMIZATION - A system and computer-implemented method for generating an optimized allocation of a plurality of tasks across a plurality of processors or slots for processing or execution in a distributed computing environment. In a cloud computing environment implementing a MapReduce framework, the system and computer-implemented method may be used to schedule map or reduce tasks to processors or slots on the network such that the tasks are matched to processors or slots in a data-locality-aware fashion, wherein the suitability of a node and the characteristics of the task are accounted for using a minimum cost flow function.07-25-2013
20130191842PROVISIONING TENANTS TO MULTI-TENANT CAPABLE SERVICES - The present invention extends to methods, systems, and computer program products for implementing a tenant provisioning system in a multi-tenancy architecture using a single provisioning master in the architecture, and a data center provisioner in each data center in the architecture. The provisioning master receives user requests to provision a tenant of a service and routes such requests to an appropriate data center provisioner. Each service in the multi-tenancy architecture implements a common interface by which the corresponding data center provisioner can obtain a common indication of load from each different service deployed in the data center thus facilitating the selection of a scale unit on which a tenant is provisioned. The common interface also enables a service to dynamically register (i.e. without redeploying the tenant provisioning system) with the provisioning master as a multi-tenancy service by registering an endpoint address with the provisioning master.07-25-2013
20120030686THERMAL LOAD MANAGEMENT IN A PARTITIONED VIRTUAL COMPUTER SYSTEM ENVIRONMENT THROUGH MONITORING OF AMBIENT TEMPERATURES OF THE ENVIRONMENT SURROUNDING THE SYSTEMS - Thermal load management in a virtualized environment wherein server-controlled physical processor systems are partitioned into a plurality of logical partitions (LPARs), comprising first predetermining a set of ambient temperature levels for the surrounding outside environment of a first server-controlled system having a plurality of LPARs. The ambient temperature levels are then sensed and, if the predetermined set or pattern of temperature levels is exceeded, one or more of the plurality of LPARs are transferred from said first server-controlled system to a second server-controlled LPAR system over a connecting network.02-02-2012
20130198759CONTROLLING WORK DISTRIBUTION FOR PROCESSING TASKS - A technique for controlling the distribution of compute task processing in a multi-threaded system encodes each processing task as task metadata (TMD) stored in memory. The TMD includes work distribution parameters specifying how the processing task should be distributed for processing. Scheduling circuitry selects a task for execution when entries of a work queue for the task have been written. The work distribution parameters may define a number of work queue entries needed before a cooperative thread array (CTA) may be launched to process the work queue entries according to the compute task. The work distribution parameters may define a number of CTAs that are launched to process the same work queue entries. Finally, the work distribution parameters may define a step size that is used to update pointers to the work queue entries.08-01-2013
20120036515Mechanism for System-Wide Target Host Optimization in Load Balancing Virtualization Systems - A mechanism for system-wide target host optimization in load balancing virtualization systems is disclosed. A method of the invention includes detecting a condition triggering a load balancing operation, identifying a plurality of candidate target host machines to receive one or more operating virtual machines (VMs) to be migrated, determining a load per resource on each identified candidate target host machine, and scheduling all operating VMs among all of the identified candidate target host machines in view of an expected load per resource on each candidate target host.02-09-2012
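The "schedule all operating VMs among all candidates" step can be illustrated with a simple greedy heuristic; the patent does not prescribe this heuristic, and the single scalar demand below stands in for its per-resource loads.

```python
# Sketch under assumptions: one scalar load per VM, first-fit-decreasing greedy.
def balance(vms, hosts):
    """Assign every operating VM to the candidate host with the lowest expected load."""
    load = {h: 0.0 for h in hosts}                     # expected load per host
    plan = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)               # least-loaded candidate
        plan[vm] = target
        load[target] += demand
    return plan, load

vms = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3, "vm4": 0.2}
print(balance(vms, ["hostA", "hostB"]))
```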
20120060172Dynamically Tuning A Server Multiprogramming Level - Methods, apparatus and computer program products for allocating a number of workers to a worker pool in a multiprogrammable computer are provided, to thereby tune the server multiprogramming level. The method includes the steps of monitoring throughput in relation to a workload concurrency level and dynamically tuning the multiprogramming level based upon the monitoring. The dynamic tuning includes adjusting with a first adjustment for a first interval and with a second adjustment for a second interval, wherein the second adjustment utilizes data stored from the first adjustment.03-08-2012
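An illustrative hill-climbing sketch of two-interval MPL tuning; the coarse-then-fine scheme, where the second step reuses measurements stored from the first, is an assumption about how the adjustments could compose.

```python
# Hypothetical two-interval tuner; step sizes and the toy workload are invented.
def tune_mpl(mpl, measure_throughput, coarse=4, fine=1):
    base = measure_throughput(mpl)
    # First interval: coarse adjustment; its result is kept for the second.
    trial = measure_throughput(mpl + coarse)
    direction = 1 if trial > base else -1
    mpl = max(1, mpl + direction * coarse)
    # Second interval: finer adjustment using the stored first-interval data.
    best = max(base, trial)
    if measure_throughput(mpl + direction * fine) > best:
        mpl = max(1, mpl + direction * fine)
    return mpl

# Toy workload whose throughput peaks near a concurrency level of 10.
throughput = lambda n: n * max(0.0, 1.0 - abs(n - 10) / 20)
print(tune_mpl(8, throughput))
```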
20120066689BLADE SERVER AND SERVICE SCHEDULING METHOD OF THE BLADE SERVER - The present invention discloses a blade server and a service scheduling method of the blade server. The method includes the following steps. According to the requirement for processing capability of a service, a blade is selected for a logical partition storing the service data (A…).03-15-2012
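The abstract is truncated in the source, so only its stated first step can be illustrated: choosing a blade that meets a service's processing-capability requirement. All names and the best-fit rule below are hypothetical.

```python
# Hypothetical best-fit blade selection; units and fields are invented.
def select_blade(blades, required_mips):
    """Pick the blade with the least spare capacity that still fits the service."""
    fitting = [b for b in blades if b["free_mips"] >= required_mips]
    return min(fitting, key=lambda b: b["free_mips"]) if fitting else None

blades = [{"id": 1, "free_mips": 500}, {"id": 2, "free_mips": 1200}]
print(select_blade(blades, required_mips=800))   # {'id': 2, 'free_mips': 1200}
```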
20120066688PROCESSOR THREAD LOAD BALANCING MANAGER - An operating system of an information handling system (IHS) determines a process tree of data sharing threads in an application that the IHS executes. A load balancing manager assigns a home processor to each thread of the executing application process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the application running on the home processor of the processor group. If the ready or run queues of a prospective poaching processor are empty, the load balancing manager may move, or poach, a thread or threads from the home processor's ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the information handling system (IHS).03-15-2012
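The poaching condition reads like work stealing, sketched below; the queue layout and the one-thread-per-idle-processor rule are simplifications, not the patented mechanism.

```python
# Work-stealing-style sketch of thread poaching; data layout is illustrative.
from collections import deque

def poach(processors, home):
    """Move threads from the home processor's ready queue to idle poachers."""
    for proc, ready_q in processors.items():
        if proc != home and not ready_q and processors[home]:
            ready_q.append(processors[home].popleft())   # poach one thread

procs = {"home": deque(["t1", "t2", "t3"]), "cpu1": deque(), "cpu2": deque(["x"])}
poach(procs, "home")
print(procs)   # cpu1 poached t1; cpu2 had work queued, so it took nothing
```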
20120304193Scheduling Applications For Execution On A Plurality Of Compute Nodes Of A Parallel Computer To Manage Temperature Of The Nodes During Execution - Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions.11-29-2012
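One way to picture "physically discontiguous partitions of physically adjacent nodes": group maximal runs of cool nodes and skip hot ones, so each partition is internally contiguous but the partitions are separated. The 1-D line below is a strong simplification; real machines of this kind use torus topologies.

```python
# Sketch under assumptions: nodes on a 1-D line, a fixed temperature limit.
def cool_partitions(node_temps, limit_c):
    """Split nodes into adjacent groups, skipping hot nodes so the resulting
    partitions are discontiguous from one another."""
    partitions, run = [], []
    for node, temp in enumerate(node_temps):
        if temp <= limit_c:
            run.append(node)
        elif run:
            partitions.append(run)
            run = []
    if run:
        partitions.append(run)
    return partitions

temps = [40, 42, 65, 41, 43, 44, 70, 39]
print(cool_partitions(temps, limit_c=50))   # [[0, 1], [3, 4, 5], [7]]
```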
20120304191SYSTEMS AND METHODS FOR CLOUD DEPLOYMENT ENGINE FOR SELECTIVE WORKLOAD MIGRATION OR FEDERATION BASED ON WORKLOAD CONDITIONS - Embodiments relate to systems and methods for a cloud deployment engine for selective workload migration or federation based on workload conditions. A set of aggregate usage history data can record consumption of processor, software, or other resources subscribed to by one or more users in a cloud or clouds. An entitlement engine can analyze the usage history data to identify a subscription margin and other data reflecting short-term consumption trends. An associated deployment engine can analyze the short-term consumption trends, and generate a decision to either deploy any over-subscribed resources to a set of federated backup clouds, or to one or more new host clouds. In aspects, the decision to augment the capacity of the host cloud with either a cloud federation or a complete host cloud replacement can be based on a set of selection criteria, including the margin by which the resources are over-subscribed and/or whether the over-subscription is static, increasing, or accelerating, among others.11-29-2012
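A hedged sketch of the federate-or-migrate decision; the 20% threshold and the acceleration test are invented stand-ins for the patent's selection criteria.

```python
# Hypothetical decision rule; thresholds and trend test are assumptions.
def deploy_decision(margin, history):
    """Choose federation for small, steady overflows; a new host cloud otherwise.

    margin  -- fraction by which subscribed resources are exceeded
    history -- recent margin samples, oldest first
    """
    deltas = [b - a for a, b in zip(history, history[1:])]
    accelerating = len(deltas) >= 2 and deltas[-1] > deltas[0] > 0
    if margin < 0.2 and not accelerating:
        return "federate overflow to backup clouds"
    return "migrate workload to a new host cloud"

print(deploy_decision(0.15, [0.05, 0.10, 0.15]))   # steady growth -> federate
print(deploy_decision(0.35, [0.05, 0.15, 0.35]))   # large, accelerating -> migrate
```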
20120096473MEMORY MAXIMIZATION IN A HIGH INPUT/OUTPUT VIRTUAL MACHINE ENVIRONMENT - A computer-implemented method is provided, including monitoring the utilization of resources available within a compute node, wherein the resources include an input/output capacity, a processor capacity, and a memory capacity. The method further comprises allocating virtual machines to the compute node to maximize use of a first one of the resources; and then allocating an additional virtual machine to the compute node to increase the utilization of the resources other than the first one of the resources without over-allocating the first one of the resources. In a web server, the input/output capacity may be the resource to be maximized. However, unused memory capacity and/or processor capacity of the compute node may be used more effectively by identifying an additional virtual machine that is memory intensive or processor intensive to be allocated or migrated to the compute node. The additional virtual machine(s) may be identified in new workload requests or from analysis of virtual machines running on other compute nodes accessible over the network.04-19-2012
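The complementary-packing idea can be sketched as below: with I/O nearly maxed out, pick a pending VM that fits the remaining memory and CPU headroom without pushing I/O over capacity. The capacities and candidate list are illustrative.

```python
# Sketch of complementary VM packing; fields and numbers are invented.
def pick_complementary_vm(node, candidates):
    """Return a pending VM that fits the node's remaining headroom."""
    for vm in sorted(candidates, key=lambda v: v["io"]):      # prefer low-I/O VMs
        if (vm["io"] <= node["io_free"] and vm["mem"] <= node["mem_free"]
                and vm["cpu"] <= node["cpu_free"]):
            return vm
    return None

node = {"io_free": 0.05, "mem_free": 0.60, "cpu_free": 0.40}   # I/O nearly maxed
candidates = [{"name": "web", "io": 0.30, "mem": 0.10, "cpu": 0.10},
              {"name": "cache", "io": 0.02, "mem": 0.50, "cpu": 0.10}]
print(pick_complementary_vm(node, candidates)["name"])          # cache
```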

Patent applications in class Load balancing