Entries |
Document | Title | Date |
20080201717 | OPTIMIZATION AND/OR SCHEDULING FRAMEWORK FOR A PERIODIC DATA COMMUNICATION SYSTEM HAVING MULTIPLE BUSES AND HARDWARE APPLICATION MODULES - Periodic communication of data packets between modules in time frames having a plurality of frame rates including a base frame rate through a bus is scheduled by determining a first load schedule for data packets of base frame and half base frame rates using constraint logic programming techniques, by determining a second load schedule for data packets of other frame rates using mixed integer linear programming techniques, and by scheduling produce and consume loads for each of the modules based on the first and second load schedules. | 08-21-2008 |
20080201718 | Method, an apparatus and a system for managing a distributed compression system - Some embodiments of the invention relate to a method of managing a distributed compression system comprised of a plurality of compression modules. According to some embodiments of the invention, a method of managing a distributed compression system comprised of a plurality of compression modules may include implementing a load balancing distribution scheme in respect of a plurality of currently active compression modules, providing a reference key for each of a plurality of data units which are intended for being compressed, the reference key of each data unit being based upon at least a portion of the content of the data unit, and applying the load balancing distribution scheme in respect of the reference key of each of the plurality of data units so as to designate for each data unit a compression module from amongst the plurality of compression modules to which the data unit is to be assigned, thereby giving rise to a substantially balanced distribution of the data units across the plurality of currently active compression modules. | 08-21-2008 |
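The content-based distribution scheme described above can be sketched as follows. The hash choice (SHA-256) and the modulo mapping are assumptions for illustration; the abstract only requires the reference key to be derived from at least a portion of the data unit's content.

```python
import hashlib

def reference_key(data: bytes) -> int:
    # Content-based reference key: here, the first 8 bytes of a SHA-256
    # digest of the data unit (hash choice is an assumption).
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def assign_module(data: bytes, active_modules: list) -> str:
    # Apply the load balancing distribution scheme to the reference key:
    # a simple modulo over the currently active compression modules.
    return active_modules[reference_key(data) % len(active_modules)]
```

Because the key depends only on content, identical data units always map to the same module, while a uniform hash spreads distinct units roughly evenly across the active modules.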
20080201719 | System and method for balancing information loads - A method and system is provided for routing data in a system. The method includes determining an initial fixed distribution pattern, determining a queue parameter based on at least a current amount of system use and a maximum potential system use, determining a time parameter based on the time that a message in the application has been waiting for its processing, determining a load parameter based on at least the time parameter and the queue parameter, and modifying the distribution pattern based on at least the load parameter. | 08-21-2008 |
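The parameter chain in the abstract above (queue parameter from current versus maximum system use, time parameter from message wait time, load parameter combining both) can be sketched as below. The equal weighting and the 60-second wait normalization are assumptions, not taken from the patent.

```python
def queue_parameter(current_use: float, max_use: float) -> float:
    # Queue parameter: current amount of system use relative to the
    # maximum potential system use.
    return current_use / max_use

def time_parameter(wait_s: float, max_wait_s: float = 60.0) -> float:
    # Time parameter: how long a message has been waiting for processing,
    # capped at an assumed normalization window.
    return min(wait_s / max_wait_s, 1.0)

def load_parameter(wait_s: float, current_use: float, max_use: float) -> float:
    # Load parameter combining the time and queue parameters (equal
    # weighting assumed); used to modify the distribution pattern.
    return 0.5 * time_parameter(wait_s) + 0.5 * queue_parameter(current_use, max_use)
```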
20080201720 | System and Method for Load-Balancing in a Resource Infrastructure Running Application Programs - The idea of the present invention is to provide a challenge-response mechanism to acquire work scope split range information from the application's Work Scope Split component of the over-utilized resource. By using the work scope split range information, the provisioning system is able to add a new resource, install a new application for that new resource, configure the new and the over-utilized resource's application, and reconfigure the load-balancer in accordance with the work scope split range information. The present invention adds scalability to complex and stateful application programs and allows dynamic provisioning of resources for these application programs. | 08-21-2008 |
20080209434 | Distribution of data and task instances in grid environments - A partition analyzer may be configured to designate a data partition within a database of a grid network, and to perform a mapping of the data partition to a task of an application, the application to be at least partially executed within the grid network. A provisioning manager may be configured to determine a task instance of the task, and to determine the data partition, based on the mapping, where the data partition may be stored at an initial node of the grid network. A processing node of the grid network having processing resources required to execute the task instance and a data node of the grid network having memory resources required to store the data partition may be determined. The task instance may be deployed to the processing node, and the data partition may be re-located from the initial node to the data node, based on the comparison. | 08-28-2008 |
20080216086 | METHOD OF ANALYZING PERFORMANCE IN A STORAGE SYSTEM - A method of balancing a load in a computer system having at least one storage system, and a management computer, each of the storage systems having physical disks and a disk controller, the load balancing method including the steps of: setting at least one of the physical disks as a parity group; providing a storage area of the set parity group as at least one logical volume to the host computer; calculating a logical volume migration time when a utilization ratio of the parity group becomes equal to or larger than a threshold; and choosing, as a data migration source volume, one of the logical volumes included in the parity group that has the utilization ratio equal to or larger than the threshold, by referring to the calculated logical volume migration time, the data migration source volume being the logical volume from which data migrates. | 09-04-2008 |
20080216087 | AFFINITY DISPATCHING LOAD BALANCER WITH PRECISE CPU CONSUMPTION DATA - A system for distributing a plurality of tasks over a plurality of nodes in a network includes: a plurality of processors for executing tasks; a plurality of nodes comprising processors; a task dispatcher; and a load balancer. The task dispatcher receives as input the plurality of tasks; calculates a task processor consumption value for the tasks; calculates a node processor consumption value for the nodes; calculates a target node processor consumption value for the nodes; and then calculates a load index value as a difference between the calculated node processor consumption for a node i and the target node processor consumption value for the node i. The balancer distributes the tasks among the nodes to balance the processor workload among the nodes according to the calculated load index value of each node, such that the calculated load index value of each node is substantially zero. | 09-04-2008 |
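The load-index idea above — drive each node's (consumption − target) difference toward zero — can be sketched with a greedy dispatcher. The greedy largest-first ordering is an assumption; the abstract specifies only the load index definition and the balancing goal.

```python
def dispatch(task_consumptions, target_consumptions):
    # Greedy placement: each task goes to the node whose load index
    # (current node consumption minus target consumption) is currently
    # lowest, driving every node's load index toward zero.
    load = [0.0] * len(target_consumptions)
    placement = []
    for t in sorted(task_consumptions, reverse=True):
        i = min(range(len(load)), key=lambda k: load[k] - target_consumptions[k])
        load[i] += t
        placement.append((t, i))
    load_index = [load[k] - target_consumptions[k] for k in range(len(load))]
    return placement, load_index
```

For example, tasks with consumption values 4, 3, 2, 1 spread over two nodes with targets of 5 each end with both load indices at zero.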
20080216088 | Coordinating service performance and application placement management - Apparatus, systems and methods for a service and/or business that coordinates tasks of performance management and application placement management in a dynamic fashion. An example process is dynamic in the face of fluctuations in the request load to the distributed computer system and the periodic adjustments to the placement of applications onto servers in said distributed computer system. There are two opposite functional flows in said process: a demand estimation function and a capacity adjustment function. The coordination system involves two subsystems, a demand estimator and a capacity adjuster, along with appropriate interfaces to the performance manager and the application placement manager. As a result, the application placement process reacts more quickly to demand fluctuations, performance guarantees are better met by rearranging the resources allocated to the various classes of service, and the management system works in an unsupervised mode, thus reducing manual administration costs and human errors. | 09-04-2008 |
20080222646 | PREEMPTIVE NEURAL NETWORK DATABASE LOAD BALANCER - A preemptive neural network database load balancer configured to observe, learn and predict the resources that given incoming tasks utilize. Allows for efficient execution and use of system resources. Preemptively assigns incoming tasks to particular servers based on predicted CPU, memory, disk and network utilization for the incoming tasks. Directs write-based tasks to a master server and utilizes slave servers to handle read-based tasks. Read-based tasks are analyzed with a neural network to learn and predict the amount of resources that tasks will utilize. Tasks are assigned to a database server based on the predicted utilization of the incoming task and the predicted and observed resource utilization on each database server. The predicted resource utilization may be updated over time as the number of records, lookups, images, PDFs, fields, BLOBs and width of fields in the database change over time. | 09-11-2008 |
20080222647 | Method and system for load balancing of computing resources - A load balancing method incorporates temporarily inactive machines as part of the resources capable of executing tasks during heavy process request periods to alleviate some of the processing load on other computing resources. This method determines which computing resources are available and prioritizes these resources for access by the load balancing process. A snapshot of the resource configuration is made and secured along with all data on the system, such that no contamination occurs between data resident on that machine and any data placed on that machine as part of the load balancing activities. After a predetermined period of time or a predetermined event, the availability of the temporary resources for load balancing activities ends. At this point, the original configuration and data are restored to the computing resource such that no trace of the resource's use in load balancing activities is detectable by the user. | 09-11-2008 |
20080235704 | Plug-and-play load balancer architecture for multiprocessor systems - One embodiment relates to a multiprocessor system with a modular load balancer. The multiprocessor system includes a plurality of processors, a memory system, and a communication system interconnecting the processors and the memory system. A kernel comprising instructions that are executable by the processors is provided in the memory system, and a scheduler is provided in the kernel. Load balancing routines are provided in the scheduler, the load balancing routines including interfaces for a plurality of balancer operations. At least one balancer plug-in module is provided outside the scheduler, the balancer plug-in module including the plurality of balancer operations. Other embodiments, aspects, and features are also disclosed. | 09-25-2008 |
20080235705 | Methods and Apparatus for Global Systems Management - Techniques for globally managing systems are provided. One or more measurable effects of at least one hypothetical action to achieve a management goal are determined at a first system manager. The one or more measurable effects are sent from the first system manager to a second system manager. At the second system manager, one or more procedural actions to achieve the management goal are determined in response to the one or more received measurable effects. The one or more procedural actions are executed to achieve the management goal. | 09-25-2008 |
20080244611 | PRODUCT, METHOD AND SYSTEM FOR IMPROVED COMPUTER DATA PROCESSING CAPACITY PLANNING USING DEPENDENCY RELATIONSHIPS FROM A CONFIGURATION MANAGEMENT DATABASE - The invention discloses a computer data processing capacity planning system that utilizes known workload planning information along with hardware and/or software configuration information from the actual operating environment to accurately estimate the production system capacity available for use in carrying out one or more processing task(s). | 10-02-2008 |
20080250421 | Data Processing System And Method - A method of forming a cluster from a plurality of potential clusters that share a common node, the method comprising determining a criticality factor of each potential cluster by combining criticality factors of the nodes of each potential cluster; and forming the cluster from the potential cluster with the highest criticality factor. | 10-09-2008 |
20080263562 | Management Information System for Allocating Contractors with Requestors - A management information system, computer implemented method and computer product for allocating contractors and requestors. A networked server is provided which includes a processor, a memory coupled to the processor and a database operatively stored in the memory. The database comprises a first database component operative to maintain a plurality of contract service records, each contract service record being associated with a contractor and including contractor data representing an available contract service and a contractor locality; a second database component is operative to maintain a plurality of requester records, each requester record being associated with an individual requester and including requester data representing a requested contract service and a requester locality. A database engine is operatively loaded into the memory and includes instructions executable by the processor to determine a suggested contractor/requestor allocation in dependence on a correspondence of at least the data representing a contract service and locality among the contract service records and requester records, output the suggested contractor/requestor allocation in a tiered order of preference of contractors, and send notices to the identified contractors. | 10-23-2008 |
20080263563 | METHOD AND APPARATUS FOR ONLINE SAMPLE INTERVAL DETERMINATION - In one embodiment, functional system elements are added to an autonomic manager to enable automatic online sample interval selection. In another embodiment, a method for determining the sample interval by continually characterizing the system workload behavior includes monitoring the system data and analyzing the degree to which the workload is stationary. This makes the online optimization method less sensitive to system noise and capable of being adapted to handle different workloads. The effectiveness of the autonomic optimizer is thereby improved, making it easier to manage a wide range of systems. | 10-23-2008 |
20080271037 | METHOD AND APPARATUS FOR LOAD BALANCE SERVER MANAGEMENT - A computer implemented method, apparatus, and computer usable program code for balancing management loads. Loads are analyzed for a plurality of hardware control points to form an analysis in response to receiving a notification from a hardware control point indicating that a new manageable data processing system has been discovered. One of the plurality of hardware control points is selected using the analysis to form a selected hardware control point. A message is sent to the selected hardware control point to manage the new manageable data processing system, wherein the selected hardware control point manages the new manageable data processing system. | 10-30-2008 |
20080271038 | SYSTEM AND METHOD FOR EVALUATING A PATTERN OF RESOURCE DEMANDS OF A WORKLOAD - A method comprises receiving, by pattern evaluation logic, a plurality of occurrences of a prospective pattern of resource demands in a representative workload. The method further comprises evaluating, by the pattern evaluation logic, the received occurrences of the prospective pattern of resource demands, and determining, by the pattern evaluation logic, based on the evaluation of the received occurrences of the prospective pattern of resource demands, how representative the prospective pattern is of resource demands of the representative workload. | 10-30-2008 |
20080271039 | SYSTEMS AND METHODS FOR PROVIDING CAPACITY MANAGEMENT OF RESOURCE POOLS FOR SERVICING WORKLOADS - A method comprises receiving, by a capacity management tool, a capacity management operation request that specifies a resource pool-level operation desired for managing capacity of a resource pool that services workloads. The capacity management tool determines, in response to the received request, one or more actions to perform in the resource pool for performing the requested capacity management operation in compliance with defined operational parameters of the workloads. The method further comprises performing the determined one or more actions for performing the requested capacity management operation. | 10-30-2008 |
20080276247 | Method for the Real-Time Analysis of a System - The invention relates to a method for the real-time analysis of a system, especially a technical system, which is to process tasks (τ). A job that is defined by processing of a task (τ) generates system expenses. In order to create a particularly quick and accurate method, an approximation of the method is cancelled when it is considered that an interval (I, I | 11-06-2008 |
20080282254 | Geographic Resiliency and Load Balancing for SIP Application Services - A mechanism for achieving resiliency and load balancing for SIP application services and, in particular, in geographic distributed sites. A method performs a distribution of SIP requests among SIP servers, where at least two sites, with a load balancer in each site, are configured. The method includes receiving a SIP request by a first load balancer in a first site; determining whether the SIP request should be redirected to a second site; and redirecting the SIP request to an address of a second load balancer in the second site. The invention also includes a SIP proxy including a receiving unit receiving SIP requests; a load balancing unit distributing SIP requests between SIP entities; and a health monitoring unit verifying availability of the SIP entities. The SIP proxy may further be configured with a proximity measuring unit determining a proximity to a SIP entity. | 11-13-2008 |
20080288952 | PROCESSING APPARATUS AND DEVICE CONTROL UNIT - A processing apparatus including a plurality of task-processing devices includes a calculation control unit and a device control unit configured to cause the task-processing devices to perform tasks of at least one kind in parallel in accordance with control performed by the calculation control unit. The device control unit sends a command for starting task processing to each of the task-processing devices in accordance with the task group generated by and sent from the calculation control unit. The task-processing devices each execute a task issued from the device control unit, and when the task is complete, each provide a notification that the task is complete to the device control unit. The device control unit provides, in the case in which all tasks included in the task group are complete, a notification that the task group is complete to the calculation control unit. | 11-20-2008 |
20080301695 | Managing a Plurality of Processors as Devices - A computer system's multiple processors are managed as devices. The operating system accesses the multiple processors using processor device modules loaded into the operating system to facilitate a communication between an application requesting access to a processor and the processor. A device-like access is determined for accessing each one of the processors similar to device-like access for other devices in the system such as disk drives, printers, etc. An application seeking access to a processor issues device-oriented instructions for processing data, and in addition, the application provides the processor with the data to be processed. The processor processes the data according to the instructions provided by the application. | 12-04-2008 |
20080301696 | Controlling workload of a computer system through only external monitoring - Provides control of the workload, flow control, and concurrency control of a computer system through the use of only external performance monitors. Data collected by external performance monitors are used to build a simple, black box model of the computer system, comprising two resources: a virtual bottleneck resource and a delay resource representing all non-bottleneck resources combined. The service times of the two resource types are two parameters of the black box model. The two parameters are evaluated based on historical data collected by the external performance monitors. The workload capacity that avoids saturation of the bottleneck resource is then determined and used as a control variable by a flow controller to limit the workload on the computer system. The workload may include a mix of traffic classes. In such a case, data is collected, parameters are evaluated and control variables are determined for each of the traffic classes. | 12-04-2008 |
20080301697 | MULTIPLE TASK MANAGEMENT BETWEEN PROCESSORS - A system for multiple task management between processors includes a first processing device for executing tasks. A respective storage element is provided for storing one or more commands from each of the tasks. A command dispatcher is provided for selectively transferring a command from one of the storage elements to a command queue provided within a second processing device. | 12-04-2008 |
20080320486 | Business Process Automation - A system in which business processes within and between organizations and/or individuals may be automated using standards-based, service-oriented business process automation architectures based on XML and Web Services standards is described. An execution framework for the business processes is also described. Further aspects include a decomposition methodology for deconstructing business process specifications into business flows, business rules and business states. The business flows (FIG. | 12-25-2008 |
20080320487 | SCHEDULING TASKS ACROSS MULTIPLE PROCESSOR UNITS OF DIFFERING CAPACITY - A mechanism is provided for scheduling tasks across multiple processor units of differing capacity. In a multiple processor unit system with processor units of disparate speeds, it is advantageous to have the most processing-intensive tasks run on the processor units with the highest capacity. All tasks are initially scheduled on the lowest capacity processor units. Because processor units with higher capacity are more likely to have idle time, these higher capacity processor units may pull one or more tasks onto themselves from the same or lower capacity processor units. A processor unit will attempt to pull tasks that utilize a larger percentage of the timeslice. When a higher capacity processor unit is overloaded or near capacity, the higher capacity processor unit may push tasks to processor units with the same or lower capacity. A processor unit will attempt to push tasks that utilize a smaller percentage of the timeslice. | 12-25-2008 |
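The pull/push heuristic above — higher-capacity processor units pull tasks with the largest timeslice share and push those with the smallest — can be sketched as follows. The `(name, timeslice_pct)` tuple representation is an assumption for illustration.

```python
def pull_candidate(tasks):
    # An idle higher-capacity processor unit pulls the task that
    # utilizes the largest percentage of its timeslice.
    return max(tasks, key=lambda t: t[1])

def push_candidate(tasks):
    # An overloaded higher-capacity processor unit pushes the task that
    # utilizes the smallest percentage of its timeslice.
    return min(tasks, key=lambda t: t[1])
```

The intuition is that the most processing-intensive tasks migrate upward to the fastest units, while cheap tasks are the first to be offloaded downward.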
20080320488 | CONTROL DEVICE AND CONTROL METHOD FOR REDUCED POWER CONSUMPTION IN NETWORK DEVICE - This invention provides a data transfer control device for carrying out data transfer using a plurality of transfer resources. The data transfer control device comprises a transfer resource management portion that sets each of the plurality of transfer resources to either a transfer-enabled state, whereby data transfer is enabled, or one of a plurality of standby states, on the basis of a load on the data transfer control device, and that manages the plurality of transfer resources so as to assume the set operating status; and a load distribution portion that distributes the data to transfer resources that have been set to the transfer-enabled state. The plurality of standby states are states in which data transfer is disabled and which mutually differ at a minimum in terms of at least one of power consumption level and transition time to the transfer-enabled state. | 12-25-2008 |
20080320489 | LOAD BALANCING - In a preferred embodiment, the present invention provides a method of load balancing in a data processing system comprising a plurality of physical CPUs and a plurality of virtual CPUs, the method comprising: mapping one or more virtual CPUs to each of said physical CPUs; and dynamically adapting the mapping depending on the load of said physical CPUs. | 12-25-2008 |
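The virtual-to-physical CPU remapping above can be sketched as a single rebalancing step. The move-one-vCPU-from-busiest-to-lightest policy and the caller-supplied load metric are assumptions; the abstract only states that the mapping adapts dynamically to physical CPU load.

```python
def adapt_mapping(vcpu_to_pcpu: dict, pcpu_load: dict) -> dict:
    # Move one virtual CPU from the most loaded physical CPU to the
    # least loaded one; pcpu_load is a per-physical-CPU load metric
    # supplied by the caller.
    busiest = max(pcpu_load, key=pcpu_load.get)
    lightest = min(pcpu_load, key=pcpu_load.get)
    new_map = dict(vcpu_to_pcpu)
    if busiest != lightest:
        for vcpu, pcpu in new_map.items():
            if pcpu == busiest:
                new_map[vcpu] = lightest
                break
    return new_map
```

Calling this repeatedly as load measurements refresh gradually evens out the number of virtual CPUs competing for each physical CPU.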
20090007133 | Balancing of Load in a Network Processor - According to an aspect of the present invention, a scheduler balances the load on the microengines comprising one or more threads allocated to execute a corresponding microblock. The scheduler determines the load on each microengine at regular time intervals. The scheduler balances the load of a heavily loaded microengine by distributing the corresponding load among one or more lightly loaded microengines. | 01-01-2009 |
20090013327 | CUSTOMER INFORMATION CONTROL SYSTEM WORKLOAD MANAGEMENT BASED UPON TARGET PROCESSORS REQUESTING WORK FROM ROUTERS - The invention provides for customer information control system (CICS) workload management in performance of computer processing tasks based upon “target” processors requesting work from “routers”, by providing for a target process(or) to first initiate a request to a router seeking distribution of processing task(s) before a new task is assigned by the router to that target for completion. | 01-08-2009 |
20090013328 | CONTENT SWITCHING PROGRAM, CONTENT SWITCHING METHOD, AND CONTENT MANAGEMENT APPARATUS - A computer-readable storage medium on which is recorded a content switching program used to direct a device for transmitting a content corresponding to data to a requester in response to a data acquire request from the requester to perform a content switching process, the process comprising: a load acquiring step of acquiring a load on the device; a content selecting step of selecting, on the basis of the acquired load, one of a plurality of contents that can be a content to be transmitted and each of which has a different volume; and a storage location changing step of changing a storage location of the content to be transmitted into a storage location of the selected content. | 01-08-2009 |
20090019449 | Load balancing method and apparatus in symmetric multi-processor system - Provided are a load balancing method and a load balancing apparatus in a symmetric multi-processor system. The load balancing method includes selecting at least two processors based on a load between a plurality of processors, from among the plurality of processors, migrating a predetermined task stored in a run queue of a first processor to a migration queue of a second processor, and migrating the predetermined task stored in the migration queue of the second processor to a run queue of the second processor. Accordingly, a run queue of a processor is not blocked while migrating a task, an immediate response of the run queue is possible, and a waiting time of a scheduler is reduced. Consequently, the scheduler can speedily perform context switching, and thus performance of the entire operating system is improved. | 01-15-2009 |
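The two-stage migration above — run queue of the first processor, then migration queue of the second, then run queue of the second — can be sketched as below. The `Processor` class and function names are illustrative assumptions.

```python
from collections import deque

class Processor:
    def __init__(self):
        self.run_queue = deque()        # tasks ready to run
        self.migration_queue = deque()  # staging area for incoming tasks

def migrate(task, src: Processor, dst: Processor):
    # Stage 1: remove the task from the source run queue and park it in
    # the destination's migration queue; the destination's run queue is
    # never blocked during the transfer.
    src.run_queue.remove(task)
    dst.migration_queue.append(task)

def drain(p: Processor):
    # Stage 2: the destination moves staged tasks into its own run queue
    # at a convenient point, enabling immediate responses from the run
    # queue and shorter scheduler waits.
    while p.migration_queue:
        p.run_queue.append(p.migration_queue.popleft())
```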
20090019450 | APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT FOR TASK MANAGEMENT - A task management apparatus comprises a plurality of processors, and correspondingly stores, a plurality of tasks to be assigned to the processors within a predetermined period of time, and temporal groups each of which is assigned to the plurality of the tasks. The task management apparatus assigns one of the tasks to one of the processors. After having assigned the task, the task management apparatus assigns, to the one of the processors that has finished processing the assigned task, the other tasks that are in correspondence with the same temporal group as the temporal group with which the assigned task is in correspondence, before assigning the tasks that are not in correspondence with the temporal group. | 01-15-2009 |
20090025007 | Method and apparatus for managing virtual ports on storage systems - A storage system is configured to create and manage virtual ports on physical ports. The storage system can transfer associations between virtual ports and physical ports when a failure occurs in a physical port or a link connected to the physical port so that a host can access volumes under the virtual ports through another physical port. The storage system can also change associations between virtual ports and physical ports by taking into account the relative loads on the physical ports. When a virtual machine is migrated from one host computer to another, the loads on the physical ports in the storage system can be used to determine whether load balancing should take place. Additionally, the storage system can transfer virtual ports to a remote storage system that will take over the virtual ports, so that a virtual machine can be migrated to remote location. | 01-22-2009 |
20090031321 | BUSINESS PROCESS MANAGEMENT SYSTEM, METHOD THEREOF, PROCESS MANAGEMENT COMPUTER AND PROGRAM THEREOF - A business process management computer, when the load of a service execution computer etc. is increased, determines the condition of a service call step which is calling a service execution unit, etc. of said service execution computer, etc. If said condition is the bottleneck condition, it determines the condition of the service call step in other process which is calling said service execution unit, etc. If there is no condition other than the bottleneck in that condition, the addition of the resource for said service execution computer, etc. is determined and if there is a condition in which the throughput can be limited, it is determined that the throughput should be limited. In a process which is configured with a plurality of service call steps, when the resource insufficiency has occurred, a means to make the adequate addition of the resource possible can be provided. | 01-29-2009 |
20090037924 | PERFORMANCE OF A STORAGE SYSTEM - A method for operating a storage system, including storing data redundantly in the system and measuring respective queue lengths of input/output requests to operational elements of the system. The queue lengths are compared to an average queue length to determine respective performances of the operational elements of the storage system. In response to the average queue length and a permitted deviation from the average, an under-performing operational element among the operational elements is identified. An indication of the under-performing operational element is provided to host interfaces in the storage system. One of the host interfaces receives requests for specified items of the data directed to the under-performing element, and in response to the indication, some of the requests are diverted from the under-performing operational element to one or more other operational elements of the storage system that are configured to provide the specified items of the data. | 02-05-2009 |
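The identification step above — flag any operational element whose I/O queue exceeds the average by more than a permitted deviation — can be sketched as follows (function and parameter names are illustrative assumptions):

```python
def under_performing(queue_lengths, permitted_deviation):
    # Flag operational elements whose I/O request queue length exceeds
    # the average queue length by more than the permitted deviation, so
    # host interfaces can divert requests away from them.
    avg = sum(queue_lengths) / len(queue_lengths)
    return [i for i, q in enumerate(queue_lengths) if q - avg > permitted_deviation]
```

For example, with queue lengths [10, 2, 3] the average is 5, and a permitted deviation of 2 flags only element 0.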
20090037925 | SMART STUB OR ENTERPRISE JAVA BEAN IN A DISTRIBUTED PROCESSING SYSTEM - A clustered enterprise distributed processing system. The distributed processing system includes a first and a second computer coupled to a communication medium. The first computer includes a virtual machine (JVM) and kernel software layer for transferring messages, including a remote virtual machine (RJVM). The second computer includes a JVM and a kernel software layer having a RJVM. Messages are passed from a RJVM to the JVM in one computer to the JVM and RJVM in the second computer. Messages may be forwarded through an intermediate server or rerouted after a network reconfiguration. Each computer includes a Smart stub having a replica handler, including a load balancing software component and a failover software component. Each computer includes a duplicated service naming tree for storing a pool of Smart stubs at a node. | 02-05-2009 |
20090049450 | METHOD AND SYSTEM FOR COMPONENT LOAD BALANCING - A system for balancing component load. In response to receiving a request, data is updated to reflect a current number of pending requests. In response to analyzing the updated data, it is determined whether throttling is necessary. In response to determining that throttling is not necessary, a corresponding request to the received request is created and a flag is set in the corresponding request. Then, the corresponding request is sent to one of a plurality of lower level components of an input/output stack of an operating system for processing based on the analyzed data to balance component load in the input/output stack of the operating system. | 02-19-2009 |
20090055835 | System and Method for Managing License Capacity in a Telecommunication Network - According to teachings herein, a telecommunication network manages licensed transaction capacity for a licensed service provided by the network, based on dynamically adjusting the allocation of licensed capacity across multiple traffic processors providing the service. Reallocation of licensed capacity is performed with respect to the actual traffic loads at the traffic processors. For example, licensed capacity at a lightly loaded traffic processor is decreased and licensed capacity is correspondingly increased at a heavily loaded traffic processor. This dynamic redistribution of licensed capacity to reflect variations in the distribution of traffic loads across the traffic processors provides for more efficient utilization of the licensed transaction capacity. | 02-26-2009 |
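The dynamic redistribution described above — decrease licensed capacity at lightly loaded traffic processors and increase it at heavily loaded ones — can be sketched as a load-proportional reallocation. The proportional formula and the remainder-to-busiest rule are assumptions for illustration.

```python
def reallocate(total_licensed, loads):
    # Redistribute the total licensed transaction capacity across traffic
    # processors in proportion to their current traffic loads; any
    # integer-rounding remainder goes to the most heavily loaded
    # processor (remainder rule assumed).
    total_load = sum(loads)
    if total_load == 0:
        return [total_licensed // len(loads)] * len(loads)
    shares = [total_licensed * l // total_load for l in loads]
    shares[loads.index(max(loads))] += total_licensed - sum(shares)
    return shares
```

With 100 licensed transactions and loads of 3:1 across two processors, the heavily loaded processor receives 75 units and the lightly loaded one 25.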
20090064164 | METHOD OF VIRTUALIZATION AND OS-LEVEL THERMAL MANAGEMENT AND MULTITHREADED PROCESSOR WITH VIRTUALIZATION AND OS-LEVEL THERMAL MANAGEMENT - A program product and method of managing task execution on an integrated circuit chip such as a chip-level multiprocessor (CMP) with Simultaneous MultiThreading (SMT). Multiple chip operating units or cores have chip sensors (temperature sensors or counters) for monitoring temperature in units. Task execution is monitored for hot tasks and especially for hotspots. Task execution is balanced, thermally, to minimize hot spots. Thermal balancing may include Simultaneous MultiThreading (SMT) heat balancing, chip-level multiprocessors (CMP) heat balancing, deferring execution of identified hot tasks, migrating identified hot tasks from a current core to a colder core, User-specified Core-hopping, and SMT hardware threading. | 03-05-2009 |
20090064164 | Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks is provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090064166 | System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A system and method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090064167 | System and Method for Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history. | 03-05-2009 |
20090064168 | System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks By Modifying Tasks - A system and method are provided for providing hardware based dynamic load balancing of message passing interface (MPI) tasks by modifying tasks. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. Thus, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090064169 | System and Method for Sensor Scheduling - A system for sensor scheduling includes a plurality of sensors operable to perform one or more tasks and a processor operable to receive one or more missions and one or more environmental conditions associated with a respective mission. Each mission may include one or more tasks to be performed by one or more of the plurality of sensors. The processor is further operable to select one or more of the plurality of sensors to perform a respective task associated with the respective mission. The processor may also schedule the respective task to be performed by the selected one or more sensors. The scheduling is based at least on a task value that is determined based on an options pricing model. The options pricing model is based at least on the importance of the respective task to the success of the respective mission and one or more scheduling demands. | 03-05-2009 |
20090064170 | COMMUNICATION APPARATUS AND METHOD FOR CONTROLLING COMMUNICATION APPARATUS - A communication apparatus includes a control unit including a controller configured to control the communication apparatus, a first communication unit configured to perform communication under control of the controller, and a second communication unit including a subcontrol unit and configured to perform communication under control of the subcontrol unit, wherein a load condition of the controller is determined, and one of the first communication unit and the second communication unit is selected to perform communication processing based on the determined load condition. | 03-05-2009 |
20090070771 | METHOD AND SYSTEM FOR EVALUATING VIRTUALIZED ENVIRONMENTS - A system and method are provided for incorporating compatibility analytics and virtualization rule sets into a transformational physical to virtual (P2V) analysis for designing a virtual environment from an existing physical environment and for ongoing management of the virtual environment to refine the virtualization design to accommodate changing requirements and a changing environment. | 03-12-2009 |
20090077562 | Client Affinity in Distributed Load Balancing Systems - Aspects of the subject matter described herein relate to client affinity in distributed load balancing systems. In aspects, a request from a requester is sent to each server of a cluster. Each server determines whether it has affinity to the requester. If so, the server responds to the request. Otherwise, if the request would normally be load balanced to the server, the server queries the other servers in the cluster to determine whether any of them have affinity to the requester. If one of them does, the server drops the request and allows the other server to respond to the request; otherwise, the server responds to the request. | 03-19-2009 |
20090083751 | INFORMATION PROCESSING APPARATUS, PARALLEL PROCESSING OPTIMIZATION METHOD, AND PROGRAM - According to one embodiment, an information processing apparatus includes a plurality of execution units and a scheduler which controls assignment of a plurality of basic modules of a program to the plurality of execution units. The scheduler detects a parallel degree representing a parallelization ratio in parallel processing of a program by the plurality of execution units, and detects a load associated with control of assigning the plurality of basic modules in the parallel processing of the program by the plurality of execution units. And then, the scheduler combines two or more basic modules which are successively executed according to a paralleled execution description in order to assign two or more basic modules as a module to a single execution unit, when a value of the parallel degree exceeds a predetermined value and a value of the load exceeds a predetermined value. | 03-26-2009 |
20090089792 | METHOD AND SYSTEM FOR MANAGING THERMAL ASYMMETRIES IN A MULTI-CORE PROCESSOR - In general, the invention relates to a system that includes a multi-core processor and a dispatcher operatively connected to the multi-core processor. The dispatcher is configured to receive a first plurality of threads during a first period of time, dispatch the first plurality of threads only to a first core of the plurality of cores, receive a second plurality of threads during a second period of time, dispatch the second plurality of threads only to a second core of the plurality of cores, and migrate to the second core any of the first plurality of threads that are still executing on the first core after the first period of time has elapsed. The duration of the first period of time and the duration of the second period of time are determined using a thread migration schedule, and the thread migration schedule is determined using at least one thermal characteristic of the multi-core processor. | 04-02-2009 |
20090089793 | Method and Apparatus for Performing Load Balancing for a Control Plane of a Mobile Communication Network - The invention includes a method and apparatus for providing load balancing of control traffic received by a mobility home agent implemented using multiple control elements. A method includes receiving, from a node, a control message intended for the network element, performing a load-balancing operation to select one of the control elements to handle the control message, and propagating the control message toward the selected one of the control elements. The load-balancing operation is performed using at least two load-balancing metrics comprising a first metric and a second metric. The load-balancing operation is performed in a manner for maintaining a context between the node from which the control message is received and the selected one of the control elements, such that subsequent control messages received from the node are propagated to the selected one of the control elements. | 04-02-2009 |
20090089794 | APPARATUS, SYSTEM, AND METHOD FOR CROSS-SYSTEM PROXY-BASED TASK OFFLOADING - An apparatus, system, and method are disclosed for offloading data processing. An offload task | 04-02-2009 |
20090094610 | Scalable Resources In A Virtualized Load Balancer - In one embodiment, a load balancing system may include a first physical device that provides a resource. The first physical device may have a first virtual device running actively thereon. The first virtual device may have the resource allocated to it on the physical device. The first physical device may also have a virtual server load balancer running actively thereon. The server load balancer may be adapted to balance a workload associated with the resource between the first virtual device and a second virtual device. The second virtual device may be running in active mode on a second physical device, and in standby mode on the first physical device. The first virtual device may be in standby mode on the second physical device. | 04-09-2009 |
20090094611 | Method and Apparatus for Load Distribution in Multiprocessor Servers - A method and arrangement for handling incoming requests for multimedia services in an application server having a plurality of processors. A service request is received from a user, requiring the handling of user-specific data. The identity of the user or other consistent user-related parameter is extracted from the received service request. Then, a scheduling algorithm is applied using the extracted identity or other user-related parameter as input, for selecting a processor associated with the user and that stores user-specific data for the user locally. Thereafter, the service request is transferred to the selected processor in order to be processed by handling the user-specific data. | 04-09-2009 |
20090094612 | Method and System for Automated Processor Reallocation and Optimization Between Logical Partitions - A method and system for reallocating processors in a logically partitioned environment. The present invention comprises a Performance Enhancement Program (PEP) and a Reallocation Program (RP). The PEP allows an administrator to designate several parameters and identify donor and recipient candidates. The RP compiles the performance data for the processors and calculates a composite parameter. For each processor in the donor candidate pool, the RP compares the composite parameter to the donor load threshold to determine if the processor is a donor. For each processor in the recipient candidate pool, the RP compares the composite parameter to the recipient load threshold to determine if the processor is a recipient. The RP then allocates the processors from the donors to the recipients. The RP continues to monitor and update the workload statistics based on either a moving window or a discrete window sampling system. | 04-09-2009 |
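The donor/recipient selection step described in entry 20090094612 (comparing each processor's composite parameter against administrator-designated load thresholds) can be sketched as below. The single shared candidate pool, the function name, and the numeric thresholds are simplifying assumptions; the patent maintains separate donor and recipient candidate pools:

```python
def classify_processors(composite, donor_threshold, recipient_threshold):
    """Split processors into donors (composite parameter below the donor
    load threshold) and recipients (composite parameter above the
    recipient load threshold). Illustrative simplification of the
    Reallocation Program's comparison step."""
    donors = [p for p, v in composite.items() if v < donor_threshold]
    recipients = [p for p, v in composite.items() if v > recipient_threshold]
    return donors, recipients

# cpu0 is lightly loaded enough to donate; cpu1 is loaded enough to receive.
donors, recipients = classify_processors(
    {"cpu0": 0.15, "cpu1": 0.92, "cpu2": 0.40},
    donor_threshold=0.2, recipient_threshold=0.8)
```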
20090094613 | METHOD OF MANAGING WORKLOADS IN A DISTRIBUTED PROCESSING SYSTEM - An embodiment of the present invention is a method for generating a simulated processor load on a system of CPUs, and introducing a controlled workload into the system that is spread evenly across the available CPU resources and may be arranged to consume a precise, controllable portion of the resources. | 04-09-2009 |
20090100437 | TEMPERATURE-AWARE AND ENERGY-AWARE SCHEDULING IN A COMPUTER SYSTEM - A computer system to schedule loads across a set of processor cores is described. During operation, the computer system receives a process to be executed. Next, the computer system obtains one or more thermodynamic process characteristics associated with the process and one or more thermodynamic processor-core characteristics associated with operation of the set of processor cores. Then, the computer system schedules the process to be executed by at least one of the processor cores based on the one or more thermodynamic process characteristics and the one or more thermodynamic processor-core characteristics. | 04-16-2009 |
20090106767 | WORKLOAD PERIODICITY ANALYZER FOR AUTONOMIC DATABASE COMPONENTS - A computer data processing system and an article of manufacture for determining database workload periodicity. The computer data processing system includes a module for converting database activity samples spanning a time period from the time domain to the frequency domain, the converting resulting in a frequency spectrum, a module for identifying fundamental peaks of the frequency spectrum, and a module for allocating database resources based on at least one of the fundamental peaks. | 04-23-2009 |
20090113442 | METHOD, SYSTEM AND COMPUTER PROGRAM FOR DISTRIBUTING A PLURALITY OF JOBS TO A PLURALITY OF COMPUTERS - Method and system for providing a mechanism for determining an optimal workload distribution, from a plurality of candidate workload distributions, each of which has been determined to optimize a particular aspect of a workload-scheduling problem. More particularly, the preferred embodiment determines a workload distribution based on resource selection policies. From this workload distribution, the preferred embodiment optionally determines a workload distribution based on job priorities. From either or both of the above parameters, the preferred embodiment determines a workload distribution based on a total prioritized weight parameter. The preferred embodiment also determines a workload distribution which attempts to match the previously determined candidate workload distributions to a goal distribution. Similarly, the preferred embodiment calculates a further workload distribution which attempts to maximize job throughput. | 04-30-2009 |
20090133031 | INFORMATION SYSTEM, LOAD CONTROL METHOD, LOAD CONTROL PROGRAM AND RECORDING MEDIUM - A load control server, computer program product, and method for controlling bottlenecks in an information system that includes application servers and a database server. Each application server executes at least one application program for processing a transaction received by each application server. The database server accesses a database based on a request received from an application server. A processing time required for each application program to process the transaction is monitored. A bottleneck relating to usage of at least one resource is detected. Each resource is a resource of at least one application server, a resource related to input to the transaction, a resource of the database server, or a resource related to the transaction. The detecting responds to the monitoring determining that the processing time for processing the transaction by at least one application server is not within a predesignated permissible processing time range. The detected bottleneck is removed. | 05-21-2009 |
20090144743 | Mailbox Configuration Mechanism - An email configuration system may use a topology database to determine if a change request results in a valid configuration. The topology database may contain a definition of an enterprise email system, including forests, servers, and individual mailboxes. If a valid configuration is found, a change request may be scheduled and implemented. The email configuration system may store the change request so that a change may be undone at a later time. Changes may be implemented to the enterprise mail system by changing the topology definition and running an analysis of the current topology and a desired topology. | 06-04-2009 |
20090144744 | Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - A method for evaluating performance of DMA-based algorithmic tasks on a target multi-core processing system includes the steps of: inputting a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; evaluating performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and providing results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system. | 06-04-2009 |
20090144745 | Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - Apparatus for evaluating the performance of DMA-based algorithmic tasks on a target multi-core processing system includes a memory and at least one processor coupled to the memory. The processor is operative: to input a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; to evaluate performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and to provide results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system. | 06-04-2009 |
20090144746 | ADJUSTING WORKLOAD TO ACCOMMODATE SPECULATIVE THREAD START-UP COST - Methods and apparatus provide for a workload adjuster to estimate the startup cost of one or more non-main threads of loop execution and to estimate the amount of workload to be migrated between different threads. Upon deciding to parallelize the execution of a loop, the workload adjuster creates a scheduling policy with a workload for a main thread and workloads for respective non-main threads. The scheduling policy distributes iterations of a parallelized loop to the workload of the main thread and iterations of the parallelized loop to the workloads of the non-main threads. The workload adjuster evaluates a start-up cost of the workload of a non-main thread and, based on the start-up cost, migrates a portion of the workload for that non-main thread to the main thread's workload. | 06-04-2009 |
20090150898 | MULTITHREADING FRAMEWORK SUPPORTING DYNAMIC LOAD BALANCING AND MULTITHREAD PROCESSING METHOD USING THE SAME - A multithreading framework supporting dynamic load balancing, used to perform multi-thread programming, includes a job scheduler for performing parallel processing by redefining a processing order of one or more unit jobs, transmitted from a predetermined application, based on unit job information included in the respective unit jobs, and transmitting the unit jobs to a thread pool based on the redefined processing order, a device enumerator for detecting a device in which the predetermined application is executed and defining resources used inside the application, a resource manager for managing the resources related to the predetermined application executed using the job scheduler or the device enumerator, and a plug-in manager for managing a plurality of modules which perform various types of functions related to the predetermined application in a plug-in manner, and providing such plug-in modules to the job scheduler. | 06-11-2009 |
20090165012 | SYSTEM AND METHOD FOR EMBEDDED LOAD BALANCING IN A MULTIFUNCTION PERIPHERAL (MFP) - The invention relates to multifunction peripherals (MFPs). More particularly, the invention relates to an embedded load balancer in a multifunction peripheral. An MFP with an embedded load balancer may determine that another suitable device is more capable of handling a job request, and, subsequently, may transfer the job request to the other device. | 06-25-2009 |
20090165013 | DATA PROCESSING METHOD AND SYSTEM - In response to the activation of the data processing system, a request for processing is accepted in parallel with loading a series of data (a data body) from an external storage into a main memory independent of whether the processing of individual data is requested or not, and if target data of the request for processing is not loaded into the main memory, apparent system starting time is reduced by executing processing corresponding to the request after the target data is loaded into the main memory. | 06-25-2009 |
20090165014 | Method and apparatus for migrating task in multicore platform - Provided are a method and apparatus for migrating a task in a multi-core platform including a plurality of cores. The method includes transmitting codes of the task that is being performed in a first core among the plurality of cores to a second core among the plurality of cores, the transmitting of the codes being performed while performing the task at the first core, and resuming performing of the task in the second core based on the transmitted codes. | 06-25-2009 |
20090172693 | Assigning work to a processing entity according to non-linear representations of loadings - To perform load balancing across plural processing entities, load level indications associated with plural processing entities are received. The load level indications are representations based on applying a concave function on loadings of the plural processing entities. A processing entity is selected from among the plural processing entities to assign work according to the load level indications. | 07-02-2009 |
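Entry 20090172693 selects a processing entity from load-level indications obtained by applying a concave function to raw loadings. A minimal sketch follows; the choice of square root as the concave function and the lowest-level selection rule are illustrative assumptions, as the patent does not commit to a specific concave function here:

```python
import math

def load_level(raw_load):
    # A concave mapping (sqrt, chosen purely for illustration) compresses
    # differences among heavily loaded entities, so comparisons are less
    # sensitive to small fluctuations at high loadings.
    return math.sqrt(raw_load)

def pick_entity(loadings):
    # Assign the new work item to the entity whose concave load-level
    # indication is lowest.
    return min(loadings, key=lambda e: load_level(loadings[e]))
```

Because sqrt is monotonic, the least-loaded entity still wins; the concave shape matters when load levels are combined or thresholded rather than simply ranked.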
20090178052 | LATENCY-AWARE THREAD SCHEDULING IN NON-UNIFORM CACHE ARCHITECTURE SYSTEMS - A system and method for latency-aware thread scheduling in non-uniform cache architecture are provided. Instructions may be provided to the hardware specifying in which banks to store data. Information as to which banks store which data may also be provided, for example, by the hardware. This information may be used to schedule threads on one or more cores. A selected bank in cache memory may be reserved strictly for selected data. | 07-09-2009 |
20090178053 | DISTRIBUTED SCHEMES FOR DEPLOYING AN APPLICATION IN A LARGE PARALLEL SYSTEM - Embodiments of the invention provide a method for deploying and running an application on a massively parallel computer system, while minimizing the costs associated with latency, bandwidth, and limited memory resources. The executable code of a program may be divided into multiple code fragments and distributed to different compute nodes of a parallel computing system. During program execution, one compute node may fetch code fragments from other compute nodes as necessary. | 07-09-2009 |
20090193428 | Systems and Methods for Server Load Balancing - In one embodiment a system and a method relate to generating a server load balancing algorithm configured to distribute workload across multiple application servers, publishing the server load balancing algorithm to switches of the network, and the switches applying the server load balancing algorithm to received network packets to determine how to distribute the network packets among the multiple application servers. | 07-30-2009 |
20090199199 | Backup procedure with transparent load balancing - In an embodiment of the invention, an apparatus and method provides a backup procedure with transparent load balancing. The apparatus and method perform acts including: performing a preamble phase in order to determine if a file will be backed up from an agent to a portal; and applying a chunking policy on the file, wherein the chunking policy comprises performing chunking of the file on an agent, performing chunking of the file on the portal, or transmitting the file to the portal without chunking. | 08-06-2009 |
20090199200 | Mechanisms to Order Global Shared Memory Operations - A method and data processing system for performing fence operations within a global shared memory (GSM) environment having a local task executing on a processor and providing GSM commands for processing by a host fabric interface (HFI) window that is allocated to the task. The HFI window has one or more registers for use during local fence operations. A first register tracks a first count of task-issued GSM commands, and a second register tracks a second count of GSM operations being processed by the HFI. The processing logic detects a locally-issued fence operation, and responds by performing a series of operations, including: automatically stopping the task from issuing additional GSM commands; monitoring for completion of all the task-issued GSM commands at the HFI; and triggering a resumption of issuance of GSM commands by the task when the completion of all previous task-issued GSM commands is registered by the HFI. | 08-06-2009 |
20090210881 | PROCESS PLACEMENT IN A PROCESSOR ARRAY - There is provided a method for placing a plurality of processes onto respective processor elements in a processor array, the method comprising (i) assigning each of the plurality of processes to a respective processor element to generate a first placement; (ii) evaluating a cost function for the first placement to determine an initial value for the cost function, the result of the evaluation of the cost function indicating the suitability of a placement, wherein the cost function comprises a bandwidth utilisation of a bus interconnecting the processor elements in the processor array; (iii) reassigning one or more of the processes to respective different ones of the processor elements to generate a second placement; (iv) evaluating the cost function for the second placement to determine a modified value for the cost function; and (v) accepting or rejecting the reassignments of the one or more processes based on a comparison between the modified value and the initial value. | 08-20-2009 |
20090217286 | Adjunct Processor Load Balancing - Managing the workload across one or more partitions of a plurality of partitions of a computing environment. One or more processors are identified in a partition to be managed by a quality weight defined according to characteristics of each corresponding processor. A load of each identified processor is measured depending on the requests already allocated to be processed by each corresponding processor. Each identified processor has a performance factor determined based on the measured load and the quality weight. The performance factor is a measurement of processor load. A new request is identified to be allocated to the partition, selecting a processor from the partition with the lowest performance factor. The new request is allocated to the selected processor. | 08-27-2009 |
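Entry 20090217286 allocates each new request to the partition processor with the lowest performance factor, where the factor combines measured load and a quality weight. One plausible combination, dividing load by quality weight so that more capable processors score lower at equal load, is sketched below; the formula and names are assumptions, since the abstract does not specify how the two inputs are combined:

```python
def performance_factor(measured_load, quality_weight):
    # Illustrative combination: normalize the measured load by the
    # processor's quality weight (higher weight = more capable hardware),
    # so a capable processor carrying the same load scores lower.
    return measured_load / quality_weight

def allocate_request(processors):
    # processors maps name -> (measured_load, quality_weight); the new
    # request goes to the processor with the lowest performance factor.
    return min(processors, key=lambda p: performance_factor(*processors[p]))

# p1 carries the same load as p0 but is 4x as capable, so it is selected.
chosen = allocate_request({"p0": (8.0, 1.0), "p1": (8.0, 4.0), "p2": (3.0, 1.0)})
```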
20090217287 | FEDERATION OF COMPOSITE APPLICATIONS - A predetermined business task of a composite application can be fulfilled. The composite application can include a set of components. The composite application is instantiated by a template means and a predefined collaborative context module controls the interaction of the set of components during the runtime of the composite application. A set of components fulfilling individual services on individual different server systems is leveraged by the composite application. During the instantiation of the composite application from a template, the referenced components (as types) are instantiated leading to runtime instances of these components. The interaction of the different components is controlled on individual different server systems utilizing a primary context module. The primary context module communicates with an appropriate collaborative module implemented locally on the respective set of servers, where the local context modules act as secondary context modules in relation to the primary context modules. For each of the secondary context modules, local components communicate to control the interaction of components. | 08-27-2009 |
20090217288 | Routing Workloads Based on Relative Queue Lengths of Dispatchers - Mechanisms for distributing workload items to a plurality of dispatchers are provided. Each dispatcher is associated with a different computing system of a plurality of computing systems and workload items comprise workload items of a plurality of different workload types. A capacity value for each combination of workload type and computing system is obtained. For each combination of workload type and computing system, a queue length of a dispatcher associated with the corresponding computing system is obtained. For each combination of workload type and computing system, a dispatcher's relative share of incoming workloads is computed based on the queue length for the dispatcher associated with the computing system. In addition, incoming workload items are routed to a dispatcher, in the plurality of dispatchers, based on the calculated dispatcher's relative share for the dispatcher. | 08-27-2009 |
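Entry 20090217288 computes each dispatcher's relative share of incoming work from its queue length. A sketch of one such scheme, giving shares inversely proportional to queue length, follows; it collapses the patent's per-workload-type, per-computing-system capacity dimension into a single workload type, and the inverse-proportional rule is an assumption:

```python
def relative_shares(queue_lengths):
    """Give each dispatcher a share of incoming workload inversely
    proportional to its current queue length, so shorter queues draw
    more work. The +1 avoids division by zero for empty queues.
    Illustrative single-workload-type simplification."""
    inv = {d: 1.0 / (q + 1) for d, q in queue_lengths.items()}
    total = sum(inv.values())
    return {d: w / total for d, w in inv.items()}

# The dispatcher with the empty queue receives the larger share.
shares = relative_shares({"d0": 0, "d1": 3})
```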
20090222837 | WORK FLOW MANAGEMENT SYSTEM AND WORK FLOW MANAGEMENT METHOD - A work flow management method is executed by a system including a person-in-charge terminal and a management device, wherein the person-in-charge terminal receives an input of information containing a value requested for decision through an operation by a person in charge, transmits the input information as application information to the management device, and receives result information of an examination of the application information from the management device; and the management device notifies a decider of the application information received from the person-in-charge terminal, receives, through the operation of the decider, an input of the result information containing at least a range of values that the application information can take when the application information is approved, and transmits the result information to the person-in-charge terminal. | 09-03-2009 |
20090241124 | ONLINE MULTIPROCESSOR SYSTEM RELIABILITY DEFECT TESTING - A multiprocessor system comprising a plurality of processors is disclosed. The plurality of processors includes a first processor including a first monitor on-chip and a second processor including a second monitor on-chip. The first monitor on-chip is configured to measure load on the second processor and the second monitor on-chip is configured to measure load on the first processor. The first monitor on-chip is configured to cause the second monitor on-chip to perform a self-test on the second processor if the load on the second processor is below a second processor load threshold value and the second monitor on-chip is configured to cause the first monitor on-chip to perform a self-test on the first processor if the load on the first processor is below a first processor load threshold value. | 09-24-2009 |
20090249352 | Resource Utilization Monitor - A method for load-balancing threads among a plurality of processing units is disclosed. The method may include a first processing unit executing a plurality of software threads using a respective plurality of hardware strands. The plurality of hardware strands may share at least one hardware resource within the first processing unit. The method may further include monitoring the at least one hardware resource for each respective hardware strand. Monitoring may include, for each respective hardware resource of the at least one hardware resource: maintaining information regarding the respective hardware strand requesting to use the respective hardware resource but failing to do so because the respective hardware resource is in use, comparing the information to a threshold, and generating an interrupt if the information exceeds the threshold. One or more load-balancing operations may be performed in response to the interrupt. | 10-01-2009 |
20090254918 | Mechanism for Performance Optimization of Hypertext Preprocessor (PHP) Page Processing Via Processor Pinning - A method, system, and computer program product for optimizing “Hypertext Preprocessor” (PHP) processes by identifying the PHP pages which are active on a server and forwarding requests for specific pages to a processor which has recently processed that page. A request processing optimization (RPO) utility assigns an initial request received at the server for a PHP page based on a number of factors which may include a relative usage level of a processor within a pool of available processors on a server. The RPO utility assigns a request to additional processors based on: (1) a threshold frequency of page requests; and (2) a resource intensive factor of a page request measured by average response time of the page request. The assignment of PHP pages to a particular processor(s) enhances cache performance since the requisite code for a specific PHP page is loaded into the processor's cache. | 10-08-2009 |
20090260016 | SYSTEM AND/OR METHOD FOR BULK LOADING OF RECORDS INTO AN ORDERED DISTRIBUTED DATABASE - In a large-scale transaction such as the bulk loading of new records into an ordered, distributed database, a transaction limit such as an insert limit may be chosen, partitions on overfull storage servers may be designated to be moved to underfull storage servers, and the move assignments may be based, at least in part, on the degree to which a storage server is underfull and the move and insertion costs of the partitions to be moved. | 10-15-2009 |
20090260017 | WORKFLOW EXECUTION DEVICE AND WORKFLOW EXECUTION METHOD - A workflow execution device is provided. Latest-update-date information, indicating when an update was performed, is added to each step of a workflow definition file. When each step of the workflow definition file is executed, processing-execution-date information, indicating when the execution was performed, is added to the processed data. Subsequently, when each step of the workflow definition file is executed, the final processing date of the data is determined from the processing-execution-date information added to the processed data. In a case where the update date determined from the latest-update-date information prior to the execution is later than the final processing date of the data to be processed in the execution, the processing in the execution is cancelled. | 10-15-2009 |
20090265713 | PROACTIVE CORRECTION ALERTS - Computerized methods and systems for creating and documenting protocol orders in a molecular diagnostic laboratory environment are provided. Utilizing the methods and systems described herein, protocol statements may require values to be entered in association therewith prior to permitting access to subsequent protocol orders. Accordingly, more accurate test runs and, consequently, more accurate test results, may be achieved. Additionally, as values associated with protocol statements are electronically captured, in accordance with embodiments hereof, such values may be searched to evaluate trends or identify protocol orders and/or results that may be affected by a later discovered error or the like. | 10-22-2009 |
20090271798 | Method and Apparatus for Load Balancing in Network Based Telephony Application - Techniques are disclosed for load balancing in networks such as those networks handling telephony applications. By way of example, a method for directing requests associated with calls to servers in a system comprised of a network routing calls between a plurality of nodes wherein a node participates in a call as a caller or a receiver and wherein a load balancer sends requests associated with calls to a plurality of servers comprises the following steps. A request associated with a node belonging to a group including a plurality of nodes is received. A server is selected to receive the request. A subsequent request is received. A determination is made whether or not the subsequent request is associated with a node belonging to the group. The subsequent request is sent to the server based on determining that the subsequent request is associated with a node belonging to the group. By way of another example, a method for balancing requests among servers in a client server environment wherein a load balancer sends requests associated with a client to a plurality of servers comprises the following steps. Information is maintained regarding a weighted number of requests assigned to each server. The load balancer receives a request from a client. A server s | 10-29-2009 |
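The group-affinity balancing in the telephony entry above can be sketched as follows. This is a minimal illustration, not the patented mechanism: the class name, the least-loaded choice for a group's first request, and the per-server counters are all assumptions.

```python
# Illustrative sketch of group-affinity routing: the first request from a
# group picks a server (here, the least-loaded one); later requests from the
# same group stick to that server. Names are hypothetical.

class GroupAffinityBalancer:
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}   # weighted request count per server
        self.group_map = {}                   # group id -> assigned server

    def assign(self, group_id):
        server = self.group_map.get(group_id)
        if server is None:
            # New group: pick the server with the fewest assigned requests.
            server = min(self.load, key=self.load.get)
            self.group_map[group_id] = server
        self.load[server] += 1
        return server
```

The key property is the second branch: once a group is mapped, every subsequent request from any node in that group lands on the same server.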
20090276787 | PERFORMING DYNAMIC SIMULATIONS WITHIN VIRTUALIZED ENVIRONMENT - A method, apparatus, and article of manufacture for simulating workloads experienced by multiple partitions in a virtualized system are provided. A master workload driver initiates, coordinates and regulates one or more workload drivers that execute one or more workload simulation tasks in a logical partition. Further, each workload driver may be configured to report a measure of performance regarding the workload to the master workload driver, where results of many workload drivers may be correlated and analyzed. A configuration file specifies the characteristics of each simulation. Further, the rate and nature of workloads may be adjusted dynamically during a given simulation to model the performance under different real-world scenarios of different computational loads that may be experienced by the virtualized system. | 11-05-2009 |
20090288095 | Method and System for Optimizing a Job Scheduler in an Operating System - A workload scheduler determines how to submit jobs to several scheduler agents across multiple systems. The scheduler engine determines the systems to which it is able to submit jobs. A job is received and analyzed to determine systems to which the job can be submitted. The scheduler engine determines which system will receive the job by evaluating the next system in line and determining if the job can be sent to that system and if that system is currently in a healthy state. The scheduler engine sends the job to the selected system. The scheduler agents inform the scheduler engine when the job is submitted and when it is executed. Once a time period has expired, the engine evaluates the health of each of the systems based on the number of jobs submitted and executed by each system. | 11-19-2009 |
20090288096 | LOAD BALANCING FOR IMAGE PROCESSING USING MULTIPLE PROCESSORS - A method and system for load balancing the work of NP processors (NP≧3) configured to generate each image of multiple images in a display area of a display device. The process for each image includes: dividing the display area logically into NP initial segments ordered along an axis of the display area; assigning each processor to a corresponding initial segment; assigning a thickness to each initial segment; simultaneously computing an average work function per pixel for each initial segment; generating a cumulative work function from the average work function per pixel for each initial segment; partitioning a work function domain of the cumulative work function into NP sub-domains; determining NP final segments of the display area by using the cumulative work function to inversely map boundaries of the sub-domains onto the axis; assigning each processor to a final segment, and displaying and/or storing the NP final segments. | 11-19-2009 |
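The cumulative-work-function partitioning in the image-processing entry above lends itself to a short sketch: build a prefix sum of per-row work estimates, split the work domain into NP equal sub-domains, and inversely map each boundary onto the row axis. Function and variable names are hypothetical, and per-row (rather than per-pixel) work is a simplification.

```python
# Illustrative sketch: split a display area of pixel rows among NP processors
# so each gets roughly equal work, using a prefix sum (cumulative work
# function) of per-row work estimates. Names are assumptions.

import bisect

def balance_segments(row_work, np_procs):
    """Return row boundaries giving each processor ~equal cumulative work."""
    cumulative = []
    total = 0.0
    for w in row_work:
        total += w
        cumulative.append(total)
    # Partition the work domain [0, total] into NP equal sub-domains and
    # inversely map each sub-domain boundary onto the row axis.
    boundaries = [0]
    for k in range(1, np_procs):
        target = total * k / np_procs
        boundaries.append(bisect.bisect_right(cumulative, target))
    boundaries.append(len(row_work))
    return boundaries
```

For uniform work `[1, 1, 1, 1]` and two processors this yields an even split at row 2; for skewed work `[3, 1, 1, 1]` the boundary moves to row 1 so each processor still gets work 3.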
20090300642 | FILE INPUT/OUTPUT SCHEDULER - Handling of input or output (I/O) to or from a media device may be implemented in a system having a memory, a processor unit with a main processor and an auxiliary processor having an associated local memory, and the media device. An incoming I/O request received from an application running on the processor unit may be serviced according to a schedule. A set of processor executable instructions configured to implement I/O handling may include media filter layers. I/O handling may alternatively comprise: receiving an incoming I/O request from an application running on a main processor; inserting the request into a schedule embodied in the main memory; and implementing the request according to the schedule and one or more filters, at least one of which is implemented by an auxiliary processor. | 12-03-2009 |
20090313634 | Dynamically selecting an optimal path to a remote node - In a multi-cell system, a dynamic adjustment of a workload of a data path between multiple cells of the system may be preferred to eliminate system latencies during operation of the system. The dynamic adjustment may include monitoring a workload, or an amount of data traffic, of a data path and determining if the monitored workload of the data path exceeds a predetermined workload threshold. If the workload threshold is exceeded, the dynamic adjustment of the workload of the data path may include transferring a portion of data from the monitored data path to another data path that is also connected to the same cells as the monitored data path. The transfer of data may be to a previously-existing data path that has capacity for the data, to a newly-created data path, or to both a previously-existing data path and a new data path. | 12-17-2009 |
20090313635 | SYSTEM AND/OR METHOD FOR BALANCING ALLOCATION OF DATA AMONG REDUCE PROCESSES BY REALLOCATION - The subject matter disclosed herein relates to a system and/or method for allocating data among reduce processes. | 12-17-2009 |
20090313636 | Executing An Application On A Parallel Computer - Methods, apparatus, and products are disclosed for executing an application on a parallel computer that include: executing, by a current compute node, a current task of the application, including producing results; determining, by the current compute node in dependence upon current network characteristics and application characteristics, whether to transfer the results to a next compute node for further processing by a next task on the next compute node or to execute the next task for further processing of the results on the current compute node; transferring, by the current compute node, the results to the next compute node for further processing by the next task on the next compute node if the determination specifies transferring the results to the next node; and executing, by the current compute node, the next task for further processing of the results if the determination specifies executing the next task on the current compute node. | 12-17-2009 |
20090320038 | REDUCING INSTABILITY WITHIN A HETEROGENEOUS STREAM PROCESSING APPLICATION - Embodiments of the invention provide a method for reducing instability in a heterogeneous job plan of a stream processing application. In one embodiment, a job manager may be configured to select a job plan with the objective of minimizing the potential instability of the job plan. Each job plan may provide a directed graph connecting processing elements (both native and non-native). That is, each job plan illustrates data flow through the stream application framework. The job plan may be selected from multiple available job plans, or may be generated by replacing processing elements of a given job plan. Further, the job plan may be selected on the basis of other objectives in addition to an objective of minimizing the potential instability of the job plan, such as minimizing cost, minimizing execution time, minimizing resource usage, etc. | 12-24-2009 |
20090320039 | REDUCING INSTABILITY OF A JOB WITHIN A HETEROGENEOUS STREAM PROCESSING APPLICATION - Embodiments of the invention provide a method for reducing instability in a heterogeneous job plan of a stream processing application. In one embodiment, a job manager may be configured to select a job plan with the objective of minimizing the potential instability of the job plan. Each job plan may provide a directed graph connecting processing elements (both native and non-native). That is, each job plan illustrates data flow through the stream application framework. The job plan may be selected from multiple available job plans, or may be generated by replacing processing elements of a given job plan. Further, the job plan may be selected on the basis of other objectives in addition to an objective of minimizing the potential instability of the job plan, such as minimizing cost, minimizing execution time, minimizing resource usage, etc. | 12-24-2009 |
20090320040 | Preserving hardware thread cache affinity via procrastination - A method, device, system, and computer readable medium are disclosed. In one embodiment the method includes managing one or more threads attempting to steal task work from one or more other threads. The method will block a thread from stealing a mailed task that is also residing in another thread's task pool. The blocking occurs when the mailed task was mailed to an idle third thread. Additionally, some tasks are deferred instead of immediately spawned. | 12-24-2009 |
20090320041 | COMPUTER PROGRAM AND METHOD FOR BALANCING PROCESSING LOAD IN STORAGE SYSTEM, AND APPARATUS FOR MANAGING STORAGE DEVICES - In a distributed storage system, client terminals make access to virtual storage areas provided as logical segments of a storage volume. Those logical segments are associated with physical segments that serve as real data storage areas. A management data storage unit stores management data describing the association between such logical segments and physical segments. Upon receipt of access requests directed to a specific access range, a segment identification unit consults the management data to identify logical segments in the specified access range and their associated physical segments. A remapping unit subdivides the identified logical segments and physical segments into logical sub-segments and physical sub-segments, respectively, and remaps the logical sub-segments to the physical sub-segments according to a predetermined remapping algorithm. A data access unit executes the access requests based on the remapped logical sub-segments and physical sub-segments. | 12-24-2009 |
20090328054 | ADAPTING MESSAGE DELIVERY ASSIGNMENTS WITH HASHING AND MAPPING TECHNIQUES - A system for efficiently distributing messages to a server farm uses a hashing function and a map-based function, or combinations thereof, to distribute messages associated with a processing request. In one implementation, for example, the hashing function has inputs of an identifier for each message in a processing request, and a list of available servers. Upon identifying that any of the servers is unavailable, or will soon be unavailable, the load balancing server uses an alternate map-based assignment function for new requests, and inputs each assignment into a server map. The load balancing server can then use the map or the hashing function, as appropriate, to direct messages to an operating server. Upon receiving an updated list of available servers, the load balancing server can switch back to the hashing function after the map is depleted, and use the updated server list as an argument. | 12-31-2009 |
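The hash-then-map switchover in the entry above can be sketched as below. This is only one plausible shape for the scheme: the class, the CRC32 hash, and the round-robin choice inside the map-based path are all assumptions, not the patented functions.

```python
# Illustrative sketch: assign messages by hashing over the server list while
# it is stable; once a server is draining, switch to an explicit map for new
# requests, then resume hashing with the updated list. Names are hypothetical.

import zlib

class HybridAssigner:
    def __init__(self, servers):
        self.servers = list(servers)
        self.assignment_map = {}   # message id -> server, used while draining
        self.draining = False

    def _hash_assign(self, message_id):
        # zlib.crc32 gives a stable, deterministic hash of the identifier.
        digest = zlib.crc32(message_id.encode())
        return self.servers[digest % len(self.servers)]

    def assign(self, message_id):
        if self.draining:
            # Map-based path: record explicit assignments to live servers.
            server = self.assignment_map.setdefault(
                message_id,
                self.servers[len(self.assignment_map) % len(self.servers)])
            return server
        return self._hash_assign(message_id)

    def mark_draining(self, updated_servers):
        self.servers = list(updated_servers)
        self.draining = True

    def resume_hashing(self):
        # Called once mapped requests are depleted and the list is current.
        self.assignment_map.clear()
        self.draining = False
```

The map guarantees that while the server list is in flux, a message id always resolves to the same live server, which the pure hash cannot promise across a membership change.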
20090328055 | SYSTEMS AND METHODS FOR THREAD ASSIGNMENT AND CORE TURN-OFF FOR INTEGRATED CIRCUIT ENERGY EFFICIENCY AND HIGH-PERFORMANCE - A system and method for improving efficiency of a multi-core architecture includes, in accordance with a workload, determining a number of cores to shut down based upon a metric that combines parameters to represent operational efficiency. Threads of the workload are reassigned to cores remaining active by assigning threads based on priority constraints and thread execution history to improve the operational efficiency of the multi-core architecture. | 12-31-2009 |
20090328056 | Entitlement model - Some embodiments of an entitlement model have been presented. In one embodiment, a centralized server distributes copies of an operating system from a software vendor to a set of virtual guests of a virtual host running on a physical computing machine. The centralized server and the physical computing machine are coupled to each other within an internal network of a customer of the software vendor, whereas the centralized server has access to the software vendor external to the internal network of the customer. The centralized server may interact with a hypervisor of the physical computing machine to determine what type of license of the operating system the virtual host has and a number of copies of the operating system requested by the virtual guests. | 12-31-2009 |
20100011371 | Performance of unary bulk IO operations on virtual disks by interleaving - A method and system are provided for executing a unary bulk input/output operation on a virtual disk using interleaving. The performance improvement due to the method is expected to increase as more information about the configuration of the virtual disk and its implementation is taken into account. Performance factors considered may include contention among tasks implementing the parallel process, load on the storage system from other processes, performance characteristics of components of the storage system, and the virtualization relationships (e.g., mirroring, striping, and concatenation) among physical and virtual storage devices within the virtual configuration. | 01-14-2010 |
20100023950 | WORKFLOW PROCESSING APPARATUS - A workflow processing apparatus receives interface information of a function provided by a device on a network from the device on the network and sends, during the processing of a workflow, input information based on the interface information of the function provided by the device on the network and a program for controlling the function provided by the device on the network to the device on the network. | 01-28-2010 |
20100031266 | SYSTEM AND METHOD FOR DETERMINING A NUMBER OF THREADS TO MAXIMIZE UTILIZATION OF A SYSTEM - A system and associated method for determining a number of threads to maximize system utilization. The method begins with determining a first value which corresponds to the current system utilization. Next the method determines a second value which corresponds to the current number of threads in the system. Next the method determines a third value which corresponds to the number of processor cores in the system. Next the method receives a fourth value from an end user which corresponds to the optimal system utilization the end user wishes to achieve. Next the method determines a fifth value which corresponds to the number of threads necessary to achieve the optimal system utilization value received from the end user. Finally, the method sends the fifth value to all running applications. | 02-04-2010 |
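The five-value computation above could be sketched as follows, under a strong simplifying assumption that is mine, not the abstract's: that utilization scales roughly linearly with thread count. Names and the fallback behavior are likewise hypothetical.

```python
# Illustrative sketch: estimate the thread count (the "fifth value") needed
# to reach a target utilization, assuming utilization scales roughly
# linearly with the number of threads. The linear model is an assumption.

def threads_for_target(current_util, current_threads, cores, target_util):
    if current_util <= 0 or current_threads <= 0:
        # No usable measurement yet: fall back to one thread per core.
        return cores
    per_thread = current_util / current_threads  # utilization one thread adds
    needed = round(target_util / per_thread)
    return max(1, needed)
```

For example, if 4 threads yield 50% utilization on an 8-core system and the user targets 100%, the sketch suggests 8 threads.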
20100031267 | Distribution Data Structures for Locality-Guided Work Stealing - A data structure, the distribution, may be provided to track the desired and/or actual location of computations and data that range over a multidimensional rectangular index space in a parallel computing system. Examples of such iteration spaces include multidimensional arrays and counted loop nests. These distribution data structures may be used in conjunction with locality-guided work stealing and may provide a structured way to track load balancing decisions so they can be reproduced in related computations, thus maintaining locality of reference. They may allow computations to be tied to array layout, and may allow iteration over subspaces of an index space in a manner consistent with the layout of the space itself. Distributions may provide a mechanism to describe computations in a manner that is oblivious to precise machine size or structure. Programming language constructs and/or library functions may support the implementation and use of these distribution data structures. | 02-04-2010 |
20100043010 | DATA PROCESSING METHOD, CLUSTER SYSTEM, AND DATA PROCESSING PROGRAM - Provided is a data processing system which includes: a first computer for receiving a processing request for a task processing, executing the processing, and holding data used therein; and a second computer for holding a duplicate of the data held in the first computer, halting the first computer if the first computer is determined to be halted, and receiving and processing the processing request. The first computer receives at least an update request as the processing request including request identification information to which unique numbers assigned to the individual processing requests in an ascending order are allocated, updates the held data, and transmits the update request including the request identification information to the second computer. The second computer stores a transmitted reference request and the update request as the processing requests, and processes the processing requests in an ascending order of the unique numbers included in the individual processing requests. | 02-18-2010 |
20100050182 | PARALLEL PROCESSING SYSTEM - A system for processing a user application having a plurality of functions identified for parallel execution. The system includes a client coupled to a plurality of compute engines. The client executes both the user application and a compute engine management module. Each of the compute engines is configured to execute a requested function of the plurality of functions in response to a compute request. If, during execution of the user application by the client, the compute engine management module detects a function call to one of the functions identified for parallel execution, and the module selects a compute engine and sends a compute request to the selected compute engine requesting that it execute the function called. The selected compute engine calculates a result of the requested function and sends the result to the compute engine management module, which receives the result and provides it to the user application. | 02-25-2010 |
20100058352 | System and Method for Dynamic Resource Provisioning for Job Placement - A method for dynamic resource provisioning for job placement includes receiving a request to perform a job on an unspecified computer device. One or more job criteria for performing the job are determined. Each job criteria defines a required operational characteristic needed for a computer device to perform the job. A list of available computer devices is provided. The list includes a plurality of computer devices currently provisioned to perform computer operations. A list of suitable computer devices for performing the job is determined from the list of available computer devices by comparing operational characteristics for each available computer device with the job criteria. The list of suitable computer devices includes one or more computer devices having operational characteristics that satisfy the job criteria. From the list of suitable computer devices, a least active computer device is determined, and the job is forwarded to the least active computer device. | 03-04-2010 |
20100070978 | VDI Storage Overcommit And Rebalancing - A method for managing storage for a desktop pool is described. The desktop pool includes a plurality of virtual machines (VMs), each VM having at least one virtual disk represented as a virtual disk image file on one of a plurality of datastores associated with the desktop pool. To identify a target datastore for a VM, a weight of each datastore is calculated. The weight may be a function of a virtual capacity of the datastore and the sum of maximum sizes of all the virtual disk image files on the datastore. The virtual capacity is a product of the data storage capacity of the datastore and an overcommit factor assigned to the datastore. The target datastore is selected as the datastore having the highest weight. The VM may be moved to or created on the target datastore. | 03-18-2010 |
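The weight calculation above follows directly from the abstract: virtual capacity is physical capacity times an overcommit factor, and the weight subtracts the summed maximum sizes of the images already placed. A minimal sketch, with hypothetical field names and treating the weight as a simple difference:

```python
# Illustrative sketch of the datastore weight from the abstract: virtual
# capacity minus the summed maximum sizes of resident disk image files.
# Field and function names are hypothetical.

def datastore_weight(capacity_gb, overcommit, image_max_sizes_gb):
    virtual_capacity = capacity_gb * overcommit
    return virtual_capacity - sum(image_max_sizes_gb)

def pick_target(datastores):
    """datastores: dict name -> (capacity_gb, overcommit, [image max sizes])."""
    return max(datastores, key=lambda n: datastore_weight(*datastores[n]))
```

For example, a 100 GB store overcommitted 2x holding 110 GB of maximum image sizes weighs 90, while a 100 GB store overcommitted 1.5x holding 20 GB weighs 130 and is chosen.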
20100083274 | HARDWARE THROUGHPUT SATURATION DETECTION - Improved hardware throughput can be achieved when a hardware device is saturated with IO jobs. Throughput can be estimated based on the quantifiable characteristics of incoming IO jobs. When IO jobs are received a time cost for each job can be estimated and stored in memory. The estimates can be used to calculate the total time cost of in-flight IO jobs and a determination can be made as to whether the hardware device is saturated based on completion times for IO jobs. Over time the time cost estimates for IO jobs can be revised based on a comparison between the estimated time cost for an IO job and the actual time cost for the IO job using aggregate IO job completion sequences. | 04-01-2010 |
20100095303 | Balancing A Data Processing Load Among A Plurality Of Compute Nodes In A Parallel Computer - Methods, apparatus, and products are disclosed for balancing a data processing load among a plurality of compute nodes in a parallel computer that include: partitioning application data for processing on the plurality of compute nodes into data chunks; receiving, by each compute node, at least one of the data chunks for processing; estimating, by each compute node, processing time involved in processing the data chunks received by that compute node for processing; and redistributing, by at least one of the compute nodes to at least one of the other compute nodes, a portion of the data chunks received by that compute node in dependence upon the processing time estimated by that compute node. | 04-15-2010 |
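The redistribution step above can be sketched by computing how many chunks each compute node should hold, inversely proportional to its estimated per-chunk cost. This proportional-target form is an assumption for illustration; names are hypothetical.

```python
# Illustrative sketch: give each node a chunk count inversely proportional
# to its estimated per-chunk processing time, so faster nodes take more of
# the redistributed load. Names and the rounding rule are assumptions.

def target_chunks(est_per_chunk, total_chunks):
    """est_per_chunk[i]: estimated seconds per chunk on node i."""
    speed = [1.0 / t for t in est_per_chunk]
    total_speed = sum(speed)
    targets = [int(total_chunks * s / total_speed) for s in speed]
    # Hand leftover chunks (from rounding down) to the fastest nodes.
    leftover = total_chunks - sum(targets)
    order = sorted(range(len(speed)), key=lambda i: -speed[i])
    for i in range(leftover):
        targets[order[i]] += 1
    return targets
```

A node holding more chunks than its target would then ship the excess to nodes below theirs; with nodes twice as slow getting half the chunks, estimated completion times roughly equalize.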
20100095304 | INFORMATION PROCESSING DEVICE AND LOAD ARBITRATION CONTROL METHOD - An information processing device in a simultaneous multi-threading system operates according to an inter-thread performance load arbitration control method, and includes: an instruction input control unit for sharing, among threads, control of inputting instructions to an arithmetic unit that acquires each instruction from memory and performs an operation on the basis of the instruction; a commit stack entry provided for each thread for holding information obtained by decoding the instruction; an instruction completion order control unit for updating the memory and a general purpose register depending on an arithmetic result obtained by the arithmetic unit in an order of the instructions input from the instruction input control unit; and a performance load balance analysis unit for detecting the information registered in the commit stack entry and controlling the instruction input control unit. | 04-15-2010 |
20100131959 | Proactive application workload management - A method is provided for continuous optimization of allocation of computing resources for a horizontally scalable application which has a cyclical load pattern wherein each cycle may be subdivided into a number of time slots. A computing resource allocation application pre-allocates computing resources at the beginning of a time slot based on a predicted computing resource consumption during that slot. During the servicing of the workload, a measuring application measures actual consumption of computing resources. On completion of servicing, the measuring application updates the predicted computing resource consumption profile, allowing optimal allocation of resources. Un-needed computing resources may be released, or may be marked as releasable, for use upon request by other applications, including applications having the same or lower priority than the original application. Methods, computer systems, and computer programs available as a download or on a computer-readable medium for installation according to the invention are provided. | 05-27-2010 |
20100131960 | Systems and Methods for GSLB Based on SSL VPN Users - The present invention provides a system and a method for global server load balancing of a plurality of sites based on a number of Secure Socket Layer Virtual Private Network (SSL VPN) users. The SSL VPN users may access servers at each of the plurality of sites. A global server load balancing (GSLB) virtual server may receive a request to access a server. The GSLB virtual server may load balance a plurality of sites, wherein each of the plurality of sites may further comprise a load balancing virtual server that balances users accessing servers via SSL VPN sessions. The GSLB virtual server may receive, from a first load balancing virtual server at a first site, a first number of current SSL VPN users accessing servers from the first site via SSL VPN sessions. The GSLB virtual server may also receive, from a second load balancing virtual server at a second site, a second number of current SSL VPN users accessing servers from the second site via SSL VPN sessions. The GSLB virtual server may determine to forward the request to one of the first load balancing virtual server of the first site or the second load balancing virtual server of the second site by load balancing SSL VPN users across the plurality of sites based on the first number of current SSL VPN users and the second number of current SSL VPN users. | 05-27-2010 |
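The site-selection decision above reduces, in its simplest reading, to forwarding the request to whichever site reports fewer active SSL VPN users. A minimal sketch with hypothetical names:

```python
# Illustrative sketch: pick the site with the fewest current SSL VPN users,
# as reported by each site's load balancing virtual server. Names are
# hypothetical; real GSLB policies weigh many more signals.

def pick_site(site_user_counts):
    """site_user_counts: dict site name -> current SSL VPN user count."""
    return min(site_user_counts, key=site_user_counts.get)
```

For example, with 120 users at one site and 45 at another, the request is forwarded to the second site.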
20100131961 | PACKAGE REVIEW PROCESS WORKFLOW - A workflow module automates and monitors a package review process. A package review module receives a package created by a contributor to be reviewed for compliance with a set of guidelines. The workflow module initiates, monitors, and manages a plurality of package review tasks to be performed on the package. A user interface module provides a user interface for creating a package and a user interface for reviewing a package. The workflow module automates review tasks, interfaces with external servers performing review tasks, gathers review task results, determines whether to send a notification regarding the status of a review task, sends notifications regarding the status of a review task, and stores successfully reviewed packages in a repository. | 05-27-2010 |
20100146516 | Distributed Task System and Distributed Task Management Method - A distributed task system has a task transaction server and at least one task server. Instead of being merely passively called by the task transaction server for executing a task, the task server performs self-balancing according to task execution conditions and operation conditions of the task server. The task transaction server receives task requests from the task server, records the execution conditions, and provides feedback to the task server, and the task server executes the task according to the received feedback and the operation conditions of the task server. The task transaction server may determine if the task server can execute the task according to the execution conditions of the task, and provide feedback to the task server. A self-balancing unit of the task server may further determine whether the task server is busy and, if not busy, trigger a task execution unit of the task server to execute the task. | 06-10-2010 |
20100146517 | SYSTEM AND METHOD FOR A RATE CONTROL TECHNIQUE FOR A LIGHTWEIGHT DIRECTORY ACCESS PROTOCOL OVER MQSERIES (LOM) SERVER - A system and method for controlling rates for a Lightweight Directory Access Protocol (LDAP) over MQSeries (LoM) server. The system comprises a health metrics engine configured to calculate an actual delay value, at least one LoM server configured to asynchronously obtain the actual delay value from the health metrics engine and place the delay value between one or more requests, and a LDAP master configured to accept the one or more requests and send information in the one or more requests to a LDAP replica. | 06-10-2010 |
20100146518 | All-To-All Comparisons on Architectures Having Limited Storage Space - Mechanisms for performing all-to-all comparisons on architectures having limited storage space are provided. The mechanisms determine a number of data elements to be included in each set of data elements to be sent to each processing element of a data processing system, and perform a comparison operation on at least one set of data elements. The comparison operation comprises sending a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element and sending a second request to main memory for transfer of a second set of data elements into the local memory. A pairwise comparison computation of the all-to-all comparison of data elements operation is performed at approximately the same time as the second set of data elements is being transferred from main memory to the local memory. | 06-10-2010 |
20100153962 | Method and system for controlling distribution of work items to threads in a server - A system and method are presented to control distribution of work items to threads in a server. The system and method include a permit dispenser that keeps track of permits, and a plurality of thread pools each including a queue with a configurable size, being configured with a desired concurrency and a size of the queue that is equal to a total number of work items to be executed by pool threads in the thread pool. The number of permits specifies a total number of threads available for executing the work items in the server. Each pool thread executes a work item in the thread pool, determines whether a thread surplus or a thread deficit exists, and shrinks or grows the thread pool respectively. | 06-17-2010 |
20100153963 | Workload management in a parallel database system - Embodiments of the present invention are directed to a workload management service component of a parallel database-management system that monitors usage of computational resources in the parallel database-management system and that provides a query-processing-task-management interface and a query-execution engine that receives query-processing requests associated with one of a number of services from host computers and accesses the workload-management-services component to determine whether to immediately launch execution of query-processing tasks corresponding to the received query-processing requests or to place the query-processing requests on wait queues for subsequent execution based on the current usage of computational resources within the parallel database-management system. | 06-17-2010 |
20100153964 | LOAD BALANCING OF ADAPTERS ON A MULTI-ADAPTER NODE - Load balancing of adapters on a multi-adapter node of a communications environment. A task executing on the node selects an adapter resource unit to be used as its primary port for communications. The selection is based on the task's identifier, and facilitates a balancing of the load among the adapter resource units. Using the task's identifier, an index is generated that is used to select a particular adapter resource unit from a list of adapter resource units assigned to the task. The generation of the index is efficient and predictable. | 06-17-2010 |
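The identifier-driven selection described above can be sketched with a simple modulo index; the modulo choice and the function name are assumptions consistent with the abstract's "efficient and predictable" index generation, not the patented method itself.

```python
def select_adapter(task_id, adapter_list):
    """Generate an index from the task's identifier and use it to
    pick an adapter resource unit from the list assigned to the task.

    Modulo arithmetic keeps the choice cheap and deterministic while
    spreading consecutive task ids evenly across all adapters, which
    balances the load among the adapter resource units.
    """
    return adapter_list[task_id % len(adapter_list)]
```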
20100153965 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER BASED ON INTER-THREAD COMMUNICATIONS - A technique for operating a high performance computing (HPC) cluster includes monitoring communication between threads assigned to multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more of the threads are moved to a different one of the multiple processors based on the communication between the threads. | 06-17-2010 |
20100153966 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER USING LOCAL JOB TABLES - A technique for operating a high performance computing cluster includes monitoring workloads of multiple processors. The high performance computing cluster includes multiple nodes that each include two or more of the multiple processors. Workload information for the multiple processors is periodically updated in respective local job tables maintained in each of the multiple nodes. Based on the workload information in the respective local job tables, one or more threads are periodically moved to a different one of the multiple processors. | 06-17-2010 |
20100162260 | Data Processing Apparatus - A method of providing a service application on a data processing apparatus comprising an interconnect and a plurality of data processing nodes, the method comprising the steps of: registering a service class at the interconnect, the service class having an associated service descriptor; generating a service object at a data processing node, the service object comprising an instance of the service class; and storing subscription information at the interconnect to permit messages to be routed to the service object in accordance with the service descriptor. | 06-24-2010 |
20100162261 | Method and System for Load Balancing in a Distributed Computer System - In an embodiment, a distributed computer system comprises a plurality of computers connected in substantial logical ring architecture. The computers are configured having a synchronized clock operation. At least one predetermined token designated with any one of a busy or an idle status circulates through the logical ring, wherein the computers are configured to check the status and give away or receive a predetermined job for completion, based on one or more predetermined conditions. Further, any deadlock generated is released by preempting the jobs based on predetermined criteria. | 06-24-2010 |
20100169892 | Processing Acceleration on Multi-Core Processor Platforms - Embodiments disclosed herein include an accelerator module that modifies a single application to run on multiple processing cores of a single CPU. In one aspect, the application performs a task that includes some parallel operations and some serial operations. The parallel tasks may be run on different cores concurrently. In addition, serial tasks may be broken up to execute among different cores simultaneously without errors. In a particular embodiment, a FFMPEG decoding application is modified by the accelerator module to execute on multiple cores and perform video decoding in real time or faster than real time. | 07-01-2010 |
20100169893 | Computing Resource Management Systems and Methods - An information handling system may include a first subsystem operable to receive data associated with computing resources from at least one computing resource provider. The system may further include a second subsystem in communication with the first subsystem, the second subsystem operable to provide the computing resources to at least one computing resource customer, wherein the at least one computing resource provider receives compensation paid by the at least one computing resource customer for completion of a workload. A method for managing a computing resource within an information handling system may include receiving data associated with the computing resource from at least one computing resource provider and providing the computing resources to at least one computing resource customer. The at least one computing resource provider may receive compensation paid by the at least one resource customer for completion of a workload. | 07-01-2010 |
20100175070 | VIRTUAL MACHINE MANAGING DEVICE, VIRTUAL MACHINE MANAGING METHOD, AND VIRTUAL MACHINE MANAGING PROGRAM - An object of the present invention is to suppress a variation in virtual machine startup times when multiple virtual machines are started in a computer system having multiple virtual machine providing servers. Execution server distribution unit | 07-08-2010 |
20100186020 | SYSTEM AND METHOD OF MULTITHREADED PROCESSING ACROSS MULTIPLE SERVERS - In one embodiment the present invention includes a computer implemented system and method of multithreaded processing on multiple servers. Jobs may be received in a jobs table for execution. Each of a plurality of servers may associate a thread for executing a particular job type. As a job is received in the job table, the associated thread on each server may access the jobs table and pick up the job if the job type for the job is associated with the thread. Jobs may include sequential and parallel tasks to be performed. Sequential job tasks may be performed by one associated thread on one server, while parallel job tasks may be performed by each associated thread on each server. In one embodiment, a metadata table is used to coordinate multithreaded processing across multiple servers. | 07-22-2010 |
20100192158 | Modeling Computer System Throughput - A method of determining an estimated data throughput capacity for a computer system includes the steps of creating a first model of data throughput of a central processing subsystem in the computer system as a function of latency of a memory subsystem of the computer system; creating a second model of the latency in the memory subsystem as a function of bandwidth demand of the memory subsystem; and finding a point of intersection of the first and second models. The point of intersection corresponds to a possible operating point for said computer system. | 07-29-2010 |
20100205609 | USING TIME STAMPS TO FACILITATE LOAD REORDERING - Some embodiments of the present invention provide a system that supports load reordering in a processor. The system maintains at least one counter value for each thread which is used to assign time stamps for the thread. While performing a load for the thread, the system reads a time stamp from a cache line to which the load is directed. Next, if the counter value is equal to the time stamp, the system performs the load. Otherwise, if the counter value is greater than the time stamp, the system performs the load and increases the time stamp to be greater than or equal to the counter. Finally, if the load is a speculative load, which is speculatively performed earlier than an older load in program order, and the counter value is less than the time stamp, the system fails speculative execution for the thread. | 08-12-2010 |
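The three time-stamp cases above can be written out as a small decision function. The names, return values, and the behavior of a non-speculative load that finds a newer stamp (simply performed here) are illustrative assumptions, not the processor's actual interface.

```python
def handle_load(counter, stamp, speculative=False):
    """Apply the time-stamp rules to one load.

    Returns (action, updated_stamp): the load is performed when the
    thread's counter is >= the cache line's time stamp (raising the
    stamp when the counter is strictly greater), while a speculative
    load that finds a newer stamp fails speculation for the thread.
    """
    if counter == stamp:
        return "perform", stamp
    if counter > stamp:
        return "perform", counter  # stamp raised to >= the counter
    if speculative:
        return "fail-speculation", stamp
    return "perform", stamp  # assumed handling; abstract leaves this case open
```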
20100211958 | AUTOMATED RESOURCE LOAD BALANCING IN A COMPUTING SYSTEM - A method for automated resource load balancing in a computing system includes partitioning a plurality of physical resources to create a plurality of dedicated resource sets. A plurality of separate environments are created on the computing system. Each created separate environment is associated with at least one dedicated resource set. The method further includes establishing a user policy that includes a utilization threshold, and for each separate environment, monitoring the utilization of the associated at least one dedicated resource set. The physical resources associated with a particular separate environment are automatically changed based on the monitored utilization for the particular separate environment, and in accordance with the user policy. This provides automated resource load balancing in the computing system. | 08-19-2010 |
20100223621 | Statistical tracking for global server load balancing - Server load-balancing operation-related data, such as data associated with a system configured for global server load balancing (GSLB) that orders IP addresses into a list based on a set of performance metrics, is tracked. Such operation-related data includes inbound source IP addresses (e.g., the address of the originator of a DNS request), the requested host and zone, identification of the selected “best” IP addresses resulting from application of a GSLB algorithm and the selection metric used to decide on an IP address as the “best” one. Furthermore, the data includes a count of the selected “best” IP addresses selected via application of the GSLB algorithm, and for each of these IP addresses, the list of deciding performance metrics, along with a count of the number of times each of these metrics in the list was used as a deciding factor in selection of this IP address as the best one. This tracking feature allows better understanding of GSLB policy decisions (such as those associated with performance, maintenance, and troubleshooting) and intelligent deployment of large-scale resilient GSLB networks. | 09-02-2010 |
20100223622 | Non-Uniform Memory Access (NUMA) Enhancements for Shared Logical Partitions - In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition. | 09-02-2010 |
20100223623 | METHODS AND SYSTEMS FOR WORKFLOW MANAGEMENT - Systems and methods are described for workflow management, and in particular, for workflow management with respect to filming. In response to a filming permit request, a workflow computer system examines workloads associated with permit coordinators. Optionally, the examination takes into account coordinator performance in attempting to balance workloads. The permit request is routed to a selected permit coordinator who is tasked with resolving permit issues. In addition, the permit request is routed to approving entities associated with the permit workflow. Optionally, conflicts with other permits are identified. Substantially real-time workflow status updates are provided to the requester and/or coordinator. The workflow computer system automatically identifies to the coordinator deficiencies associated with the permit that are to be resolved. | 09-02-2010 |
20100229180 | Information processing system - An information processing system includes a first system and a second system. The first system and the second system each includes: hardware; a compensation section configured to provide execution environments for execution of a process using the hardware of the system to which the compensation section belongs; and a processing section configured to execute a predetermined process in the execution environments provided by the compensation section. The hardware of the first system and the hardware of the second system are different in nature from each other. The compensation section of one of the first system and the second system compensates for the differences between the hardware of the first system and the hardware of the second system to provide the processing section of the other with the execution environments which are not affected by the differences between the hardware of the first system and the hardware of the second system. | 09-09-2010 |
20100235845 | SUB-TASK PROCESSOR DISTRIBUTION SCHEDULING - A method for processing processor executable tasks and a processor readable medium having embodied therein processor executable instructions for implementing the method are disclosed. A system for distributing processing work amongst a plurality of distributed processors is also disclosed. A task generated with a local node is divided into one or more sub-tasks. An optimum number of nodes x on which to process the sub-tasks is determined. If x is greater than one, a determination is made to either (1) execute the task at the local node with the processor unit, (2) distribute the task among two or more local node processors, (3) distribute the task to one or more of the distributed nodes accessible to the local node over a LAN, or (4) distribute the task to one or more of the distributed nodes that are accessible to the local node over a WAN. | 09-16-2010 |
20100251256 | Scheduling Data Analysis Operations In A Computer System - A technique includes receiving identifiers from a plurality of nodes. Each identifier identifies an associated data object, and at least some of the data objects are replicated on different nodes. The technique includes scheduling analysis of the data objects on the nodes based at least in part on a distribution of replicas of the data objects among the nodes and modeled performances of the nodes. | 09-30-2010 |
20100251257 | METHOD AND SYSTEM TO PERFORM LOAD BALANCING OF A TASK-BASED MULTI-THREADED APPLICATION - A method and system to balance the load of a task-based multi-threaded application on a platform. When the work required by the multi-threaded application is represented as a task with a computational requirement that is proportional to the amount of the work, embodiments of the invention control the recursive binary task division of the task using auxiliary partitions to create subtasks of balanced loads to enhance resource utilization and to improve application performance. The task is binary partitioned recursively into a plurality of subtasks until the plurality of subtasks is equal to the plurality of resources available on the platform to execute the subtasks. | 09-30-2010 |
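The recursive binary division above can be sketched as follows, assuming the number of available resources is a power of two so the recursion bottoms out evenly; the auxiliary-partition logic the abstract mentions is omitted from this simplified sketch.

```python
def divide(work, resources):
    """Recursively split `work` in half until the number of subtasks
    equals the number of resources available on the platform.

    Each resulting subtask carries an equal share of the computational
    requirement, giving the balanced loads the method aims for.
    """
    if resources <= 1:
        return [work]
    return divide(work / 2, resources // 2) + divide(work / 2, resources // 2)
```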
20100251258 | RECORDING MEDIUM HAVING LOAD BALANCING PROGRAM RECORDED THEREON, LOAD BALANCING APPARATUS AND METHOD THEREOF - A load balancing method for servers including allocating a job to one or more servers, respectively, having a load lower than a first reference value; upon detection of a first server having a load that is higher than the first reference value and is lower than a second reference value, reducing a load of a second server having the lowest load among the servers by load balancing; and upon detection of any server having a load that is higher than the second reference value, reallocating a job of that server to another server having the lowest load among the servers. | 09-30-2010 |
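One pass of the reallocation step above — moving work from any server above the second reference value to the least-loaded server — might look like this simplified sketch; the function name and the amount of load moved are assumptions, not the recorded program's behavior.

```python
def reallocate(loads, second_ref):
    """Shift the load above `second_ref` from each overloaded server
    to whichever server currently has the lowest load.

    `loads` maps a server name to its current load; a new mapping
    is returned so the input is left unchanged.
    """
    loads = dict(loads)
    for server in list(loads):
        if loads[server] > second_ref:
            excess = loads[server] - second_ref
            target = min(loads, key=loads.get)  # lowest-loaded server
            loads[server] -= excess
            loads[target] += excess
    return loads
```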
20100251259 | System And Method For Recruitment And Management Of Processors For High Performance Parallel Processing Using Multiple Distributed Networked Heterogeneous Computing Elements - A parallel processing computer is described that has several processing devices of several different processing device types each communicating over a computer network. The computer has at least one conversion device in communication with the processing devices, the conversion device being a processing device having conversion code for translating at least some task allocation and other messages from a format understood by the conversion device into a format understood for execution by a particular type of the several types of the processing devices. The computer also has at least one access device in communication with the at least one conversion device, the access device having program code for allocating tasks to processing devices and generating task allocation messages to processing devices. The computer network in an embodiment involves portions of the cellular telephone network as well as part of the internet. | 09-30-2010 |
20100262974 | Optimized Virtual Machine Migration Mechanism - A virtual machine management system may perform a three phase migration analysis to move virtual machines off of less efficient hosts to more efficient hosts. In many cases, the migration may allow inefficient host devices to be powered down and may reduce overall energy costs to a datacenter or other user. The migration analysis may involve performing a first consolidation, a load balancing, and a second consolidation when consolidating virtual machines and freeing host devices. The migration analysis may also involve performing a first load balancing, a consolidation, and a second load balancing when expanding capacity. | 10-14-2010 |
20100262975 | AUTOMATED WORKLOAD SELECTION - A job submission method that presents a set of algorithms that provide automated workload selection to a batch processing system that has the ability to receive and run jobs on various computing resources simultaneously is provided. If all machines in the batch system are running jobs, a queue containing the extra jobs for execution results. For compute intensive workloads, such as chip design, an automated workload selection system software layer submits jobs to the batch processing system. This keeps the batch processing system continually full of useful work. The job submission system provides for organizing workloads, assigning relative ratios between workloads, associating arbitrary workload validation algorithms with a workload or parent workload, associating arbitrary selection algorithms with a workload or workload group, defining high priority workloads that preserve fairness, and balancing the workload selection based on current status of the batch system, validation status, and the workload ratios. | 10-14-2010 |
20100269118 | SPECULATIVE POPCOUNT DATA CREATION - A method and a data processing system by which population count (popcount) operations are efficiently performed without incurring the latency and loss of critical processing cycles and bandwidth of real time processing. The method comprises: identifying data to be stored to memory for which a popcount may need to be determined; speculatively performing a popcount operation on the data as a background process of the processor while the data is being stored to memory; storing the data to a first memory location; and storing a value of the popcount generated by the popcount operation within a second memory location. The method further comprises: determining a size of data; determining a granular level at which the popcount operation on the data will be performed; and reserving a size of said second memory location that is sufficiently large to hold the value of the popcount. | 10-21-2010 |
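The popcount itself is just a count of set bits over the stored data; the following is a software analogue of the value the background operation described above would produce and store alongside the data.

```python
def popcount(data: bytes) -> int:
    """Number of 1 bits in the buffer -- the value that would be
    computed speculatively while the data is written to memory,
    then stored in a second memory location for later use.
    """
    return sum(bin(byte).count("1") for byte in data)
```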
20100293552 | Altering Access to a Fibre Channel Fabric - A mechanism is provided for altering access to a network. A virtual I/O server controller in a virtual I/O server operating system receives an indication that an identified communications adapter requires attention. The virtual I/O server controller issues a set of calls to a set of N_port identification virtualization server adapters coupled to the identified communications adapter. Each of the set of calls indicates to each of the set of N_port identification virtualization server adapters a request to move a set of clients from their assigned port on the identified communications adapter to an available port on a failover communications adapter. The set of N_port identification virtualization server adapters moves the set of clients from the identified communications adapter to the failover communications adapter. | 11-18-2010 |
20100299675 | SYSTEM AND METHOD FOR ESTIMATING COMBINED WORKLOADS OF SYSTEMS WITH UNCORRELATED AND NON-DETERMINISTIC WORKLOAD PATTERNS - It has been found that a more reasonable estimation of combined workloads can be achieved by enabling the ability to specify the confidence level in which to estimate the workload values. A method, computer readable medium and system are provided for estimating combined system workloads. The method comprises obtaining a set of quantile-based workload data pertaining to a plurality of systems and normalizing the quantile-based workload data to compensate for relative measures between data pertaining to different ones of the plurality of systems. A confidence interval may then be determined and the confidence interval used to determine a contention probability specifying a degree of predicted workload contention between the plurality of systems according to at least one probabilistic model. The contention probability may then be used to combine workloads for the plurality of systems and a result indicative of one or more combined workloads then provided. | 11-25-2010 |
20100306781 | DETERMINING AN IMBALANCE AMONG COMPUTER-COMPONENT USAGE - The present invention is directed to determining an imbalance among computer-component usage. Based on a performance value (e.g. utilization value, response time, queuing delay, Input/Output operations, bytes transferred, work threads used, connections made, etc.) that describes a respective computer component among a set of computer components, and an average performance value of the set, a component value of each computer component in the set can be determined. Each component value quantifies a contribution of the usage of a respective computer component toward an imbalanced assignment of computer operations. Component values are information rich and comparisons of component values suggest levels of over-utilization and under-utilization of the computer components. Based on the component values of a set of computer components, decisions can be made as to what portion of computer operations should be reassigned to enable computer operations to be executed in a more balanced manner by the set of computer components. | 12-02-2010 |
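As a sketch, a component value can be taken as a component's deviation from the set's average performance value, so positive values suggest over-utilization and negative values under-utilization. The abstract does not give the exact formula, so this difference-from-average form is an assumption.

```python
def component_values(perf):
    """Map each computer component to its deviation from the set's
    average performance value (e.g. a utilization percentage).

    The sign and magnitude of each value indicate how much that
    component contributes to the imbalance across the set.
    """
    avg = sum(perf.values()) / len(perf)
    return {name: value - avg for name, value in perf.items()}
```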
20100333104 | Service-Based Endpoint Discovery for Client-Side Load Balancing - A server farm includes a plurality of server devices. The plurality of server devices includes a plurality of topology service endpoints and a plurality of target service endpoints. A client computing system sends a topology service request to one of the topology service endpoints. In response, the topology service endpoint sends target service endpoint Uniform Resource Identifiers (URIs) to the client computing system. When a client application at the client computing system needs to send a target service request to one of the target service endpoints, the client computing system applies a load balancing algorithm to select one of the target service endpoint URIs. The client computing system then sends a target service request to the target service endpoint identified by the selected one of the target service endpoint URIs. In this way, the client computing system may use a load balancing algorithm appropriate for the client application. | 12-30-2010 |
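Client-side selection over the returned URIs could be as simple as a round-robin cycle; round robin is only one possible algorithm here, since the abstract leaves the choice of load balancing algorithm to the client application.

```python
import itertools

def make_endpoint_picker(target_uris):
    """Return a callable that cycles through the target service
    endpoint URIs supplied by the topology service endpoint, so
    each target service request goes to the next endpoint in turn.
    """
    cycle = itertools.cycle(target_uris)
    return lambda: next(cycle)
```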
20100333105 | PRECOMPUTATION FOR DATA CENTER LOAD BALANCING - Pre-computing a portion of forecasted workloads may enable load-balancing of data center workload, which may ultimately reduce capital and operational costs associated with data centers. Computing tasks performed by the data centers may be analyzed to identify computing tasks that are eligible for pre-computing, and may be performed prior to an actual data request from a user or entity. In some aspects, the pre-computing tasks may be performed during a low-volume workload period prior to a high-volume workload period to reduce peaks that typically occur in data center workloads that do not utilize pre-computation. Statistical modeling methods can be used to make predictions about the tasks that can be expected to maximally contribute to bottlenecks at data centers and to guide the speculative computing. | 12-30-2010 |
20110016473 | MANAGING SERVICES FOR WORKLOADS IN VIRTUAL COMPUTING ENVIRONMENTS - Methods and apparatus involve managing computing services for workloads. A storage of services available to the workloads is maintained as virgin or golden computing images. By way of a predetermined policy, it is identified which of those services are necessary to support the workloads during use. Thereafter, the identified services are packaged together for deployment as virtual machines on a hardware platform to service the workloads. In certain embodiments, services include considerations for workload and service security, quality of service, deployment sequence, storage management, and hardware requirements necessary to support virtualization, to name a few. Meta data in open virtual machine formats (OVF) are also useful in defining these services. Computer program products and computing arrangements are also disclosed. | 01-20-2011 |
20110023048 | INTELLIGENT DATA PLACEMENT AND MANAGEMENT IN VIRTUAL COMPUTING ENVIRONMENTS - Methods and apparatus involve intelligently pre-placing data for local consumption by workloads in a virtual computing environment. Access patterns of the data by the workload are first identified. Based thereon, select data portions are migrated from a first storage location farther away from the workload to a second storage location closer to the workload. Migration also occurs at a time when needed by the workload during use. In this manner, bandwidth for data transmission is minimized. Latency effects created by consumption of remotely stored data are overcome as well. In various embodiments, a data vending service and proxy are situated between a home repository of the data and the workload. Together they serve to manage and migrate the data as needed. Data recognition patterns are disclosed as is apportionment of the whole of the data into convenient migration packets. De/Encryption, (de)compression, computing systems and computer program products are other embodiments. | 01-27-2011 |
20110023049 | OPTIMIZING WORKFLOW EXECUTION AGAINST A HETEROGENEOUS GRID COMPUTING TOPOLOGY - Optimizing workflow execution by the intelligent dispatching of workflow tasks against a grid computing system or infrastructure. For some embodiments, a grid task dispatcher may be configured to dispatch tasks in a manner that takes into account information about an entire workflow, rather than just an individual task. Utilizing information about the tasks (task metadata), such a workflow-scoped task dispatcher may more optimally assign work to compute resources available on the grid, leading to a decrease in workflow execution time and more efficient use of grid computing resources. | 01-27-2011 |
20110029982 | NETWORK BALANCING PROCEDURE THAT INCLUDES REDISTRIBUTING FLOWS ON ARCS INCIDENT ON A BATCH OF VERTICES - A representation of a flow network having vertices connected by arcs is provided. The vertices include a first set of vertices that provide flow to a second set of vertices over arcs connecting the first set and second set of vertices. A balancing procedure in the network is performed that includes redistributing flows on arcs incident on the second set of vertices. The balancing procedure includes selecting a batch of the vertices in the second set, and redistributing flows on arcs incident on the selected batch of vertices. The selecting and redistributing are repeated for other batches of vertices in the second set. | 02-03-2011 |
20110029983 | SYSTEMS AND METHODS FOR DATA AWARE WORKFLOW CHANGE MANAGEMENT - A method includes providing a baseline workflow as an electronic representation of an actual workflow, the baseline workflow including baseline tasks, data items, and baseline data scopes, and providing a fragment workflow as an electronic representation of an actual fragment workflow, the fragment workflow including at least one fragment task, and at least one fragment data scope. A baseline data scope is identified as an affected data scope based on a structural change operation, the baseline workflow and the fragment workflow, and the affected data scope is compared to the at least one fragment data scope to identify at least one change operation. The fragment and baseline workflows are integrated based on the structural change operation to provide an integrated workflow, and the at least one data scope change operation is executed to provide at least one integrated data scope in the integrated workflow. | 02-03-2011 |
20110035754 | WORKLOAD MANAGEMENT FOR HETEROGENEOUS HOSTS IN A COMPUTING SYSTEM ENVIRONMENT - Methods and apparatus involve managing workload migration to host devices in a data center having heterogeneously arranged computing platforms. Fully virtualized images include drivers compatible with varieties of host devices. The images also include an agent that detects a platform type of a specific host device upon deployment. If the specific host is a physical platform type, the agent provisions native drivers. If the specific host is a virtual platform type, the agent also detects a hypervisor. The agent then provisions front-end drivers that are most compatible with the detected hypervisor. Upon decommissioning of the image, the image is returned to its pristine state and saved for later re-use. In other embodiments, detection methods of the agent are disclosed as are computing systems, data centers, and computer program products, to name a few. | 02-10-2011 |
20110041136 | METHOD AND SYSTEM FOR DISTRIBUTED COMPUTATION - A system for processing a computational task is presented. The system includes a plurality of nodes operationally coupled to one another via one or more networks. The plurality of nodes includes a base node including a processing subsystem configured to receive the computational task, select a subset of available nodes from the plurality of nodes based upon a present status, processing capability, distance, network throughput, range, resources, features, or combinations thereof of the plurality of nodes, divide the computational task into a plurality of sub-tasks, distribute the plurality of sub-tasks among the subset of available nodes based upon a number of nodes in the subset of available nodes, completion time period allowed for the plurality of sub-tasks, a distribution criteria, level of security required for the completion of the plurality of sub-tasks, resources available with the subset of available nodes, processing capability of the subset of available nodes, range of the subset of available nodes, features in the subset of available nodes, reliability of the subset of available nodes, trust in the subset of available nodes, the current load on the subset of available nodes, domain of the plurality of sub-tasks, or combinations thereof, receive sub-solutions corresponding to the plurality of sub-tasks from the subset of available nodes in a desired time period, and reassemble the sub-solutions to determine a solution corresponding to the computational task. | 02-17-2011 |
20110047554 | DECENTRALIZED LOAD DISTRIBUTION TO REDUCE POWER AND/OR COOLING COSTS IN AN EVENT-DRIVEN SYSTEM - A computer-implemented method, computer program product and computer readable storage medium directed to decentralized load placement in an event-driven system so as to minimize energy and cooling related costs. Included are receiving a data flow to be processed by a plurality of tasks at a plurality of nodes in the event-driven system having stateful and stateless event processing components, wherein the plurality of tasks are selected from the group consisting of hierarchical tasks (a task that is dependent on the output of another task), nonhierarchical tasks (a task that is not dependent on the output of another task) and mixtures thereof. Nodes are considered for quiescing whose current tasks can migrate to other nodes while meeting load distribution and energy efficiency parameters and the expected duration of the quiesce provides benefits commensurate with the costs of quiesce and later restart. Additionally, tasks are considered for migrating to neighbor nodes to distribute the system load of processing the tasks and reduce cooling costs. | 02-24-2011 |
20110047555 | DECENTRALIZED LOAD DISTRIBUTION IN AN EVENT-DRIVEN SYSTEM - A computer-implemented method, computer program product and computer readable storage medium directed to decentralized load distribution in an event-driven system. Included are receiving a data flow to be processed by a plurality of tasks at a plurality of nodes in the event-driven system having stateful and stateless event processing components, wherein the plurality of tasks are selected from the group consisting of hierarchical tasks (a task that is dependent on the output of another task), nonhierarchical tasks (a task that is not dependent on the output of another task) and mixtures thereof. Tasks are considered for migration to distribute the system load of processing tasks. The target node, to which the at least one target task is migrated, is chosen wherein the target node meets predetermined criteria in terms of load distribution quality. The computer-implemented method, computer program product and computer readable storage medium of the present invention may also include migrating tasks to target nodes to reduce cooling costs and selecting at least one node to go into quiescent mode. | 02-24-2011 |
20110055845 | Technique for balancing loads in server clusters - In a network arrangement where a client requests a service from a server system, e.g., through the Internet, a multiple-load balancer is used for balancing loads in two or more server clusters in the server system to completely identify a sequence of servers for processing the service request. Each server in the resulting sequence belongs to a different server cluster. The service request is sent to the first server in the sequence, along with information for routing the request through the sequence of servers. | 03-03-2011 |
20110067033 | AUTOMATED VOLTAGE CONTROL FOR SERVER OUTAGES - Information regarding a scheduled outage for a server associated with a cluster of servers is received at a voltage regulation system (VRS) for the cluster of servers. A work load increase is determined for each remaining server within the cluster of servers due to the scheduled outage for the server. A voltage adjustment is calculated for each remaining server based upon the determined work load increase for each remaining server. Voltage for each remaining server is automatically adjusted based upon the calculated voltage adjustment. | 03-17-2011 |
20110072440 | PARALLEL PROCESSING SYSTEM AND METHOD - A parallel processing system determines whether to drive all or some processors to process incoming data, based on the capacity or time required for processing the input data. The system also temporarily stores the data processed and output by the respective processors, and outputs the stored data at an output time calculated from the traffic processing time for the input data. | 03-24-2011 |
20110078700 | TASK DISPATCHING IN MULTIPLE PROCESSOR SYSTEMS - A method and system is disclosed for dispatching tasks to multiple processors that all share a shared memory. A composite queue size for multiple work queues each having an associated processor is determined. A queue availability flag is stored in shared memory for each processor work queue and is set based upon the composite queue size and the size of the work queue for that processor. Each queue availability flag indicates availability or unavailability of the work queue to accept new tasks. A task is placed in a selected work queue based on that work queue having an associated queue availability flag indicating availability to accept new tasks. The data associated with task dispatching is maintained so as to increase the likelihood that valid copies of the data remain present in each processor's local cache without requiring updates due to their being changed by other processors. | 03-31-2011 |
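The flag-based dispatch in the entry above can be sketched as follows. The one-eighth availability fraction, the class name, and the shortest-queue fallback are illustrative assumptions; the abstract only states that each flag is set from the composite queue size and the queue's own size.

```python
# Sketch of queue-availability-flag dispatching (threshold and fallback
# policy are assumptions, not taken from the patent abstract).

class Dispatcher:
    def __init__(self, num_queues, threshold_fraction=0.125):
        self.queues = [[] for _ in range(num_queues)]
        self.threshold_fraction = threshold_fraction

    def _refresh_flags(self):
        # Composite queue size: total tasks queued across all processors.
        composite = sum(len(q) for q in self.queues)
        # A queue is flagged "available" while it holds no more than its
        # assumed share of the composite size, plus a small slack of one.
        limit = composite * self.threshold_fraction + 1
        return [len(q) <= limit for q in self.queues]

    def dispatch(self, task):
        flags = self._refresh_flags()
        for i, available in enumerate(flags):
            if available:
                self.queues[i].append(task)
                return i
        # Fallback when no flag is set: use the shortest queue.
        i = min(range(len(self.queues)), key=lambda j: len(self.queues[j]))
        self.queues[i].append(task)
        return i
```

Because the flags depend only on queue lengths, each processor can recompute them locally, which is in the spirit of the abstract's point about keeping dispatch data valid in per-processor caches.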
20110078701 | Method and arrangement for distributing the computing load in data processing systems during execution of block-based computing instructions, as well as a corresponding computer program and a corresponding computer-readable storage medium - The invention is directed to a method and an arrangement for distributing the computing load in a data processing system while executing block-based computing instructions, as well as a corresponding computer program and a corresponding computer-readable storage medium, which can be used to uniformly distribute the computing load in processors for periodically occurring computing operations. The block-based computing instructions are divided into blocks, wherein each block requires a number of time-sequential incoming input values, and the number can be predetermined for each block. A particular area of application is the field of digital processing of multimedia signals, such as audio signals, video signals, and the like. | 03-31-2011 |
20110083135 | VIRTUAL COMPUTER SYSTEMS AND COMPUTER VIRTUALIZATION PROGRAMS - Disclosed are a virtual computer system and method, wherein computer resources are automatically and optimally allocated to logical partitions according to loads to be accomplished by operating systems in the logical partitions and setting information based on a knowledge of workloads that run on the operating systems. Load measuring modules are installed on the operating systems in order to measure the loads to be accomplished by the operating systems. A manager designates the knowledge concerning the workloads on the operating systems through a user interface. An adaptive control module determines the allocation ratios of the computer resources relative to the logical partitions according to the loads and the settings, and issues an allocation varying instruction to a hypervisor to instruct variation of the allocations. | 04-07-2011 |
20110088041 | Hardware support for thread scheduling on multi-core processors - A method, device, and system are disclosed. In one embodiment the method includes scheduling a thread to run on first core of a multi-core processor. The determination as to which core the thread is scheduled on uses one or more processes. These processes may include ranking all of the cores specific to a workload of the thread, establishing a current utilization of each core of the multi-core processor, and calculating an inter-core migration cost for the thread. | 04-14-2011 |
20110093862 | WORKLOAD-DISTRIBUTING DATA REPLICATION SYSTEM - A method for more effectively distributing the I/O workload in a data replication system is disclosed herein. In selected embodiments, such a method may include generating an I/O request and identifying a storage resource group associated with the I/O request. In the event the I/O request is associated with a first storage resource group, the I/O request may be directed to a first storage device and a copy of the I/O request may be mirrored from the first storage device to a second storage device. Alternatively, in the event the I/O request is associated with a second storage resource group, the I/O request may be directed to a second storage device and a copy of the I/O request may be mirrored from the second storage device to the first storage device. A corresponding system, apparatus, and computer program product are also disclosed and claimed herein. | 04-21-2011 |
20110099553 | SYSTEMS AND METHODS FOR AFFINITY DRIVEN DISTRIBUTED SCHEDULING OF PARALLEL COMPUTATIONS - Embodiments of the invention provide efficient scheduling of parallel computations for higher productivity and performance. Embodiments of the invention provide various methods effective for affinity driven and distributed scheduling of multi-place parallel computations with physical deadlock freedom. | 04-28-2011 |
20110107344 | MULTI-CORE APPARATUS AND LOAD BALANCING METHOD THEREOF - A multi-core apparatus and method for balancing load in the multi-core apparatus. The multi-core apparatus includes a first core that sends a save request including a context of a task, when a task is switched from an active state to a sleep state, a second core that receives an execution request and executes a task corresponding to the execution request, and a load balancer that receives the save request transmitted by the first core, and sends the execution request to the second core. | 05-05-2011 |
20110119678 | ISOLATING WORKLOAD PARTITION SPACE - A method, system, and computer usable program product for isolating a workload partition space are provided in the illustrative embodiments. A boot process of a workload partition in a data processing system is started using a scratch file system, the scratch file system being in a global space. A portion of a storage device containing a file system for the workload partition is exported to the workload partition, the portion forming an exported disk. The partially booted workload partition may discover the exported disk. The exporting causes an association between the global space and the exported disk either not to form or to be severed. The exporting places the exported disk in a workload partition space associated with the workload partition. The boot process is transitioned to stop using the scratch file system and start using the data in the exported disk for continuing the boot process. | 05-19-2011 |
20110119679 | METHOD AND SYSTEM OF AN I/O STACK FOR CONTROLLING FLOWS OF WORKLOAD SPECIFIC I/O REQUESTS - A method and system of a host device hosting multiple workloads for controlling flows of I/O requests directed to a storage device is disclosed. In one embodiment, a type of a response from the storage device reacting to an I/O request issued by an I/O stack layer of the host device is determined. Then, a workload associated with the I/O request is identified among the multiple workloads based on the response to the I/O request. Further, a maximum queue depth assigned to the workload is adjusted based on the type of the response, where the maximum queue depth is a maximum number of I/O requests from the workload which are concurrently issuable by the I/O stack layer. | 05-19-2011 |
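The per-workload maximum-queue-depth adjustment in the entry above can be sketched as below. The abstract says only that the depth is adjusted based on the response type; the AIMD-style policy here (halve on a queue-full response, grow by one on success), the response-type names, and the bounds are assumptions for illustration.

```python
# Sketch of workload-specific I/O flow control via an adjustable maximum
# queue depth. The AIMD policy and response names are assumed, not from
# the patent abstract.

class WorkloadFlowControl:
    def __init__(self, initial_depth=32, min_depth=1, max_depth=256):
        self.max_queue_depth = initial_depth  # max concurrently issuable I/Os
        self.min_depth = min_depth
        self.max_cap = max_depth

    def on_response(self, response_type):
        if response_type == "QUEUE_FULL":
            # Storage device pushed back: shrink multiplicatively.
            self.max_queue_depth = max(self.min_depth,
                                       self.max_queue_depth // 2)
        elif response_type == "SUCCESS":
            # Device kept up: probe for more concurrency, additively.
            self.max_queue_depth = min(self.max_cap,
                                       self.max_queue_depth + 1)
        return self.max_queue_depth
```

The I/O stack layer would consult `max_queue_depth` before issuing each request for the identified workload, so a misbehaving workload is throttled without starving the others.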
20110126209 | Distributed Multi-Core Memory Initialization - In a system having a plurality of processing nodes, a control node divides a task into a plurality of sub-tasks, and assigns the sub-tasks to one or more additional processing nodes which execute the assigned sub-tasks and return the results to the control node, thereby enabling a plurality of processing nodes to efficiently and quickly perform memory initialization and test of all assigned sub-tasks. | 05-26-2011 |
20110131585 | DATA PROCESSING SYSTEM - A data processing apparatus includes an input device for inputting an instruction for causing a job processor to perform a job, an analyzing unit for analyzing the instruction inputted by the input device, a discriminating unit for discriminating a processing ability of the job processor which performs the job based on the instruction inputted by the input device, and a controller for controlling a supply of the instruction inputted by the input device to the job processor in accordance with a result of the analysis by the analyzing unit and a result of the discrimination by the discriminating unit. The job processor performs a job to transmit input data to another apparatus, and the input device inputs an instruction including a designation of destinations to which the job processor transmits data. | 06-02-2011 |
20110138395 | THERMAL MANAGEMENT IN MULTI-CORE PROCESSOR - Techniques described herein generally relate to multi-core processors including two or more processor cores. Example embodiments may set forth devices, methods, and computer programs related to thermal management in the multi-core processor. Some example methods may include retrieving a first temperature reading for the first processor core during a scheduling interval, retrieving a second temperature reading for the second processor core also during the scheduling interval, and assigning a first task to the first processor core to be executed based on a comparison of the first temperature reading and the second temperature reading retrieved during the scheduling interval. | 06-09-2011 |
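The assignment step described in the entry above reduces to a comparison of the two temperature readings taken in the same scheduling interval. A minimal sketch, with sensor access stubbed out as plain callables (the tie-breaking toward core 0 is an assumption):

```python
# Sketch of temperature-comparison task placement on a two-core processor.
# The temperature readers stand in for per-core thermal sensors.

def assign_task(task, read_temp_core0, read_temp_core1):
    """Return the index of the core the task should run on: the cooler one.
    Ties go to core 0 (an assumed policy)."""
    t0 = read_temp_core0()  # first core's reading, this scheduling interval
    t1 = read_temp_core1()  # second core's reading, same interval
    return 0 if t0 <= t1 else 1
```

In a real scheduler both readings would be sampled within the same scheduling interval, as the abstract requires, so the comparison reflects a consistent thermal snapshot.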
20110138396 | METHOD AND SYSTEM FOR DATA DISTRIBUTION IN HIGH PERFORMANCE COMPUTING CLUSTER - The present invention discloses a method and system for data distribution in a High-Performance Computing cluster, the High-Performance Computing cluster comprising a Management node and M computation nodes where M is an integer greater than or equal to 2, the Management node distributing the specified data to the M computation nodes, the method comprising steps of: dividing the M computation nodes into m layers where m is an integer greater than or equal to 2; dividing the specified data into k shares where k is an integer greater than or equal to 2; distributing, by the Management node, the k shares of data to a first layer of computation nodes as sub-nodes thereof, each of the first layer of computation nodes obtaining at least one share of data therein; distributing, by each of the computation nodes, the at least one share of data distributed by a parent node thereof to sub-computation nodes thereof; and requesting, by each of the computation nodes, the remaining specified data from other computation nodes, to thereby obtain all the specified data. The method and system enable data to be distributed rapidly to various computation nodes in the High-Performance Computing cluster. | 06-09-2011 |
20110154356 | METHODS AND APPARATUS TO BENCHMARK SOFTWARE AND HARDWARE - Example methods, apparatus and articles of manufacture to benchmark hardware and software are disclosed. A disclosed example method includes initiating a first thread to execute a set of instructions on a processor, initiating a second thread to execute the set of instructions on the processor, determining a first duration for the execution of the first thread, determining a second duration for the execution of the second thread, and determining a thread fairness value for the computer system based on the first duration and the second duration. | 06-23-2011 |
20110154357 | Storage Management In A Data Processing System - The invention relates to a method for storage management in a data processing system having a plurality of storage devices with different performance attributes and a workload. The workload is associated with respective sets of data blocks to be stored in said plurality of storage devices. The method comprises the steps of dynamically determining performance requirements of the workload and dynamically determining performance attributes of the storage devices. The method further comprises the step of allocating data blocks to the storage devices depending on the performance requirements of the associated workload and the performance attributes of the storage devices. | 06-23-2011 |
20110154358 | METHOD AND SYSTEM TO AUTOMATICALLY OPTIMIZE EXECUTION OF JOBS WHEN DISPATCHING THEM OVER A NETWORK OF COMPUTERS - A computer implemented method, system, and/or computer program product selects a target computer to execute a job. For each computer in a system, a statistical mean of last job duration values is computed from historical records for all computers that have executed the job. Multiple pools of computers are selected based on a statistical mean of last job duration values. A ratio for each pool from the multiple pools is computed. This ratio is a ratio of the quantity of current executions of the job in a particular pool compared to a total of current job executions of the job in all of the multiple pools of computers. A particular pool of computers, which has a computed ratio that is closest to a preselected ratio, is selected. A target computer is selected from the particular pool of computers to execute a next iteration of the job. | 06-23-2011 |
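The pool-selection step in the entry above — compare each pool's share of current executions of the job against a preselected ratio and pick the closest — can be sketched as follows. The function name and the example counts are illustrative; the abstract does not specify how ties are broken (here, the first closest pool wins).

```python
# Sketch of target-pool selection by ratio matching. For each pool, its
# ratio is (current executions of the job in that pool) / (current
# executions of the job across all pools); the pool whose ratio is
# closest to a preselected ratio is chosen.

def select_pool(current_executions, preselected_ratio):
    """current_executions: per-pool counts of running copies of the job.
    Returns the index of the pool whose share is closest to the target."""
    total = sum(current_executions) or 1  # avoid division by zero
    return min(range(len(current_executions)),
               key=lambda i: abs(current_executions[i] / total
                                 - preselected_ratio))
```

The target computer for the next iteration of the job would then be drawn from the selected pool, e.g. the least-loaded member.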
20110161979 | MIXED OPERATING PERFORMANCE MODE LPAR CONFIGURATION - Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode. | 06-30-2011 |
20110161980 | Load Balancing Web Service by Rejecting Connections - A load balancer allocates requests to a pool of web servers configured to have low queue capacities. If the queue capacity of a web server is reached, the web server responds to an additional request with a rejection notification to the load balancer, which enables the load balancer to quickly send the rejected request to another web server. Each web server self-monitors its rejection rate. If the rejection rate exceeds a threshold, the number of processes concurrently running on the web server is increased. If the rejection rate falls below a threshold, the number of processes concurrently running on the web server is decreased. | 06-30-2011 |
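The reject-and-retry balancing loop and the self-monitored process scaling in the entry above can be sketched as below. The queue capacities, rejection-rate thresholds, and single-step process adjustment are assumed values; the abstract only states that the queues are small, rejections are forwarded quickly to another server, and the process count moves with the rejection rate.

```python
# Sketch of rejection-based web-server load balancing. Capacities and
# thresholds are illustrative assumptions.

class WebServer:
    def __init__(self, queue_capacity=4, processes=2):
        self.queue_capacity = queue_capacity  # deliberately low
        self.processes = processes
        self.queued = 0
        self.accepted = 0
        self.rejected = 0

    def offer(self, request):
        if self.queued >= self.queue_capacity:
            self.rejected += 1
            return False          # rejection notification to the balancer
        self.queued += 1
        self.accepted += 1
        return True

    def adjust_processes(self, high=0.2, low=0.05):
        """Self-monitoring: scale worker processes from the rejection rate."""
        total = self.accepted + self.rejected
        if total:
            rate = self.rejected / total
            if rate > high:
                self.processes += 1       # rejecting too often
            elif rate < low and self.processes > 1:
                self.processes -= 1       # ample headroom
        return self.processes


def balance(servers, request):
    """Send the request to the first server that accepts it; a rejected
    request is immediately re-offered to the next server."""
    for server in servers:
        if server.offer(request):
            return server
    return None  # every server's queue was full
```

Because a full server answers immediately with a rejection rather than queueing, the balancer's retry cost is one cheap round trip instead of a long wait in a deep queue.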
20110185366 | Load-balancing of processes based on inertia - A process is selected for movement from a current node to a new node, based on an inertia of the process. The inertia is a quantified measure of the impact resulting from the process being inaccessible while being moved. The inertia can take into account the number of current external connections to the process; the larger the number of current external connections is, the greater the inertia. The inertia can take into account the extent to which the process accepts external connections; the greater the extent to which the process accepts external connections is, the greater the inertia. The inertia can take into account the desired availability of the process; the greater the desired availability is, the greater the inertia. The inertia can take into account a specified quality of service of the process; the higher the specified quality of service is, the greater the inertia. | 07-28-2011 |
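The four inertia factors named in the entry above — current external connections, willingness to accept external connections, desired availability, and quality of service — can be folded into a single score, as sketched below. The weights and the linear combination are assumptions; the abstract states only that each factor increases the inertia, and that a process is selected for movement based on it.

```python
# Sketch of an inertia score for process migration. Weights are assumed;
# the patent abstract specifies only the direction of each factor.

def inertia(current_connections, accepts_external, desired_availability, qos,
            weights=(1.0, 5.0, 10.0, 2.0)):
    w_conn, w_accept, w_avail, w_qos = weights
    return (w_conn * current_connections            # more connections -> more inertia
            + w_accept * (1.0 if accepts_external else 0.0)
            + w_avail * desired_availability        # e.g. 0.999 for "three nines"
            + w_qos * qos)                          # higher QoS -> more inertia


def pick_process_to_move(processes):
    """processes: (name, connections, accepts_external, availability, qos)
    tuples. Move the process with the least inertia, i.e. the one whose
    temporary inaccessibility costs the least."""
    return min(processes, key=lambda p: inertia(*p[1:]))
```

A busy, highly available front end thus stays put, while an idle batch process with no external connections is the natural migration candidate.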
20110191783 | Techniques for managing processor resource for a multi-processor server executing multiple operating systems - A multiprocessor server system executes a plurality of multiprocessor or single-processor operating systems each using a plurality of storage adapters and a plurality of network adapters. Each operating system maintains load information about all its processors and shares the information with other operating systems. Upon changes in the processor load of the operating systems, processors are dynamically reassigned among operating systems to improve performance if the maximum load of the storage adapters and network adapters of the reassignment target operating system is not already reached. Processor reassignment includes shutting down and restarting dynamically operating systems to allow the reassignment of the processors used by single-processor operating systems. Furthermore, the process scheduler of multi-processor operating systems keeps some processors idle under light processor load conditions in order to allow the immediate reassignment of processors to heavily loaded operating systems. | 08-04-2011 |
20110197198 | LOAD AND BACKUP ASSIGNMENT BALANCING IN HIGH AVAILABILITY SYSTEMS - Among other things, embodiments described herein enable systems, e.g., Availability Management Forum (AMF) systems, having service units to operate with balanced loads both before and after the failure of one of the service units. A configuration can be generated which provides for distributed backup roles and balanced active loads. When a failure of a service unit occurs, the active loads previously handled by that service unit are substantially evenly picked up as active loads by remaining service units. | 08-11-2011 |
20110219383 | PROCESSING MODEL-BASED COMMANDS FOR DISTRIBUTED APPLICATIONS - The present invention extends to methods, systems, and computer program products for processing model based commands for distributed applications. Embodiments facilitate execution of model-based commands, including software lifecycle commands, using model-based workflow instances. Data related to command execution is stored in a shared repository such that command processors can understand their status in relationship to workflow instances. Further, since the repository is shared, command execution can be distributed and balanced across a plurality of different executive services. Embodiments also include model-based error handling and error recovery mechanisms. | 09-08-2011 |
20110225594 | Method and Apparatus for Determining Resources Consumed by Tasks - In a computer system comprising a plurality of computing devices wherein the plurality of computing devices processes a plurality of tasks and each task has a task type, a method for determining overheads associated with task types comprises the following steps. Overheads are estimated for a plurality of task types. One of the plurality of computing devices is selected to execute one of the plurality of tasks, wherein the selection comprises estimating load on at least a portion of the plurality of computing devices from tasks assigned to at least a portion of the plurality of computing devices and the estimates of overheads of the plurality of task types. One or more of the estimates of overheads of the plurality of task types are varied. | 09-15-2011 |
20110231860 | LOAD DISTRIBUTION SYSTEM - A load distribution system for allocating a job to one of a plurality of arithmetic devices includes a temperature data acquirer, a candidate selector, and a job allocator. The temperature data acquirer acquires temperature data indicating temperature of each of the plurality of arithmetic devices. The candidate selector selects at least one of the plurality of arithmetic devices as a candidate for a device to which the job is to be allocated. The job allocator allocates the job to the selected candidate. | 09-22-2011 |
20110247005 | Methods and Apparatus for Resource Capacity Evaluation in a System of Virtual Containers - Methods and apparatus are provided for evaluating potential resource capacity in a system where there is elasticity and competition between a plurality of containers. A dynamic potential capacity is determined for at least one container in a plurality of containers competing for a total capacity of a larger container. A current utilization by each of the plurality of competing containers is obtained, and an equilibrium capacity is determined for each of the competing containers. The equilibrium capacity indicates a capacity that the corresponding container is entitled to. The dynamic potential capacity is determined based on the total capacity, a comparison of one or more of the current utilizations to one or more of the corresponding equilibrium capacities and a relative resource weight of each of the plurality of competing containers. The dynamic potential capacity is optionally recalculated when the set of plurality of containers is changed or after the assignment of each work element. | 10-06-2011 |
20110247006 | Apparatus and method of dynamically distributing load in multiple cores - Provided is an apparatus and method of dynamically distributing load occurring in multiple cores that may determine a corresponding core to perform the functions constituting an application program, thereby enhancing the overall processing rate. | 10-06-2011 |
20110258634 | Method for Monitoring Operating Experiences of Images to Improve Workload Optimization in Cloud Computing Environments - An embodiment of the invention includes a method for workload optimization in a network (e.g., cloud computing environment). Usage of resources in the network is monitored in order to maintain a metadata catalog of operating experiences of the resources. A request for a resource in the network is received; and, resources that are available in the network are identified. Units that are included in the resources are also identified. The metadata catalog is queried for operating experiences associated with the requested resource. The requested resource is provisioned by the host system based on the operating experiences associated with the resource. This includes assembling the units that are included in the requested resource and/or automatically allocating workloads of the computing modules based on the cataloging of the workloads in the metadata catalog. The metadata catalog is updated with an operating experience associated with the provisioning of the requested resource. | 10-20-2011 |
20110265095 | Resource Affinity via Dynamic Reconfiguration for Multi-Queue Network Adapters - A mechanism is provided for providing resource affinity for multi-queue network adapters via dynamic reconfiguration. A device driver allocates an initial queue pair within a memory. The device driver determines whether workload of the data processing system has risen above a predetermined high threshold. Responsive to the workload rising above the predetermined high threshold, the device driver allocates and initializes an additional queue pair in the memory. The device driver programs a receive side scaling (RSS) mechanism in a network adapter to allow for dynamic insertion of an additional processing engine associated with the additional queue pair. The device driver enables transmit tuple hashing to the additional queue pair. | 10-27-2011 |
20110265096 | MANAGING RESOURCES IN A MULTIPROCESSING COMPUTER SYSTEM - Embodiments of the invention relate to multiprocessing systems. An aspect of the invention concerns a multiprocessing system that comprises a hardware control component for selecting a hardware management action responsive to a hardware policy and a virtualization component for presenting virtual hardware resources to a software task execution environment. The system may further comprise a software workload management component for controlling at least one running software task and routing at least one new software task using the virtual hardware resources; and a communication component for signaling the software workload management component to perform a software management action in compliance with the hardware management action. The hardware policy may be a hardware power management policy, and the software management action may comprise quiescing the at least one running software task or routing the new software tasks to a different software task execution environment. | 10-27-2011 |
20110276982 | Load Balancer and Load Balancing System - In a system including a load balancer to select a virtual server to which a request is to be transferred, the load balancer includes a function to monitor resource use states of physical and virtual servers and a function to predict a packet loss occurring in a virtual switch. The request count of requests processible by each virtual server is calculated based on the resource amount available for the virtual server and a packet loss rate of the virtual switch, to thereby select a virtual server capable of processing a larger number of requests. | 11-10-2011 |
20110289508 | METHODS AND SYSTEMS FOR EFFICIENT API INTEGRATED LOGIN IN A MULTI-TENANT DATABASE ENVIRONMENT - Methods and systems for efficient API integrated login in a multi-tenant database environment and for decreasing latency delays during an API login request authentication including receiving a plurality of API login requests at a load balancer of a datacenter, where each of the plurality of API login requests specify a user identifier (userID) and/or an organizational identifier (orgID), fanning the plurality of API login requests across a plurality of redundant instances executing within the datacenter, assigning each API login request to one of the plurality of redundant instances for authentication, and for each of the respective plurality of API login requests, performing a recursive query algorithm at the assigned redundant instance, at one or more recursive redundant instances within the datacenter, and at a remote recursive redundant instance executing in a second datacenter, as necessary, until the login request is authenticated or determined to be invalid. | 11-24-2011 |
20110307903 | SOFT PARTITIONS AND LOAD BALANCING - A method and system are provided for load balancing and partial task-processor binding. The method may provide for migrating at least one first task partially bound to and executing on at least one first processor. In accordance with the method, if at least one first condition is true, then the at least one first task may be migrated to at least one second processor such that the at least one second processor executes the at least one first task. Moreover, in accordance with the method, if at least one second condition is true, the at least one first task may be migrated back to the at least one first processor such that the at least one first processor executes the at least one first task. | 12-15-2011 |
20110321056 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects metrics of the system, nodes, application, jobs and processing units that will be used to determine how to best allocate the jobs on the system. A job optimizer analyzes the collected metrics to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where the processing units are over utilizing the resources on the node. | 12-29-2011 |
20110321057 | MULTITHREADED PHYSICS ENGINE WITH PREDICTIVE LOAD BALANCING - A circuit arrangement and method utilize predictive load balancing to allocate the workload among hardware threads in a multithreaded physics engine. The predictive load balancing is based at least in part upon the detection of predicted future collisions between objects in a scene, such that the reallocation of respective loads of a plurality of hardware threads may be initiated prior to detection of the actual collisions, thereby increasing the likelihood that hardware threads will be optimally allocated when the actual collisions occur. | 12-29-2011 |
20110321058 | Adaptive Demand-Driven Load Balancing - The present disclosure involves systems, software, and computer implemented methods for providing adaptive demand-driven load balancing for processing jobs in business applications. One process includes operations for identifying a workload for distribution among a plurality of work processes. A subset of the workload is assigned to a plurality of work processes for processing of the subset of the workload based on an application-dependent algorithm. An indication of availability is received from one of the plurality of work processes, and a new subset of the workload is assigned to the work process. | 12-29-2011 |
20120005686 | Annotating HTML Segments With Functional Labels - A method and apparatus is described for assigning functional labels to segments of web pages in an application-independent way. In the approach described herein, one of a generic set of functional labels is automatically assigned to each segment of a web page, where the generic functional labels may be topic-independent and application-independent. Applications with different needs can determine which segments of the web page to process based on which functional labels correspond to the types of information needed by each application. Thus, the work of classifying the function of each segment of a web page is separated from the work of selecting which segments satisfy the needs of a particular application. The work of classification can be performed in an application-independent way, relieving every application developer of the burden of creating their own classifiers. | 01-05-2012 |
20120011519 | PARALLEL CHECKPOINTING FOR MIGRATION OF WORKLOAD PARTITIONS - A method includes receiving a command for migration of a workload partition having multiple processes from a source machine to a target machine. The method includes executing, for each of the multiple processes at least partially in parallel, an operation to create checkpoint data. The operation to create the checkpoint data provides an estimation of a size of the checkpoint data that is needed for migration, wherein the operation to create the checkpoint data is independent of storing the checkpoint data in the file. The method includes allocating areas within the file for storage of the checkpoint data for each of the multiple processes. The method includes storing the checkpoint data, for each of the multiple processes at least partially in parallel, into the areas allocated within the file based on offsets in the file for each of the multiple processes. | 01-12-2012 |
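The offset-allocation step described above can be sketched as follows. Names and structure here are illustrative assumptions, not taken from the patent: each process reports an estimated checkpoint size, and contiguous areas within one file are reserved so the processes can then write their checkpoint data in parallel at known offsets.

```python
# Hedged sketch of per-process file-area allocation for parallel checkpointing.

def allocate_offsets(estimated_sizes):
    offsets, cursor = {}, 0
    for pid, size in sorted(estimated_sizes.items()):
        offsets[pid] = cursor   # this process writes at [cursor, cursor + size)
        cursor += size
    return offsets, cursor      # cursor is the total file size needed

sizes = {101: 4096, 102: 1024, 103: 2048}   # bytes, from the estimation pass
offsets, total = allocate_offsets(sizes)
```

Because every process knows its own offset before any data is written, the stores into the file can proceed at least partially in parallel, as the abstract describes.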
20120017220 | Systems and Methods for Distributing Validation Computations - In one embodiment, a method includes statically analyzing a validation toolkit environment. The method may also include, identifying a plurality of computational threads that do not share data structures with each other based on analysis of the validation toolkit environment. The method may additionally include calculating computational requirements of the computational threads. The method may further include distributing the threads among a plurality of resources such that the aggregate computational requirements of the computational threads are approximately evenly balanced among the plurality of resources. | 01-19-2012 |
20120023504 | NETWORK OPTIMIZATION - A method for handling communication data involving identifying available resources for applying compression tasks and estimating a throughput reduction value to be achieved by applying each of a plurality of different compression tasks to a plurality of media items. A cost of applying the plurality of different compression tasks to the plurality of media items is estimated. The method further includes finding an optimization solution that maximizes the throughput reduction value over possible pairs of the compression tasks and the media items, while keeping the cost of the tasks of the solution within the identified available resources and providing instructions to apply compression tasks according to the optimization solution. | 01-26-2012 |
20120030686 | THERMAL LOAD MANAGEMENT IN A PARTITIONED VIRTUAL COMPUTER SYSTEM ENVIRONMENT THROUGH MONITORING OF AMBIENT TEMPERATURES OF ENVIRONMENT SURROUNDING THE SYSTEMS - Thermal load management in a virtualized environment wherein server-controlled physical processor systems are partitioned into a plurality of logical partitions (LPARs), comprising first predetermining a set of ambient temperature levels for the surrounding outside environment for a first server-controlled system having a plurality of LPARs. The set of ambient temperature levels is then sensed and, if the set or predetermined pattern of temperature levels is exceeded, one or more of the plurality of LPARs are transferred from the first server-controlled system to a second server-controlled LPAR system over a connecting network. | 02-02-2012 |
20120036515 | Mechanism for System-Wide Target Host Optimization in Load Balancing Virtualization Systems - A mechanism for system-wide target host optimization in load balancing virtualization systems is disclosed. A method of the invention includes detecting a condition triggering a load balancing operation, identifying a plurality of candidate target host machines to receive one or more operating virtual machines (VMs) to be migrated, determining a load per resource on each identified candidate target host machine, and scheduling all operating VMs among all of the identified candidate target host machines in view of an expected load per resource on each candidate target host. | 02-09-2012 |
20120042322 | Hybrid Program Balancing - A method for balancing loads in a system having multiple processing elements (800) includes executing a plurality of load balancing algorithms in a dry run on load data from the system (810, 820, 830, 840), recording the results of each of the load balancing algorithms (815, 825, 835, 845), evaluating the results of each of the load balancing algorithms (850), selecting a load balancing algorithm providing the best results (855) and implementing the results of the selected algorithm on the system (860). | 02-16-2012 |
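The dry-run selection loop above lends itself to a compact sketch. The algorithm names, the imbalance score, and the data below are assumptions for illustration only: run each candidate balancing algorithm on a snapshot of the load data, record and score each result, and implement the one with the best score.

```python
# Illustrative dry-run selection among load balancing algorithms.

def round_robin(loads, n_bins):
    bins = [0.0] * n_bins
    for i, load in enumerate(loads):
        bins[i % n_bins] += load
    return bins

def greedy_least_loaded(loads, n_bins):
    bins = [0.0] * n_bins
    for load in sorted(loads, reverse=True):
        bins[bins.index(min(bins))] += load  # heaviest item to lightest bin
    return bins

def imbalance(bins):
    # Lower is better: spread between heaviest and lightest bin.
    return max(bins) - min(bins)

def select_best_algorithm(loads, n_bins, algorithms):
    results = {name: algo(loads, n_bins) for name, algo in algorithms.items()}
    best = min(results, key=lambda name: imbalance(results[name]))
    return best, results[best]

algorithms = {"round_robin": round_robin, "greedy": greedy_least_loaded}
best_name, best_bins = select_best_algorithm([9, 1, 8, 2, 7, 3], 2, algorithms)
```

On this sample the greedy dry run yields perfectly balanced bins while round-robin does not, so the greedy result is the one that would be implemented on the system.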
20120060171 | Scheduling a Parallel Job in a System of Virtual Containers - Methods and apparatus are provided for scheduling parallel jobs in a system of virtual containers. At least one parallel job is assigned to a plurality of containers competing for a total capacity of a larger container, wherein the at least one parallel job comprises a plurality of tasks. The assignment method comprises determining a current utilization and a potential free capacity for each of the plurality of competing containers; and assigning the tasks to one of the plurality of containers based on the potential free capacities and at least one predefined scheduling policy. The predefined scheduling policy may comprise, for example, one or more of load balancing, server consolidation, maximizing the current utilizations, minimizing a response time of the parallel job and satisfying quality of service requirements. The load balancing can be achieved, for example, by assigning a task to a container having a highest potential free capacity. | 03-08-2012 |
20120060172 | Dynamically Tuning A Server Multiprogramming Level - Methods, apparatus and computer program products for allocating a number of workers to a worker pool in a multiprogrammable computer are provided, to thereby tune server multiprogramming level. The method includes the steps of monitoring throughput in relation to a workload concurrency level and dynamically tuning a multiprogramming level based upon the monitoring. The dynamic tuning includes adjusting with a first adjustment for a first interval and with a second adjustment for a second interval, wherein the second adjustment utilizes data stored from the first adjustment. | 03-08-2012 |
20120066688 | PROCESSOR THREAD LOAD BALANCING MANAGER - An operating system of an information handling system (IHS) determines a process tree of data sharing threads in an application that the IHS executes. A load balancing manager assigns a home processor to each thread of the executing application process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application within the home processor of a processor group. If ready or run queues of a prospective poaching processor are empty, the load balancing manager may move or poach a thread or threads from the home processor ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the information handling system (IHS). | 03-15-2012 |
20120066689 | BLADE SERVER AND SERVICE SCHEDULING METHOD OF THE BLADE SERVER - The present invention discloses a blade server and a service scheduling method of the blade server. The method includes the following steps. According to the requirement for processing capability of a service, a blade is selected for a logical partition storing the service data (A | 03-15-2012 |
20120072919 | MOBILE DEVICE AND METHOD FOR EXPOSING AND MANAGING A SET OF PERFORMANCE SCALING ALGORITHMS - A mobile device, a method for managing and exposing a set of performance scaling algorithms on the device, and a computer program product are disclosed. The mobile device includes a multiple-core processor communicatively coupled to a non-volatile memory. The non-volatile memory includes a set of programs defined by a respective combination of a performance scaling algorithm and a set of parameters, a startup program that when executed by the multiple-core processor identifies at least one member of the set of programs suitable for monitoring operation of the mobile device and scaling the performance of an identified processor core and an application programming interface that exposes the set of programs. | 03-22-2012 |
20120079499 | LOAD BALANCING DATA ACCESS IN VIRTUALIZED STORAGE NODES - Systems and methods of load balancing data access in virtualized storage nodes are disclosed. An embodiment of a method includes receiving a data access request from a client for data on a plurality of the virtualized storage nodes. The method also includes connecting the client to one of the plurality of virtualized storage nodes having data for the data access request. The method also includes reconnecting the client to another one of the plurality of virtualized storage nodes to continue accessing data in the data access request. | 03-29-2012 |
20120079500 | PROCESSOR USAGE ACCOUNTING USING WORK-RATE MEASUREMENTS - Accounting charges are assigned to workloads by measuring a relative use of computing resources by the workloads, then scaling the results using a determined work-rate for the corresponding workload. Usage metrics for the individual resources may be selectable for the resources being measured, and the work-rates may be determined from an analytical model or from an empirical model that determines work-rates from an indication of processor throughput. Under single-workload conditions on a platform, or other suitable conditions, a workload type may be used to select the particular usage metrics applied for the various resources. | 03-29-2012 |
20120079501 | Application Load Adaptive Processing Resource Allocation - The invention provides hardware-automated systems and methods for efficiently sharing a multi-core data processing system among a number of application software programs, by dynamically reallocating processing cores of the system among the application programs in an application processing load adaptive manner. The invention enables maximizing the whole system data processing throughput, while providing deterministic minimum system access levels for each of the applications. With invented techniques, each application on a shared multi-core computing system dynamically gets a maximized number of cores that it can utilize in parallel, so long as all applications on the system still get at least up to their entitled number of cores whenever their actual processing load so demands. The invention provides inherent security and isolation between applications, as each application resides in its dedicated system memory segments, and can safely use the shared processing system as if it was the sole application running on it. | 03-29-2012 |
20120084788 | COMPLEX EVENT DISTRIBUTING APPARATUS, COMPLEX EVENT DISTRIBUTING METHOD, AND COMPLEX EVENT DISTRIBUTING PROGRAM - A server calculates correlations between complex event processing processes performed by virtual machines (VMs) so as to detect events from streams using condition expressions for identifying the events. The server obtains the load status of each of the VMs. The server then detects a VM having a processing load exceeding a predetermined level based on the load status thus obtained. When a VM having a processing load exceeding a predetermined level is detected, the server distributes the complex event processing processes to the respective VMs based on the calculated correlations between the complex event processing processes. | 04-05-2012 |
20120084789 | System and Method for Optimizing the Evaluation of Task Dependency Graphs - One embodiment of the present invention is a technique for optimizing a task graph that specifies multiple tasks and the dependencies between the specified tasks. When optimizing the task graph, the optimization engine performs multiple iterations of runtime optimization operations on the task graph. At each iteration, an optimized task graph is generated based on a different task aggregation topology. The optimized task graph is then compiled and executed. Runtime statistics related to the execution are collected, and, in subsequent iterations, the task graph is further optimized based on the collected statistics. Once the optimization process is complete, the most optimal task graph topology that was identified during the process is used to generate an optimized task graph for execution. | 04-05-2012 |
20120096473 | MEMORY MAXIMIZATION IN A HIGH INPUT/OUTPUT VIRTUAL MACHINE ENVIRONMENT - A computer implemented method is provided, including monitoring the utilization of resources available within a compute node, wherein the resources include an input/output capacity, a processor capacity, and a memory capacity. The method further comprises allocating virtual machines to the compute node to maximize use of a first one of the resources; and then allocating an additional virtual machine to the compute node to increase the utilization of the resources other than the first one of the resources without over-allocating the first one of the resources. In a web server, the input/output capacity may be the resource to be maximized. However, unused memory capacity and/or processor capacity of the compute node may be used more effectively by identifying an additional virtual machine that is memory intensive or processor intensive to be allocated or migrated to the compute node. The additional virtual machine(s) may be identified in new workload requests or from analysis of virtual machines running on other compute nodes accessible over the network. | 04-19-2012 |
20120102501 | ADAPTIVE QUEUING METHODOLOGY FOR SYSTEM TASK MANAGEMENT - A task management methodology for a system having multiple processors and task queues adapts its queuing topology by monitoring queue pressure and adjusting the queue topology from a selection of at least two different queue topologies. The queue pressure may be periodically monitored and queues with different granularities selected. The methodology reduces contention when there is high pressure on the queues while also reducing the overhead of managing queues when there is less pressure on them. | 04-26-2012 |
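The adaptive queuing idea above can be illustrated with a small sketch. The pressure formula, the threshold, and the two-topology choice are assumptions made for this example, not details from the patent: when producers approach the consumers' aggregate capacity, switch to fine-grained per-processor queues to cut contention; when pressure is low, fall back to one shared queue to cut management overhead.

```python
# Hypothetical queue-topology selection driven by monitored queue pressure.

def choose_topology(enqueue_rate, dequeue_rate, n_processors, threshold=0.8):
    # Pressure: how close producers come to saturating all consumers.
    pressure = enqueue_rate / (dequeue_rate * n_processors)
    # High pressure -> per-processor queues (less contention);
    # low pressure  -> one shared queue (less management overhead).
    return "per_processor" if pressure > threshold else "shared"

hot = choose_topology(enqueue_rate=900, dequeue_rate=100, n_processors=4)
idle = choose_topology(enqueue_rate=100, dequeue_rate=100, n_processors=4)
```

Re-evaluating this choice periodically, as the abstract describes, lets the system move between the two topologies as the workload changes.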
20120110594 | LOAD BALANCING WHEN ASSIGNING OPERATIONS IN A PROCESSOR - A method and apparatus for assigning operations in a processor are provided. An incoming instruction is received. The incoming instruction is capable of being processed: only by a first processing unit (PU), only by a second PU or by either first and second PUs. The processing of first and second PUs is load balanced by assigning the received instructions capable of being processed by either the first and the second PUs based on a metric representing differential loads placed on the first and the second PUs. | 05-03-2012 |
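The assignment rule in the entry above reduces to a small dispatch function. The capability labels and the tie-breaking rule here are illustrative assumptions: instructions that only one processing unit can handle go to that unit, and flexible instructions go to whichever unit the load differential says is lighter.

```python
# Sketch of differential-load instruction assignment between two PUs.

def assign_instruction(capability, load_first, load_second):
    if capability == "first_only":
        return "first"
    if capability == "second_only":
        return "second"
    # "either": use the differential load metric between the two units.
    return "first" if load_first <= load_second else "second"

unit = assign_instruction("either", load_first=7, load_second=3)
```

Here the flexible instruction lands on the second unit because it currently carries less load, while fixed-capability instructions bypass the metric entirely.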
20120117571 | LOAD BALANCER AND FIREWALL SELF-PROVISIONING SYSTEM - A method and system may receive a request to configure a computing resource, such as a load balancer or firewall based on configuration information received from a user via a web portal. The configuration information may be stored and a subsequent request to commit the stored configuration information may be received. One or more jobs may be queued in a jobs database based on the request to commit the configuration information. The one or more jobs may be dequeued by a workflow engine and executed to configure the computing resource. | 05-10-2012 |
20120131593 | SYSTEM AND METHOD FOR COMPUTING WORKLOAD METADATA GENERATION, ANALYSIS, AND UTILIZATION - A method for managing computing resources includes generating a first workload metadata for a first workload, generating a second workload metadata for a second workload, and comparing the first workload and the second workload metadata against resource metadata. The method includes, based upon the comparison of workload metadata against resource metadata, identifying a potential conflict in resource requirements between the first workload and the computing resources available to the processing entity, and assigning the second workload for execution by one of the processing entities. The metadata characterize computing resources required by the associated workload. The first workload metadata is initially prioritized over the second workload metadata. The workloads are to be executed by processing entities. The resource metadata is available to the processing entities. The potential conflict in resource requirements does not exist between the resource requirements of the second workload and the computing resources of the processing entity. | 05-24-2012 |
20120131594 | SYSTEMS AND METHODS FOR GENERATING DYNAMICALLY CONFIGURABLE SUBSCRIPTION PARAMETERS FOR TEMPORARY MIGRATION OF PREDICTIVE USER WORKLOADS IN CLOUD NETWORK - Embodiments relate to systems and methods for generating dynamically configurable subscription parameters for the temporary migration of predictive user workloads in a cloud network. Aspects relate to platforms and techniques for analyzing overnight or other off-peak or temporary deployments of user workloads to underutilized host clouds. A cloud management system can capture usage history data for a user operating in a default deployment, such as a premise/cloud mix. A deployment engine can determine the resources required for the user's workload pattern, and examine corresponding resources available in a set of other geographically-dispersed host clouds. The host clouds can comprise clouds based in different time zones, so that cloud capacity during U.S. West Coast evening time or European overnight hours can be packaged and offered to U.S. East Coast users at reduced rates. The deployment engine can generate different sets of dynamic subscription terms or parameters to be offered to the user, such as different costs or service levels at staggered off-peak periods. | 05-24-2012 |
20120131595 | PARALLEL COLLISION DETECTION METHOD USING LOAD BALANCING AND PARALLEL DISTANCE COMPUTATION METHOD USING LOAD BALANCING - Disclosed herein is a parallel collision detection method using load balancing in order to detect collision between two objects of a polygon soup. The parallel collision detection method is processed in parallel using a plurality of threads. The parallel collision detection method includes traversing a Bounding Volume Traversal Tree (BVTT) using Bounding Volume Hierarchies (BVHs) related to the polygon soup in a depth-first or breadth-first search manner; recursively traversing the child nodes of an internal node (a parent node) when the currently traversed node is an internal node and the two Bounding Volumes (BVs) in the corresponding node overlap, and stopping traversal of the node when the two BVs do not overlap; and storing collision primitives at a leaf node when the currently traversed node is a leaf node and the collision primitives in the leaf node overlap. | 05-24-2012 |
20120151494 | METHOD FOR DETERMINING A NUMBER OF THREADS TO MAXIMIZE UTILIZATION OF A SYSTEM - A method for determining a number of threads to maximize system utilization. The method begins with determining a first value which corresponds to the current system utilization. Next the method determines a second value which corresponds to the current number of threads in the system. Next the method determines a third value which corresponds to the number of processor cores in the system. Next the method receives a fourth value from an end user which corresponds to the optimal system utilization the end user wishes to achieve. Next the method determines a fifth value which corresponds to the number of threads necessary to achieve the optimal system utilization value received from the end user. Finally, the method sends the fifth value to all running applications. | 06-14-2012 |
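The five values in the method above combine into one estimate. The scaling formula below is an assumption about how such an estimate could work, not the patent's disclosed computation: scale the current thread count by the ratio of the user's target utilization to the current utilization, and keep at least one thread per core.

```python
# Hypothetical thread-count estimate from current utilization, current
# thread count, core count, and the user's target utilization.

def threads_for_target(current_util, current_threads, n_cores, target_util):
    if current_util <= 0:
        return n_cores  # no signal yet; start with one thread per core
    estimate = round(current_threads * target_util / current_util)
    return max(estimate, n_cores)

# 40% utilization with 4 threads on 8 cores, targeting 80%:
needed = threads_for_target(0.40, 4, 8, 0.80)
```

The resulting value is what the method would then broadcast to all running applications so they can adjust their thread pools.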
20120159510 | HANDLING AND REPORTING OF OBJECT STATE TRANSITIONS ON A MULTIPROCESS ARCHITECTURE - Techniques are described for managing states of an object using a finite-state machine. The states may be used to indicate whether an object has been added, removed, requested or updated. Embodiments of the invention generally include dividing a process into at least two threads where a first thread changes the state of the object while the second thread performs the processing of the data found in the object. While the second thread is processing the data, the first thread may receive additional updates and change the states of the objects to inform the second thread that it should process the additional updates when the second thread becomes idle. | 06-21-2012 |
20120174117 | MEMORY-AWARE SCHEDULING FOR NUMA ARCHITECTURES - A topology reader may determine a topology of a Non-Uniform Memory Access (NUMA) architecture including a number of, and connections between, a plurality of sockets, each socket including one or more cores and at least one memory configured to execute a plurality of threads of a software application. A core list generator may generate, for each designated core of the NUMA architecture, and based on the topology, a proximity list listing non-designated cores in an order corresponding to a proximity of the non-designated cores to the designated core. A core selector may determine, at a target core and during the execution of the plurality of threads, that the target core is executing an insufficient number of the plurality of threads, and may select a source core at the target core, according to the proximity list associated therewith, for subsequent transfer of a transferred thread from the selected source core to the target core for execution thereon. | 07-05-2012 |
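The proximity-list mechanism above can be sketched for a two-socket machine. The topology encoding and selection rule here are illustrative assumptions: each core's list ranks same-socket cores before remote ones, and an underloaded target core picks the nearest core with surplus threads as the source for a transfer.

```python
# Sketch of NUMA proximity-ordered source-core selection.

def proximity_list(core, sockets):
    # Same-socket cores first, then remote ones, each group by core id.
    my_socket = sockets[core]
    others = [c for c in sockets if c != core]
    return sorted(others, key=lambda c: (sockets[c] != my_socket, c))

def select_source(target, sockets, thread_counts, min_threads=1):
    # First core on the proximity list with threads to spare.
    for candidate in proximity_list(target, sockets):
        if thread_counts[candidate] > min_threads:
            return candidate
    return None

sockets = {0: 0, 1: 0, 2: 1, 3: 1}   # core id -> socket id
threads = {0: 0, 1: 1, 2: 5, 3: 2}   # threads currently on each core
src = select_source(0, sockets, threads)
```

Core 0 prefers its socket-mate core 1, but core 1 has no surplus, so the search falls through to core 2 on the remote socket; the transferred thread would then execute on core 0.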
20120174118 | STORAGE APPARATUS AND LOAD DISTRIBUTION METHOD - A storage apparatus having plural control processors that interpret and process requests sent from a host computer includes a distribution judgment unit for judging, after a control processor receives a request sent from the host computer, whether or not to allocate processing relevant to the request from the control processor that received the request to another control processor, and a control processor selection unit for selecting an allocation target control processor if the distribution judgment unit judges to allocate the processing to another control processor. | 07-05-2012 |
20120180066 | VIRTUAL TAPE LIBRARY CLUSTER - Various embodiments for managing a virtual tape library cluster are provided. A virtual tape library system is enhanced by representing virtual tape resources in cluster nodes with a unique serial number. A least utilized cluster node is determined. One of the virtual tape resources represented within the least utilized cluster node is selected. | 07-12-2012 |
20120185867 | Optimizing The Deployment Of A Workload On A Distributed Processing System - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes. | 07-19-2012 |
20120185868 | WORKLOAD PLACEMENT ON AN OPTIMAL PLATFORM IN A NETWORKED COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach for optimizing workload placement in a networked computing environment (e.g., a cloud computing environment). Specifically, under embodiments of the present invention, a workload placement technique is applied to determine an optimal platform for handling an identified workload. The workload placement technique can comprise one or more of the following: a shadow placement technique whereby the workload is placed on multiple similar platforms substantially contemporaneously; a simultaneous placement technique whereby the workload is placed on multiple different platforms substantially contemporaneously; and/or a single platform placement technique whereby the workload is placed on a single platform at a given time. Once an optimal platform is identified, a workload timing method may be applied to determine when the workload should be placed thereon. The workload timing method can comprise one or more of the following: a time-based method whereby the workload is placed on the optimal platform at a predetermined time or time interval; and/or an event-based method whereby the workload is placed on the optimal platform based on an occurrence of one or more events external to the workload itself (e.g., a certain CPU or memory consumption, etc.). Once the workload is placed on the optimal platform, optimization data can be gathered for future assessments. | 07-19-2012 |
20120185869 | MULTIMEDIA PRE-PROCESSING APPARATUS AND METHOD FOR VIRTUAL MACHINE IN MULTICORE DEVICE - A multimedia data preprocessing apparatus for a virtual machine is provided. The multimedia data preprocessing apparatus includes a detection unit configured to detect multimedia data included in an application, a generation unit configured to generate a thread for processing the detected multimedia data, and an allocation unit configured to allocate the generated thread to an idle core. | 07-19-2012 |
20120185870 | All-to-All Comparisons on Architectures Having Limited Storage Space - Mechanisms for performing all-to-all comparisons on architectures having limited storage space are provided. The mechanisms determine a number of data elements to be included in each set of data elements to be sent to each processing element of a data processing system, and perform a comparison operation on at least one set of data elements. The comparison operation comprises sending a first request to main memory for transfer of a first set of data elements into a local memory associated with the processing element and sending a second request to main memory for transfer of a second set of data elements into the local memory. A pairwise comparison computation of the all-to-all comparison operation is performed at approximately the same time as the second set of data elements is being transferred from main memory to the local memory. | 07-19-2012 |
20120192200 | Load Balancing in Heterogeneous Computing Environments - Load balancing may be achieved in heterogeneous computing environments by first evaluating the operating environment and workload within that environment. Then, if energy usage is a constraint, energy usage per task for each device may be evaluated for the identified workload and operating environments. Work is scheduled on the device that maximizes the performance metric of the heterogeneous computing environment. | 07-26-2012 |
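The two-step decision above (apply the energy constraint if present, then maximize the performance metric) fits in a few lines. The device records, field names, and numbers below are assumptions for illustration, not data from the patent.

```python
# Sketch of device selection in a heterogeneous environment: filter by an
# energy-per-task budget when energy is a constraint, then maximize throughput.

def pick_device(devices, joules_budget=None):
    eligible = devices
    if joules_budget is not None:
        # Energy is a constraint: keep only devices within the per-task budget.
        eligible = [d for d in devices if d["joules_per_task"] <= joules_budget]
    # Schedule on the device that maximizes the performance metric.
    return max(eligible, key=lambda d: d["tasks_per_sec"])["name"]

devices = [
    {"name": "cpu", "tasks_per_sec": 100, "joules_per_task": 2.0},
    {"name": "gpu", "tasks_per_sec": 400, "joules_per_task": 5.0},
]
unconstrained = pick_device(devices)
constrained = pick_device(devices, joules_budget=3.0)
```

With no energy constraint the faster GPU wins; once a per-task energy budget applies, the work shifts to the CPU even though it is slower.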
20120192201 | Dynamic Work Partitioning on Heterogeneous Processing Devices - A method, system and article of manufacture for balancing a workload on heterogeneous processing devices. The method comprises accessing a memory storage of a processor of one type by a dequeuing entity associated with a processor of a different type, identifying a task from a plurality of tasks within the memory that can be processed by the processor of the different type, synchronizing a plurality of dequeuing entities capable of accessing the memory storage, and dequeuing the task from the memory storage. | 07-26-2012 |
20120198470 | COMPACT NODE ORDERED APPLICATION PLACEMENT IN A MULTIPROCESSOR COMPUTER - A multiprocessor computer system comprises a plurality of nodes, wherein the nodes are ordered using a snaking dimension-ordered numbering. An application placement module is operable to place an application in nodes with preference given to nodes ordered near one another. | 08-02-2012 |
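A snaking dimension-ordered numbering can be illustrated with a small sketch (Python; the 2-D grid shape is an assumption, since the abstract does not fix the topology): even rows are numbered left to right and odd rows right to left, so consecutively numbered nodes are always physically adjacent, which is what makes placement on nearby ordinals compact.

```python
def snake_order(rows, cols):
    """Number the nodes of a rows x cols grid in snaking order:
    even rows left-to-right, odd rows right-to-left, so that
    consecutive ordinals land on physically adjacent nodes."""
    order = []
    for r in range(rows):
        cells = [(r, c) for c in range(cols)]
        if r % 2 == 1:
            cells.reverse()  # reverse traversal direction on odd rows
        order.extend(cells)
    return order

# An application needing k nodes can simply take k consecutive
# entries of this ordering to get a compact placement.
order = snake_order(2, 3)
```

For a 2x3 grid this yields (0,0), (0,1), (0,2), (1,2), (1,1), (1,0): every step moves to a neighboring node.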
20120204187 | Hybrid Cloud Workload Management - A method, apparatus, and computer program product for managing a workload in a hybrid cloud. It is determined whether first data processing resources processing a portion of a workload are overloaded. Responsive to a determination that the first data processing resources are overloaded, second data processing resources are automatically provisioned and the portion of the workload is automatically moved to the second data processing resources for processing. The second data processing resources are data processing resources that are provided as a service on the hybrid cloud. Processing of a first portion of a workload being processed on first data processing resources of a hybrid cloud is monitored simultaneously with monitoring processing of a second portion of the workload being processed on second data processing resources of the hybrid cloud. The workload may be allocated automatically between the first portion and the second portion responsive to the simultaneous monitoring. | 08-09-2012 |
20120204188 | PROCESSOR THREAD LOAD BALANCING MANAGER - A processor thread load balancing manager employs an operating system of an information handling system (IHS) that determines a process tree of data sharing threads in an application that the IHS executes. The load balancing manager assigns a home processor to each thread of the executing application process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application within the home processor of a processor group. If ready or run queues of a prospective poaching processor are empty, the load balancing manager may move or poach a thread or threads from the home processor ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the information handling system (IHS). | 08-09-2012 |
20120216214 | MIXED OPERATING PERFORMANCE MODE LPAR CONFIGURATION - Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode. | 08-23-2012 |
20120222041 | TECHNIQUES FOR CLOUD BURSTING - Techniques for automated and controlled cloud migration or bursting are provided. A schema for a first cloud in a first cloud processing environment is used to evaluate metrics against thresholds defined in the schema. When a threshold is reached other metrics for other clouds in second cloud processing environments are evaluated and a second cloud processing environment is selected. Next, a second cloud is cloned in the selected second cloud processing environment for the first cloud and traffic associated with the first cloud is automatically migrated to the cloned second cloud. | 08-30-2012 |
20120222042 | MANAGEMENT OF HETEROGENEOUS WORKLOADS - Systems and methods for managing a system of heterogeneous workloads are provided. Work that enters the system is separated into a plurality of heterogeneous workloads. A plurality of high-level quality of service goals is gathered. At least one of the plurality of high-level quality of service goals corresponds to each of the plurality of heterogeneous workloads. A plurality of control functions are determined that are provided by virtualizations on one or more containers in which one or more of the plurality of heterogeneous workloads run. An expected utility of a plurality of settings of at least one of the plurality of control functions is determined in response to the plurality of high-level quality of service goals. At least one of the plurality of control functions is exercised in response to the expected utility to effect changes in the behavior of the system. | 08-30-2012 |
20120233625 | TECHNIQUES FOR WORKLOAD COORDINATION - Techniques for workload coordination are provided. An automated discovery service identifies resources with hardware and software specific dependencies for a workload. The dependencies are made generic and the workload and its configuration with the generic dependencies are packaged. At a target location, the packaged workload is presented and the generic dependencies automatically resolved with new hardware and software dependencies of the target location. The workload is then automatically populated in the target location. | 09-13-2012 |
20120233626 | SYSTEMS AND METHODS FOR TRANSPARENTLY OPTIMIZING WORKLOADS - Systems, methods, and media for transparently optimizing a workload of a containment abstraction are provided herein. Methods may include monitoring a workload of the containment abstraction, the containment abstraction being at least partially hardware bound, the workload corresponding to resource utilization of the containment abstraction, converting the containment abstraction from being at least partially hardware bound to being entirely central processing unit (CPU) bound by placing the containment abstraction in a memory store, based upon the workload, and allocating the workload of the containment abstraction across at least a portion of a data center to optimize the workload of the containment abstraction. | 09-13-2012 |
20120240129 | RANKING SERVICE UNITS TO PROVIDE AND PROTECT HIGHLY AVAILABLE SERVICES USING N+M REDUNDANCY MODELS - Among other things, embodiments described herein enable systems, e.g., Availability Management Forum (AMF) systems, having service units to operate with balanced loads both before and after the failure of one of the service units. A method for balancing standby workload assignments and active workload assignments for a group of service units in a system which employs an N+M redundancy model, wherein N service units are active service units and M service units are standby service units is described. An active workload that the N active service units need to handle is calculated and each of the N active service units in the group is provided with an active workload assignment based on the calculated active workload. Standby workload assignments are distributed among the M standby service units substantially equally. | 09-20-2012 |
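One reading of the N+M balancing this abstract describes can be sketched as follows (illustrative Python; the equal split of active load and the round-robin standby distribution are assumptions about what "substantially equally" means): the calculated active workload is divided across the N active units, and standby assignments for those units are dealt out across the M standby units.

```python
def balance_n_plus_m(total_active_load, n_active, m_standby):
    """Split the calculated active workload equally across the N active
    service units, then distribute one standby assignment per active
    unit round-robin across the M standby service units."""
    active = [total_active_load / n_active] * n_active
    standby = [[] for _ in range(m_standby)]
    for i in range(n_active):
        standby[i % m_standby].append(i)  # standby unit backs up active unit i
    return active, standby

active, standby = balance_n_plus_m(100.0, n_active=4, m_standby=2)
```

With a load of 100 over 4 active and 2 standby units, each active unit carries 25, and each standby unit covers two active units, so the system stays balanced both before and after a failover.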
20120240130 | VIRTUAL WORLD SUBGROUP DETERMINATION AND SEGMENTATION FOR PERFORMANCE SCALABILITY - A system and method for decreasing server load and, more particularly, for decreasing server load by automatically determining subgroups based on object interactions and computational expenditures. The system includes a plurality of servers; a subgroup optimization module configured to segment a plurality of objects into optimal subgroups; and a server transfer module configured to apportion one or more of the optimal subgroups between the plurality of servers based on a load of each of the plurality of servers. The method includes determining a relationship amongst a plurality of objects; segmenting the objects into optimized subgroups based on the relationships; and apportioning the optimized subgroups amongst a plurality of servers based on server load. | 09-20-2012 |
20120266179 | DYNAMIC MAPPING OF LOGICAL CORES - A processor that dynamically remaps logical cores to physical cores is disclosed. In one embodiment, the processor includes a plurality of physical cores, and is configured to store a mapping of logical cores to the plurality of physical cores. The processor further includes an assignment unit configured to remap the logical cores to the plurality of physical cores subsequent to a boot process of the processor. In some embodiments, the assignment unit is configured to remap the logical cores in response to receiving an indication that one or more of the plurality of physical cores have entered an idle state. The processor may be configured to load a first of the plurality of physical cores with an execution state of a second of the plurality of physical cores upon the first physical core exiting an idle state. | 10-18-2012 |
20120266180 | Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history. | 10-18-2012 |
20120266181 | SCALABLE PACKET PROCESSING SYSTEMS AND METHODS - A data processing architecture includes multiple processors connected in series between a load balancer and reorder logic. The load balancer is configured to receive data and distribute the data across the processors. Appropriate ones of the processors are configured to process the data. The reorder logic is configured to receive the data processed by the processors, reorder the data, and output the reordered data. | 10-18-2012 |
20120278813 | LOAD BALANCING - Efforts to avoid time-outs during execution of an application in a managed execution environment may be implemented by monitoring memory allocation. | 11-01-2012 |
20120284733 | Scheduling for Parallel Processing of Regionally-Constrained Placement Problem - Scheduling of parallel processing for regionally-constrained object placement selects between different balancing schemes. For a small number of movebounds, computations are assigned by balancing the placeable objects. For a small number of objects per movebound, computations are assigned by balancing the movebounds. If there are large numbers of movebounds and objects per movebound, both objects and movebounds are balanced amongst the processors. For object balancing, movebounds are assigned to a processor until an amortized number of objects for the processor exceeds a first limit above an ideal number, or the next movebound would raise the amortized number of objects above a second, greater limit. For object and movebound balancing, movebounds are sorted into descending order, then assigned in the descending order to host processors in successive rounds while reversing the processor order after each round. The invention provides a schedule in polynomial-time while retaining high quality of results. | 11-08-2012 |
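The object-and-movebound balancing pass in this abstract is a serpentine deal and can be sketched directly (illustrative Python; treating each movebound as a single object count is a simplification): movebounds are sorted into descending order of size and dealt to processors in rounds, reversing the processor order after each round so no processor always receives the largest remaining movebound.

```python
def serpentine_assign(movebound_sizes, num_procs):
    """Sort movebounds by object count (descending), then deal them to
    processors round by round, reversing processor order after each
    round to keep per-processor object totals balanced."""
    order = sorted(range(len(movebound_sizes)),
                   key=lambda i: movebound_sizes[i], reverse=True)
    assignment = [[] for _ in range(num_procs)]
    procs = list(range(num_procs))
    for start in range(0, len(order), num_procs):
        for p, mb in zip(procs, order[start:start + num_procs]):
            assignment[p].append(mb)
        procs.reverse()  # serpentine: reverse processor order each round
    return assignment

# Movebounds of sizes 9, 7, 5, 3 on 2 processors: each processor
# ends up with 12 objects (9+3 and 7+5).
assignment = serpentine_assign([9, 7, 5, 3], 2)
```

Like the abstract's scheme, this runs in polynomial time (dominated by the sort) while keeping the per-processor totals close.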
20120291044 | Routing Workloads Based on Relative Queue Lengths of Dispatchers - Mechanisms for distributing workload items to a plurality of dispatchers are provided. Each dispatcher is associated with a different computing system of a plurality of computing systems and workload items comprise workload items of a plurality of different workload types. A capacity value for each combination of workload type and computing system is obtained. For each combination of workload type and computing system, a queue length of a dispatcher associated with the corresponding computing system is obtained. For each combination of workload type and computing system, a dispatcher's relative share of incoming workloads is computed based on the queue length for the dispatcher associated with the computing system. In addition, incoming workload items are routed to a dispatcher, in the plurality of dispatchers, based on the calculated dispatcher's relative share for the dispatcher. | 11-15-2012 |
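The relative-share computation can be sketched as follows (illustrative Python; the inverse-queue-length formula is an assumption, since the abstract only says the share is computed from queue lengths): dispatchers with shorter queues receive proportionally more of the incoming workload items.

```python
def relative_shares(queue_lengths):
    """Compute each dispatcher's relative share of incoming work so that
    dispatchers with shorter queues receive proportionally more.
    Inverse queue length (+1 to handle empty queues) is one simple
    choice of weighting."""
    weights = [1.0 / (q + 1) for q in queue_lengths]
    total = sum(weights)
    return [w / total for w in weights]

# A dispatcher with an empty queue gets the largest share.
shares = relative_shares([0, 1, 3])
```

A router would then distribute incoming workload items of each type to dispatchers in proportion to these shares, per workload type and computing system.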
20120304191 | SYSTEMS AND METHODS FOR CLOUD DEPLOYMENT ENGINE FOR SELECTIVE WORKLOAD MIGRATION OR FEDERATION BASED ON WORKLOAD CONDITIONS - Embodiments relate to systems and methods for a cloud deployment engine for selective workload migration or federation based on workload conditions. A set of aggregate usage history data can record consumption of processor, software, or other resources subscribed to by one or more users in a host cloud or clouds. An entitlement engine can analyze the usage history data to identify a subscription margin and other trends or data reflecting short-term consumption trends. An associated deployment engine can analyze the short-term consumption trends, and generate a decision to either deploy any over-subscribed resources to a set of federated backup clouds, or to one or more new host clouds. In aspects, the decision to augment the capacity of the host cloud with either a cloud federation or a complete host cloud replacement can be based on a set of selection criteria, including the margin by which the resources are over-subscribed and/or whether the over-subscription is static, increasing or accelerating, among others. | 11-29-2012 |
20120304192 | LIFELINE-BASED GLOBAL LOAD BALANCING - Work-stealing is efficiently extended to distributed memory using low degree, low-diameter, fully-connected directed lifeline graphs. These lifeline graphs include k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, that node quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. Each quiesced node reactivates when work arrives from a lifeline, itself sharing this work with its incoming lifelines that are activated. Termination occurs when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination is detected automatically using the finish construct. | 11-29-2012 |
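The k-dimensional hypercube lifeline graphs this abstract cites have a compact construction (illustrative Python; assumes the node count is a power of two): each node's lifelines are the nodes reachable by flipping one bit of its identifier, giving degree and diameter of log2(n), which is the low-degree, low-diameter property the scheme relies on.

```python
def hypercube_lifelines(node, num_nodes):
    """Outgoing lifeline edges of `node` in a k-dimensional hypercube:
    flip each of the k = log2(num_nodes) identifier bits in turn."""
    k = num_nodes.bit_length() - 1  # assumes num_nodes is a power of two
    return [node ^ (1 << d) for d in range(k)]

# In an 8-node (3-cube) lifeline graph, node 5 (0b101) is wired
# to nodes 4 (0b100), 7 (0b111), and 1 (0b001).
lifelines = hypercube_lifelines(5, 8)
```

When a node quiesces after w failed steals, it notifies exactly these few outgoing edges; arriving work later reactivates it through an incoming lifeline, and quiescence of all nodes signals termination.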
20120304193 | Scheduling Applications For Execution On A Plurality Of Compute Nodes Of A Parallel Computer To Manage Temperature Of The Nodes During Execution - Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions. | 11-29-2012 |
20120311602 | STORAGE APPARATUS AND STORAGE APPARATUS MANAGEMENT METHOD - The overall processing function of a storage apparatus is improved by suitably migrating ownership. | 12-06-2012 |
20120311603 | STORAGE APPARATUS AND STORAGE APPARATUS MANAGEMENT METHOD - The overall processing performance of a storage apparatus is improved by migrating MPPK ownership with suitable timing. | 12-06-2012 |
20120324471 | CONTROL DEVICE, MANAGEMENT DEVICE, DATA PROCESSING METHOD OF CONTROL DEVICE, AND PROGRAM - A virtual server for measuring performance ( | 12-20-2012 |
20120331479 | LOAD BALANCING DEVICE FOR BIOMETRIC AUTHENTICATION SYSTEM - A load balancing device is provided that allocates, to one of a plurality of authentication servers, biometric authentication requests of users received from client terminals by comparing input biometric authentication data and registration target biometric authentication data so as to estimate a check process time, including storing a process time for an authentication request being processed for each of the authentication servers, and allocating a process for a biometric authentication request from the client terminal to an authentication server having a process time that is short by estimating a check process time on the basis of a quality of the input biometric data and a quality of registration target biometric data and referring to the process time stored in the storage unit for each authentication server when the biometric authentication request has been received from the client terminal. | 12-27-2012 |
20130007764 | ASSIGNING WORK TO A PROCESSING ENTITY ACCORDING TO NON-LINEAR REPRESENTATIONS OF LOADINGS - To perform load balancing across plural processing entities, load level indications associated with plural processing entities are received. The load level indications are representations based on applying a concave function on loadings of the plural processing entities. A processing entity is selected from among the plural processing entities to assign work according to the load level indications. | 01-03-2013 |
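The non-linear load representation can be sketched concretely (illustrative Python; the abstract requires only that the representation apply a concave function to the loadings, so the choice of logarithm here is an assumption): concavity compresses differences among heavily loaded entities, so selection discriminates most sharply among the lightly loaded ones.

```python
import math

def load_level(loading):
    """Load level indication: a concave function of the raw loading.
    log1p is one concave choice; it compresses differences between
    already-busy entities."""
    return math.log1p(loading)

def select_entity(loadings):
    """Assign work to the processing entity whose load level
    indication is lowest."""
    levels = [load_level(x) for x in loadings]
    return levels.index(min(levels))

chosen = select_entity([10.0, 2.0, 7.0])  # the least-loaded entity
```

Because log1p is monotonic, the least-loaded entity still wins; the concave shape matters when load indications are aggregated or compared across heterogeneous entities.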
20130024871 | Thread Management in Parallel Processes - A method and system are provided for thread management in parallel processes in a multi-core or multi-node system. The method includes receiving monitored hardware metrics information from the multiple cores or multiple nodes on which processes are executed, receiving monitored process and thread information; and globally monitoring the processing across the multiple cores or multiple nodes. The method further includes analyzing the monitored information to minimize imbalances between the multiple cores and/or to improve core or node exploitation and dynamically adjusting the number of threads per process based on the analysis. | 01-24-2013 |
20130024872 | Scheduling a Parallel Job in a System of Virtual Containers - Methods and apparatus are provided for scheduling parallel jobs in a system of virtual containers. At least one parallel job is assigned to a plurality of containers competing for a total capacity of a larger container, wherein the at least one parallel job comprises a plurality of tasks. The assignment method comprises determining a current utilization and a potential free capacity for each of the plurality of competing containers; and assigning the tasks to one of the plurality of containers based on the potential free capacities and at least one predefined scheduling policy. The predefined scheduling policy may comprise, for example, one or more of load balancing, server consolidation, maximizing the current utilizations, minimizing a response time of the parallel job and satisfying quality of service requirements. The load balancing can be achieved, for example, by assigning a task to a container having a highest potential free capacity. | 01-24-2013 |
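The load-balancing policy named at the end of this abstract, assigning a task to the container with the highest potential free capacity, can be sketched as follows (illustrative Python; the exact free-capacity formula, here the container's unused guaranteed share plus an equal split of the larger container's remaining headroom, is an assumption):

```python
def assign_task(containers, total_capacity):
    """Pick the container with the highest potential free capacity:
    its unused guaranteed share plus an equal split of whatever
    capacity the enclosing container has not yet handed out."""
    used = sum(c["utilization"] for c in containers)
    headroom = max(total_capacity - used, 0) / len(containers)

    def potential_free(c):
        return max(c["share"] - c["utilization"], 0) + headroom

    return max(range(len(containers)),
               key=lambda i: potential_free(containers[i]))

containers = [{"share": 4, "utilization": 3},
              {"share": 4, "utilization": 1}]
target = assign_task(containers, total_capacity=10)
```

Other scheduling policies from the abstract (consolidation, response-time minimization) would simply swap in a different key function.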
20130031562 | MECHANISM FOR FACILITATING DYNAMIC LOAD BALANCING AT APPLICATION SERVERS IN AN ON-DEMAND SERVICES ENVIRONMENT - In accordance with embodiments, there are provided mechanisms and methods for facilitating dynamic load balancing at application servers in an on-demand services environment. In one embodiment and by way of example, a method includes polling a plurality of application servers for status, receiving status from each of the plurality of application servers, assigning a priority level to each of the plurality of application servers based on its corresponding status, and facilitating load balancing at the plurality of application servers based on their corresponding priority levels. | 01-31-2013 |
20130031563 | STORAGE SYSTEM - The storage system includes a progress status detection unit that detects respective progress statuses representing proportions of the amounts of processing performed by respective processing units to the amount of processing performed by the entire storage system, each of the processing units being implemented in the storage system and performing a predetermined task; a target value setting unit that sets target values of processing states of the processing units, based on the detected progress statuses of the respective processing units and ideal values of the progress statuses which are preset for the respective processing units; and a processing operation controlling unit that controls the processing states of the processing units such that the processing states of the processing units meet the set target values. | 01-31-2013 |
20130047165 | Context-Aware Request Dispatching in Clustered Environments - The present disclosure involves systems, software, and computer implemented methods for providing context-aware request dispatching in a clustered environment. One process includes operations for receiving an event at a first computer node. The contents of the event are analyzed to determine a target process instance for handling the event. A target computer node hosting the target process instance is determined, and the event is sent to the target computer node for handling by the target process instance. | 02-21-2013 |
20130047166 | Systems and Methods for Distributing an Aging Burden Among Processor Cores - Systems and methods are presented for reducing the impact of high load and aging on processor cores in a processor. A Power Management Unit (PMU) can monitor aging, temperature, and increased load on the processor cores. The PMU instructs the processor to take action such that aging, temperature, and/or increased load are approximately evenly distributed across the processor cores, so that the processor can continue to efficiently process instructions. | 02-21-2013 |
20130061237 | Switching Tasks Between Heterogeneous Cores - The present disclosure describes techniques for switching tasks between heterogeneous cores. In some aspects it is determined that a task being executed by a first core of a processor can be executed by a second core of a processor, the second core having an instruction set that is different from that of the first core, and execution of the task is switched from the first core to the second core effective to decrease an amount of energy consumed by the processor. | 03-07-2013 |
20130061238 | OPTIMIZING THE DEPLOYMENT OF A WORKLOAD ON A DISTRIBUTED PROCESSING SYSTEM - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes. | 03-07-2013 |
20130081048 | POWER CONTROL APPARATUS, POWER CONTROL METHOD, AND COMPUTER PRODUCT - A power control apparatus includes a processor that causes thermal fluid analysis of the amount of increase in power consumption for cooling a plurality of servers, where the increase in power consumption is consequent to an increase in the volume of tasks at each server among the servers. Based on analysis results obtained by the thermal fluid analysis, the processor selects from among the servers, a server to execute a task and causes the selected server to execute the task. | 03-28-2013 |
20130086593 | AUTOMATED WORKLOAD PERFORMANCE AND AVAILABILITY OPTIMIZATION BASED ON HARDWARE AFFINITY - A method, apparatus, and program product deploy a workload on a host within a computer system having a plurality of hosts. Different hosts may be physically located in proximity to different resources, such as storage and network I/O modules, and therefore exhibit different latency when accessing the resources required by the workload. Eligible hosts within the system are evaluated for their capacity to take on a given workload, then scored on the basis of their proximity to the resources required by the workload. The workload is deployed on a host having sufficient capacity to run it, as well as a high affinity score. | 04-04-2013 |
20130091509 | OFF-LOADING OF PROCESSING FROM A PROCESSOR BLADE TO STORAGE BLADES - A processor blade determines whether a selected processing task is to be off-loaded to a storage blade for processing. The selected processing task is off-loaded to the storage blade via a planar bus communication path, in response to determining that the selected processing task is to be off-loaded to the storage blade. The off-loaded selected processing task is processed in the storage blade. The storage blade communicates the results of the processing of the off-loaded selected processing task to the processor blade. | 04-11-2013 |
20130104143 | RUN-TIME ALLOCATION OF FUNCTIONS TO A HARDWARE ACCELERATOR - An accelerator work allocation mechanism determines at run-time which functions to allocate to a hardware accelerator based on a defined accelerator policy, and based on an analysis performed at run-time. The analysis includes reading the accelerator policy, and determining whether a particular function satisfies the accelerator policy. If so, the function is allocated to the hardware accelerator. If not, the function is allocated to the processor. | 04-25-2013 |
20130111494 | MANAGING WORKLOAD AT A DATA CENTER | 05-02-2013 |
20130111495 | Load Balancing Servers | 05-02-2013 |
20130125133 | System and Method for Load Balancing of Fully Strict Thread-Level Parallel Programs - A system and method for executing fully strict thread-level parallel programs and performing load balancing between concurrently executing threads may allow threads to efficiently distribute work among themselves. A parent function of a thread may spawn children on one or more processors, pushing a stack frame onto a deque, then may sync by determining whether its children remain in the deque. If not, and/or if not all stolen children have returned, the thread may abandon its stack as an orphan, acquire an empty stack, and begin stealing work from other threads. Stealing work may include identifying an element in a deque of another thread, removing the element from the deque, and executing the associated child function. If this is the last child of a parent on the other thread's orphan stack, the thread may release its stack, adopt the orphan stack of the other thread, and continue its execution. | 05-16-2013 |
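The deque discipline underlying this kind of work stealing can be sketched sequentially (illustrative Python; a real implementation needs atomic operations or locks for concurrent owners and thieves, which this sketch omits): the owning thread pushes and pops frames at one end, preserving locality, while thieves remove from the opposite end, taking the oldest work.

```python
from collections import deque

class WorkDeque:
    """Sequential sketch of a work-stealing deque: the owner works
    LIFO at the bottom; thieves steal FIFO from the top."""
    def __init__(self):
        self._frames = deque()

    def push(self, frame):
        """Owner: spawn a child by pushing its stack frame."""
        self._frames.append(frame)

    def pop(self):
        """Owner: at a sync, resume the youngest remaining child."""
        return self._frames.pop() if self._frames else None

    def steal(self):
        """Thief: remove the oldest frame from the other end."""
        return self._frames.popleft() if self._frames else None

d = WorkDeque()
for child in ("c1", "c2", "c3"):
    d.push(child)
```

After the three pushes, the owner's next pop returns "c3" while a thief's steal returns "c1"; an empty pop is the cue in the abstract's scheme for the thread to abandon its stack as an orphan and start stealing elsewhere.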
20130132971 | SYSTEM, METHOD AND PROGRAM PRODUCT FOR STREAMLINED VIRTUAL MACHINE DESKTOP DISPLAY - A shared resource system, method of updating client displays and computer program products therefor. At least one client device locally displays activity with resources shared with the client device. A management system on provider computers that is providing resources shared by the client devices selectively generates prioritized display updates. The management system provides updates to respective client devices according to update priority. Updates may also be ordered for network load balancing. | 05-23-2013 |
20130132972 | THERMALLY DRIVEN WORKLOAD SCHEDULING IN A HETEROGENEOUS MULTI-PROCESSOR SYSTEM ON A CHIP - Various embodiments of methods and systems for thermally aware scheduling of workloads in a portable computing device that contains a heterogeneous, multi-processor system on a chip (“SoC”) are disclosed. Because individual processing components in a heterogeneous, multi-processor SoC may exhibit different processing efficiencies at a given temperature, and because more than one of the processing components may be capable of processing a given block of code, thermally aware workload scheduling techniques that compare performance curves of the individual processing components at their measured operating temperatures can be leveraged to optimize quality of service (“QoS”) by allocating workloads in real time, or near real time, to the processing components best positioned to efficiently process the block of code. | 05-23-2013 |
20130132973 | SYSTEM AND METHOD OF DYNAMICALLY CONTROLLING A PROCESSOR - A method of executing a dynamic clock and voltage scaling (DCVS) algorithm in a central processing unit (CPU) is disclosed and may include monitoring CPU activity and determining whether a workload is designated as a special workload when the workload is added to the CPU activity. | 05-23-2013 |
20130139175 | PROCESS MAPPING PARALLEL COMPUTING - A method of mapping processes to processors in a parallel computing environment where a parallel application is to be run on a cluster of nodes wherein at least one of the nodes has multiple processors sharing a common memory, the method comprising using compiler based communication analysis to map Message Passing Interface processes to processors on the nodes, whereby at least some more heavily communicating processes are mapped to processors within nodes. Other methods, apparatus, and computer readable media are also provided. | 05-30-2013 |
20130139176 | SCHEDULING FOR REAL-TIME AND QUALITY OF SERVICE SUPPORT ON MULTICORE SYSTEMS - In a first embodiment of the present invention, a method of assigning tasks in a multicore electronic device is provided, the method comprising: receiving a set of tasks; ordering the tasks in non-increasing order of a utilization value of each task; partitioning the ordered tasks using a schedulability-centric algorithm; repartitioning the partitioned ordered tasks by reordering the partitioned ordered tasks in non-decreasing order of the utilization value of each task and partitioning the partitioned reordered tasks using a load-balancing-centric algorithm; and assigning the repartitioned tasks to one or more cores of the multicore electronic device based on results of the repartitioning. | 05-30-2013 |
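The two partitioning passes in this abstract can be sketched with standard bin-packing heuristics (illustrative Python; first-fit for the schedulability-centric pass and worst-fit for the load-balancing-centric pass are assumptions, since the abstract names only the task orderings): tasks are sorted by utilization, non-increasing for the first pass and non-decreasing for the second.

```python
def partition(utilizations, num_cores, order_desc):
    """Partition task utilizations onto cores. Descending order with
    first-fit approximates a schedulability-centric pass (cores capped
    at utilization 1.0); ascending order with worst-fit (least-loaded
    core) approximates the load-balancing-centric repartitioning."""
    tasks = sorted(utilizations, reverse=order_desc)
    cores = [[] for _ in range(num_cores)]
    loads = [0.0] * num_cores
    for u in tasks:
        if order_desc:
            # first-fit: first core where the task still fits
            target = next((i for i in range(num_cores)
                           if loads[i] + u <= 1.0),
                          min(range(num_cores), key=loads.__getitem__))
        else:
            # worst-fit: currently least-loaded core
            target = min(range(num_cores), key=loads.__getitem__)
        cores[target].append(u)
        loads[target] += u
    return cores

packed = partition([0.6, 0.5, 0.3, 0.2], 2, order_desc=True)
balanced = partition([0.6, 0.5, 0.3, 0.2], 2, order_desc=False)
```

The first call packs tasks for schedulability; the second spreads the same task set so core loads end up closer together (0.7 and 0.9 here), matching the abstract's two-phase intent.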
20130160024 | Dynamic Load Balancing for Complex Event Processing - Disclosed herein are methods, systems, and computer readable storage media for performing load balancing actions in a complex event processing system. Static statistics of a complex event processing node, dynamic statistics of the complex event processing node, and project statistics for projects executing on the complex event processing node are aggregated. A determination is made as to whether the aggregated statistics satisfy a condition. A load balancing action may be performed, based on the determination. | 06-20-2013 |
20130167153 | INFORMATION PROCESSING SYSTEM FOR DATA TRANSFER - A disclosed method includes: determining whether a value of a load caused by a transfer processing to transmit data received from first processing apparatuses to second processing apparatuses in response to a request from the second processing apparatuses exceeds a threshold; upon determining that the value of the load exceeds the threshold, counting, for each first processing apparatus, the number of second processing apparatuses that request data transmitted by the first processing apparatus; identifying a first processing apparatus that is a transmission source of data transferred in the transfer processing to be allocated to another transfer apparatus of plural transfer apparatuses, based on the counted number; and transmitting a change request requesting that the transfer processing of data transmitted by the identified first processing apparatus is to be allocated to the another transfer apparatus, to a management apparatus managing allocation of the transfer processing for the plural transfer apparatuses. | 06-27-2013 |
20130167154 | ENERGY EFFICIENT JOB SCHEDULING IN HETEROGENEOUS CHIP MULTIPROCESSORS BASED ON DYNAMIC PROGRAM BEHAVIOR - Methods for efficient job scheduling in a heterogeneous chip multiprocessor that include logic comparisons of performance metrics to determine if programs should be moved from an advanced core to a simple core or vice versa. | 06-27-2013 |
20130174176 | WORKLOAD MANAGEMENT IN A DATA STORAGE SYSTEM - According to certain aspects, the presently disclosed subject matter includes a method, system and apparatus, for managing a plurality of disk drives in a storage system. The workload of at least one disk drive among the plurality of disk drives is monitored, wherein the monitoring comprises receiving data indicative of a temperature of the at least one disk drive. In case the measured temperature matches a predefined criterion, the modification of workload distribution across the plurality of disk drives is enabled, in order to reduce workload of the at least one disk drive. | 07-04-2013 |
20130174177 | LOAD-AWARE LOAD-BALANCING CLUSTER - A load-aware load-balancing cluster includes a switch having a plurality of ports; and a plurality of servers connected to at least some of the plurality of ports of the switch. Each server is addressable by the same virtual Internet Protocol (VIP) address. Each server in the cluster has a mechanism constructed and adapted to determine the particular server's own measured load; convert the particular server's own measured load to a corresponding own particular load category of a plurality of load categories; provide the particular server's own particular load category to other servers of the plurality of servers; obtain load category information from other servers of the plurality of servers; and maintain, as an indication of server load of each of the plurality of servers, the particular server's own particular load category and the load category information from the other servers. | 07-04-2013 |
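The coarse load-category exchange described above can be sketched as follows. The category names, the thresholds, and the tie-breaking rule are illustrative assumptions; the abstract does not specify them.

```python
# Hypothetical load categories and thresholds; each server shares only its
# category (not its raw load) with its peers.
CATEGORIES = ["idle", "light", "busy", "overloaded"]
THRESHOLDS = [0.25, 0.50, 0.85]   # upper bounds of the first three categories

def load_category(measured_load):
    """Convert a raw measured load in [0, 1] to a coarse category index."""
    for i, bound in enumerate(THRESHOLDS):
        if measured_load < bound:
            return i
    return len(THRESHOLDS)

def pick_server(category_table):
    """Choose a server advertising the lowest load category.
    category_table maps server name -> category index."""
    return min(category_table, key=category_table.get)
```

A server measuring 0.30 load would advertise itself as "light", and a request router holding the peers' categories can pick the least-loaded server without ever seeing raw load numbers.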
20130174178 | AUTOMATED TEST CYCLE ESTIMATION SYSTEM AND METHOD - A system and method is disclosed to estimate both the time and the number of resources required to execute a test suite or a subset of a test suite in parallel, with the objective of providing a balanced workload distribution. The present invention partitions the test suite for parallelization, given the dependencies that exist between test cases and test execution time. | 07-04-2013 |
20130185731 | DYNAMIC DISTRIBUTION OF NODES ON A MULTI-NODE COMPUTER SYSTEM - I/O nodes are dynamically distributed on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job and then dynamically adjusts the I/O node distribution based on the I/O performance of the executing job. | 07-18-2013 |
20130191842 | PROVISIONING TENANTS TO MULTI-TENANT CAPABLE SERVICES - The present invention extends to methods, systems, and computer program products for implementing a tenant provisioning system in a multi-tenancy architecture using a single provisioning master in the architecture, and a data center provisioner in each data center in the architecture. The provisioning master receives user requests to provision a tenant of a service and routes such requests to an appropriate data center provisioner. Each service in the multi-tenancy architecture implements a common interface by which the corresponding data center provisioner can obtain a common indication of load from each different service deployed in the data center thus facilitating the selection of a scale unit on which a tenant is provisioned. The common interface also enables a service to dynamically register (i.e. without redeploying the tenant provisioning system) with the provisioning master as a multi-tenancy service by registering an endpoint address with the provisioning master. | 07-25-2013 |
20130191843 | SYSTEM AND METHOD FOR JOB SCHEDULING OPTIMIZATION - A system and computer-implemented method for generating an optimized allocation of a plurality of tasks across a plurality of processors or slots for processing or execution in a distributed computing environment. In a cloud computing environment implementing a MapReduce framework, the system and computer-implemented method may be used to schedule map or reduce tasks to processors or slots on the network such that the tasks are matched to processors or slots in a data locality aware fashion wherein the suitability of a node and the characteristics of the task are accounted for using a minimum cost flow function. | 07-25-2013 |
20130191844 | MANAGEMENT OF THREADS WITHIN A COMPUTING ENVIRONMENT - Threads of a computing environment are managed to improve system performance. Threads are migrated between processors to take advantage of single thread processing mode, when possible. As an example, inactive threads are migrated from one or more processors, potentially freeing-up one or more processors to execute an active thread. Active threads are migrated from one processor to another to transform multiple threading mode processors to single thread mode processors. | 07-25-2013 |
20130191845 | LOAD CONTROL DEVICE AND LOAD CONTROL METHOD - A load control device controlling a load of an executed program includes an arithmetic processing unit configured to execute the program, a load detection unit configured to detect a load factor of the arithmetic processing unit, a load-difference detection unit configured to obtain a difference between a predetermined load factor and the load factor detected by the load detection unit and a load controller configured to control, for a predetermined time, the start or stop of the program executed by the arithmetic processing unit so that the arithmetic processing unit has the predetermined load factor on the basis of the difference detected by the load-difference detection unit. | 07-25-2013 |
20130198759 | CONTROLLING WORK DISTRIBUTION FOR PROCESSING TASKS - A technique for controlling the distribution of compute task processing in a multi-threaded system encodes each processing task as task metadata (TMD) stored in memory. The TMD includes work distribution parameters specifying how the processing task should be distributed for processing. Scheduling circuitry selects a task for execution when entries of a work queue for the task have been written. The work distribution parameters may define a number of work queue entries needed before a "cooperative thread array" ("CTA") may be launched to process the work queue entries according to the compute task. The work distribution parameters may define a number of CTAs that are launched to process the same work queue entries. Finally, the work distribution parameters may define a step size that is used to update pointers to the work queue entries. | 08-01-2013 |
20130219405 | APPARATUS AND METHOD FOR MANAGING DATA STREAM DISTRIBUTED PARALLEL PROCESSING SERVICE - Disclosed herein are an apparatus and method for managing a data stream distributed parallel processing service. The apparatus includes a service management unit, a Quality of Service (QoS) monitoring unit, and a scheduling unit. The service management unit registers a plurality of tasks constituting the data stream distributed parallel processing service. The QoS monitoring unit gathers information about the load of the plurality of tasks and information about the load of a plurality of nodes constituting a cluster which provides the data stream distributed parallel processing service. The scheduling unit arranges the plurality of tasks by distributing the plurality of tasks among the plurality of nodes based on the information about the load of the plurality of tasks and the information about the load of the plurality of nodes. | 08-22-2013 |
20130219406 | COMPUTER SYSTEM, JOB EXECUTION MANAGEMENT METHOD, AND PROGRAM - In a computer system of the present invention, whether or not master data has been updated is managed for each division key as master data management information. If the master data has been updated, a job is re-executed, but when the job is re-executed, data is divided using only a division key corresponding to updated master data, and thereby a sub-job which is a re-execution target is localized with the division key unit so as to re-execute a job (refer to FIG. | 08-22-2013 |
20130219407 | OPTIMIZED JOB SCHEDULING AND EXECUTION IN A DISTRIBUTED COMPUTING GRID - A disclosed example involves determining whether there is at least one valid combination of nodes and links from the network of nodes with capability and capacity over time to complete a computer-executable job by a deadline. A total cost combination of nodes and links is selected from among the at least one valid combination of nodes and links with the capability and capacity over time to complete the computer-executable job by the deadline. The computer-executable job is scheduled to be executed on at least one selected node. The scheduling is based on compiled instructions comprising the computer-executable job. At least some of the link capacity of at least one of the links connected to the at least one selected node is reserved, to match a job transport capacity requirement of the computer-executable job. | 08-22-2013 |
20130232504 | METHOD AND APPARATUS FOR MANAGING PROCESSING RESOURCES IN A DISTRIBUTED PROCESSING SYSTEM - In one aspect, the present invention reduces average power consumption in a distributed processing system by concentrating an overall processing load to the minimum number of processing units required to maintain a defined level of processing redundancy. When the required number of active processing units is fewer than all available processing units, the inactive processing units may be held in a reduced-power condition. The present invention thereby maintains the defined level of processing redundancy for reallocating jobs responsive to the failure of one of the active processing units, while reducing power consumption and simplifying job allocation and re-allocation when expanding or shrinking the active set of processing units responsive to changing processing load. As a non-limiting example, the distributed processing system is implemented within a telecommunications network router or other apparatus having a configured set of processing cards, such as control-plane processing cards. | 09-05-2013 |
20130232505 | USING GATHERED SYSTEM ACTIVITY STATISTICS TO DETERMINE WHEN TO SCHEDULE A PROCEDURE - Provided are a method, system, and computer program product for using gathered system activity statistics to determine when to schedule a procedure. Selection is made of one of at least one lull window having a plurality of consecutive time slots each having an activity value lower than a threshold point comprising a low activity level during time slots within a distribution of activity values of the time slots over recurring time periods. The procedure in the computer system is scheduled to be performed during the time slots in the lull window in a future time period. | 09-05-2013 |
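Finding a lull window, as described above, amounts to locating maximal runs of consecutive time slots whose activity falls below a threshold. A minimal sketch, with the slot representation and threshold semantics assumed for illustration:

```python
def find_lull_windows(activity, threshold):
    """Return (start, end) index pairs of maximal runs of consecutive
    time slots whose activity value is below the threshold."""
    windows, start = [], None
    for i, value in enumerate(activity):
        if value < threshold:
            if start is None:
                start = i          # a new low-activity run begins
        elif start is not None:
            windows.append((start, i - 1))   # run ended at previous slot
            start = None
    if start is not None:          # run extends to the last slot
        windows.append((start, len(activity) - 1))
    return windows
```

A scheduler could then pick, for example, the longest window returned and place the procedure in the corresponding future time slots.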
20130239119 | Dynamic Processor Mapping for Virtual Machine Network Traffic Queues - An algorithm for dynamically adjusting the number of processors servicing Virtual Machine Queues (VMQ) and the mapping of the VMQ to the processors based on network load and processor usage in the system. The algorithm determines the total load on a processor and, depending on whether the total load exceeds or falls below a threshold, respectively, the algorithm moves at least one of the VMQs to a different processor based on certain criteria such as whether the destination processor is the home processor to the VMQ or whether it shares a common NUMA node with the VMQ. By doing so, better I/O throughput and lower power consumption can be achieved. | 09-12-2013 |
20130247067 | GPU Compute Optimization Via Wavefront Reforming - Methods and systems are provided for graphics processing unit optimization via wavefront reforming including queuing one or more work-items of a wavefront into a plurality of queues of a compute unit. Each queue is associated with a particular processor within the compute unit. A plurality of work passes are performed. A determination is made as to which of the plurality of queues are below a threshold amount of work-items. One or more work-items remaining in the other queues are redistributed to the below-threshold queues. A subsequent work pass is performed. The determining, redistributing, and performing of a subsequent work pass are repeated until all the queues are empty. | 09-19-2013 |
20130247068 | LOAD BALANCING METHOD AND MULTI-CORE SYSTEM - A multi-core system includes at least three cores, a load comparator and a load migrator. The comparator simultaneously compares at least three loads of the at least three cores to detect a maximum load and a minimum load. The load migrator determines a first core having the maximum load as a source core and a second core having the minimum load as a target core of the at least three cores to migrate tasks from the source core to the target core. | 09-19-2013 |
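The max/min comparison-and-migration step in the abstract above can be sketched in a few lines. For simplicity this sketch treats each core's load as an integer count of migratable units rather than concrete tasks, which is an illustrative assumption:

```python
def rebalance_step(loads):
    """One balancing step: simultaneously find the maximum- and
    minimum-loaded cores, then migrate one unit of load from the
    source (max) core to the target (min) core."""
    src = loads.index(max(loads))
    dst = loads.index(min(loads))
    if loads[src] - loads[dst] <= 1:
        return list(loads)        # already balanced; nothing to migrate
    loads = list(loads)
    loads[src] -= 1
    loads[dst] += 1
    return loads
```

Repeating the step drives the spread between the most- and least-loaded cores down to at most one unit.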
20130254778 | DECENTRALIZED LOAD DISTRIBUTION TO REDUCE POWER AND/OR COOLING COSTS IN AN EVENT-DRIVEN SYSTEM - A computer program product and computer readable storage medium directed to decentralized load placement in an event-driven system so as to minimize energy and cooling related costs. Included are receiving a data flow to be processed by a plurality of tasks at a plurality of nodes in the event-driven system having stateful and stateless event processing components, wherein the plurality of tasks are selected from the group consisting of hierarchical tasks (a task that is dependent on the output of another task), nonhierarchical tasks (a task that is not dependent on the output of another task) and mixtures thereof. Nodes are considered for quiescing whose current tasks can migrate to other nodes while meeting load distribution and energy efficiency parameters and the expected duration of the quiesce provides benefits commensurate with the costs of quiesce and later restart. | 09-26-2013 |
20130263151 | Consistent Hashing Table for Workload Distribution - Described is a technology by which a consistent hashing table of bins maintains values representing nodes of a distributed system. An assignment stage uses a consistent hashing function and a selection algorithm to assign values that represent the nodes to the bins. In an independent mapping stage, a mapping mechanism deterministically maps an object identifier/key to one of the bins as a mapped-to bin. | 10-03-2013 |
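The two independent stages described above (an assignment stage that fills bins with node values, and a mapping stage that deterministically maps keys to bins) can be sketched as follows. The abstract leaves the selection algorithm unspecified; this sketch assumes a rendezvous-style highest-hash selection, which is an illustrative choice.

```python
import hashlib

def _hash(value, salt=""):
    """Stable hash, independent of Python's per-process hash seed."""
    digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
    return int(digest, 16)

def assign_nodes_to_bins(nodes, num_bins):
    """Assignment stage: each bin is claimed by the node with the
    highest salted hash for that bin (assumed selection algorithm)."""
    return [max(nodes, key=lambda n: _hash(n, salt=str(b)))
            for b in range(num_bins)]

def map_key(key, bins):
    """Mapping stage: deterministically map an object key to a bin."""
    return _hash(key) % len(bins)

def lookup(key, bins):
    """Resolve an object key to the node owning its mapped-to bin."""
    return bins[map_key(key, bins)]
```

Because the mapping stage depends only on the key and the bin count, the same key always resolves to the same bin regardless of which nodes currently occupy the bins.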
20130283289 | ENVIRONMENTALLY AWARE LOAD-BALANCING - A method and associated systems for the environmentally aware load-balancing of components of a multi-component power-consuming system. The environmentally aware load-balancer receives continually updated values from at least two environmental sensors that monitor and report the values of environmental metrics that characterize components of the power-consuming system and the environments within which those components are located. When the load-balancer receives a task request directed to the power-consuming system, the load-balancer selects a balanced workload allocation as a function of the values of the received environmental metrics and communicates that balanced workload allocation to a routing mechanism. The routing mechanism then uses the communicated balanced workload allocation to determine which component or components of the power-consuming system should receive the task request. | 10-24-2013 |
20130298137 | MULTI-TASK SCHEDULING METHOD AND MULTI-CORE PROCESSOR SYSTEM - A multi-task scheduling method includes assigning a first thread to a first processor; detecting a second thread that is executed after the first thread; calculating based on a load of a processor that is assigned a third thread that generates the second thread, a first time that lasts until a start of the second thread; calculating a second time that lasts until completion of execution of the first thread; and changing a first time slice of the first processor to a second time slice when the second time is greater than the first time. | 11-07-2013 |
20130312005 | Apparatus and Method to Manage Device Performance in a Storage System - A method to optimize workload across a plurality of storage devices of a storage system, where the method monitors a workload of a first storage device belonging to a first tier of the storage system, calculates a performance of the workload of the first storage device belonging to a first tier of the storage system, interpolates a performance threshold for the first storage device using the workload pattern of the first storage device and a profile of the first storage device, the profile identifying a benchmark performance of the first storage device, and optimizes a usage of the first storage device within the storage system to improve a performance of the first storage device. | 11-21-2013 |
20130312006 | SYSTEM AND METHOD OF MANAGING JOB PREEMPTION - Disclosed are methods for estimating a time associated with shifting a first workload from a first compute environment to a second compute environment, separate from the first compute environment, estimating a likelihood of success associated with a likelihood that the first workload could successfully be shifted to the second compute environment, dividing the likelihood of success by the time to yield a risk-adjusted shift time and, when the risk-adjusted shift time is longer than a maximum acceptable wait time, proceeding with a first operation associated with how the first workload is to be preempted by the second workload. | 11-21-2013 |
20130318539 | CHARACTERIZATION OF WITHIN-DIE VARIATIONS OF MANY-CORE PROCESSORS - A system and method for operating a many-core processor including resilient cores may include determining a frequency variation map for the many-core processor and scheduling execution of a plurality of tasks on respective resilient cores of the many-core processor in accordance with the frequency variation map. | 11-28-2013 |
20130332938 | Non-Periodic Check-Pointing for Fine Granular Retry of Work in a Distributed Computing Environment - Distributing work in a distributed computing environment that includes multiple nodes. An individual node can receive a work assignment, which can then be divided into a plurality of work units. A first work unit can then be distributed to a first worker node. At least a portion of the first work unit can be re-distributed to a second worker node in response to determining that the first worker node has experienced a failure condition with respect to the first work unit. | 12-12-2013 |
20130339977 | MANAGING TASK LOAD IN A MULTIPROCESSING ENVIRONMENT - Managing load in a set of multiple processing modules interconnected by an interconnection network includes: communicating with each of the processing modules in the set, from a load management unit, over respective communication channels that are independent from the interconnection network. In a memory of the load management unit, information is stored indicative of quantities of tasks assigned for execution by respective ones of the processing modules in the set. The load management unit communicates with processing modules in the set over the communication channels to request reassignment of tasks for execution by different processing modules based at least in part on the stored information. | 12-19-2013 |
20130339978 | LOAD BALANCING FOR HETEROGENEOUS SYSTEMS - A method and an apparatus for performing load balancing in a heterogeneous computing system including a plurality of processing elements are presented. A program places tasks into a queue. A task from the queue is distributed to one of the plurality of processing elements, wherein the distributing includes the one processing element sending a task request to the queue and receiving a task to be done from the queue. The task is performed by the one processing element. A result of the task is sent from the one processing element to the program. The load balancing is performed by distributing tasks from the queue to processing elements that complete the tasks faster. | 12-19-2013 |
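The pull-based distribution in the abstract above, where each processing element requests a task from the queue and faster elements therefore complete more tasks, can be sketched with worker threads pulling from a shared queue. The per-worker delay used to simulate heterogeneous speed is an illustrative assumption:

```python
import queue
import threading
import time

def run_pull_based(tasks, worker_delays):
    """Each worker repeatedly *requests* the next task from a shared queue,
    so faster workers (smaller per-task delay) naturally complete more
    tasks -- the self-balancing behavior described in the abstract."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = {name: [] for name in worker_delays}
    lock = threading.Lock()

    def worker(name, delay):
        while True:
            try:
                task = q.get_nowait()   # pull model: worker asks for work
            except queue.Empty:
                return                  # no tasks left; worker stops
            time.sleep(delay)           # simulate heterogeneous speed
            with lock:
                results[name].append(task)

    threads = [threading.Thread(target=worker, args=(n, d))
               for n, d in worker_delays.items()]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

With two workers of very different speeds, the faster one ends up with the larger share of completed tasks without any central assignment decision.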
20130339979 | APPARATUS, SYSTEM AND METHOD FOR HETEROGENEOUS DATA SHARING - An apparatus, system, and method are disclosed for offloading data processing. An offload task hosted on a first data processing system provides internal functionality substantially equivalent to that of a second task. | 12-19-2013 |
20130346998 | Methods And Apparatus For Load Balancing - Systems and techniques for computational load balancing. A problem space is partitioned into subspaces and the subspaces are assigned to processing nodes. The load of nodes associated with outer subspaces is compared with the load of nodes associated with inner subspaces, and partition boundary adjustments are made based on the relative loads of outer versus inner subspaces. | 12-26-2013 |
20130346999 | Methods And Apparatus For Load Balancing - Systems and techniques for computational load balancing. A problem space is partitioned into subspaces and the subspaces are assigned to processing nodes. The load of nodes associated with outer subspaces is compared with the load of nodes associated with inner subspaces, and partition boundary adjustments are made based on the relative loads of outer versus inner subspaces. | 12-26-2013 |
20130347000 | COMPUTER, VIRTUALIZATION MECHANISM, AND SCHEDULING METHOD - Computer including a plurality of physical CPUs, a plurality of virtual computers which execute predetermined processing and to which one of the plurality of physical CPUs is assigned, and a virtual computer control component able to cause the plurality of physical CPUs to execute overhead processing required by the plurality of virtual computers. Virtual computer control component configured to: (A) upon causing the physical CPU, in which processing of the virtual computer is in a running state, to execute overhead processing, measure a run time used by the physical CPU to manage a cumulative run time, for each of the physical CPUs; and (B) upon causing the overhead processing to be executed subsequent to the (A), select a physical CPU in which the cumulative run time is smallest as the physical CPU to execute the overhead processing. | 12-26-2013 |
20140007132 | RESILIENT DATA PROCESSING PIPELINE ARCHITECTURE | 01-02-2014 |
20140019989 | MULTI-CORE PROCESSOR SYSTEM AND SCHEDULING METHOD - A multi-core processor system includes plural CPUs; memory that is shared among the CPUs; and a monitoring unit that instructs a change of assignment of threads to the CPUs based on a first process count stored in the memory and representing a count of processes under execution by the CPUs and a second process count representing a count of processes assigned to the CPUs, respectively. | 01-16-2014 |
20140026144 | Systems And Methods For Load Balancing Of Time-Based Tasks In A Distributed Computing System - A load manager comprises a configuration manager and a load monitor. The load manager is configured to monitor and manage aspects of a distributed computer system comprising a plurality of servers. Each server is configured to perform tasks according to a respective time-based scheduler configuration. In some embodiments, the load monitor monitors one or more load metrics of each of the one or more servers. In response to one or more load metrics exceeding a threshold, the configuration manager determines the current time-based task scheduler configuration of the server exceeding the threshold. The load manager is further configured to modify the time-based task scheduler configuration to adjust a further task load on the server based on at least the one or more load metrics. | 01-23-2014 |
20140026145 | PARALLEL PROCESSING IN HUMAN-MACHINE INTERFACE APPLICATIONS - A human-machine interface (HMI) application ( | 01-23-2014 |
20140026146 | MIGRATING THREADS BETWEEN ASYMMETRIC CORES IN A MULTIPLE CORE PROCESSOR - Some implementations provide techniques and arrangements to migrate threads from a first core of a processor to a second core of the processor. For example, some implementations may identify one or more threads scheduled for execution at a processor. The processor may include a plurality of cores, including a first core having a first characteristic and a second core having a second characteristic that is different than the first characteristic. Execution of the one or more threads by the first core may be initiated. A determination may be made whether to apply a migration policy. In response to determining to apply the migration policy, migration of the one or more threads from the first core to the second core may be initiated. | 01-23-2014 |
20140026147 | VARYING A CHARACTERISTIC OF A JOB PROFILE RELATING TO MAP AND REDUCE TASKS ACCORDING TO A DATA SIZE - A job profile is received that includes characteristics of a job to be executed, where the characteristics of the job profile relate to map tasks and reduce tasks of the job. The map tasks produce intermediate results based on input data, and the reduce tasks produce an output based on the intermediate results. The characteristics of the job profile include at least one particular characteristic that varies according to a size of data to be processed. The at least one particular characteristic of the job profile is set based on the size of the data to be processed. | 01-23-2014 |
20140033222 | CONTAMINATION BASED WORKLOAD MANAGEMENT - Computer-implemented methods for workload management and related computer program products are disclosed. One method comprises receiving corrosion rate signals from a first sensor associated with a first compute node, determining a first corrosion level for the first compute node as a function of the corrosion rate signal received from the first sensor, and automatically removing a first workload from the first compute node in response to determining that the first compute node has a first corrosion level that is greater than a setpoint level of corrosion. | 01-30-2014 |
20140033223 | LOAD BALANCING USING PROGRESSIVE SAMPLING BASED ON LOAD BALANCING QUALITY TARGETS - A method, system, and computer program product for load balancing in a parallel map/reduce paradigm. The method commences by sampling a first set of input records, and forming a prospective load balancing assignment by assigning the first set of input records to the plurality of worker tasks based on a workload estimate for each of the worker tasks. To measure the prospective load balancing assignment, the method compares the workload variance over the plurality of worker tasks to a workload variance target, and also calculates a confidence level based on the sampled first set of input records. If the measured quality of the prospective load balancing assignment is not yet achieved, then the method samples additional input records, for example when the calculated workload variance is greater than the maximum workload variance target or when the calculated confidence level is lower than a confidence level threshold. | 01-30-2014 |
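The sample-assign-measure loop described above can be sketched as follows. This is a simplified illustration: the greedy least-loaded assignment, the normalized-variance quality measure, and the batch sizes are assumptions, and the confidence-level check from the abstract is omitted for brevity.

```python
import random
import statistics

def progressive_assign(records, num_workers, cost_fn,
                       max_variance, batch=100, max_rounds=20, seed=0):
    """Sample input records in growing batches, greedily assign each
    sampled record to the currently least-loaded worker by estimated
    cost, and stop once the normalized workload variance over the
    workers meets the target (or the round budget runs out)."""
    rng = random.Random(seed)
    sample = []
    for _ in range(max_rounds):
        sample += rng.sample(records, min(batch, len(records)))
        loads = [0.0] * num_workers
        for r in sample:                     # greedy: least-loaded worker
            loads[loads.index(min(loads))] += cost_fn(r)
        mean = sum(loads) / num_workers
        variance = statistics.pvariance(l / mean for l in loads)
        if variance <= max_variance:         # quality target achieved
            break
    return loads, variance
```

If the variance target is not met after a round, the loop simply draws another batch of samples and re-estimates, mirroring the progressive-sampling idea.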
20140040914 | Load Determination Method - Methods and devices for determining load on a computing device ( | 02-06-2014 |
20140068627 | DYNAMIC RESOURCE SCHEDULING - Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically corresponding to a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include but are not limited to those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance. | 03-06-2014 |
20140068628 | STORAGE MEDIUM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE - A non-transitory computer-readable recording medium storing a program causing a computer to execute a process, the process includes starting N (two or more) processes from a first program; and controlling variation in the number of processes for which a second program is deployed, depending on a load on the processes for which the second program is deployed, within a range of the N processes by deploying the second program to a storage area corresponding to one process of the N processes and undeploying the second program deployed to the storage area. | 03-06-2014 |
20140082629 | PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group. Upon a determined imbalance between dispatch queue depths greater than a predetermined threshold, the set of like tasks is reassigned to an additional group. | 03-20-2014 |
20140082630 | PROVIDING AN ASYMMETRIC MULTICORE PROCESSOR SYSTEM TRANSPARENTLY TO AN OPERATING SYSTEM - In one embodiment, the present invention includes a multicore processor with first and second groups of cores. The second group can be of a different instruction set architecture (ISA) than the first group, or of the same ISA but with a different power and performance support level, and is transparent to an operating system (OS). The processor further includes a migration unit that handles migration requests for a number of different scenarios and causes a context switch to dynamically migrate a process from the second core to a first core of the first group. This dynamic hardware-based context switch can be transparent to the OS. Other embodiments are described and claimed. | 03-20-2014 |
20140082631 | PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group. Upon a determined imbalance between dispatch queue depths greater than a predetermined threshold, the set of like tasks is reassigned to an additional group. | 03-20-2014 |
20140089936 | MULTI-CORE DEVICE AND MULTI-THREAD SCHEDULING METHOD THEREOF - A multi-core device and a multi-thread scheduling method thereof are disclosed. The multi-thread scheduling method includes the following steps: recording thread performance-associated parameters for a thread; and performing a thread load balancing between multiple central processing units of a multi-core processor of the multi-core device. The thread load balancing is performed according to a thread critical performance condition of the thread and the thread critical performance condition is determined based on the thread performance-associated parameters. | 03-27-2014 |
20140089937 | PROCESSOR SYSTEM OPTIMIZATION - In order to enable the optimization of a processor system without relying upon know-how or manual labor, an apparatus includes: an information obtainment unit for reading, from memory, trace information of the processor system and performance information corresponding to the trace information; an information analysis unit for analyzing the trace information and the performance information so as to obtain a performance factor such as an idle time, a processing completion time of a task, or the number of interprocessor communications as a result of the analysis; and an optimization method output unit for displaying and outputting a method of optimizing the system in response to a result of the analysis. | 03-27-2014 |
20140101668 | Adaptive Auto-Pipelining for Stream Processing Applications - An embodiment of the invention provides a method for adaptive auto-pipelining of a stream processing application, wherein the stream processing application includes one or more threads. Runtime of the stream processing application is initiated with a stream processing application manager. The stream processing application is monitored with a monitoring module during the runtime, wherein the monitoring of the stream processing application includes identifying threads in the stream processing application that execute operators in a data flow graph, and determining an amount of work that each of the threads are performing on operators of the logical data flow graph. A processor identifies one or more operators in the data flow graph to add one or more additional threads based on the monitoring of the stream processing application during the runtime. | 04-10-2014 |
20140101669 | APPARATUS AND METHOD FOR PROCESSING TASK - Provided is a task processing apparatus and method that may select a task corresponding to predetermined task selection information when a task execution is completed and an idle server thus becomes available among at least one server, may separate the selected task into a first task and a second task, and may control the first task and the second task to be allocated to the selected task's existing allocation server and the idle server, respectively. | 04-10-2014 |
20140101670 | COMPUTING SYSTEM INCLUDING MULTI-CORE PROCESSOR AND LOAD BALANCING METHOD THEREOF - A load balancing method of a computing system includes calculating a workload of at least one core of a plurality of cores of a multi-core processor that is entering an idle state, and selecting a core from among the plurality of cores to operate as a common core according to the calculated workload, wherein the common core operates while in the idle state. | 04-10-2014 |
20140101671 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - On a multiprocessor, a task may move between processors, and the context of the processor and the context of a coprocessor are transferred together at the time of a task switch, reducing execution efficiency. Movement between processors of a task that uses the coprocessor is restricted, to reduce the number of coprocessor context transfers. | 04-10-2014 |
20140101672 | Load Balanced Profiling - A method, load regulator, and profiling tool for monitoring and analyzing system performance and spare CPU capacity in a system such as a telecommunication system. The load regulator and profiling tool utilize a communication interface to balance the profiling performance of the profiling tool with the available spare CPU capacity in the system. The load regulator regularly sends information to the profiling tool of any spare CPU capacity during profiling, and the profiling tool adjusts the profiling performance gradually in response to the received information. | 04-10-2014 |
20140109105 | INTRUSION DETECTION APPARATUS AND METHOD USING LOAD BALANCER RESPONSIVE TO TRAFFIC CONDITIONS BETWEEN CENTRAL PROCESSING UNIT AND GRAPHICS PROCESSING UNIT - An intrusion detection apparatus and method using a load balancer responsive to traffic conditions between a central processing unit (CPU) and a graphics processing unit (GPU) are provided. The intrusion detection apparatus includes a packet acquisition unit, a character string check task allocation unit, a CPU character string check unit, and a GPU character string check unit. The packet acquisition unit receives packets, and stores the packets in a single task queue. The character string check task allocation unit determines the number of packets in the packet acquisition unit, and allocates character string check tasks to the CPU or the GPU. The CPU character string check unit compares the character strings of the packets with a character string defined in at least one detection rule inside the CPU. The GPU character string check unit compares the character strings of the packets with the character string inside the GPU. | 04-17-2014 |
20140115602 | INTEGRATION OF A CALCULATION ENGINE WITH A SOFTWARE COMPONENT - Various embodiments of systems and methods for integrating a calculation engine of an in-memory database with a software component are described herein. A control unit schedules and triggers jobs to be processed by an operational unit. The control unit and the operational unit are at an application level. The operational unit divides a job workload corresponding to a job trigger into work packages based on one or more parameters. The work packages are sent to a calculation engine in an in-memory database. At the in-memory database, operations are performed on the work packages and an output is generated. A log in the control unit is updated based on the output. | 04-24-2014 |
20140115603 | METHOD, APPARATUS, AND SYSTEM FOR SCHEDULING PROCESSOR CORE IN MULTIPROCESSOR CORE SYSTEM - The present invention discloses a method, an apparatus, and a system for scheduling a processor core in a multiprocessor core system, which relate to the field of multiprocessor core systems, and can meet the demand for real-time network I/O processing, thereby improving the efficiency of the multiprocessor core system. The method for scheduling a processor core in a multiprocessor core system includes: obtaining, in the running process of the multiprocessor core system, a first control parameter, a second control parameter, a third control parameter, and a fourth control parameter; transferring a packet of a data flow that enters the multiprocessor core system to an idle processor core for processing based on the first control parameter, the second control parameter, and the third control parameter; and switching over the processor core in the multiprocessor core system between an interruption mode and a polling mode based on the fourth control parameter. | 04-24-2014 |
20140130058 | DISTRIBUTION OF TASKS AMONG ASYMMETRIC PROCESSING ELEMENTS - Techniques to control power and processing among a plurality of asymmetric cores. In one embodiment, one or more asymmetric cores are power managed to migrate processes or threads among a plurality of cores according to the performance and power needs of the system. | 05-08-2014 |
20140137134 | LOAD-BALANCED SPARSE ARRAY PROCESSING - A sparse array is partitioned into first partitions and a second array is partitioned into second partitions based on an invariant relationship between the sparse array and the second array. The sparse array and the second array are associated with a computation involving the sparse array and the second array. The first partitions and the corresponding second partitions are distributed to workers. A different first partition and corresponding second partition is distributed to each of the workers. Third partitions of the sparse array and corresponding fourth partitions of the second array are determined based on the invariant relationship and measurements of load are received from each of the workers. At least one of the first partitions and the corresponding second partition is different from one of the third partitions and the corresponding fourth partition. The at least one of the first partitions and the corresponding second partition that is different is redistributed among the workers. A different third partition and corresponding fourth partition is executed by each of the workers. | 05-15-2014 |
20140137135 | MULTI-CORE-BASED LOAD BALANCING DATA PROCESSING METHODS - Systems and methods for processing data are provided. A system can include a plurality of cores and a core manager. A load balancing unit can check and compare loads of the cores. An address mapping unit can perform a mapping process based on the loads of the cores, and the core manager can route data appropriately, thereby improving the overall performance of the system. | 05-15-2014 |
20140143789 | ADJUSTMENT OF THREADS FOR EXECUTION BASED ON OVER-UTILIZATION OF A DOMAIN IN A MULTI-PROCESSOR SYSTEM - Embodiments provide various techniques for dynamic adjustment of a number of threads for execution in any domain based on domain utilizations. In a multiprocessor system, the utilization for each domain is monitored. If a utilization of any of these domains changes, then the number of threads for each of the domains determined for execution may also be adjusted to adapt to the change. | 05-22-2014 |
20140143790 | DATA PROCESSING SYSTEM AND SCHEDULING METHOD - A data processing system includes an interrupt controller that counts, as an interrupt processing execution count, executions of interrupt processing by threads executed by data processing devices; and a processor that is configured to select one scheduling method from among a plurality of scheduling methods, based on the interrupt processing execution count. | 05-22-2014 |
20140157284 | THREAD PROCESSING ON AN ASYMMETRIC MULTI-CORE PROCESSOR - An ASMP computing device comprising one or more computing components and one or more memory devices. The one or more computing components comprise a plurality of processing units and the one or more memory devices are communicatively coupled to the one or more computing components. Stored on the one or more memory devices are first processing frequency data and second processing frequency data. The first processing frequency data comprises a synchronization frequency, the synchronization frequency comprising a frequency for application to all online processing units when a measured highest load of any online processing unit is greater than a first ramp-up processor load threshold and an operating frequency of the online processing unit is lower than the synchronization frequency. The second processing frequency data comprises a ramp-up frequency, the ramp-up frequency comprising a frequency for application to any online processing unit when a measured processing load of that online processing unit is greater than a second ramp-up processing load threshold. | 06-05-2014 |
20140157285 | DYNAMIC RECONFIGURABLE HETEROGENEOUS PROCESSOR ARCHITECTURE WITH LOAD BALANCING AND DYNAMIC ALLOCATION METHOD THEREOF - A dynamic reconfigurable heterogeneous processor architecture with load balancing and a dynamic allocation method thereof is disclosed. The present invention uses a work control logic unit to detect load imbalance between different types of processors, and employs a number of dynamic reconfigurable heterogeneous processors to offload the more heavily loaded processors. Hardware utilization of such a design can be enhanced, and variation in computation needs among different computation phases can be better handled. To design the dynamic reconfigurable heterogeneous processors, a method of choosing the basic building blocks and placing the routing components is included. With the present invention, the dynamic reconfigurable heterogeneous processors and the load balancing and dynamic allocation method together maximize performance at minimal hardware cost. | 06-05-2014 |
20140165075 | EXECUTING A COLLECTIVE OPERATION ALGORITHM IN A PARALLEL COMPUTER - Executing a collective operation algorithm in a parallel computer includes a compute node of an operational group determining a required number of participants for execution of a collective operation algorithm and determining a number of contributing nodes having data to participate in the algorithm. Embodiments also include the compute node calculating a number of ghost nodes to participate in the algorithm. According to embodiments of the present invention, the number of ghost nodes is the required number of participants minus the number of contributing nodes having data to participate. Embodiments also include the compute node selecting from a plurality of ghost nodes, the calculated number of ghost nodes for participation in the execution of the algorithm and executing the algorithm with both the selected ghost nodes and the contributing nodes. | 06-12-2014 |
20140165076 | EXECUTING A COLLECTIVE OPERATION ALGORITHM IN A PARALLEL COMPUTER - Executing a collective operation algorithm in a parallel computer includes a compute node of an operational group determining a required number of participants for execution of a collective operation algorithm and determining a number of contributing nodes having data to participate in the algorithm. Embodiments also include the compute node calculating a number of ghost nodes to participate in the algorithm. According to embodiments of the present invention, the number of ghost nodes is the required number of participants minus the number of contributing nodes having data to participate. Embodiments also include the compute node selecting from a plurality of ghost nodes, the calculated number of ghost nodes for participation in the execution of the algorithm and executing the algorithm with both the selected ghost nodes and the contributing nodes. | 06-12-2014 |
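The ghost-node calculation described in the two entries above is a simple difference, which can be sketched as follows (function name is hypothetical, chosen for illustration):

```python
def ghost_nodes_needed(required_participants, contributing_nodes):
    """Ghost nodes fill the gap between the number of participants the
    collective operation algorithm requires and the number of compute
    nodes that actually hold data to contribute."""
    return max(0, required_participants - contributing_nodes)
```

For example, an algorithm requiring 8 participants with only 5 contributing nodes would select 3 ghost nodes to join the execution.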
20140165077 | Reducing The Scan Cycle Time Of Control Applications Through Multi-Core Execution Of User Programs - A method for pipeline parallelizing a control program for multi-core execution includes using ( | 06-12-2014 |
20140173623 | METHOD FOR CONTROLLING TASK MIGRATION OF TASK IN HETEROGENEOUS MULTI-CORE SYSTEM BASED ON DYNAMIC MIGRATION THRESHOLD AND RELATED COMPUTER READABLE MEDIUM - A method for controlling a task migration of a task in a heterogeneous multi-core system having at least a first cluster and a second cluster is provided. The method may include at least the following steps: dynamically adjusting a migration threshold; comparing a load of the task running on one core of the first cluster with the migration threshold, and accordingly generating a comparison result; and selectively controlling the task to migrate to the second cluster according to at least the comparison result, wherein each core in the first cluster has first processor architecture, and each core in the second cluster has second processor architecture different from the first processor architecture. | 06-19-2014 |
20140173624 | LOAD BALANCING SCHEME - Technologies are generally described for a load balancing scheme in a cloud computing environment hosting a mobile device. In some examples, a load balancer may include multiple request processing units, each of the multiple request processing units comprising a network socket that is connected to at least one application server and at least one cache server and a programmable processor configured to process a cache request from one of the at least one application server, a performance checking unit configured to measure processing loads of the programmable processors, and a processor managing unit configured to adjust the processing loads by writing or deleting a load balancing program in at least one of the programmable processors. | 06-19-2014 |
20140181833 | PROCESSOR PROVISIONING BY A MIDDLEWARE SYSTEM FOR A PLURALITY OF LOGICAL PROCESSOR PARTITIONS - A middleware processor provisioning process provisions a plurality of processors in a multi-processor environment. The processing capability of the multiprocessor environment is subdivided and multiple instances of service applications start protected processes to service a plurality of user processing requests, where the number of protected processes may exceed the number of processors. A single processing queue is created for each processor. User processing requests are apportioned and dispatched across the plurality of processing queues and are serviced by protected processes from corresponding service applications, thereby efficiently using available processing resources while servicing the user processing requests in a desired manner. | 06-26-2014 |
20140181834 | LOAD BALANCING METHOD FOR MULTICORE MOBILE TERMINAL - Methods and apparatus are provided for load-balancing in a portable terminal having a plurality of Central Processing Units (CPUs). A utilization is calculated for each of the plurality of CPUs, when a state of a task is changed. An average of the utilizations of the plurality of CPUs is calculated. It is determined whether the average exceeds a predetermined threshold. Load-balancing is performed when the average exceeds the predetermined threshold. | 06-26-2014 |
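The decision procedure in the entry above — per-CPU utilization, average, threshold comparison — can be sketched as a minimal example (the helper names and the 0.8 threshold are illustrative, not from the patent):

```python
def average_utilization(cpu_utilizations):
    """Average the per-CPU utilizations (each a fraction in [0.0, 1.0])."""
    return sum(cpu_utilizations) / len(cpu_utilizations)

def should_balance(cpu_utilizations, threshold=0.8):
    """Trigger load-balancing only when the average utilization of all
    CPUs exceeds the predetermined threshold."""
    return average_utilization(cpu_utilizations) > threshold
```

For instance, utilizations of [0.9, 0.95, 0.7, 0.85] average to 0.85, exceeding a 0.8 threshold, so balancing would be performed.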
20140189709 | METHOD OF DISTRIBUTING PROCESSOR LOADING BETWEEN REAL-TIME PROCESSOR THREADS - A method of distributing processor loading in a real-time operating system between a high frequency processing task and a lower frequency processing task, the method including: making a processing request to the high frequency processing task from the lower frequency processing task, the processing request including a plurality of discrete processing commands; queuing the plurality of discrete processing commands; and executing a subset of the queued processing commands with the execution of each of a plurality of high frequency processing tasks such that the execution of the plurality of discrete processing commands is distributed across the plurality of high frequency processing tasks. | 07-03-2014 |
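The queue-and-subset scheme above — a lower-frequency task enqueues discrete commands, and each high-frequency task execution drains only a bounded subset — might look like this minimal sketch (class name and `per_tick` bound are hypothetical):

```python
from collections import deque

class CommandDistributor:
    """Queue discrete commands from a lower-frequency task; each
    high-frequency tick executes at most `per_tick` of them, so the
    load is spread across many high-frequency executions."""
    def __init__(self, per_tick=2):
        self.queue = deque()
        self.per_tick = per_tick

    def request(self, commands):
        # The lower-frequency task enqueues its discrete commands.
        self.queue.extend(commands)

    def tick(self):
        # One high-frequency execution: run a bounded subset of the queue.
        results = []
        for _ in range(min(self.per_tick, len(self.queue))):
            results.append(self.queue.popleft()())
        return results
```

With five queued commands and `per_tick=2`, three successive high-frequency executions process 2, 2, and 1 commands respectively.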
20140189710 | THERMALLY DRIVEN WORKLOAD SCHEDULING IN A HETEROGENEOUS MULTI-PROCESSOR SYSTEM ON A CHIP - Various embodiments of methods and systems for thermally aware scheduling of workloads in a portable computing device that contains a heterogeneous, multi-processor system on a chip (“SoC”) are disclosed. Because individual processing components in a heterogeneous, multi-processor SoC may exhibit different processing efficiencies at a given temperature, and because more than one of the processing components may be capable of processing a given block of code, thermally aware workload scheduling techniques that compare performance curves of the individual processing components at their measured operating temperatures can be leveraged to optimize quality of service (“QoS”) by allocating workloads in real time, or near real time, to the processing components best positioned to efficiently process the block of code. | 07-03-2014 |
20140196054 | ENSURING PERFORMANCE OF A COMPUTING SYSTEM - A system includes a plurality of computing systems, wherein each computing system comprises memory, a network interface and a processor. At least one computing system is configured to issue a command to run abbreviated measurements of performance for one or more computing nodes to determine whether a number of the computing nodes is adequate to perform a computing job. The at least one computing system is configured to assign the computing job to a set of the number of computing nodes if each of the set of the number of computing nodes is adequate to perform the computing job according to performance measurement results of the abbreviated measurements. For any of the one or more computing nodes that is inadequate to perform the computing job according to performance measurement results of the abbreviated measurements, the at least one computing system is configured to indicate those computing nodes as low performing. | 07-10-2014 |
20140196055 | HIGH PERFORMANCE LOG-BASED PROCESSING - Each of a plurality of Worker processes are allowed to perform any and all of the following tasks involving logged work items: (1) reading a subset of the work items from a log; (2) sequentially ordering work items for corresponding data objects; (3) applying a sequentially ordered set of work items to a corresponding data object; and (4) transmitting a subset of work items to a Worker process running on another database server in a cluster, if necessary. These tasks can be performed concurrently, at will, and as available, by the Worker processes. An improved checkpointing technique eliminates the need for the Worker processes to get to a synchronization point and stop. Instead, a Coordinator process examines the current state of progress of the Worker processes and computes a past point in the sequence of work items at which all work items before that point have been completely processed, and records this point as the checkpoint. | 07-10-2014 |
20140196056 | VIRTUAL SERVER AGENT LOAD BALANCING - Virtual machine (VM) proliferation may be reduced through the use of Virtual Server Agents (VSAs) assigned to a group of VM hosts that may determine the availability of a VM to perform a task. Tasks may be assigned to existing VMs instead of creating a new VM to perform the task. Furthermore, a VSA coordinator may determine a grouping of VMs or VM hosts based on one or more factors associated with the VMs or the VM hosts, such as VM type or geographical location of the VM hosts. The VSA coordinator may also assign one or more VSAs to facilitate managing the group of VM hosts. In some embodiments, the VSA coordinators may facilitate load balancing of VSAs during operation, such as during a backup operation, a restore operation, or any other operation between a primary storage system and a secondary storage system. | 07-10-2014 |
20140201758 | COMPUTING DEVICE, METHOD, AND PROGRAM FOR DISTRIBUTING COMPUTATIONAL LOAD - Embodiments of the present invention provide a computing device configured to operate as a particular computing device among a plurality of interconnected computing devices, comprising: a load information obtaining unit configured to obtain, from the particular computing device and from the or each of a group of one or more other computing devices from among the plurality of interconnected computing devices, load information representing the current computational load of the computing device from which the information is obtained; and a load redistribution determination unit configured, in dependence upon the obtained load information, to determine whether or not to redistribute computational load among the particular computing device and the group, and if it is determined to redistribute computational load, to determine the redistribution and to instruct the determined redistribution. | 07-17-2014 |
20140208331 | METHODS OF PROCESSING CORE SELECTION FOR APPLICATIONS ON MANYCORE PROCESSORS - A runtime method is disclosed that dynamically sets up core containers and thread-to-core affinity for processes running on manycore coprocessors. The method is completely transparent to user applications and incurs low runtime overhead. The method is implemented within a user-space middleware that also performs scheduling and resource management for both offload and native applications using the manycore coprocessors. | 07-24-2014 |
20140215486 | Cluster Maintenance System and Operation Thereof - A method of operating a cluster of machines that includes receiving a request for a disruption, determining a subset of machines of the cluster affected by the requested disruption, and determining a set of jobs having corresponding tasks on the affected machines. The method also includes computing, on a data processor, a drain time for a drain that drains the tasks of the jobs from the affected machines, and scheduling on a drain calendar stored in non-transitory memory a drain interval for the drain, the drain interval having a start time and an end time. | 07-31-2014 |
20140237481 | LOAD BALANCER FOR PARALLEL PROCESSORS - Invented systems and methods provide a scalable architecture and hardware logic algorithms for intelligent, realtime load balancing of incoming processing work load among instances of a number of application programs hosted on parallel arrays of manycore processors, which can be dynamically shared among the hosted applications according to incoming processing data load variations for each of the application instances as well as the processing capacity entitlements of the individual applications. | 08-21-2014 |
20140237482 | COMPUTATIONAL RESOURCE MANAGEMENT - Computational tasks are completed using third-party (user) devices. The tasks are delivered to a computational resource manager (broker) by a task originator. The originator pays a fee to the broker to have the task completed. The broker has a relationship with a content publisher (for web sites or apps) which has users. The publisher inserts inline code in its web page or app supplied by the broker. The code, when executed by the user's browser, enables the broker to communicate with the user. The broker identifies users who have devices that are suitable for completing the task. The task is assigned and executes on that device. When completed, the task output is delivered to the broker who makes it accessible to the task originator. In these processes, user, task originator, and publisher identities are protected. The broker manages the transactions and message passing among the parties. | 08-21-2014 |
20140250440 | SYSTEM AND METHOD FOR MANAGING STORAGE INPUT/OUTPUT FOR A COMPUTE ENVIRONMENT - Disclosed herein are systems, methods, and computer-readable storage media for managing storage data input/output in a compute environment. The system receives data associated with workloads or jobs that are to be processed in a compute environment. The system receives more data associated with a job that is to be scheduled to consume compute resources in the compute environment. Based on all the received data, the system transmits a signal to a storage input/output manager. The signal instructs the storage input/output manager regarding how to manage a file transfer between the compute environment and a storage environment. The file transfer is associated with processing the job in the compute environment. | 09-04-2014 |
20140259023 | ADAPTIVE VIBRATION MITIGATION - In accordance with one implementation, a system for adaptive vibration mitigation includes a distributed workload scheduler configured to allocate individual workloads between a plurality of storage nodes in a distributed computing and storage environment. The distributed workload scheduler synthesizes and analyzes feedback data from the storage nodes in order to modify workload scheduling policies and/or the behavior of other system components in a way that mitigates the impact of vibrations on the system. | 09-11-2014 |
20140282594 | DISTRIBUTING PROCESSING OF ARRAY BLOCK TASKS - A technique includes distributing a plurality of tasks among a plurality of worker nodes to perform a processing operation on an array. Each task is associated with a set of at least one data block of the array, and an order of the tasks is defined by an array-based programming language. Distribution of the tasks includes, for at least one of the worker nodes, selectively reordering the order defined by the array-based programming language to regulate an amount of data transferred to the worker node. | 09-18-2014 |
20140282595 | Systems and Methods for Implementing Work Stealing Using a Configurable Separation of Stealable and Non-Stealable Work Items - A system may perform work stealing using a dynamically configurable separation between stealable and non-stealable work items. The work items may be held in a double-ended queue (deque), and the value of a variable (index) may indicate the position of the last stealable work item or the first non-stealable work item in the deque. A thread may steal a work item only from the portion of another thread's deque that holds stealable items. The owner of a deque may add work items to the deque and may modify the number or percentage of stealable work items, the number or percentage of non-stealable work items, and/or the ratio between stealable and non-stealable work items in the deque during execution. For example, the owner may convert stealable work items to non-stealable work items, or vice versa, in response to changing conditions and/or according to various work-stealing policies. | 09-18-2014 |
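The configurable stealable/non-stealable separation described above — a deque plus an index variable marking the stealable prefix, adjustable by the owner at runtime — can be sketched as follows (a simplified, lock-based illustration; real work-stealing deques typically use lock-free operations, and all names here are hypothetical):

```python
import threading
from collections import deque

class SplitDeque:
    """Work deque whose first `split` items are stealable by other
    threads; the owner pushes and pops at the non-stealable tail."""
    def __init__(self, split=0):
        self.items = deque()
        self.split = split            # index separating the stealable prefix
        self.lock = threading.Lock()

    def push(self, item):
        # Owner adds new work at the tail.
        with self.lock:
            self.items.append(item)

    def pop(self):
        # Owner takes work only from the non-stealable tail portion.
        with self.lock:
            if len(self.items) > self.split:
                return self.items.pop()
            return None

    def steal(self):
        # A thief may take work only from the stealable prefix.
        with self.lock:
            if self.split > 0 and self.items:
                self.split -= 1
                return self.items.popleft()
            return None

    def set_split(self, split):
        # Owner retunes the stealable/non-stealable ratio during execution.
        with self.lock:
            self.split = min(split, len(self.items))
```

Calling `set_split` converts non-stealable items to stealable ones (or vice versa) on the fly, mirroring the dynamically configurable separation the entry describes.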
20140282596 | ACHIEVING CONTINUOUS AVAILABILITY FOR PLANNED WORKLOAD AND SITE SWITCHES WITH NO DATA LOSS - Embodiments of the disclosure are directed to methods, systems and computer program products for performing a planned workload switch. A method includes receiving a request to switch a site of an active workload and stopping one or more long running processes from submitting a new request to the active workload. The method also includes preventing a new network connection from accessing the active workload and processing one or more transactions in a queue of the active workload for a time period. Based on a determination that the queue of the active workload is not empty after the time period, the method includes aborting all remaining transactions in the queue of the active workload. The method further includes replicating all remaining committed units of work to a standby workload associated with the active workload. | 09-18-2014 |
20140282597 | Bottleneck Detector for Executing Applications - A bottleneck detector may analyze individual workloads processed by an application by logging times when the workload may be processed at different checkpoints in the application. For each checkpoint, a curve fitting algorithm may be applied, and the fitted curves may be compared between different checkpoints to identify bottlenecks or other poorly performing sections of the application. A real time implementation of a detection system may compare newly captured data points against historical curves to detect a shift in the curve, which may indicate a bottleneck. In some cases, the fitted curves from neighboring checkpoints may be compared to identify sections of the application that may be a bottleneck. An automated system may apply one set of checkpoints in an application, identify an area for further investigation, and apply a second set of checkpoints in the identified area. Such a system may recursively search for bottlenecks in an executing application. | 09-18-2014 |
20140282598 | METHOD AND DEVICE FOR PROCESSING A WINDOW TASK - A method and a device for processing a window task are provided. The method includes: creating a thread class including a first member variable for representing whether a task processed currently has been cancelled and a first member function for initiating a backstage thread; creating a backstage thread object based on the thread class when a task that takes time needs to be processed, and initializing the first member variable in the backstage thread object as FALSE, invoking the first member function in the backstage thread object to initiate the backstage thread; in process of the backstage thread processing the task that takes time, if a close instruction for a current window is received, setting the first member variable in the backstage thread object as TRUE to release the memory space occupied by the backstage thread object and closing the current window. | 09-18-2014 |
20140289737 | UPDATING PROGRESSION OF PERFORMING COMPUTER SYSTEM MAINTENANCE - A computer-implemented method, computer program product, and computer system for updating progression of performing computer system management. A computer system receives a log-on of a change implementer onto a managed computer system and searches a change request on a managing computer system. In response to that the change request is found, the computer system receives from the change implementer a command with a current date and time and matches the command to one or more tasks within the change request. In response to determining that the command matches the one or more tasks, the computer system updates start dates and times of the one or more tasks. And, in response to that the one or more tasks are completed, the computer system updates stop dates and times of the one or more tasks. | 09-25-2014 |
20140304712 | METHOD FOR OPERATING TASK AND ELECTRONIC DEVICE THEREOF - A method and a device for operating a task in an electronic device are provided. The method for operating a task in an electronic device includes generating at least one task on a protocol layer basis based on a work to process, executing at least one task generated on a layer basis through at least one Central Processing Unit (CPU), determining whether a workload to process is changed, and changing, if the workload to process is changed, a workload of the executing of the at least one task. | 10-09-2014 |
20140310722 | Dynamic Load Balancing in Circuit Simulation - Methods and systems are disclosed related to dynamic load balancing in circuit simulation. In one embodiment, a computer implemented method of performing dynamic load balancing in simulating a circuit includes identifying a plurality of simulation tasks to be performed, determining estimated processing durations corresponding to performance of the plurality of simulation tasks, distributing the plurality of simulation tasks to a plurality of processors according to the estimated processing duration of each simulation task, and performing the plurality of simulation tasks at the plurality of processors in parallel. | 10-16-2014 |
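The distribution step described above — handing simulation tasks to processors according to their estimated durations — can be sketched with the classic longest-processing-time greedy heuristic. The patent does not specify this particular heuristic; the function name and data shapes are assumptions for illustration.

```python
# Sketch: greedy LPT assignment — longest estimated task first, always to
# the currently least-loaded processor (tracked with a min-heap).
import heapq

def distribute(tasks, n_procs):
    """tasks: {task_id: estimated_duration}; returns {processor: [task_ids]}."""
    heap = [(0.0, p) for p in range(n_procs)]  # (accumulated load, processor)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for tid, dur in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(tid)
        heapq.heappush(heap, (load + dur, p))
    return assignment

print(distribute({"t1": 8.0, "t2": 5.0, "t3": 4.0, "t4": 3.0}, 2))
```

The heap keeps the per-processor selection O(log n) per task, which matters when the simulation generates many small tasks.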
20140317635 | COMPUTER SYSTEM AND DIVIDED JOB PROCESSING METHOD AND PROGRAM - Each execution computer measures a load on each execution computer, generates a job request including information about the number of executable divided jobs based on the measured load, and sends the generated job request to a management computer which manages each execution computer. The management computer receives the job request and assigns as many divided jobs as the number of divided jobs designated by the job request to each execution computer. | 10-23-2014 |
20140325524 | MULTILEVEL LOAD BALANCING - Example embodiments relate to multilevel load balancing. In example embodiments, a system may maintain a system-level queue of jobs. The system may maintain a pool of active processing nodes. Each active processing node in the pool may pull jobs from the system-level queue at an arrival rate for the particular active processing node. Each active processing node may determine a node-level utilization that indicates the particular active processing node's capacity to process jobs at the arrival rate. Each active processing node may adjust the arrival rate based on the node-level utilization. The system may determine a system-level utilization based on the number of active processing nodes in the pool and average processing rates of the active processing nodes in the pool. Each average processing rate may indicate the time it takes the particular active processing node to process jobs once pulled from the system-level queue. | 10-30-2014 |
20140331236 | POLYMORPHIC HETEROGENEOUS MULTI-CORE ARCHITECTURE - Methods and architecture for dynamic polymorphic heterogeneous multi-core processor operation are provided. The method for dynamic heterogeneous polymorphic processing includes the steps of receiving a processing task comprising a plurality of serial threads. The method is performed in a processor including a plurality of processing cores, each of the plurality of processing cores being assigned to one of a plurality of core clusters and each of the plurality of core clusters capable of dynamically forming a coalition comprising two or more of its processing cores. The method further includes determining whether each of the plurality of serial threads requires more than one processing core, and sending a go-into-coalition-mode-now instruction to ones of the plurality of core clusters for handling ones of the plurality of serial threads that require more than one processing core. | 11-06-2014 |
20140344830 | EFFICIENT PROCESSOR LOAD BALANCING USING PREDICATION FLAGS - A system and methods embodying some aspects of the present embodiments for efficient load balancing using predication flags are provided. The load balancing system includes a first processing unit, a second processing unit, and a shared queue. The first processing unit is in communication with a first queue. The second processing unit is in communication with a second queue. The first and second queues are each configured to hold a packet. The shared queue is configured to maintain a work assignment, wherein the work assignment is to be processed by either the first or second processing unit. | 11-20-2014 |
20140366035 | VEHICLE ELECTRONIC CONTROL DEVICE AND DATA-RECEIVING METHOD - A vehicle electronic control device having a first microcomputer and a second microcomputer connected to an in-vehicle network. The first microcomputer includes a process load level detecting unit that detects a process load level of the first microcomputer, a table in which the process load level is associated with data identification information, and a reception data reducing unit that, in a case where the process load level becomes equal to or greater than a first threshold level, stops receiving one or more data which the first microcomputer has received before the process load level becomes greater than or equal to the first threshold value. The second microcomputer includes a process load level estimating unit that estimates the process load level of the first microcomputer, a substitute data receiving unit that receives data, which the first microcomputer stops receiving, from the in-vehicle network in a case where the process load level estimated by the process load level estimating unit becomes greater than or equal to a second threshold value, and a data transmitting unit that transmits the data received by the substitute data receiving unit to the first microcomputer at a communication timing of serial communication. | 12-11-2014 |
20140380333 | DETECTION APPARATUS, NOTIFICATION METHOD, AND COMPUTER PRODUCT - A coprocessor stores to local memory, a driver execution start time, for each execution start of drivers. If a CPU call process is executed during the execution of driver A, the coprocessor calculates the difference of the execution start time and the current time, for drivers B and C. Taking driver C as an example, the coprocessor adds to the difference calculated for the driver C, a processing time required for the CPU call process of driver A and a processing time required for a normal process of driver B. The coprocessor determines whether respective addition results for driver C comply with respective time constraints. If it is determined that an addition result for the driver C cannot comply with the time constraint, the coprocessor sends an execution request for driver C to another coprocessor. | 12-25-2014 |
20150020078 | THREAD SCHEDULING ACROSS HETEROGENEOUS PROCESSING ELEMENTS WITH RESOURCE MAPPING - A system, method, and program product for scheduling processes of a workload on a plurality of hardware threads configured in a plurality of processing elements of a multithreading parallel computing system for processing thereby. Process dimensions for each process are determined based on processing attributes associated with each process, and a place and route algorithm is utilized to map the processes to a processor space representative of the processing resources of the computing system based at least in part on the process dimensions to thereby distribute the processes of the workload. | 01-15-2015 |
20150026697 | SYSTEM OVERHEAD-BASED AUTOMATIC ADJUSTING OF NUMBER OF RUNNING PROCESSORS WITHIN A SYSTEM - Data processing system efficiency is improved by automatically determining whether to adjust for a next time interval a number N of processors running within the system for processing a workload. The automatically determining includes obtaining a measure of operating system overhead by evaluating one or more characteristics of processor time of the N processors consumed within the system for a time interval, and obtaining a measure of system utilization of the N processors running within the system for processing the workload for the time interval. The automatically determining further includes automatically ascertaining whether to adjust the number N of processors running within the system for the next time interval to improve system efficiency using the obtained measure of operating system overhead and the obtained measure of system utilization of the N processors. | 01-22-2015 |
20150026698 | METHOD AND SYSTEM FOR WORK PARTITIONING BETWEEN PROCESSORS WITH WORK DEMAND FEEDBACK - A method according to one embodiment includes the operations of loading binary code comprising a top level task into memory on a first processor, the top level task having an associated range; determining if the top level task is divisible into a plurality of sub-tasks based on the range; for each of the sub-tasks, determining if a given sub-task is divisible into a plurality of sub-sub-tasks; and if the given sub-task is indivisible, executing the given sub-task; otherwise, if the given sub-task is divisible, dividing the given sub-task into the plurality of sub-sub-tasks. | 01-22-2015 |
20150033239 | PREDICTION OF IMPACT OF WORKLOAD MIGRATION - A method, system and product for predicting impact of workload migration. The method comprising: obtaining a utilization pattern of a workload that is being executed on a first platform; generating a synthetic workload that is configured to have the utilization pattern when executed on the first platform; executing the synthetic workload on a second platform; and identifying a change in performance between execution of the synthetic workload on the first platform and between execution of the synthetic workload on the second platform in order to provide a prediction of an impact of migrating the workload from the first platform to the second platform. | 01-29-2015 |
20150033240 | MEASURING METHOD, A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, AND INFORMATION PROCESSING APPARATUS - A measuring method of a processing load of a processor, the method includes measuring a first processing load of the processor in executing of a first thread included in a program at a first frequency, the first processing load is equal to or higher than a first threshold, and measuring a second processing load of the processor in executing of a second thread included in the program at a second frequency lower than the first frequency, the second processing load is lower than the first threshold. | 01-29-2015 |
20150033241 | SYSTEM AND METHOD FOR MANAGING A HYBRID COMPUTE ENVIRONMENT - Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with a first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system, after provisioning the second operating system, polling at least one signal from the first resource manager associated with the at least one node, processing at least one signal from the second resource manager associated with the at least one node and consuming resources associated with the at least one node having the second operating system provisioned thereon. | 01-29-2015 |
20150040138 | Routing Workloads Based on Relative Queue Lengths of Dispatchers - Mechanisms for distributing workload items to a plurality of dispatchers are provided. Each dispatcher is associated with a different computing system of a plurality of computing systems and workload items comprise workload items of a plurality of different workload types. A capacity value for each combination of workload type and computing system is obtained. For each combination of workload type and computing system, a queue length of a dispatcher associated with the corresponding computing system is obtained. For each combination of workload type and computing system, a dispatcher's relative share of incoming workloads is computed based on the queue length for the dispatcher associated with the computing system. In addition, incoming workload items are routed to a dispatcher, in the plurality of dispatchers, based on the calculated dispatcher's relative share for the dispatcher. | 02-05-2015 |
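One simple way to realize the "relative share" computation described in the entry above is to make each dispatcher's share proportional to its free capacity (capacity minus current queue length). The abstract does not pin down the exact formula, so this is a hedged sketch; the function name and the uniform fallback when no capacity is free are assumptions.

```python
# Sketch: a dispatcher's share of incoming work grows with its free
# capacity, normalized so the shares sum to 1.
def relative_shares(capacity, queue_len):
    """capacity, queue_len: dicts keyed by computing system."""
    free = {s: max(0.0, capacity[s] - queue_len[s]) for s in capacity}
    total = sum(free.values())
    if total == 0:
        # No free capacity anywhere: fall back to an even split.
        return {s: 1.0 / len(capacity) for s in capacity}
    return {s: free[s] / total for s in capacity}

shares = relative_shares({"A": 10, "B": 10}, {"A": 2, "B": 6})
print(shares)  # A (8 free slots) gets 2/3 of incoming work, B gets 1/3
```

In the patent's setting the shares would be maintained per combination of workload type and computing system, not globally.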
20150040139 | Reducing computational load of processing measurements of affective response - Systems and methods for reducing the computational load of processing measurements of affective response of a user to content. A content emotional response analyzer (content ERA) receives a segment of content, analyzes it, and outputs an indication regarding whether a value related to a predicted emotional response to the segment reaches a predetermined threshold. Based on the indication, a controller selects a processing level, from among at least first and second processing levels, for a processor to process measurements of affective response. The first level may be selected when the value does not reach the predetermined threshold, while the second level may be selected when the value reaches it. The processor is configured to utilize significantly fewer computation cycles to process data operating at the first processing level, compared to the number of computation cycles it utilizes to process data operating at the second processing level. | 02-05-2015 |
20150052536 | DATA PROCESSING METHOD USED IN DISTRIBUTED SYSTEM - Provided is a data processing method which can increase data processing speed without adding a new node to a distributed system. The data processing method may include: calculating a conversion number of cores corresponding to a number of processing blocks included in a graphics processing unit (GPU) of a node of a distributed system; calculating an adding-up number of cores by adding up a number of cores included in a central processing unit (CPU) of the node of the distributed system and the conversion number of cores; splitting job data allocated to the node of the distributed system into a number of job units data equal to the adding-up number of cores; and allocating a number of job units data equal to the number of cores included in the CPU to the CPU of the node of the distributed system and a number of job units data equal to the conversion number of cores to the GPU of the node of the distributed system. | 02-19-2015 |
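The CPU/GPU split described in the entry above can be sketched numerically: convert GPU processing blocks into an equivalent core count, add that to the CPU core count, and divide the job proportionally. The `blocks_per_core` conversion ratio and the function name are assumptions for illustration; the patent does not state a specific ratio.

```python
# Sketch of the conversion-number / adding-up-number split described above.
def split_job(job_units, cpu_cores, gpu_blocks, blocks_per_core=4):
    """Split job units between a node's CPU and GPU. `blocks_per_core` is an
    assumed ratio for converting GPU processing blocks into core-equivalents."""
    gpu_equiv = gpu_blocks // blocks_per_core   # conversion number of cores
    total = cpu_cores + gpu_equiv               # adding-up number of cores
    per_core = job_units // total
    cpu_share = per_core * cpu_cores
    gpu_share = job_units - cpu_share           # remainder goes to the GPU
    return cpu_share, gpu_share

print(split_job(1200, cpu_cores=8, gpu_blocks=16))  # -> (800, 400)
```

With 8 CPU cores and 16 GPU blocks at 4 blocks per core-equivalent, the GPU counts as 4 cores, so the CPU receives 8/12 of the job and the GPU 4/12.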
20150058863 | LOAD BALANCING OF RESOURCES - Embodiments presented herein provide techniques for balancing a multidimensional set of resources of different types within a distributed resources system. Each host computer providing the resources publishes a status on current resource usage by guest clients. Upon identifying a local imbalance, the host computer determines a source workload to migrate to or from the resources container to minimize the variance in resource usage. Additionally, when placing a new resource workload, the host computer selects a resources container that minimizes the variance to further balance resource usage. | 02-26-2015 |
20150058864 | MANAGEMENT AND SYNCHRONIZATION OF BATCH WORKLOADS WITH ACTIVE/ACTIVE SITES OLTP WORKLOADS - A method for managing a plurality of workloads executing on both a primary system and on a secondary system, and synchronizing both a plurality of software data and a plurality of hardware data stored on the primary system with the secondary system is provided. The method may include receiving a region switch request and stopping the execution of the plurality of workloads on the primary system; suspending the replication of the plurality of software and hardware data stored on the primary system with the plurality of software and hardware data stored on the secondary system; and switching the replication of the plurality of software data and the plurality of hardware data that occurs from the primary system to the secondary system to occur from the secondary system to the primary system. The method may further include activating the execution of and synchronizing the plurality of workloads on the secondary system. | 02-26-2015 |
20150067696 | SYSTEM AND METHOD FOR MANAGING WORKLOAD PERFORMANCE ON BILLED COMPUTER SYSTEMS - In a system and method for managing mainframe computer usage, preferred values for service class defined performance goals are determined to optimize workload performance in service classes across a logical partition. A method for managing mainframe computer system usage can include receiving a performance optimization goal for workload performance in a service class, the service class having a defined performance goal. Achievement of the performance optimization goal is assessed, and a preferred value for the defined performance goal is determined based on assessing achievement of the performance optimization goal. Workload criticality can be taken into account, and automatic changes to the performance goal authorized. | 03-05-2015 |
20150106823 | Mobile Coprocessor System and Methods - Embodiments include apparatuses, systems, and methods for mobile coprocessing. A connection is established between a mobile device and an auxiliary computing device. The mobile device implements a CPU abstraction layer and a virtual CPU between a software stack and a CPU of the mobile device. The abstraction layer allows for the mobile device to offload tasks to the auxiliary computing device while the software stack interacts with the abstraction layer as if the tasks are being executed by the CPU of the mobile device. The mobile device allocates tasks to the auxiliary computing device based on various parameters, including properties of the auxiliary computing device, metrics of the connection, and priorities of the tasks. | 04-16-2015 |
20150113541 | ELECTRONIC DEVICE CAPABLE OF MANAGING INFORMATION TECHNOLOGY DEVICE AND INFORMATION TECHNOLOGY DEVICE MANAGING METHOD - An IT device managing method is applied for an electronic device. The electronic device can communicate with a number of IT devices. The method includes the following steps. Periodically obtaining a temperature of each IT device, determining a load of each IT device corresponding to the obtained temperature and comparing each determined load with a preset value to determine the load of which IT device is greater than or equal to the preset value and determine the load of which IT device is less than the preset value. Obtaining a job currently run by the IT device that has a load greater than or equal to the preset value, and controlling the IT device to stop running the job. Controlling one IT device that has a load less than the preset value to run the job obtained from the IT device that has a load greater than or equal to the preset value. | 04-23-2015 |
20150121396 | TIME SLACK APPLICATION PIPELINE BALANCING FOR MULTI/MANY-CORE PLCS - A method for performing time-slack pipeline balancing for multi/many-core programmable logic controllers includes performing ( | 04-30-2015 |
20150128149 | Managing Fairness In Task Bundling Of A Queue - Methods and systems for managing a queue are disclosed. In one aspect, an example method can comprise accessing at least a portion of a queue comprising a plurality of tasks. Each task of the plurality of tasks can be associated with a property, and the property associated with each task can comprise a respective value. An exclusion value can be determined based on a distribution of the respective values. A group of tasks that comprises respective values of the property that do not match the exclusion value can be selected from the queue, and the selected group of tasks can be processed. | 05-07-2015 |
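The fairness mechanism in the entry above — computing an exclusion value from the distribution of a task property and bundling only tasks that do not match it — can be sketched as follows. The patent does not say which statistic of the distribution yields the exclusion value; this sketch assumes the most frequent value (e.g. the dominant tenant), and the function name is illustrative.

```python
# Sketch: exclude the most common property value so one dominant producer
# cannot monopolize a bundle of tasks pulled from the queue.
from collections import Counter

def select_fair_bundle(queue, key):
    """queue: iterable of tasks; key: extracts the fairness property."""
    counts = Counter(key(t) for t in queue)
    exclusion_value = counts.most_common(1)[0][0]  # assumed statistic
    return [t for t in queue if key(t) != exclusion_value]

tasks = [{"tenant": "a"}, {"tenant": "a"}, {"tenant": "a"},
         {"tenant": "b"}, {"tenant": "c"}]
print(select_fair_bundle(tasks, key=lambda t: t["tenant"]))
```

Here tenant "a" holds three of five queued tasks, so it becomes the exclusion value and the bundle contains only the "b" and "c" tasks; "a"'s tasks would wait for a later bundle.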
20150128150 | DATA PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - A system uses a plurality of nodes to perform a first process on an input data set and a second process on a result of the first process. In response to specification of an input data set including a first segment and a second segment on which the first process was previously performed, the system selects, from the plurality of nodes, a first node and a second node storing at least a part of the result of the first process previously performed on the second segment. The first node performs the first process on the first segment. The second node performs the second process on at least a part of the result of the first process on the first segment transferred from the first node, and at least the part of the result, which is stored in the second node, of the first process on the second segment. | 05-07-2015 |
20150135191 | Compiler System, Method and Software for a Resilient Integrated Circuit Architecture - The exemplary embodiments provide a compiler for a reconfigurable integrated circuit having reconfigurable computational elements with a plurality of contexts. An exemplary compiler generates a compilation comprising a designation of a first type of reconfigurable computational element, the data input linkage or the data output linkage for a first action, and a portion of a first configuration for the first type of reconfigurable computational element comprising a first task identifier and the first action identifier. The reconfigurable integrated circuit utilizes the first task identifier and a run status designation in enabling and disabling corresponding contexts for execution by the reconfigurable computational elements. The first configuration, typically generated in a binding process, further comprises a first input data source address from the first data input linkage or a first output data destination address from the first data output linkage. | 05-14-2015 |
20150135192 | INFORMATION PROCESSING DEVICE, METHOD FOR PROCESSING INFORMATION, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM - There is provided an information processing device that includes a task executor and a controller. The task executor executes one or more second tasks that are generated by execution of a first task. The controller controls the task executor such that the number of tasks to be executed in parallel is adjusted on the basis of a usage degree representing a degree of usage of a resource in the information processing device. | 05-14-2015 |
20150135193 | STREAMING EVENT DATA COLLECTION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed data management. One of the methods includes receiving a plurality of feeds of streaming event data and routing feed data from each of the feeds to a respective channel of a plurality of channels, each of the channels being configured to store feed data until the feed data is consumed by a data sink, including routing feed data from a first feed to a first channel. A load metric for the first channel is determined to exceed a threshold. In response, a second channel is allocated for the first feed and feed data is redirected from the first feed to the second channel instead of the first channel. | 05-14-2015 |
20150150021 | SYSTEM AND METHOD FOR MANAGING WORKLOAD PERFORMANCE ON BILLED COMPUTER SYSTEMS - In a system and method for managing mainframe computer usage, preferred values for service class defined performance goals are determined to optimize workload performance in service classes across a logical partition. A method for managing mainframe computer system usage can include receiving a performance optimization goal for workload performance in a service class, the service class having a defined performance goal. Achievement of the performance optimization goal is assessed, and a preferred value for the defined performance goal is determined based on assessing achievement of the performance optimization goal. Workload criticality can be taken into account, and automatic changes to the performance goal authorized. | 05-28-2015 |
20150293780 | Method and System for Reconfigurable Virtual Single Processor Programming Model - A non-transitory computer-readable storage medium storing a set of instructions that are executable by a processor. The set of instructions, when executed by one or more processors of a multi-processor computing system, causes the one or more processors to perform operations including initiating a first processor of the multi-processor computing system with an operating system image of an operating system, the operating system image including a predetermined object map, initiating a second processor of the multi-processor computing system with the operating system image, placing a plurality of system objects with corresponding processors according to the predetermined object map, receiving a triggering event causing a change to the predetermined object map and relocating one of the system objects to a different one of the processors based on the change to the predetermined object map. | 10-15-2015 |
20150301853 | SYSTEMS AND METHODS FOR PHYSICAL AND LOGICAL RESOURCE PROFILING, ANALYSIS AND BEHAVIORAL PREDICTION - Methods and/or systems for performing workload analysis within an arrangement of interconnected computing devices, such as a converged infrastructure, are disclosed. A prediction system may generate a workload associated with physical and/or logical components of the converged infrastructure that are utilized to execute a client resource. The prediction system may monitor the utilization behavior of the various logical and/or physical components associated with the workload over a particular period of time to generate a workload profile. Subsequently, the prediction system may execute a prediction workload analysis algorithm that accesses the workload profile to identify optimal physical resources in the converged infrastructure that may be available to execute other workloads. | 10-22-2015 |
20150301862 | PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold. | 10-22-2015 |
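The affinity dispatch described in the entry above — preferring the last used processing group for a set of like tasks unless a threshold says otherwise — can be sketched minimally. The patent does not define what the threshold compares; this sketch assumes it is a load ceiling on the last used group, and the function name and fallback rule are illustrative.

```python
# Sketch: keep like tasks on their last used processing group (warm caches),
# falling back to the least-loaded group once that group is too busy.
def choose_group(last_used, group_loads, threshold=0.75):
    """group_loads: {group: utilization in [0, 1]}; threshold is an assumed
    load ceiling on the last used group."""
    if group_loads[last_used] < threshold:
        return last_used
    return min(group_loads, key=group_loads.get)

print(choose_group("g1", {"g1": 0.9, "g2": 0.4}))  # over threshold -> "g2"
print(choose_group("g1", {"g1": 0.5, "g2": 0.1}))  # under threshold -> "g1"
```

Sticking with the last used group preserves cache and NUMA locality for the like tasks; the threshold prevents that preference from overloading one group.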
20150301869 | LOAD BALANCING WITH GRANULARLY REDISTRIBUTABLE WORKLOADS - In one embodiment, a computer-implemented method includes receiving a plurality of tasks to be assigned to a plurality of subgroups of virtual servers. A first plurality of the tasks is assigned to a first subgroup, where the first subgroup includes two or more virtual servers. For each of the first plurality of tasks assigned to the first subgroup, a virtual server is selected within the first subgroup, and the task is assigned to the selected virtual server. A first virtual server is migrated, by a computer processor, from the first subgroup of virtual servers to a second subgroup of virtual servers, if at least one predetermined condition is met, where the migration maintains in the first subgroup at least one of the first plurality of tasks assigned to the first subgroup. | 10-22-2015 |
20150339159 | SYSTEMS, METHODS, AND MEDIA FOR ONLINE SERVER WORKLOAD MANAGEMENT - Methods, using a hardware processor, for online server workload management are provided, comprising: receiving information regarding client device requests; determining, using a hardware processor, a workload distribution for the requests based on electricity cost and carbon footprint of one or more data centers using Lyapunov optimization; sending the workload distribution to the one or more data centers; and causing servers in the one or more data centers to be active or inactive based on the workload distribution. Systems are provided, comprising at least one hardware processor configured to: receive information regarding client device requests; determine a workload distribution for the requests based on electricity cost and carbon footprint of one or more data centers using Lyapunov optimization; send the workload distribution to the one or more data centers; and cause servers in the one or more data centers to be active or inactive based on the workload distribution. | 11-26-2015 |
20150339162 | Information Processing Apparatus, Capacity Control Parameter Calculation Method, and Program - An information processing apparatus calculates parameters corresponding to the data processing structure of a web system. An extraction unit | 11-26-2015 |
20150347183 | IDENTIFYING CANDIDATE WORKLOADS FOR MIGRATION - Techniques for identifying a candidate workload which may be a suitable candidate for migration from a first location to a second location are described herein. A set of suitability measurements associated with a computer system resource operating in the first location is received, the set of suitability measurements including, for example, resource usage values for one or more resources associated with the workload. Based at least in part on one or more statistical calculations on the set of suitability measurements exceeding one or more thresholds, recommendations are made about whether to migrate the workload from the first location to the second location. | 12-03-2015 |
20150363291 | OPTIMIZING THE NUMBER OF SHARED PROCESSES EXECUTING IN A COMPUTER SYSTEM - A system optimizes a number of shared server processes executing on a processor. The system creates, in a memory, a data array for storing a plurality of performance metric values, each associated with a number of shared server processes. The system selects a value for an optimized number of shared server processes according to a first procedure based on the performance metric, observes a performance metric associated with the selected optimized number, and stores, in the data array, the observed performance metric. The system repeats the selecting, observing and storing until at least a predetermined number of contiguous data values are stored in the data array. The system selects the value for the optimized number according to a second procedure based on a slope of the performance metric. The system observes the performance metric associated with the selected optimized number, and stores, in the data array, the observed performance metric. | 12-17-2015 |
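The second selection procedure in the entry above chooses the next candidate process count from the slope of the performance metric. A minimal hill-climbing sketch of that idea follows; the patent's actual first and second procedures are not specified here, so the function name, the step size of one, and the bootstrap rule for short histories are all assumptions.

```python
# Sketch: slope-following search for the optimal number of shared server
# processes, given a history of (process count, performance metric) pairs.
def next_process_count(history):
    """history: list of (n_processes, metric) observations, most recent last.
    Move in the direction in which the metric has been improving."""
    if len(history) < 2:
        return history[-1][0] + 1 if history else 1
    (n0, m0), (n1, m1) = history[-2], history[-1]
    if n1 == n0:
        return n1 + 1  # no slope information; probe upward
    slope = (m1 - m0) / (n1 - n0)
    return n1 + 1 if slope > 0 else max(1, n1 - 1)

print(next_process_count([(4, 100.0), (5, 120.0)]))  # rising metric -> try 6
print(next_process_count([(6, 110.0), (7, 100.0)]))  # falling metric -> back to 6
```

The data-array bookkeeping in the abstract (waiting for a predetermined number of contiguous observations before switching to the slope-based procedure) would wrap around a step rule like this one.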
20150370601 | OPTIMIZING RUNTIME PERFORMANCE OF AN APPLICATION WORKLOAD BY MINIMIZING NETWORK INPUT/OUTPUT COMMUNICATIONS BETWEEN VIRTUAL MACHINES ON DIFFERENT CLOUDS IN A HYBRID CLOUD TOPOLOGY DURING CLOUD BURSTING - A method, system and computer program product for optimizing runtime performance of an application workload. Network input/output (I/O) operations between virtual machines of a pattern of virtual machines servicing the application workload in a private cloud are measured over a period of time and depicted in a histogram. A score is generated for each virtual machine or group of virtual machines in the pattern of virtual machines based on which range in the ranges of I/O operations per second (IOPS) depicted in the histogram has the largest sample size and the number of virtual machines in the same pattern that are allowed to be in the public cloud. In this manner, the runtime performance of the application workload is improved by minimizing the network input/output communications between the two cloud environments by migrating those virtual machine(s) or group(s) of virtual machines with a score that exceeds a threshold value. | 12-24-2015 |
20150370609 | THREAD SCHEDULING ACROSS HETEROGENEOUS PROCESSING ELEMENTS WITH RESOURCE MAPPING - A method for scheduling processes of a workload on a plurality of hardware threads configured in a plurality of processing elements of a multithreading parallel computing system for processing thereby. Process dimensions for each process are determined based on processing attributes associated with each process, and a place and route algorithm is utilized to map the processes to a processor space representative of the processing resources of the computing system based at least in part on the process dimensions to thereby distribute the processes of the workload. | 12-24-2015 |
20150378790 | SYSTEM AND METHOD FOR MANAGING A HYBRID COMPUTE ENVIRONMENT - Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with the first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system, after provisioning the second operating system, polling at least one signal from the resource manager associated with the at least one node, processing at least one signal from the second resource manager associated with the at least one node and consuming resources associated with the at least one node having the second operating system provisioned thereon. | 12-31-2015 |
20160004571 | SYSTEM AND METHOD FOR LOAD BALANCING IN A DISTRIBUTED SYSTEM BY DYNAMIC MIGRATION - A system and method for load balancing between components of a distributed data grid. The system and method support dynamic data migration of selected data partitions in response to detection of hot spots in the data grid which degrade system performance. In embodiments, the system and method rely upon analysis of per-partition performance statistics for both the identification of data nodes which would benefit from data migration and the selection of data nodes for migration. Tuning of the data migration thresholds and method provides for optimizing throughput of the data grid to avoid degradation of performance resulting from load-induced hot spots. | 01-07-2016 |
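The hot-spot-driven migration described in entry 20160004571 could look roughly like this minimal sketch (hypothetical names and a hypothetical 1.5x-of-mean threshold; the patent only specifies per-partition statistics and tunable migration thresholds):

```python
def plan_migration(node_partitions, threshold=1.5):
    """Pick one partition to migrate from the hottest node to the coolest.

    node_partitions: {node: {partition: requests_per_sec}}
    Returns (partition, src_node, dst_node), or None if no hot spot exists.
    """
    loads = {n: sum(p.values()) for n, p in node_partitions.items()}
    mean = sum(loads.values()) / len(loads)
    hot = max(loads, key=loads.get)
    if loads[hot] <= threshold * mean:
        return None  # no load-induced hot spot detected
    cool = min(loads, key=loads.get)
    # migrate the busiest partition of the hot node
    part = max(node_partitions[hot], key=node_partitions[hot].get)
    return part, hot, cool
```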
20160011903 | COMPUTER SYSTEM, COMPUTER SYSTEM MANAGEMENT METHOD AND PROGRAM | 01-14-2016 |
20160011912 | PROCESS SCHEDULING AND EXECUTION IN DISTRIBUTED COMPUTING ENVIRONMENTS | 01-14-2016 |
20160026504 | ASYNCHRONOUS DISPATCHER FOR APPLICATION FRAMEWORK - The described technology is directed towards an asynchronous dispatcher including control logic that manages a queue set, including to dequeue and execute work items from the queue on behalf of application code executing in a program. The dispatcher yields control to the program to allow the program and application code to be responsive with respect to user interface operations. | 01-28-2016 |
20160026514 | STATE MIGRATION FOR ELASTIC VIRTUALIZED COMPONENTS - A capability for supporting an elastic virtualized component that is stateful is provided by supporting state migration for the elastic virtualized component. The elastic virtualized component may support a virtualized network function or any other suitable virtualized function. The elastic virtualized component includes a component load balancer and a set of component instances configured to provide functions of the elastic virtualized component. The elastic virtualized component may be configured to support migration of state information of the component instances following elasticity events in which the capacity of the elastic virtualized component changes (e.g., in response to growth events in which the number of component instances of which the elastic virtualized component is composed increases, in response to degrowth events in which the number of component instances of which the elastic virtualized component is composed decreases, or the like). | 01-28-2016 |
20160034318 | SYSTEM AND METHOD FOR STAGING IN A CLOUD ENVIRONMENT - A method and system for staging in a cloud environment defines a default stage for integration flows. An integration flow is defined by (a) stages including (i) a live stage to follow the default stage, (ii) additional stages between the default and live stages, and (b) endpoint definitions for the live and additional stages. In response to an instruction to promote the integration flow, the integration flow is load balanced by allocating each stage to execution environment(s). Then, the integration flow is run in the execution environment(s). The load balancing includes, for each stage, (i) retrieving a list of execution environments which are available for execution of stages, (ii) selecting execution environment(s) on which to execute the stage and updating the list of available execution environments to indicate that the selected execution environment(s) is allocated, and (iii) storing the selected execution environment(s) as specific to the stage. | 02-04-2016 |
20160041845 | METHOD AND APPARATUS FOR EXECUTING SOFTWARE IN ELECTRONIC DEVICE - An apparatus includes a calculation processing unit configured to perform a calculation in the electronic device, a device manager configured to control a speed of the calculation processing unit and output load factor information, one or more user-level application programs with a respective load factor limit, configured to request load factor limit information of the calculation processing unit and calculation of a load with a load factor limit, and a service quality manager configured to receive the load factor limit information and the load with the load factor limit from the user-level application programs with the load factor limit, receive load factor information of the calculation processing unit from the device manager, generate a calculation parameter so that a load factor of the calculation processing unit is within the load factor limit information, and output the load with the load factor limit and the generated calculation parameter. | 02-11-2016 |
20160050145 | TABLE-BASED LOAD BALANCING FOR BONDED NETWORK INTERFACES - Systems and methods for table-based load balancing implemented by bonded network interfaces. An example method may comprise: receiving, by a bonded interface of a computer system, a data link layer frame; identifying a network interface controller (NIC) of the bonded interface associated, by a load balancing table, with a source Media Access Control (MAC) address of the data link layer frame, wherein the load balancing table comprises a plurality of load balancing entries, each load balancing entry mapping a source MAC address to an identifier of a NIC comprised by the bonded interface; and transmitting the data link layer frame via the identified NIC. | 02-18-2016 |
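A toy version of the source-MAC-to-NIC mapping table from entry 20160050145, assuming round-robin assignment for previously unseen source MACs (the assignment policy is an assumption; the patent only requires a table mapping each source MAC address to a NIC of the bonded interface):

```python
class BondedInterface:
    """Hypothetical sketch of the table-based scheme: each source MAC is
    mapped once to a NIC, and subsequent frames from that MAC reuse it."""

    def __init__(self, nics):
        self.nics = list(nics)
        self.table = {}   # source MAC address -> NIC identifier
        self._next = 0

    def select_nic(self, src_mac):
        nic = self.table.get(src_mac)
        if nic is None:
            # round-robin assignment for unseen MACs (an assumption)
            nic = self.nics[self._next % len(self.nics)]
            self._next += 1
            self.table[src_mac] = nic
        return nic
```

Because the mapping is sticky, all frames from one source MAC leave via the same NIC, which preserves per-flow ordering.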
20160055033 | RUNTIME FOR AUTOMATICALLY LOAD-BALANCING AND SYNCHRONIZING HETEROGENEOUS COMPUTER SYSTEMS WITH SCOPED SYNCHRONIZATION - Sharing tasks among compute units in a processor can increase the efficiency of the processor. When a compute unit does not have a task in its task memory to perform, donating tasks from other compute units can prevent the compute unit from being idle while there are tasks in other parts of the processor. It is desirable to share tasks among compute units that are within defined scopes of the processor. Compute units may share tasks by allowing other compute units to access their private memory, or by donating tasks to a shared memory. | 02-25-2016 |
20160055038 | SELECTING VIRTUAL MACHINES TO BE MIGRATED TO PUBLIC CLOUD DURING CLOUD BURSTING BASED ON RESOURCE USAGE AND SCALING POLICIES - A method, system and computer program product for selecting virtual machines to be migrated to a public cloud. The current resource usage for virtual machine instances running in the private cloud is determined. Furthermore, any scaling policies attached to the virtual machine instances running in the private cloud are obtained. Additional resource usages for any of the virtual machine instances with a scaling policy are computed for when these virtual machine instances are scaled out. A cost of running a virtual machine instance in the public cloud is then determined, based on the cost for running virtual machine instances in a public cloud, using its current resource usage as well as any additional resource usage if a scaling policy is attached to the virtual machine instance. If the cost is less than a threshold cost, then the virtual machine instance is selected to be migrated to the public cloud. | 02-25-2016 |
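The cost-threshold selection in entry 20160055038 reduces to a simple filter; a sketch under assumed inputs (the per-unit cost, usage figures, and field names are hypothetical):

```python
def select_for_bursting(instances, unit_cost, threshold):
    """instances: list of dicts with 'name', 'usage' (current resource units),
    and optional 'scaled_usage' (extra units if its scaling policy fires).
    Returns names of instances cheap enough to migrate to the public cloud."""
    selected = []
    for vm in instances:
        # include scaled-out usage only when a scaling policy is attached
        total = vm["usage"] + vm.get("scaled_usage", 0)
        if total * unit_cost < threshold:
            selected.append(vm["name"])
    return selected
```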
20160070601 | LOAD BALANCE APPARATUS AND METHOD - An apparatus predicts time-series variations in resource usage for logical structures for a future time period (a schedule period) on the basis of a history of resource usage by the logical structures. The apparatus attempts to select a plurality of arrangement candidates for which resource usage in each of a plurality of physical machines is equal to or less than a criterion for each of a plurality of time segments comprising the schedule period. The apparatus computes a migration cost of migrating the logical structures between physical machines for an arrangement according to a holistic arrangement plan for each of a plurality of holistic arrangement plans. Each of the plurality of holistic arrangement plans is a combination of a plurality of selected arrangement candidates corresponding to each of the plurality of time segments. | 03-10-2016 |
20160077877 | INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD - An information processing system characterized by the provision of an optimal-load arrangement means and a load-computation execution means wherein: said optimal-load arrangement means contains a load analysis means, a load distribution means, and program information; the load-computation execution means contains a hardware processing means and a software computation means; the program information includes resource information and information pertaining to data to be processed and the content of the processing to be performed thereon; the load analysis means has the ability to perform community assignment in which, of the data to be processed, data in regions having heavy loads and communication volumes that can be reduced is assigned to a hardware community and data in other regions is assigned to a software community; and the load distribution means divides up the data to be processed such that the data assigned to the hardware community is processed by the hardware processing means and the data assigned to the software community is processed by the software computation means. | 03-17-2016 |
20160077886 | GENERATING WORKLOAD WINDOWS - A method for generating workload windows includes incrementing access counters for each block of a storage system during execution of a workload accessing the storage system. The method also includes determining an average input-output (IO) rate of the storage system based on the access counters. The method further includes determining whether to generate a new workload window based on the average IO rate, an expiring timer, and a predetermined range from an X value to a Y value. The X value is equal to a low threshold of the average IO rate, and the Y value is equal to a high threshold of the average IO rate. The method also includes generating the new workload window based on the determination. | 03-17-2016 |
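The window-generation decision in entry 20160077886 might be condensed as follows (illustrative only; the patent describes the decision in terms of the average IO rate, an expiring timer, and the predetermined [X, Y] band):

```python
def should_open_window(avg_io_rate, x_low, y_high, timer_expired):
    """Open a new workload window when the average IO rate (derived from the
    per-block access counters) leaves the [X, Y] band, or when the current
    window's timer expires."""
    return timer_expired or not (x_low <= avg_io_rate <= y_high)
```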
20160077887 | Safe consolidation and migration - A method, apparatus and computer program product for program migration, the method comprising: receiving a target host and an application to be migrated to a target host; estimating a target load of the application to be migrated; generating a synthetic application which produces a simulated load, the simulated load being smaller than the target load; loading the synthetic application to the target host; monitoring behavior of the target host, the synthetic application, or a second application executed thereon; subject to the behavior being satisfactory: if the simulated load is smaller than the target load, then repeating said generating, said loading and said monitoring, wherein said loading is repeated with increased load; and otherwise migrating the application to the target host. | 03-17-2016 |
20160103714 | SYSTEM, METHOD OF CONTROLLING A SYSTEM INCLUDING A LOAD BALANCER AND A PLURALITY OF APPARATUSES, AND APPARATUS - A system includes a load balancer; apparatuses; a control apparatus configured to execute a process including: selecting, from among the apparatuses, one or more first apparatuses as each processing node for processing data distributed by the load balancer, selecting, from among the apparatuses, one or more second apparatuses as each inputting and outputting node for inputting and outputting data processed by the each processing node, collecting load information from the one or more first apparatuses and the one or more second apparatuses, changing a number of the one or more first apparatuses or a number of the one or more second apparatuses based on the load information, and setting one or more third apparatuses not selected as the processing node and the inputting and outputting node from among the apparatuses based on the changing into a deactivated state. | 04-14-2016 |
20160110226 | Workload Partitioning Procedure for Null Message-Based PDES - An embodiment of the invention includes applying a first partition to a plurality of logical processes (LPs), wherein a particular LP is assigned to a first set of LPs. A second partition is applied to the LPs, wherein the particular LP is assigned to an LP set different from the first set. For both the first and second partitions, lookahead values and transit times are determined for each of the LPs and related links. For the first partition, a first system progression rate is computed using a specified function with the lookahead values and transit times determined for the first partition. For the second partition, a second system progression rate is computed using the specified function with the lookahead values and transit times determined for the second partition. The first and second system progression rates are compared to determine which is lower. | 04-21-2016 |
20160117198 | LOAD DISTRIBUTION APPARATUS, LOAD DISTRIBUTION METHOD, STORAGE MEDIUM, AND EVENT-PROCESSING SYSTEM - This invention implements appropriate load distribution in an event-processing system that includes: a plurality of event generators that generate events and transmit the events to an allocation apparatus, and a plurality of allocation apparatuses that receive events from one or a plurality of event generators and transmit the received events to a processing apparatus. The load distribution apparatus includes an acquiring unit that is configured to acquire a reception status or a transmission status, these statuses representing information about the receiving or transmitting of the events. The load distribution apparatus also includes an updating unit that is configured to update the allocation apparatus specified for the specific event generator to another allocation apparatus, on the basis of the reception status or the transmission status, so that the load applied to the allocation apparatuses is leveled among the plurality of allocation apparatuses. | 04-28-2016 |
20160147575 | PRIORITIZING AND DISTRIBUTING WORKLOADS BETWEEN STORAGE RESOURCE CLASSES - A method includes storing a plurality of workloads in a first disk resource associated with a high end disk classification. The method further includes determining a corresponding activity level for each of the plurality of workloads. The method also includes classifying each of the plurality of workloads into a first set indicative of high-priority workloads and a second set indicative of low-priority workloads based on whether the corresponding activity level is greater than a threshold activity level. The method further includes determining whether a second disk resource associated with a low end disk classification can accommodate storage of a first particular workload in the second set based on an available storage capacity of the second disk resource. The method additionally includes migrating the first particular workload from the first disk resource to the second disk resource. | 05-26-2016 |
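A minimal sketch of the two-set classification and capacity check from entry 20160147575 (hypothetical names; workload sizes are assumed uniform, one storage unit each, purely for brevity):

```python
def tier_workloads(activity, threshold, low_end_capacity):
    """Split workloads into high-priority and low-priority sets by activity
    level, then pick the low-priority workloads that the low-end disk
    resource can accommodate.

    activity: {workload: activity_level}
    Returns (high_priority_set, workloads_to_migrate).
    """
    high = {w for w, a in activity.items() if a > threshold}
    # least-active first, so the coldest workloads migrate preferentially
    low = [w for w, a in sorted(activity.items(), key=lambda kv: kv[1])
           if a <= threshold]
    migrate = low[:low_end_capacity]   # only what fits on the low-end disk
    return high, migrate
```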
20160154677 | Work Stealing in Heterogeneous Computing Systems | 06-02-2016 |
20160154681 | DISTRIBUTED JOBS HANDLING | 06-02-2016 |
20160154682 | MULTI-APPLICATION WORKFLOW INTEGRATION | 06-02-2016 |
20160154683 | DYNAMIC THREAD STATUS RETRIEVAL USING INTER-THREAD COMMUNICATION | 06-02-2016 |
20160162334 | CONCURRENT WORKLOAD DEPLOYMENT TO SYNCHRONIZE ACTIVITY IN A DESIGN PALETTE - A system and method for iteratively deploying a workload pattern are provided. The system and method determines a current set of requirements for at least one piece of the workload pattern that is initiated in a designer and generates a stability metric for at least one of the current set of requirements. The system and method further compares the stability metric to an acceptance threshold and calculates an estimated time to deploy the at least one piece of the workload pattern based on the comparing of the stability metric to the acceptance threshold. | 06-09-2016 |
20160162339 | CONCURRENT WORKLOAD DEPLOYMENT TO SYNCHRONIZE ACTIVITY IN A DESIGN PALETTE - A system and method for iteratively deploying a workload pattern are provided. The system and method determines a current set of requirements for at least one piece of the workload pattern that is initiated in a designer and generates a stability metric for at least one of the current set of requirements. The system and method further compares the stability metric to an acceptance threshold and calculates an estimated time to deploy the at least one piece of the workload pattern based on the comparing of the stability metric to the acceptance threshold. | 06-09-2016 |
20160188376 | Push/Pull Parallelization for Elasticity and Load Balance in Distributed Stream Processing Engines - The stream processing engine uses the Actor programming paradigm for defining the application in terms of a graph built with processing elements (PEs) that use a hash-based partitioning of data, where events (key, value) are pushed towards the next element in the operator. In case of an overloaded PE, the method changes to a Producer/Consumer model in which new workers pull events from a buffer queue in order to reduce the amount of traffic in the overloaded PE. The programmer defines a sequential version of the PE and another, parallel version that recovers the events from a buffer and, if the operator is stateless, sends the result to the next PE or, if the operator is stateful, sends the result to an aggregator PE before moving to the next stage of the pipeline process. Strategies for triggering changes in the graph are defined in an administrator module to provide the right amount of elasticity and load balance in the distributed stream processing engine, using queue analysis from the monitoring module. | 06-30-2016 |
20160188378 | Method of Facilitating Live Migration of Virtual Machines - Embodiments pertain to facilitation of live migration of a virtual machine in a network system. The network system includes a first host, a second host, a first appliance for providing service to the first host, a second appliance for providing service to the second host, and a third appliance. At least one virtual machine is disposed on the first host and has an ongoing first network flow. The first appliance has generated state information about the first network flow. During the migration of the at least one virtual machine to the second host, the third appliance obtains a copy of the state information about the first network flow; and the third appliance takes over from the first appliance to serve the first network flow during the migration of the at least one virtual machine, until the first network flow is terminated. | 06-30-2016 |
20160188653 | UPDATING PROGRESSION OF PERFORMING COMPUTER SYSTEM MAINTENANCE - A method, computer program product, and computer system for updating progression of performing computer system maintenance. A computer system receives a log-on of a change implementer onto a managed computer system and searches a change request on a managing computer system. In response to the change request being found, the computer system receives from the change implementer a command with a current date and time and matches the command to one or more tasks within the change request. In response to determining that the command matches the one or more tasks, the computer system updates start dates and times of the one or more tasks. And, in response to the one or more tasks being completed, the computer system updates stop dates and times of the one or more tasks. | 06-30-2016 |
20160203030 | LOAD CALCULATION METHOD, LOAD CALCULATION PROGRAM, AND LOAD CALCULATION APPARATUS | 07-14-2016 |
20160253210 | Cellular with Multi-Processors | 09-01-2016 |
20160378551 | ADAPTIVE HARDWARE ACCELERATION BASED ON RUNTIME POWER EFFICIENCY DETERMINATIONS - Systems and methods may provide for making a power efficiency determination at runtime based on one or more runtime usage notifications and scheduling a workload for execution on a hardware accelerator if the power efficiency determination indicates that execution of the workload on the hardware accelerator will be more efficient than execution of the workload on a host processor. Additionally, the workload may be scheduled for execution on the host processor if the power efficiency determination indicates that execution of the workload on the host processor will be more efficient than execution of the workload on the hardware accelerator. In one example, making the power efficiency determination includes applying one or more configurable rules to at least one of the one or more runtime usage notifications. | 12-29-2016 |
20160378552 | AUTOMATIC SCALING OF COMPUTING RESOURCES USING AGGREGATED METRICS - A computing resource monitoring service receives a plurality of measurements for a metric associated with an auto-scale group. Each measurement is associated with metadata for the measurement, which specifies attributes for the measurement. The computing resource monitoring service determines, for each measurement and based at least in part on the metadata, a fully qualified metric identifier for the measurement. The service partitions the plurality of measurements into a plurality of logical partitions associated with one or more in-memory datastores. The service transmits the measurements from the plurality of logical partitions to the one or more datastores for storage of the measurements. These measurements are provided to one or more computing resource managers for the auto-scale group to enable automatic scaling of computing resources of the group based at least in part on the measurements. | 12-29-2016 |
20160378557 | TASK ALLOCATION DETERMINATION APPARATUS, CONTROL METHOD, AND PROGRAM | 12-29-2016 |
20160378565 | METHOD AND APPARATUS FOR REGULATING PROCESSING CORE LOAD IMBALANCE - Briefly, methods and apparatus to rebalance workloads among processing cores utilizing a hybrid work donation and work stealing technique are disclosed that reduce workload imbalances within processing devices such as, for example, GPUs. In one example, the methods and apparatus allow for workload distribution between a first processing core and a second processing core by providing queue elements from one or more workgroup queues associated with workgroups executing on the first processing core to a first donation queue that may also be associated with the workgroups executing on the first processing core. The method and apparatus also determine if a queue level of the first donation queue is beyond a threshold, and if so, steal one or more queue elements from a second donation queue associated with workgroups executing on the second processing core. | 12-29-2016 |
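The hybrid donation/stealing scheme of entry 20160378565 can be illustrated with two donation queues and a queue-level threshold (the class structure and names are assumptions for illustration, not the patent's implementation):

```python
from collections import deque

class Core:
    """Hybrid donation/stealing sketch: each core pushes surplus work into
    its own donation queue; a core whose donation queue drops to or below a
    threshold steals from a peer core's donation queue."""

    def __init__(self, threshold=1):
        self.donation = deque()
        self.threshold = threshold

    def donate(self, items):
        # workgroups on this core donate surplus queue elements here
        self.donation.extend(items)

    def next_item(self, peer):
        if len(self.donation) <= self.threshold and peer.donation:
            # steal one queue element from the peer's donation queue
            self.donation.append(peer.donation.pop())
        return self.donation.popleft() if self.donation else None
```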
20160378566 | RUNTIME FUSION OF OPERATORS - The streams environment includes a plurality of operators coupled with processing elements, including a first processing element coupled with a first operator instructed with first programming instructions, and a second processing element coupled with a second operator instructed with second programming instructions. A workload of the first processing element and a workload of the second processing element are measured. A first threshold of the workload of the first processing element and a second threshold of the workload of the second processing element are determined. The first programming instructions and the second programming instructions are compared to determine if the first operator and the second operator are susceptible to fusion. The first operator is de-coupled and fused to the second processing element, in response to the first threshold determination and the determination that the first operator and the second operator are susceptible to fusion. | 12-29-2016 |
20160378568 | Scriptable Dynamic Load Balancing in Computer Systems - The described embodiments include a system for executing a load using a first processor and a second processor in a computer system. During operation, a load balancer executing on the first processor obtains one or more attributes of a load to be executed on the computer system. Next, the load balancer applies a set of configurable rules to the one or more attributes to select a processor from the first and second processors for executing the load. Finally, the system executes the load on the selected processor. | 12-29-2016 |
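The configurable-rule selection in entry 20160378568 amounts to first-match rule evaluation over a load's attributes; a sketch with hypothetical rules and attribute names:

```python
def pick_processor(load_attrs, rules, default="cpu"):
    """Apply an ordered list of configurable rules to a load's attributes.
    Each rule is (predicate, processor); the first matching rule wins.
    The rule shape and attribute names are illustrative assumptions."""
    for predicate, processor in rules:
        if predicate(load_attrs):
            return processor
    return default

# example configurable rules: big data-parallel loads go to the second
# processor ("gpu"), latency-critical loads stay on the first ("cpu")
rules = [
    (lambda a: a.get("data_parallel") and a.get("size", 0) > 1000, "gpu"),
    (lambda a: a.get("latency_critical"), "cpu"),
]
```

Because the rules are plain data, they could be loaded from a script or config file at runtime rather than compiled in, which is the "scriptable" aspect suggested by the title.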
20170235603 | DISTRIBUTED LOAD PROCESSING USING FORECASTED LOCATION-BASED INTERNET OF THINGS DEVICE CLUSTERS | 08-17-2017 |
20170235604 | DISTRIBUTED LOAD PROCESSING USING CLUSTERS OF INTERDEPENDENT INTERNET OF THINGS DEVICES | 08-17-2017 |
20180024869 | GUIDED LOAD BALANCING OF GRAPH PROCESSING WORKLOADS ON HETEROGENEOUS CLUSTERS | 01-25-2018 |