Patent application number | Description | Published |
20090119301 | SYSTEM AND METHOD FOR MODELING A SESSION-BASED SYSTEM WITH A TRANSACTION-BASED ANALYTIC MODEL - According to an embodiment of the present invention, a method for deriving an analytic model for a session-based system is provided. The method comprises receiving, by a model generator, client-access behavior information for the session-based system, wherein the session-based system comprises a plurality of interdependent transaction types. The method further comprises deriving, by the model generator, from the received client-access behavior information, a stateless transaction-based analytic model of the session-based system, wherein the derived transaction-based analytic model models resource requirements of the session-based system for servicing a workload. According to certain embodiments, the derived transaction-based analytic model is used for performing capacity analysis of the session-based system. | 05-07-2009 |
20090307347 | Using Transaction Latency Profiles For Characterizing Application Updates - One embodiment is a method that determines transaction latencies occurring at an application server and a database server in a multi-tier architecture. The method then analyzes the transaction latencies at the application server with Central Processing Unit (CPU) utilization during a monitoring window to determine whether a change in transaction performance at the application server results from an update to an application. | 12-10-2009 |
20100082290 | DETECTING AN ERROR IN A PREDICTION OF RESOURCE USAGE OF AN APPLICATION IN A VIRTUAL ENVIRONMENT - Described herein is a method for detecting an error in a prediction of resource usage of an application running in a virtual environment, comprising: providing a plurality of benchmark sets, executing the plurality of benchmark sets in a native hardware system in which the application natively resides, executing the plurality of benchmark sets in the virtual environment, collecting first traces of first resource utilization metrics in the native hardware system based on the execution of each of the plurality of benchmark sets in the native hardware system, collecting second traces of second resource utilization metrics in the virtual environment based on the execution of each of the plurality of benchmark sets in the virtual environment, generating at least one initial prediction model that maps the first traces of first resource utilization metrics to the second traces of second resource utilization metrics, computing a plurality of mean squared errors (MSEs) based on the at least one initial prediction model, each of the MSEs further based on and associated with the collected first and second traces for a different one of the plurality of benchmark sets, and determining whether to use the initial prediction model to predict a resource usage of the application running in the virtual environment based on the plurality of MSEs. | 04-01-2010 |
20100082319 | PREDICTING RESOURCE USAGE OF AN APPLICATION IN A VIRTUAL ENVIRONMENT - Described herein is a system for predicting resource usage of an application running in a virtual environment. The system comprises a first hardware platform implementing a native hardware system in which an application natively resides and executes, the native hardware system operating to execute a predetermined set of benchmarks that includes at least one of: a computation-intensive workload, a network-intensive workload, and a disk-intensive workload; a second hardware platform implementing a virtual environment therein, the virtual environment operating to execute the predetermined set of benchmarks; a third hardware platform operating to collect first resource usage traces from the first hardware platform and second resource usage traces from the second hardware platform; wherein the third hardware platform further operates to generate at least one prediction model that predicts a resource usage of the application executing in the virtual environment based on the collected first and second resource usage traces. | 04-01-2010 |
20100082320 | ACCURACY IN A PREDICTION OF RESOURCE USAGE OF AN APPLICATION IN A VIRTUAL ENVIRONMENT - Described herein is a system for improving accuracy in a prediction of resource usage of an application running in a virtual environment. The system comprises a first hardware platform implementing a native hardware system in which an application natively resides and executes, the native hardware system operating to execute a predetermined set of benchmarks that includes at least one of a network-intensive workload and a disk-intensive workload, a second hardware platform implementing a virtual environment therein, the virtual environment operating to execute the predetermined set of benchmarks, and a third hardware platform operating to collect first resource usage traces from the first hardware platform that result from the execution of the predetermined set of benchmarks in the native hardware system and second resource usage traces from the second hardware platform that result from the execution of the predetermined set of benchmarks in the virtual environment. The third hardware platform further operates to perform a linear regression computation to generate at least one prediction model that predicts a resource usage of the application executing in the virtual environment based on the collected first and second resource usage traces. | 04-01-2010 |
20100082321 | SCALING A PREDICTION MODEL OF RESOURCE USAGE OF AN APPLICATION IN A VIRTUAL ENVIRONMENT - Described herein is a method for scaling a prediction model of resource usage of an application in a virtual environment, comprising: providing a predetermined set of benchmarks, wherein the predetermined set of benchmarks includes at least one of: a computation-intensive workload, a network-intensive workload, and a disk-intensive workload; executing the predetermined set of benchmarks in a first native hardware system in which the application natively resides; executing the predetermined set of benchmarks in the virtual environment; generating at least one first prediction model that predicts a resource usage of the application running in the virtual environment based on the executions of the predetermined set of benchmarks in the first native hardware system and the virtual environment; determining a resource usage of the application running in a second native hardware system in which the application also natively resides; generating at least one second prediction model based on a scaling of the at least one first prediction model by a predetermined constant; and predicting a resource usage of the application running in the virtual environment based on the resource usage of the application running in the second native hardware system and the at least one second prediction model. | 04-01-2010 |
20100082322 | OPTIMIZING A PREDICTION OF RESOURCE USAGE OF AN APPLICATION IN A VIRTUAL ENVIRONMENT - Described herein is a method for optimizing a prediction of resource usage of an application running in a virtual environment, comprising: providing a predetermined set of benchmarks; executing the predetermined set of benchmarks in a native hardware system in which the application natively resides; executing the predetermined set of benchmarks in the virtual environment; collecting first traces of first resource utilization metrics in the native hardware system based on the execution of the predetermined set of benchmarks in the native hardware system; collecting second traces of second resource utilization metrics in the virtual environment based on the execution of the predetermined set of benchmarks in the virtual environment; generating a first prediction model and a second prediction model; generating a third prediction model that maps all of the first traces of the selected first metric to the second traces of resource utilization metrics; comparing the first and second prediction models against the third prediction model; and predicting a resource usage of the application running in the virtual environment with either a) a combination of the first and second prediction models or b) the third prediction model based on the comparing. | 04-01-2010 |
20100083248 | OPTIMIZING A PREDICTION OF RESOURCE USAGE OF MULTIPLE APPLICATIONS IN A VIRTUAL ENVIRONMENT - Described herein is a method for optimizing a prediction of resource usage of multiple applications running in a virtual environment, comprising: providing a predetermined set of benchmarks; executing the predetermined set of benchmarks in a native hardware system in which the application natively resides; executing the predetermined set of benchmarks in the virtual environment; collecting first traces of first resource utilization metrics in the native hardware system based on the execution of the predetermined set of benchmarks in the native hardware system; collecting second traces of second resource utilization metrics in the virtual environment based on the execution of the predetermined set of benchmarks in the virtual environment; generating a first prediction model that maps a first selected set of the first traces of a selected one of the first resource utilization metrics to the second traces of resource utilization metrics; generating a second prediction model that maps a second different selected set of the first traces of the selected first resource utilization metric to the second traces of resource utilization metrics; collecting first application traces of resource utilization metrics in the native hardware system based on an execution of a first application in the native hardware system; collecting second application traces of resource utilization metrics in the native hardware system based on an execution of a second application in the native hardware system; aggregating the first application traces of the first application and the second application traces of the second application into combined application traces of resource utilization metrics; and predicting a combined resource usage of the first and second applications running in the virtual environment by applying the first and second prediction models to the combined application traces of resource utilization metrics. | 04-01-2010 |
20100094592 | Using Application Performance Signatures For Characterizing Application Updates - One embodiment is a method that determines application performance signatures occurring at an application server in a multi-tier architecture. The method then analyzes the application performance signatures to determine whether a change in transaction performance at the application server results from a modification to an application. | 04-15-2010 |
20100094992 | Capacity Planning Of Multi-tiered Applications From Application Logs - One embodiment collects performance data for an application server that processes transactions received from a client computer to a database server. An application log is created from the performance data and used for capacity planning in a multi-tiered architecture. | 04-15-2010 |
20100100401 | System And Method For Sizing Enterprise Application Systems - Embodiments of the present invention recite a system and computer-implemented method for sizing enterprise-application systems. In one embodiment of the present invention, a ratio of a plurality of pre-defined benchmarks is determined. The workload of the ratio of pre-defined benchmarks corresponds to a desired workload of an enterprise application system. The ratio of the plurality of pre-defined benchmarks is then used as a second benchmark for testing the enterprise application system. | 04-22-2010 |
20100115095 | AUTOMATICALLY MANAGING RESOURCES AMONG NODES - A system for managing resources automatically among nodes includes a node controller configured to dynamically manage allocation of node resources to individual workloads, where each of the nodes is contained in one of a plurality of pods. The system also includes a pod controller configured to manage live migration of workloads between nodes within one of the plurality of pods, where the plurality of pods are contained in a pod set. The system further includes a pod set controller configured to manage capacity planning for the pods contained in the pod set. The node controller, the pod controller and the pod set controller are interfaced with each other to enable the controllers to meet common service policies in an automated manner. The node controller, the pod controller and the pod set controller are also interfaced with a common user interface to receive service policy information. | 05-06-2010 |
20100250480 | IDENTIFYING SIMILAR FILES IN AN ENVIRONMENT HAVING MULTIPLE CLIENT COMPUTERS - To identify similar files in an environment having multiple client computers, a first client computer receives, from a coordinator computer, a request to find files located at the first client computer that are similar to at least one comparison file, wherein the request has also been sent to other client computers by the coordinator computer to request that the other client computers also find files that are similar to the at least one comparison file. In response to the request, the first client computer compares signatures of the files located at the first client computer with a signature of the at least one comparison file to identify at least a subset of the files located at the first client computer that are similar to the at least one comparison file according to a comparison metric. The first client computer sends, to the coordinator computer, a response relating to the comparing. | 09-30-2010 |
20100324869 | MODELING A COMPUTING ENTITY - To model a computing entity, information relating to transactions associated with the computing entity is received. The received information forms a collection of information. The collection is segmented into a plurality of segments, and at least one anomalous segment is identified. A model of the computing entity is built. | 12-23-2010 |
20110082837 | BACKUP SIMULATION FOR BACKING UP FILESYSTEMS TO A STORAGE DEVICE - Embodiments are directed to methods and apparatus that back up filesystems to a storage device. A backup simulation is used to determine a number of agents to back up the filesystems. | 04-07-2011 |
20110082972 | BACKING UP FILESYSTEMS TO A STORAGE DEVICE - One embodiment is a method that backs up filesystems to a storage device. Filesystems having a longer previous backup time are backed up before filesystems having a shorter previous backup time. | 04-07-2011 |
20110202504 | BACKING UP OBJECTS TO A STORAGE DEVICE - One embodiment is a method that backs up objects to a storage device. A number of objects that are concurrently backed up to the storage device is limited. | 08-18-2011 |
20110295811 | CHANGING A NUMBER OF DISK AGENTS TO BACKUP OBJECTS TO A STORAGE DEVICE - A method executes a simulation to determine backup times to backup objects to storage devices using a number of concurrent disk agents that are assigned to each of the storage devices. The number of concurrent disk agents is changed during the backup of the objects to the storage devices. | 12-01-2011 |
20110296249 | SELECTING A CONFIGURATION FOR AN APPLICATION - There is provided a computer-implemented method for selecting from a plurality of full configurations of a storage system an operational configuration for executing an application. An exemplary method comprises obtaining application performance data for the application on each of a plurality of test configurations. The exemplary method also comprises obtaining benchmark performance data with respect to execution of a benchmark on the plurality of full configurations, one or more degraded configurations of the full configurations and the plurality of test configurations. The exemplary method additionally comprises estimating a metric for executing the application on each of the plurality of full configurations based on the application performance data and the benchmark performance data. The operational configuration may be selected from among the plurality of full configurations based on the metric. | 12-01-2011 |
20120131583 | ENHANCED BACKUP JOB SCHEDULING - Systems and methods of enhanced backup job scheduling are disclosed. An example method may include determining a number of jobs (n) in a backup set, determining a number of tape drives (m) in the backup device, and determining a number of concurrent disk agents (maxDA) configured for each tape drive. The method may also include defining a scheduling problem based on n, m, and maxDA. The method may also include solving the scheduling problem using an integer programming (IP) formulation to derive a bin-packing schedule that minimizes makespan (S) for the backup set. | 05-24-2012 |
20120136971 | SYSTEM AND METHOD FOR DETERMINING HOW MANY SERVERS OF AT LEAST ONE SERVER CONFIGURATION TO BE INCLUDED AT A SERVICE PROVIDER'S SITE FOR SUPPORTING AN EXPECTED WORKLOAD - A method comprises receiving, into a capacity planning system, workload information representing an expected workload of client accesses of streaming media files from a site. The method further comprises the capacity planning system determining, for at least one server configuration, how many servers of the at least one server configuration to be included at the site for supporting the expected workload in a desired manner. | 05-31-2012 |
20120198466 | DETERMINING AN ALLOCATION OF RESOURCES FOR A JOB - A job profile describes characteristics of a job. A performance parameter is calculated based on the job profile, and using a value of the performance parameter, an allocation of resources is determined to assign to the job to meet a performance goal associated with a job. | 08-02-2012 |
20120296852 | DETERMINING WORKLOAD COST - A method of determining a workload cost is provided herein. The method includes determining a direct consumption of a resource pool by a workload. The method also includes determining a burstiness for the workload and the resource pool. The burstiness comprises a difference between a peak consumption of the resource pool by the workload, and the direct consumption of the resource pool. The method further includes determining an unallocated amount of the resource pool. Additionally, the method includes determining the workload cost based on the direct consumption, the burstiness, and the unallocated amount of the resource pool. | 11-22-2012 |
20130167151 | JOB SCHEDULING BASED ON MAP STAGE AND REDUCE STAGE DURATION - A plurality of job profiles is received. Each job profile describes a job to be executed, and each job includes map tasks and reduce tasks. An execution duration for a map stage including the map tasks and an execution duration for a reduce stage including the reduce tasks of each job is estimated. The jobs are scheduled for execution based on the estimated execution duration of the map stage and the estimated execution duration of the reduce stage of each job. | 06-27-2013 |
20130268940 | AUTOMATING WORKLOAD VIRTUALIZATION - A system, and a corresponding method enabled by and implemented on that system, automatically calculates and compares costs for hosting workloads in virtualized or non-virtualized platforms. The system allows a service user (i.e., a customer) to decide how best to have workloads hosted by apportioning costs that are least sensitive to workload placement decisions and by providing robust and repeatable cost estimates. The system compares the costs of hosting a workload in virtualized and non-virtualized environments; separates workloads into categories including those that should be virtualized and those that should not, and determines the amount of physical resources to cost-effectively host a set of workloads. | 10-10-2013 |
20130268941 | DETERMINING AN ALLOCATION OF RESOURCES TO ASSIGN TO JOBS OF A PROGRAM - A performance model is used to calculate a performance parameter based on characteristics of a collection of jobs that make up a program, a number of map tasks in the jobs, a number of reduce tasks in the jobs, and an allocation of resources, where the jobs include the map tasks and the reduce tasks, the map tasks producing intermediate results based on segments of input data, and the reduce tasks producing an output based on the intermediate results. Using a value of the performance parameter calculated by the performance model, a particular allocation of resources is determined to assign to the jobs of the program to meet a performance goal of the program. | 10-10-2013 |
20130290538 | EVALUATION OF CLOUD COMPUTING SERVICES - At least one embodiment is a method for estimating resource costs required to process a workload to be completed using at least two different cloud computing models. Historical trace data of at least one completed workload that is similar to the workload to be completed is received by the computer. The processing of the completed workload is simulated using a t-shirt cloud computing model and a time-sharing model. The t-shirt and time-sharing resource costs are estimated based on their respective simulations. The t-shirt and time-sharing resource costs are then compared. | 10-31-2013 |
20130290972 | WORKLOAD MANAGER FOR MAPREDUCE ENVIRONMENTS - A method of managing workloads in MapReduce environments with a system. The system receives job profiles of respective jobs, wherein each job profile describes characteristics of map and reduce tasks. The map tasks produce intermediate results based on the input data, and the reduce tasks produce an output based on the intermediate results. The jobs are ordered according to performance goals into a hierarchy. A minimum quantity of resources is allocated to each job to achieve its performance goal. A plurality of spare resources are allocated to at least one of the jobs. A new job profile having a new performance goal is then received. Next, it is determined whether the new performance goal can be met without deallocating spare resources. Spare resources are re-allocated from the other jobs to the new job to achieve its performance goal without compromising the performance goals of the other jobs. | 10-31-2013 |
20130290976 | SCHEDULING MAPREDUCE JOB SETS - Determining a schedule of a batch workload of MapReduce jobs is disclosed. A set of multi-stage jobs for processing in a MapReduce framework is received, for example, in a master node. Each multi-stage job includes a duration attribute, and each duration attribute includes a stage duration and a stage type. The MapReduce framework is separated into a plurality of resource pools. The multi-stage jobs are separated into a plurality of subgroups corresponding with the plurality of pools. Each subgroup is configured for concurrent processing in the MapReduce framework. The multi-stage jobs in each of the plurality of subgroups are placed in an order according to increasing stage duration. For each pool, the multi-stage jobs in increasing order of stage duration are sequentially assigned from either a front of the schedule or a tail of the schedule by stage type. | 10-31-2013 |
20130318538 | ESTIMATING A PERFORMANCE CHARACTERISTIC OF A JOB USING A PERFORMANCE MODEL - A job profile is received. | 11-28-2013 |
20130339972 | DETERMINING AN ALLOCATION OF RESOURCES TO A PROGRAM HAVING CONCURRENT JOBS - A performance model for a collection of jobs that make up a program is used to calculate a performance parameter based on a number of map tasks in the jobs, a number of reduce tasks in the jobs, and an allocation of resources, where the jobs include the map tasks and the reduce tasks, the map tasks producing intermediate results based on segments of input data, and the reduce tasks producing an output based on the intermediate results. The performance model considers overlap of concurrent jobs. Using a value of the performance parameter calculated by the performance model, a particular allocation of resources is determined to assign to the jobs of the program to meet a performance goal of the program. | 12-19-2013 |
20140019987 | SCHEDULING MAP AND REDUCE TASKS FOR JOBS EXECUTION ACCORDING TO PERFORMANCE GOALS - Allocations of resources are determined for jobs that have map tasks and reduce tasks. The jobs are ordered according to performance goals of the jobs. The tasks of the jobs are scheduled for execution according to the ordering and the allocations of resources for the respective jobs. | 01-16-2014 |
20140040573 | DETERMINING A NUMBER OF STORAGE DEVICES TO BACKUP OBJECTS IN VIEW OF QUALITY OF SERVICE CONSIDERATIONS - Storage device libraries, machine readable media, and methods are provided for determining a number of storage devices to backup objects in view of quality of service considerations. An example of a storage device library that determines the number of storage devices to backup objects includes a plurality of storage devices and a controller to control backup of the objects to an assigned number of the storage devices. The controller determines the assigned number of the storage devices before the backup of the objects based upon assigned parameters for backup of the objects that include a time window and a number of concurrent disk agents per storage device. | 02-06-2014 |
20140089727 | ESTIMATING A PERFORMANCE PARAMETER OF A JOB HAVING MAP AND REDUCE TASKS AFTER A FAILURE - A job profile includes characteristics of a job to be executed, where the characteristics of the job profile relate to map tasks and reduce tasks of the job, and where the map tasks produce intermediate results based on input data, and the reduce tasks produce an output based on the intermediate results. In response to a failure in a system, numbers of failed map tasks and reduce tasks of the job based on a time of the failure are computed, and numbers of remaining map tasks and reduce tasks are computed. A performance model is provided, and a performance parameter of the job is estimated using the performance model. | 03-27-2014 |
20140215471 | CREATING A MODEL RELATING TO EXECUTION OF A JOB ON PLATFORMS - At least one benchmark is determined. The at least one benchmark is run on first and second computing platforms to generate platform profiles. Based on the generated platform profiles, a model is generated that characterizes a relationship between a MapReduce job executing on the first platform and the MapReduce job executing on the second platform, wherein the MapReduce job includes map tasks and reduce tasks. | 07-31-2014 |
20140215487 | OPTIMIZING EXECUTION AND RESOURCE USAGE IN LARGE SCALE COMPUTING - A method for tuning workflow settings in a distributed computing workflow comprising sequential interdependent jobs includes pairing a terminal stage of a first job and a leading stage of a second, sequential job to form an optimization pair, in which data segments output by the terminal stage of the first job comprises data input for the leading stage of the second job. The performance of the optimization pair is tuned by determining, with a computational processor, an estimated minimum execution time for the optimization pair and increasing the minimum execution time to generate an increased execution time. The method further includes calculating a minimum number of data segments that still permit execution of the optimization pair within the increased execution time. | 07-31-2014 |
20140359624 | DETERMINING A COMPLETION TIME OF A JOB IN A DISTRIBUTED NETWORK ENVIRONMENT - A method for determining a completion time of a job in a distributed network environment includes determining completion times of a map task and a reduce task of the job, and executing at least one test to collect a training dataset that characterizes the completion times of the map task and the reduce task. | 12-04-2014 |
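Several of the prediction-model abstracts above (20100082319 through 20100082322) share one core idea: run the same benchmark set on native hardware and in the virtual environment, fit a regression that maps native resource-utilization traces to virtual-environment traces, and apply the model to a new application's native traces. The sketch below is an illustration of that general idea only, not the patented method; the benchmark samples and coefficients are synthetic.

```python
import numpy as np

# Synthetic benchmark traces collected on native hardware.
# Columns: CPU utilization (%), disk I/O rate (ops/s) per sample.
native = np.array([
    [10.0,  5.0],
    [20.0, 10.0],
    [40.0,  2.0],
    [60.0, 20.0],
    [80.0,  8.0],
])
# Corresponding CPU utilization observed in the virtual environment for the
# same benchmark samples (synthetic: virtualization overhead inflates usage).
virtual_cpu = 1.2 * native[:, 0] + 0.5 * native[:, 1] + 3.0

# Least-squares fit of: virtual_cpu = w0 + w1*cpu + w2*io
X = np.column_stack([np.ones(len(native)), native])
w, *_ = np.linalg.lstsq(X, virtual_cpu, rcond=None)

def predict(cpu, io):
    """Predict virtual-environment CPU% from one native-hardware trace sample."""
    return w[0] + w[1] * cpu + w[2] * io

# Apply the model to a new application's native trace sample.
print(round(predict(50.0, 10.0), 2))  # close to 1.2*50 + 0.5*10 + 3 = 68.0
```

The error-detection variant (20100082290) would then compute a mean squared error per benchmark set between predicted and observed virtual traces and reject the model when those errors are too large.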
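The backup-scheduling abstracts (20110082972, 20110295811, 20120131583) order backup jobs by previous backup time and pack them onto a limited number of drives to reduce overall completion time (makespan). The patents describe simulation and integer-programming formulations; the sketch below instead uses the classic longest-processing-time greedy heuristic to convey the packing idea, with made-up job durations and drive count.

```python
import heapq

def schedule_backups(durations, num_drives):
    """Greedily assign backup jobs to drives, longest previous backup time
    first, each onto the currently least-loaded drive.
    Returns (makespan, assignment) where assignment[i] is job i's drive."""
    # Schedule longer jobs first (mirrors backing up long filesystems first).
    order = sorted(range(len(durations)), key=lambda i: durations[i], reverse=True)
    # Min-heap of (current total load, drive id).
    drives = [(0.0, d) for d in range(num_drives)]
    heapq.heapify(drives)
    assignment = [None] * len(durations)
    for i in order:
        load, drive = heapq.heappop(drives)
        assignment[i] = drive
        heapq.heappush(drives, (load + durations[i], drive))
    makespan = max(load for load, _ in drives)
    return makespan, assignment

# Hypothetical previous backup times (hours) for six filesystems, two drives.
makespan, assignment = schedule_backups([7, 3, 5, 4, 2, 6], num_drives=2)
print(makespan)  # → 14
```

An integer-programming formulation, as in 20120131583, can shave the makespan further than this heuristic in the worst case, at the cost of solver time.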