
Resource allocation

Subclass of:

718 - Electrical computers and digital processing systems: virtual machine task or process management or task management/control

718100000 - TASK MANAGEMENT OR CONTROL

718102000 - Process scheduling

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries
Document | Title | Date
20110179423MANAGING LATENCIES IN A MULTIPROCESSOR INTERCONNECT - In a computing system having a plurality of transaction source nodes issuing transactions into a switching fabric, an underserviced node notifies source nodes in the system that it needs additional system bandwidth to timely complete an ongoing transaction. The notified nodes continue to process already started transactions to completion, but stop the introduction of new traffic into the fabric until such time as the underserviced node indicates that it has progressed to a preselected point.07-21-2011
20130031561Scheduling Flows in a Multi-Platform Cluster Environment - Techniques for scheduling multiple flows in a multi-platform cluster environment are provided. The techniques include partitioning a cluster into one or more platform containers associated with one or more platforms in the cluster, scheduling one or more flows in each of the one or more platform containers, wherein the one or more flows are created as one or more flow containers, scheduling one or more individual jobs into the one or more flow containers to create a moldable schedule of one or more jobs, flows and platforms, and automatically converting the moldable schedule into a malleable schedule.01-31-2013
20130031560Batching and Forking Resource Requests In A Portable Computing Device - In a portable computing device having a node-based resource architecture, resource requests are batched or otherwise transactionized to help minimize inter-processing entity messaging or other messaging or provide other benefits. In a resource graph defining the architecture, each node or resource of the graph represents an encapsulation of functionality of one or more resources controlled by a processor or other processing entity, each edge represents a client request, and adjacent nodes of the graph represent resource dependencies. A single transaction of resource requests may be provided against two or more of the resources. Additionally, this single transaction may become forked so that parallel processing among a client issuing the single transaction and the resources handling the requests of the single transaction may occur.01-31-2013
20130031559METHOD AND APPARATUS FOR ASSIGNMENT OF VIRTUAL RESOURCES WITHIN A CLOUD ENVIRONMENT - A virtual resource assignment capability is disclosed. The virtual resource assignment capability is configured to support provisioning of virtual resources within a cloud environment. The provisioning of virtual resources within a cloud environment includes receiving a user virtual resource request requesting provisioning of virtual resources within the cloud environment, determining virtual resource assignment information specifying assignment of virtual resources within the cloud environment, and provisioning the virtual resources within the cloud environment using the virtual resource assignment information. The assignment of the requested virtual resources within the cloud environment includes assignment of the virtual resources to datacenters of the cloud environment in which the virtual resources will be hosted and, more specifically, to the physical resources within the datacenters of the cloud environment in which the virtual resources will be hosted. The virtual resources may include virtual processor resources, virtual memory resources, and the like. The physical resources may include processor resources, storage resources, and the like (e.g., physical resources of blade servers of racks of datacenters of the cloud environment).01-31-2013
20090013325RESOURCE ALLOCATION METHOD, RESOURCE ALLOCATION PROGRAM AND RESOURCE ALLOCATION APPARATUS - A resource allocation method, a resource allocation program, and a resource allocation apparatus in which a request reception server subjects an inputted SQL to a syntax analysis, extracts at least one SQL process from the SQL, calculates a resource cost of a database required by the BES to perform the SQL process for each of process types contained in the SQL process, decides an allocation ratio for allocating the resource of a request executing server to a virtualized server in accordance with a resource cost ratio required by each of the BES to execute the SQL process, and requests for execution of the respective BES on the virtualized server to which the resource has been allocated so as to execute the SQL process.01-08-2009
20110209156METHODS AND APPARATUS RELATED TO MIGRATION OF CUSTOMER RESOURCES TO VIRTUAL RESOURCES WITHIN A DATA CENTER ENVIRONMENT - In one embodiment, a processor-readable medium can be configured to store code representing instructions to be executed by a processor. The code can include code to receive an indicator that a set of virtual resources has been identified for quarantine at a portion of a data center. The code can also include code to execute, during at least a portion of a quarantine time period, at least a portion of a virtual resource from the set of virtual resources at a quarantined portion of hardware of the data center that is dedicated to execute the set of virtual resources in response to the indicator and not execute virtual resources associated with non-quarantine operation.08-25-2011
20120174115RUNTIME ENVIRONMENT FOR VIRTUALIZING INFORMATION TECHNOLOGY APPLIANCES - A system for virtualizing information technology (IT) appliances can include an IT appliance hosting facilities software. The IT appliance hosting facilities software can be implemented at a layer of abstraction above a virtual machine host, which is implemented in a layer of abstraction above a hardware layer of a computing system. The IT appliance hosting facilities software can include programmatic code functioning as virtualized hardware upon which a set of IT appliance software modules are able to concurrently run. The IT appliance hosting facilities software can provide caching, application level security, and a standardized framework for running the IT appliance software modules, which are configured in conformance with the standardized framework.07-05-2012
20120174112APPLICATION RESOURCE SWITCHOVER SYSTEMS AND METHODS - Registry information systems and methods are presented. In one embodiment, an application resource switchover method comprises receiving a switchover indication wherein the switchover indication includes an indication to switchover execution of at least one service of an application running on a primary system resource to running on a secondary system resource; performing a switchover preparation process, wherein the switchover preparation process includes automatically generating a switchover plan including indications of switchover operations for performance of a switchover process; and performing the switchover process in which the at least one of the application services is brought up on the secondary system resource in accordance with the plan of switchover operations. In one embodiment, automatically generating a plan of switchover operations includes analyzing the switchover indication, wherein the analyzing includes determining a type of switchover corresponding to the switchover indication. There can be a variety of switchover types (e.g., a migration switchover, a recovery switchover, etc.).07-05-2012
20130047164METHOD OF SCHEDULING JOBS AND INFORMATION PROCESSING APPARATUS IMPLEMENTING SAME - A computer produces a first schedule of jobs including ongoing jobs and pending jobs which is to cause a plurality of computing resources to execute the pending jobs while preventing suspension of the ongoing jobs running on the computing resources. The computer also produces a second schedule of the jobs which allows the ongoing jobs to be suspended and rescheduled to cause the computing resources to execute the suspended jobs and pending jobs. Based on the produced first and second schedules, the computer calculates an advantage factor representing advantages to be obtained by suspending jobs, as well as a loss factor representing losses to be caused by suspending jobs. The computer chooses either the first schedule or the second schedule, based on a comparison between the advantage factor and loss factor.02-21-2013
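The decision rule described in the abstract above — adopt the preemptive schedule only when the advantage of suspending jobs outweighs the loss — can be sketched as follows. The schedule representation and the use of makespan reduction as the advantage factor are illustrative assumptions, not the patent's definitions.

```python
def makespan(schedule):
    """Completion time of the last job; schedule maps job -> (start, duration)."""
    return max(start + duration for start, duration in schedule.values())

def choose_schedule(first, second, wasted_work):
    """first: schedule that never suspends ongoing jobs.
    second: schedule that may suspend and reschedule them.
    wasted_work: processing time lost to suspension (the loss factor).
    The advantage factor is the makespan reduction that `second` achieves."""
    advantage = makespan(first) - makespan(second)
    return second if advantage > wasted_work else first
```

With these stand-ins, the comparison between the advantage factor and the loss factor reduces to a single inequality.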
20090193427MANAGING PARALLEL DATA PROCESSING JOBS IN GRID ENVIRONMENTS - Method, system, and computer program product for managing parallel data processing jobs in grid environments are provided. A request to deploy a parallel data processing job in a grid environment is received. A plurality of resource nodes in the grid environment are dynamically allocated to the parallel data processing job. A configuration file is automatically generated for the parallel data processing job based on the allocated resource nodes. The parallel data processing job is then executed in the grid environment using the generated configuration file.07-30-2009
20090193426SYSTEM AND METHOD FOR DESCRIBING APPLICATIONS FOR MANAGEABILITY AND EFFICIENT SCALE-UP DEPLOYMENT - Systems, methods and computer storage media for operating a scalable computing platform are provided. A service description describing a requested service is received. Upon receiving the service description a determination of the required resources and the available resources is made. An instance description is produced. The resources required to sustain the deployment of the service are mapped to the available resources of the computing platform so the service may be deployed. The instance description is amended with each deployment of the service to allow for sustained deployment of the service.07-30-2009
20090193425METHOD AND SYSTEM OF MANAGING RESOURCE LEASE DURATION FOR ON-DEMAND COMPUTING - A method and system of managing resource lease duration for on-demand computing is provided. The system can include one or more resources having a metric capturing tool; and a provisioning manager in communication with the one or more resources. The provisioning manager can receive a request for at least one resource from the requester. The provisioning manager can provision the at least one resource from the one or more resources. The metric capturing tool can communicate one or more metrics associated with performance of the at least one resource to the provisioning manager. The provisioning manager can determine a lease modifier based at least in part on the one or more metrics. The provisioning manager can adjust a lease duration for the at least one resource based at least in part on the lease modifier.07-30-2009
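One plausible reading of the lease-modifier idea above: scale the lease duration by observed utilization relative to a target. The linear formula, the target value, the one-minute floor, and all names here are assumptions for illustration.

```python
def lease_modifier(utilization_samples, target=0.5):
    """Metrics reported by the resource's metric-capturing tool are averaged;
    utilization above `target` yields a modifier > 1 (extend the lease)."""
    mean_util = sum(utilization_samples) / len(utilization_samples)
    return mean_util / target

def adjust_lease(base_lease_s, utilization_samples, target=0.5):
    """Scale the lease duration by the modifier, with a floor of one minute."""
    return max(60.0, base_lease_s * lease_modifier(utilization_samples, target))
```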
20100011366Dynamic Resource Allocation - A computer-implemented method includes detecting an actual workload representative of a pattern of access of a plurality of items of content; comparing the actual workload against a prescriptive workload to determine an occurrence of a substantial deviation from the prescriptive workload; and upon determining the occurrence of the substantial deviation, revising the prescriptive workload based at least in part on the actual workload. The plurality of items is stored on resources of a storage environment according to one of a plurality of resource allocation arrangements. The prescriptive workload includes a plurality of categories, each category being associated with a respective one of the plurality of resource allocation arrangements.01-14-2010
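The detect-deviation-then-revise loop in the abstract above can be sketched as below; the L1 deviation measure, the threshold, and the blending weight `alpha` are all assumptions made for the sketch.

```python
def revise_if_deviated(prescriptive, actual, threshold=0.25, alpha=0.5):
    """prescriptive/actual: per-category access frequencies (dicts summing to ~1).
    Returns the prescriptive workload unchanged when deviation is small,
    otherwise a blend of the prescriptive and actual workloads."""
    categories = set(prescriptive) | set(actual)
    deviation = sum(abs(actual.get(c, 0.0) - prescriptive.get(c, 0.0))
                    for c in categories)
    if deviation <= threshold:
        return dict(prescriptive)                 # no substantial deviation
    return {c: (1 - alpha) * prescriptive.get(c, 0.0) + alpha * actual.get(c, 0.0)
            for c in categories}
```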
20110197197WIDGET FRAMEWORK, REAL-TIME SERVICE ORCHESTRATION, AND REAL-TIME RESOURCE AGGREGATION - A method to optimize calls to a service by components of an application running on an application server is provided. The method includes receiving a first call and a second call, the first call made to a service by a first one of a plurality of components included in the application, and the second call made to the service by a second one of the plurality of components; selecting one of a plurality of optimizations, the plurality of optimizations including orchestrating the first call and the second call into a third call to the service; and, in response to the selecting of the orchestrating of the first call and the second call into the third call as the one of the plurality of optimizations, orchestrating the first call and the second call into the third call.08-11-2011
20100083272MANAGING POOLS OF DYNAMIC RESOURCES - Computer systems attempt to manage resource pools of a dynamic number of similar resources and work tasks in order to optimize system performance. Work requests are received into the resource pool having a dynamic number of resources instances. An instance-throughput curve is determined that relates a number of resource instances in the resource pool to throughput of the work requests. A slope of a point on the instance-throughput curve is estimated with stochastic gradient approximation. The number of resource instances for the resource pool is selected when the estimated slope of the instance-throughput curve is zero.04-01-2010
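The controller described above — estimate the slope of the instance-throughput curve stochastically and settle where it reaches zero — might look like this toy hill-climber. The central-difference estimator, the gain, and the synthetic noisy curve are assumptions, not the patent's method.

```python
import random

def estimate_slope(measure_throughput, n, delta=1):
    """Noisy central-difference estimate of the curve's slope at pool size n."""
    return (measure_throughput(n + delta) - measure_throughput(n - delta)) / (2 * delta)

def tune_pool_size(measure_throughput, n0, rounds=50, gain=0.4):
    """Move the instance count uphill until the estimated slope is ~0."""
    n = n0
    for _ in range(rounds):
        n = max(2, round(n + gain * estimate_slope(measure_throughput, n)))
    return n
```

Against a concave throughput curve with measurement noise, the loop settles at the pool size where the slope crosses zero.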
20090125910SYSTEM-GENERATED RESOURCE MANAGEMENT PROFILES - A method for computer control of collaborating devices enables automatic generation of resource management profiles to coordinate resource allocation within the collaborating device. The method includes utilization of a graphical user interface to select an initial resource management profile and instruct the device to automatically generate a resource profile. Timing is specified for creation of the automatically generated optimized resource profile. The optimized resource profile is developed from statistics maintained, collected, and interpreted about the demand for resources within each component of the collaborating device. An operator may elect to automatically invoke the most recently generated optimized profile after a specified period of collaborating device idleness or to invoke it upon an instruction from the operator. The optimized resource profile may be saved for future use or discarded.05-14-2009
20130086592CORRELATION OF RESOURCES - A filter driver arranged to be executed on a processor of a terminal. The filter driver, when executed, is arranged to (i) receive a request for a first resource relating to a device installed in the terminal; (ii) determine if the requested first resource is appropriate for the device; and (iii) provide a second resource if the first resource is inappropriate for the device.04-04-2013
20110202927Apparatus, Method and System for Aggregating Computing Resources - A system for executing applications designed to run on a single SMP computer on an easily scalable network of computers, while providing each application with computing resources, including processing power, memory and others that exceed the resources available on any single computer. A server agent program, a grid switch apparatus and a grid controller apparatus are included. Methods for creating processes and resources, and for accessing resources transparently across multiple servers are also provided.08-18-2011
20110202926Computer System Performance by Applying Rate Limits to Control Block Tenancy - Embodiments of the invention are provided to enable fair and balanced allocation of control blocks to support processing of requests received from a client machine. The server is configured with tools to manage an account balance of control block availability for each service class. The account balance is periodically adjusted based upon usage, tenancy, deficits, and passage of time. Processing of one or more tasks in a service class is supported when the credit value in the service class account is equal to or greater than the entry cost estimated for the request.08-18-2011
20110202925OPTIMIZED CAPACITY PLANNING - A computer implemented method, system and/or program product determine capacity planning of resource allocation for an application scheduled to execute on a virtual machine from a set of multiple applications by computing a mean associated with a pool of pre-defined resources utilization over a time interval; computing a variance associated with the pool of pre-defined resources utilization over the same time interval; identifying a set of resources to execute the scheduled application from the pool of pre-defined resources, wherein the pool of pre-defined resources is created from a pre-defined Service Level Agreement (SLA); and allocating a set of fixed resources from the pool of pre-defined resources to execute the application based on the mean resource utilization.08-18-2011
20120180065METHODS AND APPARATUS FOR DETECTING DEADLOCK IN MULTITHREADING PROGRAMS - A method of detecting deadlock in a multithreading program is provided. An invocation graph is constructed having a single root and a plurality of nodes corresponding to one or more functions written in code of the multithreading program. A resource graph is computed in accordance with one or more resource sets in effect at each node of the invocation graph. It is determined whether cycles exist between two or more nodes of the resource graph. A cycle is an indication of deadlock in the multithreading program.07-12-2012
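Step (iii) of the method above — determining whether cycles exist between nodes of the resource graph — is standard graph cycle detection; a minimal sketch follows (the dictionary graph encoding is an assumption).

```python
def has_cycle(resource_graph):
    """resource_graph: {resource: iterable of resources acquired while it is held}.
    DFS three-coloring; a back edge to a grey node is a cycle, i.e. the
    deadlock indication the abstract describes."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}

    def dfs(node):
        color[node] = GREY
        for nxt in resource_graph.get(node, ()):
            state = color.get(nxt, WHITE)
            if state == GREY:
                return True                      # back edge -> cycle
            if state == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in resource_graph)
```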
20120180064CENTRALIZED PLANNING FOR REAL-TIME SELF TUNING OF PLANNED ACTIONS IN A DISTRIBUTED ENVIRONMENT - Automatic programming, scheduling, and control of planned activities at “worker nodes” in a distributed environment are provided by a “real-time self tuner” (RTST). The RTST provides self-tuning of controlled interoperation among an interconnected set of distributed components (i.e., worker nodes) including, for example, home appliances, security systems, lighting, sensor networks, medical electronic devices, wearable computers, robotics, industrial controls, wireless communication systems, audio nets, distributed computers, toys, games, etc. The RTST acts as a centralized “planner” that is either one of the nodes or a dedicated computing device. A set of protocols allow applications to communicate with the nodes, and allow one or more nodes to communicate with each other. Self-tuning of the interoperation and scheduling of tasks to be performed at each node uses an on-line sampling driven statistical model and predefined node “behavior patterns” to predict and manage resource requirements needed by each node for completing assigned tasks.07-12-2012
20080256547Method and System For Managing a Common Resource In a Computing System - The invention, in one embodiment, provides a method for acquiring and releasing a lock over a common resource in a computing system. After a lock has been acquired over a common resource, a determination (10-16-2008
20080256546Method for Allocating Programs - In one embodiment, a method for allocating programs to resources suited to operating conditions thereof comprises generating composition management information for a plurality of resources based on management information relating to performance and capacity of each of the resources. The composition management information includes identification information for the resources used by a plurality of programs. The method further comprises searching for and locating the composition management information of a resource identified by the identification information for each of the programs, based on the composition management information of the resources, and generating program information which associates composition management information of each of the programs with the composition management information of the located resource; and outputting information indicating that a resource abnormality has occurred with one of the programs, in cases where the composition management information of the resource which is associated with the program in the program information corresponds to one or more rules for detecting a resource abnormality in the program.10-16-2008
20080256545Systems and methods of managing resource utilization on a threaded computer system - Embodiments of the invention relate generally to incremental computing. Specifically, embodiments of the invention include systems and methods for the concurrent processing of multiple, incremental changes to a data value while at the same time monitoring and/or enforcing threshold values for that data value. Embodiments of the invention also include systems and methods of managing utilization of a resource of a computer system having a number of threads.10-16-2008
20120246660OPTIMIZED MULTI-COMPONENT CO-ALLOCATION SCHEDULING WITH ADVANCED RESERVATIONS FOR DATA TRANSFERS AND DISTRIBUTED JOBS - Disclosed are systems, methods, computer readable media, and compute environments for establishing a schedule for processing a job in a distributed compute environment. The method embodiment comprises converting a topology of a compute environment to a plurality of endpoint-to-endpoint paths, based on the plurality of endpoint-to-endpoint paths, mapping each replica resource of a plurality of resources to one or more endpoints where each respective resource is available, iteratively identifying schedule costs associated with a relationship between endpoints and resources, and committing a selected schedule cost from the identified schedule costs for processing a job in the compute environment.09-27-2012
20100077403Middleware for Fine-Grained Near Real-Time Applications - A centralized scheduling server for scheduling fine-grained near real-time applications includes network ports, a central managing application, functional library(ies) and service processes. One port communicates with processing nodes over a private computer network. Processing nodes report processing-node status to the server and execute scheduled tasks. The other port communicates with user devices through a public network. The central managing application manages fine-grained near real-time applications. The functional library provides middleware core functionality. The service processes include a resource manager, a submitter to place tasks on a task queue, and a dispatcher to dispatch tasks to processing nodes. A work flow process runs an optimized scheduling algorithm.03-25-2010
20100077402Variable Scaling for Computing Elements - Various systems, methods, and computing units are provided for variable scaling of computing elements. In one representative embodiment, a method comprises: receiving a plurality of computing resource levels; and providing one of the plurality of computing resource levels to each of a plurality of computing elements, each computing element having an associated output, the provided voltage level based upon associated output significance.03-25-2010
20100077401AUTOMATED IDENTIFICATION OF COMPUTING SYSTEM RESOURCES BASED ON COMPUTING RESOURCE DNA - Computing resource DNA associated with a computing resource of a computing system can be received. The computing resource DNA can include one or more computing resource DNA elements representing identifying characteristics of the computing resource. A set of one or more potential matches for the received computing resource DNA can be ascertained from a set of reference data. When one or more potential matches exist, a confidence factor can be calculated for each potential match. The set of potential matches can then be refined. An optimum match for the computing resource DNA can be determined from the set of refined potential matches. The computing resource DNA can then be identified as a representation of the computing resource associated with the optimum match.03-25-2010
20130081043Resource allocation using entitlement hints - An embodiment of an information handling apparatus can comprise an entitlement vector operable to specify resources used by at least one object of a plurality of objects, and logic operable to issue a hint instruction based on the entitlement vector for usage in scheduling the resources.03-28-2013
20130081047MANAGING A WORKLOAD OF A PLURALITY OF VIRTUAL SERVERS OF A COMPUTING ENVIRONMENT - An integrated hybrid system is provided. The hybrid system includes compute components of different types and architectures that are integrated and managed by a single point of control to provide federation and the presentation of the compute components as a single logical computing platform.03-28-2013
20130081046ANALYSIS OF OPERATOR GRAPH AND DYNAMIC REALLOCATION OF A RESOURCE TO IMPROVE PERFORMANCE - An operator graph analysis mechanism analyzes an operator graph corresponding to an application for problems as the application runs, and determines potential reallocations from a reallocation policy. The reallocation policy may specify potential reallocations depending on whether one or more operators in the operator graph are compute bound, memory bound, communication bound, or storage bound. The operator graph analysis mechanism includes a resource reallocation mechanism that can dynamically change allocation of resources in the system at runtime to address problems detected in the operator graph. The operator graph analysis mechanism thus allows an application represented by an operator graph to dynamically evolve over time to optimize its performance at runtime.03-28-2013
20130081045APPARATUS AND METHOD FOR PARTITION SCHEDULING FOR MANYCORE SYSTEM - An apparatus for performing partition scheduling in a manycore environment. The apparatus may perform partition scheduling based on a priority and in this instance, may perform partition scheduling to minimize the number of idle cores. The apparatus may include a partition queue to manage a partition scheduling event; a partition scheduler including a core map to store hardware information of each of the plurality of cores; and a partition manager to perform partition scheduling with respect to the plurality of cores in response to the partition scheduling event, using the hardware information.03-28-2013
20130036424RESOURCE ALLOCATION IN PARTIAL FAULT TOLERANT APPLICATIONS - A method for allocating a set of components of an application to a set of resource groups includes the following steps performed by a computer system. The set of resource groups is ordered based on respective failure measures and resource capacities associated with the resource groups. An importance value is assigned to each of the components. The importance value is associated with the effect of the component on an output of the application. The components are assigned to the resource groups based on the importance value of each component and the respective failure measures and resource capacities associated with the resource groups. The components with higher importance values are assigned to resource groups with lower failure measures and higher resource capacities. The application may be a partial fault tolerant (PFT) application that comprises PFT application components. The resource groups may comprise a heterogeneous set of resource groups (or clusters).02-07-2013
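The assignment policy described above — most-important components onto the most reliable, highest-capacity groups — can be sketched greedily; the slot-based capacity model and all names are assumptions for the sketch.

```python
def assign_components(importance, groups):
    """importance: {component: importance value}.
    groups: {group: (failure_measure, capacity_slots)}.
    Higher-importance components land in lower-failure, higher-capacity groups."""
    # Rank groups: lowest failure measure first, ties broken by larger capacity.
    ranked = sorted(groups, key=lambda g: (groups[g][0], -groups[g][1]))
    free = {g: groups[g][1] for g in groups}
    assignment = {}
    for comp in sorted(importance, key=importance.get, reverse=True):
        for g in ranked:
            if free[g] > 0:                      # each component takes one slot
                assignment[comp], free[g] = g, free[g] - 1
                break
    return assignment
```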
20100043009Resource Allocation in Multi-Core Environment - Embodiments of the presently claimed invention automatically and systematically schedule jobs in a computer system thereby optimizing job throughput while simultaneously minimizing the amount of time a job waits for access to a shareable resource in the system. Such embodiments may implement a methodology that continuously pre-conditions the profile of requests submitted to a job scheduler such that the resulting schedule for the dispatch of those jobs results in optimized use of available computer system resources. Through this methodology, the intersection of the envelope of available computer system shareable resources may be considered in the context of the envelope of requested resources associated with the jobs in the system input queue. By using heuristic policies, an arrangement of allocations of available resources against requested resources may be determined thereby maximizing resource consumption on the processing system.02-18-2010
20130081044Task Switching and Inter-task Communications for Multi-core Processors - The invention provides hardware based techniques for switching processing tasks of software programs for execution on a multi-core processor. Invented techniques involve a hardware logic based controller for assigning, adaptive to program processing loads, tasks for processing by cores of a multi-core fabric as well as configuring a set of multiplexers to appropriately interconnect cores of the fabric and program task specific segments at fabric memories, to arrange efficient inter-task communication as well as transferring of activating and de-activating task memory images among the multi-core fabric. The invention thereby provides an efficient, hardware-automated runtime operating system for multi-core processors, minimizing any need to use processing capacity of the cores for traditional operating system software functions. Additionally, such low overhead hardware based operating system for multi-core processors provides significant cost-efficiency and performance advantages, including data processing throughput maximization across all programs dynamically sharing a given multi-core processor, and hardware based security.03-28-2013
20130139172CONTROLLING THE USE OF COMPUTING RESOURCES IN A DATABASE AS A SERVICE - A method and apparatus control the use of a computing resource by multiple tenants in a DBaaS service. The method includes intercepting a task that is to access a computing resource, the task being an operating system process or thread; identifying a tenant that is associated with the task from the multiple tenants; determining other tasks of the tenant that access the computing resource; and controlling the use of the computing resource by the task, so that the total amount of usage of the computing resource by the task and the other tasks does not exceed the limit of usage of the computing resource for the tenant.05-30-2013
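The per-tenant cap above — total usage by a tenant's tasks must not exceed the tenant's limit — reduces to an admission counter. This sketch omits the OS-level task interception, and the class and unit names are assumptions.

```python
class TenantLimiter:
    """Tracks per-tenant resource usage and refuses requests that would
    push a tenant past its configured limit."""

    def __init__(self, limits):
        self.limits = dict(limits)               # tenant -> allowed units
        self.usage = {t: 0 for t in limits}

    def try_acquire(self, tenant, amount):
        """Admit `amount` units for `tenant` only if the cap allows it."""
        if self.usage[tenant] + amount > self.limits[tenant]:
            return False
        self.usage[tenant] += amount
        return True

    def release(self, tenant, amount):
        """Return units when the tenant's task finishes with the resource."""
        self.usage[tenant] = max(0, self.usage[tenant] - amount)
```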
20090172689ADAPTIVE BUSINESS RESILIENCY COMPUTER SYSTEM FOR INFORMATION TECHNOLOGY ENVIRONMENTS - Programmatically adapting an Information Technology (IT) environment to changes associated with business applications of the IT environment. The programmatically adapting is performed in the context of the business application. The changes can reflect changes in the IT environment, changes to the business application, changes to the business environment and/or failures within the environment, as examples.07-02-2009
20130042252Processing resource allocation within an integrated circuit - An integrated circuit […]02-14-2013
20130042253RESOURCE MANAGEMENT SYSTEM, RESOURCE MANAGEMENT METHOD, AND RESOURCE MANAGEMENT PROGRAM - A resource management system is provided which calculates a safety rate in such a manner that the amount of resources satisfying an SLA does not become excessive. Excess rate calculation means […]02-14-2013
20120084786JOB EXECUTING SYSTEM, JOB EXECUTING DEVICE AND COMPUTER-READABLE MEDIUM - An image forming device includes a monitoring service performing unit and a service process performing instructing unit. The monitoring service performing unit acquires operation state information including index data that represents a service processing function mounted in the corresponding server and an operation state of the corresponding server from each server by starting a monitoring service when an accepted job is performed. The service process performing instructing unit instructs a low-load server to start a corresponding service processing function when the load on a server in which the service processing function used for executing the job is mounted is determined to be high. The server acquires the corresponding service processing function from the server in which the corresponding service processing function is mounted when being instructed to start an operation and thereafter performs the corresponding service process in accordance with the performance instruction transmitted from the image forming device.04-05-2012
20130139171METHOD AND APPARATUS FOR GENERATING METADATA FOR DIGITAL CONTENT - A method and an apparatus for generating metadata for digital content are described, which allow the generated metadata to be reviewed while metadata generation is still ongoing. The metadata generation is split into a plurality of processing tasks, which are allocated to two or more processing nodes. The metadata generated by the two or more processing nodes is gathered and visualized on an output unit.05-30-2013
20130139174DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects metrics of the system, nodes, application, jobs and processing units that will be used to determine how to best allocate the jobs on the system. A job optimizer analyzes the collected metrics to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where the processing units are overutilizing the resources on the node.05-30-2013
20100043008Scalable Work Load Management on Multi-Core Computer Systems - Embodiments of the presently claimed invention minimize the effect of Amdahl's Law with respect to multi-core processor technologies. This scheme is asynchronous across all of the cores of a processing system and is completely independent of other cores and other work units running on those cores. This scheme occurs on an as-needed, just-in-time basis. As a result, the constraints of Amdahl's Law do not apply to the scheduling algorithm and the design is linearly scalable with the number of processing cores, with no degradation due to the effects of serialization.02-18-2010
20100043007MOBILE APPARATUS, A METHOD OF CONTROLLING A RATE OF OCCUPATION OF A RESOURCE OF A CPU - Provided is a mobile apparatus capable of stably executing an animating process even if an interrupting process occurs during execution of the animating process. The device includes a single CPU configured to execute the animating process at least including reproduction and recording of animated images in parallel with execution of a process other than the animating process and a resource control unit configured to control, in the case that an interruptive event occurs while the CPU is executing the animating process and the CPU executes the interrupting process simultaneously with occurrence of the interruptive event, the rate of occupation of a CPU resource allocated to execution of the interrupting process.02-18-2010
20100043006SYSTEMS AND METHODS FOR A CONFIGURABLE DEPLOYMENT PLATFORM WITH VIRTUALIZATION OF PROCESSING RESOURCE SPECIFIC PERSISTENT SETTINGS - Methods and systems for deploying a processing resource in a configurable platform are described. A method includes providing a specification that describes a configuration of a processing area network, the specification including (i) a number of processors for the processing area network, (ii) a local area network topology defining interconnectivity and switching functionality among the specified processors of the processing area network, and (iii) a storage space for the processing area network. The specification further includes processing resource specific persistent settings. The method further includes allocating resources from the configurable platform to satisfy deployment of the specification, programming interconnectivity between the allocated resources and processing resources to satisfy the specification, and deploying the specification to a processing resource within the configurable deployment platform in response to software commands. The specification is used to generate the software commands to configure the platform and deploy processing resources corresponding to the specification.02-18-2010
20100043005SYSTEM RESOURCE MANAGEMENT MODERATOR PROTOCOL - A method, system, and computer program product for managing system resources within a data processing system. A resource management moderator (RMM) utility assigns a priority to each application within a group of management applications, facilitated by a RMM protocol. When a request for control of a particular resource is received, the RMM utility compares the priority of the requesting application with the priority of the controlling application. Control of the resource is ultimately given to the management application with the greater priority. If the resource is not under control of an application, control of the resource may be automatically granted to the requester. Additionally, the RMM utility provides support for legacy applications via a “manager of managers” application. The RMM utility registers the “manager of managers” application with the protocol and enables interactions (to reconfigure and enable legacy applications) between the “manager of managers” application and legacy applications.02-18-2010
20090158291METHOD FOR ASSIGNING RESOURCE OF UNITED SYSTEM - A method of assigning a resource of a united system in which a plurality of single systems are complexly operated includes: determining a multi-user diversity order based on the quantity of users existing within the system; determining a cost function using the determined multi-user diversity order; and assigning a resource based on the determined cost function. Therefore, a state of each system and user requirements can be fully reflected and a resource can be efficiently managed within a united system in which several systems are complexly operated.06-18-2009
20090158290System and Method for Load-Balancing in Server Farms - A system and method for receiving a server request, determining whether one of a plurality of servers scheduled to receive the server request is available, wherein the availability of the one of the servers scheduled to receive the request is based on a first stored value and a second stored value, incrementing the second stored value by a predetermined amount when the one of the servers is unavailable and directing the server request to another one of the plurality of servers based on the first and second stored values.06-18-2009
20090158289WORKFLOW EXECUTION PLANS THROUGH COMPLETION CONDITION CRITICAL PATH ANALYSIS - Optimizing workflow execution. A method includes identifying a completion condition. The completion condition is specified as part of the overall workflow. The method further includes identifying a number of activities that could be executed to satisfy the completion condition. One or more activities from the number of activities are ordered into an execution plan and assigned system resources based on an analysis of activities in the number of activities and the completion condition.06-18-2009
20100107174SCHEDULER, PROCESSOR SYSTEM, AND PROGRAM GENERATION METHOD - A scheduler for conducting scheduling for a processor system including a plurality of processor cores and a plurality of memories respectively corresponding to the plurality of processor cores includes: a scheduling section that allocates one of the plurality of processor cores to one of a plurality of process requests corresponding to a process group based on rule information; and a rule changing section that, when a first processor core is allocated to a first process of the process group, changes the rule information and allocates the first processor core to a subsequent process of the process group, and that restores the rule information when a second processor core is allocated to a final process of the process group.04-29-2010
20100107173Distributing resources in a market-based resource allocation system - Disclosed herein are representative embodiments of methods, apparatus, and systems for distributing a resource (such as electricity) using a resource allocation system. In one exemplary embodiment, a plurality of requests for electricity are received from a plurality of end-use consumers. The requests indicate a requested quantity of electricity and a consumer-requested index value indicative of a maximum price a respective end-use consumer will pay for the requested quantity of electricity. A plurality of offers for supplying electricity are received from a plurality of resource suppliers. The offers indicate an offered quantity of electricity and a supplier-requested index value indicative of a minimum price for which a respective supplier will produce the offered quantity of electricity. A dispatched index value is computed at which electricity is to be supplied based at least in part on the consumer-requested index values and the supplier-requested index values.04-29-2010
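The dispatch computation in this abstract resembles double-auction clearing: consumer bids and supplier offers are matched until the curves cross. A minimal Python sketch under that assumption; the function name, the greedy matching, and the midpoint pricing rule are all illustrative, since the patent only says the dispatched value is based on both sets of index values.

```python
def clearing_index(bids, offers):
    """Compute a dispatched index value for a double auction (sketch).

    bids   : list of (quantity, max_index) consumer requests
    offers : list of (quantity, min_index) supplier offers
    Returns an index at which supply meets demand, or None when the
    curves never cross.
    """
    bids = sorted(bids, key=lambda b: -b[1])     # highest bids first
    offers = sorted(offers, key=lambda o: o[1])  # cheapest offers first
    i = j = 0
    dispatched = None
    while i < len(bids) and j < len(offers):
        qty_b, idx_b = bids[i]
        qty_o, idx_o = offers[j]
        if idx_b < idx_o:                        # no profitable trade left
            break
        matched = min(qty_b, qty_o)
        dispatched = (idx_b + idx_o) / 2         # midpoint of marginal pair
        bids[i] = (qty_b - matched, idx_b)
        offers[j] = (qty_o - matched, idx_o)
        if bids[i][0] == 0:
            i += 1
        if offers[j][0] == 0:
            j += 1
    return dispatched
```

With one cheap and one expensive offer against two bids, the returned value is the midpoint of the last bid/offer pair that still traded.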
20100107172System providing methodology for policy-based resource allocation - A system providing methodology for policy-based resource allocation is described. In one embodiment, for example, a system for allocating computer resources amongst a plurality of applications based on a policy is described that comprises: a plurality of computers connected to one another through a network; a policy engine for specifying a policy for allocation of resources of the plurality of computers amongst a plurality of applications having access to the resources; a monitoring module at each computer for detecting demands for the resources and exchanging information regarding demands for the resources at the plurality of computers; and an enforcement module at each computer for allocating the resources amongst the plurality of applications based on the policy and information regarding demands for the resources.04-29-2010
20100107171COMPUTING TASK CARBON OFFSETING - Methods, systems, services and program products are provided for implementing carbon offset computing. During performance of a specified computing task data concerning resource consumption regarding that specified computing task is gathered and stored. Upon completion of the specified computing task, the amount of carbon offset required to compensate for resource consumption associated with performance of the completed specified computing task is calculated based upon stored or known resource consumption data. The calculated amount of carbon offset information may be transmitted to a carbon offset function provider, and a carbon offset function provider implements the specified amount of carbon offset based upon the calculated amounts communicated for the completed specified computing task.04-29-2010
20090119672Delegation Metasystem for Composite Services - A delegation metasystem for composite services is described, where a composite service is a service which calls other services during its operation. In an embodiment, the composite service is defined using generic descriptions for any services (and their access control models) which may be called by the composite service during operation. At run time, these generic descriptions and potentially other factors, such as the user of the composite service, are used to select actual available services which may be called by the composite service and access rights for the selected services are delegated to the composite service. These access rights may subsequently be revoked when the composite service terminates.05-07-2009
20090307701INFORMATION PROCESSING METHOD AND APPARATUS USING THE SAME - A processor processes the task A and the task B sequentially, wherein the task A performs an application to generate data that should be output to or input from an HDD, and the task B controls a data input and output request to the HDD controller.12-10-2009
20130047163Systems and Methods for Detecting and Tolerating Atomicity Violations Between Concurrent Code Blocks - The system and methods described herein may be used to detect and tolerate atomicity violations between concurrent code blocks and/or to generate code that is executable to detect and tolerate such violations. A compiler may transform program code in which the potential for atomicity violations exists into alternate code that tolerates these potential violations. For example, the compiler may inflate critical sections, transform non-critical sections into critical sections, or coalesce multiple critical sections into a single critical section. The techniques described herein may utilize an auxiliary lock state for locks on critical sections to enable detection of atomicity violations in program code by enabling the system to distinguish between program points at which lock acquisition and release operations appeared in the original program, and the points at which these operations actually occur when executing the transformed program code. Filtering and analysis techniques may reduce false positives induced by the transformations.02-21-2013
20100095300Online Computation of Cache Occupancy and Performance - Methods, computer programs, and systems for managing thread performance in a computing environment based on cache occupancy are provided. In one embodiment, a computer implemented method assigns a thread performance counter to threads being created to measure the number of cache misses for the threads. The thread performance counter is deduced in one embodiment based on performance counters associated with each core in a processor. The method further calculates a self-thread value as the change in the thread performance counter of a given thread during a predetermined period, and an other-thread value as the sum of all the changes in the thread performance counters for all threads except for the given thread. Further, the method estimates a cache occupancy for the given thread based on a previous occupancy for the given thread, and the calculated self-thread and other-thread values. The estimated cache occupancy is used to assign computing environment resources to the given thread. In another embodiment, cache miss-rate curves are constructed for a thread to help analyze performance tradeoffs when changing cache allocations of the threads in the system.04-15-2010
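An occupancy estimate built from self-thread and other-thread miss counts is commonly written as a linear update: the thread's misses grow its share of the cache, other threads' misses shrink it. A sketch of one such update step; the exact rule and all names here are assumptions, not the patent's formula.

```python
def update_occupancy(occupancy, self_misses, other_misses, cache_lines):
    """One interval of a linear cache-occupancy estimate (sketch).

    Each miss by this thread tends to evict a line it does not own,
    growing its occupancy in proportion to the space it lacks; each
    miss by any other thread may evict one of this thread's lines in
    proportion to this thread's current share of the cache.
    """
    share = occupancy / cache_lines
    occupancy += self_misses * (1.0 - share) - other_misses * share
    # Clamp to the physical bounds of the cache.
    return max(0.0, min(float(cache_lines), occupancy))
```

Starting from zero occupancy, every self-miss counts in full; at a 50% share, other-thread misses remove half a line each on average.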
20120167111RESOURCE DEPLOYMENT BASED ON CONDITIONS - Architecture that facilitates the package partitioning of application resources based on conditions, and the package applicability based on the conditions. An index is created for a unified lookup of the available resources. At build time of an application, the resources are indexed and determined to be applicable based on the conditions. The condition under which the resource is applicable is then used to automatically partition the resource into an appropriate package. Each resource package then becomes applicable under the conditions in which the resources within it are applicable, and is deployed to the user if the user merits the conditions (e.g., an English user will receive an English package of English strings, but not a French package). Before the application is run, the references to the resources are merged and can be used to do appropriate lookup of what resources are available.06-28-2012
20090044194MULTITHREADED LOCK MANAGEMENT - Apparatus, systems, and methods may operate to construct a memory barrier to protect a thread-specific use counter by serializing parallel instruction execution. If a reader thread is new and a writer thread is not waiting to access data to be read by the reader thread, the thread-specific use counter is created and associated with a read data structure and a write data structure. The thread-specific use counter may be incremented if a writer thread is not waiting. If the writer thread is waiting to access the data after the thread-specific use counter is created, then the thread-specific use counter is decremented without accessing the data by the reader thread. Otherwise, the data is accessed by the reader thread and then the thread-specific use counter is decremented. Additional apparatus, systems, and methods are disclosed.02-12-2009
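The reader-side protocol above (create or increment a use counter only when no writer is waiting, back off otherwise) can be sketched as a small class. This is a simplified single-counter illustration under assumptions: the class, its fields, and method names are invented here, and the real mechanism additionally uses memory barriers and per-thread counters.

```python
import threading

class UseCounterLock:
    """Simplified sketch of the reader/writer use-counter protocol.

    A reader increments the use counter only when no writer is
    waiting; if a writer is waiting, the reader backs off without
    touching the protected data.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._readers = 0
        self._writer_waiting = False

    def try_read_enter(self):
        """Return True if the caller may read; False to back off."""
        with self._lock:
            if self._writer_waiting:
                return False
            self._readers += 1
            return True

    def read_exit(self):
        """Decrement the use counter after the read completes."""
        with self._lock:
            self._readers -= 1

    def set_writer_waiting(self, waiting):
        with self._lock:
            self._writer_waiting = waiting
```

A caller would wrap data access as `if lock.try_read_enter(): ... lock.read_exit()`, skipping the access entirely when a writer is pending.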
20100005471PRIORITIZED RESOURCE SCANNING - A method for prioritized scanning of resources within an Information Technology (IT) infrastructure includes prioritizing resources by likelihood of each resource being relevant to a target problem and scanning resources that have a higher likelihood of being relevant to the target problem before scanning resources that have a lower likelihood of being relevant to the target problem. A system for prioritized scanning of an IT infrastructure includes a resource list, the resource list identifying at least a portion of resources within the IT infrastructure; a plurality of tags, each of the plurality of tags being associated with a resource, the plurality of tags being configured to monitor the resources identified in the resource list and generate an output, the output being related to a likelihood that the resources contain information related to a problem within the IT infrastructure; and a scanning program configured to scan resources with a higher likelihood of containing information related to the problem before scanning resources with a lower likelihood of containing information relating to the problem.01-07-2010
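The scanning order described above reduces to sorting resources by their tag outputs and visiting them in descending order. A minimal sketch, with all names assumed for illustration:

```python
def prioritized_scan(resources, likelihood, scan):
    """Scan resources in descending order of relevance likelihood.

    resources  : list of resource identifiers
    likelihood : dict mapping resource -> tag output (higher means
                 more likely to hold information about the problem)
    scan       : callable invoked per resource; returns a finding
                 or None
    Returns the scan order and the list of (resource, finding) pairs.
    """
    ordered = sorted(resources,
                     key=lambda r: likelihood.get(r, 0.0),
                     reverse=True)
    findings = []
    for r in ordered:
        result = scan(r)
        if result is not None:
            findings.append((r, result))
    return ordered, findings
```

Resources without a tag output default to likelihood 0.0 and are scanned last.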
20090313632GENERATING RESOURCE CONSUMPTION CONTROL LIMITS - A resource consumption control method and system. The method includes deploying, by a computing system, a first portlet/servlet. The computing system receives monitor data associated with a first resource consumed by the first portlet/servlet during the deploying. The monitor data comprises a maximum resource consumption rate value for the portlet/servlet and a mean resource consumption rate value for the portlet/servlet. The computing system generates a resource consumption rate limit value for the first portlet/servlet based on the monitor data. The computing system generates action data comprising an action to be executed if the resource consumption rate limit value is exceeded by a consumption rate value for the portlet/servlet. The computing system transmits the resource consumption rate limit value and the action data to the portlet/servlet. The resource consumption rate limit value and the action data are stored with the portlet/servlet.12-17-2009
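Deriving a rate limit from monitored maximum and mean consumption could look like the sketch below. The headroom-above-maximum rule and its 20% default are assumptions for illustration; the patent only says the limit is generated based on the monitor data.

```python
def consumption_limit(mean_rate, max_rate, headroom=0.2):
    """Derive a resource-consumption rate limit from monitor data.

    Places the limit a configurable fraction above the observed
    maximum (assumed policy), but never below the observed mean,
    so normal operation never trips the enforcement action.
    """
    return max(mean_rate, max_rate * (1.0 + headroom))
```

An enforcement path would then compare each sampled consumption rate against this value and execute the stored action when it is exceeded.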
20090307706Dynamically Setting the Automation Behavior of Resources - Embodiments provide a method of dynamically setting the automation behavior of resources via switching between an active mode and a passive mode. One embodiment is a method that includes placing a first computing resource into a first desired state and an active behavioral mode and placing a second computing resource having a relationship to the first resource into the first desired state when a first request for the first resource that specifies the first desired state is received. The method also includes placing the first computing resource into a standby state and a passive behavioral mode and not placing the second computing resource into the first desired state.12-10-2009
20090307705SECURE MULTI-PURPOSE COMPUTING CLIENT - A method includes, in a computer that runs multiple operating environments using hardware resources, defining and managing an allocation policy of the hardware resources, which eliminates effects from operations performed in one of the operating environments on the operations performed in another of the operating environments. The hardware resources are assigned to the multiple operating environments in accordance with the allocation policy, so as to isolate the multiple operating environments from one another.12-10-2009
20090307704MULTI-DIMENSIONAL THREAD GROUPING FOR MULTIPLE PROCESSORS - A method and an apparatus that determine a total number of threads to concurrently execute executable codes compiled from a single source for target processing units in response to an API (Application Programming Interface) request from an application running in a host processing unit are described. The target processing units include GPUs (Graphics Processing Unit) and CPUs (Central Processing Unit). Thread group sizes for the target processing units are determined to partition the total number of threads according to a multi-dimensional global thread number included in the API request. The executable codes are loaded to be executed in thread groups with the determined thread group sizes concurrently in the target processing units.12-10-2009
20090307703Scheduling Applications For Execution On A Plurality Of Compute Nodes Of A Parallel Computer To Manage temperature of the nodes during execution - Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions.12-10-2009
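One simple way to realize temperature-driven assignment of applications to node partitions is a greedy pick of the currently coolest partition. This is a sketch under assumptions: the policy, the one-unit heating model, and all names are invented here, and the real method also builds the partitions from physical topology.

```python
def assign_to_partitions(apps, partitions):
    """Greedily place each application on the coolest partition.

    apps       : list of application names
    partitions : dict partition name -> current temperature estimate
                 (mutated in place as applications are placed)
    Returns dict app -> partition. Placing an app is assumed to raise
    its partition's temperature estimate by one unit.
    """
    placement = {}
    for app in apps:
        coolest = min(partitions, key=partitions.get)
        placement[app] = coolest
        partitions[coolest] += 1.0   # crude heating model
    return placement
```

Because the temperature map is updated after every placement, a long stream of applications spreads out instead of piling onto one initially cool partition.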
20090307702SYSTEM AND METHOD FOR DISCOVERING AND PROTECTING ALLOCATED RESOURCES IN A SHARED VIRTUALIZED I/O DEVICE - A system includes a virtualized I/O device coupled to one or more processing units. The virtualized I/O device includes a storage for storing a resource discovery table, and programmed I/O (PIO) configuration registers corresponding to hardware resources. A system processor may allocate the plurality of hardware resources to one or more functions, and populate each entry of the resource discovery table for each function. The processing units may execute one or more processes. Given processing units may further execute OS instructions to allocate space for an I/O mapping of a PIO configuration space in a system memory, and to assign a function to a respective process. Processing units may execute a device driver instance associated with a given process to discover allocated resources by requesting access to the resource discovery table. The virtualized I/O device protects the resources by checking access requests against the resource discovery table.12-10-2009
20120192199RESOURCE ALLOCATION DURING WORKLOAD PARTITION RELOCATION - A method of relocating a workload partition (WPAR) from a departure logical partition (LPAR) to an arrival LPAR determines an amount of a resource allocated to the relocating WPAR on the departure LPAR and allocates to the relocating WPAR on the arrival LPAR an amount of the resource substantially equal to the amount of the resource allocated to the relocating WPAR on the departure LPAR.07-26-2012
20120192198Method and System for Memory Aware Runtime to Support Multitenancy in Heterogeneous Clusters - The invention solves the problem of sharing many-core devices (e.g. GPUs) among concurrent applications running on heterogeneous clusters. In particular, the invention provides transparent mapping of applications to many-core devices (that is, the user does not need to be aware of the many-core devices present in the cluster and of their utilization), time-sharing of many-core devices among applications also in the presence of conflicting memory requirements, and dynamic binding/unbinding of applications to/from many-core devices (that is, applications do not need to be statically mapped to the same many-core device for their whole life-time).07-26-2012
20110016472IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - A job generation unit generates, from a source program, a job to be executed by any of a plurality of processing resources. The job generation unit calculates job characteristic information that allows estimation of an index value capable of indicating the amount of heat generated in the processing resources due to execution of the job, and appends the job characteristic information to the job. This makes it possible to estimate a temperature rise in a processing resource to which the job is allocated, by using a method that facilitates implementation in a system in which a scheduler allocates a job to a plurality of processing resources.01-20-2011
20090094608METHOD AND APPARATUS FOR MOVING A SOFTWARE APPLICATION IN A SYSTEM CLUSTER - In one aspect, the invention is directed to a method for shutting down a first instance of an application and starting up a second instance of the application. The first instance of the application has associated therewith at least one first-instance support resource. The second instance of the application has associated therewith at least one second-instance support resource. The method includes: 04-09-2009
20130074090DYNAMIC OPERATING SYSTEM OPTIMIZATION IN PARALLEL COMPUTING - A method for dynamic optimization of thread assignments for application workloads in an simultaneous multi-threading (SMT) computing environment includes monitoring and periodically recording an operational status of different processor cores each supporting a number of threads of the thread pool of the SMT computing environment and also operational characteristics of different workloads of a computing application executing in the SMT computing environment. The method further can include identifying by way of the recorded operational characteristics a particular one of the workloads demonstrating a threshold level of activity. Finally, the method can include matching a recorded operational characteristic of the particular one of the workloads to a recorded status of a processor core best able amongst the different processor cores to host execution in one or more threads of the particular one of the workloads and directing the matched processor core to host execution of the particular one of the workloads.03-21-2013
20130074095HANDLING AND REPORTING OF OBJECT STATE TRANSITIONS ON A MULTIPROCESS ARCHITECTURE - Techniques are described for managing states of an object using a finite-state machine. The states may be used to indicate whether an object has been added, removed, requested or updated. Embodiments of the invention generally include dividing a process into at least two threads where a first thread changes the state of the object while the second thread performs the processing of the data found in the object. While the second thread is processing the data, the first thread may receive additional updates and change the states of the objects to inform the second thread that it should process the additional updates when the second thread becomes idle.03-21-2013
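The added/removed/requested/updated object states above form a small finite-state machine that the state-changing thread advances while the worker thread drains pending work. A sketch of such a transition table; the specific states, events, and allowed transitions listed here are illustrative assumptions.

```python
# Illustrative transition table; the abstract names added, removed,
# requested and updated as the tracked object states.
TRANSITIONS = {
    ("absent", "add"): "added",
    ("added", "request"): "requested",
    ("added", "update"): "updated",
    ("requested", "update"): "updated",
    ("updated", "update"): "updated",
    ("added", "remove"): "removed",
    ("updated", "remove"): "removed",
}

def apply_event(state, event):
    """Advance the object FSM; reject transitions the table omits."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} from {state!r}")
```

In the two-thread design, the first thread would call `apply_event` to record state changes while the second thread processes objects and re-checks their state once it goes idle.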
20130074094EXECUTING MULTIPLE THREADS IN A PROCESSOR - Provided are a method, system, and program for executing multiple threads in a processor. Credits are set for a plurality of threads executed by the processor. The processor alternates among executing the threads having available credit. The processor decrements the credit for one of the threads in response to executing the thread and initiates an operation to reassign credits to the threads in response to depleting all the thread credits.03-21-2013
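The credit scheme above (alternate among threads with remaining credit, decrement per execution, reassign when all credits are depleted) can be sketched as a simple scheduling loop. Names and the round-robin ordering are assumptions; a real processor would reassign credits rather than stop.

```python
def run_with_credits(threads, credits, step):
    """Alternate among threads that still have available credit.

    threads : list of thread ids, visited round-robin
    credits : dict thread id -> remaining integer credit
    step    : callable(thread_id) executing one slice of that thread
    Returns the execution trace. When every credit is depleted the
    caller would reassign credits; this sketch simply returns.
    """
    trace = []
    while any(credits[t] > 0 for t in threads):
        for t in threads:
            if credits[t] > 0:
                step(t)
                trace.append(t)
                credits[t] -= 1   # one credit consumed per slice
    return trace
```

With credits {a: 2, b: 1}, thread a runs twice and b once, interleaved, before the reassignment point is reached.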
20130074093Optimized Memory Configuration Deployed Prior to Execution - A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed prior to runtime. A compiler or other pre-execution system may detect a memory allocation boundary and decorate the code. During execution, the decorated code may be used to look up memory allocation and management settings from a database or to deploy optimized settings that may be embedded in the decorations.03-21-2013
20130074092Optimized Memory Configuration Deployed on Executing Code - A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed at runtime. An execution environment may capture a memory allocation boundary, look up the boundary in a configuration file, and apply the settings when the settings are available. When the settings are not available, a default set of settings may be used. The execution environment may deploy the optimized settings without modifying the executing code.03-21-2013
20130074091TECHNIQUES FOR ENSURING RESOURCES ACHIEVE PERFORMANCE METRICS IN A MULTI-TENANT STORAGE CONTROLLER - Techniques for ensuring performance metrics are met by resources in a multi-tenant storage controller are presented. Each resource of the multi-tenant storage controller is tracked on a per-tenant basis. Usage limits are enforced on per-resource and per-tenant bases for the multi-tenant storage controller.03-21-2013
20130061236SYSTEM AND METHOD FOR REDUCING POWER REQUIREMENTS OF MICROPROCESSORS THROUGH DYNAMIC ALLOCATION OF DATAPATH RESOURCES - There is provided a system and methods for segmenting datapath resources such as reorder buffers, physical registers, instruction queues and load-store queues, etc. in a microprocessor so that their size may be dynamically expanded and contracted. This is accomplished by allocating and deallocating individual resource units to each resource based on sampled estimates of the instantaneous resource needs of the program running on the microprocessor. By keeping unused datapath resources to a minimum, power and energy savings are achieved by shutting off resource units that are not needed for sustaining the performance requirements of the running program. Leakage energy and switching energy and power are reduced using the described methods.03-07-2013
20130061235METHOD AND SYSTEM FOR MANAGING PARALLEL RESOURCE REQUESTS IN A PORTABLE COMPUTING DEVICE - A method and system for managing parallel resource requests in a portable computing device (“PCD”) are described. The system and method includes generating a first request from a first client, the first request issued in the context of a first execution thread. The first request may be forwarded to a resource. The resource may acknowledge the first request and initiate asynchronous processing. The resource may process the first request while allowing the first client to continue processing in the first execution thread. The resource may signal completion of the processing of the first request and may receive a second request. The second request causes completion of the processing of the first request. The completion of the processing of the first request may include updating a local representation of the resource to a new state and invoking any registered callbacks. The resource may become available to service the second request, and may process the second request.03-07-2013
20090271797INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND MEDIUM STORING INFORMATION PROCESSING PROGRAM STORED THEREON - An information processing apparatus including at least one first processing unit that manages a resource and at least one second processing unit that accesses the resource, wherein the second processing unit stores a table in which an identifier identifying the resource is associated with the resource, and when accessing the resource, refers to the table and requests the first processing unit to allocate the identifier associated with the resource to the resource.10-29-2009
20120227054SYSTEM AND METHOD OF INTERFACING A WORKLOAD MANAGER AND SCHEDULER WITH AN IDENTITY MANAGER - A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occurs according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers.09-06-2012
20120227053DISTRIBUTED RESOURCE MANAGEMENT IN A PORTABLE COMPUTING DEVICE - In a portable computing device having a node-based resource architecture, a first or distributed node controlled by a first processor but corresponding to a second or native node controlled by a second processor is used to indirectly access a resource of the second node. In a resource graph defining the architecture each node represents an encapsulation of functionality of one or more resources, each edge represents a client request, and adjacent nodes represent resource dependencies. Resources defined by a first graph are controlled by the first processor but not the second processor, while resources defined by a second graph are controlled by the second processor but not the first processor. A client request on the first node may be received from a client under control of the first processor. Then, a client request may be issued on the second node in response to the client request on the first node.09-06-2012
20120227052Task launching on hardware resource for client - A system includes a client management component, a monitor component, and a hardware resource component, each of which is implemented in hardware. The client management component chooses a selected client from one or more clients for which a given task is to be fulfilled by a selected hardware resource of one or more hardware resources. The monitor component receives the given task and an identifier of the selected client from the client management component and monitors completion of the given task for the selected client by the selected hardware resource. The hardware resource component receives the given task from the monitor component, chooses the selected hardware resource that is to fulfill the given task, and launches the given task on the selected hardware resource.09-06-2012
20120227051Composite Contention Aware Task Scheduling - A mechanism is provided for composite contention aware task scheduling. The mechanism performs task scheduling with shared resources in computer systems. A task is a group of instructions. A compute task is a group of compute instructions. A memory task, also referred to as a communication task, may be a group of load/store operations, for example. The mechanism performs composite contention-aware scheduling that considers the interaction among compute tasks, communication tasks, and application threads that include compute and communication tasks. The mechanism performs a composite of memory task throttling and application thread throttling.09-06-2012
20110023047CORE SELECTION FOR APPLICATIONS RUNNING ON MULTIPROCESSOR SYSTEMS BASED ON CORE AND APPLICATION CHARACTERISTICS - Techniques for scheduling an application program running on a multiprocessor computer system are disclosed. Example methods include but are not limited to analyzing first, second, third, and fourth core components for any within-die process variation, determining an operating state of the first, second, third and fourth core components, selecting optimum core components for each component type with the aid of bloom filters, statically determining which core component types are used by the application program, and scheduling the application program to run on a core having an optimum core component for a core component type used by the application program.01-27-2011
20110023045Targeted communication to resource consumers - A method of communicating to a consumer is disclosed. The consumer's usage of a resource is compared to a relevant cohort's usage of the resource. Based at least in part on a result of the comparison, a message is selected to be provided to the consumer.01-27-2011
20130067485Method And Apparatus For Providing Isolated Virtual Space - Various embodiments provide a method and apparatus for creating an application isolated virtual space without the need to run multiple OSs. Application isolated virtual spaces are created by an Operating System (OS) utilizing a resource manager. The resource manager isolates applications from each other by re-writing the network stack and the I/O subsystem of the conventional OS kernel to have multiple isolated network stack/virtual I/O views of the physical resources managed by the OS. Isolated network stacks and virtual I/O views identify the resources allocated to an application's isolated virtual space and are mapped to applications via an isolating identifier.03-14-2013
20090241123METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING WORK IN A STREAM-ORIENTED COMPUTER SYSTEM WITH CONFIGURABLE NETWORKS - A method, apparatus, and computer program product for scheduling stream-based applications in a distributed computer system with configurable networks are provided. The method includes choosing, at a highest temporal level, jobs that will run, an optimal template alternative for the jobs that will run, network topology, and candidate processing nodes for processing elements of the optimal template alternative for each running job to maximize importance of work performed by the system. The method further includes making, at a medium temporal level, fractional allocations and re-allocations of the candidate processing elements to the processing nodes in the system to react to changing importance of the work. The method also includes revising, at a lowest temporal level, the fractional allocations and re-allocations on a continual basis to react to burstiness of the work, and to differences between projected and real progress of the work.09-24-2009
20090235267CONSOLIDATED DISPLAY OF RESOURCE PERFORMANCE TRENDS - A consolidated representation of performance trends for a plurality of resources in a data processing system is generated. Recent performance measurement data for the plurality of resources is retrieved along with historical performance measurement data for the plurality of resources. For each resource, an associated performance trend is determined based on an analysis of the recent performance measurement data and the historical performance measurement data. A single consolidated graphical representation of the plurality of resources is generated based on the associated performance trends. Each resource in the plurality of resources may have a separate representation within the single consolidated graphical representation positioned within the single consolidated graphical representation based on a recent performance trend and an associated historical performance trend. The single consolidated graphical representation may be output for use by a user to identify areas of the data processing system requiring the user's attention.09-17-2009
20090235265METHOD AND SYSTEM FOR COST AVOIDANCE IN VIRTUALIZED COMPUTING ENVIRONMENTS - A method includes monitoring a utilization amount of resources within logical partitions (LPARs) of a plurality of servers and identifying a resource-strained server of the plurality of servers, wherein the resource-strained server includes a plurality of LPARs. Additionally, the method includes determining a migration of one or more LPARs of the plurality of LPARs of the resource-strained server and migrating the one or more LPARs of the resource-strained server to another server of the plurality of servers based on the determining to avoid an activation of capacity upgrade on demand (CUoD).09-17-2009
20090249350Resource Allocation Through Negotiation - Improved resource allocation methods which use negotiation are described. In an embodiment, a request for access to a resource by a service user is received and an available access slot is allocated, where the slot may be a time or a position in a queue. This allocated slot may or may not meet the service user's requirements and if this allocated time does not meet the service user's requirements, an access time which does meet the requirements but is already allocated to another service user is identified. A message is sent to the user device associated with the other service user requesting a change in allocated access time. If the change is accepted the allocated times are swapped between the two service users.10-01-2009
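The swap negotiation in this abstract reduces to a small allocation loop; the function below is a minimal sketch in which `accepts_swap` stands in for the message exchange with the other service user's device (all names are illustrative):

```python
def allocate_with_negotiation(slots, requests, accepts_swap):
    """Sketch of negotiation-based slot allocation: `slots` is a list of
    available slot ids, `requests` maps user -> preferred slot (None for
    "any"), and `accepts_swap(user)` models the holder's reply to a
    change-of-allocation message."""
    allocation = {}                      # user -> allocated slot
    for user, wanted in requests.items():
        free = [s for s in slots if s not in allocation.values()]
        if wanted is None or wanted in free:
            allocation[user] = wanted if wanted in free else free[0]
            continue
        # Preferred slot already held by another user: send a swap request.
        holder = next(u for u, s in allocation.items() if s == wanted)
        fallback = free[0]
        if accepts_swap(holder):
            allocation[holder], allocation[user] = fallback, wanted
        else:
            allocation[user] = fallback  # negotiation declined
    return allocation


alloc = allocate_with_negotiation(
    slots=[9, 10, 11],
    requests={"alice": 9, "bob": 9, "carol": None},
    accepts_swap=lambda user: user == "alice",  # alice agrees to move
)
print(alloc)   # bob obtains slot 9 through the accepted swap
```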
20090025005RESOURCE ASSIGNMENT SYSTEM - A method and system for assigning resources such as housing associated with an educational institution via a communication network is disclosed. A user of a client computer sends a registration request defining registration data to a server facilitating a resource assignment service. The resource assignment service then determines the eligibility of users to use the service based on retrieved registration data, and assigns a randomly generated personal identification number (PIN) to eligible users. The resource assignment service can then assign a timeslot for eligible users to request a desired resource as a function of their assigned PINs. Users may then use the client computer during their assigned timeslots to submit requests to the resource assignment service for desired resource assignments.01-22-2009
20090019447Adaptive Throttling System for Data Processing Systems - An adaptive throttling system for minimizing the impact of non-production work on production work in a computer system is provided. The adaptive throttling system throttles production work and non-production work to optimize production. The adaptive throttling system allows system administrators to specify a quantified limit on the performance impact of non-production or utility work on production work. The throttling rate of the utility is then automatically determined by a supervisory agent, so that the utilities' impact is kept within the specified limit. The adaptive throttling system adapts dynamically to changes in workloads so as to ensure that valuable system resources are well utilized and utility work is not delayed unnecessarily.01-15-2009
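The supervisory agent in this abstract is essentially a feedback controller; the step function below is a sketch of one plausible policy (the proportional-impact model and integer-percent units are assumptions, not the patent's):

```python
def adapt_throttle(rate, impact, limit, step=10):
    """One supervisory-agent step: lower the utility's throttling rate when
    its measured impact on production work exceeds the administrator's
    quantified limit, and raise it when there is headroom so utility work
    is not delayed unnecessarily."""
    if impact > limit:
        return max(0, rate - step)
    return min(100, rate + step)


# Simulated workload: the utility's impact is proportional to its rate.
rate = 100                  # percent of full utility speed
limit = 5                   # admin allows a 5% slowdown of production work
for _ in range(20):
    impact = rate // 10     # measured % slowdown attributed to utility work
    rate = adapt_throttle(rate, impact, limit)
print(rate)                 # settles in the band where impact ~ limit
```

Because the agent re-measures impact each step, the same loop adapts when the workload changes, which is the "adapts dynamically" behavior the abstract claims.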
20090013324COMMUNICATION SYSTEM, INFORMATION PROCESSING SYSTEM, CONNECTION SERVER, PROCESSING SERVER, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND PROGRAM - A connection server (01-08-2009
20090013323SYNCHRONISATION - The invention provides a processor comprising an execution unit arranged to execute multiple program threads, each thread comprising a sequence of instructions, and a plurality of synchronisers for synchronising threads. Each synchroniser is operable, in response to execution by the execution unit of one or more synchroniser association instructions, to associate with a group of at least two threads. Each synchroniser is also operable, when thus associated, to synchronise the threads of the group by pausing execution of a thread in the group pending a synchronisation point in another thread of that group.01-08-2009
20090138888Generating Governing Metrics For Resource Provisioning - In a method of generating governing metrics, a high-level goal to be met in a provisioned system is identified. In addition, a low-level governing policy designed to facilitate achievement of the high-level goal is selected and properties relating to the selected low-level governing policy are identified. The identified properties are formulated to define governing metrics relevant to the selected low-level governing policy and the formulated governing metrics are outputted. The formulated governing metrics are configured to be used in at least one of evaluating and controlling resource provisioning in the provisioned system.05-28-2009
20090007132MANAGING PROCESSING RESOURCES IN A DISTRIBUTED COMPUTING ENVIRONMENT - Multiple timing availability chains can be created for individual processing resources in a common pool of resources. Each chain can include a plurality of time intervals, each having a start time and an end time. Timing availability chains for individual processing resources in the pool of resources can be merged together based on a timing reference to create a pool timing availability chain based on the start times and end times for the intervals. Job plan execution can be simulated based on the pool timing availability chain. The pool chain can be utilized to simulate job execution and based on such simulation a job scheduler can improve the scheduling of jobs on a pool of resources. Other embodiments are also disclosed.01-01-2009
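Merging per-resource availability chains by their interval start and end times is a classic sweep-line computation; the sketch below (data shapes assumed, not the patent's) produces a pool chain of (start, end, free-resource-count) segments that a scheduler could use to simulate job plan execution:

```python
def pool_availability(chains):
    """Merge per-resource timing availability chains into a pool chain.
    `chains` maps resource -> list of (start, end) free intervals; the
    result is a list of (start, end, free_count) segments."""
    events = []                                # (time, +1/-1) sweep events
    for intervals in chains.values():
        for start, end in intervals:
            events.append((start, 1))
            events.append((end, -1))
    events.sort()                              # ends sort before same-time starts
    segments, count, prev = [], 0, None
    for time, delta in events:
        if prev is not None and time > prev and count > 0:
            segments.append((prev, time, count))
        count += delta
        prev = time
    return segments


chain = pool_availability({
    "cpu0": [(0, 4), (6, 8)],
    "cpu1": [(2, 7)],
})
print(chain)   # -> [(0, 2, 1), (2, 4, 2), (4, 6, 1), (6, 7, 2), (7, 8, 1)]
```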
20090007131Automating the Life Cycle of a Distributed Computing Application - A system for automating the life cycle of a software application is provided. The software application utilizes computing resources distributed over a network. A representative system includes creating logic operable to create a task list which describes how at least one stage in the application life cycle is to be performed, and processing logic responsive to the creating logic, operable to process the task list to perform at least one stage in the application life cycle. The processing logic is integrated with a development environment, and the development environment is used to develop the software application.01-01-2009
20090007127SYSTEM AND METHOD FOR OPTIMIZING DATA ANALYSIS - There is provided an adaptive semi-synchronous parallel processing system and method, which may be adapted to various data analysis applications such as flow cytometry systems. By identifying the relationship and memory dependencies between tasks that are necessary to complete an analysis, it is possible to significantly reduce the analysis processing time by selectively executing tasks after careful assignment of tasks to one or more processor queues, where the queue assignment is based on an optimal execution strategy. Further strategies are disclosed to address optimal processing once a task undergoes computation by a computational element in a multiprocessor system. Also disclosed is a technique to perform fluorescence compensation to correct spectral overlap between different detectors in a flow cytometry system due to emission characteristics of various fluorescent dyes.01-01-2009
20090007125Resource Allocation Based on Anticipated Resource Underutilization in a Logically Partitioned Multi-Processor Environment - A method, apparatus and program product for allocating resources in a logically partitioned multiprocessor environment. Resource usage is monitored in a first logical partition in the logically partitioned multiprocessor environment to predict a future underutilization of a resource in the first logical partition. An application executing in a second logical partition in the logically partitioned multiprocessor environment is configured for execution in the second logical partition with an assumption made that at least a portion of the underutilized resource is allocated to the second logical partition during at least a portion of the predicted future underutilization of the resource.01-01-2009
20080295109METHOD AND APPARATUS FOR REUSING COMPONENTS OF A COMPONENT-BASED SOFTWARE SYSTEM11-27-2008
20080295108Minimizing variations of waiting times of requests for services handled by a processor11-27-2008
20080295106METHOD AND SYSTEM FOR IMPROVING THE AVAILABILITY OF A CONSTANT THROUGHPUT SYSTEM DURING A FULL STACK UPDATE11-27-2008
20100262973Method For Operating a Multiprocessor Computer System - The invention relates to a method for operating a multiprocessor computer system which has at least two microprocessors (10-14-2010
20100242047DISTRIBUTED PROCESSING SYSTEM, CONTROL UNIT, AND CLIENT - A distributed processing system includes a client that makes a request for execution of a service requested by a user, a processing element, and a control unit connected with the client and the processing element. The control unit has control functions for controlling the distributed processing system, and the client has at least one control function that is the same as one of the control functions of the control unit. With respect to at least one control function that both the control unit and the client have, at least one of the control function of the control unit and the control function of the client is selected to execute a control.09-23-2010
20110083134APPARATUS AND METHOD FOR MANAGING VIRTUAL PROCESSING UNIT - A method and apparatus for managing a virtual processor including resources for operating applications through a real central processing unit, which includes determining a utilization of a plurality of real CPUs to which a plurality of virtual processors are divided to be allocated; and repartitioning the virtual processors and reallocating the repartitioned virtual processor to at least part of the real CPUs, when the utilization of any one of the real CPUs is at a threshold or less.04-07-2011
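A utilization-threshold repartitioning step might look like the sketch below; this is one plausible rebalancing policy (triggering on an over-utilized CPU and moving a virtual processor to the least-loaded one), not the patent's exact algorithm, and all names are illustrative:

```python
def repartition(cpu_load, vps, threshold):
    """`cpu_load` maps real CPU -> utilization (0..1); `vps` maps virtual
    processor -> the real CPU it is allocated to.  When a real CPU crosses
    the threshold, reallocate one of its virtual processors to the
    least-loaded real CPU."""
    hot = [c for c, u in cpu_load.items() if u >= threshold]
    for cpu in hot:
        victim = next((vp for vp, c in vps.items() if c == cpu), None)
        if victim is None:
            continue                             # nothing to migrate
        target = min(cpu_load, key=cpu_load.get) # least-loaded real CPU
        if target != cpu:
            vps[victim] = target                 # reallocate the partition
    return vps


load = {"cpu0": 0.95, "cpu1": 0.20}
vps = {"vp0": "cpu0", "vp1": "cpu0", "vp2": "cpu1"}
print(repartition(load, vps, threshold=0.90))   # vp0 migrates to cpu1
```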
20120222037DYNAMIC REPROVISIONING OF RESOURCES TO SOFTWARE OFFERINGS - The disclosed embodiments provide a system that facilitates the maintenance and execution of a software offering. During operation, the system obtains a policy change associated with a service definition of the software offering. Next, the system updates one or more requirements associated with the software offering based on the policy change. Finally, the system uses the updated requirements to dynamically reprovision one or more resources for use by the software offering during execution of the software offering.08-30-2012
20100011364Data Storage in Distributed Systems - Systems, methods, and apparatus, including computer program products for receiving a content transfer request that includes a first set of provisioning attributes that characterizes one or more operational objectives of a first item of content; and processing the content transfer request to allocate resources of a storage environment to store the first item of content.01-14-2010
20130019248METHOD AND APPARATUS FOR MONITORING AND SHARING PERFORMANCE RESOURCES OF A PROCESSOR (inventor: Yu, Lei; Austin, TX, US) - A method and apparatus are described for managing a plurality of performance monitoring resources residing in a plurality of cores of a processor. A plurality of resource queues are maintained. Each resource queue corresponds to a particular one of the performance monitoring resources, and detects conflicts in use of the particular performance monitoring resource by multiple users. The detected conflicts associated with the particular performance monitoring resource are then resolved. A dynamic resource scheduler is used to resolve the detected conflicts, and is driven by an advanced programmable interrupt controller (APIC) timer residing in a particular core of the processor to provide each item, in an items list of a resource queue associated with the particular performance monitoring resource, an equal opportunity to use the particular performance monitoring resource for a predetermined period of time.01-17-2013
20090007126SWAP CAP RESOURCE CONTROL FOR USE IN VIRTUALIZATION - A method of implementing virtualization involves an improved approach to virtual memory management. An operating system includes a kernel, a resource control framework, a virtual memory subsystem, and a virtualization subsystem. The virtualization subsystem is capable of creating separate environments that logically isolate applications from each other. The virtual memory subsystem utilizes swap space to manage a backing store for anonymous memory. The separate environments share physical resources including swap space. When a separate environment is configured, properties are defined. Configuring a separate environment may include specifying a swap cap that specifies a maximum amount of swap space usable by the separate environment. The resource control framework includes a swap cap resource control. The swap cap resource control is enforced by the kernel such that during operation of the separate environment, the kernel enforces the swap cap specified when the separate environment was configured.01-01-2009
20110283290ALLOCATING STORAGE SERVICES - A system and method are provided for allocating storage resources. An exemplary method comprises providing a storage service catalog that lists storage services available for use. The exemplary method also comprises allowing a user to select a subset of the storage services from among the storage services via a self-service software tool.11-17-2011
20120131591METHOD AND APPARATUS FOR CLEARING CLOUD COMPUTE DEMAND - Provided are systems and methods for simplifying cloud compute markets. A compute marketplace can be configured to determine, automatically, attributes and/or constraints associated with a job without requiring the consumer to provide them. The compute marketplace provides a clearing house for excess compute resources which can be offered privately or publicly. The compute environment can be further configured to optimize job completion across multiple providers with different execution formats, and can also factor operating expense of the compute environment into the optimization. The compute marketplace can also be configured to monitor jobs and/or individual job partitions while their execution is in progress. The compute marketplace can be configured to dynamically redistribute jobs/job partitions across providers when, for example, cycle pricing changes during execution, providers fail to meet defined constraints, excess capacity becomes available, compute capacity becomes unavailable, among other options.05-24-2012
20110283289SYSTEM AND METHOD FOR MANAGING RESOURCES IN A PARTITIONED COMPUTING SYSTEM BASED ON RESOURCE USAGE VOLATILITY - A system and method for managing resources in a partitioned computing system using determined risk of resource saturation is disclosed. In one example embodiment, the partitioned computing system includes one or more partitions. A volatility of resource usage for each partition is computed based on computed resource usage gains/losses associated with each partition. A current resource usage of each partition is then determined. Further, a risk of resource saturation is determined by comparing the computed volatility of resource usage with the determined current resource usage of each partition. The resources in the partitioned computing system are then managed using the determined risk of resource saturation associated with each partition.11-17-2011
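The comparison of usage volatility against current usage can be made concrete with a small risk predicate; the formula below (standard deviation of per-sample gains/losses, and the `k` safety multiplier) is an illustrative choice, not the patent's disclosed computation:

```python
from statistics import pstdev

def saturation_risk(usage_history, capacity, k=2.0):
    """Volatility-based risk of resource saturation for one partition:
    volatility is the standard deviation of the usage gains/losses between
    consecutive samples; the partition is flagged when its current usage
    plus k * volatility could reach the partition's capacity."""
    gains = [b - a for a, b in zip(usage_history, usage_history[1:])]
    volatility = pstdev(gains) if len(gains) > 1 else 0.0
    current = usage_history[-1]                 # determined current usage
    return current + k * volatility >= capacity


steady = [50, 51, 50, 52, 51, 50]    # low volatility, far from capacity
bursty = [50, 70, 45, 80, 55, 85]    # high volatility near capacity
print(saturation_risk(steady, capacity=100))   # False
print(saturation_risk(bursty, capacity=100))   # True
```

The point of the volatility term is visible in the example: both partitions are below capacity right now, but only the bursty one is judged at risk of saturating.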
20110283293Method and Apparatus for Dynamic Allocation of Processing Resources - A method and apparatus for dynamic allocation of processing resources and tasks, including multimedia tasks. Tasks are queued, available processing resources are identified, and the available processing resources are allocated among the tasks. The available processing resources are provided with functional programs corresponding to the tasks. The tasks are performed using available processing resources to produce resulting data, and the resulting data is passed to an input/output device.11-17-2011
20110283291MOBILE DEVICE AND APPLICATION SWITCHING METHOD - An object is to switch executions of applications appropriately from one to another when a plurality of applications use a limited resource. A mobile device (11-17-2011
20080313638Network Resource Management Device - The present invention introduces a plurality of resource management devices (M12-18-2008
20080229319Global Resource Allocation Control - Improved workload management is provided by introducing a global resource allocation control mechanism in a service layer, which may be located above or within the host operating system. The mechanism arbitrates how, when, and by which application resources of all types are being consumed.09-18-2008
20120291042MINIMIZING RESOURCE LATENCY BETWEEN PROCESSOR APPLICATION STATES IN A PORTABLE COMPUTING DEVICE BY SCHEDULING RESOURCE SET TRANSITIONS - Resource state sets corresponding to the application states are maintained in memory. A request may be issued for a processor operating in a first application state corresponding to the first resource state set to transition to a second application state corresponding to the second resource state set. A start time to begin transitioning resources to states indicated in the second resource state set is scheduled based upon an estimated amount of processing time to complete transitioning. A process is begun by which the states of resources are switched from states indicated by the first resource state set to states indicated by the second resource state set. Scheduling the process to begin at a time that allows the process to be completed just in time for the resource states to be immediately available to the processor upon entering the second application state helps minimize adverse effects of resource latency.11-15-2012
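The just-in-time scheduling arithmetic in this abstract is simple to sketch: start the transition early enough that the slowest resource finishes exactly when the processor enters the new application state. The max-latency estimate and all names below are assumptions for illustration:

```python
def schedule_transition(wake_time, resource_latencies):
    """Pick a start time for switching a resource set so that every
    resource reaches its second-set state just as the processor enters
    the second application state.  `resource_latencies` maps resource ->
    estimated transition time."""
    estimated = max(resource_latencies.values())   # critical-path estimate
    start_time = wake_time - estimated
    # Per-resource start offsets so all transitions finish at wake_time.
    offsets = {r: wake_time - lat for r, lat in resource_latencies.items()}
    return start_time, offsets


start, offsets = schedule_transition(
    wake_time=1000,                      # time at which the CPU becomes active
    resource_latencies={"clock": 40, "power_rail": 120, "bus": 75},
)
print(start, offsets["power_rail"])      # -> 880 880
```

Starting at 880 rather than at 1000 is exactly the latency-hiding the abstract claims: the states are already available when the processor wakes, instead of the processor stalling while they settle.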
20110302589METHOD FOR THE DETERMINISTIC EXECUTION AND SYNCHRONIZATION OF AN INFORMATION PROCESSING SYSTEM COMPRISING A PLURALITY OF PROCESSING CORES EXECUTING SYSTEM TASKS - An information processing system includes two processing cores. The execution of an application by the system includes the execution of application tasks and the execution of system tasks, and the system includes a micro-kernel executing the system tasks, which are directly linked to hardware resources. The processing system includes a computation part of the micro-kernel executing system tasks relating to the switching of the tasks on a first core, and a control part of the micro-kernel executing, on a second core, system tasks relating to the control of the task allocation order on the first core.12-08-2011
20110302591SYSTEM AND METHOD FOR DATA SYNCHRONIZATION FOR A COMPUTER ARCHITECTURE FOR BROADBAND NETWORKS - A computer architecture and programming model for high speed processing over broadband networks are provided. The architecture employs a consistent modular structure, a common computing module and uniform software cells. The common computing module includes a control processor, a plurality of processing units, a plurality of local memories from which the processing units process programs, a direct memory access controller and a shared main memory. A synchronized system and method for the coordinated reading and writing of data to and from the shared main memory by the processing units also are provided. A processing system for processing computer tasks is also provided. A first processor is of a first processor type and a number of second processors are of a second processor type. One of the second processors manages process scheduling of computing tasks by providing tasks to at least one of the first and second processors.12-08-2011
20110302590PROCESS ALLOCATION SYSTEM, PROCESS ALLOCATION METHOD, PROCESS ALLOCATION PROGRAM - Communication performance of inter-process communication is enhanced for the entire program processing. A process allocation system is provided with a processor which executes a process including a process for performing mutual inter-process communication and holding a logical process placement system, and a process allocation module for allocating each process to the processor, wherein the process allocation module is provided with an inter-processor communication capacity acquisition module for acquiring the communication performance of inter-processor communication which the processor performs with another processor, a module for specifying the dimensional direction in which the communication traffic of inter-process communication is high in the logical process placement system, and a module for determining a processor having a higher communication performance of inter-processor communication as the allocation destination of a process which is set in the dimensional direction of higher inter-process communication traffic.12-08-2011
20120110591SCHEDULING POLICY FOR EFFICIENT PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT - A method for verifying software includes accessing a job queue, accessing a resource queue, and assigning a job from the job queue to a resource from the resource queue if an addition is made to the job queue or to the resource queue. The job queue includes an indication of one or more jobs to be executed by a worker node, each job indicating a portion of a code to be verified. The resource queue includes an indication of one or more worker nodes available to verify a portion of software. The resource is selected by determining the best match for the characteristics of the selected job among the resources in the resource queue.05-03-2012
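The pairing step, matching each queued verification job to the best-fitting worker in the resource queue, can be sketched as below; the "smallest sufficient memory" scoring is one illustrative notion of best match, not the patent's:

```python
def assign(job_queue, resource_queue):
    """Pair queued jobs with queued worker nodes: pop the next job and
    take the worker whose characteristics best match it (here: the
    smallest worker with enough memory)."""
    assignments = []
    while job_queue and resource_queue:
        job = job_queue.pop(0)
        capable = [r for r in resource_queue if r["mem"] >= job["mem"]]
        if not capable:
            continue                 # no capable worker; job skipped this round
        best = min(capable, key=lambda r: r["mem"])
        resource_queue.remove(best)
        assignments.append((job["name"], best["name"]))
    return assignments


jobs = [{"name": "verify_core", "mem": 8}, {"name": "verify_ui", "mem": 2}]
workers = [{"name": "w1", "mem": 4}, {"name": "w2", "mem": 16}]
pairs = assign(jobs, workers)
print(pairs)   # -> [('verify_core', 'w2'), ('verify_ui', 'w1')]
```

Taking the smallest sufficient worker keeps large workers free for large jobs; the 8 GB job gets w2 because w1 cannot hold it, and the 2 GB job then fits on w1.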
20110289506MANAGEMENT OF COMPUTING RESOURCES FOR APPLICATIONS - The subject matter of this disclosure can be implemented in, among other things, a method. In these examples, the method includes receiving a resource request message to obtain access to a computing resource, and storing the resource request message in a data repository that stores a collection of resource request messages received from a group of applications executing on the computing device. The method may also include responsive to determining that the resource request message received from the first application has a highest priority of the collection of resource request messages, determining whether a second application currently has access to the computing resource, issuing a resource lost message to the second application to indicate that the second application has lost access to the computing resource, and issuing a resource request granted message to the first application, such that the first application obtains access to the computing resource.11-24-2011
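The message flow in this abstract, a repository of pending requests, a "resource lost" message to the preempted holder, and a "granted" message to the winner, can be sketched with a small broker class (names and the tuple-based priority comparison are illustrative assumptions):

```python
class ResourceBroker:
    """Stores resource request messages in a repository and grants the
    computing resource to the highest-priority requester, notifying any
    application that loses access."""

    def __init__(self):
        self.requests = []          # (priority, app) repository
        self.holder = None          # app currently holding the resource
        self.log = []               # messages issued to applications

    def request(self, app, priority):
        self.requests.append((priority, app))
        top_priority, top_app = max(self.requests)
        if top_app == self.holder:
            return                  # current holder still wins
        if self.holder is not None:
            self.log.append(("resource_lost", self.holder))
        self.holder = top_app
        self.log.append(("granted", top_app))


b = ResourceBroker()
b.request("navigation", priority=2)
b.request("music", priority=1)      # lower priority: navigation keeps access
b.request("phone_call", priority=3) # preempts navigation
print(b.holder)                     # -> phone_call
```

The "resource lost" message is the interesting part: the preempted application is told explicitly rather than discovering the loss by failing, which is what lets cooperating applications release hardware cleanly.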
20110289507RUNSPACE METHOD, SYSTEM AND APPARATUS - The present invention, known as runspace, relates to the field of computing system management, data processing and data communications, and specifically to synergistic methods and systems which provide resource-efficient computation, especially for decomposable many-component tasks executable on multiple processing elements, by using a metric space representation of code and data locality to direct allocation and migration of code and data, by performing analysis to mark code areas that provide opportunities for runtime improvement, and by providing a low-power, local, secure memory management system suitable for distributed invocation of compact sections of code accessing local memory. Runspace provides mechanisms supporting hierarchical allocation, optimization, monitoring and control, and supporting resilient, energy efficient large-scale computing.11-24-2011
20080276245Optimization with Unknown Objective Function - Nonlinear optimization is applied to resource allocation, as for example, buffer pool optimization in computer database software where only the marginal utility is known. The method for allocating resources comprises the steps of starting from an initial allocation, calculating the marginal utility of the allocation, calculating the constraint functions of the allocation, and applying this information to obtain a next allocation and repeating these steps until a stopping criteria is satisfied, in which case a locally optimal allocation is returned.11-06-2008
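The iterate-until-stopping-criteria loop in this abstract can be sketched as projected gradient ascent, which needs only the marginal utility (never the objective itself); the step size, projection, and diminishing-returns model below are illustrative assumptions, not the patent's method:

```python
def allocate(marginal_utility, total, n, steps=200, lr=0.5):
    """Allocate `total` units among `n` pools knowing only the marginal
    utility U'(x_i): repeatedly move resources toward pools with higher
    marginal utility, then project back onto the constraint sum(x) == total."""
    x = [total / n] * n                        # initial allocation
    for _ in range(steps):
        grad = [marginal_utility(i, x[i]) for i in range(n)]
        x = [xi + lr * g for xi, g in zip(x, grad)]
        excess = (sum(x) - total) / n          # equality-constraint projection
        x = [max(xi - excess, 0.0) for xi in x]
    return x


# Two buffer pools with diminishing returns 1/(x+1); pool 0 is weighted 4x.
weights = [4.0, 1.0]
mu = lambda i, xi: weights[i] / (xi + 1.0)
x = allocate(mu, total=10.0, n=2)
print([round(v, 1) for v in x])   # converges near the optimum [8.6, 1.4]
```

At the fixed point the marginal utilities are equal (4/(x0+1) = 1/(x1+1) with x0+x1 = 10 gives x0 = 8.6, x1 = 1.4), which is the standard optimality condition for this kind of allocation.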
20110296428REGISTER ALLOCATION TO THREADS - A method, system, and computer usable program product for improved register allocation in a simultaneous multithreaded processor. A determination is made that a thread of an application in the data processing environment needs more physical registers than are available to allocate to the thread. The thread is configured to utilize a logical register that is mapped to a memory register. The thread is executed utilizing the physical registers and the memory registers.12-01-2011
20110296429SYSTEM AND METHOD FOR MANAGEMENT OF LICENSE ENTITLEMENTS IN A VIRTUALIZED ENVIRONMENT - A management system and method for a virtualized environment includes a computer entity having a usage limitation based on an entitlement. A resource manager, using a processor and programmed on and executed from a memory storage device, is configured to manage resources in a virtualized environment. An entitlement-usage module is coupled to the resource manager and is configured to track entitlement-related constraints in accordance with changes in the virtualized environment to permit the resource manager to make allocation decisions which include the entitlement-related constraints to ensure that the usage limitation is met for the computer entity.12-01-2011
20110296427Resource Allocation During Workload Partition Relocation - A method of relocating a workload partition (WPAR) from a departure logical partition (LPAR) to an arrival LPAR determines an amount of a resource allocated to the relocating WPAR on the departure LPAR and allocates to the relocating WPAR on the arrival LPAR an amount of the resource substantially equal to the amount of the resource allocated to the relocating WPAR on the departure LPAR.12-01-2011
20090288091Method and System Integrating Task Assignment and Resources Scheduling - A method and a system for integrating and solving simultaneously both task assignment and resources scheduling decision making problems, thereby providing an overall feasible and optimal solution. The method and the system may be used for integrated airline scheduling, in which case the task assignment is fleet assignment, and the resources scheduling are aircraft routing with maintenance (maintenance routing) and crew scheduling (or crew pairing only). In a preferred embodiment, Benders decomposition is employed with Pareto-optimal cuts, where the Benders subproblem solution is sped up without influencing Pareto-optimal cut generation. The cost savings achieved in comparison with traditional methods are estimated, so that the user can terminate the solution process when these cost savings are satisfactory. Important properties of the solution are stored, enabling the user to efficiently re-solve the problem even in cases where it differs from the initial one.11-19-2009
20090300634Method and System for Register Management - A system and method of allocating registers in a register array to multiple workloads is disclosed. The method identifies an incoming workload as belonging to a first process group or a second process group, and allocates one or more target registers from the register array to the incoming workload. The register array is logically divided into a first ring and a second ring such that the first ring and the second ring have at least one register in common. The first process group is allocated registers in the first ring and the second process group is allocated registers in the second ring. Target registers in the first ring are allocated in order of sequentially decreasing register addresses and target registers in the second ring are allocated in order of sequentially increasing register addresses. Also disclosed are methods and systems for allocation of registers in an array of general purpose registers, and for allocation of registers to processes, including shader processes in graphics processing units.12-03-2009
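The two-ended ring scheme above can be modeled in a few lines: one group allocates from the top of the array downward, the other from the bottom upward, and the overlap in the middle is shared. A toy sketch of that behavior, with the class name and API invented for illustration:

```python
class RingRegisterFile:
    """Register array 0..n-1. Group "A" takes sequentially decreasing
    addresses, group "B" sequentially increasing ones; registers in the
    middle are common to both rings."""
    def __init__(self, n):
        self.free = [True] * n
    def alloc(self, group):
        order = (reversed(range(len(self.free))) if group == "A"
                 else range(len(self.free)))
        for r in order:            # first free register from this ring's end
            if self.free[r]:
                self.free[r] = False
                return r
        return None                # both rings exhausted
    def release(self, r):
        self.free[r] = True
```

With four registers, group A receives 3 then 2 while group B receives 0 then 1; the groups only contend once the rings meet in the middle.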
20110191781RESOURCES MANAGEMENT IN DISTRIBUTED COMPUTING ENVIRONMENT - A method, system, and computer program product for determining resource allocation in a distributed computing environment. An embodiment may include identifying resources in a distributed computing environment, computing provisioning parameters, computing configuration parameters, and quantifying service parameters in response to a set of service level agreements (SLAs). The embodiment may further include iteratively computing the completion time and cost required for completion of the assigned task. Embodiments may further include computing an optimal resources configuration and at least one of an optimal completion time and an optimal cost corresponding to the optimal resources configuration. Embodiments may further include dynamically modifying the optimal resources configuration in response to at least one change in at least one of the provisioning, configuration, and service parameters.08-04-2011
20080271036METHOD AND APPARATUS FOR ASSIGNING FRACTIONAL PROCESSING NODES TO WORK IN A STREAM-ORIENTED COMPUTER SYSTEM - An apparatus and method for making fractional assignments of processing elements to processing nodes for stream-based applications in a distributed computer system include determining an amount of processing power to give to each processing element. Based on a list of acceptable processing nodes, a determination is made of the fractions by which processing nodes will work on each processing element. To update the allocations of processing power and the fractions, the process is repeated.10-30-2008
20110191782APPARATUS AND METHOD FOR PROCESSING DATA - A data processing apparatus and method for allocating data to processors, allowing the processors to process the data efficiently. The data processing apparatus may predict the result of processing the data, based on the workload for the data and the number of processors, and may determine the number of processors to be allocated to the data using the predicted processing result.08-04-2011
20100064293APPARATUS AND METHOD FOR MANAGING USER SCHEDULE - The present invention estimates a user's schedule by collecting and analyzing, from a schedule management program and on the basis of the corresponding user information, information on user-related work to be performed when the user enters a region in which computing resources are available. It then automatically creates a virtual machine in a computing environment able to perform the estimated scheduled job, and executes in that virtual machine a service application program that performs the job. According to the present invention, a virtual machine is dynamically created to execute work identified, by analyzing the current schedule, as work the user must perform, and the application program for performing that work is automatically executed in the created virtual machine, increasing user convenience.03-11-2010
20100064292STORAGE DEVICE AND CONTROL METHOD THEREFOR - To conserve resources, putting a virtual storage device into a suspend mode turns physical resources OFF on a virtual storage device basis. Moreover, control information and volume data of the virtual storage device are stored in an external volume, for example, and the resources that have been used by the virtual storage device are deallocated. At resumption of operation, the virtual storage device is restored, using any resources not in use, based on the stored control information. When a change is made to a WWN on the host side, the storage device receives a WWN change notification from a management server and updates its WWN table accordingly, thereby keeping the device accessible from the host.03-11-2010
20130219404Computer System and Working Method Thereof - A computer system and operating method thereof are provided. The computer system comprises a central processing unit (08-22-2013
20100169891METHOD AND APPARATUS FOR LOCATING LOAD-BALANCED FACILITIES - A method and apparatus for providing a facility location plan for a network with a V-shaped facility cost are disclosed. For example, the method receives an event from a queue, wherein the event comprises an open event or a tight event. The method connects a plurality of adjacent clients to a facility, if the event comprises the open event, and adds a new client-facility edge to a graph comprising a plurality of client-facility edges, if the event comprises the tight event.07-01-2010
20110219381MULTIPROCESSOR SYSTEM WITH MULTIPLE CONCURRENT MODES OF EXECUTION - A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers. The pool is divided into domains, with each domain being assigned to a mode of speculation. Modes of speculation include transactional memory (TM), thread-level speculation (TLS), and rollback. Allocation of the IDs is carried out with respect to a central state table and using hardware pointers. The IDs are used for writing different versions of speculative results in different ways of a set in a cache memory.09-08-2011
20090187915SCHEDULING THREADS ON PROCESSORS - A device, system, and method are directed towards managing threads and components in a computer system with one or more processing units. A processor group has an associated hierarchical structure containing nodes that may correspond to processing units, hardware components, or abstractions. The processor group hierarchy may be used to assign one or more threads to one or more processing units, by traversing the hierarchy based on various factors. These factors may include load balancing, affinity, sharing of components, loads, capacities, or other characteristics of components or threads. A processor group hierarchy may be used in conjunction with a designated processor set.07-23-2009
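Traversing a processor-group hierarchy to place a thread can be sketched as a walk that descends into the least-loaded child at each level. This toy version uses load balancing as the only factor; the node structure and field names are assumptions made for illustration, not taken from the patent:

```python
class Group:
    """A node in the processor-group hierarchy: a processing unit,
    hardware component, or abstraction, with a local load figure."""
    def __init__(self, name, load=0, children=()):
        self.name, self.load, self.children = name, load, list(children)
    def total_load(self):
        """Load of this node plus everything beneath it."""
        return self.load + sum(c.total_load() for c in self.children)

def place_thread(root):
    """Descend from the root, always into the least-loaded subtree,
    until a leaf (an actual processing unit) is reached."""
    node = root
    while node.children:
        node = min(node.children, key=Group.total_load)
    return node
```

Other factors from the abstract (affinity, component sharing, capacities) would enter as additional terms in the key function used at each descent step.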
20100077400TASK-OPTIMIZING CALENDAR SYSTEM - A calendar system schedules tasks and meetings or other appointments for a user. The system retrieves a work capacity, which is information regarding the working hours for the user. The system further retrieves a plurality of enhanced tasks for the user. The system then optimizes a schedule for the user based on the work capacity and the enhanced tasks.03-25-2010
20090100436PARTITIONING SYSTEM INCLUDING A GENERIC PARTITIONING MANAGER FOR PARTITIONING RESOURCES - The application discloses a generic partitioning manager for partitioning resources across one or more owner nodes. In illustrated embodiments described, the partitioning manager interfaces with the one or more owner nodes through an owner library. A lookup node or application interfaces with the partitioning manager through the lookup library to lookup address or locations of the partitioned resources. In illustrated embodiments, resources are partitioned via the partitioning manager in response to lease request messages from an owner library. In illustrated embodiments, the lease grant message includes a complete list of the leases for the owner node.04-16-2009
20090178050Control of Access to Services and/or Resources of a Data Processing System - In order to control access to resources of a data processing system, a priority code is determined for an access request to at least one resource. A comparison code for granting access to the at least one requested resource is determined with respect to an alternative use of the resource. For the totality of resource requests to the data processing system, an extreme value is determined for a sum over products of the corresponding priority code and the number of resource accesses that can be granted in each case, taking into account the maximum capability of the requested resource. For each resource request, it is checked whether the priority code and the comparison code satisfy a predetermined mutual relation. Access is granted depending on the extreme value determined and on the result of the check.07-09-2009
20110219382METHOD, SYSTEM, AND APPARATUS FOR TASK ALLOCATION OF MULTI-CORE PROCESSOR - A system for task allocation of a multi-core processor is provided. The system includes a task allocator and a plurality of sub-processing systems. Each of the sub-processing systems comprises a state register, a processor core, and a buffer. The state register is configured to indicate the state of its sub-processing system and transmit state information to the task allocator; the state information comprises a first state bit configured to indicate whether the sub-processing system is in the idle state, and a second state bit configured to indicate a specific state of the sub-processing system. The task allocator is configured to allocate tasks to the sub-processing systems according to a priority determined from the state information sent by their state registers.09-08-2011
20090150895SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR SUPPORTING TRANSFORMATION TO A SHARED ON-DEMAND INFRASTRUCTURE - Systems, methods and computer program products for supporting transformation to a shared on-demand infrastructure. Exemplary embodiments include a method comprising: identifying a CPU resource type (or, in general, another sharable resource) to analyze; calculating the number of servers in scope, Ns; collecting current resource usage data for systems in the scope, where the current resource data is provided by systems and performance management tools; identifying a period P; counting the number of peaks Np in the period, excluding spikes adjacent to each peak; calculating the average CPU usage Um, generally provided by the usage collection tools; defining an amplitude Am; defining a value for %Ks in the suggested range of 0.2-0.3; and applying transformation formulas to obtain the minimum size of a resource pool, the size of a target environment, and the resulting resource saving.06-11-2009
20090150893HARDWARE UTILIZATION-AWARE THREAD MANAGEMENT IN MULTITHREADED COMPUTER SYSTEMS - A device, system, and method are directed towards managing threads in a computer system with one or more processing units, each processing unit having a corresponding hardware resource. Threads are characterized based on their use or requirements for access to the hardware resource. The threads are distributed among the processing units in a configuration that leaves at least one processing unit with threads that have an aggregate zero or low usage of the hardware resource. Power may be reduced or turned off to the instances of the hardware resource that have zero or low usage. Distribution may be based on one or more factors, such as user power management specifications, power usage, and performance.06-11-2009
20090178051METHOD FOR IMPLEMENTING DYNAMIC LIFETIME RELIABILITY EXTENSION FOR MICROPROCESSOR ARCHITECTURES - A method for implementing dynamic lifetime reliability extension for microprocessor architectures having a plurality of primary resources and a secondary resource pool of one or more secondary resources includes configuring a resource operational mode controller to selectively switch of the primary and secondary resources between an operational mode and a non-operational mode, wherein the non-operational mode corresponds to a lifetime extension process; configuring a resource mapper associated with the secondary resource pool and in communication with the resource operational mode controller to map a secondary resource placed into the operational mode to a corresponding primary resource placed into the non-operational mode; and configuring a transaction decoder to receive incoming transaction requests and direct the requests to one of a primary resource in the operational mode and a secondary resource in the operational mode, the secondary resource mapped to an associated primary resource placed in the non-operational mode.07-09-2009
20100251253PRIORITY-BASED MANAGEMENT OF SYSTEM LOAD LEVEL - Systems, methods, and computer program products are described herein for managing computer system resources. A plurality of modules (e.g., virtual machines or other applications) may be allocated across multiple computer system resources (e.g., processors, servers, etc.). Each module is assigned a priority level. Furthermore, a designated utilization level is assigned to each resource of the computer system. Each resource supports one or more of the modules, and prioritizes operation of the supported modules according to the corresponding assigned priority levels. Furthermore, each resource maintains operation of the supported modules at the designated utilization level.09-30-2010
20120110590EFFICIENT PARTIAL COMPUTATION FOR THE PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT - An electronic device includes a memory, a processor coupled to the memory, and one or more policies stored in the memory. The policies include a resource availability policy determining whether the processor should continue evaluating the software, and a job availability policy determining whether new jobs will be created for unexplored branches. The processor is configured to receive a job to be executed, evaluate the software, select a branch to explore and store an initialization sequence of one or more unexplored branches if a branch in the software is encountered, evaluate the job availability policy, decide whether to create a job for each of the unexplored branches based on the job availability policy, evaluate the resource availability policy, and decide whether to continue evaluating the software at the branch selected to explore based on the resource availability policy. The job indicates a portion of software to be evaluated.05-03-2012
20120110589TECHNIQUE FOR EFFICIENT PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT THROUGH INTELLIGENT DYNAMIC LOAD BALANCING - A method for verifying software includes monitoring a resource queue and a job queue, determining whether the resource queue and the job queue contain entries, and if both the resource queue and the job queue contain entries, then applying a scheduling policy to select a job, selecting a worker node as a best match for the characteristics of the job among the resource queue entries, assigning the job to the worker node, assigning parameters to the worker node for a job creation policy for creating new jobs in the job queue while executing the job, and assigning parameters to the worker node for a termination policy for halting execution of the job. The resource queue indicates worker nodes available to verify a portion of code. The job queue indicates one or more jobs to be executed by a worker node. A job includes a portion of code to be verified.05-03-2012
20120110588UNIFIED RESOURCE MANAGER PROVIDING A SINGLE POINT OF CONTROL - An integrated hybrid system is provided. The hybrid system includes compute components of different types and architectures that are integrated and managed by a single point of control to provide federation and the presentation of the compute components as a single logical computing platform.05-03-2012
20110197196DYNAMIC JOB RELOCATION IN A HIGH PERFORMANCE COMPUTING SYSTEM - A method and apparatus are described for dynamic relocation of a job executing on multiple nodes of a high performance computing (HPC) system. The job is dynamically relocated when the messaging network is in a quiescent state. The messaging network is quiesced by signaling the job to suspend execution at a global collective operation of the job, where the messaging of the job is known to be in a quiescent state. When all the nodes have reached the global collective operation and paused, the job is relocated and execution is resumed at the new location.08-11-2011
20090150896POWER CONTROL METHOD FOR VIRTUAL MACHINE AND VIRTUAL COMPUTER SYSTEM - Provided is a method of controlling a virtual computer system in which a physical computer includes a plurality of physical CPUs that are switchable between a sleep state and a normal state, and a virtualization control unit divides the physical computer into a plurality of logical partitions, runs a guest OS in each of the logical partitions, and controls allocation of the physical computer's resources to the logical partitions. The method causes the virtualization control unit to: receive an operation instruction for operating the logical partitions; and, if the operation instruction is for deleting a virtual CPU from one of the logical partitions, delete this virtual CPU from a table for managing virtual CPU-physical CPU allocation and, if the deletion leaves no virtual CPUs allocated to the physical CPU that had been allocated the deleted virtual CPU, put that physical CPU into the sleep state.06-11-2009
20100122261APPLICATION LEVEL PLACEMENT SCHEDULER IN A MULTIPROCESSOR COMPUTING ENVIRONMENT - A multiprocessor computer system program scheduler comprises an application-level placement scheduler module that is operable to receive requests for resources in a multiprocessor computer system; operable to manage processing node resource availability data; operable to reserve processing node resources for specific applications based on the received requests for resources and the processing node resource availability data; and operable to reclaim processing node resources reserved for specific applications upon application termination.05-13-2010
20120240128Memory Access Performance Diagnosis - A solution is disclosed for obtaining memory access performance metrics in an electronic system comprising a data processing unit (DPU) and a synchronous memory device external to the DPU and coupled to the DPU through a memory bus. Mixed software and hardware dedicated resources are used, wherein at least the hardware part of the dedicated resources is comprised in the memory device.09-20-2012
20120240127MATCHING AN AUTONOMIC MANAGER WITH A MANAGEABLE RESOURCE - A method to match an autonomic manager with a manageable resource may include using a management style profile to match the autonomic manager with the manageable resource. The method may also include validating that the autonomic manager can manage the manageable resource using a defined management style of the autonomic manager.09-20-2012
20100122262Method and Apparatus for Dynamic Allocation of Processing Resources - A method and apparatus for dynamic allocation of processing resources and tasks, including multimedia tasks. Tasks are queued, available processing resources are identified, and the available processing resources are allocated among the tasks. The available processing resources are provided with functional programs corresponding to the tasks. The tasks are performed using the available processing resources to produce resulting data, and the resulting data is passed to an input/output device.05-13-2010
20120036514METHOD AND APPARATUS FOR A COMPILER AND RELATED COMPONENTS FOR STREAM-BASED COMPUTATIONS FOR A GENERAL-PURPOSE, MULTIPLE-CORE SYSTEM - A method and system of compiling and linking source stream programs for efficient use of multi-node devices. The system includes a compiler, a linker, a loader and a runtime component. The process converts a source code stream program to compiled object code that is used with a programmable node based computing device having a plurality of processing nodes coupled to each other. The programming modules include stream statements for input values and output values in the form of sources and destinations for at least one of the plurality of processing nodes, and stream statements that determine the streaming flow of values for the at least one of the plurality of processing nodes. The compiler converts the source code stream based program to object modules, object module instances and executables. The linker matches the object module instances to at least one of the multiple cores. The loader loads the tasks required by the object modules in the nodes and configures the nodes matched with the object module instances. The runtime component runs the converted program.02-09-2012
20120036513METHOD TO ASSIGN TRAFFIC PRIORITY OR BANDWIDTH FOR APPLICATION AT THE END USERS-DEVICE - A resource reservation method in a network is provided, in which the allocation of network bandwidth to each application connected to the network is determined by the end user.02-09-2012
20080271034RESOURCE ALLOCATION SYSTEM, RESOURCE ALLOCATION METHOD, AND RESOURCE ALLOCATION PROGRAM - Disclosed is a resource allocation system including a provisional allocation execution unit that executes provisional allocation for policies other than a policy corresponding to an accepted resource request, a shared resource extraction unit that extracts a resource sharable between the policy and other policies, and a determination index calculation unit that calculates an index that depends on resource sharability, and determines an allocation destination so that a storage area is allocated on a storage device with a lower resource sharability in preference to other storage devices.10-30-2008
20090094609DYNAMICALLY PROVIDING A LOCALIZED USER INTERFACE LANGUAGE RESOURCE - Technologies are described herein for dynamically providing a localized user interface (“UI”) resource. A localization framework includes a resource manager, resource sets, and resource readers. The resource manager exposes an application programming interface (“API”) to application programs for requesting a localized UI resource from the resource manager. When the resource manager receives a request for a localized UI resource on the API, the resource manager queries the resource sets for the requested resource. If the first resource set is unable to provide the requested localized UI resource, another resource set may be queried. Multiple resource readers within each resource set may also be configured to provide flexibility in how UI resources are loaded and processed.04-09-2009
20120240126Partitioned Ticket Locks With Semi-Local Spinning - A partitioned ticket lock may control access to a shared resource, and may include a single ticket value field and multiple grant value fields. Each grant value may be the sole occupant of a respective cache line, an event count or sequencer instance, or a sub-lock. The number of grant values may be configurable and/or adaptable during runtime. To acquire the lock, a thread may obtain a value from the ticket value field using a fetch-and-increment type operation, and generate an identifier of a particular grant value field by applying a mathematical or logical function to the obtained ticket value. The thread may be granted the lock when the value of that grant value field matches the obtained ticket value. Releasing the lock may include computing a new ticket value, generating an identifier of another grant value field, and storing the new ticket value in the other grant value field.09-20-2012
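The acquire/release protocol described above is concrete enough to sketch. In this minimal Python model, next() on itertools.count stands in for the fetch-and-increment (it is effectively atomic under the CPython GIL, an assumption of this sketch), and ticket % k is the mathematical function that maps a ticket to its grant slot:

```python
import itertools

class PartitionedTicketLock:
    """Ticket lock with k grant-value fields. A waiter spins only on
    grant[ticket % k], so different waiters spin on different slots
    (semi-local spinning)."""
    def __init__(self, k=4):
        self.k = k
        self._ticket = itertools.count()   # fetch-and-increment stand-in
        self.grant = [0] * k               # slot 0 initially grants ticket 0
    def acquire(self):
        t = next(self._ticket)             # obtain my ticket value
        while self.grant[t % self.k] != t: # spin on my slot only
            pass
        return t                           # caller keeps its ticket
    def release(self, t):
        nxt = t + 1                        # compute the new ticket value
        self.grant[nxt % self.k] = nxt     # store it in the successor's slot
```

Grant values only ever increase, so a stale value left in a reused slot can never be mistaken for a later ticket.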
20120240125System Resource Management In An Electronic Device - A system and method of managing resources of an electronic device are described. A solver of the electronic device may receive one or more resource requirements from one or more resource requesters executing on the electronic device. The solver determines values for resource characteristics based on the received resource requirements and on dependency information defining hierarchical dependencies between resource characteristic values associated with resources of the electronic device. The determined values of the resource characteristics are then provided to the associated resources of the electronic device.09-20-2012
20120240124Performing An Operation Using Multiple Services - Some embodiments provide a method for distributing an operation for processing by a set of background services. The method automatically determines a number of background services for performing an operation. The method partitions the operation into several sub-operations. The method distributes the several sub-operations across the determined number of background services.09-20-2012
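Partitioning an operation into sub-operations for a determined number of background services reduces, in the simplest case, to an even split of the work items. A minimal sketch; the even-split rule is an assumption for illustration, since the abstract does not say how sub-operations are sized:

```python
def partition_operation(items, n_services):
    """Split the operation's work items into n_services sub-operations
    whose sizes differ by at most one."""
    q, r = divmod(len(items), n_services)
    subops, start = [], 0
    for i in range(n_services):
        size = q + (1 if i < r else 0)   # first r sub-operations get one extra
        subops.append(items[start:start + size])
        start += size
    return subops
```

For example, splitting ten items across three services yields chunks of sizes 4, 3, and 3, each ready to hand to one background service.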
20100083268Method And System For Managing Access To A Resource By A Process Processing A Media Stream - Methods, systems and computer program products are described for managing access to a resource. In one aspect, a method includes detecting, during processing of a first media stream by a first process for presentation, an association between a concurrency policy and a shared resource shareable with a second process, and then listening for a message providing access to the shared resource based on an evaluation of the concurrency policy. In response to receiving a message providing access to the shared resource, the method includes accessing the shared resource.04-01-2010
20090055833System and method for performance monitoring - A system for monitoring a computer software system includes a first user-actuated tuning knob for allocating space in memory for performance monitoring; a second user-actuated tuning knob for specifying a time-out value for in-flight units of work; and a transaction monitor, responsive to the first and second tuning knobs, for accumulating, in synonym chain cells in the allocated space, timing statistics for a plurality of in-flight units of work.02-26-2009
20100088707Mechanism for Application Management During Server Power Changes - The present disclosure provides, in some embodiments, a method for managing applications and resources. According to some embodiments, a method performed by a power orchestrator may comprise (a) receiving information handling system resource status, (b) receiving one or more application registrations from one or more applications to be executed on the information handling system, (c) formulating a resource priority schedule using the received resource status and the one or more application registrations, (d) formulating a resource allocation schedule in accordance with the resource priority schedule, (e) communicating the resource allocation schedule to the one or more applications, and (f) allocating one or more resources to the one or more applications in accordance with the resource allocation schedule. A method may comprise, according to some embodiments, determining whether one or more of the applications will submit a registration update and/or determining whether available resources match demand and adjusting resource status to match demand.04-08-2010
20100083273METHOD AND MEMORY MANAGER FOR MANAGING MEMORY - A memory managing method and memory manager for a multi processing environment are provided. The memory manager adjusts the number of processors assigned to a consumer process and/or an assignment unit size of data to be consumed by the consumer process based on a condition of a shared queue which is shared by a producer process producing data and the consumer process consuming the data.04-01-2010
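A tuner of the kind described, which watches the shared queue and adjusts the consumer count and assignment unit size, can be sketched with two watermarks. The 75%/25% thresholds and the doubling/halving rule are illustrative assumptions, not taken from the abstract:

```python
def retune(queue_len, capacity, consumers, unit_size,
           max_consumers=8, min_unit=1):
    """If the shared queue is nearly full, the producer is outrunning the
    consumers: add a consumer and enlarge the assignment unit. If it is
    nearly empty, do the opposite. Otherwise leave the settings alone."""
    fill = queue_len / capacity
    if fill > 0.75:
        consumers = min(max_consumers, consumers + 1)
        unit_size *= 2
    elif fill < 0.25:
        consumers = max(1, consumers - 1)
        unit_size = max(min_unit, unit_size // 2)
    return consumers, unit_size
```

The memory manager would call this each time it samples the queue, applying the returned settings to the consumer process.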
20100083271Resource Property Aggregation In A Multi-Provider System - The present invention provides for resource property aggregation. A set of new instances is received from one or more providers. For each new instance in the set of new instances, a determination is made as to whether the new instance represents the same resource as at least one other instance. Responsive to determining that the new instance represents the same resource as another instance, a set of properties associated with the new instance and with the at least one other instance is identified. Each property from the new instance is compared to an associated property in the at least one other instance using a set of precedence rules. At least one property value is identified from either the new instance or the at least one other instance. An aggregate instance is then generated that represents the resource using the identified property values.04-01-2010
20100083270RESOURCE CLASS BINDING FOR INDUSTRIAL AUTOMATION - An industrial control system is provided. The system includes a processing component to bind to a subset of resources from a set of potential industrial control resources. An attribute component defines a resource priority for the set of potential industrial control resources. A resource class component implements at least one instance of the potential industrial control resources, where the instance automatically selects the subset of resources in view of the resource priority.04-01-2010
20100083269ALGORITHM FOR FAST LIST ALLOCATION AND FREE - A computer implemented method, a data processing system, and a computer usable recordable-type medium having computer usable program code for serializing list insertion and removal. An atomic-operation-free atomic list primitive call is received from a kernel service for the insertion or removal of a list element from a linked list. The atomic-operation-free atomic list primitive is a restartable routine selected from the list consisting of cpuget_from_list, cpuput_onto_list, cpuget_all_from_list, and cpuput_chain_onto_list. A processor begins execution of the atomic-operation-free atomic list primitive. If an interrupt is received during its execution, the interrupt handler recognizes the address of the executing program at the time of the interrupt and overwrites that address in the machine state save area, so that when the interrupted program is resumed, the entire sequence is run again from the beginning. If no interrupt is received during execution, the processor finishes execution of the atomic-operation-free atomic list primitive.04-01-2010
20090249351Round-Robin Apparatus and Instruction Dispatch Scheduler Employing Same For Use In Multithreading Microprocessor - An apparatus for selecting one of N requestors of a shared resource in a round-robin fashion is disclosed. One or more of the N requestors may be disabled from being selected in a selection cycle. The apparatus includes a first input that receives a first value specifying which of the N requestors was last selected. A second input receives a second value specifying which of the N requestors is enabled to be selected. A barrel incrementer, coupled to receive the first and second inputs, 1-bit left-rotatively increments the second value by the first value to generate a sum. Combinational logic, coupled to the barrel incrementer, generates a third value specifying which of the N requestors is selected next.10-01-2009
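The selection logic in the abstract above can be modeled in software. The sketch below is a behavioral Python model only — the patent describes a hardware barrel incrementer with combinational logic, not a loop — and the function and parameter names are illustrative, not taken from the patent.

```python
def round_robin_next(last: int, enabled: int, n: int) -> int:
    """Behavioral model of round-robin selection with a disable mask.

    last    -- index of the requestor selected in the previous cycle
    enabled -- n-bit mask; bit i set means requestor i may be selected
    n       -- total number of requestors
    Returns the index of the first enabled requestor at or after
    (last + 1) mod n, wrapping around, or -1 if none is enabled.
    """
    if enabled == 0:
        return -1
    for offset in range(1, n + 1):        # scan starting just after `last`
        candidate = (last + offset) % n
        if enabled & (1 << candidate):
            return candidate
    return -1
```

Each selection cycle, the previously chosen requestor becomes the lowest-priority candidate, which is the fairness property the barrel-incrementer construction provides in constant time in hardware.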
20090178046Methods and Apparatus for Resource Allocation in Partial Fault Tolerant Applications - Techniques are disclosed for allocation of resources in a distributed computing system. For example, a method for allocating a set of one or more components of an application to a set of one or more resource groups includes the following steps performed by a computer system. The set of one or more resource groups is ordered based on respective failure measures and resource capacities associated with the one or more resource groups. An importance value is assigned to each of the one or more components, wherein the importance value is associated with an effect of the component on an output of the application. The one or more components are assigned to the one or more resource groups based on the importance value of each component and the respective failure measures and resource capacities associated with the one or more resource groups, wherein components with higher importance values are assigned to resource groups with lower failure measures and higher resource capacities. The application may be a partial fault tolerant (PFT) application that comprises a set of one or more PFT application components. The set of one or more resource groups may comprise a heterogeneous set of resource groups (or clusters).07-09-2009
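The assignment policy described above — higher-importance components to groups with lower failure measures and higher capacities — reduces to a sort-and-match. The Python sketch below is illustrative only; the data shapes and the round-robin fallback for surplus components are assumptions, not details from the patent.

```python
def assign_components(components, groups):
    """Assign application components to resource groups so that
    higher-importance components land on more reliable, larger groups.

    components -- list of (name, importance) pairs
    groups     -- list of (name, failure_measure, capacity) triples
    Returns a dict mapping component name -> group name.
    """
    # Order groups: lowest failure measure first, then highest capacity.
    ordered_groups = sorted(groups, key=lambda g: (g[1], -g[2]))
    # Order components by descending importance.
    ordered_comps = sorted(components, key=lambda c: -c[1])
    assignment = {}
    for i, (comp, _) in enumerate(ordered_comps):
        # Wrap around if there are more components than groups (assumption).
        assignment[comp] = ordered_groups[i % len(ordered_groups)][0]
    return assignment
```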
20090165011RESOURCE MANAGEMENT METHOD, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND PROGRAM - In an information processing system, a configuration information management apparatus stores an identifier of a resource of a management target apparatus and a resource address of the management target resource, in association with each other. The management target apparatus stores destination information and the identifier of the management target resource, in association with each other, the destination information used when the configuration information management apparatus receives event notification from the management target resource. The management target apparatus transmits the identifier of the resource and a current (the latest) resource address when transmitting the event notification of the management target resource to the configuration information managing apparatus. The configuration information managing apparatus receives the event notification, the identifier of the resource, and the current resource address and changes the stored resource address which is associated with the acquired identifier of the resource into the acquired current resource address.06-25-2009
20100100888Resource allocation - A technique for executing a segmented virtual machine (VM) is disclosed. A plurality of core VM's is implemented in a plurality of core spaces. Each core VM is associated with one of a plurality of shell VM's. Resources of the core spaces are allocated among the core VM's.04-22-2010
20100100885TRANSACTION PROCESSING FOR SIDE-EFFECTING ACTIONS IN TRANSACTIONAL MEMORY - A processing system includes a transactional memory, first and second resource managers, and a transaction manager for a concurrent program having a thread including an atomic transaction having a side-effecting action. The first resource manager is configured to enlist in the atomic transaction and manage a resource related to the side-effecting action. The second resource manager is configured to enlist in the atomic transaction and manage the transactional memory. The transaction manager is coupled to the first and second resource managers and is configured to receive a vote from each of the first and second resource managers as to whether to commit the transaction. The side-effecting action is postponed until after the transaction commits or applied along with a compensating action to the side-effecting action.04-22-2010
20100100884LOAD BALANCING USING DISTRIBUTED PRINTING DEVICES - A system and method of distributing workflow in a document processing or other production environment determines a utilization percentage for each of a plurality of printing devices or other resources located in the production environment. For a first printing device, if the utilization percentage associated with the first printing device is below a threshold value, a request may be sent from the first printing device to a workflow distributor to obtain one or more unassigned jobs. If the request for the one or more unassigned jobs sent from the first printing device is received by the workflow distributor, the one or more unassigned jobs may be received at the first printing device.04-22-2010
20110202928RESOURCE MANAGEMENT METHOD AND EMBEDDED DEVICE - Provided is a resource management method in a system which individually limits a resource amount used by a software module.08-18-2011
20090138883METHOD AND SYSTEM OF MANAGING RESOURCES FOR ON-DEMAND COMPUTING - A method and system of managing resources for on-demand computing is provided. The system can include one or more pools having resources, and a provisioning manager in communication with the one or more pools. The provisioning manager can receive a request for a resource from the requester and can obtain values for one or more categories associated with the resources. The values can be obtained for at least a portion of the resources. The one or more categories can be based on quantifiable properties associated with the resources. The provisioning manager can determine a priority score for each of the at least a portion of the resources. The provisioning manager can determine a resource from the at least a portion of the resources to be distributed to the requester, where the determination can be based at least in part on the priority score for the resource.05-28-2009
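A scoring scheme of the kind this abstract describes — priority scores derived from quantifiable category values — might look like the following Python sketch. The weighted-sum scoring rule and all names are illustrative assumptions; the patent does not specify how the score is computed.

```python
def priority_score(resource, weights):
    """Compute a priority score for one resource from its quantifiable
    category values (e.g. cpu, memory) and per-category weights."""
    return sum(weights.get(cat, 0.0) * val for cat, val in resource.items())

def pick_resource(resources, weights):
    """Return the name of the resource with the highest priority score.

    resources -- dict: resource name -> dict of category values
    weights   -- dict: category name -> weight
    """
    return max(resources,
               key=lambda name: priority_score(resources[name], weights))
```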
20090282416VITAL PRODUCT DATA COLLECTION DURING PRE-STANDBY AND SYSTEM INITIAL PROGRAM LOAD - A system for selectively recollecting vital product data during an initial program load at data processing system power on. In response to receiving an input to power on a data processing system, a resource location code array table is accessed within a set of selected tables for the data processing system based on machine type. The selected set of tables is located in firmware within a service processor. An entry for a resource in the resource location code array table is read to determine whether the entry includes a no recollect tag. Then, in response to determining that the entry for the resource in the resource location code array table does include a no recollect tag, vital product data for the resource is not recollected during the initial program load.11-12-2009
20090288094Resource Management on a Computer System Utilizing Hardware and Environmental Factors - A method for resource management on a computer system utilizing hardware and environmental information. A caller interacts with an application program interface to handle information requests with a persistent data storage device, combining hardware resource information, environmental data, and other system information, including historical, present, and predicted values. Application execution decisions may then be made regarding hardware for the calling entity. The method may be implemented as a computer process.11-19-2009
20090288093MECHANISM TO BUILD DYNAMIC LOCATIONS TO REDUCE BRITTLENESS IN A TEAM ENVIRONMENT - Mechanisms to build dynamic locations to reduce brittleness in a team environment are provided. A project includes resources, each resource is assigned a key. Each key is mapped to a current location for its corresponding resource. The keys and locations are maintained in an index. Locations for the resources can change as desired throughout the lifecycle of the project and as changes occur the index is updated. When references are made within the project to the resources, the references are translated to the keys, if necessary. The keys are then used for accessing the index and dynamically acquiring the current locations for the resources at the time the references are made.11-19-2009
20090089788SYSTEM AND METHOD FOR HANDLING RESOURCE CONTENTION - In one aspect, the invention is directed to a method by which a user of a functional resource in a software environment can determine whether any other users are waiting to acquire control of the functional resource. The functional resource has associated therewith a placeholder resource that is a placeholder for users waiting to acquire control of the functional resource. The method includes inquiring by the user of the functional resource whether the placeholder resource is available for exclusive control by the user of the functional resource. If the placeholder resource is available for exclusive control, then no other users are waiting for control of the functional resource and so the current user can keep control of it. If, however, the placeholder resource is not available, that indicates to the user of the functional resource that at least one other user is waiting for control of the functional resource and so the user of the functional resource may release control of the functional resource.04-02-2009
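The placeholder-resource probe can be modeled with two locks: a waiter holds the placeholder while queuing for the functional resource, so the current holder's non-blocking probe of the placeholder reveals whether anyone is waiting. A minimal Python sketch, with class and method names invented for illustration:

```python
import threading

class ContendedResource:
    """A functional resource paired with a 'placeholder' lock.

    A waiter first acquires the placeholder, then blocks on the
    functional lock, then releases the placeholder. The current holder
    can poll the placeholder: if it cannot be acquired, at least one
    other user is waiting.
    """
    def __init__(self):
        self.functional = threading.Lock()
        self.placeholder = threading.Lock()

    def acquire(self):
        with self.placeholder:            # held while waiting in line
            self.functional.acquire()

    def release(self):
        self.functional.release()

    def someone_waiting(self) -> bool:
        """Non-blocking probe of the placeholder by the current holder."""
        if self.placeholder.acquire(blocking=False):
            self.placeholder.release()
            return False
        return True
```

The holder can then voluntarily call `release()` when `someone_waiting()` returns True, which is the cooperative hand-off the abstract describes.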
20090089790METHOD AND SYSTEM FOR COORDINATING HYPERVISOR SCHEDULING - A method for executing an application on a plurality of nodes, that includes synchronizing a first clock of a first node of the plurality of nodes and a second clock of a second node of the plurality of nodes, configuring a first hypervisor on the first node to execute a first application domain and a first privileged domain, wherein configuring the first hypervisor comprises allocating a first number of cycles of the first clock to the first privileged domain, configuring a second hypervisor on the second node to execute a second application domain and a second privileged domain, wherein configuring the second hypervisor includes allocating the first number of cycles of the first clock to the second privileged domain, and executing the application in the first application domain and the second application domain, wherein the first application domain and the second application domain execute semi-synchronously and the first privileged domain and the second privileged domain execute semi-synchronously.04-02-2009
20090089789Method to allocate inter-dependent resources by a set of participants - The object of the present invention is a method that allows a group of independent participants to coordinate decisions with respect to the allocation of interdependent resources, while maintaining certain privacy guarantees.04-02-2009
20090089787Method and System for Migrating Critical Resources Within Computer Systems - A method and system for migrating at least one critical resource during a migration of an operative portion of a computer system are disclosed. In at least some embodiments, the method includes (a) sending first information constituting a substantial copy of a first of the at least one critical resource via at least one intermediary between a source component and a destination component. The method further includes (b) transitioning a status of the destination component from being incapable of receiving requests to being capable of receiving requests, and (c) re-programming an abstraction block to include modified addresses so that at least one incoming request signal is forwarded to the destination component rather than to the source component.04-02-2009
20080209430SYSTEM, APPARATUS, AND METHOD FOR FACILITATING PROVISIONING IN A MIXED ENVIRONMENT OF LOCALES - A system, a computer program product, and a method capable of dynamically and flexibly supporting a plurality of locales upon provisioning are provided. A management server connected via a network to a plurality of processing resources each set with a locale includes a storage unit to store processing, a locale, and a set of instructions corresponding to the processing and the locale, and a selection unit to select a set of instructions associated with required processing and a required locale by referring to the storage unit, and it further includes a determination unit to dynamically determine the required processing and the processing resource by way of provisioning, and the storage unit stores the plurality of processing resources and each locale.08-28-2008
20080209431System and method for routing tasks to a user in a workforce - A routing system and method efficiently routes tasks to users who are members of a large and geographically diverse workforce. Generally, limited information is known about each user's skills and behavioral factors. Based on a profile containing the known information about a user, a task is efficiently allocated and routed to a user by matching attributes of the task to the profile using a neural network and a stochastic model. Feedback is collected by the routing system based on the user's handling of the task and on whether a solution provided by the user was accepted. Over time, as more feedback is collected, the profile and/or the neural network are refined, which allows for more efficient routing of future tasks.08-28-2008
20080209432COMPUTER IMPLEMENTED METHOD AND SYSTEM FOR SHARING RESOURCES AMONG HIERARCHICAL CONTAINERS OF RESOURCES - Computer implemented method, system and computer usable program code for sharing resources among a plurality of containers in a data processing system. A computer implemented method includes creating a shared container for at least one resource to be shared. Then the at least one resource to be shared is moved from an original container of the at least one resource to the shared container, and a link is created between the original container and the at least one resource to be shared in the shared resource container. A link can also be created between a subject resource container and a shared resource in the shared resource container to enable the subject resource container to access and use the shared resource. A shared resource can also be removed from the shared resource container and returned to an original resource container when sharing of the resource is no longer desired.08-28-2008
20080209429METHODS AND SYSTEMS FOR MANAGING RESOURCES IN A VIRTUAL ENVIRONMENT - An embodiment relates generally to a method of managing resources in a virtual environment. The method includes detecting an instantiation of a virtual machine and determining a delay value based on a unique identifier. The method also includes delaying an initiation of at least one support process for the virtual machine by the delay value.08-28-2008
20080209427Hardware Register Access Via Task Tag Id - A computer-based software task management system.08-28-2008
20110173628SYSTEM AND METHOD OF CONTROLLING POWER IN AN ELECTRONIC DEVICE - A method of utilizing a node power architecture (NPA) system, the method includes receiving a request to create a client, determining whether a resource is compatible with the request, and returning a client handle when the resource is compatible with the request.07-14-2011
20090276785System and Method for Managing a Storage Array - Systems and methods for managing a storage array are disclosed. A method may include segmenting each of a plurality of physical storage resources into a first storage area and a second storage area. The method may also include activating a first logical unit including each first storage area of the plurality of physical storage resources. The method may additionally include placing at least one designated physical resource of the plurality of physical storage resources in a powersave mode. The method may further include activating a second logical unit including the second storage areas of some of the plurality of physical storage resources but not the at least one designated physical storage resource. Moreover, the method may include storing data associated with a write operation intended for the at least one designated physical storage resource to the second logical unit.11-05-2009
20090276783Expansion and Contraction of Logical Partitions on Virtualized Hardware - A method, apparatus, and program product manage a plurality of resources of at least one logically partitioned computing system of the type that includes a plurality of logical partitions managed by a partition manager with an application level administrative console resident in a logical partition of the computing system. Each logical partition is allocated at least a portion of the plurality of resources. A user request to adjust the allocation of at least a portion of the resources using the administrative console is received. The resources of the logically partitioned computing system to adjust in order to satisfy the user request are determined using the application level administrative console. The application level administrative console accesses the partition manager through a resource allocation interface to adjust the determined resources of the logically partitioned computing system in order to satisfy the user request.11-05-2009
20090282415Method and Apparatus for Negotiation Management in Data Processing Systems - Techniques are disclosed for optimizing schedules used in implementing plans for performing tasks in data processing systems. For example, an automated method of negotiating for resources in a data processing system, wherein the data processing system comprises multiple sites, comprises a negotiation management component of a computer system at a given one of the sites performing the following steps. One or more tasks from at least one source of one or more plans are obtained. Each plan is annotated with one or more needed resources and one or more potential resource providers at one or more sites in the data processing system. An optimized resource negotiation schedule based on the one or more obtained tasks is computed. The schedule comprises an order in which resources are negotiated. In accordance with the optimized resource negotiation schedule, a request for each needed resource is sent to the one or more potential resource providers such that a negotiation process is performed between the negotiation management component and at least one of the potential resource providers.11-12-2009
20110271285MANAGING EXCLUSIVE ACCESS TO SYSTEM RESOURCES - Presented is a method of managing exclusive access to a resource. The method includes determining an anticipated wait time for a task to obtain exclusive access to the resource, and processing the task depending on the anticipated wait time required to obtain exclusive access to the resource.11-03-2011
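One plausible reading of "processing the task depending on the anticipated wait time" is a threshold policy over a simple queueing estimate. The Python sketch below is purely illustrative: the estimate (queue length times average hold time) and the wait/defer outcomes are assumptions, not the patent's method.

```python
def plan_task(queue_len: int, avg_hold_s: float, max_wait_s: float) -> str:
    """Decide how to handle a task needing exclusive access to a resource.

    queue_len  -- number of tasks already waiting for the resource
    avg_hold_s -- average time (seconds) each holder keeps the resource
    max_wait_s -- longest wait this task is willing to tolerate
    Returns "wait" to queue for the resource, or "defer" to reschedule.
    """
    anticipated = queue_len * avg_hold_s   # naive queueing estimate
    return "wait" if anticipated <= max_wait_s else "defer"
```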
20090300635METHODS AND SYSTEMS FOR PROVIDING A MARKETPLACE FOR CLOUD-BASED NETWORKS - A cloud marketplace system can be configured to communicate with multiple cloud computing environments in order to ascertain the details for the resources and services provided by the cloud computing environments. The cloud marketplace system can be configured to receive a request for information pertaining to the resources or services provided by or available in the cloud computing environments. The cloud marketplace system can be configured to generate a marketplace report detailing the resource and service data matching the request. The cloud marketplace system can be configured to utilize the resource and service data to provide migration services for virtual machines initiated in the cloud computing environments.12-03-2009
20080216082Hierarchical Resource Management for a Computing Utility - This invention provides for the hierarchical provisioning and management of a computing infrastructure which is used to provide computing services to the customers of the service provider that operates the infrastructure. Infrastructure resources can include those acquired from other service providers. The invention provides an architecture for hierarchical management of computing infrastructures. It allows the dynamic provisioning and assignment of resources to computing environments. Customers can have multiple computing environments within their domain. The service provider shares its resources across multiple customer domains and arbitrates on the use of resources between and within domains. The invention enables resources to be dedicated to a specific customer domain or to a specific computing environment. Customers can specify acquisition and distribution policy which controls their use of resources within their domains.09-04-2008
20080216084MEASURE SELECTION PROGRAM, MEASURE SELECTION APPARATUS, AND MEASURE SELECTION METHOD - A combination of measures is selected to set a recovery time of a business to be equal to or shorter than a time objective when a predetermined event occurs. A dependency relationship is shown between an operation constituting the business and resources necessary to continue the operation. Scenario information holds the recovery time required for a recovery when the predetermined event occurs for each of the resources. Measure information holds measures for reducing the recovery time and effects of the respective measures for each of the resources. Paths connecting a highest node to a terminal node of the resources included in the operation element related information are extracted according to the dependency relationship; and the combination of measures is selected so that a recovery time sum of the respective resources is equal to or shorter than the time objective on all the paths extracted by the resource path extraction procedure.09-04-2008
20080216083MANAGING MEMORY RESOURCES IN A SHARED MEMORY SYSTEM - The memory used by individual users can be tracked and constrained without having to place all the work from individual users into separate JVMs. The net effect is that the ‘bursty’ nature of memory consumption by multiple users can be summed to result in a JVM which exhibits much less bursty memory requirements while at the same time allowing individual users to have relatively relaxed constraints.09-04-2008
20090119673Predicting and managing resource allocation according to service level agreements - Allocating computing resources comprises allocating an amount of a resource to an application program based on an established service level requirement for utilization of the resource by the application program, determining whether the application program's utilization of the resource exceeds a utilization threshold, and changing the allocated amount of the resource in response to a determination that the application program's utilization of the resource exceeds the utilization threshold. The utilization threshold is based on the established service level requirement and is different than the established service level requirement. Changing the allocation of the resource based on the utilization threshold allows allocating sufficient resources to the application program prior to a breach of a service level agreement for the application program.05-07-2009
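The key idea in the entry above — a utilization threshold deliberately set below the SLA level, so allocation grows before a breach occurs — can be sketched in a few lines. The headroom and growth-step defaults below are invented for illustration; the patent does not prescribe them.

```python
def adjust_allocation(allocated: float, utilization: float,
                      sla_util: float, headroom: float = 0.10,
                      step: float = 1.25) -> float:
    """Grow an application's resource allocation before its SLA is breached.

    allocated   -- amount of the resource currently given to the app
    utilization -- fraction of `allocated` currently in use (0..1)
    sla_util    -- utilization level at which the SLA would be breached
    headroom    -- the trigger threshold sits this far BELOW the SLA level
    step        -- multiplicative growth factor when the threshold is crossed
    """
    threshold = sla_util - headroom   # stricter than the SLA by design
    if utilization > threshold:
        return allocated * step       # pre-emptively add capacity
    return allocated                  # within threshold: no change
```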
20100005472TASK DECOMPOSITION WITH THROTTLED MESSAGE PROCESSING IN A HETEROGENEOUS ENVIRONMENT - Tasks for a business process can be decomposed into subtasks represented by messages. Message processing can be throttled in a heterogeneous environment. For example, message processing at subtask nodes can be individually throttled at the node level by controlling the number of instances of subtask processors for the subtask node. An infrastructure built with framework components can be used for a variety of business process tasks, separating business logic from the framework logic. Thus, intelligent scalability across platform types can be provided for large scale business processes with reduced development time and resources.01-07-2010
20090007130IMAGE FORMING APPARATUS, CONTROLLING METHOD, AND CONTROL PROGRAM - An image forming apparatus in which programs for controlling processes that are provided by the image forming apparatus are installed. The image forming apparatus includes means for managing the use amount of each program by use of a counter, means for recognizing the counter which corresponds to the identification information of the program and can manage the use amount of the program, means for correlating the program with the counter recognized by the recognizing means to manage the counter, means which can set an upper limit on the use amount of each program for the use amount managing means, and means for controlling the process by the image forming apparatus based on the upper limit of the use amount set by the setting means for each of the types of the programs.01-01-2009
20120297395SCALABLE WORK LOAD MANAGEMENT ON MULTI-CORE COMPUTER SYSTEMS - A system and method for managing the processing of work units being processed on a computer system having shared resources, e.g., multiple processing cores, memory, and bandwidth. The system comprises a job scheduler for scheduling access to the shared resources for the work units, and an event trap for capturing resource related allocation events. The event trap is adapted to dynamically adjust the amount of availability associated with each shared resource identified by the resource related allocation event. The allocation event may define a resource release or a resource request. The event trap may increase the amount of availability for allocation events defining a resource release, and decrease the amount of availability for allocation events defining a resource request. The job scheduler allocates resources to the work units using a real time amount of availability of the shared resources in order to maximize a consumption of the shared resources.11-22-2012
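The event trap's bookkeeping — increase availability on a release event, decrease it on a request event — reduces to a small amount of shared state that the scheduler can consult in real time. A minimal Python sketch, with invented names:

```python
class EventTrap:
    """Track real-time availability of shared resources by observing
    allocation events rather than polling the resources."""

    def __init__(self, capacities):
        # capacities: resource name -> total units initially available
        self.available = dict(capacities)

    def on_event(self, resource: str, kind: str, amount: int = 1) -> int:
        """Adjust availability for one allocation event.

        A 'release' returns units to the pool; a 'request' consumes them.
        Returns the updated availability for the resource.
        """
        if kind == "release":
            self.available[resource] += amount
        elif kind == "request":
            self.available[resource] -= amount
        else:
            raise ValueError(f"unknown event kind: {kind}")
        return self.available[resource]
```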
20100146515Support of Non-Trivial Scheduling Policies Along with Topological Properties - A system and method for scheduling jobs in a multiprocessor machine is disclosed. The status of resources, including CPUs on node boards and associated shared memory in the multiprocessor machine, is periodically determined. The status can indicate the resources available to execute jobs. This information is accumulated by the topology monitoring unit and provided to the topology library. The topology library also receives a candidate host list from the scheduling unit which lists all of the resources available to execute the job being scheduled based on non-trivial scheduling. The topology library unit then uses this to generate a free map F indicative of the interconnection of the resources available to execute the job. The topology monitoring unit then matches the jobs to the resources available to execute the jobs, based on resource requirements including shape requirements indicative of interconnections of resources required to execute the job. The topology monitoring unit dispatches the job to the portion of the free map F which matches the shape requirements of the job. If the topology library unit determines that no resources are available to execute the job, the topology library unit will return the job to the scheduling unit, which will wait until the resources become available. The free map F may include resources which have been suspended or reserved in previous scheduling cycles, provided the job to be scheduled satisfies the predetermined criteria for execution of the job on the suspended or reserved resources, or has a lower priority.06-10-2010
20100138840SYSTEM AND METHOD FOR ACCELERATING INPUT/OUTPUT ACCESS OPERATION ON A VIRTUAL MACHINE - A system and method for accelerating input/output (IO) access operation on a virtual machine. The method comprises providing a smart IO device that includes an unrestricted command queue (CQ) and a plurality of restricted CQs, and allowing a guest domain to directly configure and control IO resources through a respective restricted CQ, the IO resources being those allocated to the guest domain. In preferred embodiments, the allocation of IO resources to each guest domain is performed by a privileged virtual switching element. In some embodiments, the smart IO device is an HCA and the privileged virtual switching element is a Hypervisor.06-03-2010
20080244609ASSURING RECOVERY OF TEMPORARY RESOURCES IN A LOGICALLY PARTITIONED COMPUTER SYSTEM - A capacity manager provides temporary resources on demand in a manner that assures the temporary resources may be recovered when the specified resource-time expires. Access to minimum resource specifications corresponding to the logical partitions is controlled to prevent the sum of all minimum resource specifications from exceeding the base resources on the system. By assuring the sum of minimum resource specifications for all logical partitions is satisfied by the base resources on the system, the temporary resources may always be recovered when required.10-02-2008
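The recoverability invariant described in the entry above is an admission check: never let the sum of per-partition minimum resource specifications exceed the base (permanent) resources, so temporary resources can always be reclaimed when their term expires. A hedged Python sketch; names and data shapes are assumptions, not from the patent.

```python
def can_grant_minimum(partition_minimums: dict, base_resources: int) -> bool:
    """A set of minimums is admissible only if their sum fits within the
    base resources, guaranteeing temporary capacity is recoverable."""
    return sum(partition_minimums.values()) <= base_resources

def set_minimum(partition_minimums: dict, partition: str,
                new_min: int, base_resources: int) -> dict:
    """Attempt to update one partition's minimum resource specification.

    Returns the updated mapping if the invariant still holds; otherwise
    returns the original mapping unchanged (request denied).
    """
    proposed = dict(partition_minimums)
    proposed[partition] = new_min
    if can_grant_minimum(proposed, base_resources):
        return proposed
    return partition_minimums
```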
20080244595METHOD AND SYSTEM FOR CONSTRUCTING VIRTUAL RESOURCES - System for managing a life cycle of a virtual resource. One or more virtual resources are defined. The one or more defined virtual resources are created. The created virtual resources are instantiated. Then, a topology of a virtual resource is constructed using a plurality of virtual resources that are in at least one of a defined, a created, or an instantiated state.10-02-2008
20080244606Method and system for estimating resource provisioning - A method and system are described for estimating resource provisioning. An example method may include obtaining a workflow path including an external invocation node and respective groups of service nodes, node connectors, and hardware nodes, and including a directed ordered path indicating ordering of a flow of execution of services associated with the service nodes, from the external invocation node, to a hardware node, determining an indicator of a service node workload based on attribute values associated with a service node and an indicator of a propagated workload based on combining attribute values associated with the external invocation node and other service nodes or node connectors preceding the service node in the workflow path based on the ordering, and provisioning the service node onto a hardware node based on combining the indicator of the service node workload and an indicator of a current resource demand associated with the hardware node.10-02-2008
20080244597Systems and Methods for Recording Resource Association for Recording - Included are embodiments for determining an extension-to-channel mapping. At least one embodiment includes receiving first data associated with a communication from at least one communications device and receiving second data from a recording resource. Some embodiments include determining whether the at least one communications device is coupled to a recording resource. Some embodiments include matching the communications device to a recording resource and in response to matching, creating an association of the at least one communications device to the recording resource.10-02-2008
20080244598SYSTEM PARTITIONING TO PRESENT SOFTWARE AS PLATFORM LEVEL FUNCTIONALITY - Embodiments of apparatuses, methods for partitioning systems, and partitionable and partitioned systems are disclosed. In one embodiment, a system includes processors and a partition manager. The partition manager is to allocate a subset of the processors to a first partition and another subset of the processors to a second partition. The first partition is to execute first operating system level software and the second partition is to execute second operating system level software. The first operating system level software is to manage the processors in the first partition as resources individually accessible to the first operating system level software, and the second operating system level software is to manage the processors in the second partition as resources individually accessible to the second operating system level software. The partition manager is also to present the second partition, including the second operating system level software, to the first operating system level software as platform level functionality embedded in the system.10-02-2008
20080244596COMPUTER PROGRAM PRODUCT AND SYSTEM FOR DEFERRING THE DELETION OF CONTROL BLOCKS - A computer program product and system are disclosed for deferring the deletion of resource control blocks from a resource queue within an information management system that includes a plurality of short-term processes and a plurality of long-term processes when each of the long term processes has unset a ‘resource in use’ control flag for that long term process, a ‘request deletion’ flag has been set by the information management system, and a predetermined amount of time has elapsed.10-02-2008
20080244594VISUAL SCRIPTING OF WEB SERVICES FOR TASK AUTOMATION - Tasks are automated using assemblies of services. An interface component allows a user to collect services and to place selected services corresponding to a task to be automated onto a workspace. An analysis component performs an analysis of available data with regard to the selected services provided on the workspace and a configuration component automatically configures inputs of the selected services based upon the analysis of available data without intervention of the user. A dialog component is also provided to allow the user to contribute information to configure one or more of the inputs of the selected services. When processing is complete, an output component outputs a script that is executable to implement the task to be automated.10-02-2008
20080250416Linking of Scheduling Systems for Appointments at Multiple Facilities - Scheduling systems for scheduling appointments on multiple sites need to be linked, if such systems use different databases. The activity to be performed by the performing site during the appointment may be given by a requesting code, specific for the requesting site. If the activity can be performed at the requesting site, i.e. the requesting site and the performing site are identical, then this “requesting code” may define that one or more resources are required for performing the scheduled appointment at the requesting site. The availability of these resources can be fetched from one or more databases coupled to the requesting site. If the performing site is different from the requesting site, the requesting code used at the performing site for the activity may be different from the requesting code used at the requesting site, and different resources may be requested by the performing site. The availability of these different resources may be stored in one or more databases, different from the databases for resources at the requesting site. In the latter case, both the requesting site and the performing site keep records of the scheduled appointment e.g. in a respective database. If a person, for whom the appointment is made, is known at the requesting or performing site or both, person occupation checking may be done at either site or both.10-09-2008
20090055832SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR EVALUATING A TEST OF AN ALTERNATIVE SYSTEM - A method for checking an alternative system test, the method includes: determining a relationship between (i) utilization of resources during an execution of a group of programs by a first system when operating in a non-testing mode and (ii) utilization of resources during an execution of an alternative system test by the alternative system; wherein the alternative system test comprises at least one program of the group of programs.02-26-2009
20080282253METHOD OF MANAGING RESOURCES WITHIN A SET OF PROCESSES - A workload management system where processes associated with a class have resource management strategies that are specific to that class is provided. The system includes more than one class, with at least one unique algorithm for executing a workload associated with each class. Each algorithm may comprise a strategy for executing a workload that is specific to that class and the algorithms of one class may be completely unrelated to the algorithms of another class. The workload management system allows workloads with different attributes to use system resources in ways that best benefit a workload, while maximizing usage of the system's resources and with minimized degradation to other workloads running concurrently.11-13-2008
20080282252HETEROGENEOUS RECONFIGURABLE AGENT COMPUTE ENGINE (HRACE) - A computing system.11-13-2008
20080288950Concurrent Management of Adaptive Programs - A method for concurrent management of adaptive programs is disclosed wherein changes in a set of modifiable references are initially identified. A list of uses of the changed references is next computed using records made in structures of the references. The list is next inserted into an elimination queue. Comparison is next made of each of the uses to the other uses to determine independence or dependence thereon. Determined dependent uses are eliminated and the preceding steps are repeated for all determined independent uses until all dependencies have been eliminated.11-20-2008
20080288951Method, Device And System For Allocating A Media Resource - A method and system for allocating a media resource and a device for controlling a media resource. The method for allocating a media resource includes: allocating the media resource processing devices for a resource operation request based on the stored ability information of the various media resource processing devices when the resource operation request is received; and updating the stored ability information of the media resource processing device dynamically. The device for controlling a media resource includes: a memory unit adapted to store the ability information of various media resource processing devices; an allocation unit adapted to allocate media resource processing devices for the resource operation request based on the ability information stored in the memory unit; a dynamic update unit adapted to update the ability information of the media resource processing device stored in the memory unit dynamically.11-20-2008
20080288949Interprocess Resource-Based Dynamic Scheduling System and Method - A method and system for scheduling tasks in a processing system. In one embodiment, the method comprises processing tasks from a primary work queue, wherein the tasks consume resources that are operable to be released. Whenever the volume of resources that have been consumed exceeds a threshold, the processor executes tasks from a secondary work queue for a period of time. The secondary work queue is comprised of tasks from the primary work queue that can release the resources; the secondary work queue can be sorted according to the volume of resources that can be released.11-20-2008
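The two-queue scheme of the entry above can be sketched in Python. The task model (`cost` consumed, `releases` freed), the threshold value, and the drain count are illustrative assumptions, not details taken from the patent:

```python
from collections import deque, namedtuple

# Illustrative task model: cost = resources consumed, releases = resources freed.
Task = namedtuple("Task", "name cost releases")

def run(tasks, threshold, drain_count):
    """Process the primary queue; whenever consumed resources exceed the
    threshold, switch to a secondary queue of resource-releasing tasks,
    sorted by the volume each releases, and run a few of them."""
    primary = deque(tasks)
    consumed = 0
    order = []
    while primary:
        if consumed > threshold:
            # Secondary queue: releasing tasks, sorted by volume released.
            secondary = sorted((t for t in primary if t.releases),
                               key=lambda t: t.releases, reverse=True)
            for t in secondary[:drain_count]:
                primary.remove(t)
                consumed += t.cost - t.releases
                order.append(t.name)
        if not primary:
            break
        t = primary.popleft()
        consumed += t.cost
        order.append(t.name)
    return order, consumed
```

With a threshold of 6, a releasing task such as `Task("C", 1, 6)` is pulled forward as soon as consumption crosses the threshold, before ordinary primary-queue processing resumes.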
20080271032Data Processing Network - A grid type network comprising a grid controller for receiving data in the form of a queue from a database. The grid controller is arranged to divide the data into a plurality of batches and dispatch the batches between a plurality of terminals which may be registered with the grid controller. Each terminal is registered on the basis that it contains a processing unit which is usually in an idle state. The terminals are also provided with processing logic related to the processing to be carried out on the batches. The plurality of terminals perform the processing on the batches and on completion, the database is updated with processed data.10-30-2008
20080271030Kernel-Based Workload Management - A method for managing workload in a computing system comprises performing automated workload management arbitration for a plurality of workloads executing on the computing system, and initiating the automated workload management arbitration from a process scheduler in a kernel.10-30-2008
20120297396INTERCONNECT STRUCTURE TO SUPPORT THE EXECUTION OF INSTRUCTION SEQUENCES BY A PLURALITY OF ENGINES - A global interconnect system. The global interconnect system includes a plurality of resources having data for supporting the execution of multiple code sequences and a plurality of engines for implementing the execution of the multiple code sequences. A plurality of resource consumers are within each of the plurality of engines. A global interconnect structure is coupled to the plurality of resource consumers and coupled to the plurality of resources to enable data access and execution of the multiple code sequences, wherein the resource consumers access the resources through a per cycle utilization of the global interconnect structure.11-22-2012
20100005475INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device is configured to store an image that is to be retained in a main memory so that a processor can execute an application program, and, after execution of the application program is terminated, to execute the application program from the state at the time the image was stored by reading the stored image back into the main memory.01-07-2010
20100005474Distribution of tasks among asymmetric processing elements - A technique to promote determinism among multiple clocking domains within a computer system or integrated circuit. In one embodiment, one or more execution units are placed in a deterministic state with respect to multiple clocks within a processor system having a number of different clocking domains.01-07-2010
20080244604Method for task and resource management - A method is disclosed for task and human resource management. In one embodiment, the method stores a plurality of first tasks, each first task including at least one first task skill. In addition, the method receives a search request, the search request including at least one search request skill. The method determines, based on the one or more first tasks, if one or more of the at least one first task skills corresponds to the at least one search request skill. In addition, the method determines one or more second tasks when it is determined that one or more of the at least one first task skills corresponds to the at least one search request skill. The one or more second tasks are determined from the plurality of first tasks. The method provides the determined one or more second tasks to a human resource. Further, the method receives a request from the human resource to be associated with at least one of the determined one or more second tasks, and associates the human resource with the at least one of the determined one or more second tasks.10-02-2008
20100005473System and method for controlling computing resource consumption - A method and a corresponding system, implemented as programming on a computer system, controls resource consumption in the computer system. The method includes the steps of monitoring current consumption of resources by workloads executing on the computer system; predicting future consumption of the resources by the workloads; adjusting assignment of resources to workloads based on the predicted future consumption, comprising: determining consumption policies for each workload, comparing the policies to the predicted future consumption, and increasing or decreasing resources for each workload based on the comparison; and providing a visual display of resource consumption and workload execution information, the visual display including iconic values indicating predicted consumption of instant capacity resources and authorization to consume instant capacity resources.01-07-2010
20080209428RESOURCE GOVERNOR CONFIGURATION MODEL - A database can have multiple requests applied at one time. Each of these requests requires a specific amount of server resources. There can be a differentiation of user-submitted workloads between each other. These workloads are a set of queries submitted by different users. Each query can have specific resource limits. In addition, each set can have specific resource limits.08-28-2008
20080276246SYSTEM FOR YIELDING TO A PROCESSOR - An apparatus and program product for coordinating the distribution of CPUs among logically-partitioned virtual processors. A virtual processor may yield a CPU to precipitate an occurrence upon which its own execution may be predicated. As such, program code may dispatch the surrendered CPU to a designated virtual processor.11-06-2008
20100299672MEMORY MANAGEMENT DEVICE, COMPUTER SYSTEM, AND MEMORY MANAGEMENT METHOD - A memory management device includes a memory area, an allocator generating unit that generates a plurality of allocators, each of which allocates a memory resource of the memory area to a task according to its own rule of memory-resource allocation/deallocation, and a task correlating unit that selects one of the generated allocators for each task, based on an allocator specification that differs per task, and configures the task to use the selected allocator.11-25-2010
20080244603Method for task and resource management - A method is disclosed for managing one or more tasks or human resources. In one embodiment, the method receives one or more first tasks. In addition, the method receives one or more first sets of skill information. Each of the one or more first sets of skill information includes at least one human resource skill and is associated with a human resource. The method further receives one or more second sets of skill information. Each of the one or more second sets of skill information includes at least one task skill and is associated with one of the one or more first tasks. Additionally, the method evaluates the received one or more first tasks, the received one or more first sets of skill information, and the received one or more second sets of skill information. Further, the method determines to request the human resource to add an associated human resource skill or increase an associated human resource skill level.10-02-2008
20080244605Method for task and resource management - A method is disclosed for task and human resource management. In one embodiment, the method determines a set of skill information. The set of skill information includes at least one task skill and is associated with a task. In addition, the method determines, from a set of one or more first human resources, one or more second human resources. The one or more second human resources have at least one human resource skill that corresponds to the at least one task skill. The method provides an indication of a task load for the determined one or more second human resources, and associates the task to at least one of the one or more second human resources based on the at least one human resource skill, the at least one task skill, and the indication of the task load.10-02-2008
20080271035Control Device and Method for Multiprocessor - A multiprocessor control device according to an example of the invention comprises a selection unit which, on the basis of an execution schedule for tasks to be allocated to any one of processor elements, selects, for each of the processor elements, any one of a normal mode used during task execution, a first mode which is used when a task is not executed and in which power consumption is reduced more than in the normal mode, and a second mode which is used when the task is not executed and which has a greater power-consumption-reducing effect but a longer mode-switching time than the first mode, and a mode control unit which performs control according to the mode selected by the selection unit for each of the processor elements.10-30-2008
20090013326A SYSTEM AND METHOD FOR RESOURCE MANAGEMENT AND CONTROL - The present invention relates to a complete system and method for centralized management, control and integration of different resources, including normally non-compatible systems. Said resources can be of arbitrary type: people, assets, information systems, and other resources, including moving objects. The system comprises information systems and hardware enabling the gathering, processing and transmission of initial information from different resources, in real time or later, and control of said resources based on predefined or elaborated rules. The invention also allows storing and using information related to the location of resources. The present invention, a centrally controlled and managed open information system with the possibility of resource billing, belongs to the field of universal information systems.01-08-2009
20100146514TEST MANAGEMENT SYSTEM AND METHOD - An execution management method includes providing an execution plan, balancing an execution load across a plurality of servers, automatically interpreting the execution plan, and re-driving a failed test to another of the plurality of servers if the test case fails on an originally selected available server. The execution plan includes a plurality of test cases and criteria corresponding to the test cases. More than one of the plurality of test cases may be run on each of the plurality of servers at a same time in parallel. Each of the plurality of servers is run independently.06-10-2010
20100146513Software-based Thread Remapping for Power Savings - On a multi-core processor that supports simultaneous multi-threading, the power state for each logical processor is tracked. Upon indication that a logical processor is ready to transition into a deep low power state, software remapping (e.g., thread-hopping) may be performed. Accordingly, if multiple logical processors, on different cores, are in a low-power state, they are re-mapped to the same core and the core is then placed into a low power state. Other embodiments are described and claimed.06-10-2010
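The re-mapping idea above can be sketched as a packing step. The `'A'`/`'L'` state encoding and the assumption that any thread may hop to any core are mine, not the patent's:

```python
def thread_hop(cores):
    """cores: per-core lists of hardware-thread states, 'A' (active) or
    'L' (low power). Gather the low-power threads onto as few cores as
    possible; a core left holding only 'L' threads can enter deep sleep."""
    width = len(cores[0])
    flat = sorted(t for core in cores for t in core)  # 'A' sorts before 'L'
    packed = [flat[i * width:(i + 1) * width] for i in range(len(cores))]
    sleepable = [i for i, c in enumerate(packed) if all(t == "L" for t in c)]
    return packed, sleepable
```

For three two-way SMT cores with states `[["A","L"], ["L","A"], ["L","L"]]`, the sketch collocates the two active threads on core 0, leaving cores 1 and 2 eligible for a deep low-power state.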
20090031320Storage System and Management Method Thereof - A storage system comprises a first storage apparatus having a volume for a host computer, a second storage apparatus connected to the first storage apparatus and having a volume in a pair relationship with a first volume in the first storage apparatus, and a management apparatus connected to the first storage apparatus and the second storage apparatus. The management apparatus includes a user interface for setting an attribute of a function related to the volume of the first storage apparatus and an attribute of a function related to the volume of the second storage apparatus. The management apparatus compares the attribute of the function related to the first volume and the attribute of the function related to the second volume, and outputs the result of the comparison to the user interface.01-29-2009
20120198469Method for Managing Hardware Resources Within a Simultaneous Multi-Threaded Processing System - A method for managing hardware resources and threads within a data processing system is disclosed. Compilation attributes of a function are collected during and after the compilation of the function. The pre-processing attributes of the function are also collected before the execution of the function. The collected attributes of the function are then analyzed, and a runtime configuration is assigned to the function based on the result of the attribute analysis. The runtime configuration may include, for example, the designation of the function to be executed under either a single-threaded mode or a simultaneous multi-threaded mode. During the execution of the function, real-time attributes of the function are continuously collected. If necessary, the runtime configuration under which the function is being executed can be changed based on the real-time attributes collected during the execution of the function.08-02-2012
20120198468METHOD AND SYSTEM FOR COMMUNICATING BETWEEN ISOLATION ENVIRONMENTS - A method and system for aggregating installation scopes within an isolation environment, where the method includes first defining an isolation environment for encompassing an aggregation of installation scopes. Associations are created between a first application and a first installation scope. When the first application requires the presence of a second application within the isolation environment for proper execution, an image of the required second application is mounted onto a second installation scope and an association between the second application and the second installation scope is created. Another association is created between the first installation scope and the second installation scope, and this third association is created within a third installation scope. Each of the first, second, and third installation scopes are stored and the first application is launched into the defined isolation environment.08-02-2012
20120198466DETERMINING AN ALLOCATION OF RESOURCES FOR A JOB - A job profile describes characteristics of a job. A performance parameter is calculated based on the job profile, and using a value of the performance parameter, an allocation of resources is determined to assign to the job to meet a performance goal associated with a job.08-02-2012
20110209157RESOURCE ALLOCATION METHOD, PROGRAM, AND RESOURCE ALLOCATION APPARATUS - A resource allocation apparatus according to the present invention includes a system information acquisition unit configured to acquire program congestion pattern information indicating a group of programs executed concurrently on a system; and a resource allocation pattern determination unit configured to generate a plurality of resource allocation patterns for allocating the resource to a plurality of programs included in the group of programs indicated in the program congestion pattern information, to calculate the total amount of processing needed to execute the programs when the resource is allocated to the programs included in the group of programs by the generated resource allocation patterns, and then to determine an optimal resource allocation pattern among the generated resource allocation patterns as a resource allocation pattern for the programs included in the group of programs based on the calculated total amount of processing.08-25-2011
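The pattern-generation and selection step described above reads like a brute-force minimization over candidate assignments. A hedged sketch, in which the `cost` function standing in for the per-program processing amounts is an assumption of mine:

```python
from itertools import product

def best_allocation(programs, resources, cost):
    """Enumerate every way of assigning one resource to each program
    (the 'resource allocation patterns') and keep the pattern with the
    smallest total amount of processing."""
    patterns = product(resources, repeat=len(programs))
    return min(patterns,
               key=lambda pat: sum(cost(p, r) for p, r in zip(programs, pat)))
```

Exhaustive enumeration is exponential in the number of programs; it matches the abstract's wording but a real apparatus would presumably prune or approximate for large groups.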
20090138884STORAGE MANAGEMENT SYSTEM, A METHOD OF MONITORING PERFORMANCE AND A MANAGEMENT SERVER - A storage management system provides a capability of properly setting a performance monitoring threshold and monitoring a performance of a storage resource in the SAN environment with respect to the operation process being executed. The storage management system includes a management server, a storage device, and a storage network. The management server is arranged to have a performance information collecting unit for collecting the current performance value of a storage resource, a composition section determining unit for determining a composition section corresponding with a composition ratio of the operation processes, a threshold information storage unit for storing a performance monitoring threshold corresponding with the composition section with respect to one or more storage devices, and a performance determining unit for determining a performance of the storage resource based on the current performance value and the performance monitoring threshold.05-28-2009
20090178048SYSTEM AND METHOD FOR COMPOSITION OF STREAM PROCESSING SERVICE ENVIRONMENTS - A system and method for composing a stream servicing environment which considers all stakeholders includes identifying service component requirements needed for processing a data stream, and determining available service elements for processing the stream. Feasible service environments are constructed based upon the available service elements and the service component requirements. Efficiency measures are computed for each feasible service environment considering all stakeholders. A best service environment is determined based upon the efficiency measures.07-09-2009
20090165010Method and system for optimizing utilization of resources - A method, application tool and computer program product for the optimal utilization of the resources in an organization. The organization has various processes. Each process includes an allocated number of resources. However, with the variation in the workload in a process, there may be under- or over-utilization of resources. Therefore, cross-utilization of resources across the different processes may result in the optimal utilization of resources in the organization.06-25-2009
20090144742METHOD, SYSTEM AND COMPUTER PROGRAM TO OPTIMIZE DETERMINISTIC EVENT RECORD AND REPLAY - A method, system and computer-usable medium for managing task events during the scheduling period of a task executing on one of the CPUs of a multi-processor computer. Only events from specific portions of the scheduling period are logged, namely portions in which a first shared-resource access has been granted to the task; such a portion gathers all the non-deterministic events that cannot be replayed by simple task re-execution. Other independent non-deterministic events are still logged as usual when they occur outside a portion of the scheduling period for which a record has been created. This limits the number of events logged during the recording session of an application and the frequency of events to transmit from the production machine to the replay machine.06-04-2009
20090144741RESOURCE ALLOCATING METHOD, RESOURCE ALLOCATION PROGRAM, AND OPERATION MANAGING APPARATUS - An operation managing apparatus totalizes necessary-resource-amount information for each service so as to acquire necessary-resource-amount information for each BP, and compares the necessary resource amount for each BP with the resource amount that can be utilized on each of the service executing apparatuses so as to retrieve service executing apparatuses capable of providing resource amounts that satisfy the per-BP requirements. When such service executing apparatuses are retrieved, the operation managing apparatus allocates a service to the retrieved service executing apparatuses; when they are not retrieved, it allocates the services to plural sets of the service executing apparatuses.06-04-2009
20090138885Prevention of Deadlock in a Distributed Computing Environment - A method for preventing deadlock in a distributed computing system includes the steps of: receiving as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; populating at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; storing within each container at least a portion of the table; and allocating one or more threads in a given container according to at least a portion of the table stored within the given container.05-28-2009
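The unique global container sequence above works like classic lock ordering: if every transaction acquires containers only in the globally sorted order, no circular wait (and hence no deadlock) can form. A minimal sketch, with hypothetical container names:

```python
def acquisition_order(needed, global_sequence):
    """Return the containers a transaction needs, sorted by the unique
    global container sequence; acquiring threads in this order rules out
    the circular-wait condition for deadlock."""
    rank = {c: i for i, c in enumerate(global_sequence)}
    return sorted(needed, key=rank.__getitem__)
```

Two transactions that both need, say, `web` and `db` will then always take them in the same relative order, regardless of the order their call graphs mention them.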
20090138887Virtual machine monitor and multiprocessor system - In order to provide an interface for acquiring physical position information of an I/O device on a virtual machine monitor having an exclusive allocation function for the I/O device, and to optimize allocation of resources to a virtual server by using the acquired physical position information, a virtual machine monitor includes an interface for allocating a resource in accordance with a given policy (a parameter determining what is given priority when distributing resources) for the I/O device, CPU number, and memory amount requested by a guest OS. Further, the virtual machine monitor includes an interface for suitably converting the physical position information of the resource allocated by the virtual machine monitor and notifying the guest OS.05-28-2009
20090138886Prevention of Deadlock in a Distributed Computing Environment - A system for preventing deadlock in a distributed computing system includes a memory and at least one processor coupled to the memory. The processor is operative: to receive as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; to populate at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; to store within each container at least a portion of the at least one table; and to allocate one or more threads in a given container according to at least a portion of the at least one table stored within the given container.05-28-2009
20090049449METHOD AND APPARATUS FOR OPERATING SYSTEM INDEPENDENT RESOURCE ALLOCATION AND CONTROL - An apparatus and method for controlling resources in a computing system including receiving an allocation request for a resource; determining whether an allocation limit for the resource has been reached; and, restricting access to the resource upon determination that the allocation limit has been reached.02-19-2009
20090178047DISTRIBUTED ONLINE OPTIMIZATION FOR LATENCY ASSIGNMENT AND SLICING - A system and method for latency assignment in a system having shared resources for performing jobs including computing a new resource price at each resource and sending the new resource price to a task controller in a task path that has at least one job running in the task path. A path price is computed for each task path of the task controller, if there is a critical time specified for the task. New deadlines are determined for the resources in a task path based on the resource price and the path price. The new deadlines are sent to the resources where the at least one job is running to improve system performance.07-09-2009
20090064159SYSTEM AND METHOD FOR OPTIMIZING LOAD DISTRIBUTION ACROSS LOGICAL AND PHYSICAL RESOURCES IN A STORAGE SYSTEM - An apparatus, system and method to optimize load distribution across logical and physical resources in a storage system. An apparatus in accordance with the invention may include an availability module and an allocation module. The availability module may dynamically assign values to resources in a hierarchical tree structure. Each value may correspond to an availability parameter such as allocated volumes, current resource utilization, and historic resource utilization. The allocation module may serially process the values and allocate a load to a least busy resource in the hierarchical tree structure based on the assigned values.03-05-2009
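Serially processing the dynamically assigned values down the hierarchical tree, as the entry above describes, amounts to a greedy descent. A sketch under the assumption that a lower value means a less busy resource:

```python
def least_busy_path(children, busyness, root="root"):
    """Walk the resource hierarchy from the root, at each level choosing
    the child with the lowest dynamically assigned busyness value; the
    leaf reached is the resource that receives the load."""
    node, path = root, [root]
    while children.get(node):
        node = min(children[node], key=busyness.__getitem__)
        path.append(node)
    return path
```

In practice the assigned values would combine the availability parameters named in the abstract (allocated volumes, current utilization, historic utilization) into a single score per node.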
20090064161DEVICE ALLOCATION UTILIZING JOB INFORMATION, STORAGE SYSTEM WITH A SPIN CONTROL FUNCTION, AND COMPUTER THEREOF - This invention provides a storage system coupled to a computer that executes data processing jobs by running a program, comprising: an interface; a storage controller; and disk drives. The storage controller is configured to: control spinning of disks in the disk drives; receive job information which contains an execution order of the job and a load attribute of the job from the computer before the job is executed; select a logical volume to which none of the storage areas are allocated when requested by the computer to provide a logical volume for storing a file that is used temporarily by the job to be executed; select which storage area to allocate to the selected logical volume based on at least one of the job execution order and the job load attribute; allocate the selected storage area to the selected logical volume; and notify the computer of the selected logical volume.03-05-2009
20090064157ASYNCHRONOUS DATA STRUCTURE PULL APPLICATION PROGRAMMING INTERFACE (API) FOR STREAM SYSTEMS - Provided are techniques for processing data items. A limit on the number of dequeue operations allowed in a current step of processing for a queue-like data structure is set, wherein the number of allowed dequeue operations limits at least one of an amount of CPU resources and an amount of memory resources to be used by an operator. The operator to perform processing is selected and the operator is activated by passing control to the operator, which then dequeues data constrained by the limits set. In response to receiving control back from the operator, the data structure size is examined to determine whether the operator made forward progress in that the operator enqueued or dequeued at least one data item.03-05-2009
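The dequeue-limited control handoff described above might look like the following; the function and variable names are assumptions, but the scheduler-side check mirrors the abstract's "forward progress" test:

```python
from collections import deque

def step_operator(queue, limit, handle):
    """Pass control to an operator that dequeues at most `limit` items
    this step, then report whether it made forward progress (i.e. it
    dequeued at least one item or the queue grew while it ran)."""
    before = len(queue)
    dequeued = 0
    while queue and dequeued < limit:
        handle(queue.popleft())   # operator consumes one data item
        dequeued += 1
    made_progress = dequeued > 0 or len(queue) > before
    return dequeued, made_progress
```

Capping dequeues per step bounds how much CPU and buffered memory a single operator can consume before the scheduler regains control.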
20090064156COMPUTER PROGRAM PRODUCT AND METHOD FOR CAPACITY SIZING VIRTUALIZED ENVIRONMENTS - A computer system determines an optimal hardware system environment for a given set of workloads by allocating functionality from each workload to logical partitions, where each logical partition includes resource demands, assigning a priority weight factor to each resource demand, configuring potential hardware system environments, where each potential hardware system environment provides resource capacities, and computing a weighted sum of least squares metric for each potential hardware system environment.03-05-2009
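The weighted sum-of-least-squares metric for ranking candidate hardware environments, as described above, might be computed along these lines (the demand vectors, weights, and scoring convention are illustrative assumptions):

```python
def weighted_lsq(demands, capacities, weights):
    """Weighted sum of squared gaps between partition resource demands and
    a candidate environment's capacities; a lower score is a better fit."""
    return sum(w * (d - c) ** 2 for d, c, w in zip(demands, capacities, weights))

demands = [4, 8]         # e.g. CPUs and GB of memory demanded by partitions
weights = [2.0, 1.0]     # priority weight factor per resource demand
env_a = [4, 6]           # capacities of candidate environment A
env_b = [2, 8]           # capacities of candidate environment B
score_a = weighted_lsq(demands, env_a, weights)
score_b = weighted_lsq(demands, env_b, weights)
```

Here environment A scores better because its only shortfall is in the lower-weighted resource.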
20090260014APPARATUS, AND ASSOCIATED METHOD, FOR ALLOCATING PROCESSING AMONGST DATA CENTERS - Apparatus, and an associated method, for facilitating optimization of data center performance. An optimization decision engine is provided with information regarding energy credentials of the power generative facilities that power the respective data centers. The energy credential, or other energy indicia, information is used in an optimization decision. Responsive to the optimization decision, processing allocation is made.10-15-2009
20090055830METHOD AND SYSTEM FOR ASSIGNING LOGICAL PARTITIONS TO MULTIPLE SHARED PROCESSOR POOLS - A method and system for assigning logical partitions to multiple named processor pools. Sets of physical processors are assigned to predefined processor sets. Named processor pools with unique pool names are defined. The processor sets are assigned to the named processor pools so that each processor set is assigned to a unique named processor pool. A first set of logical partitions is assigned to a first named processor pool and a second set of logical partitions is assigned to a second named processor pool. A first processor set is assigned to the first named processor pool and a first set of physical processors is assigned to the first processor set. Similarly, a second processor set is assigned to the second named processor pool and a second set of physical processors is assigned to the second processor set.02-26-2009
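The mapping of physical processors to named pools and logical partitions to pools, as described above, reduces to a pair of lookup tables; a minimal sketch with hypothetical names:

```python
# pool name -> set of physical processor ids (each set assigned to one pool)
pools = {"prod": {0, 1, 2, 3}, "dev": {4, 5}}
# logical partition -> name of the pool it is assigned to
partition_pool = {}

def assign_partition(partition, pool_name):
    partition_pool[partition] = pool_name

assign_partition("lpar1", "prod")
assign_partition("lpar2", "dev")
processors_of_lpar1 = pools[partition_pool["lpar1"]]
```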
20090254916ALLOCATING RESOURCES FOR PARALLEL EXECUTION OF QUERY PLANS - Computing resources can be assigned to sub-plans within a query plan to effect parallel execution of the query plan. For example, computing resources in a grid can be represented by nodes, and a shortest path technique can be applied to allocate machines to the sub-plans. Computing resources can be provisionally allocated as the query plan is divided into query plan segments containing one or more sub-plans. Based on provisional allocations to the segments, the computing resources can then be allocated to the sub-plans within respective segments. Multiprocessor computing resources can be supported. The techniques can account for data locality. Both pipelined and partitioned parallelism can be addressed. Described techniques can be particularly suited for efficient execution of bushy query plans in a grid environment. Parallel processing will reduce the overall response time of the query.10-08-2009
20090025004Scheduling by Growing and Shrinking Resource Allocation - A scheduler for computing resources may periodically analyze running jobs to determine if additional resources may be allocated to the job to help the job finish quicker and may also check if a minimum amount of resources is available to start a waiting job. A job may consist of many tasks that may be defined with parallel or serial relationships between the tasks. At various points during execution, the resource allocation of active jobs may be adjusted to add or remove resources in response to a priority system. A job may be started with a minimum amount of resources and the resources may be increased and decreased over the life of the job.01-22-2009
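The grow-and-shrink idea above, starting each job at a minimum allocation and handing spare capacity out by priority, can be sketched as a single rebalancing pass (job fields and the rebalancing rule are assumptions, not the patent's algorithm):

```python
def rebalance(jobs, total, minimum):
    """Give every job its minimum share, then grow allocations in priority
    order up to what each job wants (assumes want >= minimum)."""
    alloc = {job["name"]: minimum for job in jobs}
    spare = total - minimum * len(jobs)
    for job in sorted(jobs, key=lambda j: -j["priority"]):
        extra = min(spare, job["want"] - minimum)
        alloc[job["name"]] += extra
        spare -= extra
    return alloc

jobs = [{"name": "A", "priority": 2, "want": 6},
        {"name": "B", "priority": 1, "want": 5}]
alloc = rebalance(jobs, total=10, minimum=2)
```

Re-running this pass periodically lets allocations shrink as well as grow when priorities or the job mix change.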
20110225593INTERFACE-BASED ENVIRONMENTALLY SUSTAINABLE COMPUTING - Implementation of interface-based environmentally sustainable computing is provided. A method includes retrieving usage characteristics of a process scheduled to execute on a computer system and determining an environmental impact of the process on the computer system by mapping the usage characteristics of the process to corresponding environmental costs of the usage characteristics. The method also includes implementing an action on the computer system in response to the environmental impact. The actions are pre-configured for administration based upon a threshold level of environmental impact associated with the process and/or user selection.09-15-2011
20110225592Contention Analysis in Multi-Threaded Software - A contention log contains data for contentions that occur during execution of a multi-threaded application, such as a timestamp of the contention, contention length, contending thread identity, contending thread call stack, and contended-for resource identity. After execution of the application ends, contention analysis data generated from the contention log shows developers information such as total number of contentions for particular resource(s), total number of contentions encountered by thread(s), a list of resources that were most contended for, a list of threads that were most contending, a plot of the number of contentions per time interval during execution of the application, and so on. A developer may pivot between details about threads and details about resources to explore relationships between thread(s) and resource(s) involved in contention(s). Other information may also be displayed, such as call stacks, program source code, and process thread ownership, for example.09-15-2011
20090025006SYSTEM AND METHOD FOR CONTROLLING RESOURCE REVOCATION IN A MULTI-GUEST COMPUTER SYSTEM - At least one guest system, for example, a virtual machine, is connected to a host system, which includes a system resource such as system machine memory. Each guest system includes a guest operating system (OS). A resource requesting mechanism, preferably a driver, is installed within each guest OS and communicates with a resource scheduler included within the host system. If the host system needs any one of the guest systems to relinquish some of the system resource it currently is allocated, then the resource scheduler instructs the driver within that guest system's OS to reserve more of the resource, using the guest OS's own, native resource allocation mechanisms. The driver thus frees this resource for use by the host, since the driver does not itself actually need the requested amount of the resource. The driver in each guest OS thus acts as a hollow “balloon” to “inflate” or “deflate,” that is, reserve more or less of the system resource via the corresponding guest OS. The resource scheduler, however, remains transparent to the guest systems.01-22-2009
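The "balloon" mechanism above can be sketched in a few lines; the classes and page-count bookkeeping are a simplified model, not the patent's implementation:

```python
class GuestOS:
    """Stand-in for a guest OS, exposing only its native allocator."""
    def __init__(self, free_pages):
        self.free = free_pages

    def reserve(self, n):          # native allocation inside the guest
        n = min(n, self.free)
        self.free -= n
        return n

    def release(self, n):          # native free inside the guest
        self.free += n

class BalloonDriver:
    """Driver inside the guest that 'inflates' by reserving pages it never
    touches, so the host can reclaim the backing machine memory."""
    def __init__(self, guest):
        self.guest = guest
        self.inflated = 0

    def inflate(self, pages):
        grabbed = self.guest.reserve(pages)
        self.inflated += grabbed
        return grabbed             # pages the host may now take back

    def deflate(self, pages):
        released = min(pages, self.inflated)
        self.guest.release(released)
        self.inflated -= released
        return released

guest = GuestOS(free_pages=100)
balloon = BalloonDriver(guest)
reclaimed = balloon.inflate(30)
```

Because the driver uses the guest's own allocator, the guest OS decides for itself which pages to give up, which is what keeps the host-side scheduler transparent to the guest.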
20110145832TECHNIQUES FOR ALLOCATING COMPUTING RESOURCES TO APPLICATIONS IN AN EMBEDDED SYSTEM - Techniques for allocating computing resources to tasks include receiving first data and second data. The first data indicates a limit for unblocked execution by a processor of a set of at least one task that includes instructions for the processor. The second data indicates a maximum use of the processor by the set. It is determined whether a particular set of at least one task has exceeded the limit for unblocked execution based on the first data. If it is determined that the particular set has exceeded the limit, then execution of the particular set by the processor is blocked for a yield time interval based on the second data. These techniques can guarantee that no time-critical tasks of an embedded system on a specific-purpose device are starved for processor time by tasks of foreign applications also executed by the processor.06-16-2011
20110145831MULTI-PROCESSOR SYSTEM, MANAGEMENT APPARATUS FOR MULTI-PROCESSOR SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM IN OR ON WHICH MULTI-PROCESSOR SYSTEM MANAGEMENT PROGRAM IS RECORDED - The invention optimizes partition division by distributing resources with a characteristic of the system taken into consideration, so that the processing performance of the entire system is enhanced. To this end, a system management section in the invention calculates an optimum distribution of a plurality of resources to partitions based on distance information regarding the distance between the plural resources and data movement frequencies between the plural resources. The plural resources are distributed to the plural partitions through a plurality of partition management sections so that the optimum distribution state may be established.06-16-2011
20110145830JOB ASSIGNMENT APPARATUS, JOB ASSIGNMENT PROGRAM, AND JOB ASSIGNMENT METHOD - A job assignment apparatus includes: a correlation calculation unit to calculate a correlation between an execution time used for processing a program that depends on a computer resource operating at the start of an execution request job and an execution time used for processing a predetermined amount of data in the execution request job which operates immediately after completion of an operation of the program; a resource identification unit to identify the computer resource on which the execution request job depends on the basis of the correlation calculated by the correlation calculation unit; and a job assignment unit to assign the execution request job to one of execution servers connected to the job assignment apparatus so as to exclude simultaneous execution of a job that depends on the same computer resource as the computer resource identified by the resource identification unit and the execution request job.06-16-2011
20090199198MULTINODE SERVER SYSTEM, LOAD DISTRIBUTION METHOD, RESOURCE MANAGEMENT SERVER, AND PROGRAM PRODUCT - A multinode server system including application execution means. The application execution means includes several servers mutually connected, each of which processes one mesh obtained by dividing a virtual space. The virtual space is displayed as the result of processing of each mesh by the several servers. Resource management means detects load states of the servers, and changes allocation of the servers to process the meshes in accordance with the load states. Network means allow several clients to share the virtual space via a network. The servers processing the meshes are changed while giving priority to an adjacent mesh beyond a server border in response to the load states.08-06-2009
20090199196AUTOMATIC BASELINING OF RESOURCE CONSUMPTION FOR TRANSACTIONS - An application monitoring system determines the health of one or more resources used to process a transaction, business application, or other computer process. Performance data is generated in response to monitoring application execution and processed to determine an actual value and a baseline value for resource usage data. Resource usage baseline data may be determined from previous resource usage data associated with a resource and particular transaction (a resource-transaction pair). The baseline values are compared to actual values to determine a deviation for the actual value. Deviation information for the time series data can be reported through an interface or some other manner.08-06-2009
20090199194Mechanism to Prevent Illegal Access to Task Address Space by Unauthorized Tasks - A method and data processing system for tracking global shared memory (GSM) operations to and from a local node configured with a host fabric interface (HFI) coupled to a network fabric. During task/job initialization, the system OS assigns HFI window(s) to handle the GSM packet generation and GSM packet receipt and processing for each local task. HFI processing logic automatically tags each GSM packet generated by the HFI window with a global job identifier (ID) of the job to which the local task is affiliated. The job ID is embedded within each GSM packet placed on the network fabric. On receipt of a GSM packet from the network fabric, the HFI logic retrieves the embedded job ID and compares the embedded job ID with the ID within the HFI window(s). GSM packets are forwarded to an HFI window only when the embedded job ID matches the HFI window's job ID.08-06-2009
20090199197Wake-and-Go Mechanism with Dynamic Allocation in Hardware Private Array - A wake-and-go mechanism is provided for a data processing system. When a thread is waiting for an event, rather than performing a series of get-and-compare sequences, the thread updates a wake-and-go array with a target address associated with the event. The wake-and-go mechanism may save the state of the thread in a hardware private array. The hardware private array may comprise a plurality of memory cells embodied within the processor or pervasive logic associated with the bus, for example. Alternatively, the hardware private array may be embodied within logic associated with the wake-and-go storage array.08-06-2009
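The wake-and-go array above replaces get-and-compare polling with a table keyed by target address; a minimal behavioral sketch (the dictionary-based structure and names are illustrative, standing in for the hardware private array):

```python
class WakeAndGoArray:
    """Threads park a saved state against a target address; a store to that
    address releases them, instead of each thread polling the value."""
    def __init__(self):
        self.waiters = {}   # target address -> list of saved thread states

    def wait_on(self, address, state):
        # rather than spinning in a get-and-compare loop, save state and sleep
        self.waiters.setdefault(address, []).append(state)

    def on_store(self, address):
        # a write to a watched address wakes every thread parked on it
        return self.waiters.pop(address, [])

arr = WakeAndGoArray()
arr.wait_on(0x40, "thread-1 state")
woken = arr.on_store(0x40)
```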
20090199195Generating and Issuing Global Shared Memory Operations Via a Send FIFO - A method for issuing global shared memory (GSM) operations from an originating task on a first node coupled to a network fabric of a distributed network via a host fabric interface (HFI). The originating task generates a GSM command within an effective address (EA) space. The task then places the GSM command within a send FIFO. The send FIFO is a portion of real memory having real addresses (RA) that are memory mapped to EAs of a globally executing job. The originating task maintains a local EA-to-RA mapping of only a portion of the real address space of the globally executing job. The task enables the HFI to retrieve the GSM command from the send FIFO into an HFI window allocated to the originating task. The HFI window generates a corresponding GSM packet containing GSM operations and/or data, and the HFI window issues the GSM packet to the network fabric.08-06-2009
20090199192Resource scheduling apparatus and method - Embodiments of the invention are concerned with allocating resources to tasks and have particular application to situations where the availability of resources and the tasks to be performed change dynamically and the resources are mobile.08-06-2009
20090083747METHOD FOR MANAGING APPLICATION PROGRAMS BY UTILIZING REDUNDANCY AND LOAD BALANCE - A method for managing application programs includes: monitoring whether there is at least an application program which is unresponsive in a plurality of started application programs; and automatically restarting the application program which is unresponsive, and evenly allocating a system resource among the plurality of application programs according to the number of the plurality of application programs.03-26-2009
20120291043Minimizing Resource Latency Between Processor Application States In A Portable Computing Device By Using A Next-Active State Set - Resource state sets of a portable computing device are managed. A sleep set of resource states, an active set of resource states and a next-active set of resource states are maintained in memory. A request may be issued for a processor to enter into a sleep state or otherwise change from one application state corresponding to one resource state set to another application state corresponding to another application state set. This causes a controller to review a trigger set to determine if a shut down condition for the processor matches one or more conditions listed in the trigger set. If a trigger set matches a shut down condition, then switching states of one or more resources in accordance with the sleep set may be made by the controller. Providing a next-active set of resource states that is immediately available to the processor upon a wake-up event helps minimize resource latency.11-15-2012
20090049448Grid Non-Deterministic Job Scheduling - The present invention is a method for scheduling jobs in a grid computing environment, without having to monitor the state of the resources on the grid, comprising a Global Scheduling Program (GSP) and a Local Scheduling Program (LSP). The GSP receives jobs submitted to the grid and distributes the job to the closest resource. The resource then runs the LSP to determine if the resource can execute the job under the conditions specified in the job. The LSP either rejects or accepts the job based on the current state of the resource properties and informs the GSP of the acceptance or rejection. If the job is rejected, the GSP randomly selects another resource to send the job to using a resource table. The resource table contains the state-independent properties of every resource on the grid.02-19-2009
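The GSP/LSP split above, where the global scheduler never tracks resource state and a local program accepts or rejects each offer, might look like this sketch (the resource fields and acceptance rule are hypothetical):

```python
import random

def local_accept(resource, job):
    """LSP: decide from the resource's *current* state, known only locally."""
    return resource["free_cpus"] >= job["cpus"]

def schedule(job, resources, seed=0):
    """GSP: offer the job to the closest resource; on rejection, retry
    randomly chosen entries from the state-independent resource table."""
    rng = random.Random(seed)
    by_distance = sorted(resources, key=lambda r: r["distance"])
    first, rest = by_distance[0], by_distance[1:]
    if local_accept(first, job):
        return first["name"]
    while rest:
        pick = rng.choice(rest)
        rest.remove(pick)
        if local_accept(pick, job):
            return pick["name"]
    return None    # no resource on the grid accepted the job

resources = [{"name": "near", "distance": 1, "free_cpus": 0},
             {"name": "far", "distance": 9, "free_cpus": 4}]
placed = schedule({"cpus": 2}, resources)
```

The random retry is what makes the scheme non-deterministic: the global side needs only the static resource table, never a live state feed.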
20110145829PERFORMANCE COUNTER INHERITANCE - A system for providing performance counter inheritance includes an operating system that receives a request of a first application to monitor performance of a second application, the request identifying an event to monitor during the execution of a task associated with the second application. The operating system causes a task counter corresponding to the event to be activated, and automatically activates a child task counter for each child task upon receiving a notification that execution of a corresponding child task is starting. Further, the operating system adds a value of each child task counter to a value of the task counter to determine a total counter value for the task, and provides the total counter value of the task to the first application.06-16-2011
20090064158MULTI-CORE RESOURCE UTILIZATION PLANNING - Techniques for multi-core resource utilization planning are provided. An agent is deployed on each core of a multi-core machine. The agents cooperate to perform one or more tests. The tests result in measurements for performance and thermal characteristics of each core and each communication fabric between the cores. The measurements are organized in a resource utilization map and the map is used to make decisions regarding core assignments for resources.03-05-2009
20090055831Allocating Network Adapter Resources Among Logical Partitions - In an embodiment, a network adapter has a physical port that is multiplexed to multiple logical ports, which have default queues. The adapter also has other queues, which can be allocated to any logical port, and resources, which map tuples to queues. The tuples are derived from data in packets received via the physical port. The adapter determines which queue should receive a packet based on the received tuple and the resources. If the received tuple matches a resource, then the adapter stores the packet to the corresponding queue; otherwise, the adapter stores the packet to the default queue for the logical port specified by the packet. In response to receiving an allocation request from a requesting partition, if no resources are idle, a resource is selected for preemption that is already allocated to a selected partition. The selected resource is then allocated to the requesting partition.02-26-2009
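The tuple-to-queue routing with a per-logical-port default queue, as described above, reduces to a lookup with a fallback; a sketch with hypothetical tuple fields and queue names:

```python
def route(packet, allocated, default_queue):
    """Match the tuple derived from the packet against allocated resources;
    on a miss, use the default queue of the packet's logical port."""
    tup = (packet["src_ip"], packet["dst_ip"], packet["dst_port"])
    if tup in allocated:
        return allocated[tup]
    return default_queue[packet["logical_port"]]

# resources currently allocated to partitions: tuple -> queue
allocated = {("10.0.0.1", "10.0.0.2", 443): "queue-7"}
# default queue per logical port
default_queue = {"lp0": "queue-0", "lp1": "queue-1"}

hit = route({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 443,
             "logical_port": "lp0"}, allocated, default_queue)
miss = route({"src_ip": "10.0.0.9", "dst_ip": "10.0.0.2", "dst_port": 80,
              "logical_port": "lp1"}, allocated, default_queue)
```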
20090083750METHOD AND APPARATUS FOR CONTROLLING MESSAGE TRAFFIC LICENSE - The present invention relates to a method and an apparatus for controlling message traffic licenses. The method includes: controlling message traffic through an ordinary license; judging whether the triggering conditions of using the first extended license are fulfilled, and, if the triggering conditions are fulfilled, using the first extended license to control the message traffic. The apparatus includes: a license management module, adapted to switch between the licenses according to the triggering conditions of the message traffic license; and a control module, adapted to control the message traffic by using the license selected by the license management module. The method and the apparatus for controlling message traffic licenses provided in an embodiment of the present invention perform hierarchical control on the short message traffic to overcome waste of system resources in the prior art caused by unitary setting of the maximum traffic and reduce the system resources occupied by invalid license traffic in the Short Message Service Center (SMSC).03-26-2009
20090083749RESTRICTING RESOURCES CONSUMED BY GHOST AGENTS - One aspect of the present invention can include a method for restricting resources consumed by ghost agents. The method can include the step of associating a ghost agent with a host. A resource utilization value can be ascertained for the ghost agent and the host combined. The ascertained resource utilization value can be compared with a usage threshold. A determination can be made as to whether operations of the ghost agent are to be executed based upon the previous comparison.03-26-2009
20090083748PROGRAM EXECUTION DEVICE - A resource information acquiring unit acquires processor resource information from outside. A program associating unit associates the processor resource information with a program. A processor resource allocating unit allocates processor resources to the program in accordance with the processor resource information when the program is executed.03-26-2009
20090064160Transparent lazy maintenance of indexes and materialized views - Described herein is a materialized view or index maintenance system that includes a task generator component that receives an indication that an update transaction has committed against a base table in a database system. The task generator component, in response to the update transaction being received, generates a maintenance task for one or more of a materialized view or an index that is affected by the update transaction. A maintenance component transparently performs the maintenance task when a workload of a CPU in the database system is below a threshold or when an indication is received that a query that uses the one or more of the materialized view or the index has been received.03-05-2009
20080263561Information processing apparatus, computer and resource allocation method - The present invention provides a new resource allocation technique that allows each partition to surely and automatically, without using manpower, use a proper amount of resources in accordance with the load, when a structure is employed in which the inside of a computer is divided into a plurality of partitions and each partition performs data processing using the allocated resources. A storage unit stores schedule information, prepared for each partition, describing what amount of resources is allocated to which time range or period. In consideration of the fact that the usage of resources can often be figured out in advance, the present invention obtains from the storage unit the amount of resources stored in association with the time range to which the current time belongs, and controls such that each partition uses the obtained amount of resources to perform data processing.10-23-2008
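The time-range lookup above, where each partition's allocation is read from a pre-stored schedule for the current time, is essentially the following (the schedule layout and numbers are illustrative assumptions):

```python
def resources_for(schedule, partition, hour):
    """Return the resource amount whose time range covers the given hour."""
    for start, end, amount in schedule[partition]:
        if start <= hour < end:
            return amount
    return 0

schedule = {"batch": [(0, 8, 16),     # heavy allocation overnight
                      (8, 20, 4),     # scaled back during business hours
                      (20, 24, 16)]}
night = resources_for(schedule, "batch", 2)
day = resources_for(schedule, "batch", 9)
```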
20080263560STRUCTURE FOR SECURING LEASED RESOURCES ON A COMPUTER - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is for securing of leased resources on a computer. The design structure includes a computer for securing resources that may comprise at least one processor, a plurality of resources, wherein each resource is associated with configuration data, and a programmable logic device connected to each of the plurality of resources. The programmable logic device may be configured for determining whether a resource is leased, reading un-encoded configuration data from a resource, and sending the configuration data to a first unit, if the resource is not leased. The programmable logic device may further be configured for reading encoded configuration data from a resource, decoding the configuration data, sending the configuration data that was decoded to a first unit, and logging use of the resource by the first unit, if the resource is leased.10-23-2008
20080263558METHOD AND APPARATUS FOR ON-DEMAND RESOURCE ALLOCATION AND JOB MANAGEMENT - The invention is a method and apparatus for on-demand resource planning for unified messaging services. In one embodiment, multiple clients are served by a single system, and existing system resources are allocated among all clients in a manner that optimizes system output and service provider profit without the need to increase system resources. In one embodiment, resource allocation and job scheduling are guided by individual service level agreements between the service provider and the clients that dictate minimum service levels that must be achieved by the system. Jobs are processed in a manner that at least meets the specified service levels, and the benefit or profit derived by the service provider is maximized by prioritizing incoming job requests within the parameters of the specified service levels while meeting the specified service levels. Thus, operation and hardware costs remain substantially unchanged, while system output and profit are maximized.10-23-2008
20080263557SCHEDULING METHOD AND SYSTEM, CORRESPONDING COMPUTATIONAL GRID AND COMPUTER PROGRAM PRODUCT - A scheduler device schedules executions of jobs using resources of a computational grid. The scheduler is configured for identifying an equilibrium threshold between resources and jobs. Below the equilibrium threshold, the scheduler schedules the execution of the jobs using the resources of the computational grid according to Pareto-optimal strategies. Above the equilibrium threshold, the scheduler schedules the execution of the jobs using the resources of the computational grid according to Nash-equilibrium strategies.10-23-2008
20080263556REAL-TIME SYSTEM EXCEPTION MONITORING TOOL - Techniques for monitoring resources of a computer system are provided. A monitoring process collects and reports utilization data for one or more resources of a computer system, such as CPU, memory, disk I/O, and network I/O. Instead of reporting just an average of the collected data over a period of time (e.g., 10 seconds), the monitoring process at least reports individually collected resource utilization values. If one or more of the utilization values exceed specified thresholds for the respective resources, then an alert may be generated. In one approach, the monitoring process is made a real-time priority process in the computer system to ensure that the memory used by the monitoring process is not swapped out of memory. Also, being a real-time priority process ensures that the monitoring process obtains a CPU in order to collect resource utilization data even when the computer system is in a starvation mode.10-23-2008
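The point of reporting individual samples rather than only a period average, as described above, is that a transient spike can clear the threshold while the average stays quiet; a small illustration (sample values and threshold are made up):

```python
def check_samples(samples, threshold):
    """Alert on any individual utilization sample over the threshold,
    not just on the period average."""
    alerts = [s for s in samples if s > threshold]
    average = sum(samples) / len(samples)
    return average, alerts

# one brief 95% CPU spike inside an otherwise idle 5-sample window
avg, alerts = check_samples([10, 12, 95, 11, 9], threshold=80)
```

An average-only monitor would see 27.4% and report nothing; the per-sample check still raises the alert.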
20110231858BURST ACCESS PROTOCOL - Methods and systems provide a burst access protocol that enables efficient transfer of data between a first and a second processor via a data interface whose access set up time could present a communication bottleneck. Data, indices, and/or instructions are transmitted in a static table from the first processor and stored in memory accessible to the second processor. Later, the first processor transmits to the second processor a dynamic table which specifies particular data, indices and/or instructions within the static table that are to be implemented by the second processor. The second processor uses the dynamic table to implement the identified particular subset of data, indices and/or instructions. By transmitting the bulk of data, indices and/or instructions to the second processor in a large static table, the burst access protocol enables efficient use of data interfaces which can transmit large amounts of information, but require relatively long access setup times.09-22-2011
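The static-table/dynamic-table split above can be modeled in a few lines; the table contents and method names are hypothetical stand-ins for the protocol's payloads:

```python
class SecondProcessor:
    """Receives one large static table up front, then small dynamic tables
    that merely select which stored entries to carry out."""
    def __init__(self):
        self.static_table = {}

    def load_static(self, table):
        # one bulk burst amortizes the interface's long access set-up time
        self.static_table = dict(table)

    def apply_dynamic(self, indices):
        # a dynamic table only names which stored entries to implement
        return [self.static_table[i] for i in indices]

proc = SecondProcessor()
proc.load_static({0: "init", 1: "scale", 2: "filter"})
result = proc.apply_dynamic([2, 0])
```

Each later exchange costs only a short index list instead of another full payload over the slow-to-open interface.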
20110231859PROCESS ASSIGNING DEVICE, PROCESS ASSIGNING METHOD, AND COMPUTER PROGRAM - A process assigning device executes an operation including: receiving an assignment request including device identification information, content identification information, and process identification information; determining, on the basis of the content identification information indicated by the received assignment request, whether identification information of another device exists; and, when determining that the identification information of the other device does not exist, storing the device identification information included in the assignment request in association with the content identification information and the process identification information, and the process identification information in association with the device identification information. When the processor determines that identification information of the other device exists, the processor causes the device identification information included in the assignment request to be stored together with assigned-part information indicating the part of the content data that varies by device identification information.09-22-2011
20110231857CACHE PERFORMANCE PREDICTION AND SCHEDULING ON COMMODITY PROCESSORS WITH SHARED CACHES - A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies being a representation of an amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts.09-22-2011
20090100435HIERARCHICAL RESERVATION RESOURCE SCHEDULING INFRASTRUCTURE - Scheduling system resources. A system resource scheduling policy for scheduling operations within a workload is accessed. The policy is specified on a workload basis such that the policy is specific to the workload. System resources are reserved for the workload as specified by the policy. Reservations may be hierarchical in nature where workloads are also hierarchically arranged. Further, dispatching mechanisms for dispatching workloads to system resources may be implemented independent from policies. Feedback regarding system resource use may be used to determine policy selection for controlling dispatch mechanisms.04-16-2009
20090055834VIRTUALIZATION PLANNING SYSTEM - An interactive virtualization management system provides an assessment of proposed or existing virtualization schemes. A Virtual Technology Overhead Profile (VTOP) is created for each of a variety of configurations of host computer systems and virtualization technologies by measuring the overhead experienced under a variety of conditions. The multi-variate overhead profile corresponding to each target configuration being evaluated is used by the virtualization management system to determine the overhead that is to be expected on the target system, based on the particular set of conditions at the target system. Based on these overhead estimates, and the parameters of the jobs assigned to each virtual machine on each target system, the resultant overall performance of the target system for meeting the performance criteria of each of the jobs in each virtual machine is determined, and over-committed virtual machines and computer systems are identified.02-26-2009
20090222836System and Method for Implementing a Management Component that Exposes Attributes - Software for providing a management interface comprises a descriptor file comprising at least one type for at least one resource and further comprising at least one attribute for each type. A management component associated with one of the resources describes at least one of the types. The management component is operable to provide a management interface exposing at least one of the attributes associated with each of the one or more types describing the resource.09-03-2009
20090222832SYSTEM AND METHOD OF ENABLING RESOURCES WITHIN AN INFORMATION HANDLING SYSTEM - A system and method of enabling resources within an information handling system is disclosed. In one form, an information handling system can include an event detection module operable to detect user initiated events and non-user initiated events. The information handling system can also include a resource allocation module coupled to the event detection module. In one form, the resource allocation module can be operable to map a first detected event to a first operating state of a first processing system. The information handling system can also include a second processing system responsive to the resource allocation module and operable to access a shared resource of the first processing system. The resource allocation module can be configured to initiate an outputting of information intended to be output by the second processing system using a shared resource of the first processing system.09-03-2009
20090204971AUTOMATED ACCESS POLICY TRANSLATION - The use of one resource access policy to populate a second resource access policy. One or more fields of the first resource access policy are each to be used to populate corresponding one or more fields of the second resource access policy. After identifying the field(s) of the first resource access policy, and identifying their corresponding fields of the second resource access policy, the information from the source fields of the first resource access policy is then used to populate the destination fields of the second resource access policy. This may be done in an automated fashion, thereby allowing for at least the possibility of the transition from one type of resource access security to another.08-13-2009
20090210880SYSTEMS AND METHODS FOR MANAGING SEMANTIC LOCKS - In one embodiment, a system for managing semantic locks and semantic lock requests for a resource is provided. Access to the resource is controlled such that compatible lock requests can access the resource and incompatible lock requests are queued.08-20-2009
20090199193SYSTEM AND METHOD FOR MANAGING A HYBRID COMPUTE ENVIRONMENT - Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with the first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system, after provisioning the second operating system, polling at least one signal from the resource manager associated with the at least one node, processing at least one signal from the second resource manager associated with the at least one node and consuming resources associated with the at least one node having the second operating system provisioned thereon.08-06-2009
20090106763ASSOCIATING JOBS WITH RESOURCE SUBSETS IN A JOB SCHEDULER - A method, information processing system, and computer program storage product for associating jobs with resource subsets in a job scheduler. At least one job class that defines characteristics associated with a type of job is received. A list of resource identifiers for a set of resources associated with the job class is received. A set of resources available on at least one information processing system is received. The resource identifiers are compared with each resource in the set of resources available on the information processing system. A job associated with the job class is scheduled with a set of resources determined to be usable by the job based on the comparing.04-23-2009
20120198467System and Method for Enforcing Future Policies in a Compute Environment - A disclosed system receives a request for resources, generates a credential map for each credential associated with the request, the credential map including a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with a first type of resource mapping of all the generated credential maps and generates a second composite intersecting map that intersects the resource availability map and a second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map.08-02-2012
20080276244SYSTEM AND METHOD FOR ADAPTIVELY COLLECTING PERFORMANCE AND EVENT INFORMATION - A method for communicating information from a first computing node to at least one of the following: a storage device and a second computing node. The first computing node is monitored to collect at least one estimate of available resources, and based on this estimate, an amount of data collected is modified. Then, the modified data is sent to at least one of the following: the storage device and the second computing node. This invention also provides for the determination of an optimum batch size for aggregating data wherein, for a number of batch sizes, costs are estimated for sending batched information to persistent storage and for losing batched data. Then, the optimum batch size is selected from the number of different batch sizes based on sums of these costs. This invention also provides for selective compression of data, wherein it is determined which of a number of compression algorithms do not incur an overhead that exceeds available resources. Then, one of the determined algorithms is selected to maximize compression.11-06-2008
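The batch-size selection described in this entry (pick the size minimizing the sum of the estimated cost of sending a batch to persistent storage and the estimated cost of losing a batch) can be sketched as follows; the cost models and values are illustrative assumptions, not from the application.

```python
def optimum_batch_size(sizes, send_cost, loss_cost):
    """Return the batch size whose summed send-and-loss cost is smallest."""
    return min(sizes, key=lambda b: send_cost(b) + loss_cost(b))

# Assumed cost models: per-record send overhead shrinks as the batch grows,
# while the expected cost of losing an unsent batch grows with its size.
send = lambda b: 100.0 / b
lose = lambda b: 2.0 * b
best = optimum_batch_size(range(1, 21), send, lose)
print(best)  # -> 7
```

With these models the two costs trade off against each other, and the minimum of their sum falls at a moderate batch size rather than at either extreme.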
20090217283SYSTEM UTILIZATION THROUGH DEDICATED UNCAPPED PARTITIONS - Improving system resource utilization in a data processing system is provided. A determination is made as to whether there is at least one ceded virtual processor in a plurality of virtual processors in a shared resource pool. Responsive to existence of the at least one ceded virtual processor, a determination is made as to whether there is at least one dedicated logical partition configured for a hybrid mode. Responsive to identifying at least one hybrid configured dedicated logical partition, a determination is made as to whether the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles. If the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles, the at least one ceded virtual processor is deallocated from the plurality of virtual processors and allocated to a surrogate resource pool for use by the at least one hybrid configured dedicated logical partition.08-27-2009
20090222835Operating System for a Chip Card Comprising a Multi-Tasking Kernel - The invention relates to a method for operating a chip card (C), a microprocessor for being inserted into the chip card (C) and a computer program product, as well as a method for manufacturing and/or for maintaining a chip card (C) which is operated with the help of a method described above. Here central multi-tasking kernel (MTK) is provided, which controls the entire operation of the chip card (C), so that there can be activated a plurality of application programs (A) on the chip card (C) at the same time, an application program (A) also being able to realize security technical functions for the chip card (C).09-03-2009
20090222833CODELESS PROVISIONING SYNC RULES - Managing resources. A computing environment may include a resource manager. The resource manager includes programmatic code for managing resources. Expected rule entries are added to an expected rules list. Each of the expected rule entries includes: an indicator used to identify a synchronization rule, a definition of flow type, a specification of an object type in the resource manager to which the synchronization rule applies, a specification of a downstream resource system, a specification of an object type in the downstream resource system to which the synchronization rule applies, a specification of relationship criteria including one or more conditions for linking objects in the resource manager and the downstream resource system, and a specification of attribute flow information. Objects in downstream resource systems can be synchronized with objects in the resource manager based on the expected rule entries in the expected rules list.09-03-2009
20090235266Operating System and Augmenting Operating System and Method for Same - A method for determining status of system resources in a computer system includes loading a first operating system into a first memory, wherein the first operating system discovers system resources and reserves a number of the system resources for use of an augmenting operating system, loading the augmenting operating system into a second memory reserved for the augmenting operating system by the first operating system, accessing the first memory from the augmenting operating system and obtaining data, running a process on the augmenting operating system to perform a computation using the data obtained from the first memory, and outputting the results of the computation using the system resources reserved for the augmenting operating system.09-17-2009
20120131590MANAGING VIRTUAL FUNCTIONS OF AN INPUT/OUTPUT ADAPTER - A computer implemented method may include identifying allocations for each virtual function of a plurality of virtual functions that are provided via an input/output adapter. The computer implemented method may further include determining a range associated with each group of a plurality of groups based on the identified allocations. The computer implemented method may also include associating each virtual function with a group of the plurality of groups based on the range associated with the group. Where at least one group of the plurality of groups is empty, and where one or more groups of the plurality of groups has two or more virtual functions associated with the one or more groups, the computer implemented method may include distributing the two or more virtual functions to the at least one empty group. The computer implemented method may further include transferring the plurality of virtual functions from each group to a corresponding category at the input/output adapter.05-24-2012
20090222834CODELESS PROVISIONING - Managing resources. A resource manager includes programmatic code for managing resources in the computing environment. Resources available from resource systems within the computing environment are managed. Methods may include receiving user input indicating one or more of that a new entity should be added to the resource manager, that an entity represented by an entity object of the resource manager should have permissions removed at the resource manager, or that an entity represented by an entity object of the resource manager should have permissions added at the resource manager. In response to receiving user input, events may be generated and objects created or removed from the resource manager or from downstream resource systems. The events may specify workflows that should be executed to perform synchronization between objects at the resource manager and objects at a downstream resource system by adding or changing rules in an expected rules list.09-03-2009
20130219403METHOD AND SYSTEM FOR MANAGING RESOURCE CONNECTIONS - Methods and system for managing resource connections are described. In one embodiment, an initial user request to access data stored at a resource is received. The initial user request is generated by an application of a plurality of applications having access to the resource. An existing connection from the application is utilized to provide the data to the application. A current user request to access data stored at the resource is received. Based on a determination that the existing connection is unavailable, the current user request is assigned to a waiter queue. A number of requests assigned to the waiter queue during a pre-defined time period is determined to exceed a threshold. A new connection from the application to the resource is created based on the availability of a further connection to the resource and the exceeding of the threshold.08-22-2013
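The waiter-queue policy in this entry (queue requests when the existing connection is unavailable, and open a new connection only when the number queued within a time window exceeds a threshold) can be sketched as below; the class name, parameters, and windowing details are assumptions for illustration.

```python
import collections

class ConnectionPool:
    """Sketch: grow the pool only when waiter-queue pressure exceeds a
    threshold within a time window and a further connection is available."""

    def __init__(self, max_connections=4, threshold=2, window=1.0):
        self.max_connections = max_connections
        self.threshold = threshold
        self.window = window
        self.connections = 1
        self.waiters = collections.deque()  # timestamps of queued requests

    def request(self, now):
        # Drop waiters that fall outside the sliding time window.
        while self.waiters and now - self.waiters[0] > self.window:
            self.waiters.popleft()
        self.waiters.append(now)
        # Threshold exceeded within the window: create a new connection.
        if len(self.waiters) > self.threshold and self.connections < self.max_connections:
            self.connections += 1

pool = ConnectionPool()
for t in (0.0, 0.1, 0.2):
    pool.request(t)
print(pool.connections)  # -> 2
```

Three requests arrive within the one-second window, exceeding the threshold of two, so the pool opens a second connection.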
20110113434METHOD, SYSTEM, AND STORAGE MEDIUM FOR MANAGING COMPUTER PROCESSING FUNCTIONS - Exemplary embodiments include a system and storage medium for managing computer processing functions in a multi-processor computer environment. The system includes a physical processor, a standard logical processor, an assist logical processor sharing a same logical partition as the standard logical processor, and a single operating system instance associated with the logical partition, the single operating system instance including a switch-to service and a switch-from service. The system also includes a dispatch component managed by the single operating system instance. Upon invoking the switch-to service by standard code, the switch-to service checks to see if an assist logical processor is online and, if so, it updates an integrated assist field of a work element block associated with the task for indicating the task is eligible to be executed on the assist logical processor. The switch-to service also assigns a work queue to the work element block.05-12-2011
20100175069DATA PROCESSING DEVICE, SCHEDULER, AND SCHEDULING METHOD - The present invention comprises: a unit time calculating unit for calculating, as a unit time, the greatest common divisor of the individual operating cycles of a plurality of programs; an allocating unit for allocating the individual operating cycles of the plurality of programs into each of a plurality of continuous base periods that each have their respective unit times, in sequence beginning with the shortest operating cycle, and for allocating the operating cycles of remaining programs for which the operations have not been completed during one of the plurality of base periods into remaining base periods, in sequence beginning with the shortest operating cycles; and an operating unit for running the plurality of programs that are allocated to operating times.07-08-2010
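The unit-time calculation in this entry (the greatest common divisor of the programs' operating cycles, with allocation ordered shortest cycle first) can be sketched as follows; the cycle values are illustrative, not from the application.

```python
from functools import reduce
from math import gcd

def unit_time(cycles):
    """Unit time: the greatest common divisor of all operating cycles."""
    return reduce(gcd, cycles)

def allocation_order(cycles):
    """Allocate in sequence beginning with the shortest operating cycle."""
    return sorted(cycles)

# Example: operating cycles of 20, 30 and 50 ms give a 10 ms unit time.
print(unit_time([20, 30, 50]))        # -> 10
print(allocation_order([50, 20, 30])) # -> [20, 30, 50]
```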
20100153961STORAGE SYSTEM HAVING PROCESSOR AND INTERFACE ADAPTERS THAT CAN BE INCREASED OR DECREASED BASED ON REQUIRED PERFORMANCE - A storage system is comprised of an interface unit 06-17-2010
20080313643WORKLOAD SCHEDULER WITH CUMULATIVE WEIGHTING INDEXES - A workload scheduler supporting the definition of a cumulative weighting index is proposed. The scheduler maintains (12-18-2008
20080313641Computer system, method and program for managing volumes of storage system - Provided is a computer system including a host computer, a storage system, and a management computer, in which the storage system receives data I/O request to virtual logical volumes and data I/O request to one or more real logical volumes, each of the virtual logical volumes is allocated to one of one or more pools, storage areas of physical storage systems are allocated to all storage areas defined as the pools, and when a performance problem has occurred in one of the virtual logical volumes, the management computer selects the one of the virtual logical volumes, and selects a pool other than the pool to which the selected virtual logical volume is allocated and the real logical volumes as a migration destination of the selected virtual logical volume, to thereby prevent a performance problem from being caused by interference among the virtual logical volumes sharing the pool.12-18-2008
20080313640Resource Modeling and Scheduling for Extensible Computing Platforms - Energy management modeling and scheduling techniques are described for reducing the power consumed to execute an application on a multi-processor computing platform within a certain time period. In one embodiment, a sophisticated resource model which accounts for discrete operating modes for computing components/resources on a computing platform and transition costs for transitioning between each of the discrete modes is described. This resource model provides information for a specific heterogeneous multi-processor computing platform and an application being implemented on the platform in a form that can be processed by a selection module, typically utilizing an integer linear programming (ILP) solver or algorithm, to select a task schedule and operating configuration(s) for executing the application within a given time.12-18-2008
20080313639POLICY BASED SCHEDULING OF SOFTWARE APPLICATIONS - A method and apparatus for using policies to limit resource usage by software applications is disclosed herein. The policies define rules that specify a maximum amount of a resource that a particular application is allowed to use given the current state of the computer system, in one embodiment. The state can be defined based on conditions such as user activity, resource usage, time of day, etc. A scheduler monitors the computer system and the application and enforces the policies to control the resource usage of each application. If the scheduler determines that an application has been using more of a particular resource than is allowed then the scheduler takes some action to reduce resource usage until actual resource usage is at or below allowed resource usage. Each application has its own set of policies associated that allow the application to define rules to limit resource usage, in one embodiment.12-18-2008
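The policy enforcement in this entry (each application has rules giving the maximum resource usage allowed for the current system state, and the scheduler flags applications exceeding them) can be sketched as below; the application names, state fields, and limits are illustrative assumptions.

```python
def enforce_policies(usages, policies, state):
    """Return the applications whose resource usage exceeds what their
    policy allows in the current system state."""
    over = []
    for app, used in usages.items():
        allowed = policies[app](state)  # policy: state -> max allowed usage
        if used > allowed:
            over.append(app)
    return over

policies = {
    # Assumed rule: the indexer may use 80% CPU at night, otherwise 20%.
    "indexer": lambda s: 80 if s["time_of_day"] == "night" else 20,
    "browser": lambda s: 50,
}
usages = {"indexer": 35, "browser": 40}
print(enforce_policies(usages, policies, {"time_of_day": "day"}))  # -> ['indexer']
```

The same usage is acceptable at night, since the state-dependent policy then permits 80%; a real scheduler would act on the returned list to throttle the offending application.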
20100162259VIRTUALIZATION-BASED RESOURCE MANAGEMENT APPARATUS AND METHOD AND COMPUTING SYSTEM FOR VIRTUALIZATION-BASED RESOURCE MANAGEMENT - A computing system for virtualization-based resource management includes a plurality of physical machines, a plurality of virtual machines and a management virtual machine. The virtual machines are configured by virtualizing each of the plurality of physical machines. The management virtual machine is located at any one of the plurality of physical machines. The management virtual machine monitors amounts of network resources utilized by the plurality of physical machines and time costs of the plurality of virtual machines, and performs a resource reallocation and a resource reclamation.06-24-2010
20100162256OPTIMIZATION OF APPLICATION POWER CONSUMPTION AND PERFORMANCE IN AN INTEGRATED SYSTEM ON A CHIP - A method for determining an operating point of a shared resource. The method includes receiving indications of access demand to a shared resource from each of a plurality of functional units and determining a maximum access demand from among the plurality of functional units based on their respective indications. The method further includes determining a required operating point of the shared resource based on the maximum access demand, wherein the shared resource is shared by each of the plurality of functional units, comparing the required operating point to a present operating point of the shared resource, and changing to the required operating point from the present operating point if the required and present operating points are different.06-24-2010
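The selection of a shared resource's operating point in this entry (take the maximum access demand across the functional units, pick the operating point that satisfies it, and change only if it differs from the present one) can be sketched as below; the demand and operating-point values are illustrative assumptions.

```python
def required_operating_point(demands, operating_points):
    """Lowest operating point satisfying the maximum demand reported
    by the functional units sharing the resource."""
    peak = max(demands)
    for point in sorted(operating_points):
        if point >= peak:
            return point
    return max(operating_points)  # demand exceeds every point: run flat out

present = 200
required = required_operating_point([120, 340, 90], [100, 200, 400, 800])
if required != present:       # change only when the points differ
    present = required
print(present)  # -> 400
```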
20090260015SOFTWARE PIPELINING - A software pipelining method for generating a schedule for executing a plurality of instructions on a processor, the plurality of instructions involving one or more variables, the processor having one or more physical registers, the method comprising the step of scheduling each of the plurality of instructions, determining whether there is a variable for which there is less than a threshold number of physical registers to which that variable may be allocated, and unscheduling a currently scheduled instruction when there is a variable for which there is less than the threshold number of physical registers to which that variable may be allocated.10-15-2009
20080307427Methods and apparatus for channel interleaving in OFDM systems - A method and apparatus for channel interleaving in a wireless communication system. In one aspect of the present invention, the data resource elements are assigned to multiple code blocks, and the numbers of data resource elements assigned to each code block are substantially equal. In another aspect of the present invention, a time-domain-multiplexing-first (TDM-first) approach and a frequency-domain-multiplexing-first (FDM-first) approach are proposed. In the TDM-first approach, at least one of a plurality of code blocks are assigned with a number of consecutive data carrying OFDM symbols. In the FDM-first approach, at least one of the plurality of code blocks are assigned with all of the data carrying OFDM symbols. Either one of the TDM-first approach and the FDM-first approach may be selected in dependence upon the number of the code blocks, or the transport block size, or the data rate.12-11-2008
20080307426DYNAMIC LOAD MANAGEMENT IN HIGH AVAILABILITY SYSTEMS - Techniques for dynamic load management in processing systems are described. Tuples or vectors, for example, can be used to characterize loads and capacities. Assignments of tasks and redistribution of tasks in the system can be made using the tuples or vectors.12-11-2008
20080307425Data Processing System and Method - A data processing system and method for reallocating resources among execution environments of the system. The reallocation of resources is performed by monitoring the utilization of the resource to determine whether or not the utilization has a predetermined relationship with a utilization measure and is thereby unacceptable, and, based upon this determination, reassigning the resource associated with a first execution environment to a second execution environment. The utilization measure is associated with the load of the processor or the utilization of the memory.12-11-2008
20100153960METHOD AND APPARATUS FOR RESOURCE MANAGEMENT IN GRID COMPUTING SYSTEMS - A method for resource management in grid computing systems includes defining a user's demands on execution of a task as SLA (Service Level Agreements) information; monitoring states of resources in a grid to store the states as resource state information; calculating for each resource in the grid, based on the resource state information, an expected completion time of the task and an expected profit to be obtained by completing the task; creating an available resource cluster by using the expected completion time and the expected profit; and determining, if the SLA information is satisfied by the available resource cluster, a task processing policy for executing the task by using at least one resource in the available resource cluster. The available resource cluster is a set of resources having the expected completion time within a deadline of the task and the expected profit being positive.06-17-2010
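The available-resource-cluster construction in this entry (keep resources whose expected completion time is within the task's deadline and whose expected profit is positive) reduces to a filter; the field names and node data below are illustrative assumptions.

```python
def available_cluster(resources, deadline):
    """Resources with expected completion within the deadline and
    a positive expected profit."""
    return [r for r in resources
            if r["expected_completion"] <= deadline and r["expected_profit"] > 0]

resources = [
    {"name": "node-a", "expected_completion": 40, "expected_profit": 5},
    {"name": "node-b", "expected_completion": 90, "expected_profit": 7},   # too slow
    {"name": "node-c", "expected_completion": 30, "expected_profit": -2},  # unprofitable
]
cluster = available_cluster(resources, deadline=60)
print([r["name"] for r in cluster])  # -> ['node-a']
```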
20100153959CONTROLLING AND DYNAMICALLY VARYING AUTOMATIC PARALLELIZATION - A system and method for automatically controlling run-time parallelization of a software application. A buffer is allocated during execution of program code of an application. When a point in program code near a parallelized region is reached, demand information is stored in the buffer in response to reaching a predetermined first checkpoint. Subsequently, the demand information is read from the buffer in response to reaching a predetermined second checkpoint. Allocation information corresponding to the read demand information is computed and stored in the buffer for the application to later access. The allocation information is read from the buffer in response to reaching a predetermined third checkpoint, and the parallelized region of code is executed in a manner corresponding to the allocation information.06-17-2010
20100180280SYSTEM AND METHOD FOR BATCH RESOURCE ALLOCATION - A system for configuring resources in an environment for use by at least one process. In one embodiment, the system includes: (1) a process sorter configured to rank the at least one process based on numbers of resources that steps in the at least one process can use, (2) an optimizer coupled to the process sorter and configured to employ an optimization heuristic to accumulate feasible allocations of resources to the steps based on the ranking of the at least one process, (3) a resource sorter coupled to the optimizer and configured to rank the resources in a non-decreasing order based on numbers of the steps in which the resources can be used, the optimizer further configured to remove one of the resources from consideration based on the ranking of the resources until infeasibility occurs and (4) an environment configuration interface configured to allow the environment to be configured in accordance with remaining ones of the resources.07-15-2010
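The pruning heuristic in this entry (rank resources in non-decreasing order of the number of steps that can use them, then remove resources from consideration until infeasibility would occur) can be sketched as below; the step/resource names and the simplified feasibility test (every step keeps at least one usable resource) are assumptions.

```python
def prune_resources(steps, resources):
    """Greedily drop the least-used resources while every step still has
    at least one usable resource.

    steps: dict mapping step name -> set of resource names the step can use.
    """
    # Rank resources in non-decreasing order of how many steps can use them.
    usage = {r: sum(r in usable for usable in steps.values()) for r in resources}
    remaining = set(resources)
    for r in sorted(resources, key=usage.get):
        candidate = remaining - {r}
        if all(usable & candidate for usable in steps.values()):
            remaining = candidate   # still feasible without r: remove it
        else:
            break                   # removing r would cause infeasibility
    return remaining

steps = {"s1": {"r1", "r2"}, "s2": {"r2", "r3"}}
print(prune_resources(steps, ["r1", "r2", "r3"]))  # -> {'r2'}
```

Here r2 can serve both steps, so the heuristic discards r1 and r3 and stops when removing r2 would leave a step with no usable resource.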
20100162258ELECTRONIC SYSTEM WITH CORE COMPENSATION AND METHOD OF OPERATION THEREOF - A method of operation of an electronic system is provided including operating an integrated circuit device having a first core and a second core; detecting a first latency value between the first core and the second core; storing the first latency value in the first core; and compensating for the first latency value in the first core for a first transfer between the first core and the second core.06-24-2010
20100162257METHOD AND APPARATUS FOR PROVIDING RESOURCE ALLOCATION POLICY - A method and apparatus for providing a resource allocation policy in a network are disclosed. For example, the method constructs a queuing model for each application. The method defines a utility function for each application and for each transaction type of each application, and defines an overall utility in a system. The method performs an optimization to identify an optimal configuration that maximizes the overall utility for a given workload, and determines one or more adaptation policies for configuring the system in accordance with the optimal configuration.06-24-2010
20100262971MULTI CORE SYSTEM, VEHICULAR ELECTRONIC CONTROL UNIT, AND TASK SWITCHING METHOD - A multi core system for allocating a task generated from a control system program to an appropriate CPU core and executing the task includes a trial-execution instructing part configured to cause a second CPU core to trial-execute a task which a first CPU core executes before the multi core system transfers the task from the first CPU core to the second CPU core and causes the second CPU core to execute the task, a determining part configured to determine whether an execution result by the first CPU core matches an execution result by the second CPU core, and an allocation fixing part configured to fix the second CPU core as the appropriate CPU core to which the task is allocated if the determining part determines that the execution result by the first CPU core matches the execution result by the second CPU core.10-14-2010
20100262969DATA PROCESSING SYSTEM AND METHOD FOR SCHEDULING THE USE OF AT LEAST ONE EXCLUSIVE RESOURCE - It is an object of the invention to improve the performance of a multitasking data processing system in which at least one exclusive resource is used for executing at least two task flows. The method according to the invention achieves this by using a so-called master schedule, which is used as a template to construct the schedules for individual task flows. The term master schedule refers to a set of reservations of the exclusive resources for task flows.10-14-2010
20100192155SCHEDULING FOR PARALLEL PROCESSING OF REGIONALLY-CONSTRAINED PLACEMENT PROBLEM - Scheduling of parallel processing for regionally-constrained object placement selects between different balancing schemes. For a small number of movebounds, computations are assigned by balancing the placeable objects. For a small number of objects per movebound, computations are assigned by balancing the movebounds. If there are large numbers of movebounds and objects per movebound, both objects and movebounds are balanced amongst the processors. For object balancing, movebounds are assigned to a processor until an amortized number of objects for the processor exceeds a first limit above an ideal number, or the next movebound would raise the amortized number of objects above a second, greater limit. For object and movebound balancing, movebounds are sorted into descending order, then assigned in the descending order to host processors in successive rounds while reversing the processor order after each round. The invention provides a schedule in polynomial-time while retaining high quality of results.07-29-2010
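The object-and-movebound balancing in this entry (sort movebounds into descending order, then assign them to host processors in successive rounds while reversing the processor order after each round) is a serpentine round-robin; a minimal sketch with illustrative sizes:

```python
def snake_assign(movebounds, n_procs):
    """Assign movebounds (sorted descending) to processors round by round,
    reversing the processor order after each round."""
    order = sorted(movebounds, reverse=True)
    buckets = [[] for _ in range(n_procs)]
    forward = True
    for start in range(0, len(order), n_procs):
        chunk = order[start:start + n_procs]
        procs = range(n_procs) if forward else reversed(range(n_procs))
        for p, mb in zip(procs, chunk):
            buckets[p].append(mb)
        forward = not forward
    return buckets

print(snake_assign([9, 7, 5, 4, 2, 1], 3))  # -> [[9, 1], [7, 2], [5, 4]]
```

Reversing the order each round pairs the largest items with the smallest, so the per-processor loads (10, 9, 9 here) stay close to balanced.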
20100186018OFF-LOADING OF PROCESSING FROM A PROCESSOR BLADE TO STORAGE BLADES - A processor blade determines whether a selected processing task is to be off-loaded to a storage blade for processing. The selected processing task is off-loaded to the storage blade via a planar bus communication path, in response to determining that the selected processing task is to be off-loaded to the storage blade. The off-loaded selected processing task is processed in the storage blade. The storage blade communicates the results of the processing of the off-loaded selected processing task to the processor blade.07-22-2010
20100186017System and method for medical image processing - An embodiment of the present invention provides a system and method for medical image processing. The proposed system includes a grid computing framework adapted for receiving patient data including one or more patient-scan images from an end-user application, and for scheduling image processing tasks to a plurality of nodes of a grid computing network. Each of the nodes includes a central processing unit and at least one of the nodes includes programmable graphics processing unit hardware. The proposed system further includes a second framework for image processing using graphics processing unit that is operative on each node of the network. The second framework operative on any node is adapted to execute the image processing task scheduled to that node based upon the availability of graphics processing unit hardware in that node. When graphics processing unit hardware is available in the node, the second framework is adapted to execute the task on the graphics processing unit of the node using stream computation. When graphics processing unit hardware is not available in the node, the second framework is adapted to execute the task on the central processing unit of the node.07-22-2010
20120198465System and Method for Massively Multi-Core Computing Systems - A system and method for massively multi-core computing are provided. A method for computer management includes determining if there is a need to allocate at least one first resource to a first plane. If there is a need to allocate at least one first resource, the at least one first resource is selected from a resource pool based on a set of rules and allocated to the first plane. If there is not a need to allocate at least one first resource, it is determined if there is a need to de-allocate at least one second resource from a second plane. If there is a need to de-allocate at least one second resource, the at least one second resource is de-allocated. The first plane includes a control plane and/or a data plane and the second plane includes the control plane and/or the data plane. The resources are unchanged if there is not a need to allocate at least one first resource and if there is not a need to de-allocate at least one second resource.08-02-2012
20100262970System and Method for Application Isolation - A system, method, and computer readable medium for providing application isolation to one or more applications and their associated resources. The system may include one or more isolated environments including application files and executables, and one or more interception layers intercepting access to system resources and interfaces. Further, the system may include an interception database maintaining mapping between the system resources inside the one or more isolated environments and outside, and a host operating system. The one or more applications may be isolated from other applications and the host operating system while running within the one or more isolated environments.10-14-2010
20100262972DEADLOCK AVOIDANCE - A transaction processing system is operated. A first resource is locked as a shared resource by a first task executing on a computing device. The first task attempts to lock a second resource as an exclusive resource. The occurrence of a deadlock is ascertained. A second task that wishes to use the locked first resource is identified. A current position of the first task with respect to the first resource is stored. The lock on the first resource is removed. The second task is prompted to use the first resource. The first task locks the first resource as the shared resource. The first task is repositioned with respect to the first resource according to the stored position. The first task locks the second resource as the exclusive resource. The first task is performed.10-14-2010
20100186019DYNAMIC RESOURCE ADJUSTMENT FOR A DISTRIBUTED PROCESS ON A MULTI-NODE COMPUTER SYSTEM - A method dynamically adjusts the resources available to a processing unit of a distributed computer process executing on a multi-node computer system. The resources for the processing unit are adjusted based on the data other processing units handle or the execution path of code in an upstream or downstream processing unit in the distributed process or application.07-22-2010
20100192157On-Demand Compute Environment - An on-demand compute environment comprises a plurality of nodes within an on-demand compute environment available for provisioning and a slave management module operating on a dedicated node within the on-demand compute environment, wherein upon instructions from a master management module at a local compute environment, the slave management module modifies at least one node of the plurality of nodes.07-29-2010
20100192156TECHNIQUE FOR CONSERVING SOFTWARE APPLICATION RESOURCES - Systems and methods of adjusting allocated hardware resources to support a running software application are disclosed. A system includes adjustment logic to adjust an allocation of a first hardware resource to support a running software application. Measurement logic measures at least one hardware resource metric associated with the first hardware resource. Service level logic calculates an application service level based on the measured at least one hardware resource metric. When the first application service level satisfies a threshold application service level, the allocation of the first hardware resource is iteratively reduced to reach a reduced allocation level where the application service level does not satisfy the threshold application service level. In response thereto, the allocation of the first hardware resource is increased by an increment, such that the application service level again satisfies the threshold application service level.07-29-2010
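The iterative right-sizing loop this abstract describes — reduce the allocation until the service level first fails the threshold, then restore one increment so the threshold is satisfied again — can be sketched as a toy model; `service_level`, the step size, and the floor are assumptions, not from the patent:

```python
def right_size(allocation, service_level, threshold, step=1, floor=1):
    """Shrink a hardware-resource allocation until the measured service
    level no longer satisfies the threshold, then restore one increment
    so the threshold is satisfied again (toy model of the abstract)."""
    # Reduce while the application service level still satisfies the threshold.
    while allocation > floor and service_level(allocation) >= threshold:
        allocation -= step
    # The final compensating increment once the threshold was violated.
    if service_level(allocation) < threshold:
        allocation += step
    return allocation
```

With a service level proportional to the allocation, e.g. `service_level = lambda a: 10 * a` and a threshold of 40, the loop settles on the smallest allocation (4 units) that still meets the threshold.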
20090007129Method of allocating resources among client work machines - A method for allocating resources among a plurality of client work machines includes representing at least one client work machine as a resource object, representing at least one manufacturing process executable at a client work machine as a process, defining at least one usage capability for a resource object, selecting one of at least two states of the usage capability, and executing at least one manufacturing process on at least one client work machine according to the selected state of the usage capability.01-01-2009
20090007128 METHOD AND SYSTEM FOR ORCHESTRATING SYSTEM RESOURCES WITH ENERGY CONSUMPTION MONITORING - A method and system for orchestrating system resources including provisioning process, performance measurement, capacity planning and infrastructure deployment. An integrated solution is provided which can help monitor the system power consumption and apply corrective rebalancing actions. Such orchestrating and rebalancing activity is performed by the system taking into account the estimated power consumption of the individual software (SW) applications.01-01-2009
20100153958SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR APPLYING CONDITIONAL RESOURCE THROTTLES TO FACILITATE WORKLOAD MANAGEMENT IN A DATABASE SYSTEM - A system, method, and computer-readable medium that facilitate workload management in a computer system are provided. A workload's system resource consumption is adjusted against a target consumption level thereby facilitating maintenance of the consumption to the target consumption within an averaging interval by dynamically controlling workload concurrency levels. System resource consumption is compensated during periods of over or under-consumption by adjusting workload consumption to a larger averaging interval. Further, mechanisms for limiting, or banding, dynamic concurrency adjustments to disallow workload starvation or unconstrained usage at any time are provided. Disclosed mechanisms provide for category of work prioritization goals and subject-area resource division management goals, allow for unclaimed resources due to a lack of demand from one workload to be used by active workloads to yield full system utilization at all times, and provide for monitoring success in light of the potential relative effects of workload under-demand, and under/over-consumption management.06-17-2010
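One adjustment step of the compensating scheme described here — nudging a workload's concurrency so its average consumption over the averaging interval tracks the target, banded to disallow starvation or unconstrained usage — might look like the following outline (the data shapes, step size, and default band are my assumptions):

```python
def adjust_concurrency(current, target_share, consumption_samples, band=(1, 16)):
    """Compare average consumption over the averaging interval against
    the target share and nudge the workload concurrency level, clamped
    to the allowed band (illustrative sketch, not the patented method)."""
    avg = sum(consumption_samples) / len(consumption_samples)
    if avg > target_share:
        current -= 1   # over-consumption: compensate downward
    elif avg < target_share:
        current += 1   # under-consumption: compensate upward
    lo, hi = band
    return max(lo, min(hi, current))   # banding prevents starvation/runaway
```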
20090300641SYSTEM AND METHOD FOR SUPPORTING A VIRTUAL APPLIANCE - A system and method for supporting a virtual appliance is provided. In particular, a support engine may include an update server that can manage a workflow to update an appliance in response to detecting upstream updates to one or more software components that have been installed for the appliance. For example, the workflow may generally include managing a rebuild of the appliance to install the upstream updates and further managing an integration test to verify that the rebuilt appliance behaves correctly with the upstream updates installed. In addition, the support engine may further include a support analysis manager that can analyze the software components that have been installed for the appliance in view of various heuristic rules to generate a support statement indicating whether support is available for the appliance.12-03-2009
20090300639RESOURCE ACQUISITION AND MANIPULATION FROM WITHIN A VIRTUAL UNIVERSE - The present invention is directed to a system, method and program product that allows a user to access resources on a local computer during a session with a virtual universe. Disclosed is a system that obtains an inventory of resources from the client computer and generates renderings of the resources in the virtual universe. Also included is a resource interaction system for allowing an avatar to interact with the resources in the virtual universe, wherein the resource interaction system provides a transport facility for loading resources from the client computer to the virtual universe.12-03-2009
20100229177Reducing Remote Memory Accesses to Shared Data in a Multi-Nodal Computer System - Disclosed is an apparatus, method, and program product for identifying and grouping threads that have interdependent data access needs. The preferred embodiment of the present invention utilizes two different constructs to accomplish this grouping. A Memory Affinity Group (MAG) is disclosed. The MAG construct enables multiple threads to be associated with the same node without any foreknowledge of which threads will be involved in the association, and without any control over the particular node with which they are associated. A Logical Node construct is also disclosed. The Logical Node construct enables multiple threads to be associated with the same specified node without any foreknowledge of which threads will be involved in the association. While logical nodes do not explicitly identify the underlying physical nodes comprising the system, they provide a means of associating particular threads with the same node and other threads with other node(s).09-09-2010
20100229176Distribute Accumulated Processor Utilization Charges Among Multiple Threads - A utilization analyzer acquires accumulator values from multiple accumulators. Each accumulator corresponds to a particular processor thread and also corresponds to a particular processor utilization resource register (PURR). The utilization analyzer identifies, from the multiple accumulators, a combination of equal accumulators that each includes a largest accumulator value. Next, the utilization analyzer selects a subset of processor utilization resource registers from a combination of processor utilization resource registers that correspond to the combination of equal accumulators. The subset of processor utilization resource registers omits at least one processor utilization resource register from the combination of utilization resource registers. In turn, the utilization analyzer increments each of the subset of utilization resource registers.09-09-2010
20090320035System for supporting collaborative activity - A system includes a processor which has access to a representation of a model of activity, which includes workspaces. Each workspace includes domain hierarchies for representing an organizational structure of the collaborating users using the system, and initiatives hierarchies representing process structures for accomplishing goals. An interface permits users to view and modify the workspaces to which the user has access. Each user can have different access permissions in different workspaces. The domain and initiative hierarchies provide two views of the workspace objects without duplicating resources. A resource is a collection of shared elements defined by the users that give users associated with the workspace access to information sources. Users can define knowledge boards for creating reports based on information fields of the resources. The knowledge board is associated with a resource template from which the resource is created.12-24-2009
20100218192SYSTEM AND METHOD TO ALLOCATE RESOURCES IN SERVICE ORGANIZATIONS WITH NON-LINEAR WORKFLOWS - A method can include determining a number of cases received (e.g., a case load), a number of cases processed (e.g., a case rate), and dividing the case load by the case rate. The resource demand can be compared to a resource allocation, and the resource allocation can be changed based upon the resource demand. An information handling system can include a processor and a memory. The memory can have code stored therein, wherein the code can include instructions, which, when executed by the processor, allow the information handling system to perform part or substantially all of the method.08-26-2010
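The arithmetic in this abstract — divide the case load by the case rate to get a resource demand, then compare it against the current allocation — is simple enough to show directly (function and parameter names are mine, not the patent's):

```python
import math

def resource_demand(cases_received, cases_processed_per_resource):
    """Case load divided by the per-resource case rate, rounded up to
    whole resources (hypothetical helper)."""
    return math.ceil(cases_received / cases_processed_per_resource)

def new_allocation(current, cases_received, cases_processed_per_resource):
    """Change the resource allocation only when demand diverges from it."""
    demand = resource_demand(cases_received, cases_processed_per_resource)
    return demand if demand != current else current
```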
20090077561Pipeline Processing Method and Apparatus in a Multi-processor Environment - A pipeline processing method and apparatus in a multi-processor environment partitions a task into overlapping sub-tasks to be allocated to multiple processors, with the overlapping portions among the respective sub-tasks shared by the processors that process the corresponding sub-tasks. The status of each processor is determined while it executes its sub-tasks, and which processor among the processors is to execute the overlapping portions is dynamically determined on the basis of the status of each processor.03-19-2009
20100229179SYSTEM AND METHOD FOR SCHEDULING THREAD EXECUTION - A method is described that comprises suspending a currently executing thread at a periodic time interval, calculating a next time slot during which the currently executing thread is to resume execution, appending the suspended thread to a queue of threads scheduled for execution at the calculated time slot, and updating an index value of a pointer index to a next sequential non-empty time slot, where the pointer index references time slots within an array of time slots, and where each of the plurality of time slots corresponds to a timeslice during which CPU resources are allocated to a particular thread. The method further comprises removing any contents of the indexed non-empty time slot and appending the removed contents to an array of threads requesting immediate CPU resource allocation and activating the thread at the top of the array of threads requesting immediate CPU resource allocation as a currently running thread.09-09-2010
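The slot-array mechanics described above — an array of time slots, a pointer index advanced to the next non-empty slot, and a queue of threads requesting immediate CPU time — resemble a classic timing wheel. A minimal sketch, with thread objects reduced to plain names and all structure assumed rather than taken from the patent:

```python
from collections import deque

class TimeSlotScheduler:
    """Timing-wheel sketch: each slot holds a queue of threads scheduled
    for the timeslice that slot represents (illustrative only)."""

    def __init__(self, num_slots):
        self.slots = [deque() for _ in range(num_slots)]
        self.index = 0        # pointer index into the slot array
        self.ready = deque()  # threads requesting immediate CPU resources

    def schedule(self, thread, slots_from_now):
        """Append a suspended thread to the slot for its next time slot."""
        slot = (self.index + slots_from_now) % len(self.slots)
        self.slots[slot].append(thread)

    def tick(self):
        """Advance the index to the next non-empty slot, move its contents
        to the ready queue, and activate the thread at the front."""
        for _ in range(len(self.slots)):
            self.index = (self.index + 1) % len(self.slots)
            if self.slots[self.index]:
                self.ready.extend(self.slots[self.index])
                self.slots[self.index].clear()
                break
        return self.ready.popleft() if self.ready else None
```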
20100242046MULTICORE PROCESSOR SYSTEM, SCHEDULING METHOD, AND COMPUTER PROGRAM PRODUCT - A multicore processor system includes: a plurality of software units, each of which executes predetermined processing using one or more cores among a plurality of cores of a multicore processor; and a scheduler that performs adjustment of allocation of the cores of the multicore processor to each of the software units and core occupation time of each of the software units to cause the software units to operate in parallel. Each of the software units outputs execution result data of the predetermined processing to an output buffer and issues notification based on an accumulated amount of the execution result data, which is output to the output buffer by the software unit, to the scheduler. The scheduler adjusts, based on the received notification, the number of cores allocated to each of the software units, the core occupation time of each of the software units, or both.09-23-2010
20100242043Computer-Implemented Systems For Resource Level Locking Without Resource Level Locks - Computer-implemented systems and methods regulate access to a plurality of resources in a pool of resources without requiring individual locks associated with each resource. Access to one of the plurality of resources is requested, where a resource queue for managing threads waiting to access a resource is associated with each of the plurality of resources. A resource queue lock associated with the resource is acquired, where a resource queue lock is associated with multiple resources.09-23-2010
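One conventional way to guard many resources without an individual lock per resource is to hash each resource onto one of a small fixed set of locks. This is classic lock striping, offered here only as an analogy to the resource-queue-lock idea, not as the patented mechanism:

```python
import threading

class StripedLockPool:
    """Regulate access to many resources with a fixed set of locks:
    each resource identifier hashes to one stripe (illustrative sketch)."""

    def __init__(self, num_stripes=16):
        self._stripes = [threading.Lock() for _ in range(num_stripes)]

    def lock_for(self, resource_id):
        """Return the shared lock guarding this resource's stripe."""
        return self._stripes[hash(resource_id) % len(self._stripes)]

pool = StripedLockPool(4)
with pool.lock_for("resource-42"):
    pass  # critical section for "resource-42"
```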
20100242042Method and apparatus for scheduling work in a stream-oriented computer system - An apparatus and method for scheduling stream-based applications in a distributed computer system includes a scheduler configured to schedule work using three temporal levels. Each temporal level includes a method. A macro method is configured to schedule jobs that will run, in a highest temporal level, in accordance with a plurality of operation constraints to optimize importance of work. A micro method is configured to fractionally allocate, at a medium temporal level, processing elements to processing nodes in the system to react to changing importance of the work. A nano method is configured to revise, at a lowest temporal level, fractional allocations on a continual basis.09-23-2010
20100218193RESOURCE ALLOCATION FAILURE RECOVERY MODULE OF A DISK DRIVER - A method of resource allocation failure recovery is disclosed. The method generally includes steps (A) to (E). Step (A) may generate a plurality of resource requests from a plurality of driver modules to a manager module executed by a processor. Step (B) may generate a plurality of first calls from the manager module to a plurality of allocation modules in response to the resource requests. Step (C) may allocate a plurality of resources to the driver modules using the allocation modules in response to the first calls. Step (D) may allocate a portion of a memory pool to a particular recovery packet using the manager module in response to the allocation modules signaling a failed allocation of a particular one of the resources. Step (E) may recover from the failed allocation using the particular recovery packet.08-26-2010
20100218194SYSTEM AND METHOD FOR THREAD SCHEDULING IN PROCESSORS - A method for controlling a data processing system, a data processing system executing a similar method, and a computer readable medium with instructions for a similar method. The method includes receiving, by an operating system executing on a data processing system, an execution request from an application, the execution request including at least one resource-defining attribute corresponding to an execution thread of the application. The method also includes allocating processor resources to the execution thread by the operating system according to the at least one resource-defining attribute, and allowing execution of the execution thread on the data processing system according to the allocated processor resources.08-26-2010
20100251255SERVER DEVICE, COMPUTER SYSTEM, RECORDING MEDIUM AND VIRTUAL COMPUTER MOVING METHOD - A server device operates a plurality of virtual computers so as to respectively correspond to a plurality of terminal devices to which physical devices are connected. The server device includes a judging unit that judges whether each of the plurality of virtual computers can be moved to each of the plurality of terminal devices; a moving unit that moves a corresponding virtual computer to a terminal device to which the judging unit has judged the move to be possible; and an allocating unit that allocates a physical device connected to that terminal device to the virtual computer that the moving unit has moved there.09-30-2010
20100211956METHOD AND SYSTEM FOR CONTINUOUS OPTIMIZATION OF DATA CENTERS BY COMBINING SERVER AND STORAGE VIRTUALIZATION - The invention provides a method and system for continuous optimization of a data center. The method includes monitoring loads of storage modules, server modules and switch modules in the data center, detecting an overload condition upon a load exceeding a load threshold, combining server and storage virtualization to address storage overloads by planning allocation migration between the storage modules, to address server overloads by planning allocation migration between the server modules, to address switch overloads by planning allocation migration mix between server modules and storage modules for overload reduction, and orchestrating the planned allocation migration to reduce the overload condition in the data center.08-19-2010
20100299674METHOD, SYSTEM, GATEWAY DEVICE AND AUTHENTICATION SERVER FOR ALLOCATING MULTI-SERVICE RESOURCES - In the field of network communications, a method, a system, a gateway device, and an authentication server for allocating multi-service resources while multiple services of the same user access a network are provided. The method includes the following steps. A service request message sent by a first service terminal is received. Service capability and user identification of the first service terminal and a count of available resources that corresponds to the user identification are obtained. Resources are allocated for the first service terminal based on the service capability and the user identification of the first service terminal and the count of the available resources that corresponds to the user identification. Thus, the configuration of the gateway device is simplified, and the scale deployment for different services is achieved.11-25-2010
20100242044ADAPTABLE SOFTWARE RESOURCE MANAGERS BASED ON INTENTIONS - User intentions can be derived from observations of user actions or they can be programmatically specified by an application or component that is performing an action. The intentions can then be utilized to adjust the operation of resource managers to better suit the actions being performed by the user or application, especially if such actions are not “typical”. Resource managers can inform a centralized intention manager of environmental constraints, including constraints on the resources they manage and constraints on their operation, such as various, pre-programmed independent modes of operation optimized for differing circumstances. The intention manager can then instruct the resource managers in accordance with these environmental constraints when the intention manager is made aware of the intentions. If no further optimization can be achieved, specified intentions may not result in directives from the intention manager to the resource managers.09-23-2010
20100242045METHOD AND SYSTEM FOR ALLOCATING A DISTRIBUTED RESOURCE - A method for migrating a virtual machine executing on a host. The method involves monitoring, by a monitoring agent connected to a device driver, hosts in a network, wherein the device driver is connected to a network interface card, determining a virtual machine to be migrated based on a virtual machine policy, sending, by the host, a request to migrate to at least one of a plurality of target hosts in the network, receiving an acceptance to the request to migrate from at least one of the plurality of target hosts, determining, by the monitoring agent, a chosen target host to receive the virtual machine based on a migration policy, wherein the chosen target host is one of the at least one target hosts that sent the acceptance, sending a confirmation and historical information to the chosen target host, and migrating the virtual machine to the chosen target host.09-23-2010
20100251254INFORMATION PROCESSING APPARATUS, STORAGE MEDIUM, AND STATE OUTPUT METHOD - An apparatus for controlling divided operation environments includes a first acquiring unit that acquires a first processing amount indicating an amount of hardware resources allocated to each of the operation environments, a second acquiring unit that acquires a second processing amount which varies depending on an application program executed by the operation environment, a calculating unit that calculates a third processing amount of each of the operation environments on the basis of a difference between the first processing amount of each operation environment acquired by the first acquiring unit and the second processing amount of each operation environment acquired by the second acquiring unit; and an output unit that outputs a state of each of the operation environments on the basis of the third processing amount of each operation environment calculated by the calculating unit and the second processing amount of each operation environment acquired by the second acquiring unit.09-30-2010
20100251252POLICY MANAGEMENT FRAMEWORK IN MANAGED SYSTEMS ENVIRONMENT - A method, system, and computer program product for implementing policies in a managed systems environment is provided. A plurality of the heterogeneous entities is organized into a system resource group (SRG). Each of the plurality of heterogeneous entities is visible to an application operable on the managed systems environment. The system resource group is subject to at least one membership requirement, defines a relationship between at least two of the heterogeneous entities, contains at least one policy defining an operation as to be performed on the system resource group for a domain of the managed systems environment, and defines at least a portion of a policy framework between the system resource group and an additional system resource group organized from an additional plurality of the heterogeneous entities. The system resource group expands according to an action performed incorporating the relationship, policy, or policy framework.09-30-2010
20100211957SCHEDULING AND ASSIGNING STANDARDIZED WORK REQUESTS TO PERFORMING CENTERS - Techniques for allocating work requests to performing centers include generating options for assigning the work requests to the performing centers. The options are based upon predetermined historical factors capturing work request characteristics and performing center characteristics. For each of the options, the work requests are scheduled to determine a corresponding duration of the work requests, and an overall cost is computed. One of the options is selected based on the overall cost and the corresponding duration.08-19-2010
20100235844DISCOVERING AND IDENTIFYING MANAGEABLE INFORMATION TECHNOLOGY RESOURCES - Allocating resource discovery and identification processes among a plurality of management tools and resources in a distributed and heterogeneous information technology (IT) management system by providing at least one authoritative manageable resource having minimal or no responsibility for reporting its identity, minimal or no responsibility for advertising any lifecycle-related creation event for the resource, and minimal or no responsibility for advertising any lifecycle-related destruction event for the resource. A services oriented architecture (SOA) defines one or more services needed to manage the resource within the management system. A component model defines one or more interfaces and one or more interactions to be implemented by the manageable resource within the management system.09-16-2010
20100235843IMPROVEMENTS RELATING TO DISTRIBUTED COMPUTING - There is provided a computer-implemented method of allocating a task to a set of distributed computing resources.09-16-2010
20100100886TASK GROUP ALLOCATING METHOD, TASK GROUP ALLOCATING DEVICE, TASK GROUP ALLOCATING PROGRAM, PROCESSOR AND COMPUTER - Even if a multiprocessor includes cores of uneven performance, inoperative cores, or cores that do not achieve their designed performance, it can be shipped provided that careful task allocation still satisfies the requirements of the application to be executed. In a task group allocation method for allocating, to a processor having a plurality of cores, the task groups included in an application the processor is to execute, a calculation section measures the performances and disposition patterns of the cores, generates a restricting condition associating the measured performances and disposition patterns with information indicating whether the application can be executed, and, with reference to the restricting condition, reallocates to the cores the task groups that had previously been allocated to them.04-22-2010
20110119675Concurrent Data Processing and Electronic Bookkeeping - Concurrent processing of business transaction data uses a time slice-centered scheme to cope with the situation where multiple requests demand a same resource at the same time. The method divides the processing time into multiple time slices, allocates each request to a corresponding time slice, and iteratively processes requests according to their corresponding time slices. The method does not require the requests to be processed one by one, and therefore does not cause a situation where other requests have to wait until the current request has been completely processed. Moreover, if a certain time slice has been allocated multiple requests of the same type, the requests are collectively processed as if they were a single request to reduce the frequency of resource locking and unlocking, as well as the waiting time in a queue for resource access.05-19-2011
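The two moves this abstract combines — bucket requests by time slice, then collapse same-type requests within a slice into one batched operation so each (slice, type) pair costs one lock/unlock cycle — can be sketched as follows (`slice_key` and `apply_batch` are assumed callables, not from the patent):

```python
from collections import defaultdict

def process_in_time_slices(requests, slice_key, apply_batch):
    """Group requests into time slices, then process same-type requests
    within a slice collectively as if they were one request (sketch)."""
    slices = defaultdict(lambda: defaultdict(list))
    for req in requests:
        slices[slice_key(req)][req["type"]].append(req)
    results = []
    for t in sorted(slices):                      # iterate slices in order
        for req_type, batch in slices[t].items():
            # One lock/unlock cycle per (slice, type) instead of per request.
            results.append(apply_batch(req_type, batch))
    return results
```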
20130219402ROBUST SYSTEM CONTROL METHOD WITH SHORT EXECUTION DEADLINES - A method of controlling a system comprising the following steps: 08-22-2013
20090288092Systems and Methods for Improving the Reliability of a Multi-Core Processor - Systems and methods for improving the reliability of multiprocessors by reducing the aging of processor cores that have lower performance. One embodiment comprises a method implemented in a multiprocessor system having a plurality of processor cores. The method includes determining performance levels for each of the processor cores and determining an allocation of the tasks to the processor cores that substantially minimizes aging of a lowest-performing one of the operating processor cores. The allocation may be based on task priority, task weight, heat generated, or combinations of these factors. The method may also include identifying processor cores whose performance levels are below a threshold level and shutting down these processor cores. If the number of processor cores that are still active is less than a threshold number, the multiprocessor system may be shut down, or a warning may be provided to a user.11-19-2009
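A greedy reading of this allocation idea — route heavier tasks to stronger cores so the lowest-performing operating core accumulates the least wear — can be sketched as below. This is one plausible interpretation for illustration, not the claimed algorithm; the weight and performance numbers are arbitrary:

```python
def allocate_tasks(core_perf, task_weights):
    """Assign task weights to cores, heaviest first, always picking the
    core whose load-to-performance ratio stays lowest, so weak cores
    carry light loads and age more slowly (illustrative sketch)."""
    cores = sorted(range(len(core_perf)), key=lambda i: core_perf[i], reverse=True)
    assignment = {i: [] for i in range(len(core_perf))}
    load = [0.0] * len(core_perf)
    for w in sorted(task_weights, reverse=True):
        # Pick the core with the lowest prospective load-per-unit-performance.
        best = min(cores, key=lambda i: (load[i] + w) / core_perf[i])
        assignment[best].append(w)
        load[best] += w
    return assignment
```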
20090282417WORKFLOW EXECUTING APPARATUS, WORKFLOW EXECUTING METHOD, AND STORAGE MEDIUM - A workflow executing method to execute a workflow of a plurality of steps according to a workflow definition. The method includes obtaining setting information of a user instructing execution of the workflow, which is setting information related to the execution of the workflow. The method also includes modifying the workflow definition corresponding to the workflow of which the user instructed execution, based on the obtained setting information. The method continues by dividing the modified workflow definition for each workflow executing apparatus that executes the workflow definition. The method also includes executing at least one of the divided workflow definitions and sending at least one divided workflow definition to another workflow executing apparatus that executes processing based on the workflow definition, whereby workflow definitions are modified to match user settings, and the modified workflow definitions are divided to match apparatuses executing the workflow definition.11-12-2009
20110113433RESOURCE ALLOCATION METHOD, IDENTIFICATION METHOD, BASE STATION, MOBILE STATION, AND PROGRAM - In the current LTE downlink, a restriction that fixed 37-bit scheduling information be transmitted can in some cases waste a considerable amount of resource-allocation signaling. Provided is a technique capable of reporting resource block allocation information without such waste when an allocated resource block is reported. A resource block group consisting of one or more resource blocks contiguous on the frequency axis is allocated to a terminal, and the number of control signals for reporting the allocation information indicating the allocated resource blocks is determined.05-12-2011
20080295107Adaptive Thread Pool11-27-2008
20090328053ADAPTIVE SPIN-THEN-BLOCK MUTUAL EXCLUSION IN MULTI-THREADED PROCESSING - Adaptive modifications of spinning and blocking behavior in spin-then-block mutual exclusion include limiting spinning time to no more than the duration of a context switch. Also, the frequency of spinning versus blocking is limited to a desired amount based on the success rate of recent spin attempts. As an alternative, spinning is bypassed if spinning is unlikely to be successful because the owner is not progressing toward releasing the shared resource, as might occur if the owner is blocked or spinning itself. In another aspect, the duration of spinning is generally limited, but longer spinning is permitted if no other threads are ready to utilize the processor. In another aspect, if the owner of a shared resource is ready to be executed, a thread attempting to acquire ownership performs a “directed yield” of the remainder of its processing quantum to the other thread, and execution of the acquiring thread is suspended.12-31-2009
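A minimal spin-then-block acquisition with the bounded spin this abstract describes — spin no longer than roughly a context switch, then fall back to blocking — might look like this. The budget value and the success counters are illustrative; the adaptive frequency limiting and directed-yield aspects are omitted:

```python
import threading, time

class SpinThenBlockLock:
    """Spin-then-block sketch: spin for at most a context-switch-sized
    budget, then block (a simplification of the abstract)."""

    def __init__(self, spin_budget_s=5e-5):
        self._lock = threading.Lock()
        self.spin_budget_s = spin_budget_s
        self.spin_successes = 0   # feeds the adaptive spin/block decision
        self.spin_attempts = 0

    def acquire(self):
        self.spin_attempts += 1
        deadline = time.monotonic() + self.spin_budget_s
        while time.monotonic() < deadline:
            if self._lock.acquire(blocking=False):   # try-lock while spinning
                self.spin_successes += 1
                return
        # Spinning failed within the budget: block instead.
        self._lock.acquire()

    def release(self):
        self._lock.release()
```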
20090328052RESOURCE LOCATOR VERIFICATION METHOD AND APPARATUS - A method to be implemented using a computer system, the method comprising the steps of providing a resource database that specifies locations of resources for use by consumers, receiving a location communication originated by a mobile consumer device associated with a consumer at a time temporally proximate a time when the consumer accesses a resource where the location communication indicates the location of the consumer device and using the location of the consumer device indicated in the communication to update the resource database.12-31-2009
20090328051RESOURCE ABSTRACTION VIA ENABLER AND METADATA - Embodiments of the invention provide systems and methods for managing an enabler and dependencies of the enabler. According to one embodiment, a method of managing an enabler can comprise requesting a management function via a management interface of the enabler. The management interface can provide an abstraction of one or more management functions for managing the enabler and/or dependencies of the enabler. In some cases, prior to requesting the management function, metadata associated with the management interface can be read and a determination can be made as to whether the management function is available or unavailable. Requesting the management function via the management interface of the enabler can be performed in response to determining the management function is available. In response to determining the management function is unavailable, one or more alternative functions can be identified based on the metadata and the one or more alternative functions can be requested.12-31-2009
20090328050AUTOMATIC LOAD BALANCING, SUCH AS FOR HOSTED APPLICATIONS - A dynamic load balancing system is described that determines the load of resources in a hosted environment dynamically by monitoring the usage of resources by each customer and determines the number of customers hosted by a server based on the actual resources used. The system receives a performance threshold that indicates when a server is too heavily loaded and monitors the resource usage by each customer. When the load of an overloaded server in the hosted environment exceeds the received performance threshold, the system selects a source customer currently hosted by the overloaded server to move to another server.12-31-2009
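The threshold-driven selection step in the abstract above can be sketched as follows; the data shape (a mapping of servers to per-customer usage) and the "heaviest customer" heuristic are assumptions for illustration.

```python
def pick_customer_to_move(servers, threshold):
    """Given {server: {customer: usage}}, find the most heavily loaded
    server whose total load exceeds `threshold`, and choose a customer
    to migrate off it (sketch: simply the heaviest customer)."""
    overloaded = None
    worst_load = threshold
    for server, customers in servers.items():
        load = sum(customers.values())
        if load > worst_load:
            overloaded, worst_load = server, load
    if overloaded is None:
        return None  # no server exceeds the performance threshold
    customer = max(servers[overloaded], key=servers[overloaded].get)
    return overloaded, customer
```

A real system would also pick a destination server with headroom; that step is elided here.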
20090276786Resource Data Management - In an illustrative embodiment, a data processing system for resource data management is provided. The data processing system comprises a set of data structures defining resource relationships and locations for a set of resources to form defined resource relationships and defined locations for the set of resources, and a receiver capable of obtaining replaceable unit data and obtaining characterization data for a current resource in the set of resources to form obtained replaceable unit data and obtained characterization data for the current resource, wherein the obtained replaceable unit data is obtained from a secure device and the obtained characterization data is obtained from an unsecure device. The data processing system further comprises a writer capable of merging the obtained replaceable unit data for the current resource with the obtained characterization data for the current resource for each resource of the set of resources to form a set of data files, wherein each data file corresponds to a resource in the set of resources.11-05-2009
20090276784RESOURCE MANAGEMENT METHOD - There is provided a method of managing a resource within a computer system using a configuration wrapper, the method comprising: providing a configuration file comprising configuration data for the resource; generating metadata related to the configuration data; and automatically processing the metadata to produce a configuration wrapper for the resource. The configuration wrapper may be a java object with management attributes and methods.11-05-2009
20090254917SYSTEM AND METHOD FOR IMPROVED I/O NODE CONTROL IN COMPUTER SYSTEM - A computer system is provided with a file system storing data; a plurality of I/O nodes which are adapted to access the file system; a compute node adapted to execute a job and to issue an I/O request when requiring an I/O operation; and a job server for job scheduling which dynamically allocates an I/O resource of the I/O nodes to a job without stopping execution of the job. The job server includes an I/O node scheduler adapted to, when it is not able to fully secure a desired amount of the I/O resource of the I/O nodes required by the job when starting the job, secure a part of the required amount of the I/O resource of the I/O nodes, and to allocate the secured part of the I/O resource to the job.10-08-2009
20090320037DATA STORAGE RESOURCE ALLOCATION BY EMPLOYING DYNAMIC METHODS AND BLACKLISTING RESOURCE REQUEST POOLS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan.12-24-2009
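The plan-optimization idea above (walk a priority-ordered plan and defer requests that would fail) can be sketched like this; the request tuple shape and resource accounting are simplified assumptions.

```python
def optimize_plan(plan, available):
    """Split a priority-ordered plan into requests that can succeed and
    requests that would fail against currently available resources.
    Each request is (priority, resource, amount) — a simplified model."""
    remaining = dict(available)
    kept, deferred = [], []
    for req in sorted(plan, key=lambda r: r[0]):
        _, resource, amount = req
        if remaining.get(resource, 0) >= amount:
            remaining[resource] -= amount
            kept.append(req)
        else:
            deferred.append(req)  # would fail: remove or defer it
    return kept, deferred
```

The blacklisting and temporary-holding-area behaviors in the abstract would layer on top of this basic pass.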
20080313642SYSTEM AND METHOD FOR ALLOCATING SPARE SYSTEM RESOURCES - A system and method for allocating and/or utilizing spare computing system (e.g., personal computing system) resources. Various aspects of the present invention may, for example and without limitation, provide a system and/or method that communicates incentive information with computing systems, and/or representatives thereof, regarding the allocation of computing resources for utilization by other computing systems and/or incentives that may be associated with such utilization. Various aspects of the present invention may, for example, allocate one or more resources of a computing system for utilization by another computing system based, at least in part, on such communicated incentive information.12-18-2008
20080271033INFORMATION PROCESSOR AND INFORMATION PROCESSING SYSTEM - According to one embodiment, an information processing apparatus in which software resources are divided into first through N-th groups each of which has an operating system, a program operating on the operating system, and data, includes an execution section configured to simultaneously execute the groups with the groups isolated from one another, an OS activating section configured to operate on the operating system of the first group and activate the operating system of at least one of the second through N-th groups according to activation information, an activation information changing section configured to make communication with an administrative server over a network and change the activation information in response to an instruction from the administrative server, and a lock section configured to disable the operating system and the program of each of the second through N-th groups to change the activation information.10-30-2008
20080271031Resource Partition Management in Kernel Space - A method for managing resources in a computing system comprises providing a process initiation function which initiates a process and executing from a kernel an application manager that places the process into a resource partition at process initiation.10-30-2008
20110239224CALCULATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF - A calculation processing apparatus, which executes calculation processing based on a network composed by hierarchically connecting a plurality of processing nodes, assigns a partial area of a memory to each of the plurality of processing nodes, stores a calculation result of a processing node in a storable area of the partial area assigned to that processing node, and sets, as storable areas, areas that store the calculation results whose reference by all processing nodes connected to the subsequent stage of that processing node is complete. The apparatus determines, based on the storage states of calculation results in partial areas of the memory assigned to the processing node designated to execute the calculation processing of the processing nodes, and to processing nodes connected to the previous stage of the designated processing node, whether or not to execute a calculation of the designated processing node.09-29-2011
20090178049Multi-Element Processor Resource Sharing Among Logical Partitions - A method, apparatus, and program product to allocate processor resources to a plurality of logical partitions in a computing device including a plurality of processors, each processor having at least one general purpose processing element and a plurality of synergistic processing elements. General purpose processing element resources and synergistic processing element resources are separately allocated to each logical partition. The synergistic processing element resources to each logical partition are allocated such that each synergistic processing element is assigned to a logical partition exclusively. At least one virtual processor is allocated for each logical partition. The at least one virtual processor may be allocated virtual general purpose processing element resources and virtual synergistic processing element resources that correspond to the general purpose processing element resources and synergistic processing element resources allocated to the logical partition.07-09-2009
20090150897MANAGING OPERATION REQUESTS USING DIFFERENT RESOURCES - Provided are a system and program for managing operation requests using different resources. In one embodiment, a first queue is provided for operations which utilize a first resource of a first and second resource. A second queue is provided for operations which utilize the second resource. An operation is queued on the first queue until the first resource is acquired. The first resource is released if the second resource is not also acquired. The operation is queued on the second queue when the first resource is acquired but the second resource is not. In addition, the first resource is released until the operation acquires both the first resource and the second resource.06-11-2009
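The two-queue discipline above — never hold the first resource while waiting on the second — can be sketched with two locks, where each lock's internal wait list stands in for a queue. The function name is illustrative.

```python
import threading

def acquire_both(first, second):
    """Acquire two locks without holding one while blocking on the
    other, per the scheme above (a sketch)."""
    while True:
        first.acquire()                    # wait on the first queue
        if second.acquire(blocking=False):
            return                         # got both resources
        first.release()                    # release first if second is busy
        second.acquire()                   # wait on the second queue
        if first.acquire(blocking=False):
            return
        second.release()                   # release until both are free
```

Because neither lock is ever held while blocking on the other, this avoids the classic hold-and-wait deadlock between two operations acquiring in opposite orders.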
20090113440Multiple Queue Resource Manager - In one embodiment, a multiple queue resource manager includes a number of queues in communication with at least one thread. The queues are coupled to each of a corresponding number of clients and are operable to receive messages from their respective clients. The at least one thread is coupled to a processor configured in a computing system and operable to alternately process a specified quantity of the messages from each of the plurality of queues.04-30-2009
20090113441Registering a resource that delegates commit voting - A computer system and storage medium that, in an embodiment, receive an allocation request for a resource and registers the resource as a non-voting participant if the resource desires to delegate commit voting to another resource. The registered resource is then prohibited from participating in an enclosing transactional context and instead is informed when the transaction completes. The resource is enlisted as a voting participant if the resource does not desire to delegate commit voting. In this way, when multiple resources are used in a transaction, a resource may be registered and receive notifications of transaction completion instead of being enlisted and voting on commit decisions. The result of a transaction in which a single resource takes responsibility for a number of other resources is that transaction completion avoids the two-phase commit protocol and the resulting performance degradation.04-30-2009
20100223620SMART RECOVERY OF ASYNCHRONOUS PROCESSING - Systems, methods, and computer program products are described that are capable of recovering an asynchronous process after an error occurs with respect to the process. For example, the process may be re-initiated upon detection of the error. The re-initiated process is capable of not repeating tasks of the process that were completed prior to the occurrence of the error.09-02-2010
20100299673SHARED FILE SYSTEM CACHE IN A VIRTUAL MACHINE OR LPAR ENVIRONMENT - Computer system, method and program for defining first and second virtual machines and a memory shared by the first and second virtual machines. A filesystem cache resides in the shared memory. A lock structure resides in the shared memory to record which virtual machine, if any, currently has an exclusive lock for writing to the cache. The first virtual machine includes a first program function to acquire the exclusive lock when available by manipulation of the lock structure, and a second program function active after the first virtual machine acquires the exclusive lock, to write to the cache. The lock structure is directly accessible by the first program function. The cache is directly accessible by the second program function. The second virtual machine includes a third program function to acquire the exclusive lock when available by manipulation of the lock structure, and a fourth program function active after the second virtual machine acquires the exclusive lock, to write to the cache. The lock structure is directly accessible by the third program function. The cache is directly accessible by the fourth program function. Another computer system, method and program is embodied in logical partitions of a real computer, instead of virtual machines.11-25-2010
20090064162RESOURCE TRACKING METHOD AND APPARATUS - The present invention is directed to a parallel processing infrastructure, which enables the robust design of task scheduler(s) and communication primitive(s). This is achieved, in one embodiment of the present invention, by decomposing the general problem of exploiting parallelism into three parts. First, an infrastructure is provided to track resources. Second, a method is offered by which to expose the tracking of the aforementioned resources to task scheduler(s) and communication primitive(s). Third, a method is established by which task scheduler(s) in turn may enable and/or disable communication primitive(s). In this manner, an improved parallel processing infrastructure is provided.03-05-2009
20090037923Apparatus and method for detecting resource consumption and preventing workload starvation - In an embodiment of the invention, an apparatus and method for detecting resource consumption and preventing workload starvation are provided. The apparatus and method perform acts including: receiving a query; determining whether the query will be classified as a resource-intense query, based on a number of passes by a cache call over a data blocks set during a time window, where the cache call is associated with the query; and if the query is classified as a resource-intense query, then responding to prevent workload starvation.02-05-2009
20090037921TECHNIQUES FOR INSTANTIATING AND CONFIGURING PROJECTS - Techniques for project management instantiation and configuration are provided. A master project includes policy directives that drive the dynamic instantiation and configuration of resources for a project. The resources are instantiated and configured on demand and when resources are actually requested, in response to the policy directives.02-05-2009
20090037920SYSTEM AND METHOD FOR INDICATING USAGE OF SYSTEM RESOURCES USING TASKBAR GRAPHICS - System and method for a method for indicating relative usage of a computer system resource by a plurality of applications each running in an active window, wherein each active window is represented on a taskbar element by a taskbar button, are described. In one embodiment, the method comprises, for each of the active windows, determining a resource usage rate for the application running in the active window, the resource usage rate comprising a percentage of a total system resource usage for which the application accounts; subsequent to the determining, ranking the applications in order of the determined resource usage rates thereof; and redisplaying the taskbar buttons to indicate, via at least one display characteristic, the relative system resource usage rates of the applications.02-05-2009
20100306780JOB ASSIGNING APPARATUS, AND CONTROL PROGRAM AND CONTROL METHOD FOR JOB ASSIGNING APPARATUS - A job assigning apparatus which is connected to a plurality of job processors and assigns the job to any of the job processors includes: an accepting section that accepts the job; an assigning section that selects a job processor having the least number of processes and assigns the accepted job to the selected job processor; a managing section that manages each of the job processors and the number of processes of the job assigned to each of the job processors by the assigning section in association with each other; an adding section that adds the number of processes of the jobs assigned by the assigning section to the number of processes managed by the managing section; and a notifying section that notifies another job assigning apparatus for assigning a job to a job processor of the number of processes of the job assigned by the assigning section.12-02-2010
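The least-loaded assignment and count bookkeeping described above can be sketched as follows; the class name and the peer-notification callback are assumptions for illustration.

```python
class JobAssigner:
    """Assigns each accepted job to the processor with the fewest
    in-flight processes, maintaining the managed counts (a sketch;
    notifying another job assigner is modeled as a callback)."""

    def __init__(self, processors, notify_peer=None):
        self.counts = {p: 0 for p in processors}
        self.notify_peer = notify_peer

    def assign(self, job):
        target = min(self.counts, key=self.counts.get)  # least-loaded
        self.counts[target] += 1                        # add to managed count
        if self.notify_peer:
            self.notify_peer(target, self.counts[target])
        return target
```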
20090070769PROCESSING SYSTEM HAVING RESOURCE PARTITIONING - A processing system includes a resource that is accessible by a processor and resource partitioning software executable by the processor. The resource partitioning software may be executed to establish a resource partition for the resource. The resource partition defines a set of rules that are used to control access to the resource when a request for the resource is received from a software application and/or process.03-12-2009
20090070767Determining Desired Job Plan Based on Previous Inquiries in a Stream Processing Framework - A data stream processing system is provided that utilizes independent sites to process user-defined inquires over dynamic, continuous streams of data. A mechanism is provided for processing these inquiries over the continuous streams of data by matching new inquiries to previously submitted inquiries. The job plans containing sets of processing elements that were created for both the new inquiry and the previous inquiries are compared for consistency in input and output formatting and commonality of processing elements used. In accordance with the comparison, the new job plan, previous job plans or a combination of the new and previous job plans are used to process the new inquiry. Based on the results of processing the new inquiry, a determination is made regarding which job plans are used for future inquiries.03-12-2009
20090070768System and Method for Using Resource Pools and Instruction Pools for Processor Design Verification and Validation - A system and method for using resource pools and instruction pools for processor design verification and validation is presented. A test case generator organizes processor resources into resource pools using a resource pool mask. Next, the test case generator separates instructions into instruction pools based upon the resources that each instruction requires. The test case generator then creates a test case using one or more sub test cases by assigning a resource pool to each sub test case, identifying instruction pools that correspond to the assigned resource pool, and building each sub test case using instructions included in the identified instruction pools.03-12-2009
20130132969Methods And Apparatuses For Controlling Thread Contention - An apparatus comprises a plurality of cores and a controller coupled to the cores. The controller is to lower an operating point of a first core if a first number based on processor clock cycles per instruction (CPI) associated with a second core is higher than a first threshold. The controller is operable to increase the operating point of the first core if the first number is lower than a second threshold.05-23-2013
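The two-threshold controller above amounts to hysteresis on a sibling core's CPI; a minimal sketch, with illustrative operating-point bounds:

```python
def next_operating_point(current, cpi_other, raise_thresh, lower_thresh,
                         p_min=0, p_max=3):
    """Lower this core's operating point when the sibling core's CPI is
    above the first threshold (it is losing the contention), and raise
    it again when the CPI falls below the second threshold (a sketch)."""
    if cpi_other > raise_thresh:
        return max(p_min, current - 1)   # back off to reduce contention
    if cpi_other < lower_thresh:
        return min(p_max, current + 1)   # contention eased: speed up
    return current                        # inside the hysteresis band
```

Using two thresholds rather than one prevents the controller from oscillating when the CPI hovers near a single cut-off.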
20130132970MULTITHREAD PROCESSING DEVICE, MULTITHREAD PROCESSING SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MULTITHREAD PROCESSING PROGRAM - Provided is a multithread processing device that includes a managing unit that assigns a free thread among a plurality of threads to at least one of a plurality of processes, and a processing unit that executes the one process to which the free thread is assigned by the managing unit, wherein, when a request is transmitted from a first process among the plurality of processes by the processing unit, the managing unit releases a thread assigned to the first process to be a free thread, and ends the first process, and when a response to the request is received by the processing unit, the managing unit assigns a free thread to a second process of executing a process related to the response among the plurality of processes.05-23-2013
20100318999PROGRAM PARTITIONING ACROSS CLIENT AND CLOUD - Partitioning execution of a program between a client device and a cloud of network resources exploits the asymmetry between the computational and storage resources of the cloud and the resources and proximity of the client access device to a user. Programs may be decomposed into work units. Those work units may be profiled to determine execution characteristics, modeled based on current state information and the profile, and a model performance metric (MPM) generated. Based on the MPM, work units may be partitioned between the client and the cloud.12-16-2010
20130139169JOB SCHEDULING TO BALANCE ENERGY CONSUMPTION AND SCHEDULE PERFORMANCE - A computer program product including computer usable program code embodied on a computer usable medium, the computer program product comprising: computer usable program code for identifying job performance data for a plurality of representative jobs; computer usable program code for running a simulation of backfill-based job scheduling of the plurality of jobs at various combinations of a run-time over-estimation value and a processor adjustment value, wherein the simulation generates data including energy consumption and job delay; computer usable program code for identifying one of the combinations of a run-time over-estimation value and a processor adjustment value that optimize the mathematical product of an energy consumption parameter and a job delay parameter using the simulation generated data for the plurality of jobs; and computer usable program code for scheduling jobs submitted to a processor using the identified combination of a run-time over-estimation value and a processor adjustment value.05-30-2013
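The parameter sweep above — find the (over-estimation, adjustment) pair minimizing the energy-delay product — can be sketched as a plain grid search; `simulate` stands in for the backfill scheduling simulation and is an assumption of this sketch.

```python
def best_combination(simulate, overestimates, adjustments):
    """Sweep (run-time over-estimation, processor adjustment) pairs and
    keep the pair whose simulated energy * delay product is smallest."""
    best, best_score = None, float("inf")
    for oe in overestimates:
        for adj in adjustments:
            energy, delay = simulate(oe, adj)  # run the scheduling simulation
            score = energy * delay             # the product to minimize
            if score < best_score:
                best, best_score = (oe, adj), score
    return best, best_score
```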
20130139173MULTI-CORE RESOURCE UTILIZATION PLANNING - Techniques for multi-core resource utilization planning are provided. An agent is deployed on each core of a multi-core machine. The agents cooperate to perform one or more tests. The tests result in measurements for performance and thermal characteristics of each core and each communication fabric between the cores. The measurements are organized in a resource utilization map and the map is used to make decisions regarding core assignments for resources.05-30-2013
20100325638INFORMATION PROCESSING APPARATUS, AND RESOURCE MANAGING METHOD AND PROGRAM - An information processing apparatus includes: a resource manager that allocates a resource in response to a codec processing request from an application, wherein the resource manager has first information indicating the relationship between codec processing functions and resources and second information indicating the availability of the resources, and the resource manager identifies resources having the codec processing function corresponding to the codec processing request from the application based on the first information, selects an idle resource from the identified resources based on the second information, and allocates the idle resource.12-23-2010
20100325636INTERFACE BETWEEN A RESOURCE MANAGER AND A SCHEDULER IN A PROCESS - An interface between a resource manager and schedulers in a process executing on a computer system allows the resource manager to manage the resources of the schedulers. The resource manager communicates with the schedulers using the interface to access statistical information from the schedulers. The statistical information describes the amount of use of the resources by the schedulers. The resource manager also communicates with the schedulers to dynamically allocate and reallocate resources among the schedulers in the same or different processes or computer systems in accordance with the statistical information.12-23-2010
20100325637ALLOCATION OF RESOURCES TO A SCHEDULER IN A PROCESS - A resource manager manages processing and other resources of schedulers of one or more processes executing on one or more computer systems. For each scheduler, the resource manager determines an initial allocation of resources based on the policy of the scheduler, the availability of resources, and the policies of other schedulers. The resource manager receives feedback from the schedulers and dynamically changes the allocation of resources of schedulers based on the feedback. The resource manager determines if changes improved the performance of schedulers and commits or rolls back the changes based on the determination.12-23-2010
20130145374SYNCHRONIZING JAVA RESOURCE ACCESS - A method and an apparatus for synchronizing Java resource access. The method includes configuring for a first access interface of a resource set, a first monitor, and configuring, for a second access interface of the resource set, a second monitor, configuring, for the first monitor, a first waiting queue, and the second monitor, a second waiting queue, in response to the first access interface receiving an access request for a resource from a thread, the first monitor querying whether the resource set has a resource satisfying the access request, in response to a positive querying result, the thread obtains the resource and notifies the second monitor to awake a thread in the second waiting queue, in response to a negative querying result, the first monitor puts the thread in the first waiting queue to queue up.06-06-2013
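The two-monitor, cross-notification scheme above can be sketched in Python with two condition variables sharing one lock; only one interface's acquire/release pair is shown, and all names are illustrative.

```python
import threading

class ResourceSet:
    """Resource pool with two access interfaces, each guarded by its own
    monitor and waiting queue; releasing through one interface wakes a
    waiter queued on the other monitor (a sketch)."""

    def __init__(self, count):
        self.count = count
        lock = threading.Lock()
        self.monitor_a = threading.Condition(lock)  # first interface
        self.monitor_b = threading.Condition(lock)  # second interface

    def acquire_a(self):
        with self.monitor_a:
            while self.count == 0:       # no resource: queue on first monitor
                self.monitor_a.wait()
            self.count -= 1              # positive query result: take it

    def release_a(self):
        with self.monitor_a:
            self.count += 1
            self.monitor_b.notify()      # awake a thread on the other queue
```

The second interface (`acquire_b`/`release_b`) would mirror these methods with the monitors swapped.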
20130145376DATA STORAGE RESOURCE ALLOCATION BY EMPLOYING DYNAMIC METHODS AND BLACKLISTING RESOURCE REQUEST POOLS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan.06-06-2013
20130145377SYSTEM AND METHOD FOR COOPERATIVE VIRTUAL MACHINE MEMORY SCHEDULING - A resource scheduler for managing a distribution of host physical memory (HPM) among a plurality of virtual machines (VMs) monitors usage by each of the VMs of respective guest physical memories (GPM) to determine how much of the HPM should be allocated to each of the VMs. On determining that an amount of HPM allocated to a source VM should be reallocated to a target VM, the scheduler sends allocation parameters to a balloon application executing in the source VM causing it to reserve and write a value to a guest virtual memory (GVM) location in the source VM. The scheduler identifies the HPM location that corresponds to the reserved GVM and allocates it to the target VM by mapping a guest physical memory location of the target VM to the HPM location.06-06-2013
20090138881Prevention of Deadlock in a Distributed Computing Environment - A method for preventing deadlock in a distributed computing system includes the steps of: receiving as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; populating at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; storing within each container at least a portion of the table; and allocating one or more threads in a given container according to at least a portion of the table stored within the given container.05-28-2009
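The core of the scheme above is that every transaction acquires containers in the one global order, which rules out circular wait. A minimal sketch, with `locks` mapping container names to locks:

```python
import threading

def ordered_acquire(needed, global_order, locks):
    """Acquire the locks a transaction needs strictly in the unique
    global container order, preventing circular wait and hence
    deadlock (a sketch; `locks` maps container name -> Lock)."""
    sequence = [c for c in global_order if c in needed]
    for container in sequence:
        locks[container].acquire()
    return sequence
```

Because no transaction ever waits for a container earlier in the order than one it already holds, the wait-for graph cannot contain a cycle.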
20090138882Prevention of Deadlock in a Distributed Computing Environment - A system for preventing deadlock in a distributed computing system includes a memory and at least one processor coupled to the memory. The processor is operative: to receive as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; to populate at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; to store within each container at least a portion of the at least one table; and to allocate one or more threads in a given container according to at least a portion of the at least one table stored within the given container.05-28-2009
20100333103INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processor includes a management module that manages a plurality of register areas in a host controller for processing data protected by copyright. The register areas store confidential information for copyright protection. The management module includes a use state management module and a release module. The use state management module manages use state information on whether the register areas are used by existing process tasks. When all the register areas are occupied by the existing process tasks and a new process task requests the use of a register area to perform a process based on the confidential information, the release module releases a register area occupied by one of the existing process tasks according to the use state information to assign the register area to the new process task.12-30-2010
20110029981SYSTEM AND METHOD TO UNIFORMLY MANAGE OPERATIONAL LIFE CYCLES AND SERVICE LEVELS - A system and a method to manage a data center, the method including, for example: retrieving a physical topology of a service; determining from the physical topology a concrete type of a resource for the service; and selecting an actual instance of the resource in the data center. The actual instance has the concrete type and is selected such that consumption of the actual instance does not violate a constraint or a policy.02-03-2011
20110214129MANAGEMENT OF MULTIPLE RESOURCE PROVIDERS - A device receives a request for an amount of a resource. It determines for each resource provider in a set of resource providers a current load, a requested load corresponding to the requested amount of the resource, and an additional load corresponding to an expected state of an application. It determines for each of the resource providers an expected total load on the basis of the current load, the requested load, and the additional load. It subsequently selects from the set of resource providers a preferred resource provider on the basis of the expected total loads. The resource may be one of the following: memory, processing time, data throughput, power, and usage of a device.09-01-2011
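The expected-total-load selection above reduces to a minimum over current + requested + additional load; a sketch, with the provider-to-load mapping as an assumed data shape:

```python
def select_provider(providers, requested, additional):
    """Choose the resource provider with the smallest expected total
    load: current load + requested load + additional load expected
    from the application's state (a sketch)."""
    def expected_total(p):
        return providers[p] + requested + additional.get(p, 0)
    return min(providers, key=expected_total)
```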
20110119676Resource File Localization - A system and method for localizing an application resource file. An application localizer may receive an application resource file containing text strings to be localized. The application localizer extracts each text string and sends it to a remote automated translation service, receiving a corresponding localized text string. The localizer writes each of the localized text strings to generate a localized application resource file. Configuration specifications may specify target locales, a format of the application resource file, or a format of application resource file names.05-19-2011
20110055843Scheduling Jobs For Execution On A Computer System - A technique includes determining an order for projects to be performed on a computer system. Each project is associated with multiple job sets, such that any of the job sets may be executed on the computer system to perform the project. The technique includes selecting the projects in a sequence according to the determined order to progressively build a schedule of jobs for execution on the computer system. For each selected project, incorporating one of the associated job sets into the schedule based on a cost of each of the associated job sets.03-03-2011
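The per-project selection step above can be sketched by walking projects in the determined order and adding each project's cheapest job set to the schedule; the cost table shape is an assumption of this sketch.

```python
def build_schedule(order, costs):
    """Progressively build a schedule: for each project, in order,
    incorporate the associated job set with the lowest cost.
    `costs[project][job_set]` gives that job set's cost (a sketch)."""
    schedule = []
    for project in order:
        job_sets = costs[project]
        best = min(job_sets, key=job_sets.get)  # cheapest way to do project
        schedule.append((project, best))
    return schedule
```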
20110119677MULTIPROCESSOR SYSTEM, MULTIPROCESSOR CONTROL METHOD, AND MULTIPROCESSOR INTEGRATED CIRCUIT - In a multiprocessor system, in general, a processor assigned a larger number of tasks tends to perform a larger amount of communication with the other processors than a processor assigned a smaller number of tasks.05-19-2011
20110055842VIRTUAL MULTIPLE INSTANCE EXTENDED FINITE STATE MACHINES WITH WAIT ROOMS AND/OR WAIT QUEUES - A method and apparatus for processing data by a pipeline of a virtual multiple instance extended finite state machine (VMI EFSM). An input token is selected to enter the pipeline. The input token includes a reference to an EFSM instance, an extended command, and an operation code. The EFSM instance requires the resource to be available to generate an output token from the input token. In response to receiving an indication that the resource is unavailable, the input token is sent to a wait room or an initiative token containing the reference and the operation code is sent to a wait queue, and the output token is not generated. Without stalling and restarting the pipeline, another input token is processed in the pipeline while the resource is unavailable and while the input token is in the wait room or the initiative token is in the wait queue.03-03-2011
20110126207SYSTEM AND METHOD FOR PROVIDING ANNOTATED SERVICE BLUEPRINTS IN AN INTELLIGENT WORKLOAD MANAGEMENT SYSTEM - The system and method described herein for providing annotated service blueprints in an intelligent workload management system may include a computing environment having a model-driven, service-oriented architecture for creating collaborative threads to manage workloads. In particular, the management threads may converge information for creating annotated service blueprints to provision and manage tessellated services distributed within an information technology infrastructure. For example, in response to a request to provision a service, a service blueprint describing one or more virtual machines may be created. The service blueprint may then be annotated to apply various parameters to the virtual machines, and the annotated service blueprint may then be instantiated to orchestrate the virtual machines with the one or more parameters and deploy the orchestrated virtual machines on information technology resources allocated to host the requested service, thereby provisioning the requested service.05-26-2011
20100242048RESOURCE ALLOCATION SYSTEM - The present invention provides a resource allocation system, including providing a workstation session manager in a workstation, coupling a resource schedule manager to the workstation session manager, coupling a disk drive storage system to the resource schedule manager, and provisioning a workflow process on the disk drive storage system utilizing the resource schedule manager.09-23-2010
20110078695CHARGEBACK REDUCTION PLANNING FOR INFORMATION TECHNOLOGY MANAGEMENT - Reducing cost chargeback in an information technology (IT) computing environment including multiple resources is provided. One implementation involves a process wherein resource usage and allocation statistics are stored for a multitude of resources and associated cost policies. Then, time-based usage patterns are determined for the resources from the statistics. A correlation of response time with resource usages and outstanding input/output instructions for the resources is determined. Based on usage patterns and the correlation, a multitude of potential cost reduction recommendations are determined. Further, a multitude of integrals are obtained based on the potential cost reduction recommendations, and a statistical integral is obtained based on the statistics. A difference between the statistical integral and each of the multiple integrals is obtained and compared with a threshold to determine potential final cost reduction recommendations. A final cost reduction recommendation is then selected from the potential cost reduction recommendations.03-31-2011
20090313633Method and System for Managing a Workload in a Cluster of Computing Systems with Multi-Type Operational Resources - Determining an equivalent capacity (ECP) of a computing system comprising multi-type operational resources. The multi-type operational resources comprise at least one general type of resources and at least one specialized type of resources. Parameters characteristic of the performance of the system are determined. Assignment of work units to the various resources subject to pre-defined constraints is simulated. Utilization of said general type of resources of the computing system when executing the work units is calculated.12-17-2009
20100131957VIRTUAL COMPUTER SYSTEM AND ITS OPTIMIZATION METHOD - Optimization of resource allocation in a virtual computer system is efficiently performed according to a method consistent with a virtualization design concept. The virtual computer system includes a plurality of virtual devices that share the physical resources of a computer and execute an application, a virtualization section that manages the plurality of virtual devices, and a management section that controls the virtualization section. The plurality of virtual devices set allocation of physical resources to the applications by a first optimization calculation using resource supply information from the management section and transmit resource request information corresponding to the resource allocation setting to the management section. The management section sets allocation of the physical resources to the virtual devices by a second optimization calculation using the resource request information from the plurality of virtual devices and transmits resource supply information corresponding to the resource allocation setting to the plurality of virtual devices. While the resource supply information and the resource request information are exchanged between the plurality of virtual devices and the management section, the first and second optimization calculations are performed, thereby dynamically allocating the physical resources.05-27-2010
20100131956METHODS AND SYSTEMS FOR MANAGING PROGRAM-LEVEL PARALLELISM - Methods and systems for managing program-level parallelism in a multi-core processor environment are provided. The methods for managing parallel execution of processes associated with computer programs include providing an agent process in an application space, which is operatively coupled to an operating system having a kernel configured to determine processor configuration information. The application space may be a runtime environment or a user space of the operating system, and has a lower privilege level than the kernel. The agent process retrieves the processor configuration information from the kernel, and after receiving a request for the processor configuration information from application processes running in the application space, the agent process provides a response to the requesting application process. The agent process may also generate resource availability data based on the processor configuration information, and the application processes may initiate a thread based on the resource availability data.05-27-2010
20100011370CONTROL UNIT, DISTRIBUTED PROCESSING SYSTEM, AND METHOD OF DISTRIBUTED PROCESSING - A control unit includes a determination section that determines information on a type and a function of processing elements connected thereto, a library loading section that loads, as needed, program information or reconfiguration information for hardware included in a connected library into the processing elements, and an execution transition information control section that creates, based on information on an arbitrary service comprising a combination of one or more tasks to be executed by the processing elements and information on the type and the function of the processing elements determined by the determination section, execution transition information specifying a combination of processing elements corresponding to the information on the service, and transmits it to the processing elements.01-14-2010
20110093861Assigning A Portion Of Physical Computing Resources To A Logical Partition - A data processing system includes physical computing resources that include a plurality of processors. The plurality of processors include a first processor having a first processor type and a second processor having a second processor type that is different than the first processor type. The data processing system also includes a resource manager to assign portions of the physical computing resources to be used when executing logical partitions. The resource manager is configured to assign a first portion of the physical computing resources to a logical partition, to determine characteristics of the logical partition, the characteristics including a memory footprint characteristic, to assign a second portion of the physical computing resources based on the characteristics of the logical partition, and to dispatch the logical partition to execute using the second portion of the physical computing resources.04-21-2011
20110093860METHOD FOR MULTICLASS TASK ALLOCATION - Embodiments of the invention include a method of server selection in a system including at least one dispatcher and several servers, in which system, when a new task of a given class arrives, the dispatcher assigns the task to one of these servers, characterized in that the selection of the servers by the dispatcher is based on the MIPN (Multiclass Idle Period Notification) information, which is sent by the servers to the dispatcher.04-21-2011
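The MIPN-based selection above can be sketched in a few lines. The data shape and function name below are illustrative assumptions, not taken from the patent; the only element grounded in the abstract is that the dispatcher chooses among servers using their per-class idle-period notifications:

```python
def select_server(mipn, task_class):
    """Pick the server reporting the longest idle period for task_class.

    mipn maps server id -> {task class: last notified idle period, seconds}.
    Returns None when no server has notified an idle period for the class.
    """
    best_server, best_idle = None, 0.0
    for server, idle_by_class in mipn.items():
        idle = idle_by_class.get(task_class, 0.0)
        if idle > best_idle:
            best_server, best_idle = server, idle
    return best_server
```

Under this reading, a server that has been idle longest for a class is the least loaded candidate for that class.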
20090204972AUTHENTICATING A PROCESSING SYSTEM ACCESSING A RESOURCE - Provided are a method, system, and article of manufacture for authenticating a processing system accessing a resource. An association of processing system identifiers with resources, including a first and second resources, is maintained. A request from a requesting processing system in a host is received for use of a first resource that provides access to a second resource, wherein the request is generated by processing system software and wherein the request further includes a submitted processing system identifier included in the request by host hardware in the host. A determination is made as to whether the submitted processing system identifier is one of the processing system identifiers associated with the first and second resources. The requesting processing system is provided access to the first resource that the processing system uses to access the second resource.08-13-2009
20090037922WORKLOAD MANAGEMENT CONTROLLER USING DYNAMIC STATISTICAL CONTROL - A computer system comprises a workload management controller that detects and tracks resource consumption volatility patterns and automatically and dynamically adjusts resource headroom according to the volatility patterns.02-05-2009
20100037232Virtualization apparatus and method for controlling the same - A virtualization apparatus and a method for controlling the same. In a method for controlling a virtualization apparatus including a plurality of domains, a sub domain transmits an input/output (IO) request for a hardware device to a main domain, and the main domain controls whether or not the IO request accesses the hardware device according to a resource needed to perform the IO request.02-11-2010
20100037231METHOD FOR READING/WRITING DATA IN A MULTITHREAD SYSTEM - A method for reading/writing data in a multithread system is disclosed. The method includes providing an unprocessed command number of a read/write command waiting queue; providing an expected read/write thread number according to the unprocessed command number; comparing the expected read/write thread number with the present read/write thread number; and equalizing the expected read/write thread number and the present read/write thread number by newly generating or deleting a read/write thread.02-11-2010
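A minimal sketch of the equalization step described above. The mapping from queue backlog to thread count (one thread per `commands_per_thread` pending commands, capped at `max_threads`) is an assumed policy; the abstract only requires that an expected thread number be derived from the unprocessed command number and that the pool be grown or shrunk to match it:

```python
import math

def expected_thread_count(pending_commands, commands_per_thread, max_threads):
    """Derive the expected read/write thread number from the queue backlog."""
    if pending_commands <= 0:
        return 0
    return min(max_threads, math.ceil(pending_commands / commands_per_thread))

def thread_adjustment(present_threads, expected_threads):
    """Positive: create this many threads; negative: delete this many."""
    return expected_threads - present_threads
```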
20090217285INFORMATION PROCESSING SYSTEM AND COMPUTER CONTROL METHOD - A first program obtains calculation resource information for determining the computer calculation resource to be used by one of a plurality of second programs that is to begin execution on the computer, and releases, based on the obtained calculation resource information, a part of the computer calculation resource currently used by the first program. The second program is executed using the released computer calculation resource. An information processing system comprises a parallel execution condition information obtaining unit for obtaining parallel execution condition information indicating a condition of one of the plurality of second programs to be executed in parallel with the first program, the condition being set according to the first program, and an execution restricting unit for restricting execution of a part or all of the plurality of second programs based on the parallel execution condition information.08-27-2009
20090217284PASSING INITIATIVE IN A MULTITASKING MULTIPROCESSOR ENVIRONMENT - A computer program product for passing initiative in a multitasking multiprocessor environment includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes writing a request to process a resource of the environment to an associated resource control block, setting a resource flag in a central bit vector, the resource flag indicating that a request for processing has been received for the resource, and setting a target processor initiative flag in the environment, the target processor initiative flag indicating a pass of initiative to a target processor responsible for the resource.08-27-2009
20090217282PREDICTING CPU AVAILABILITY FOR SHORT TO MEDIUM TIME FRAMES ON TIME SHARED SYSTEMS - A computer implemented CPU utilization prediction technique is provided. CPU utilization is described in continuous time as a first-order auto-regressive process. The technique uses the inherent autocorrelation between successive CPU measurements. A specific auto-regression equation for predicting CPU utilization is provided. CPU utilization prediction is used in a computer cluster environment. In an implementation, CPU utilization percentage values are used by a scheduler service to manage workload or the distribution of requests over a vast number of CPUs.08-27-2009
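The abstract does not disclose its specific auto-regression equation; the standard one-step AR(1) forecast is shown below as an assumed stand-in, with `phi` the autocorrelation coefficient and `mean` the long-run average utilization:

```python
def predict_cpu_utilization(current, mean, phi):
    """One-step AR(1) forecast: x[t+1] = mean + phi * (x[t] - mean).

    phi (0 < phi < 1) models the autocorrelation between successive
    CPU measurements; mean is the long-run average utilization.
    """
    forecast = mean + phi * (current - mean)
    # Clamp to a valid utilization percentage.
    return max(0.0, min(100.0, forecast))
```

With phi near 1, the forecast stays close to the last measurement; with phi near 0, it reverts quickly to the mean, matching the short-to-medium time-frame use described.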
20090217281Adaptable Redundant Bit Steering for DRAM Memory Failures - A method, computer program product and computer system for assigning computing resources in a computer system to solve multiple problems where tolerances to the problems are countable and have pre-set thresholds, and solutions to the problems share resources exclusively. The method, computer program product and system include counting the tolerances using at least one counter, assigning resources to solve a problem if the tolerance to the problem is higher than a first pre-set threshold, and reassigning resources to solve a second problem if the tolerance to the second problem is higher than a second pre-set threshold. The method, computer program product and system can also adopt an alternative solution that does not share resources exclusively with a current solution to solve the problems.08-27-2009
20090217280Shared-Resource Time Partitioning in a Multi-Core System - An improvement to computing systems is introduced that allows a hardware controller to be configured to time partition a shared system resource among multiple processing elements, according to one embodiment. For example, a memory controller may partition shared memory and may include processor-accessible registers for configuring and storing a rate of resource budget replenishment (e.g. size of a repeating arbitration window), a time budget allocated among each entity that shares the resource, and a selection of a hard or soft partitioning policy (i.e. whether to utilize slack bandwidth). An additional feature that may be incorporated in a main-memory-access time-partitioning application is an accounting policy to ensure that cache write-backs prompted by snoop transactions are charged to the data requester rather than to the responder. Additionally, an arbiter may prioritize requests from particular requesting entities.08-27-2009
20090217279Method and Device for Controlling a Computer System - A method and device for controlling a computer system having at least two execution units, a switchover taking place between at least two operating modes, and a first operating mode corresponds to a compare mode, and a second operating mode corresponds to a performance mode, wherein at least one set of run-time objects is defined, and a control program is provided, in particular a scheduler, which assigns resources of the computer system to the run-time objects as a function of an item of information regarding the operating mode.08-27-2009
20100058350FRAMEWORK FOR DISTRIBUTION OF COMPUTER WORKLOADS BASED ON REAL-TIME ENERGY COSTS - Energy costs for conducting compute tasks at diverse data center sites are determined and are then used to route such tasks in a most efficient manner. A given compute task is first evaluated to predict potential energy consumption. The most favorable real-time energy costs for the task are determined at the various data center sites. The likely time period of the more favorable cost as well as the stability at the data center are additional factors. A workload dispatcher then forwards the selected compute task to the data center having the most favorable real-time energy costs. Among the criteria used to select the most favorable data center is a determination that the proposed center presently has the resources for the task. A stabilizer is utilized to balance the workload among the data centers. A computer implementation for performing the various steps of the cost determination and allocation is also described.03-04-2010
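The dispatch decision described above reduces to picking the cheapest eligible site. The tuple layout and function name below are illustrative assumptions; the grounded elements are the predicted energy consumption, real-time per-site energy prices, and the requirement that the chosen site presently has resources for the task:

```python
def dispatch_task(predicted_kwh, sites):
    """Route a compute task to the data center with the lowest real-time
    energy cost among those that currently have capacity for it.

    sites: list of (name, price_per_kwh, has_capacity) tuples.
    Returns the chosen site name, or None if no site has capacity.
    """
    eligible = [s for s in sites if s[2]]
    if not eligible:
        return None
    return min(eligible, key=lambda s: predicted_kwh * s[1])[0]
```

The patent's additional factors (price stability, expected duration of the favorable rate, load balancing) would enter as further terms in the key function.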
20100058348MEMORY MANAGEMENT FOR PREDICTION BY PARTIAL MATCHING CONTEXT MODELS - Techniques for resource management of a PPM context model are described herein. According to one embodiment, in response to a sequence of symbols to be coded, contexts are allocated, each having multiple entries and each entry representing a symbol that the current context is able to encode, including a counter value representing a frequency of each entry being used. For each symbol coded by a context, a local counter value and a global counter value are maintained. The global counter value represents a total number of symbols that have been coded by the context model and the local counter value represents a number of symbols that have been coded by the respective context. Thereafter, a resource management operation is performed for system resources associated with the plurality of contexts based on a global counter value and a local counter value associated with each of the plurality of contexts.03-04-2010
20100058347DATA CENTER PROGRAMMING MODEL - An exemplary method includes hosting a service at a data center, the service relying on at least one software component developed according to a programming model and the data center comprising a corresponding programming model abstraction layer that abstracts resources of the data center; receiving a request for the service; and in response to the request, assigning at least some of the resources of the data center to the service to allow for fulfilling the request wherein the programming model abstraction layer performs the assigning based in part on reference to a resource class in the at least one software component, the resource class modifiable to account for changes in one or more resources of the data center. Various other devices, systems and methods are also described.03-04-2010
20100058351INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A system resource leak is reliably detected and released. The invention is an information processing apparatus which allocates/releases a system resource in response to a request from a process. The apparatus includes a unit configured to, when a request to allocate the system resource is sent, store an identifier which is assigned to a job including the process as a request source, and system resource information in a management table, a unit configured to, when a request to release the system resource is sent, delete the corresponding system resource information from the management table, a unit configured to, each time the job ends, refer to the management table to determine whether the management table stores an identifier assigned to the job, and a unit configured to, when it is determined that the management table stores the identifier, release the system resource specified by the corresponding system resource information.03-04-2010
20100058349System and Method for Efficient Machine Selection for Job Provisioning - A method for efficient machine selection for job provisioning includes receiving a job request to perform a job using an unspecified server machine and determining one or more job criteria needed to perform the job from the job request. The method further includes providing a list of one or more server machines potentially operable to perform the job. For each server machine on the list of one or more server machines, a utilization value, one or more job criteria satisfaction values, and an overall suitability value are determined. The overall suitability value for each server machine is determined from the one or more job criteria satisfaction values and the utilization value, and may include a numeric degree to which each server machine is suitable for performing the job. Furthermore, the overall suitability value for each server machine may be included on a list of one or more overall suitability values.03-04-2010
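One plausible way to combine the values described above into a single numeric suitability degree is to scale the mean criteria-satisfaction score by the machine's free capacity. This particular formula is an assumption for illustration; the abstract only states that the overall value is determined from the criteria satisfaction values and the utilization value:

```python
def overall_suitability(criteria_scores, utilization):
    """Mean per-criterion satisfaction (each in [0, 1]) scaled by free
    capacity (1 - utilization): a numeric degree of job suitability."""
    if not criteria_scores:
        return 0.0
    mean_score = sum(criteria_scores) / len(criteria_scores)
    return mean_score * (1.0 - utilization)

def select_machine(machines):
    """machines: list of (name, criteria_scores, utilization) tuples."""
    ranked = sorted(machines,
                    key=lambda m: overall_suitability(m[1], m[2]),
                    reverse=True)
    return ranked[0][0] if ranked else None
```

Sorting by the score also yields the "list of one or more overall suitability values" the abstract mentions.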
20110154354METHOD AND PROGRAM FOR RECORDING OBJECT ALLOCATION SITE - A method, system, and program for recording an object allocation site. In the structure of an object, a pointer to a class of an object is replaced by a pointer to an allocation site descriptor which is unique to each object allocation site, a common allocation site descriptor is used for objects created at the same allocation site, and the class of the object is accessed through the allocation site descriptor.06-23-2011
20110154349RESOURCE FAULT MANAGEMENT FOR PARTITIONS - In accordance with at least some embodiments, a system includes a plurality of partitions, each partition having its own operating system (OS) and workload. The system also includes a plurality of resources assignable to the plurality of partitions. The system also includes management logic coupled to the plurality of partitions and the plurality of resources. The management logic is configured to set priority rules for each of the plurality of partitions based on user input. The management logic performs automated resource fault management for the resources assigned to the plurality of partitions based on the priority rules.06-23-2011
20110154355METHOD AND SYSTEM FOR RESOURCE ALLOCATION FOR THE ELECTRONIC PREPROCESSING OF DIGITAL MEDICAL IMAGE DATA - A method and a system, for resource allocation provided for implementation of the method, are specified for the electronic preprocessing of digital medical image data. In at least one embodiment, provision is subsequently made to classify a plurality of preprocessing jobs, in particular by way of a classifier module, to determine whether they were generated interactively by a user request or automatically. Each preprocessing job is placed in a queue in accordance with the classification, in particular by way of an execution coordination module of the system. Data processing resources for job execution are assigned to each preprocessing job taking account of the classification, in particular by way of a resource allocation module of the system, with interactive preprocessing jobs being handled with higher priority than automatic preprocessing jobs.06-23-2011
20110154352MEMORY MANAGEMENT SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT - According to one aspect of the present disclosure a method and technique for managing memory access is disclosed. The method includes setting a memory databus utilization threshold for each of a plurality of processors of a data processing system to maintain memory databus utilization of the data processing system at or below a system threshold. The method also includes monitoring memory databus utilization for the plurality of processors and, in response to determining that memory databus utilization for at least one of the processors is below its threshold, reallocating at least a portion of unused databus utilization from the at least one processor to at least one of the other processors.06-23-2011
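The reallocation step above can be sketched as moving unused databus budget from under-threshold processors to processors at or above their threshold. The equal-share redistribution policy and the data shapes are assumptions; the abstract specifies only that unused utilization is reallocated while the system stays at or below its overall threshold:

```python
def reallocate_databus(thresholds, usage):
    """Shift unused memory-databus budget from processors running below
    their threshold to processors at or above theirs.

    thresholds/usage map processor id -> percent of databus bandwidth.
    Returns a new per-processor threshold map; total budget is preserved.
    """
    donors = {p: thresholds[p] - usage[p]
              for p in thresholds if usage[p] < thresholds[p]}
    receivers = [p for p in thresholds if usage[p] >= thresholds[p]]
    if not donors or not receivers:
        return dict(thresholds)
    spare = sum(donors.values())
    new_limits = dict(thresholds)
    for p, slack in donors.items():
        new_limits[p] -= slack          # donor keeps only what it uses
    share = spare / len(receivers)
    for p in receivers:
        new_limits[p] += share          # spread slack across busy processors
    return new_limits
```

Because the donated slack exactly equals the received slack, the sum of per-processor limits, and hence the system-level databus utilization ceiling, is unchanged.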
20110154350AUTOMATED CLOUD WORKLOAD MANAGEMENT IN A MAP-REDUCE ENVIRONMENT - A computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task associated with a computing job including a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from an assignment of the highest-priority task to the first worker cloud computing device based upon ownership information associated with the computing job and ownership information associated with at least one other task assigned to the first worker cloud computing device. The highest-priority task is assigned to the first worker cloud computing device in response to determining that the ownership conflict would not result from the assignment of the highest-priority task to the first worker cloud computing device.06-23-2011
20110154348METHOD OF EXPLOITING SPARE PROCESSORS TO REDUCE ENERGY CONSUMPTION - A method, system, and computer program product for reducing power and energy consumption in a server system with multiple processor cores is disclosed. The system may include an operating system for scheduling user workloads among a processor pool. The processor pool may include active licensed processor cores and inactive unlicensed processor cores. The method and computer program product may reduce power and energy consumption by including steps and sets of instructions activating spare cores and adjusting the operating frequency of processor cores, including the newly activated spare cores to provide equivalent computing resources as the original licensed cores operating at a specified clock frequency.06-23-2011
20110078698METHOD FOR RECONCILING MAPPINGS IN DYNAMIC/EVOLVING WEB-ONTOLOGIES USING CHANGE HISTORY ONTOLOGY - The present invention is directed to reconciliation/reengineering of mappings in dynamic/evolving ontologies. Mappings are established among different ontologies for resolving terminological and conceptual incompatibilities and supporting information exchange. As an ontology evolves from one consistent state to another, the existing mappings of the domain ontology with other ontologies become unreliable and stale, so mapping evolution is required. The present invention uses a Change History Log of ontology changes to drastically reduce the time required for (re)establishing mappings among ontologies, achieving higher accuracy, and eliminating staleness in mappings. It is valid for more than two ontologies with local, centralized, and distributed Change History Logs.03-31-2011
20110078696WORK QUEUE SELECTION ON A LOCAL PROCESSOR WITHIN A MULTIPLE PROCESSOR ARCHITECTURE - A method and system is disclosed for selecting a work queue associated with a processor within a multiple processor architecture to assign a new task. A local and a remote queue availability flag are maintained to indicate the relative size of work queues, in relation to a mean queue size, for each processor in a multiple processor architecture. In determining to which processor to assign a task, the processor evaluates its own queue size by examining its local queue availability flag and evaluates other processors' queue sizes by examining their remote queue availability flags. The local queue availability flags are maintained asynchronously from task assignment. Remote flags are maintained at the time of task assignment. The presented algorithm provides improved local processor queue size determinations in systems where task distribution processes execute with lower priorities than other tasks.03-31-2011
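The flag scheme above can be sketched as follows. The "no larger than the mean" availability test and the local-first fallback order are assumptions; the abstract states only that flags encode queue size relative to the mean and that the local flag is consulted before remote ones:

```python
def queue_flags(queue_sizes):
    """Recompute availability flags: a processor is flagged available
    when its work queue is no larger than the mean queue size."""
    mean = sum(queue_sizes.values()) / len(queue_sizes)
    return {p: size <= mean for p, size in queue_sizes.items()}

def choose_processor(local_id, flags):
    """Prefer the local queue when its flag says it is available;
    otherwise fall back to any remotely flagged processor."""
    if flags.get(local_id):
        return local_id
    for p, available in flags.items():
        if available and p != local_id:
            return p
    return local_id  # everyone is busy; keep the task local
```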
20120304190Intelligent Memory Device With ASCII Registers - An ASCII-based processing system is disclosed. A memory is divided into a plurality of logical partitions. Each partition has a range of memory addresses and includes information associated with a particular task. Task information includes contents of task state register and one or more task data registers, with each task data register having an ASCII name. Each task data register is successively labeled with a unique alphabetic character label starting with the character ‘A.’ A dataflow unit within the processing system is configured to manage a mapping between registers with ASCII names and the memory addresses of a particular task. Task instructions can include ASCII characters that indicate a request for resources and indicate the ASCII-character designated names of task data registers on which the task instruction operates. A processing element receiving the task instruction performs the operation indicated by the ASCII operator code on the indicated task data registers.11-29-2012
20100125851APPARATUS, METHOD, AND SYSTEM TO PROVIDE A MULTI-CORE PROCESSOR FOR AN ELECTRONIC GAMING MACHINE (EGM) - An electronic gaming machine (EGM) implements a multi-core processor. A first of the processor cores is adapted to perform or otherwise control a first set of operations. The first set of operations can include, for example, game manager operations and other operations of the EGM that are more time-sensitive. A second one of the processor cores is adapted to perform or otherwise control a second set of operations. The second set of operations can include, for example, operations related to multimedia presentation associated with the running/playing of a game and/or other operations of the EGM that are not time-sensitive or are otherwise less time-sensitive than the operations performed/controlled by the first processor core. Each of the processor cores may run an operating system that matches the needs of its respective processor core.05-20-2010
20120304188Scheduling Flows in a Multi-Platform Cluster Environment - Techniques for scheduling multiple flows in a multi-platform cluster environment are provided. The techniques include partitioning a cluster into one or more platform containers associated with one or more platforms in the cluster, scheduling one or more flows in each of the one or more platform containers, wherein the one or more flows are created as one or more flow containers, scheduling one or more individual jobs into the one or more flow containers to create a moldable schedule of one or more jobs, flows and platforms, and automatically converting the moldable schedule into a malleable schedule.11-29-2012
20110072436RESOURCE OPTIMIZATION FOR REAL-TIME TASK ASSIGNMENT IN MULTI-PROCESS ENVIRONMENTS - A novel and useful system and method of decentralized decision-making for real-time scheduling in a multi-process environment. For each process step and/or resource capable of processing a particular step, a service index is calculated. The calculation takes into account several measures, such as business-level measures, operational measures and employee-level measures. The decision of which process step a resource should next work on or what step to assign to a resource is based on the service index calculation and, optionally, other production factors. In one embodiment, the resource is assigned the process step with the maximal service index. Alternatively, when a resource becomes available, all process steps the resource is capable of processing are presented in order of descending service index. The resource then selects which process step to work on next.03-24-2011
20110078699COMPUTER SYSTEM WITH DUAL OPERATING MODES - A system switches between non-secure and secure modes by making processes, applications, and data for the non-secure mode unavailable to the secure mode and vice versa. The process thread run queue is modified to include a state flag for each process that indicates whether the process is a secure or non-secure process. A process scheduler traverses the queue and only allocates time to processes that have a state flag that matches the current mode. Running processes are marked to be idled and are flagged as unrunnable, depending on the security mode, when the process reaches an intercept point. The scheduler is switched to allow only threads whose flag corresponds to the active security mode to be run.03-31-2011
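The mode-matched queue traversal this abstract describes can be sketched as a simple run-queue filter. This is an illustrative sketch, not the patented implementation; the `Thread` class, flag values, and data layout are assumptions.

```python
from dataclasses import dataclass

SECURE, NON_SECURE = "secure", "non-secure"

@dataclass
class Thread:
    name: str
    mode: str            # per-process state flag: secure or non-secure
    runnable: bool = True

def schedulable(run_queue, current_mode):
    """Traverse the queue and keep only runnable threads whose state
    flag matches the active security mode; the rest get no CPU time."""
    return [t for t in run_queue if t.runnable and t.mode == current_mode]

queue = [Thread("banking", SECURE), Thread("browser", NON_SECURE),
         Thread("keystore", SECURE, runnable=False)]
print([t.name for t in schedulable(queue, SECURE)])
```

Switching modes then amounts to changing `current_mode`: idled or mismatched threads stay in the queue but are never allocated time.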
20110078697OPTIMAL DEALLOCATION OF INSTRUCTIONS FROM A UNIFIED PICK QUEUE - Systems and methods for efficient out-of-order dynamic deallocation of entries within a shared storage resource in a processor. A processor comprises a unified pick queue that includes an array configured to dynamically allocate any entry of a plurality of entries for a decoded and renamed instruction. This instruction may correspond to any available active threads supported by the processor. The processor includes circuitry configured to determine whether an instruction corresponding to an allocated entry of the plurality of entries is dependent on a speculative instruction and whether the instruction has a fixed instruction execution latency. In response to determining the instruction is not dependent on a speculative instruction, the instruction has a fixed instruction execution latency, and said latency has transpired, the circuitry may deallocate the instruction from the allocated entry.03-31-2011
20110072439DECODING DEVICE, RECORDING MEDIUM, AND DECODING METHOD FOR CODED DATA - According to one embodiment, a decoding device includes a storage section, a control section, and a decoding processing section. The storage section stores control information showing the progress state of process stages of a decoding process for a plurality of processing data included in the coded data. The control section allocates process stages corresponding to executable processing data, which is executable in parallel, to a processor on the basis of the control information, a dependence relation between the processing data in the decoding process, and a dependence relation between the process stages. The decoding processing section executes the allocated process stages corresponding to the executable processing data in parallel.03-24-2011
20110072438FAST MAPPING TABLE REGISTER FILE ALLOCATION ALGORITHM FOR SIMT PROCESSORS - One embodiment of the present invention sets forth a technique for allocating register file entries included in a register file to a thread group. A request to allocate a number of register file entries to the thread group is received. A required number of mapping table entries included in a register file mapping table (RFMT) is determined based on the request, where each mapping table entry included in the RFMT is associated with a different plurality of register file entries included in the register file. The RFMT is parsed to locate an available mapping table entry in the RFMT for each of the required mapping table entries. For each available mapping table entry, a register file pointer is associated with an address that corresponds to a first register file entry in the plurality of register file entries associated with the available mapping table entry.03-24-2011
20110072437COMPUTER JOB SCHEDULER WITH EFFICIENT NODE SELECTION - The present invention provides a method, program product, and information processing system that efficiently dispatches jobs from a job queue. The jobs are dispatched to the computational nodes in the system. First, for each job, the number of nodes required to perform the job and the required computational resources for each of these nodes are determined. Then, for each node required, a node is selected to determine whether a job scheduler has a record indicating if this node meets the required computational resource requirement. If no record exists, the job scheduler analyzes whether the node meets the computational resource requirements given that other jobs may be currently executing on that node. The result of this determination is recorded. If the node does meet the computational resource requirement, the node is assigned to the job. If the node does not meet the resource requirement, a next available node is selected. The method continues until all required nodes are assigned and the job is dispatched to the assigned nodes. Alternatively, if the number of required nodes is not available, it is indicated that the job cannot be run at this time.03-24-2011
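The record-keeping described above (analyze a node's fitness once, then reuse the answer on later queries) can be sketched as a memoized check. The names and the single-CPU resource model are assumptions for illustration, not the patent's implementation.

```python
def dispatch(job, nodes, record):
    """job = (nodes_required, cpu_per_node); nodes maps node name -> free CPU.
    Returns the assigned node list, or None if the job cannot run now.
    `record` caches fitness determinations so they are made only once."""
    needed, cpu = job
    assigned = []
    for name, free in nodes.items():
        if len(assigned) == needed:
            break
        key = (name, cpu)
        if key not in record:          # no record yet: analyze the node once
            record[key] = free >= cpu
        if record[key]:
            assigned.append(name)      # node meets the requirement: assign it
    return assigned if len(assigned) == needed else None

record = {}
nodes = {"n1": 4, "n2": 1, "n3": 8}
print(dispatch((2, 2), nodes, record))   # n2 lacks capacity; n1 and n3 qualify
```

A second dispatch call with the same per-node requirement reuses `record` instead of re-analyzing each node.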
20110035753MECHANISM FOR CONTINUOUSLY AND UNOBTRUSIVELY VARYING STRESS ON A COMPUTER APPLICATION WHILE PROCESSING REAL USER WORKLOADS - A mechanism for varying stress on a software application while processing real user workloads is disclosed. A method of embodiments of the invention includes configuring application resources for a recovery configuration whose service levels are satisfactory. The application resources are associated with the software application. The method further includes configuring the application resources for stress configurations to affect service levels, and transitioning the application resources from the recovery configuration to a stress configuration for a time duration, after which the application resources of the stress configuration are transitioned back to the recovery configuration. The method further includes determining a next stress configuration and a time duration combination to vary stress such that user service levels are unobtrusively affected by limiting the time duration in inverse relation to an uncertainty in predicting the service level impact of the stress configuration.02-10-2011
20110061058TASK SCHEDULING METHOD AND MULTI-CORE SYSTEM - A task scheduling method and multi-core system according to an embodiment of the present invention comprise: when selecting, from among the tasks in an executable state, a task to be set in the execution state with a microprocessor allocated to it, determining whether at least one task belongs to a young generation, i.e., whether the number of refills performed between the task's transition from the execution state to a standby state (upon release of the microprocessor) and the point of scheduling is smaller than a predetermined number; and, when at least one young-generation task is present, allocating the microprocessor to a task selected from among the young-generation tasks.03-10-2011
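The young-generation preference can be sketched as below. The abstract does not say how to choose within the young generation, so picking the lowest refill count is an assumption of this sketch, as are all names.

```python
def pick_task(ready, refill_counts, young_limit):
    """Prefer tasks in the 'young generation' (refill count since entering
    the standby state below young_limit); fall back to any ready task.
    Ties are broken by lowest refill count (an assumption, not the patent)."""
    young = [t for t in ready if refill_counts[t] < young_limit]
    pool = young if young else ready
    return min(pool, key=lambda t: refill_counts[t]) if pool else None

counts = {"a": 5, "b": 1, "c": 9}
print(pick_task(["a", "b", "c"], counts, young_limit=3))   # only "b" is young
```

The intuition is cache-friendliness: a task that needed few refills recently likely still has warm state worth reusing.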
20110061057Resource Optimization for Parallel Data Integration - For optimizing resources for a parallel data integration job, a job request is received, which specifies a parallel data integration job to deploy in a grid. Grid resource utilizations are predicted for hypothetical runs of the specified job on respective hypothetical grid resource configurations. This includes automatically predicting grid resource utilizations by a resource optimizer module responsive to a model based on a plurality of actual runs of previous jobs. A grid resource configuration is selected for running the parallel data integration job, which includes the optimizer module automatically selecting a grid resource configuration responsive to the predicted grid resource utilizations and an optimization criterion.03-10-2011
20120079496COMPUTING SYSTEM AND JOB ALLOCATION METHOD - A computing system includes a plurality of computing apparatuses, a job allocation information storage unit, a position information storage unit, and a job allocation unit. The job allocation information storage unit stores job allocation information indicating the job allocation status of each of the plurality of computing apparatuses. The job allocation status is either active or inactive. The position information storage unit stores position information indicating relative positions of the plurality of computing apparatuses. The job allocation unit refers to the job allocation information and the position information, selects a candidate inactive computing apparatus on the basis of the distance between each pair of an inactive computing apparatus and an active computing apparatus, and allocates a job to the candidate inactive computing apparatus.03-29-2012
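The distance-based candidate selection can be sketched as follows. The abstract only says the choice is "on the basis of a distance"; choosing the inactive apparatus nearest to the active ones, and using Manhattan distance on 2-D positions, are both assumptions of this sketch.

```python
def pick_inactive(positions, active, inactive):
    """Choose the inactive apparatus whose minimum Manhattan distance to
    any active apparatus is smallest (metric and objective are assumed)."""
    def dist(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return abs(x1 - x2) + abs(y1 - y2)
    return min(inactive,
               key=lambda n: min((dist(n, a) for a in active), default=0))

pos = {"c1": (0, 0), "c2": (0, 1), "c3": (5, 5)}
print(pick_inactive(pos, active=["c1"], inactive=["c2", "c3"]))  # c2 is nearest
```

Placing new jobs near already-active apparatuses would, for example, keep communicating jobs physically close in the interconnect.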
20130160021SIGNALING, ORDERING, AND EXECUTION OF DYNAMICALLY GENERATED TASKS IN A PROCESSING SYSTEM - One embodiment of the present invention sets forth a technique that enables the insertion of generated tasks into a scheduling pipeline of a multiple-processor system, allowing a compute task that is being executed to dynamically generate a dynamic task and notify a scheduling unit of the multiple-processor system without intervention by a CPU. A reflected notification signal is generated in response to a write request when data for the dynamic task is written to a queue. Additional reflected notification signals are generated for other events that occur during execution of a compute task, e.g., to invalidate cache entries storing data for the compute task and to enable scheduling of another compute task.06-20-2013
20130160022TRANSACTION MANAGER FOR NEGOTIATING LARGE TRANSACTIONS - A computer receives a transaction request that includes information identifying computer resource requirements for the transaction, a resource policy, and a transaction failure policy. The computer determines if sufficient computer resources are available to complete the transaction request based on the received information identifying resource requirements for the transaction. If there are not sufficient computer resources available to complete the transaction request, the computer applies the resource policy to the transaction request and processes the transaction request. If the processed transaction request fails to complete successfully, the computer applies the transaction failure policy to the processed transaction request.06-20-2013
20110016471Balancing Resource Allocations Based on Priority - Balancing resource allocations based on priority may be provided. First, a plurality of repositories may be divided into at least two categories. Next, a first portion of computing resources may be dedicated to a first one of the at least two categories. Then a second portion of the computing resources may be dedicated to a second one of the at least two categories. A crawl may then be performed on the plurality of repositories with the computing resources.01-20-2011
20130160019Method for Resuming an APD Wavefront in Which a Subset of Elements Have Faulted - A method resumes an accelerated processing device (APD) wavefront in which a subset of elements have faulted. A restore command for a job including a wavefront is received. A list of context states for the wavefront is read from a memory associated with an APD. An empty shell wavefront is created for restoring the list of context states. A portion of not acknowledged data is masked over a portion of acknowledged data within the restored wavefronts.06-20-2013
20130160020GENERATIONAL THREAD SCHEDULER - Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread of the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had its request satisfied.06-20-2013
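The grant/reserve/block cycle can be modeled directly. This is a single-threaded model of the hardware logic, assuming FIFO service of reservations; class and method names are invented for the sketch.

```python
class GenerationalScheduler:
    def __init__(self):
        self.holder = None      # thread currently granted the resource
        self.reserved = []      # reservations for the current generation
        self.served = set()     # threads already served this generation

    def request(self, t):
        if t in self.served:            # already served: blocked until the
            return "blocked"            # whole generation has been satisfied
        if self.holder is None:
            self.holder = t
            return "granted"
        if t not in self.reserved:
            self.reserved.append(t)     # reservation, FIFO within generation
        return "reserved"

    def release(self, t):
        assert t == self.holder
        self.served.add(t)
        if self.reserved:
            self.holder = self.reserved.pop(0)      # next reserved thread
        else:
            self.holder, self.served = None, set()  # generation done: clear state

s = GenerationalScheduler()
print(s.request("A"), s.request("B"))   # A is granted, B gets a reservation
s.release("A")
print(s.request("A"))                   # A is blocked until B has been served
```

Once B releases, the generation state clears and A may be granted access again, which is what prevents a fast thread from starving the others.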
20110161973ADAPTIVE RESOURCE MANAGEMENT - Allocation of resources across multiple consumers allows efficient utilization of shared resources. Observed usages of resources by consumers over time intervals are used to determine a total throughput of resources by the consumers. The total throughput of resources is used to determine allocation of resources for a subsequent time interval. The consumers are associated with priorities used to determine their allocations. Minimum and maximum resource guarantees may be associated with consumers. The resource allocation aims to allocate resources based on the priorities of the consumers while aiming to avoid starvation by any consumer. The resource allocation allows efficient usage of network resources in a database storage system storing multiple virtual databases.06-30-2011
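A priority-weighted split with per-consumer minimum and maximum guarantees, as described above, can be sketched in a few lines. This one-pass clamp is an approximation for illustration; the abstract does not give the exact algorithm, and all field names are assumptions.

```python
def allocate(total, consumers):
    """Split `total` units in proportion to priority, then clamp each share
    to the consumer's [min, max] guarantee (one-pass approximation)."""
    weight = sum(c["priority"] for c in consumers.values())
    out = {}
    for name, c in consumers.items():
        share = total * c["priority"] / weight
        out[name] = max(c["min"], min(c["max"], share))
    return out

demo = {"db1": {"priority": 3, "min": 10, "max": 60},
        "db2": {"priority": 1, "min": 20, "max": 60}}
print(allocate(100, demo))   # db1's proportional share is capped at its max
```

A production allocator would redistribute the units freed by the clamping, but the minimum guarantee already shows how starvation of a low-priority consumer is avoided.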
20100281486Enhanced scheduling, priority handling and multiplexing method and system - A system and method for enhancing scheduling/priority handling and multiplexing of transmitted data of different logical channels includes a receiver and a processor. The receiver receives a payload unit. The processor processes the payload unit and enhances scheduling/priority handling and multiplexing from different logical channels. The processor calculates the data that can be transmitted with the available resource for each logical channel, prioritizes the logical channels in decreasing priority order, performs a first round of resource allocation without partition, prioritizes, in strictly decreasing priority order, the logical channels with data remaining after the first-round resource allocation, and performs a second round of resource allocation with partition. As such, scheduling/priority handling and multiplexing in a multiple-carrier system are carried out so as to increase the efficiency of resource allocation.11-04-2010
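The two-round allocation can be sketched as below. Treating the first round as all-or-nothing per channel ("without partition") and the second round as allowing a partial final grant ("with partition") is this sketch's reading of the abstract; the channel tuple layout is invented.

```python
def allocate_resource(channels, capacity):
    """channels: list of (name, priority, buffered_bytes, guaranteed_bytes),
    lower priority value = higher priority. Round 1 serves each channel's
    guaranteed amount in priority order without splitting; round 2 shares
    what remains, again by priority, allowing a partial final grant."""
    grants = {name: 0 for name, *_ in channels}
    by_prio = sorted(channels, key=lambda c: c[1])
    for name, _, data, guaranteed in by_prio:      # first round: no partition
        g = min(guaranteed, data)
        if g <= capacity:
            grants[name] += g
            capacity -= g
    for name, _, data, _ in by_prio:               # second round: with partition
        g = min(data - grants[name], capacity)
        grants[name] += g
        capacity -= g
    return grants

demo = [("voice", 1, 300, 200), ("web", 2, 500, 100)]
print(allocate_resource(demo, capacity=600))
```

With 600 units available, both channels first get their guaranteed amounts, and the leftover 300 units are shared by priority, the lower-priority channel receiving whatever is left.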
20100281487SYSTEMS AND METHODS FOR MOBILITY SERVER ADMINISTRATION - An administration server of an administration service assigns attributes to objects by a plug-in of the administration service. The plug-in implements a method of a functionality set and the method is callable by the administration service to perform the assigning. Additionally or alternatively, the administration server triggers a reconciliation event by changing the assignment of an attribute of the users that comprise objects of plug-ins; determines a scope of the users and which objects are affected by changing the assignment; and reconciles conflicting assignments. Additionally or alternatively, the administration server adds tasks by the plug-ins to a job created by the plug-ins with the tasks performing the assigning; and removes tasks from the job to optimize it.11-04-2010
20100100887METHOD AND DEVICE FOR ENCAPSULATING APPLICATIONS IN A COMPUTER SYSTEM FOR AN AIRCRAFT - The object of the invention is in particular a device for execution of applications (04-22-2010
20100064291System and Method for Reducing Execution Divergence in Parallel Processing Architectures - A method for reducing execution divergence among a plurality of threads executable within a parallel processing architecture includes an operation of determining, among a plurality of data sets that function as operands for a plurality of different execution commands, a preferred execution type for the collective plurality of data sets. A data set is assigned from a data set pool to a thread which is to be executed by the parallel processing architecture, the assigned data set being of the preferred execution type, whereby the parallel processing architecture is operable to concurrently execute a plurality of threads, the plurality of concurrently executable threads including the thread having the assigned data set. An execution command for which the assigned data functions as an operand is applied to each of the plurality of threads.03-11-2010
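Choosing a preferred execution type for the collective data-set pool and filling a batch of threads with matching data sets, as described above, can be sketched like this. Taking the most common type as "preferred" and the tuple layout are assumptions of the sketch.

```python
from collections import Counter

def assign_warp(pool, warp_size):
    """Pick the most common execution type in the data-set pool and fill a
    batch of concurrently launched threads with data sets of that type, so
    the threads all take the same code path (reduced divergence)."""
    preferred, _ = Counter(t for t, _ in pool).most_common(1)[0]
    batch = [d for d in pool if d[0] == preferred][:warp_size]
    for d in batch:
        pool.remove(d)        # assigned data sets leave the pool
    return preferred, batch

pool = [("shade", 1), ("trace", 2), ("shade", 3), ("shade", 4)]
print(assign_warp(pool, warp_size=2))
```

Threads launched together then execute the same command, which on SIMT hardware avoids serializing divergent branches.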
20120204186PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization.08-09-2012
20120124592METHODS OF PERSONALIZING SERVICES VIA IDENTIFICATION OF COMMON COMPONENTS - Methods and arrangements for more efficiently enhancing the personalization and customization of services while avoiding an undue overburdening of personnel, infrastructure or resources. An input service component comprising a plurality of tasks is assimilated, similarity among the tasks is determined, and output service components are routed to resources based on similarity among the tasks, the service components each comprising a subgroup of similar tasks.05-17-2012
20090241122SELECTING A NUMBER OF PROCESSING RESOURCES TO RUN AN APPLICATION EFFECTIVELY WHILE SAVING POWER - Selecting a number of processors to run an application in order to save power is performed. A number of code segments are selected from an application. Each of the code segments are executed using two or more of a plurality of processing resource combinations. Each of the code segments are scored with a performance value. The performance value indicates a performance of each code segment using each of the two or more processing resource combinations. A selection is made of one of the two or more processing resource combinations based on an associated performance value and a number of processing resources used to execute the code segment. The application is then executed using the selected processing resource combination.09-24-2009
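Selecting among scored processing-resource combinations by weighing performance against processor count, as described above, can be sketched as a simple penalized maximum. The linear power penalty and its weight are assumptions; the abstract does not specify how performance and resource count are traded off.

```python
def best_combination(scores, power_weight=0.1):
    """scores maps core count -> measured performance value for the code
    segments; pick the combination balancing performance against the
    number of processors used (linear penalty is an assumption)."""
    return max(scores, key=lambda n: scores[n] - power_weight * n)

# doubling from 2 to 4 cores barely helps, so 2 cores win once power counts
print(best_combination({1: 1.0, 2: 1.9, 4: 2.0}))
```

The effect is the one the abstract targets: a combination with marginally lower performance but far fewer processors is preferred, saving power.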
20090241121Device, Method and Computer Program Product for Monitoring Collaborative Tasks - A method for controlling collaborative tasks, the method includes: receiving a request to initiate a collaborative task that is associated with an assignment; and responding to the request in accordance with an assignment resource utilization policy.09-24-2009
20080320485Logic for Synchronizing Multiple Tasks at Multiple Locations in an Instruction Stream - Logic (also called “synchronizing logic”) in a co-processor (that provides an interface to memory) receives a signal (called a “declaration”) from each of a number of tasks, based on an initial determination of one or more paths (also called “code paths”) in an instruction stream (e.g. originating from a high-level software program or from low-level microcode) that a task is likely to follow. Once a task (also called “disabled” task) declares its lack of a future need to access a shared data, the synchronizing logic allows that shared data to be accessed by other tasks (also called “needy” tasks) that have indicated their need to access the same. Moreover, the synchronizing logic also allows the shared data to be accessed by the other needy tasks on completion of access of the shared data by a current task (assuming the current task was also a needy task).12-25-2008
20080320483RESOURCE MANAGEMENT SYSTEM AND METHOD - A resource management system is provided, implemented between a service bundle developer and provider and a service bundle user. A resource requirement determining device determines a system resource requirement for a service bundle provided by the service bundle developer and provider, and generates resource requirement information corresponding to the service bundle. A processor receives information on system resource utilization status from the service bundle user and determines whether the available resource of the service bundle user is sufficient for the resource requirement of the service bundle. When the available resource of the service bundle user is insufficient, the processor generates a waiting queue and adds the service bundle into the waiting queue. When the available resource of the service bundle user is sufficient, the processor installs the service bundle specified in the waiting queue in the service bundle user. A storage device stores the waiting queue and the corresponding resource requirement information.12-25-2008
20080320482MANAGEMENT OF GRID COMPUTING RESOURCES BASED ON SERVICE LEVEL REQUIREMENTS - Generally speaking, systems, methods and media for management of grid computing resources based on service level requirements are disclosed. Embodiments of a method for scheduling a task on a grid computing system may include updating a job model by determining currently requested tasks and projecting future task submissions and updating a resource model by determining currently available resources and projecting future resource availability. The method may also include updating a financial model based on the job model, resource model, and one or more service level requirements of an SLA associated with the task, where the financial model includes an indication of costs of a task based on the service level requirements. The method may also include scheduling performance of the task based on the updated financial model and determining whether the scheduled performance satisfies the service level requirements of the task and, if not, performing a remedial action.12-25-2008
20080229321QUALITY OF SERVICE SCHEDULING FOR SIMULTANEOUS MULTI-THREADED PROCESSORS - A method and system for providing quality of service guarantees for simultaneous multithreaded processors are disclosed. Hardware and operating system communicate with one another providing information relating to thread attributes for threads executing on processing elements. The operating system controls scheduling of the threads based at least partly on the information communicated and provides quality of service guarantees.09-18-2008
20080229320Method, an apparatus and a system for controlling of parallel execution of services - According to an aspect of an embodiment, a method is provided for controlling a plurality of nodes executing a plurality of services, each of the services comprising a plurality of job nets which are to be executed sequentially, the method comprising: allocating at least one node for each of said services and initiating execution of said services by said nodes; obtaining weight information of the job nets currently being executed for each of the services; and dynamically changing the allocation of the nodes for the services in accordance with the weight information.09-18-2008
20080229318Multi-objective allocation of computational jobs in client-server or hosting environments - A method of processing a computational job with a plurality of processors is disclosed. A request to process a job is received, where the job has a priority level associated with the job. A first group of the processors is designated as being available to process the job, where the number of processors in the first group is based on the priority level associated with the job. A second group of the processors is designated as being available to process the job, where for each processor in the second group a current utilization rate of the processor is less than a second predetermined utilization rate. Then, the job is processed with one or more of the processors selected from the first group of processors and the second group of processors.09-18-2008
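The two-group designation above can be sketched as follows. Interpreting the priority level directly as the size of the first (priority-reserved) group, and fixing the utilization cap, are assumptions of this sketch; the abstract only says group sizes and membership derive from priority and current utilization.

```python
def eligible_processors(procs, priority, util_cap=0.5):
    """procs: ordered list of (id, current_utilization). Group 1: the first
    `priority` processors, reserved by job priority; group 2: any processor
    whose current utilization is below util_cap. The job may run on the union."""
    group1 = {p for p, _ in procs[:priority]}
    group2 = {p for p, u in procs if u < util_cap}
    return group1 | group2

procs = [("p0", 0.9), ("p1", 0.2), ("p2", 0.8), ("p3", 0.1)]
print(sorted(eligible_processors(procs, priority=1)))
```

A high-priority job thus keeps its reserved processors even when they are busy, while any job can additionally use lightly loaded ones.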
20080320484METHOD AND SYSTEM FOR BALANCING THE LOAD AND COMPUTER RESOURCES AMONG COMPUTERS - A method and system for balancing the load of computer resources among a plurality of computers having consumers consuming the resources is disclosed. After defining the lower threshold of the consumption level of the resources and obtaining the consumption level of the resources for each of the consumers and for each of said computers, the consumption level for each of the computers is compared during a period with its associated lower threshold. Whenever a computer having a consumption level of the resources higher than the lower threshold is identified, a new layout of computer resources for each of the consumers is determined. Consumers are then shifted from their current location in the computer to a corresponding location in another computer according to the layout, so that the consumption level of the resource(s) for a computer may be reduced.12-25-2008
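The threshold check and consumer-shifting loop above can be sketched greedily. The abstract leaves the layout algorithm open, so moving the lightest consumer to the least-loaded computer is this sketch's choice; the data layout is invented.

```python
def rebalance(hosts, threshold):
    """hosts maps host -> {consumer: load}. When a host's total consumption
    exceeds the lower threshold, shift its lightest consumers to the
    least-loaded host (greedy layout; other layouts are possible)."""
    total = lambda h: sum(hosts[h].values())
    for h in list(hosts):
        while total(h) > threshold and len(hosts[h]) > 1:
            victim = min(hosts[h], key=hosts[h].get)       # lightest consumer
            target = min(hosts, key=total)                 # least-loaded host
            if target == h:
                break
            hosts[target][victim] = hosts[h].pop(victim)   # shift the consumer
    return hosts

demo = {"A": {"c1": 5, "c2": 1}, "B": {}}
print(rebalance(demo, threshold=4))
```

Here host A exceeds the threshold, so its light consumer c2 is shifted to the idle host B, reducing A's consumption level.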
20090125911RESOURCE MANAGEMENT PROFILES - A resource management graphical user interface for a computer-controlled printing system in a networked environment enables an operator to create, modify, and apply resource management profiles to coordinate resource allocation within the printing system. The user interface displays a current resource management profile, which includes printing system resource allocations associated with specific tasks. A resource profile list includes at least one profile name, corresponding to a task type. Profiles associated with the task type are presented and controls are provided to enable the operator to set allocations for component resource usage. The operator is also presented with operational options, including deleting a profile, approving a profile, applying a profile to a print job or series of print jobs, saving a new profile, replacing an existing profile, and canceling a profile modification. The user interface transmits instructions to apply a profile to a printing system for processing of print jobs.05-14-2009
20110161978JOB ALLOCATION METHOD AND APPARATUS FOR A MULTI-CORE SYSTEM - A method and apparatus for efficiently allocating jobs to processing cores included in a computing system are provided. The multi-core system includes a plurality of cores that may collect performance information of each respective core while the cores are executing a requested task in parallel. The multi-core system allocates additional jobs of the requested task to the cores based on the performance information and the number of jobs remaining.06-30-2011
20110161974Methods and Apparatus for Parallelizing Heterogeneous Network Communication in Smart Devices - The present disclosure relates to devices, implementations and techniques for task scheduling. Specifically, task scheduling in an electronic device that has a multi-processing environment and supports network interface devices.06-30-2011
20110161977METHOD AND DEVICE FOR DATA PROCESSING - A coupling of a traditional processor, in particular a sequential processor, with a reconfigurable field of data processing units, in particular a runtime-reconfigurable field of data processing units, is described.06-30-2011
20110161976METHOD TO REDUCE QUEUE SYNCHRONIZATION OF MULTIPLE WORK ITEMS IN A SYSTEM WITH HIGH MEMORY LATENCY BETWEEN PROCESSING NODES - A method efficiently dispatches/completes a work element within a multi-node, data processing system that has a global command queue (GCQ) and at least one high latency node. The method comprises: at the high latency processor node, work scheduling logic establishing a local command/work queue (LCQ) in which multiple work items for execution by local processing units can be staged prior to execution; a first local processing unit retrieving via a work request a larger chunk size of work than can be completed in a normal work completion/execution cycle by the local processing unit; storing the larger chunk size of work retrieved in a local command/work queue (LCQ); enabling the first local processing unit to locally schedule and complete portions of the work stored within the LCQ; and transmitting a next work request to the GCQ only when all the work within the LCQ has been dispatched by the local processing units.06-30-2011
20110161975REDUCING CROSS QUEUE SYNCHRONIZATION ON SYSTEMS WITH LOW MEMORY LATENCY ACROSS DISTRIBUTED PROCESSING NODES - A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ.06-30-2011
20110161972GOAL ORIENTED PERFORMANCE MANAGEMENT OF WORKLOAD UTILIZING ACCELERATORS - A method, information processing system, and computer readable storage medium are provided for dynamically managing accelerator resources. A first set of hardware accelerator resources is initially assigned to a first information processing system, and a second set of hardware accelerator resources is initially assigned to a second information processing system. Jobs running on the first and second information processing systems are monitored. When one of the jobs fails to satisfy a goal, at least one hardware accelerator resource in the second set of hardware accelerator resources from the second information processing system are dynamically reassigned to the first information processing system.06-30-2011
20080216081System and Method For Enforcing Future Policies in a Compute Environment - The invention relates to a system, method and computer-readable medium, as well as grids and clusters managed according to the method described herein. An example embodiment relates to a method of processing a request for resources within a compute environment. The method is practiced by a system that contains modules configured or programmed to carry out the steps of the invention. The system receives a request for resources and generates a credential map for each credential associated with the request, each credential map comprising a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with the first type of resource mapping of all generated credential maps, and generates a second composite intersecting map that intersects the resource availability map with the second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map. The allocation or reservation for the request can then be made in a way that is optimal for parameters such as the earliest possible time given the available resources, while maintaining the constraints on the requestor.09-04-2008
20080301691METHOD FOR IMPROVING RUN-TIME EXECUTION OF AN APPLICATION ON A PLATFORM BASED ON APPLICATION METADATA - A method for improving run-time execution of an application on a platform based on application metadata is disclosed. In one embodiment, the method comprises loading a first information in a standardized predetermined format describing characteristics of at least one of the applications. The method further comprises generating the run-time manager, based on the first information, the run-time manager comprising at least two run-time sub-managers, each handling the management of a different resource. The information needed to generate the two run-time sub-managers is at least partially shared.12-04-2008
20080301694COMMUNICATION SCHEDULING WITHIN A PARALLEL PROCESSING SYSTEM - Within a data processing system, one or more register files are assigned to respective states of a graph for each of a plurality of clock cycles. A plurality of edges are inserted to form connections between the states of the graph, with respective weights being assigned to each of the edges. A best route through the graph is then determined based, at least in part, on the weights assigned to the edges.12-04-2008
20080301693BLOCK ALLOCATION TIMES IN A COMPUTER SYSTEM - A method and apparatus improve the block allocation time in a parallel computer system. A pre-load controller pre-loads blocks of hardware in a supercomputer cluster in anticipation of demand from a user application. In the preferred embodiments, the pre-load controller determines when to pre-load the compute nodes and the block size to allocate to the nodes based on pre-set parameters and previous use of the computer system. Further, in preferred embodiments, each block of compute nodes in the parallel computer system has a stored hardware status to indicate whether the block is being pre-loaded or has already been pre-loaded. In preferred embodiments, the hardware status is stored in a database connected to the computer's control system. In other embodiments, the compute nodes are remote computers in a distributed computer system.12-04-2008
20080301692FACILITATING ACCESS TO INPUT/OUTPUT RESOURCES VIA AN I/O PARTITION SHARED BY MULTIPLE CONSUMER PARTITIONS - At least one input/output (I/O) firmware partition is provided in a partitioned environment to facilitate access to I/O resources owned by the at least one I/O firmware partition. The I/O resources of an I/O firmware partition are shared by one or more other partitions of the environment, referred to as consumer partitions. The consumer partitions use the I/O firmware partition to access the I/O resources. Since the I/O firmware partitions are responsible for providing access to the I/O resources owned by those partitions, the consumer partitions are relieved of this task, reducing complexity and costs in the consumer partitions.12-04-2008
20080301689DISCRETE, DEPLETING CHIPS FOR OBTAINING DESIRED SERVICE LEVEL CHARACTERISTICS - The present invention provides discrete, depleting chips for allocating computational resources for obtaining desired service level characteristics, wherein discrete chips deplete from a maximum allocated amount but may, in an optional implementation, be allowed to be replenished through the purchase of additional chips. A number of chips are assigned to a requestor/party, known as a business unit (BU), which could be a department, or group providing like-functionality services. In one implementation, the chips themselves could represent base monetary units integrated over time.12-04-2008
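The chip-depletion scheme in the abstract above can be sketched in Python; the `ChipAccount` class and its method names are illustrative assumptions, not terminology from the patent:

```python
class ChipAccount:
    """Sketch of discrete, depleting chips: a business unit spends chips
    from a maximum allocated amount, and may optionally replenish its
    balance by purchasing additional chips."""

    def __init__(self, maximum):
        self.balance = maximum       # chips deplete from this maximum

    def spend(self, chips):
        if chips > self.balance:
            return False             # not enough chips for this service level
        self.balance -= chips
        return True

    def replenish(self, chips):
        self.balance += chips        # optional purchase of additional chips
```

A unit that exhausts its allocation is simply refused further service until it replenishes, which is how the scheme bounds resource consumption per business unit.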
20080301688METHOD, SYSTEM, AND PROGRAM PRODUCT FOR ALLOCATING A RESOURCE - The invention provides a method, system, and program product for allocating a resource among a plurality of groups based on the role of each group within an organizational model. A method according to the invention may include, for example, granting a number of groups a privilege to bid on a resource, the privilege being based on a role of each group within an organizational model, accepting a bid for the resource from one or more of the groups, determining whether two or more groups have made equal, highest bids, in such a case, accepting a second bid from the groups having made equal, highest bids, and awarding a right to the resource to the group making the highest bid for the resource.12-04-2008
20120311601Method and apparatus for implementing task-process-table based hardware control - Disclosed is a method for implementing task-process-table based hardware control, which includes dividing a task that has to be implemented by a hardware circuit into multiple sub-processes, and determining the depth of the task process table according to the number of the sub-processes; determining the bit width of the task process table and generating the task process table according to the control information of the hardware unit corresponding to each sub-process and the number (SPAN) of clock cycles occupied by hardware processing for the sub-process; and starting the hardware unit corresponding to each sub-process in the order of the sub-processes, under the control of the control information in the task process table, to complete the processing of each sub-process. A device for implementing hardware control is also disclosed. The disclosure enables precise control of the hardware control flow and is broadly applicable. For the hardware implementation of a task with a complex algorithm flow, the data processing flow is accurate and the development efficiency is improved.12-06-2012
20120311600INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A workload that can be processed with a resource amount available in a physical server is estimated. An information processing apparatus 20 includes a performance information storage unit 25 that stores information indicating each of plural types of workloads and a resource amount of the physical server allocated to each of the workloads when the workloads are run in a physical server 30, in a manner to be associated with each other, an acquiring unit 21 that acquires a resource amount available in the physical server 30, a comparison unit 22 that selects at least one stored workload, and compares the available resource amount with the resource amount associated with the selected workload, and a first extraction unit 23 that extracts the selected workload if the compared resource amount is less than or equal to the available resource amount.12-06-2012
20120311599INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processor includes processors of a plurality of types and a processing assignment module. The processing assignment module sequentially assigns basic modules to the processors if the processors are available based on the types of the processors. The type of a processor to which processing of each of the basic modules is preferentially assigned is specified in advance.12-06-2012
20120311598RESOURCE ALLOCATION FOR A PLURALITY OF RESOURCES FOR A DUAL ACTIVITY SYSTEM - Exemplary method, system, and computer program product embodiments for resource allocation of a plurality of resources for a dual activity system by a processor device, are provided. In one embodiment, by way of example only, each of the activities may be started at a static quota. The resource boundary may be increased for a resource request for at least one of the dual activities until a resource request for an alternative one of the at least one of the dual activities is rejected. In response to the rejection of the resource request for the alternative one of the at least one of the dual activities, a resource boundary for the at least one of the dual activities may be reduced, and a wait after decrease mode may be commenced until a current resource usage is one of less than and equal to the reduced resource boundary.12-06-2012
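The quota-and-boundary scheme in the abstract above can be sketched in Python. This is a simplified model under stated assumptions: the `DualActivityAllocator` name is invented, and the "rejection of the alternative activity" is approximated by checking whether growing one boundary would leave too little room for the peer's current usage:

```python
class DualActivityAllocator:
    """Sketch of the dual-activity scheme: each activity starts at a static
    quota; a boundary grows until the other activity would be starved, then
    shrinks and enters a wait-after-decrease mode."""

    def __init__(self, total, static_quota):
        self.total = total
        self.boundary = {"a": static_quota, "b": static_quota}
        self.usage = {"a": 0, "b": 0}
        self.wait_after_decrease = {"a": False, "b": False}

    def request(self, activity, amount=1):
        # A request is rejected once the activity hits its boundary.
        if self.usage[activity] + amount > self.boundary[activity]:
            return False
        self.usage[activity] += amount
        return True

    def grow(self, activity, amount=1):
        """Grow one activity's boundary; on conflict with the peer's usage,
        reduce the boundary instead and start waiting."""
        peer = "b" if activity == "a" else "a"
        if self.boundary[activity] + amount + self.usage[peer] <= self.total:
            self.boundary[activity] += amount
            return True
        # the peer would be rejected: reduce and wait until usage drops
        self.boundary[activity] = max(self.usage[activity],
                                      self.boundary[activity] - amount)
        self.wait_after_decrease[activity] = True
        return False
```

The wait-after-decrease flag models the patent's requirement that the reduced activity pause until its current usage falls to or below the reduced boundary.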
20120311597METHOD AND SYSTEM FOR INFINIBAND HOST CHANNEL ADAPTOR QUALITY OF SERVICE - A method for allocating resources of a host channel adapter includes the host channel adapter identifying an underlying function referenced in the first resource allocation request received from a virtual machine manager, determining that the first resource allocation request specifies a number of physical collect buffers (PCBs) allocated to the underlying function, allocating the number of PCBs to the underlying function, determining that the first resource allocation request specifies a number of virtual collect buffers (VCBs) allocated to the underlying function, and allocating the number of VCBs to the underlying function. The host channel adapter further receives command data for a command from the single virtual machine, determines that the underlying function has in use at least the number of PCBs when the command data is received, and drops the command data in the first command based on the underlying function having in use at least the number of PCBs.12-06-2012
20080276243Resource Management Platform - In client-server architectures, systems and methods for implementing an extensible resource management platform at a server are described. The extensible resource management platform is developed based on a plug-in based architecture which includes one or more subsystems for performing functions associated with resource management. Different implementations can be provided by new or different components or plug-ins. The resource management platform is thus a platform over which one or more functionalities can be further added to supplement existing and varying functions.11-06-2008
20110047553APPARATUS AND METHOD FOR INPUT/OUTPUT PROCESSING OF MULTI-THREAD - Provided is an apparatus that sets a limit on the number of execution threads which can be simultaneously processed in an input/output system, compares the number of currently executing threads with that limit when a thread requests an input/output event, and manages the processing of the input/output event according to the comparison result. The apparatus for asynchronous input/output processing of a multi-thread according to the present invention restricts the number of threads processed in the asynchronous input/output system to the limit of execution threads, preventing the performance degradation caused by thread context-switching overhead and efficiently managing the threads.02-24-2011
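The compare-against-the-limit step described above can be sketched in Python; `BoundedIOExecutor` and its method names are illustrative, not from the patent:

```python
import threading

class BoundedIOExecutor:
    """Sketch of the execution-thread limit: an I/O event runs immediately
    if the count of running threads is below the limit; otherwise it is
    queued and handled when a running thread finishes."""

    def __init__(self, limit):
        self.limit = limit
        self.running = 0
        self.pending = []            # queued I/O events
        self.lock = threading.Lock()

    def submit(self, event):
        with self.lock:
            if self.running < self.limit:
                self.running += 1
                return "run"         # caller dispatches the event now
            self.pending.append(event)
            return "queued"

    def done(self):
        """Called when a thread finishes; a pending event reuses the slot."""
        with self.lock:
            if self.pending:
                return self.pending.pop(0)
            self.running -= 1
            return None
```

Capping concurrency this way is what avoids the context-switching overhead the abstract refers to: excess events wait in a list instead of each occupying a live thread.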
20110055844HIGH DENSITY MULTI NODE COMPUTER WITH INTEGRATED SHARED RESOURCES - A multi-node computer system, comprising: a plurality of nodes, a system control unit and a carrier board. Each node of the plurality of nodes comprises a processor and a memory. The system control unit is responsible for: power management, cooling, workload provisioning, native storage servicing, and I/O. The carrier board comprises a system fabric and a plurality of electrical connections. The electrical connections provide the plurality of nodes with power, management controls, system connectivity between the system control unit and the plurality of nodes, and an external network connection to a user infrastructure. The system control unit and the carrier board provide integrated, shared resources for the plurality of nodes. The multi-node computer system is provided in a single enclosure.03-03-2011
20080250419METHOD AND SYSTEM FOR MANAGING RESOURCE CONNECTIONS - Methods and system for managing resource connections are described. In one embodiment, a user request associated with a centralized resource may be received. Availability of a connection to the centralized resource may be determined. A stagger delay for connection creation may be determined. The stagger delay may define a delay for creation of a new connection. The new connection to the centralized resource may be created based on the determining of whether the connection to the centralized resource is available and the delay interval. The new connection may be utilized to process the user request.10-09-2008
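The stagger-delay rule described above can be sketched in Python; the function name and the caller-managed timestamp are assumptions for illustration:

```python
import time

def create_connection(last_created, stagger_delay, now=None):
    """Sketch of stagger-delayed connection creation: a new connection to
    the centralized resource is opened only if at least `stagger_delay`
    seconds have passed since the last one, spreading out connection storms.

    Returns (created, timestamp_of_last_creation)."""
    now = time.monotonic() if now is None else now
    if last_created is None or now - last_created >= stagger_delay:
        return True, now             # create the connection, record the time
    return False, last_created       # too soon: wait or reuse an existing one
```

A caller would thread the returned timestamp back into the next call, so that bursts of user requests produce connections spaced at least one stagger delay apart.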
20080250417Application Management Support System and Method - A first information resource denoting which logical volume is allocated to which application program is prepared in a management computer. The management computer either regularly or irregularly acquires from the storage system information as to which logical volumes were updated at what times, registers same in a second information resource, references the first and second information resources, acquires update management information, which is information denoting which logical volume is updated at what time, and the application program to which this logical volume is allocated, and sends this update management information to a host computer. The host computer, based on the update management information from the management computer, displays which logical volume has been updated at what time, and which application program is allocated to this logical volume.10-09-2008
20080250420Jobstream Planner Considering Network Contention & Resource Availability - Disclosed is a computer-implemented planning process that aids a system administrator in the task of creating a job schedule. The process treats enterprise computing resources as a grid of resources, which provides greater flexibility in assigning resources to jobs. During the planning process, an administrator or other user, or software, builds a job-dependency tree. Jobs are then ranked according to priority, pickiness, and network centricity. Difficult and problematic jobs are then assigned resources and scheduled first, with less difficult jobs assigned resources and scheduled afterwards. The resources assigned to the most problematic jobs are then changed iteratively to determine if the plan improves. This iterative approach not only increases the efficiency of the original job schedule, but also allows the planning process to react and adapt to new, ad-hoc jobs, as well as unexpected interruptions in resource availability.10-09-2008
20080250418HEALTH CARE ADMINISTRATION SYSTEM - Embodiments of the present invention provide systems and methods for managing an event in a health care organization, the method comprising standardizing, during a design phase, a workflow associated with an event; and executing, during an executing phase, the workflow to complete a procedure associated with the event. Other embodiments may be described and claimed.10-09-2008
20110023046MITIGATING RESOURCE USAGE DURING VIRTUAL STORAGE REPLICATION - Systems and methods of mitigating resource usage during virtual storage replication are disclosed. An exemplary method comprises detecting quality of a link between virtual storage libraries used for replicating data. The method also comprises determining a number of concurrent jobs needed to saturate the link. The method also comprises dynamically adjusting the number of concurrent jobs to saturate the link and thereby mitigate resource usage during virtual storage replication.01-27-2011
20110265093Computer System and Program Product - A computer system includes a plurality of processors, a shared resource being used by the processors, and a storage unit in which management information corresponding to the shared resource is stored. The management information includes a semaphore for each OS managing a task which runs on the processors, a queue in which information for specifying a processor which has requested acquisition of the shared resource is stored in series, and a resource counter indicating a remaining number of the shared resources which can be acquired. Each of the processors includes a counter obtaining section that obtains a value of the resource counter, an acquisition decision-making section that makes a decision as to whether or not the shared resource can be acquired, and a resource acquiring section that stores information for specifying the processor in the queue if it is decided that the shared resource cannot be acquired.10-27-2011
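The counter-plus-queue management information described above can be sketched in Python; the `SharedResource` class and processor-id strings are illustrative assumptions:

```python
from collections import deque

class SharedResource:
    """Sketch of the management information: a resource counter of remaining
    acquirable instances, plus a queue of processors waiting in request
    order."""

    def __init__(self, count):
        self.counter = count         # remaining acquirable resources
        self.queue = deque()         # waiting processors, FIFO

    def acquire(self, proc_id):
        if self.counter > 0 and not self.queue:
            self.counter -= 1
            return True
        self.queue.append(proc_id)   # record the requester and wait
        return False

    def release(self):
        """On release, hand the resource to the next waiter if any,
        otherwise return it to the counter."""
        if self.queue:
            return self.queue.popleft()
        self.counter += 1
        return None
```

Passing a released instance directly to the head of the queue, rather than through the counter, preserves the request order the abstract's "stored in series" queue implies.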
20110265092PARALLEL COMPUTER SYSTEM, JOB SERVER, JOB SCHEDULING METHOD AND JOB SCHEDULING PROGRAM - A parallel computer system comprising a node group having numbers of nodes connected by a network, in which a job scheduler of a job server that schedules jobs to be executed by a node of the node group comprises a temperature calculating unit which with a node being used of the node group as an imaginary heat source and with the assumption that a quantity of heat is conducted from the heat source to a surrounding node, calculates a temperature of a surrounding free node based on a distance from the heat source, a free region extracting unit which selects, from a plurality of temperature groups obtained by grouping free nodes on a certain temperature range basis, a temperature group meeting the number of free nodes required by a job according to a temperature and takes out a lowest temperature free node from the selected temperature group as a center node, and a node selecting unit which sequentially selects the necessary number of free nodes starting with a shortest distance free node centered around the center node.10-27-2011
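The imaginary-heat scheduling idea above can be sketched in Python. This is a loose interpretation under stated assumptions: Manhattan distance on a 2-D mesh, a simple inverse-distance heat model, and fixed-width temperature bands; none of these specifics come from the patent:

```python
def node_temperatures(busy, free, heat=100.0):
    """Sketch of the imaginary-heat model: each busy node is a heat source
    whose contribution falls off with mesh distance, so free nodes far from
    running jobs come out coolest."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return {f: sum(heat / (1 + dist(f, b)) for b in busy) for f in free}

def pick_center(busy, free, needed, band=10.0):
    """Group free nodes into temperature bands, choose the coolest band
    with enough nodes, and return its coolest node as the allocation
    center; None if no band is large enough."""
    temps = node_temperatures(busy, free)
    bands = {}
    for node, t in temps.items():
        bands.setdefault(int(t // band), []).append(node)
    for key in sorted(bands):                 # coolest band first
        if len(bands[key]) >= needed:
            return min(bands[key], key=lambda n: temps[n])
    return None
```

The remaining nodes of the job would then be picked outward from the center node by increasing distance, per the node selecting unit in the abstract.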
20100293551Job scheduling apparatus and job scheduling method - When allocating an unallocated queued job, by using a CDA having a mesh structure to which active jobs are allocated, a job scheduling apparatus scans an event list that includes information about allocation events and release events for jobs, determines the coordinates and the time at which submeshes corresponding to the queued jobs are reserved, and arranges the submeshes by overlapping them on the CDA.11-18-2010
20100293550SYSTEM AND METHOD PROVIDING FOR RESOURCE EXCLUSIVITY GUARANTEES IN A NETWORK OF MULTIFUNCTIONAL DEVICES WITH PREEMPTIVE SCHEDULING CAPABILITIES - A system and method for enabling automated task preemption, including a plurality of multifunctional devices having a plurality of functional capabilities; and a processing module configured to: (i) separate the tasks requiring the plurality of functional capabilities into the tasks requiring a first category of capabilities and the tasks requiring a second category of capabilities, where the tasks requiring the first category of capabilities has a higher processing priority than the tasks requiring the second category of capabilities; and (ii) selectively process the tasks requiring the first category of capabilities before the tasks requiring the second category of capabilities regardless of arrival times of the tasks requiring the plurality of capabilities; wherein the tasks requiring the second category of capabilities that are preempted by the tasks requiring the first category of capabilities are rescheduled to be completed within a predetermined time period of completion.11-18-2010
20100293549System to Improve Cluster Machine Processing and Associated Methods - A system to improve cluster machine processing that may include a plurality of interconnected computers that process data as one if necessary, and at least one other plurality of interconnected computers that process data as one if necessary. The system may also include a central manager to control what data processing is performed on a shared processing job performed by the plurality of interconnected computers and the at least one other plurality of interconnected computers. Each of the plurality of interconnected computers runs parallel jobs scheduled by a local backfill scheduler. In order to schedule a cluster spanning parallel job, the local schedulers cooperate on placement and timing of the cluster spanning job, using existing backfill rules in order not to disturb the local job streams.11-18-2010
20110138393Thread Allocation and Clock Cycle Adjustment in an Interleaved Multi-Threaded Processor - Methods, apparatuses, and computer-readable storage media are disclosed for reducing power by reducing hardware-thread toggling in a multi-threaded processor. In a particular embodiment, a method allocates software threads to hardware threads. A number of software threads to be allocated is identified. It is determined when the number of software threads is less than a number of hardware threads. When the number of software threads is less than the number of hardware threads, at least two of the software threads are allocated to non-sequential hardware threads. A clock signal to be applied to the hardware threads is adjusted responsive to the non-sequential hardware threads allocated.06-09-2011
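The non-sequential allocation rule above can be sketched in Python; the even-stride placement is one plausible policy, not necessarily the patent's exact mapping:

```python
def allocate_threads(num_software, num_hardware):
    """Sketch of non-sequential allocation: when there are fewer software
    threads than hardware threads, spread them across non-adjacent hardware
    threads so that hardware-thread toggling (and hence power) can be
    reduced by adjusting the clock for idle threads."""
    if num_software >= num_hardware:
        return list(range(num_software))       # sequential fallback
    stride = num_hardware // num_software
    return [i * stride for i in range(num_software)]
```

With the software threads pinned to, say, hardware threads 0 and 2 of 4, the clock applied to the untouched threads 1 and 3 can be gated for longer stretches than under a sequential 0, 1 packing.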
20110138394Service Oriented Collaboration - When a service is requested at a platform in a collaborative services environment, a service orchestration engine accesses a service definition from a repository and schedules a number of tasks at a number of end points in accordance with a number of end point profiles and a number of policies associated with the end points.06-09-2011
20100115527METHOD AND SYSTEM FOR PARALLELIZATION OF PIPELINED COMPUTATIONS - A method is provided for parallelizing a pipeline whose stages operate on a sequence of work items. The method includes allocating an amount of work for each work item, assigning at least one stage to each work item, partitioning the at least one stage into at least one team, partitioning the at least one team into at least one gang, and assigning the at least one team and the at least one gang to at least one processor. Processors, gangs, and teams are juxtaposed near one another to minimize communication losses.05-06-2010
20100115528Software Defined Radio - A method for providing a division of SDR RA into operational states is described. The method includes, in a device including a plurality of shared device resources and a plurality of RAs, receiving, from a first RA, a request to change a state of the first RA to a requested active state. The requested active state is one of a plurality of potential active states for the first RA and each potential active state has an associated set of device resource requirements. The method also includes determining whether sufficient device resources exist for the requested active state based at least in part on currently allocated device resources. In response to a determination that sufficient device resources exist, the change to the requested active state for the first RA is approved. Apparatus and computer readable media are also described.05-06-2010
20100115526METHOD AND APPARATUS FOR ALLOCATING RESOURCES IN A COMPUTE FARM - Some embodiments provide a system for allocating resources in a compute farm. During operation, the system can receive resource-requirement information for a project. Next, the system can receive a request to execute a new job in the compute farm. In response to determining that no job slots are available for executing the new job, and that the project associated with the new job has not used up its allocated job slots, the system may execute the new job by suspending or re-queuing a job that is currently executing, and allocating the freed-up job slot to the new job. If the system receives a resource-intensive job, the system may create dummy jobs, and schedule the dummy jobs on the same computer system as the resource-intensive job to prevent the queuing system from scheduling multiple resource-intensive jobs on the same computer system.05-06-2010
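The admission decision described above can be sketched in Python; `admit_job` and its three outcomes are illustrative names for the behavior in the abstract:

```python
def admit_job(project, free_slots, used, quota):
    """Sketch of the compute-farm admission rule: run the new job if a slot
    is free; otherwise, if the project is still under its allocated job
    slots, free a slot by suspending or re-queuing a running job; otherwise
    the new job waits."""
    if free_slots > 0:
        return "run"
    if used.get(project, 0) < quota.get(project, 0):
        return "suspend-and-run"     # preempt a running job, take its slot
    return "queue"
```

The dummy-job trick from the same abstract is complementary: scheduling placeholder jobs alongside a resource-intensive one fills the remaining slots of that machine, so the queuing system cannot co-locate a second heavy job.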
20110088040Namespace Merger - In a virtualization environment, there is often a need for an application to access different resources (e.g., files, configuration settings, etc.) on a computer by name. The needed resources can potentially come from any one of a plurality of discrete namespaces or containers of resources on the computer. A resource name can identify one resource in one namespace and another resource in another namespace, and the namespaces may have different precedence relative to one another. The resources needed by the application can be accessed by enumerating names in a logical merger of the namespaces such that as new names in the logical merger are needed they are dynamically chosen from among the namespaces. When two resources in different namespaces have a same name, the resource in the higher precedence namespace can be chosen.04-14-2011
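The precedence-ordered merger described above can be sketched in Python, modeling each namespace as a dict; the function names are illustrative:

```python
def merged_lookup(name, namespaces):
    """Sketch of the logical merger: `namespaces` is ordered highest
    precedence first; a name resolves to the first namespace containing it,
    so higher-precedence resources shadow lower ones."""
    for ns in namespaces:
        if name in ns:
            return ns[name]
    raise KeyError(name)

def merged_names(namespaces):
    """Enumerate the union of names across all namespaces, each name once,
    as the dynamic enumeration in the abstract requires."""
    seen, out = set(), []
    for ns in namespaces:
        for name in ns:
            if name not in seen:
                seen.add(name)
                out.append(name)
    return out
```

This captures the abstract's key case: when two namespaces both define a name, the lookup silently picks the higher-precedence resource while enumeration still lists the name only once.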
20110088039Power Monitoring and Control in Cloud Based Computer - According to one general aspect, a method for displaying the system resource usage of a computer may include identifying the number of open tabs in one or more tab-based browsers running on the computer. The method may include determining the system resource usage of each tab. The method may further include displaying the system resource usage of each tab in a system resource meter.04-14-2011
20110088038Multicore Runtime Management Using Process Affinity Graphs - Technologies are generally described for runtime management of processes on multicore processing systems using process affinity graphs. Two or more processes may be determined to be related when the processes share interprocess messaging traffic. These related processes may be allocated to neighboring or nearby processor cores within a multicore processor using graph theory techniques as well as communication analysis techniques to evaluate interprocess communication needs. Process affinity graphs may be established to aid in determining grouping of processors and evaluating interprocess message traffic between groups of processes. The process affinity graphs may be based upon process affinity scores determined by monitoring and analyzing interprocess messaging traffic. Process affinity graphs may further inform splitting process affinity groups from one core onto two or more cores.04-14-2011
20110179422Shared Resource Management - Systems, methods, apparatus, and computer program products are provided for monitoring and allocating shared resources. For example, in one embodiment, the status of resource dependent entities is continuously monitored to determine the current use of a shared resource. When a resource dependent entity requires use of the shared resource, a (a) request for use of the shared resource can be generated and (b) determination can be made as to whether any of the current allocations of the shared resource can be released for use by the resource dependent entity.07-21-2011
20100223619VISUALIZATION-CENTRIC PERFORMANCE-BASED VOLUME ALLOCATION - A method, system, and computer program product for visualization-centric performance-based volume allocation in a data storage system using a processor in communication with a memory device is provided. A unified resource graph representative of a global hierarchy of storage components in the data storage system, including each of a plurality of storage controllers, is generated. The unified resource graph includes a common root node and a plurality of subtree nodes corresponding to each of a plurality of nodes internal to the plurality of storage controllers. The common root node and the plurality of subtree nodes are ordered in a top-down orientation. Scalable volume provisioning of an existing or new workload amount by graphical manipulation of at least one of the storage components represented by the unified resource graph is performed based on an input.09-02-2010
20120151491Redistributing incomplete segments for processing tasks in distributed computing - A method or system for redistributing incomplete segments for processing tasks by generating a model based on resources of a plurality of separate electronic devices; simulating an assessment task to determine a computation time for the assessment task according to the model; updating the model to optimize the computation time based on a dynamic availability of the resources and a processing requirement of a live task; distributing task segments for processing the live task based on the updated model; and dynamically redistributing incomplete segments for processing the live task by further updating the model based on the dynamic availability of the resources.06-14-2012
20090300638MEMORY ALLOCATORS CORRESPONDING TO PROCESSOR RESOURCES - A memory allocator is provided for each processor resource in a process of a computer system. Each memory allocator includes a set of pages, a locally freed list of objects, and a remotely freed list of objects. Each memory allocator requests the pages from an operating system and allocates objects to all execution contexts executing on a corresponding processing resource. Each memory allocator attempts to allocate an object from the locally freed list before allocating an object from the remotely freed list or an allocated page.12-03-2009
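The allocation order implied by the abstract above can be sketched in Python; the class name is invented, and page management is reduced to a counter for illustration:

```python
class PerProcessorAllocator:
    """Sketch of a per-processor-resource allocator: objects come
    preferentially from the locally freed list, then the remotely freed
    list, and only then from a page obtained from the operating system."""

    def __init__(self):
        self.local_free = []     # freed by execution contexts on this processor
        self.remote_free = []    # freed by contexts on other processors
        self.next_obj = 0        # stands in for carving objects from pages

    def alloc(self):
        if self.local_free:
            return self.local_free.pop()
        if self.remote_free:
            return self.remote_free.pop()
        self.next_obj += 1       # "allocate from a page"
        return self.next_obj

    def free_local(self, obj):
        self.local_free.append(obj)

    def free_remote(self, obj):
        self.remote_free.append(obj)
```

Preferring the local list keeps recently freed objects on the processor that last touched them, which is the usual cache-locality rationale for this ordering.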
20090300637SCHEDULER INSTANCES IN A PROCESS - A runtime environment of a computer system is provided that creates first and second scheduler instances in a process. Each scheduler instance includes allocated processing resources and is assigned a set of tasks for execution. Each scheduler instance schedules tasks for execution using the allocated processing resources to perform the work of the process.12-03-2009
20090300640ALLOCATION IDENTIFICATION APPARATUS OF I/O PORTS, METHOD FOR IDENTIFYING ALLOCATION THEREOF AND INFORMATION PROCESSOR - An allocation identification apparatus of input/output ports of an information processor (PC) operated as two or more virtual information processors, includes input/output ports (I/O ports) allocated to the virtual information processors, an identification information generating part (a hyper visor) that identifies the virtual information processors to which the input/output ports of the information processor are assigned and that generates identification information thereof, and a display part that displays the identification information generated by the identification information generating part.12-03-2009
20090300636REGAINING CONTROL OF A PROCESSING RESOURCE THAT EXECUTES AN EXTERNAL EXECUTION CONTEXT - A scheduler in a process of a computer system allows an external execution context to execute on a processing resource allocated to the scheduler. The scheduler provides control of the processing resource to the external execution context. The scheduler registers for a notification of an exit event associated with the external execution context. In response to receiving the notification that the exit event has occurred, the scheduler regains control of the processing resource and causes a task associated with an execution context controlled by the scheduler to be executed by the processing resource.12-03-2009
20090293062Method for Dynamically Freeing Computer Resources - A method dynamically frees computer resources in a multitasking and windowing environment by activating a GUI widget to initiate pausing of an application, pausing CPU processing of the application code, maintaining data of the application in main memory, storing state information for the application code and a process of the application in mass storage, removing the application code from main memory to mass storage, when another application requires additional memory, activating another GUI widget to resume running of the application, restoring the state information for the code and the process to main memory before the application resumes running, and resuming the CPU processing of the application.11-26-2009
20090293064SYNCHRONIZING SHARED RESOURCES IN AN ORDER PROCESSING ENVIRONMENT USING A SYNCHRONIZATION COMPONENT - An order processing system including an order processing container, a factory registry, a relationship registry, and synchronization function component. The order processing system can handle orders, which are build plans including a set of tasks. The tasks can specify programmatic actions which may include creation, deletion, and modification of resources and resource topologies. The order processing container can be central engine that programmatically drives order processing actions. The factory registry can support a creation and deletion of resource instances in a resource topology defined by at least one order. The relationship registry can maintain relationships among resources. The synchronization function component can permit transparent usage of shared resources in accordance with shared usage resource topology parameters specified within processed orders.11-26-2009
20090293063MINIMIZATION OF READ RESPONSE TIME - A method, system and computer program product for minimizing read response time in a storage subsystem including a plurality of resources is provided. A middle logical block address (LBA) is calculated for a read request. A preferred resource of the plurality of resources is determined by calculating a minimum seek time based on a closest position to a last position of a head at each resource of the plurality of resources, estimated from the middle LBA. The read request is directed to at least one of the preferred resource or an alternative resource.11-26-2009
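The middle-LBA selection rule above can be sketched in Python, approximating seek time by the distance from each head's last position to the middle LBA; the function name and linear-distance model are assumptions:

```python
def pick_resource(read_start_lba, read_end_lba, head_positions):
    """Sketch of the preferred-resource choice: compute the middle LBA of
    the read request, then pick the resource whose last head position is
    closest to it (minimum estimated seek)."""
    middle = (read_start_lba + read_end_lba) // 2
    return min(range(len(head_positions)),
               key=lambda i: abs(head_positions[i] - middle))
```

If the preferred resource is busy, the abstract allows falling back to an alternative resource, which would simply be the next-closest index in this model.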
20110154353Demand-Driven Workload Scheduling Optimization on Shared Computing Resources - Systems and methods implementing a demand-driven workload scheduling optimization of shared resources used to execute tasks submitted to a computer system are disclosed. Some embodiments include a method for demand-driven computer system resource optimization that includes receiving a request to execute a task (said request including the task's required execution time and resource requirements), selecting a prospective execution schedule meeting the required execution time and a computer system resource meeting the resource requirement, determining (in response to the request) a task execution price for using the computer system resource according to the prospective execution schedule, and scheduling the task to execute using the computer system resource according to the prospective execution schedule if the price is accepted. The price varies as a function of availability of the computer system resource at times corresponding to the prospective execution schedule, said availability being measured at the time the price is determined.06-23-2011
20110154351Tunable Error Resilience Computing - An attribute of a descriptor associated with a task informs a runtime environment of which instructions a processor is to run to schedule a plurality of resources for completion of the task in accordance with a level of quality of service in a service level agreement.06-23-2011
20100031265Method and System for Implementing Realtime Spinlocks - A system and method for receiving a request from a requester for access to a computing resource, instructing the requester to wait for access to the resource when the resource is unavailable and allowing the requester to perform other tasks while waiting, determining whether the requester is available when the resource subsequently becomes available, and granting access to the resource by the requester if the requester is available.02-04-2010
20100023949SYSTEM AND METHOD FOR PROVIDING ADVANCED RESERVATIONS IN A COMPUTE ENVIRONMENT - A system and method are disclosed for dynamically reserving resources within a cluster environment. The method embodiment of the invention comprises receiving a request for resources in the cluster environment, monitoring events after receiving the request for resources and based on the monitored events, dynamically modifying at least one of the request for resources and the cluster environment.01-28-2010
20100023948ALLOCATING RESOURCES IN A MULTICORE ENVIRONMENT - In a multicore programming environment comprising a plurality of processors in a plurality of categories, and having predetermined communication resources of different types for interconnecting the processors, resources are allocated by: receiving a plurality of software processes, each process having a connection requirement; receiving an allocation scheme, in which each of the software processes is allocated to a respective processor of the plurality of processors; determining a plurality of communication requirements based on the connection requirements and the processors to which each process is allocated; and for each of the communication requirements: determining the respective processors to which the associated processes have been assigned; and allocating a communications resource of a type that is suitable based on the categories of said respective processors, such that the total allocated communications resource does not exceed the predetermined communication resources.01-28-2010
20100023947SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR RESOURCE COLLABORATION OPTIMIZATION - A method including receiving a plurality of roles in a data processing system and adding a part-time resource to at least one role. The method also includes determining, in the data processing system, if a project duration has changed as a result of adding the part-time resource, and if the project duration has changed, repeating the process at the adding step. The method also includes storing results corresponding to the resources assigned to roles. There is also a similar data processing system and machine-usable medium.01-28-2010
20080301690Model-based planning with multi-capacity resources - Systems and methods are described that facilitate performing model-based planning techniques for allocations of multi-capacity resources in a machine. The machine may be, for instance, a printing platform, such as a xerographic machine. According to various features, the multi-capacity resource may be a sheet buffer, and temporal constraints may be utilized to determine whether an insertion point for a new allocation of the sheet buffer is feasible. Multiple insertion points may be evaluated (e.g., serially or in parallel) to facilitate determining an optimal solution for a print job or the like.12-04-2008
20090064163Mechanisms for Creation/Deletion of Linear Block Address Table Entries for Direct I/O - The present invention provides mechanisms that enable application instances to pass block mode storage requests directly to a physical I/O adapter without run-time involvement from the local operating system or hypervisor. In one aspect of the present invention, a mechanism is provided for handling user space creation and deletion operations for creating and deleting allocations of linear block addresses of a physical storage device to application instances. For creation, it is determined if there are sufficient available resources for creation of the allocation. For deletion, it is determined if there are any I/O transactions active on the allocation before performing the deletion. Allocation may be performed only if there are sufficient available resources and deletion may be performed only if there are no active I/O transactions on the allocation being deleted.03-05-2009
20110307902ASSIGNING TASKS IN A DISTRIBUTED SYSTEM - A method and apparatus are provided for assigning tasks in a distributed system. The method comprises indicating to one or more remote systems in the distributed system that a task is available for processing based on a list identifying the one or more remote systems. The method further comprises receiving at least one response from the one or more remote systems capable of performing the task based on the indication. The method comprises allowing at least one of the remote systems to perform the task based on the at least one received response.12-15-2011
20110307901SYSTEM AND METHOD FOR INTEGRATING CAPACITY PLANNING AND WORKLOAD MANAGEMENT - A system for integrating resource capacity planning and workload management, implemented as programming on a suitable computing device, includes a simulation module that receives data related to execution of the workloads, resource types, numbers, and capacities, and generates one or more possible resource configuration options; a modeling module that receives the resource configuration options and determines, based on one or more specified criteria, one or more projected resource allocations among the workloads; and a communications module that receives the projected resource allocations and presents the projected resource allocations for review by a user.12-15-2011
20110307900CHANGING STREAMING MEDIA QUALITY LEVEL BASED ON CURRENT DEVICE RESOURCE USAGE - Streaming media is received from a source system. A current overall resource usage of a resource of the device (such as a CPU or memory of the device) is obtained. A check is made as to whether the current overall resource usage exceeds a resource threshold value. If the current overall resource usage exceeds the resource threshold value, then an indication is provided to the source system to reduce a quality level of the streaming media. The streaming media is received from the source system at the reduced quality level until there is sufficient resource capacity at the device to increase the quality level.12-15-2011
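The threshold check described in the entry above can be sketched as a small decision function. The names and the `headroom` hysteresis band are assumptions; the abstract only specifies a threshold test for reducing quality and a later increase once capacity is sufficient.

```python
def next_quality_action(current_usage, threshold, headroom):
    """Decide whether to ask the source to reduce, increase, or keep the
    streaming quality level, based on current overall resource usage
    (all values are fractions of total capacity)."""
    if current_usage > threshold:
        return "reduce"        # usage over threshold: request lower quality
    if current_usage < threshold - headroom:
        return "increase"      # sufficient spare capacity: raise quality again
    return "keep"
```

The hysteresis band keeps the device from oscillating between quality levels when usage hovers near the threshold.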
20110307899COMPUTING CLUSTER PERFORMANCE SIMULATION USING A GENETIC ALGORITHM SOLUTION - Illustrated is a system and method that includes identifying a search space based upon available resources, the search space to be used to satisfy a resource request. The system and method also includes selecting from the search space an initial candidate set, each candidate of the candidate set representing a potential resource allocation to satisfy the resource request. The system and method further includes assigning a fitness score, based upon a predicted performance, to each member of the candidate set. The system and method also includes transforming the candidate set into a fittest candidate set, the fittest candidate set having a best predicted performance to satisfy the resource request.12-15-2011
20110307898METHOD AND APPARATUS FOR EFFICIENTLY DISTRIBUTING HARDWARE RESOURCE REQUESTS TO HARDWARE RESOURCE OFFERS - A method and an apparatus provide for efficiently distributing hardware resource requests to hardware resource offers. Applying the method and apparatus, an allocation of hardware resources is possible in a highly efficient and effective way. Therefore, a system architecture is introduced, which provides components for determining negotiation approaches as well as splitting complex allocation problems into single and independent allocation problems. The method and apparatus find application in a variety of technical domains and especially in the domain of hardware resource allocation as well as agent technology.12-15-2011
20090172692Enterprise Resource Planning with Asynchronous Notifications of Background Processing Events - Methods, systems, and computer program products for operating an enterprise resource planning system. The method includes running a placeholder job in said enterprise resource planning system in response to a request from at least one client application for notification of at least one background processing event, wherein the placeholder job is executed in response to the at least one background processing event.07-02-2009
20090172691STREAMING OPERATIONS FOR WORKFLOW PROCESS MODELS - A buffer may be configured to store a plurality of items, and to be accessed by one or more activities of an instance of a process model. A scheduler may be configured to schedule execution of each of a plurality of activities of the process model, and to determine an activation of an activity of the plurality of activities. The scheduler may include an activity manager configured to access an activity profile of the activity upon the determining of the activation, the activity profile including buffer access characteristics according to which the activity is designed to access the buffer. A process execution unit may be configured to execute the activity and may include a buffer access manager configured to access the buffer according to the buffer access characteristics of the activity profile, and to thereby facilitate an exchange of at least one item between the buffer and the activity.07-02-2009
20090172690System and Method for supporting metered clients with manycore - In some embodiments, the invention involves partitioning resources of a manycore platform for simultaneous use by multiple clients, or adding/reducing capacity to a single client. Cores and resources are activated and assigned to a client environment by reprogramming the cores' route tables and source address decoders. Memory and I/O devices are partitioned and securely assigned to a core and/or a client environment. Instructions regarding allocation or reallocation of resources is received by an out-of-band processor having privileges to reprogram the chipsets and cores. Other embodiments are described and claimed.07-02-2009
20090172688MANAGING EXECUTION WITHIN A COMPUTING ENVIRONMENT - The projected effect of executing a proposed action on the computing environment is determined. Based on the projected effect, programmatic enforcement of whether the action is allowed to execute or not is provided. The action is selected based on the current status of the environment.07-02-2009
20090172687MANAGEMENT OF COMPUTER EVENTS IN A COMPUTER ENVIRONMENT - The scope and impact of an event, such as a failure, are identified. A Containment Region is used to identify the resources affected by the event. It is also used to aggregate resource state for those resources. This information is then used to manage one or more aspects of a customer's environment. This management may include recovery from a failure.07-02-2009
20120042321DYNAMICALLY ALLOCATING META-DATA REPOSITORY RESOURCES - The apparatus for dynamically allocating resources used in a meta-data repository includes a tracking module to track resources allocated to a meta-data repository, the meta-data repository comprising a repository that stores meta-data related to a computer system. An adjustment evaluation module evaluates repository usage of the resources allocated to the meta-data repository and ascertains whether a resource adjustment is desirable. An adjustment determination module determines desirable adjustments to the resources available to the meta-data repository. An allocation module adjusts resources allocated to the meta-data repository in accordance with the adjustment determination module. Adjusting resources includes changing a number of strings allocated to handle concurrent meta-data repository I/O requests.02-16-2012
20120042320SYSTEM AND METHOD FOR DYNAMIC RESCHEDULING OF MULTIPLE VARYING RESOURCES WITH USER SOCIAL MAPPING - A system and method for scheduling resources includes a memory storage device having a resource data structure stored therein which is configured to store a collection of available resources, time slots for employing the resources, dependencies between the available resources and social map information. A processing system is configured to set up a communication channel between users, between a resource owner and a user or between resource owners to schedule users in the time slots for the available resources. The processing system employs social mapping information of the users or owners to assist in filtering the users and owners and initiating negotiations for the available resources.02-16-2012
20120042319Scheduling Parallel Data Tasks - A method for allocating parallel, independent, data tasks includes receiving data tasks, each of the data tasks having a penalty function, determining a generic ordering of the data tasks according to the penalty functions, wherein the generic ordering includes solving an aggregate objective function of the penalty functions, the method further including determining a schedule of the data tasks given the generic ordering, which packs the data tasks to be performed.02-16-2012
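The entry above leaves the penalty functions and the aggregate objective abstract. As a hedged illustration, one classical special case: linear penalties of weight `w_i` times completion time on a single machine, where ordering by the ratio `w_i / p_i` (Smith's rule) minimizes the aggregate penalty.

```python
def order_tasks(tasks):
    """Greedy ordering for linear penalties: with penalty w * completion
    time and processing time p, sorting by w / p (Smith's rule)
    minimizes the aggregate objective on a single machine."""
    return sorted(tasks, key=lambda t: t["weight"] / t["time"], reverse=True)

def total_penalty(ordered):
    """Aggregate objective: sum of weight * completion time."""
    done, total = 0, 0
    for t in ordered:
        done += t["time"]
        total += t["weight"] * done
    return total
```

With tasks (w=1, p=3) and (w=4, p=2), running the heavier task first gives total penalty 4*2 + 1*5 = 13 versus 1*3 + 4*5 = 23 the other way around.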
20120210328Guarded, Multi-Metric Resource Control for Safe and Efficient Microprocessor Management - A mechanism is provided for guarded, multi-metric resource control. Monitoring is performed for an intended action, issued by a resource manager of a plurality of resource managers in the data processing system, to address a negative condition. Responsive to receiving the intended action, a determination is made as to whether the intended action will cause an additional negative condition within the data processing system. Responsive to determining that the intended action will cause the additional negative condition within the data processing system, at least one alternative action is identified to be implemented in the data processing system that addresses the negative condition while not causing any additional negative condition. The at least one alternative action is then implemented in the data processing system.08-16-2012
20120210330Executing A Distributed Java Application On A Plurality Of Compute Nodes - Methods, systems, and products are disclosed for executing a distributed Java application on a plurality of compute nodes. The Java application includes a plurality of jobs distributed among the plurality of compute nodes. The plurality of compute nodes are connected together for data communications through a data communication network. Each of the plurality of compute nodes has installed upon it a Java Virtual Machine (‘JVM’) capable of supporting at least one job of the Java application. Executing a distributed Java application on a plurality of compute nodes includes: tracking, by an application manager, a just-in-time (‘JIT’) compilation history for the JVMs installed on the plurality of compute nodes; and configuring, by the application manager, the plurality of jobs for execution on the plurality of compute nodes in dependence upon the JIT compilation history for the JVMs installed on the plurality of compute nodes.08-16-2012
20120210331PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization.08-16-2012
20120210329STORAGE SYSTEM AND METHOD FOR CONTROLLING THE SAME - Optimum load distribution processing is selected and executed based on settings made by a user in consideration of load changes caused by load distribution in a plurality of asymmetric cores, by using: a controller having a plurality of cores, and configured to extract, for each LU, a pattern showing the relationship between a core having an LU ownership and a candidate core as an LU ownership change destination based on LU ownership management information; to measure, for each LU, the usage of a plurality of resources; to predict, for each LU based on the measurement results, a change in the usage of the plurality of resources and overhead to be generated by transfer processing itself; to select, based on the respective prediction results, a pattern that matches the user's setting information; and to transfer the LU ownership to the core belonging to the selected pattern.08-16-2012
20120047512METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR SELECTING A RESOURCE BASED ON A MEASURE OF A PROCESSING COST - Methods and systems are described for selecting a resource based on a measure of a processing cost. Resource information is received identifying a first resource and a second resource for processing by a program component. One or more of a first measure of a specified processing cost for the processing of the first resource and a second measure of the processing cost for the processing of the second resource is determined. One of the first resource and the second resource is selected based on at least one of the first measure and the second measure. The selected one of the first resource and the second resource is identified to the program component for processing.02-23-2012
20130014121METHOD AND SYSTEM FOR COMMUNICATING BETWEEN ISOLATION ENVIRONMENTS - A method and system for aggregating installation scopes within an isolation environment, where the method includes first defining an isolation environment for encompassing an aggregation of installation scopes. Associations are created between a first application and a first installation scope. When the first application requires the presence of a second application within the isolation environment for proper execution, an image of the required second application is mounted onto a second installation scope and an association between the second application and the second installation scope is created. Another association is created between the first installation scope and the second installation scope, and this third association is created within a third installation scope. Each of the first, second, and third installation scopes is stored and the first application is launched into the defined isolation environment.01-10-2013
20130014120Fair Software Locking Across a Non-Coherent Interconnect - Access to a shared resource by a plurality of execution units is organized and controlled by issuing tickets to each execution unit as they request access to the resource. The tickets are issued by a hardware atomic unit so that each execution unit receives a unique ticket number. A current owner field indicates the ticket number of the execution unit that currently has access to the shared resource. When an execution unit has completed its access, it releases the shared resource and increments the owner field. Execution units awaiting access to the shared resource periodically check the current value of the owner field and take control of the shared resource when their respective ticket values match the owner field.01-10-2013
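The ticket scheme described in the entry above can be sketched in a few lines. This is a single-process illustration only: the hardware atomic unit is emulated with a mutex, and the busy-wait loop stands in for the periodic owner-field check across the non-coherent interconnect.

```python
import threading

class TicketLock:
    """Minimal ticket-lock sketch: a fetch-and-increment hands out unique
    ticket numbers; the owner field names the ticket currently granted
    access to the shared resource."""
    def __init__(self):
        self._next = 0
        self._owner = 0
        self._mutex = threading.Lock()   # stands in for a hardware atomic unit

    def acquire(self):
        with self._mutex:                # atomic fetch-and-increment
            ticket = self._next
            self._next += 1
        while self._owner != ticket:     # spin until our ticket is the owner
            pass
        return ticket

    def release(self):
        self._owner += 1                 # hand access to the next ticket holder
```

Because tickets are granted in issue order, the lock is fair: no execution unit can overtake another that requested access earlier.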
20120005685Information Processing Grid and Method for High Performance and Efficient Resource Utilization - System and method are proposed for intelligent assignment of submitted information processing jobs to computing resources in an information processing grid based upon real-time measurements of job behavior and predictive analysis of job throughput and computing resource consumption of the correspondingly generated workloads. The job throughput and computing resource utilization are measured and analyzed in multiple parametric dimensions. The analyzed workload may work with a job scheduling system to provide optimized job dispatchment to computing resources across the grid. Application of a parametric weighting system to the parametric dimensions makes the optimization system dynamic and flexible. Through adjustment of these parametric weights, the focus of the optimization can be adjusted dynamically to support the immediate operational goals of the system as a whole.01-05-2012
20090106764Support for globalization in test automation - Various technologies and techniques are disclosed for supporting globalization in user interface automation. A resource key is provided that contains at least three data elements. A resource type data element contains data representing a resource type, a resource location data element contains data representing a location to a resource file, and a resource identifier data element contains data representing a resource identifier. During a resource file extraction operation, the resource location data element is used to locate the resource file, and the resource type data element and the resource identifier data element are used to locate a resource within the resource file that matches the resource type and the resource identifier. A process is provided for resolving a full path name to a resource file. A process is provided for performing a post-extraction action on an extracted resource string.04-23-2009
20120047513WORK PROCESSING APPARATUS FOR SCHEDULING WORK, CONTROL APPARATUS FOR SCHEDULING ACTIVATION, AND WORK SCHEDULING METHOD IN A SYMMETRIC MULTI-PROCESSING ENVIRONMENT - A work scheduling technology in a symmetric multi-processing (SMP) environment is provided. A work scheduling function for an SMP environment is implemented in a work processing apparatus, thereby reducing scheduling overhead, enhancing the efficiency of CPU resource use, and improving CPU performance.02-23-2012
20120011517GENERATION OF OPERATIONAL POLICIES FOR MONITORING APPLICATIONS - Example embodiments relate to generation of operational policies for monitoring applications. In example embodiments, data generated based on decomposition of a Service Level Agreement (SLA) is received. Furthermore, in example embodiments, an operational policy is generated using the decomposition data. The operational policy may be used to control operation of a monitoring application.01-12-2012
20120011518SHARING WITH PERFORMANCE ISOLATION BETWEEN TENANTS IN A SOFTWARE-AS-A SERVICE SYSTEM - An apparatus hosting a multi-tenant software-as-a-service (SaaS) system maximizes the resource sharing capability of the SaaS system. The apparatus receives service requests from multiple users belonging to different tenants of the multi-tenant SaaS system. The apparatus partitions the resources in the SaaS system into different resource groups. Each resource group handles a category of the service requests. The apparatus estimates the costs of the service requests of the users. The apparatus dispatches service requests to resource groups according to the estimated costs, whereby the resources are shared among the users without the users impacting each other.01-12-2012
20120011516Method for the administration of resources - A method for the administration of resources, in which classes or instances, respectively, are assigned to the resources and a program receives a rule assigned to the class or instance, respectively, and applies it to the resource. It is made sure that only rules assigned to the class or instance, respectively, are applied on the resource. In alternative methods, only rules are applied on the resource, which were accepted by a verification rule assigned to the resource.01-12-2012
20120047511THROTTLING STORAGE INITIALIZATION FOR DATA DESTAGE - Method, system, and computer program product embodiments for throttling storage initialization for data destage in a computing storage environment are provided. An implicit throttling operation is performed by limiting a finite resource of a plurality of finite resources available to a background initialization process, the background initialization process adapted for performing the storage initialization ahead of a data destage request. If a predefined percentage of the plurality of finite resources is utilized, at least one of the plurality of finite resources is deferred to a foreground process that is triggered by the data destage request, the foreground process adapted to perform the storage initialization ahead of a data destage performed pursuant to the data destage request. An explicit throttling operation is performed by examining a snapshot of storage activity occurring outside the background initialization process.02-23-2012
20090070770Ordering Provisioning Request Execution Based on Service Level Agreement and Customer Entitlement - A solution provided here comprises receiving requests for a service from a plurality of customers, responding to the requests for a service, utilizing a shared infrastructure, and configuring the shared infrastructure, based on stored customer information. Another example of such a solution comprises: 03-12-2009
20120060165CLOUD PIPELINE - Cloud service providers are selected to perform a data processing job based on information about the cloud service providers and criteria of the job. A plan for a cloud pipeline for performing the job is designed based on the information about the cloud service providers. The plan comprises processing stages each of which indicates processing upon a subset of a data set of the job. Allocated resources of the set of cloud service providers are mapped to the processing stages. Instructions and software images based on the plan are generated. The instructions and the software images implement the cloud pipeline for performing the data processing job. The instructions and the software images are transmitted to machines of the cloud service providers. The machines and the performing of the job are monitored. If the monitoring detects a failure, then the cloud pipeline is adapted to the failure.03-08-2012
20120023503MANAGEMENT OF COMPUTING RESOURCES FOR APPLICATIONS - The subject matter of this disclosure can be implemented in, among other things, a method. In these examples, the method includes receiving a resource request message to obtain access to a computing resource, and storing the resource request message in a data repository that stores a collection of resource request messages received from a group of applications executing on the computing device. The method may also include responsive to determining that the resource request message received from the first application has a highest priority of the collection of resource request messages, determining whether a second application currently has access to the computing resource, issuing a resource lost message to the second application to indicate that the second application has lost access to the computing resource, and issuing a resource request granted message to the first application, such that the first application obtains access to the computing resource.01-26-2012
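The grant/preempt flow in the entry above can be sketched as a small arbiter. All names are hypothetical, the message log stands in for whatever IPC mechanism the real system would use, and the priority rule (higher number wins) is an assumption.

```python
class ResourceArbiter:
    """Sketch of the request repository: grants the resource to the request
    with the highest priority, notifying any current holder that it has
    lost access."""
    def __init__(self):
        self._holder = None    # (priority, app) currently granted, or None
        self.events = []       # message log standing in for issued messages

    def request(self, app, priority):
        if self._holder is None or priority > self._holder[0]:
            if self._holder is not None:
                # the lower-priority holder loses access to the resource
                self.events.append(("resource_lost", self._holder[1]))
            self._holder = (priority, app)
            self.events.append(("request_granted", app))
        else:
            # lower-priority request stays pending in the repository
            self.events.append(("request_queued", app))
```

A high-priority request preempts the current holder, while a lower-priority one simply waits.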
20120060170Method and scheduler in an operating system - Method and scheduler in an operating system, for scheduling processing resources on a multi-core chip. The multi-core chip comprises a plurality of processor cores. The operating system is configured to schedule processing resources to an application to be executed on the multi-core chip. The method comprises allocating a plurality of processor cores to the application. Also, the method comprises switching off any other processor core allocated to the application that is not executing the sequential portion, when a sequential portion of the application is executing on only one processor core. In addition, the method comprises increasing the frequency of the one processor core executing the sequential portion to a second, higher frequency, such that the processing speed is increased more than predicted by Amdahl's law.03-08-2012
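The claim about exceeding Amdahl's prediction can be checked with a short worked example: boosting the frequency of the one core running the sequential portion shrinks the serial term in Amdahl's formula. The `boost` factor is an assumption; the abstract does not quantify the frequency increase.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Classic Amdahl's-law speedup for a fixed workload."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def boosted_speedup(parallel_fraction, cores, boost):
    """Speedup when the single core running the sequential portion is
    clocked up by 'boost' while the other allocated cores are off."""
    return 1.0 / ((1.0 - parallel_fraction) / boost + parallel_fraction / cores)
```

With a 90% parallel workload on 4 cores, Amdahl's law predicts a speedup of 1/0.325, roughly 3.08; a 25% frequency boost during the sequential phase raises it to roughly 3.28.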
20120060169SYSTEMS AND METHODS FOR RESOURCE CONTROLLING - A resource controller that includes a first buffer configured to store requests of a first predefined category having a first priority. In addition, the resource controller includes at least a second buffer configured to store requests of a second predefined category having a second priority where the first priority is set such that processing requests of the first category has priority over processing the requests of the second category. Also, the resource controller includes a mechanism configured to block the requests of the first category when a predefined condition is met.03-08-2012
20120060168VIRTUALIZATION SYSTEM AND RESOURCE ALLOCATION METHOD THEREOF - A virtualization system for supporting at least two operating systems and resource allocation method of the virtualization system are provided. The method includes allocating resources to the operating systems, calculating, when one of the operating systems is running, workloads for each operating system, and adjusting resources allocated to the operating systems according to the calculated workloads. The present invention determines the workloads of a plurality of operating systems running in the virtualization system and allocates time resources dynamically according to the variation of the workloads.03-08-2012
20120060167METHOD AND SYSTEM OF SIMULATING A DATA CENTER - A system and method for optimizing the dynamic behavior of a multi-tier data center is described, wherein the data center is simulated along with the resources in the form of hardware and software and the transaction process workloads are simulated to test the resources or the transaction process. The system requires the client computing device and a backend server to have the capabilities to host simulated hardware, complex software applications platforms, and to perform large scale simulations using these resources. The method includes securing parameter inputs from the client that define the data center resources and the transaction process to be tested, generating various workload simulations, testing the simulations and provisioning the resources, thereby obtaining an optimized dynamic data center simulation of data center resources and the transaction processes.03-08-2012
20120060166Day management using an integrated calendar - A method and system for day management using an integrated calendar is disclosed. A user inputs the time available during a period and enters details of time-specific events to be performed during that period. The system fetches the information entered by the user and stores the details in the integrated calendar. Tasks to be performed are stored in a task list in order of priority as entered by the user. The system determines the free time available to the user for performing the tasks by subtracting the time allocated for the events from the time available. Further, tasks are allocated time schedules by assigning that free time to the duration of each task, in order of priority. If a task cannot be performed in a particular time slot within the time period, the task may be split into multiple smaller tasks performed at the different time slots that are available.03-08-2012
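The free-time computation and priority-ordered allocation described in this abstract can be sketched in a few lines. The following Python is an illustrative sketch only, not the patented implementation; all names (`Task`, `free_time`, `schedule_tasks`) are invented for the example. It subtracts event time from available time, then fills free slots with tasks in priority order, splitting a task across slots when one slot is too short:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int   # minutes of work still unscheduled
    priority: int   # lower value = higher priority

def free_time(available, event_durations):
    # Free minutes: time available minus time booked for time-specific events.
    return available - sum(event_durations)

def schedule_tasks(free_slots, tasks):
    # Fill free slots (start_minute, length) with tasks in priority order,
    # splitting a task across slots when a single slot is too short.
    plan = []
    queue = sorted(tasks, key=lambda t: t.priority)
    for start, length in free_slots:
        for t in queue:
            if length == 0:
                break
            if t.duration == 0:
                continue
            chunk = min(length, t.duration)   # split the task if needed
            plan.append((t.name, start, chunk))
            start += chunk
            length -= chunk
            t.duration -= chunk
    return plan
```

An 80-minute task given a 60-minute and a 30-minute slot is split into 60- and 20-minute pieces, leaving 10 minutes of the second slot for the next task in priority order.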
20130014119Resource Allocation Prioritization Based on Knowledge of User Intent and Process Independence - A method and system to improve performance of a computer system is disclosed. One aspect of certain embodiments includes selectively deallocating or allocating computer resources to a set of computer programs associated with the computer system.01-10-2013
20130014118SIMULTANEOUS SUBMISSION TO A MULTI-PRODUCER QUEUE BY MULTIPLE THREADS - One embodiment of the present invention sets forth a technique for ensuring that multiple producer threads may simultaneously write entries in a shared queue and one or more consumers may read valid data from the shared queue. Additionally, writing of the shared queue by the multiple producer threads may occur in parallel and the one or more consumer threads may read the shared queue while the producer threads write the shared queue. A “wait-free” mechanism allows any producer thread that writes a shared queue to advance an inner pointer that is used by a consumer thread to read valid data from the shared queue.01-10-2013
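The scheme this abstract describes, producers claiming slots and a consumer advancing an inner pointer only past published entries, can be sketched in Python. This is a hedged illustration, not the patented mechanism: `MPQueue` and its fields are invented names, and the atomic ticket here relies on `itertools.count` being effectively atomic under the CPython GIL rather than on hardware atomics:

```python
import itertools
import threading

class MPQueue:
    """Multi-producer queue sketch: each producer claims a unique slot with
    an atomic ticket, writes its entry, then marks the slot ready; the
    consumer's inner pointer advances only past published (ready) entries."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.ready = [False] * capacity
        self._ticket = itertools.count()   # effectively atomic under the CPython GIL
        self.inner = 0                     # consumer's read pointer

    def put(self, item):
        i = next(self._ticket)             # claim a unique slot without blocking
        self.slots[i] = item               # write the entry
        self.ready[i] = True               # publish: now visible to the consumer

    def drain(self):
        out = []
        while self.inner < len(self.slots) and self.ready[self.inner]:
            out.append(self.slots[self.inner])
            self.inner += 1                # advance past valid data only
        return out
```

The consumer may call `drain` at any time; it returns only the contiguous prefix of published entries, so a slot claimed but not yet published stalls the inner pointer rather than exposing invalid data.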
20090150894Nonvolatile memory (NVM) based solid-state disk (SSD) system for scaling and quality of service (QoS) by parallelizing command execution - A method for scaling an SSD system which includes providing at least one storage interface and providing a flexible association between storage commands and a plurality of processing entities via a plurality of nonvolatile memory access channels. Each storage interface is associated with a plurality of nonvolatile memory access channels.06-11-2009
20120159506SCHEDULING AND MANAGEMENT IN A PERSONAL DATACENTER - A personal datacenter system is described herein that provides a framework for leveraging multiple heterogeneous computers in a dynamically changing environment together as an ad-hoc cluster for performing parallel processing of various tasks. A home environment is much more heterogeneous and dynamic than a typical datacenter, and typical datacenter scheduling strategies do not work well for these types of small clusters. Machines in a home are likely to be powered on and off, be removed and taken elsewhere, and be connected by an ad-hoc network topology with a mix of wired and wireless technologies. The personal data center system provides components to overcome these differences. The system identifies a dynamically available set of machines, characterizes their performance, discovers the network topology, and monitors the available communications bandwidth between machines. This information is then used to compute an efficient execution plan for data-parallel and/or High Performance Computing (HPC)-style applications.06-21-2012
20120159502VARIABLE INCREMENT REAL-TIME STATUS COUNTERS - Processes, devices, and articles of manufacture having provisions to monitor and track multi-core Central Processor Unit resource allocation and deallocation in real-time are provided. The allocation and deallocation may be tracked by two counters with the first counter incrementing up or down depending upon the allocation or deallocation at hand, and with the second counter being updated when the first counter value meets or exceeds a threshold value.06-21-2012
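The two-counter arrangement this abstract describes is compact enough to sketch directly. The Python below is an illustrative sketch under assumed semantics, not the patented implementation (`StatusCounters` and its method names are invented): a fine-grained counter moves up or down with each allocation or deallocation, and the second counter is updated only when the first reaches the threshold, batching updates to the shared status:

```python
class StatusCounters:
    """Two-counter tracking sketch: the first counter increments up or down
    per allocation/deallocation; the second is updated only when the first
    counter's magnitude meets or exceeds the threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.fine = 0     # first counter: per-event delta
        self.coarse = 0   # second counter: batched status

    def allocate(self, n=1):
        self.fine += n
        self._maybe_update()

    def deallocate(self, n=1):
        self.fine -= n
        self._maybe_update()

    def _maybe_update(self):
        if abs(self.fine) >= self.threshold:
            self.coarse += self.fine   # fold the accumulated delta in
            self.fine = 0
```

Batching this way trades a bounded staleness (up to `threshold - 1` events) for far fewer updates to the shared second counter.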
20120159503WORK FLOW COMMAND PROCESSING SYSTEM - A method including receiving a work flow for the ingestion, transformation, and distribution of content, wherein the work flow includes one or more work unit tasks; selecting one of the one or more work unit tasks for execution when resources are available; retrieving work unit task information that includes a work unit definition that specifies which of the one or more other work unit tasks are capable of being at least one of an input to the one of the one or more work unit tasks or an output for the one of the one or more work unit tasks, and work unit task connector parameters that specify a type of input content and a type of output content; and executing the one of the one or more work unit tasks based on a translated work unit task information.06-21-2012
20120159504Mutual-Exclusion Algorithms Resilient to Transient Memory Faults - Techniques for implementing mutual-exclusion algorithms that are also fault-resistant are described herein. For instance, this document describes systems that implement fault-resistant, mutual-exclusion algorithms that at least prevent simultaneous access of a shared resource by multiple threads when (i) one of the multiple threads is in its critical section, and (ii) the other thread(s) are waiting in a loop to enter their respective critical sections. In some instances, these algorithms are fault-tolerant to prevent simultaneous access of the shared resource regardless of a state of the multiple threads executing on the system. In some instances, these algorithms may resist (e.g., tolerate entirely) transient memory faults (or “soft errors”).06-21-2012
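One way to make a classic mutual-exclusion algorithm resilient to a transient flip of a waiting thread's flag is to have the waiter keep re-asserting its own intent while it spins. The sketch below applies that idea to a two-thread Peterson lock; it is an illustrative example of the general technique, not the specific algorithms of this application, and `ResilientPeterson` is an invented name:

```python
import threading

class ResilientPeterson:
    """Two-thread Peterson lock sketch; while waiting, a thread keeps
    re-writing its own intent flag, so a transient memory fault that
    clears the flag is repaired on the next loop iteration."""
    def __init__(self):
        self.flag = [False, False]
        self.turn = 0

    def acquire(self, i):
        other = 1 - i
        self.flag[i] = True
        self.turn = other
        while self.flag[other] and self.turn == other:
            self.flag[i] = True   # repair a possible transient flip while spinning

    def release(self, i):
        self.flag[i] = False
```

The re-assertion only protects the waiting-in-a-loop case the abstract highlights; tolerating faults in the critical section itself requires the stronger, fully fault-tolerant variants the abstract mentions.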
20120159509LANDSCAPE REORGANIZATION ALGORITHM FOR DYNAMIC LOAD BALANCING - A method and system for reorganizing a distributed computing landscape for dynamic load balancing is presented. A method includes the steps of collecting information about resource usage by a plurality of hosts in a distributed computing system, and generating a target distribution of the resource usage for the distributed computing system. The method further includes the step of generating an estimate of an improvement of the resource usage according to a reorganization plan.06-21-2012
20120159508TASK MANAGEMENT SYSTEM, TASK MANAGEMENT METHOD, AND PROGRAM - A task management system includes a capacity information acquisition section which acquires, from a computation device which executes a computation using electrical power derived from renewable energy, capacity information which shows the computation capacity of the computation device which is predicted based on weather information of a region where the computation device is disposed, and a task management section which allocates a computation task to a plurality of the computation devices based on the capacity information which is acquired from the plurality of computation devices using the capacity information acquisition section.06-21-2012
20120159507COMPILING APPARATUS AND METHOD OF A MULTICORE DEVICE - An apparatus and method capable of reducing idle resources in a multicore device and improving the use of available resources in the multicore device are provided. The apparatus includes a static scheduling unit configured to generate one or more task groups, and to allocate the task groups to virtual cores by dividing or combining the tasks included in the task groups based on the execution time estimates of the task groups. The apparatus also includes a dynamic scheduling unit configured to map the virtual cores to physical cores.06-21-2012
20120159505Resilient Message Passing Applications - A message passing system may execute a parallel application on multiple compute nodes. Each compute node may perform a single workload on at least two physical computing resources. Messages may be passed from one compute node to another, and each physical computing resource assigned to a compute node may receive and process the messages. In some embodiments, the compute nodes may be virtualized so that a message passing system may only detect a single compute node and not the multiple underlying physical computing resources.06-21-2012
20120072918GENERATION OF GENERIC UNIVERSAL RESOURCE INDICATORS - Various arrangements for creating and using generic universal resource indicators are presented. To create a generic universal resource indicator, one or more parameters of a universal resource indicator may be identified. An interface that permits a parameter of the one or more parameters to be selected and mapped to a variable may be presented. A selection of the parameter for mapping may be received. An indication of the variable to map to the parameter of the selection may also be received. The generic universal resource indicator having a generic parameter corresponding to the parameter of the selection may be created.03-22-2012
20120072917METHOD AND APPARATUS FOR DISTRIBUTING COMPUTATION CLOSURES - An approach is provided for backend based computation closure oriented distributed computing. A computational processing support infrastructure receives a request for specifying one or more processes executing on a device for distribution over a computation space. The computational processing support infrastructure also causes, at least in part, serialization of the one or more processes as one or more closure primitives, the one or more closure primitives representing computation closures of the one or more processes. The computational processing support infrastructure further causes, at least in part, distribution of the one or more closure primitives over the computation space based, at least in part, on a cost function.03-22-2012
20120110592Autonomic Self-Tuning Of Database Management System In Dynamic Logical Partitioning Environment - An automated monitor monitors one or more resource parameters in a logical partition running a database application in a logically partitioned data processing host. The monitor initiates dynamic logical partition reconfiguration in the event that the parameters vary from predetermined parameter values. In particular, the monitor can initiate removal of resources if one of the resource parameters is being underutilized and initiate addition of resources if one of the resource parameters is being overutilized. The monitor can also calculate an amount of resources to be removed or added. The monitor can interact directly with a dynamic logical partition reconfiguration function of the data processing host or it can utilize an intelligent intermediary that listens for a partition reconfiguration suggestion from the monitor. In the latter configuration, the listener can determine where available resources are located and attempt to fully or partially satisfy the resource needs suggested by the monitor.05-03-2012
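The monitor's core decision, remove resources when a parameter is underutilized and add them when it is overutilized, reduces to a threshold rule. The Python below is a minimal sketch of that rule with invented names and thresholds (`retune`, `low`, `high`); the actual monitor would feed its suggestion to the host's dynamic partition reconfiguration function or a listener intermediary:

```python
def retune(current, utilization, low=0.2, high=0.8, step=1, floor=1, ceiling=16):
    """Suggest a new resource count for the partition: shrink when the
    monitored parameter is underutilized, grow when overutilized, and
    leave it alone inside the band."""
    if utilization < low:
        return max(floor, current - step)    # underutilized: remove resources
    if utilization > high:
        return min(ceiling, current + step)  # overutilized: add resources
    return current
```

A real monitor would also dampen oscillation (for example by requiring several consecutive out-of-band samples) before initiating a reconfiguration.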
20120110587Methods and apparatuses for accumulating and distributing processing power - Calculating and distributing resources of at least one electronic device over a network.05-03-2012
20120110593System and Method for Migration of Data - Systems and methods for data migration are disclosed. A method may include allocating a destination storage resource to receive migration data. The method may also include assigning the destination storage resource a first identifier value equal to an identifier value associated with a source storage resource. The method may additionally include assigning the source storage resource a second identifier value different than the first identifier value. The method may further include migrating data from the source storage resource to the destination storage resource.05-03-2012
20120079494System And Method For Maximizing Data Processing Throughput Via Application Load Adaptive Scheduling And Content Switching - The invention enables dynamic, software application load adaptive optimization of data processing capacity allocation on a shared processing hardware among a set of application software programs sharing said hardware. The invented techniques allow multiple application software programs to execute in parallel on a shared CPU, with application ready-to-execute status adaptive scheduling of CPU cycles and context switching between applications done in hardware logic, without a need for system software involvement. The invented data processing system hardware dynamically optimizes allocation of its processing timeslots among a number of concurrently running processing software applications, in a manner adaptive to realtime processing loads of the applications, without using the CPU capacity for any non-user overhead tasks. The invention thereby achieves continuously maximized data processing throughput for variable-load processing applications, while ensuring that any given application gets at least its entitled share of the processing system capacity whenever so demanded.03-29-2012
20120079498METHOD AND APPARATUS FOR DYNAMIC RESOURCE ALLOCATION OF PROCESSING UNITS - A method and apparatus for dynamic resource allocation in a system having at least one processing unit are disclosed. The method of dynamic resource allocation includes receiving information on a task to which resources are allocated and partitioning the task into one or more task parallel units; converting the task into a task block having a polygonal shape according to expected execution times of the task parallel units and dependency between the task parallel units; allocating resources to the task block by placing the task block on a resource allocation plane having a horizontal axis of time and a vertical axis of processing units; and executing the task according to the resource allocation information. Hence, CPU resources and GPU resources in the system can be used in parallel at the same time, increasing overall system efficiency.03-29-2012
20120079497Predicting Resource Requirements for a Computer Application - A resource consumption model is created for a software application, making it possible to predict the resource requirements of the application in different states. The model has a structure corresponding to that of the application itself, and is interpreted to some degree in parallel with the application, but each part of the model is interpreted in less time than it takes to complete the corresponding part of the application, so that resource requirement predictions are available in advance. The model may be interpreted in a look-ahead mode, wherein different possible branches of the model are interpreted so as to obtain resource requirement predictions for the application after completion of the present step. The model may be derived automatically from the application at design or compilation, and populated by measuring the requirements of the application in response to test scenarios in a controlled environment.03-29-2012
20120079495MANAGING ACCESS TO A SHARED RESOURCE IN A DATA PROCESSING SYSTEM - Processes requiring access to shared resources are adapted to issue a reservation request, such that a place in a resource access queue, such as one administered by means of a semaphore system, can be reserved for the process. The reservation is issued by a Reservation Management module at a time calculated to ensure that the reservation reaches the head of the queue as closely as possible to the moment at which the process actually needs access to the resource. The calculation may be made on the basis of priority information concerning the process itself, and statistical information gathered concerning historical performance of the queue.03-29-2012
20120079493USE OF CONSTRAINT-BASED LINEAR PROGRAMMING TO OPTIMIZE HARDWARE SYSTEM USAGE - A computer implemented method, system, and/or computer program product optimizes systems usage. A work request is decomposed into units of work. A processor selectively sends each unit of work from the work request to either a first system or a second system for execution, depending on a work constraint on each unit of work and/or system constraints on the first and second systems.03-29-2012
20090106766STORAGE ACCESS DEVICE - A storage access device, which issues an I/O request (input/output request) to a logical unit provided by one or more storage systems, holds association information showing that a plurality of logical volumes corresponding to a plurality of logical units, which belong to the same copy-set, are associated. In the storage access device, the respective associated logical volumes shown by this association information are allocated to a virtual device, and the virtual device is provided to an application.04-23-2009
20110107343SYSTEM AND METHOD OF PROVIDING A FIXED TIME OFFSET BASED DEDICATED CO-ALLOCATION OF A COMMON RESOURCE SET - Disclosed are a system, method and computer-readable medium relating to managing resources within a compute environment having a group of nodes or computing devices. The method comprises, for each node in the compute environment: traversing a list of jobs having a fixed time relationship, wherein for each job in the list the following steps occur: obtaining a range list of available timeframes for the job, converting each availability timeframe to a start range, shifting the resulting start range in time by a job offset, for a first job, copying the resulting start range into a node range, and for all subsequent jobs, logically AND'ing the start range with the node range. Next, the method comprises logically OR'ing the node range with a global range, generating a list of acceptable resources on which to start and the timeframe at which to start, and creating reservations according to the list of acceptable resources for the resources in the group of computing devices and associated job offsets.05-05-2011
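The convert/shift/AND/OR pipeline in this abstract can be modeled with integer start times. The Python below is an illustrative sketch under simplifying assumptions (discrete time, a single fixed duration for all jobs); `co_allocate` and its argument names are invented. Each job's availability windows become feasible start times, each start range is shifted back by the job's offset, the per-node ranges are intersected (AND) across jobs, and the node ranges are unioned (OR) into a global range:

```python
def co_allocate(node_avail, job_offsets, duration):
    """node_avail: per node, a list (one per job) of availability windows
    [a, b). Returns the sorted global range of feasible co-start times."""
    global_range = set()
    for windows_per_job in node_avail:          # one entry per node
        node_range = None
        for windows, offset in zip(windows_per_job, job_offsets):
            starts = set()
            for a, b in windows:                # availability timeframe [a, b)
                starts.update(range(a, b - duration + 1))   # convert to start range
            starts = {s - offset for s in starts}           # shift by job offset
            # first job: copy into node range; later jobs: AND with it
            node_range = starts if node_range is None else node_range & starts
        global_range |= node_range or set()     # OR node range into global range
    return sorted(global_range)
```

With job 1 offset by 3 time units after job 0, a start time survives the AND only if both jobs fit their own windows at that relative spacing.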
20120317582Composite Contention Aware Task Scheduling - A mechanism is provided for composite contention aware task scheduling. The mechanism performs task scheduling with shared resources in computer systems. A task is a group of instructions. A compute task is a group of compute instructions. A memory task, also referred to as a communication task, may be a group of load/store operations, for example. The mechanism performs composite contention-aware scheduling that considers the interaction among compute tasks, communication tasks, and application threads that include compute and communication tasks. The mechanism performs a composite of memory task throttling and application thread throttling.12-13-2012
20100095302DATA PROCESSING APPARATUS, DISTRIBUTED PROCESSING SYSTEM, DATA PROCESSING METHOD AND DATA PROCESSING PROGRAM - A terminal includes a task information acquiring unit which acquires information on a task of data processing, and a communication task generator which generates a send task to allow a source apparatus of data required by the task to transmit the data required by the task to an apparatus executing the task and which transmits the send task to the source apparatus, when the source apparatus is another apparatus, which is different from the apparatus executing the task and which is connected to the apparatus executing the task via a network.04-15-2010
20100095301METHOD FOR PROVIDING SERVICE IN PERVASIVE COMPUTING ENVIRONMENT AND APPARATUS THEREOF - Provided is a method for providing a service in a pervasive computing environment. The method extracts the service types that can be provided by a resource discovered in the environment; when a service type to be executed is selected in an application, the corresponding resource is allocated to the selected service, allowing the application to execute the service using the allocated resource. Further, the allocated resource is locked, and is unlocked upon the request of another application.04-15-2010
20090133030SYSTEM FOR ON DEMAND TASK OPTIMIZATION - An apparatus and program product determine information indicative of a performance differential between operation of a computer with the standby resource activated and operation of the computer with the standby resource inactivated. The information is communicated to a user. The standby resource may be activated in response to the determination.05-21-2009
20090133029METHODS AND SYSTEMS FOR TRANSPARENT STATEFUL PREEMPTION OF SOFTWARE SYSTEM - Methods and systems for preemption of software in a computing system that include receiving a preempt request for a process in execution using a set of resources, pausing the execution of the process; and releasing the resources to a shared pool.05-21-2009
20090133028SYSTEM AND METHOD FOR MANAGEMENT OF AN IOV ADAPTER THROUGH A VIRTUAL INTERMEDIARY IN A HYPERVISOR WITH FUNCTIONAL MANAGEMENT IN AN IOV MANAGEMENT PARTITION - A system and method which provide a mechanism for an I/O virtualization management partition (IMP) to control the shared functionality of an I/O virtualization (IOV) enabled I/O adapter (IOA) through a physical function (PF) of the IOA while the virtual functions (VFs) are assigned to client partitions for normal I/O operations directly. A hypervisor provides device-independent facilities to the code running in the IMP and client partitions. The IMP may include device specific code without the hypervisor needing to sacrifice its size, robustness, and upgradeability. The hypervisor provides the virtual intermediary functionally for the sharing and control of the IOA's control functions.05-21-2009
20120317580Apportioning Summarized Metrics Based on Unsummarized Metrics in a Computing System - A method for apportioning summarized metrics based on unsummarized metrics in a computing system includes receiving, by a memory device of the computing system, a log file, the log file comprising unsummarized metrics, the unsummarized metrics being related to a plurality of transactions performed by a program in the computing system, and a summarized metric, the summarized metric being related to the program, wherein the summarized metric comprises accumulated data from the plurality of transactions; selecting an unsummarized metric that reflects a distribution of the summarized metric among the plurality of transactions by a processing device of the computing system; and determining an amount of the summarized metric that belongs to a transaction of the plurality of transactions based on the selected unsummarized metric by the processing device of the computing system.12-13-2012
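Once an unsummarized metric reflecting the distribution is selected, apportioning reduces to a proportional split. The Python below is a minimal sketch (the function name and the choice of per-transaction weights are illustrative): each transaction receives the share of the summarized metric given by its fraction of the selected unsummarized metric:

```python
def apportion(summarized_total, unsummarized):
    """Split a summarized, program-level metric among transactions in
    proportion to a per-transaction unsummarized metric (for example,
    per-transaction elapsed time from the log file)."""
    total = sum(unsummarized.values())
    return {txn: summarized_total * v / total
            for txn, v in unsummarized.items()}
```

The quality of the result hinges on the selection step in the abstract: the chosen unsummarized metric must actually track how the accumulated quantity was incurred across transactions.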
20120317579SYSTEM AND METHOD FOR PERFORMING DISTRIBUTED PARALLEL PROCESSING TASKS IN A SPOT MARKET - As a result of the systems and methods described herein, an alternative MapReduce implementation is provided which monitors for impending termination notices, and allows dynamic checkpointing and storing of processed portions of a map task, such that any processing which is interrupted by large scale terminations of a plurality of computing devices—such as those resulting from spot market rate fluctuations—is preserved.12-13-2012
20120317578Scheduling Execution of Complementary Jobs Based on Resource Usage - The subject disclosure is directed towards executing jobs based on resource usage. When a plurality of jobs is received, one or more jobs are mapped to one or more other jobs based on which resources are fully utilized or overloaded. The utilization of these resources by the one or more jobs complements utilization of these resources by the one or more other jobs. The resources are partitioned at one or more servers in order to efficiently execute the one or more jobs and the one or more other jobs. The resources may be partitioned equally or proportionally based on the resource usage or priorities.12-13-2012
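Mapping jobs to complementary jobs can be illustrated with a simple greedy pairing: rank jobs by how CPU-bound they are and pair the most CPU-bound with the most I/O-bound, so co-scheduled jobs stress different resources. This Python sketch is an invented illustration of the idea, not the disclosed method, and its names and two-resource model are assumptions:

```python
def pair_complements(jobs):
    """jobs: {name: (cpu_demand, io_demand)}. Returns (pairs, leftover),
    pairing the most CPU-bound job with the most I/O-bound one, and so on."""
    # Sort ascending by CPU share of total demand: I/O-bound first.
    order = sorted(jobs, key=lambda j: jobs[j][0] / (jobs[j][0] + jobs[j][1]))
    pairs = []
    while len(order) >= 2:
        # Most CPU-bound (end) with most I/O-bound (front).
        pairs.append((order.pop(), order.pop(0)))
    return pairs, order   # order holds an unpaired leftover job, if any
```

Each pair can then be placed on one server with the resources partitioned between the two jobs, equally or in proportion to their demands, as the abstract describes.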
20120167113VARIABLE INCREMENT REAL-TIME STATUS COUNTERS - Processes, devices, and articles of manufacture having provisions to monitor and track multi-core Central Processor Unit resource allocation and deallocation in real-time are provided. The allocation and deallocation may be tracked by two counters with the first counter incrementing up or down depending upon the allocation or deallocation at hand, and with the second counter being updated when the first counter value meets or exceeds a threshold value.06-28-2012
20120317581MANAGEMENT OF COPY SERVICES RELATIONSHIPS VIA POLICIES SPECIFIED ON RESOURCE GROUPS - At least one additional resource group attribute is defined to specify at least one policy prescribing a copy services relationship between two of the storage resources. Pursuant to a request to establish the copy services relationship between the two storage resources, each of the two storage resources exchange resource group labels corresponding to which of the plurality of resource groups the two storage resources are assigned, and each of the two storage resources validates the requested copy services relationship and the resource group label of an opposing one of the two storage resources against the individual ones of the at least one additional resource group attribute in the resource group object to determine if the copy services relationship may proceed.12-13-2012
20120317583HIGHLY RELIABLE AND SCALABLE ARCHITECTURE FOR DATA CENTERS - The present invention provides a highly reliable and scalable architecture for data centers. Work to be performed is divided into discrete work units. The work units are maintained in a pool of work units that may be processed by any number of different servers. A server may extract an eligible work unit and attempt to process it. If the processing of the work unit succeeds, the work unit is tagged as executed and becomes ineligible for other servers. If the server fails to execute the work unit for some reason, the work unit becomes eligible again and another server may extract and execute it. A server extracts and executes work units when they have available resources. This leads to the automatic load balancing of the data center.12-13-2012
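The eligibility protocol in this abstract is a small state machine per work unit. The Python below is an illustrative single-process sketch with invented names; in the described architecture the pool would live in shared storage and extraction would need an atomic test-and-set so two servers cannot claim the same unit:

```python
class WorkPool:
    """Pool-of-work-units sketch: any server may extract an eligible unit;
    success tags it executed (ineligible for others), failure makes it
    eligible again so another server can pick it up."""
    ELIGIBLE, IN_PROGRESS, EXECUTED = "eligible", "in_progress", "executed"

    def __init__(self, units):
        self.state = {u: self.ELIGIBLE for u in units}

    def extract(self):
        for u, s in self.state.items():
            if s == self.ELIGIBLE:
                self.state[u] = self.IN_PROGRESS   # claim the unit
                return u
        return None                                # nothing eligible right now

    def complete(self, unit, ok):
        # Success: tagged executed. Failure: eligible again for any server.
        self.state[unit] = self.EXECUTED if ok else self.ELIGIBLE
```

Because servers pull work only when they have spare resources, load balancing across the data center falls out automatically, as the abstract notes.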
20120131589METHOD FOR SCHEDULING UPDATES IN A STREAMING DATA WAREHOUSE - A method for scheduling atomic update jobs to a streaming data warehouse includes allocating execution tracks for executing the update jobs. The tracks may be assigned a portion of available processor utilization and memory. A database table may be associated with a given track. An update job directed to the database table may be dispatched to the given track for the database table, when the track is available. When the track is not available, the update job may be executed on a different track. Furthermore, pending update jobs directed to common database tables may be combined and separated in certain transient conditions.05-24-2012
20120167112Method for Resource Optimization for Parallel Data Integration - For optimizing resources for a parallel data integration job, a job request is received, which specifies a parallel data integration job to deploy in a grid. Grid resource utilizations are predicted for hypothetical runs of the specified job on respective hypothetical grid resource configurations. This includes automatically predicting grid resource utilizations by a resource optimizer module responsive to a model based on a plurality of actual runs of previous jobs. A grid resource configuration is selected for running the parallel data integration job, which includes the optimizer module automatically selecting a grid resource configuration responsive to the predicted grid resource utilizations and an optimization criterion.06-28-2012
20120216213ELECTRONIC CONTROL UNIT HAVING A REAL-TIME CORE MANAGING PARTITIONING - An electronic control unit having a microcontroller provided with RAM associated with variable data and ROM associated with the code of a software operating system incorporating a real-time core for executing computer tasks. The RAM and ROM include zones corresponding to partitions, one of which is allocated to the real-time core, while each of the others is allocated to at least one of the tasks. The RAM and the ROM are associated with an address bus that is physically programmed so that each partition is prevented firstly from writing in another one of the zones of the RAM, and secondly from executing another one of the zones of the ROM. The real-time core is associated with a timer for allocating an execution time to each partition.08-23-2012
20120216212ASSIGNING A PORTION OF PHYSICAL COMPUTING RESOURCES TO A LOGICAL PARTITION - A computer implemented method includes determining first characteristics of a first logical partition, the first characteristics including a memory footprint characteristic. The method includes assigning a first portion of a first set of physical computing resources to the first logical partition. The first set of physical computing resources includes a plurality of processors that includes a first processor having a first processor type and a second processor having a second processor type. The first portion includes the second processor. The method includes dispatching the first logical partition to execute using the first portion. The method includes creating a second logical partition that includes the second processor and assigning a second portion of the first set of physical computing resources to the second logical partition. The method includes dispatching the second logical partition to execute using the second portion.08-23-2012
20120216211AUTHENTICATING A PROCESSING SYSTEM ACCESSING A RESOURCE - Provided are a method, system, and article of manufacture for authenticating a processing system accessing a resource. An association of processing system identifiers with resources, including a first and second resources, is maintained. A request from a requesting processing system in a host is received for use of a first resource that provides access to a second resource, wherein the request is generated by processing system software and wherein the request further includes a submitted processing system identifier included in the request by host hardware in the host. A determination is made as to whether the submitted processing system identifier is one of the processing system identifiers associated with the first and second resources. The requesting processing system is provided access to the first resource that the processing system uses to access the second resource.08-23-2012
20100205608Mechanism for Managing Resource Locking in a Multi-Threaded Environment - A mechanism is disclosed for implementing resource locking in a massively multi-threaded environment. The mechanism receives from a stream a request to obtain a lock on a resource. In response, the mechanism determines whether the resource is currently locked. If so, the mechanism adds the stream to a wait list. At some point, based upon the wait list, the mechanism determines that it is the stream's turn to lock the resource; thus, the mechanism grants the stream a lock. In this manner, the mechanism enables the stream to reserve and to obtain a lock on the resource. By implementing locking in this way, a stream is able to submit only one lock request. When it is its turn to obtain a lock, the stream is granted that lock. This lock reservation methodology makes it possible to implement resource locking efficiently in a massively multi-threaded environment.08-12-2010
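The reservation discipline this abstract describes, one request per stream, a wait list, and grant-at-head, maps naturally onto a FIFO. The Python below is an illustrative sketch with invented names (`WaitListLock`, `request`, `release`), not the disclosed hardware mechanism:

```python
from collections import deque

class WaitListLock:
    """Lock-reservation sketch: a stream submits one request; if the
    resource is free it is granted immediately, otherwise the stream is
    appended to the wait list and granted the lock on reaching the head."""
    def __init__(self):
        self.holder = None
        self.waiters = deque()

    def request(self, stream):
        if self.holder is None:
            self.holder = stream        # resource free: grant immediately
            return True
        self.waiters.append(stream)     # otherwise reserve a turn
        return False

    def release(self):
        # Grant the lock to the stream at the head of the wait list, if any.
        self.holder = self.waiters.popleft() if self.waiters else None
        return self.holder
```

Because a stream never re-polls, the scheme avoids the retry storms that make spin-based locking expensive with very large thread counts.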
20120137303COMPUTER SYSTEM - Provided is a computer system capable of reliably eliminating duplicated data regardless of the size of the write unit used by the host computer when writing to the storage subsystem, or of the management unit size used in duplicate elimination.05-31-2012
20110185365DATA PROCESSING SYSTEM, METHOD FOR PROCESSING DATA AND COMPUTER PROGRAM PRODUCT - A computer-implemented data processing system, computer-implemented method and computer program product for processing data. The system includes: a scheduler; a processor system; and at least one producer for executing a task. The scheduler is operable to allocate to the producer with respect to a scheme, a processing time of a processing resource of the processor system. The producer is operable to execute the task using said processor system during the allocated processing time.07-28-2011
20110185364EFFICIENT UTILIZATION OF IDLE RESOURCES IN A RESOURCE MANAGER - Embodiments are directed to dynamically allocating processing resources among a plurality of resource schedulers. A resource manager dynamically allocates resources to a first resource scheduler. The resource manager is configured to dynamically allocate resources among a plurality of resource schedulers, and each scheduler is configured to manage various processing resources. The resource manager determines that at least one of the processing resources dynamically allocated to the first resource scheduler is idle. The resource manager determines that at least one other resource scheduler needs additional processing resources and, based on the determination, loans the determined idle processing resource of the first resource scheduler to a second resource scheduler.07-28-2011
20120216209VISUALIZATION-CENTRIC PERFORMANCE-BASED VOLUME ALLOCATION - A method, system, and computer program product for visualization-centric performance-based volume allocation in a data storage system using a processor in communication with a memory device is provided. A unified resource graph representative of a global hierarchy of storage components in the data storage system, including each of a plurality of storage controllers, is generated. The unified resource graph includes a common root node and a plurality of subtree nodes corresponding to each of a plurality of nodes internal to the plurality of storage controllers. The common root node and the plurality of subtree nodes are ordered in a top-down orientation. Scalable volume provisioning of an existing or new workload amount by graphical manipulation of at least one of the storage components represented by the unified resource graph is performed based on an input.08-23-2012
20120216210PROCESSOR WITH RESOURCE USAGE COUNTERS FOR PER-THREAD ACCOUNTING - Processor time accounting is enhanced by per-thread internal resource usage counter circuits that account for usage of processor core resources to the threads that use them. Relative resource use can be determined by detecting events such as instruction dispatches for multiple threads active within the processor, which may include idle threads that are still occupying processor resources. The values of the resource usage counters are used periodically to determine relative usage of the processor core by the multiple threads. If all of the events are for a single thread during a given period, the processor time is allocated to the single thread. If no events occur in the given period, then the processor time can be equally allocated among threads. If multiple threads are generating events, a fractional resource usage can be determined for each thread and the counters may be updated in accordance with their fractional usage.08-23-2012
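The per-period accounting rule described above distinguishes three cases: all events from a single thread, no events at all, and events from multiple threads. A minimal sketch of that rule follows; `allocate_period` and its inputs are illustrative names, not from the patent.

```python
def allocate_period(period, events_per_thread):
    """Split one accounting period among threads based on per-thread
    event counts (e.g. instruction dispatches).
    events_per_thread: dict mapping thread_id -> event count."""
    total = sum(events_per_thread.values())
    n = len(events_per_thread)
    if total == 0:
        # no events in the period: allocate equally among threads
        return {t: period / n for t in events_per_thread}
    active = [t for t, c in events_per_thread.items() if c > 0]
    if len(active) == 1:
        # all events belong to a single thread: it gets the whole period
        return {t: (period if c > 0 else 0.0)
                for t, c in events_per_thread.items()}
    # multiple threads generating events: fractional usage per thread
    return {t: period * c / total for t, c in events_per_thread.items()}
```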
20120254885RUNNING A PLURALITY OF INSTANCES OF AN APPLICATION - Running of a root instance of an application is started. The root instance includes at least one thread. In response to determining that a thread of the root instance runs to a preset freezing point in the application, running of all threads of the root instance is stopped. In response to starting to run an additional instance of the application, a running state of all threads of the root instance is replicated as a running state of all threads of the additional instance of the application. Running all threads of the additional instance of the application is continued.10-04-2012
20120254884DYNAMICALLY SWITCHING THE SERIALIZATION METHOD OF A DATA STRUCTURE - Embodiments of the invention comprise a method for dynamically switching a serialization method of a data structure. If use of the serialization mechanism is desired, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is in use, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is not in use, an instruction to access the data structure without obtaining the serialization mechanism is received.10-04-2012
20100175068LIMITING THE AVAILABILITY OF COMPUTATIONAL RESOURCES TO A DEVICE TO STIMULATE A USER OF THE DEVICE TO APPLY NECESSARY UPDATES - Provided are a method, system, and article of manufacture for limiting the availability of computational resources to a device to stimulate a user of the device to apply necessary updates. Indication of an update to the device is received and a determination is made as to whether the update has been applied to the device. The availability of computational resources used to execute processes at the device is limited in response to determining that the update has not been applied. Processes are then executed at the device using the limited available computational resources. A further determination is made as to whether the update has been applied after limiting the availability of the computational resources. The limiting of the availability of the computational resources at the device is reversed in response to determining that the update to the device was applied.07-08-2010
20120174114METHOD OF CALCULATING PROCESSOR UTILIZATION RATE IN SMT PROCESSOR - A method of calculating the processor utilization of each logical processor in a computer, including the steps of: dividing the computation interval over which the processor utilization of each logical processor is to be calculated into a single-task mode (ST) execution interval and a multitask mode (MT) execution interval, calculated from the before-and-after relation between two times; and adding the MT execution interval, multiplied by a predetermined MT-mode processor resource assignment ratio, to the ST execution interval to obtain the processor utilization of the targeted logical processor over the computation interval.07-05-2012
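The final step of the abstract above reduces to one formula. The sketch below assumes the MT-mode assignment ratio is expressed as a fraction (e.g. 0.5 for a 2-way SMT core); the abstract leaves the representation unspecified.

```python
def logical_cpu_utilization(st_time, mt_time, mt_share):
    """Processor utilization of one logical processor over an interval:
    st_time  - time spent executing in single-task (ST) mode
    mt_time  - time spent executing in multitask (MT) mode
    mt_share - predetermined MT-mode resource assignment ratio (0..1)."""
    return st_time + mt_time * mt_share
```

For example, a logical processor that ran 2 s alone and 4 s sharing a 2-way SMT core would be charged 2 + 4 * 0.5 = 4 s of utilization.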
20120174113TENANT VIRTUALIZATION CONTROLLER FOR A MULTI-TENANCY ENVIRONMENT - A system and method for load balancing in a multi-tenancy computing environment by shifting tenants from an overloaded system to a non-overloaded one. Initially, a determination is made as to whether a first tenant requires access to an instance of a software application that is already being accessed by other tenants of a first system. If so, the tenant is created at the first system. The created first tenant and the other tenants exist in a multi-tenancy computing environment that enables them to access the same instance of the software application. It is then checked whether the first system is overloaded. If it is, load balancing is performed as follows: the first tenant is exported from the overloaded first system to a less loaded second system; the first tenant's data containers remain stationary in virtual storage; and the first tenant continues to access the same instance of the software application it was accessing on the first system, but now using the memory and processing resources of the second system. Related apparatus, systems, techniques and articles are also described.07-05-2012
20120174116HIGH PERFORMANCE LOCKS - Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component.07-05-2012
20100299671VIRTUALIZED THREAD SCHEDULING FOR HARDWARE THREAD OPTIMIZATION - Embodiments are disclosed herein related to scheduling of virtualized runtime threads onto hardware threads that share hardware resources to improve processing performance. For example, one embodiment provides a computing system that includes a scheduler to schedule execution of virtualized source code. The virtualized source code may include virtualized runtime threads that may be scheduled by the scheduler onto hardware threads that share hardware resources. The scheduler may include a decoder to catalogue hardware resource parameters used by the virtualized source code. Furthermore, the scheduler may include a virtualization engine to schedule execution of the virtualized runtime threads onto the hardware threads based on the hardware resource parameters and a hardware-specific profile of the computing system.11-25-2010
20120222040RESOURCE MANAGEMENT SYSTEM, RESOURCE INFORMATION PROVIDING METHOD AND PROGRAM - Provided is a resource management system capable of stably providing the most recently updated resource information at high speed.08-30-2012
20120222039Resource Data Management - A set of data structures defines resource relationships and locations for a set of resources to form defined resource relationships and defined locations for the set of resources. A receiver obtains, from an unsecure device, replaceable unit data and characterization data for a current resource in the set of resources. A writer merges obtained replaceable unit data for a current resource with obtained characterization data for the current resource for each resource of the set of resources to form a set of data files.08-30-2012
20120222038TASK DEFINITION FOR SPECIFYING RESOURCE REQUIREMENTS - Task definitions are used by a task scheduler and prioritizer to allocate task operations to a plurality of processing units. A task definition is an electronic record that specifies resources needed by, and other characteristics of, a task to be executed. Resources include the types of processing nodes desired to execute the task, the needed amount or rate of processing cycles, amount of memory capacity, number of registers, input/output ports, buffer sizes, etc. Characteristics of a task include maximum latency time, frequency of execution, communication ports, and other characteristics. An exemplary task definition language and syntax is described that uses constructs including the order of attempted scheduling operations, the percentage or amount of resources desired by different operations, handling of multiple executable images or modules, overlays, port aliases and other features.08-30-2012
20100050181Method and System of Group-to-Group Computing - A method and system of group-to-group (G2G) computing, a G2G computing service system based on a portal network site, and a G2G search service system based on G2G computing. G2G computing is a kind of distributed computing based on the G2G network in which tasks are carried out by groups. The network formed by the groups and the relations between them is referred to as a G2G network. A group is a collection of nodes with the same attributes. G2G computing defines four basic operations: Transfer, Exchange, Node-process and Transmutation.02-25-2010
20100050180METHOD AND SYSTEM FOR GREEN COMPUTING INTERCHANGE SWITCHING FUNCTION - Systems, methods, devices and program products are provided for enabling users of a computing system to measure and compare the green efficiency of a set of resources used in a computing task. With the use of this information, the user can select a desired set of resources to be employed in the computing task to minimize the environmental impact of computing tasks in relation to requirements. In some embodiments, the invention creates metrics for measuring the greenness of a computing task. The metrics are calculated through analysis of the resource computation, energy consumption, consequence of computation, and dimensional characteristics of a computing task. These metrics, or others, permit the user or a processing system to make scheduling and execution decisions.02-25-2010
20100050179LAYERED CAPACITY DRIVEN PROVISIONING IN DISTRIBUTED ENVIRONMENTS - Techniques are disclosed for providing mapping of application components to a set of resources in a distributed environment using capacity driven provisioning using a layered approach. By way of example, a method for allocating resources to an application comprises the following steps. A first data structure is obtained representing a post order traversal of a dependency graph for the application and associated containers with capacity requirements. A second data structure is obtained representing a set of resources, and associated with each resource is a tuple representing available capacity. A mapping of the dependency graph data structure to the resource set is generated based on the available capacity such that resources of the set of resources are allocated to the application.02-25-2010
20080216085System and Method for Virtual Adapter Resource Allocation - A method, computer program product, and distributed data processing system that enables host software or firmware to allocate virtual resources to one or more system images from a single physical I/O adapter, such as a PCI, PCI-X, or PCI-E adapter, is provided. Adapter resource groups are assigned to respective system images. An adapter resource group is exclusively available to the system image to which the adapter resource group assignment was made. Assignment of adapter resource groups may be made per a relative resource assignment or an absolute resource assignment. In another embodiment, adapter resource groups are assigned to system images on a first come, first served basis.09-04-2008
20090106765Predetermination and propagation of resources in a distributed software build - Various technologies and techniques are disclosed for propagating resources during a distributed build process. Subscription of interest is registered in resources needed during a distributed build process. Build data is analyzed to determine what resources will be needed. The subscriptions of interest are stored in a data store that is accessible by all build machines participating in the distributed build process. A status of subscriptions of interest is monitored in the data store. When the status of respective subscriptions of interest indicates that a publication notice was registered for a respective resource, the respective resource is retrieved from a machine that contains the resource. When a new resource is created that is needed by other build machines, a publication notification is registered with the data store so the other build machines can determine that the new resource is now available.04-23-2009
20120180061Organizing Task Placement Based On Workload Characterizations - Task placement is influenced within a multiple processor computer. Tasks are classified as either memory bound or CPU bound by observing certain performance counters over the task execution. During a first pass of task load balance, tasks are balanced across various CPUs to achieve a fairness goal, where tasks are allocated CPU resources in accordance to their established fairness priority value. During a second pass of task load balance, tasks are rebalanced across CPUs to reduce CPU resource contention, such that the rebalance of tasks in the second pass does not violate fairness goals established in the first pass. In one embodiment, the second pass could involve re-balancing memory bound tasks across different cache domains, where CPUs in a cache domain share a same last mile CPU cache such as an L3 cache. In another embodiment, the second pass could involve re-balancing CPU bound tasks across different CPU domains of a cache domain, where CPUs in a CPU domain could be sharing some or all of CPU execution unit resources. The two passes could be executed at different frequencies.07-12-2012
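The two-pass idea above, fairness first and then spreading memory-bound tasks across cache domains, might be approximated as below. The placement policy, function name, and task encoding are assumptions for illustration only, not the patented algorithm.

```python
def place_tasks(tasks, num_domains):
    """Sketch of the second-pass idea: memory-bound tasks are spread
    round-robin across *cache domains* so they do not contend for the
    same last-level cache; cpu-bound tasks then fill the least-loaded
    domain, keeping overall counts balanced (the pass-1 fairness goal).
    tasks: list of (name, kind) with kind in {"mem", "cpu"}."""
    placement = {d: [] for d in range(num_domains)}
    mem_bound = [name for name, kind in tasks if kind == "mem"]
    cpu_bound = [name for name, kind in tasks if kind == "cpu"]
    for i, name in enumerate(mem_bound):
        placement[i % num_domains].append(name)
    for name in cpu_bound:
        least = min(placement, key=lambda d: len(placement[d]))
        placement[least].append(name)
    return placement
```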
20100275214DEVICE FOR SHARED MANAGEMENT OF A RESOURCE AMONG SEVERAL USERS - The device comprises a memory (…).10-28-2010
20100275213INFORMATION PROCESSING APPARATUS, PARALLEL PROCESS OPTIMIZATION METHOD - According to one embodiment, a parallel-processing optimization method is provided for an apparatus that dynamically assigns basic modules, into which a program is divided, to threads. The basic modules carry an execution rule that defines their execution order and are executable asynchronously from one another; assignable basic modules are identified based on the execution rule, and the threads are executed in parallel by execution modules. The method includes managing the assigned basic modules and the identifiers of the threads to which they are assigned, managing an executable set containing the assignable basic modules, calculating the data transfer costs of those basic modules, and selecting the basic module with the minimum transfer cost.10-28-2010
20100275212CONCURRENT DATA PROCESSING IN A DISTRIBUTED SYSTEM - Systems, methods, and computer media for scheduling vertices in a distributed data processing network and allocating computing resources on a processing node in a distributed data processing network are provided. Vertices, subparts of a data job including both data and computer code that runs on the data, are assigned by a job manager to a distributed cluster of process nodes for processing. The process nodes run the vertices and transmit computing resource usage information, including memory and processing core usage, back to the job manager. The job manager uses this information to estimate computing resource usage information for other vertices in the data job that are either still running or waiting to be run. Using the estimated computing resource usage information, each process node can run multiple vertices concurrently.10-28-2010
20120180063Method and Apparatus for Providing Management of Parallel Library Implementation - A method for providing management of parallel library implementations relative to available resources may include receiving an indication of a registration of a parallel library and determining processor utilization information based on current load conditions. The processor utilization information may be indicative of a number of processors to be made available to the parallel library for a process associated with the parallel library. The method may further include causing provision of the processor utilization information to the parallel library. A corresponding apparatus is also provided.07-12-2012
20120180062System and Method for Controlling Excessive Parallelism in Multiprocessor Systems - Execution of a computer program on a multiprocessor system is monitored to detect possible excess parallelism causing resource contention and the like and, in response, to controllably limit the number of processors applied to parallelize program components.07-12-2012
20100037233Processor core with per-thread resource usage accounting logic - Processor time accounting is enhanced by per-thread internal resource usage counter circuits that account for usage of processor core resources to the threads that use them. Relative resource use can be determined by detecting events such as instruction dispatches for multiple threads active within the processor, which may include idle threads that are still occupying processor resources. The values of the resource usage counters are used periodically to determine relative usage of the processor core by the multiple threads. If all of the events are for a single thread during a given period, the processor time is allocated to the single thread. If no events occur in the given period, then the processor time can be equally allocated among threads. If multiple threads are generating events, a fractional resource usage can be determined for each thread and the counters may be updated in accordance with their fractional usage.02-11-2010
20100011369DEVICE MANAGEMENT APPARATUS, JOB FLOW PROCESSING METHOD, AND TASK COOPERATIVE PROCESSING SYSTEM - In a task cooperative processing system that allows a plurality of task processing devices to execute a plurality of tasks performed on document data as a job flow, task processing devices that can execute a task included in the job flow are decided as candidate task processing devices (S…).01-14-2010
20100011368Methods, systems and programs for partitioned storage resources and services in dynamically reorganized storage platforms - Exemplary embodiments establish durable partitions that are unified across storage systems and storage server computers. The partitions provide independent name spaces and are able to maintain specified services and conditions regardless of operations taking place in other partitions, and regardless of configuration changes in the information system. A management computer manages and assigns resources and functions provided by storage server computers and storage systems to each partition. By using the assigned resources, a partition is able to provide storage and other services to users and applications on host computers. When a configuration change occurs, such as addition or deletion of equipment, the management computer performs reassignment of resources, manages migration of services and/or data, and otherwise maintains the functionality of the partition for the user or application. Additionally, a partition can be migrated within the information system for various purposes, such as improved performance, load balancing, and the like.01-14-2010
20100011367METHODS AND SYSTEMS FOR ALLOCATING A RESOURCE OF A VEHICLE AMONG A PLURALITY OF USES FOR THE RESOURCE - A method for implementing a request pertaining to a requested use of a plurality of uses of a resource of a vehicle includes the steps of determining whether the resource is configured for simultaneous use by two or more of the plurality of uses, determining whether the resource is being used by an existing use of the plurality of uses, and allowing the requested use of the resource and the existing use of the resource, if the resource is configured for simultaneous use by two or more of the plurality of uses and the resource is being used by the existing use.01-14-2010
20100011365Resource Allocation and Modification - A computer-implemented method includes obtaining information characterizing a level of actual usage of a first item of content; based on the obtained information, determining whether a re-provisioning condition is satisfied and if so, generating a specification of a re-provisioning operation to be executed in association with the resources of a storage environment; and executing the re-provisioning operation. The first item of content is stored on a first set of elements of resources of the storage environment according to a first resource allocation arrangement. The re-provisioning operation includes identifying a second resource allocation arrangement for storing the first item of content; and allocating a second set of elements of the resources of the storage environment according to the second resource allocation arrangement.01-14-2010
20090070766DYNAMIC WORKLOAD BALANCING IN A THREAD POOL - Provided are techniques for workload balancing. A message is received on a channel. A thread in a thread pool is selected to process the message. In response to determining that the message has been processed and a response has been sent on the channel by the thread, it is determined whether a total number of threads in the thread pool is greater than a low water mark plus one and whether the channel has more than a maximum number of threads blocked on a receive, wherein the low water mark represents a minimum number of threads in the thread pool. In response to determining that a number of threads in the thread pool is greater than the low water mark plus one and that the channel has more than the maximum number of threads blocked on a receive, the thread is terminated. In response to determining at least one of the number of threads in the thread pool is less than or equal to the low water mark plus one and the channel has less than or equal to the maximum number of threads blocked on a receive, the thread is retained.03-12-2009
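The termination condition in the abstract above is explicit enough to restate as a predicate: a thread that has just finished a request is terminated only if both size and blocked-receiver conditions hold. The function and argument names below are invented for the sketch.

```python
def should_terminate(pool_size, low_water_mark, blocked_on_channel, max_blocked):
    """After a thread processes a message and sends its response:
    terminate it only if the pool would stay above low_water_mark + 1
    AND the channel already has more than max_blocked threads blocked
    on a receive; otherwise retain the thread."""
    return (pool_size > low_water_mark + 1
            and blocked_on_channel > max_blocked)
```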
20120185866SYSTEM AND METHOD FOR MANAGING THE INTERLEAVED EXECUTION OF THREADS - A computer system for managing the execution of threads, including at least one central processing unit that performs interleaved execution of a plurality of threads across a plurality of virtual processors of that same central processing unit, and a handler for distributing the execution of the threads across the virtual processors.07-19-2012
20120185865MANAGING ACCESS TO A SHARED RESOURCE IN A DATA PROCESSING SYSTEM - Processes requiring access to shared resources are adapted to issue a reservation request, such that a place in a resource access queue, such as one administered by means of a semaphore system, can be reserved for the process. The reservation is issued by a Reservation Management module at a time calculated to ensure that the reservation reaches the head of the queue as closely as possible to the moment at which the process actually needs access to the resource. The calculation may be made on the basis of priority information concerning the process itself, and statistical information gathered concerning historical performance of the queue.07-19-2012
20120185864Integrated Environment for Execution Monitoring and Profiling of Applications Running on Multi-Processor System-on-Chip - There is provided a system and method for providing an integrated environment for execution monitoring and profiling of applications running on multi-processor system-on-chips. There is provided a method comprising obtaining task execution data of an application, the task execution data including a plurality of task executions assigned to a plurality of hardware resources, showing a scheduler view of the plurality of task executions on a display, receiving a modification request for a selected task execution from the plurality of task executions, reassigning the plurality of task executions to the plurality of hardware resources based on implementing the modification request, and updating the scheduler view on the display. As a result, the high level results of specific low level optimizations may be tested and retried to discover which optimization routes provide the greatest benefits.07-19-2012
20120185863METHODS FOR RESTRICTING RESOURCES USED BY A PROGRAM BASED ON ENTITLEMENTS - In response to a request for launching a program, a list of one or more application frameworks to be accessed by the program during execution of the program is determined. Zero or more entitlements representing one or more resources entitled by the program during the execution are determined. A set of one or more rules based on the entitlements of the program is obtained from at least one of the application frameworks. The set of one or more rules specifies one or more constraints of resources associated with the at least one application framework. A security profile is dynamically compiled for the program based on the set of one or more rules associated with the at least one application framework. The compiled security profile is used to restrict the program from accessing at least one resource of the at least one application frameworks during the execution of the program.07-19-2012
20120084785RESOURCE RESERVATION - Technologies are generally described for systems and methods for requesting a reservation between a first and a second processor. In some examples, the method includes receiving a reservation request at the second processor from the first processor. The reservation request may include an identification of a resource in communication with the second processor, a time range, first key information relating to the first processor, and a first signature of the first processor based on the first key information. In some examples, the method includes verifying, by the second processor, the reservation request based on the first key information and the first signature. In some examples, the method includes determining, by the second processor, whether to accept the reservation request.04-05-2012
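The abstract above does not specify how the signature over the reservation request is formed. The sketch below assumes an HMAC over the resource identifier and time range as one plausible instantiation; the function names are illustrative.

```python
# Assumed instantiation: the "first signature ... based on the first key
# information" is modeled as an HMAC-SHA256 over the request fields.
import hashlib
import hmac

def sign_request(key: bytes, resource: str, start: int, end: int) -> str:
    """First processor signs its reservation request."""
    msg = f"{resource}|{start}|{end}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(key: bytes, resource: str, start: int, end: int,
                   signature: str) -> bool:
    """Second processor verifies the request before deciding to accept."""
    expected = sign_request(key, resource, start, end)
    return hmac.compare_digest(expected, signature)
```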
20090320036File System Object Node Management - Embodiments of the invention provide a method for assigning a home node to a file system object and using information associated with file system objects to improve locality of reference during thread execution. Doing so may improve application performance on a computer system configured using a non-uniform memory access (NUMA) architecture. Thus, embodiments of the invention allow a computer system to create a nodal affinity between a given file system object and a given processing node.12-24-2009
20120260258METHOD AND SYSTEM FOR DYNAMICALLY CONTROLLING POWER TO MULTIPLE CORES IN A MULTICORE PROCESSOR OF A PORTABLE COMPUTING DEVICE - A method and system for dynamically determining the degree of workload parallelism and to automatically adjust the number of cores (and/or processors) supporting a workload in a portable computing device are described. The method and system includes a parallelism monitor module that monitors the activity of an operating system scheduler and one or more work queues of a multicore processor and/or a plurality of central processing units (“CPUs”). The parallelism monitor may calculate a percentage of parallel work based on a current mode of operation of the multicore processor or a plurality of processors. This percentage of parallel work is then passed to a multiprocessor decision algorithm module. The multiprocessor decision algorithm module determines if the current mode of operation for the multicore processor (or plurality of processors) should be changed based on the calculated percentage of parallel work.10-11-2012
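The decision step in the abstract above, mapping a measured percentage of parallel work to a core count, could look like the toy rule below. The threshold and the linear mapping are assumptions; the abstract does not disclose the actual decision algorithm.

```python
def cores_to_enable(parallel_fraction, max_cores, threshold_per_core=0.25):
    """Toy multiprocessor decision rule (assumed): enable one extra core
    for each threshold_per_core of measured parallel work, clamped to
    the range [1, max_cores]."""
    extra = int(parallel_fraction / threshold_per_core)
    return max(1, min(max_cores, 1 + extra))
```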
20090019448Cross Process Memory Management - A method for efficiently managing memory resources in a computer system having a graphics processing unit that runs several processes simultaneously on the same computer system includes using threads to communicate that additional memory is needed. If the request indicates that termination will occur then the other processes will reduce their memory usage to a minimum to avoid termination but if the request indicates that the process will not run optimally then the other processes will reduce their memory usage to 1/N where N is the count of the total number of running processes. The apparatus includes a computer system using a graphics processing unit and processes with threads that can communicate directly with other threads and with a shared memory which is part of the operating system memory.01-15-2009
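The 1/N reduction rule in the abstract above can be restated directly. Treating the "minimum" usage on a termination-level request as zero is a simplifying assumption of this sketch.

```python
def target_memory_fraction(request_signals_termination, process_count):
    """Per the described policy: if the memory request indicates another
    process faces termination, drop to minimum usage (modeled as 0 here);
    otherwise reduce usage to 1/N, where N is the number of running
    processes sharing the GPU."""
    if request_signals_termination:
        return 0.0
    return 1.0 / process_count
```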
20090019445Deterministic task scheduling in a computing device - Method and system for scheduling tasks in a computing device in a manner that ensures substantially seamless processing of an active job while preventing starvation of background tasks. In one aspect, a method for scheduling tasks in a computing device comprises the steps of statically allocating processor time (P) to a background task class (S) and dynamically allocating processor time (p) to background tasks within the background task class (S) based at least in part on a current count (n) of the background tasks. The background task processor time (p) may equal the background task class processor time (P) divided by the current count (n). The method may further comprise, in each of successive processing periods, assigning a processor to each of the background tasks for their respective background task processor times (P…).01-15-2009
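The allocation rule p = P / n from the abstract above is a one-line computation; the zero-task guard below is an addition for the sketch and is not discussed in the abstract.

```python
def background_slice(class_time, task_count):
    """p = P / n: the statically allocated class time P is divided among
    the n background tasks currently present; n may change from one
    processing period to the next."""
    if task_count == 0:
        return 0.0   # no background tasks this period (guard added here)
    return class_time / task_count
```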
20110126208Processing Architecture Having Passive Threads and Active Semaphores - Multiple parallel passive threads of instructions coordinate access to shared resources using “active” semaphores. The semaphores are referred to as active because the semaphores send messages to execution and/or control circuitry to cause the state of a thread to change. A thread can be placed in an inactive state by a thread scheduler in response to an unresolved dependency, which can be indicated by a semaphore. A thread state variable corresponding to the dependency is used to indicate that the thread is in inactive mode. When the dependency is resolved a message is passed to control circuitry causing the dependency variable to be cleared. In response to the cleared dependency variable the thread is placed in an active state. Execution can proceed on the threads in the active state.05-26-2011
20130174175RESOURCE ALLOCATION FOR A PLURALITY OF RESOURCES FOR A DUAL ACTIVITY SYSTEM - Exemplary method, system, and computer program product embodiments for resource allocation of a plurality of resources for a dual activity system by a processor device, are provided. In one embodiment, by way of example only, each of the activities may be started at a static quota. The resource boundary may be increased for a resource request for at least one of the dual activities until a resource request for an alternative one of the at least one of the dual activities is rejected. In response to the rejection of the resource request for the alternative one of the at least one of the dual activities, a resource boundary for the at least one of the dual activities may be reduced, and a wait after decrease mode may be commenced until a current resource usage is one of less than and equal to the reduced resource boundary.07-04-2013
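The grow-until-rejection behavior described in the dual-activity entry above resembles additive increase/decrease control; the sketch below is one interpretation with hypothetical names and unit-step boundary adjustments, not the patented implementation:

```python
class ActivityBoundary:
    """One activity's resource boundary: starts at a static quota,
    grows while requests succeed, shrinks when the peer activity's
    request is rejected, then waits until usage falls back under
    the reduced boundary ("wait after decrease" mode)."""

    def __init__(self, static_quota, step=1):
        self.boundary = static_quota
        self.usage = 0
        self.step = step
        self.wait_after_decrease = False

    def request(self, amount):
        """Grant the request if it fits under the current boundary."""
        if self.wait_after_decrease:
            if self.usage > self.boundary:
                return False  # still draining down to the reduced boundary
            self.wait_after_decrease = False
        if self.usage + amount > self.boundary:
            return False
        self.usage += amount
        self.boundary += self.step  # grow until the peer is rejected
        return True

    def peer_rejected(self):
        """Peer activity was rejected: shrink and enter wait mode."""
        self.boundary -= self.step
        self.wait_after_decrease = True
```

Calling `peer_rejected()` models the feedback signal from the other activity of the dual-activity system.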
20130174174HIERARCHICAL SCHEDULING APPARATUS AND METHOD FOR CLOUD COMPUTING - A hierarchical scheduling apparatus for a cloud environment includes a schedule configuring unit configured to classify a plurality of tasks into one or more local tasks and one or more remote tasks; a schedule delegating unit configured to transmit, to another resource, a list of the remote tasks and a list of available resources to delegate scheduling authority for the remote tasks to the other resource; and a scheduling unit configured to schedule the local tasks.07-04-2013
20120266178System Providing Resources Based on Licensing Contract with User by Correcting the Error Between Estimated Execution Time from the History of Job Execution - A network system includes an application service provider (ASP) which is connected to the Internet and executes an application, and a CPU resource provider which is connected to the Internet and provides a processing service to a particular computational part (e.g., computation intensive part) of the application, wherein: when requesting a job from the CPU resource provider, the application service provider (ASP) sends information about estimated computation time of the job to the CPU resource provider via the Internet; and the CPU resource provider assigns the job by correcting this estimated computation time based on the estimated computation time sent from the application service provider (ASP).10-18-2012
20110004885FEEDFORWARD CONTROL METHOD, SERVICE PROVISION QUALITY CONTROL DEVICE, SYSTEM, PROGRAM, AND RECORDING MEDIUM THEREFOR - An object of the present invention is to provide a feedforward control method, a service provision quality control device, a system, a program and a recording medium which resolve problems of “lack of an evaluation function of a proper control plan”, “lack of a control-oriented evaluation function and a control-oriented execution function” and “lack of a correcting function and a verifying function of a control plan”.01-06-2011
20110004884Performance degradation based at least on computing application priority and in a relative manner that is known and predictable beforehand - A model is constructed to determine performance of each computing application based on allocation of resources (including at least one hardware resource) to the computing applications. How the allocation of the resources to the computing applications affects the performance is unknown beforehand. The resources are allocated to the computing applications based at least on the model. Where the resources are overloaded as allocated to the computing applications, performance degradation of each computing application is performed based at least on priorities of the computing applications relative to one another and on the model. Performance degradation reduces usage of the resources by the computing applications so that the resources are no longer overloaded. How the priorities of the computing applications affect the performance degradation in a relative manner to one another is known and predictable beforehand.01-06-2011
20110131584THE METHOD AND APPARATUS FOR THE RESOURCE SHARING BETWEEN USER DEVICES IN COMPUTER NETWORK - To solve the problems in prior art, the present invention has provided a new scheme for resource sharing between user devices, which shall be easily used, implemented and extended. Also, the present invention aims to decrease the user input for resource sharing. In particular, the present invention is based on the IM protocol: the sharing initiator and its cooperators exchange the key information of the to-be-consigned tasks via IM messages. Then, preferably, the initiator chooses one or more cooperators for each of the to-be-consigned tasks after further communication with the cooperators. For each task, the chosen cooperators will be referred to as its nominated cooperators. At last, each of the cooperators will handle the task(s) consigned to it, if any, and send the result back to the initiator.06-02-2011
20110131583MULTICORE PROCESSOR SYSTEM - A multicore processor system includes one or more clients carrying out parallel processing of tasks by means of processor cores and a server assisting the client to carry out the parallel processing via a communication network. Task information containing the minimum number of required cores indicating the number of processor cores required to carry out processes of the tasks and core information containing operation setup information indicating operation setup content of the processor cores are stored in the server. The server determines whether the task is allocated to the plurality of processor cores or not in accordance with the task information and the core information. The server updates the core information in accordance with a determination result to transmit the updated core information to the client. The client carries out the parallel processing by means of the processor cores in accordance with the received core information.06-02-2011
20110131582RESOURCE MANAGEMENT FINITE STATE MACHINE FOR HANDLING RESOURCE MANAGEMENT TASKS SEPARATE FROM A PROTOCOL FINITE STATE MACHINE - A method and logic circuit for a resource management finite state machine (RM FSM) managing resource(s) required by a protocol FSM. After receiving a resource request vector, the RM FSM determines not all of the required resource(s) are available. The protocol FSM transitions to a new state, generates an output vector, and loads the output vector into an output register. The RM FSM transitions to a state indicating that not all the resources are available and freezes an input register. In a subsequent cycle, the RM FSM freezes the output register and a current state register, and forces the output vector to be seen by the FSM environment as a null token. After determining that the required resource(s) are available, the RM FSM transitions to another state indicating that the resources are available, enables the output vector to be seen by the FSM environment, and unfreezes the protocol FSM.06-02-2011
20120266176Allocating Tasks to Machines in Computing Clusters - Allocating tasks to machines in computing clusters is described. In an embodiment a set of tasks associated with a job are received at a scheduler. In an embodiment an index can be computed for each combination of tasks and processors and stored in a lookup table. In an example the index may include an indication of the preference for the task to be processed on a particular processor, an indication of a waiting time for the task to be processed, and an indication of how other tasks being processed in the computing cluster may be penalized by assigning a task to a particular processor. In an embodiment tasks are assigned to a processor by accessing the lookup table, selecting a task for processing using the index and scheduling the selected task for allocation to a processor.10-18-2012
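One way to read the lookup-table entry above is as a precomputed score per (task, processor) pair; the weighting below is purely illustrative, since the abstract does not specify how the three indications are combined, and all names are hypothetical:

```python
def build_index_table(tasks, processors, preference, wait, penalty):
    """Precompute an index for every (task, processor) pair from the
    three indications in the abstract: preference for the processor,
    expected waiting time, and penalty to other running tasks.
    The additive combination here is an assumption."""
    return {
        (t, p): preference(t, p) - wait(t, p) - penalty(t, p)
        for t in tasks
        for p in processors
    }

def assign(task, processors, table):
    """Pick the processor whose precomputed index is best for the task."""
    return max(processors, key=lambda p: table[(task, p)])
```

With a strong preference but a long wait on one processor, the lower-wait processor can win the assignment despite the weaker preference.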
20120266177MANAGEMENT SYSTEM, COMPUTER SYSTEM INCLUDING THE MANAGEMENT SYSTEM, AND MANAGEMENT METHOD - The present invention provides a technique capable of improving use efficiency of storage devices. In this regard, a computer system of the present invention includes: a plurality of storage subsystems; an information processing apparatus coupled to the storage subsystems and including a virtual layer for virtually providing information from the storage subsystems; and a management system that manages the plurality of storage subsystems and the information processing apparatus. The management system manages, on a memory, configuration information of logical volumes allocated to virtual instances managed on a virtual layer of the information processing apparatus and operation information of hardware resources included in the storage subsystems. The management system evaluates use efficiency of the virtual instances based on the configuration information of the logical volumes and the operation information of the hardware resources and outputs an evaluation result.10-18-2012
20120324470SYSTEM AND METHOD FOR DYNAMIC RESCHEDULING OF MULTIPLE VARYING RESOURCES WITH USER SOCIAL MAPPING - A system and method for scheduling resources includes a memory storage device having a resource data structure stored therein which is configured to store a collection of available resources, time slots for employing the resources, dependencies between the available resources and social map information. A processing system is configured to set up a communication channel between users, between a resource owner and a user or between resource owners to schedule users in the time slots for the available resources. The processing system employs social mapping information of the users or owners to assist in filtering the users and owners and initiating negotiations for the available resources.12-20-2012
20120324469RESOURCE ALLOCATION APPARATUS, RESOURCE ALLOCATION METHOD, AND COMPUTER READABLE MEDIUM - A parameter determination unit 12-20-2012
20120324468PRODUCT-SPECIFIC SYSTEM RESOURCE ALLOCATION WITHIN A SINGLE OPERATING SYSTEM INSTANCE - Resource constraints for a group of individual application products to be configured for shared resource usage of at least one shared resource within a single operating system instance are analyzed by a resource allocation module. An individual resource allocation for each of the group of individual application products is determined based upon the analyzed resource constraints for the group of individual application products. The determined individual resource allocation for each of the group of individual application products is implemented within the single operating system instance using local inter-product message communication bindings by the single operating system instance.12-20-2012
20120324467COMPUTING JOB MANAGEMENT BASED ON PRIORITY AND QUOTA - In one embodiment, the invention provides a method of managing a computing job based on a job priority and a submitter quota, the method including determining whether a declared priority of a computing job exceeds a predetermined declared priority quota of a submitter; in the case that the declared priority of the computing job exceeds the predetermined declared priority of the submitter, substituting a reduced priority for the declared priority of the computing job; determining whether the reduced priority of the computing job exceeds a predetermined reduced priority quota for the submitter; and in the case that the reduced priority of the computing job does not exceed the predetermined reduced priority quota of the submitter, assigning the computing job to at least one computer resource at the reduced priority.12-20-2012
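The two-level quota check in the priority-and-quota entry above can be sketched as a simple admission function; the names and the None-for-reject convention are assumptions, as is honoring the declared priority when it is within quota:

```python
def admit_job(declared, declared_quota, reduced, reduced_quota):
    """Return the priority at which the job is admitted, or None if
    even the reduced priority exceeds the submitter's reduced quota.
    Higher numbers mean higher priority here (an assumption)."""
    if declared <= declared_quota:
        return declared  # declared priority is within the submitter's quota
    if reduced <= reduced_quota:
        return reduced   # substitute the reduced priority
    return None          # job cannot be assigned at either priority
```

A submitter with a declared-priority quota of 5 and a reduced-priority quota of 4 would have a priority-8 job admitted only at the reduced priority.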
20120324466Scheduling Execution Requests to Allow Partial Results - The subject disclosure is directed towards scheduling requests using quality values that are defined for partial responses to the requests. For each request in a queue, an associated processing time is determined using a system load and/or the quality values. The associated processing time is less than or equal to a service demand, which represents an amount of time to produce a complete response.12-20-2012
20120324465WORK ITEM PROCESSING IN DISTRIBUTED APPLICATIONS - A system for organizing messages related to tasks in a distributed application is disclosed. The system includes a work-list creator to create a work list of the top-level work items to be accomplished in performing a task. Work-item processors are distributed in the system. The work-item processors process the top-level work item included in a task and also append additional work items to the work list. A work-list scheduler invokes the work-item processors so local work-item processors are invoked prior to remote work-item processors.12-20-2012
20120324464PRODUCT-SPECIFIC SYSTEM RESOURCE ALLOCATION WITHIN A SINGLE OPERATING SYSTEM INSTANCE - Resource constraints for a group of individual application products to be configured for shared resource usage of at least one shared resource within a single operating system instance are analyzed by a resource allocation module. An individual resource allocation for each of the group of individual application products is determined based upon the analyzed resource constraints for the group of individual application products. The determined individual resource allocation for each of the group of individual application products is implemented within the single operating system instance using local inter-product message communication bindings by the single operating system instance.12-20-2012
20110239223COMPUTATION RESOURCE CONTROL APPARATUS, COMPUTATION RESOURCE CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM - A computation resource control apparatus includes an activation unit, a first queue managing unit, an allocating unit and a second queue managing unit. The activation unit activates a computation resource being in a stop state in accordance with a computation request. The first queue managing unit adds the computation resource which is being activated to a first queue. The allocating unit allocates the computation resource, which is output from the first queue, to the computation request to execute a computation process corresponding to the computation request. The second queue managing unit adds the computation resource which has completed the computation process to a second queue and places the computation resource, which is output from the second queue, in the stop state.09-29-2011
20110239222SYSTEM AND METHOD OF DETERMINING APPLICABLE INSTALLATION INFORMATION OF APPARATUS - A computer and method obtains user input from an input device to determine applicable installation information of an apparatus according to resource consumption of the apparatus. The computer and method are capable of obtaining resource consumption information of an apparatus according to the installation material from an input device and operable to perform transformation processing to obtain installation data according to resource consumption information of the apparatus. Differences between the installation data and standard specifications are calculated and the specified standard specification corresponding to a difference which is the smallest number in the differences is found. The specified standard specification is outputted.09-29-2011
20120131592PARALLEL COMPUTING METHOD FOR PARTICLE BASED SIMULATION AND APPARATUS THEREOF - Disclosed are a parallel computing method for particle based simulation that may decrease a calculation delay due to data communication by simultaneously performing the data communication and a simulation calculation and increasing parallelism of a task, and an apparatus thereof. The parallel computing method for particle based simulation according to an exemplary embodiment of the present invention may include decomposing the whole calculation domain of a manager node into a plurality of sub-domains based on a grid macro-cell based orthogonal recursive bisection (ORB) method; allocating the decomposed sub-domains to worker nodes; and performing load balancing with respect to the worker nodes.05-24-2012
20120278812TASK ASSIGNMENT IN CLOUD COMPUTING ENVIRONMENT - Technologies are generally described for a system and method for assigning a task in a cloud. In some examples, the method may include receiving a task request relating to a task and determining service related data relating to the task based on the task request. In some examples, the method may include receiving resource data relating to a first and second resource in the cloud. In some examples, the method may include determining a first correlation value between the task and the first resource and a second correlation value between the task and the second resource based on the service related data and the resource data. In some examples, the method may include assigning the task to the first resource based on the first and second correlation value.11-01-2012
20120278811STREAM PROCESSING ON HETEROGENEOUS HARDWARE DEVICES - A stream processing execution engine evaluates development-time performance characteristic estimates in combination with run-time parameters to schedule execution of stream processing software components in a stack of a stream processing application that satisfy a defined performance criterion in a heterogeneous hardware device. A stream processing application includes a stack of interdependent stream processing software components. A stream processing execution engine evaluates one or more performance characteristics of multiple computational resources in the heterogeneous hardware device. Each performance characteristic is associated with performance of a computational resource in executing a computational-resource-dependent instance of a stream processing software component. The stream processing execution engine schedules within the run-time environment a computational resource on which to execute a computational-resource-dependent instance of one of the stream processing software components. The computational-resource-dependent instance is targeted for execution on the computational resource that satisfies a performance policy attributed to the stream processing software component.11-01-2012
20120331478METHOD AND DEVICE FOR PROCESSING INTER-SUBFRAME SERVICE LOAD BALANCING AND PROCESSING INTER-CELL INTERFERENCE - The present application provides a method and device for processing inter-subframe service load balancing and processing inter-cell interference, which includes: when processing the inter-subframe service load balancing, determining a service load of a link in a time period; determining a resource utilization ratio threshold according to the service load; and transmitting service data in each subframe according to the utilization ratio threshold. The inter-subframe service load balancing is processed when the inter-cell interference is processed, and in combination with various inter-cell interference coordination technologies, interference mitigation in one of a frequency domain, power and a space domain or the combination thereof is processed by the interference coordination technology. The present application can alleviate the problem that interference mitigation is ineffective when the load information cannot adapt to the dynamic change of the inter-subframe service load in a time division duplex system, can further mitigate the inter-cell interference in a long term evolution system, and can improve the overall throughput of the system and the service quality of the subscribers in the system.12-27-2012
20120331477SYSTEM AND METHOD FOR DYNAMICALLY ALLOCATING HIGH-QUALITY AND LOW-QUALITY FACILITY ASSETS AT THE DATACENTER LEVEL - A system and method are disclosed for dynamically allocating high-quality and low-quality facility assets at the datacenter level. The system and method provide an actuator with information on priorities of information technology (IT) workloads. The actuator ranks the IT workloads according to their priorities, monitors an amount of resources the IT workloads demand, and tracks total capacities of facility assets in the datacenter. The facility assets include high-quality facility assets and low-quality facility assets. According to the direction of the actuator, a distribution mechanism dynamically switches lower priority IT workloads from the high-quality facility assets to the low-quality facility assets when the high-quality facility assets are overburdened.12-27-2012
20120331475DYNAMICALLY ALLOCATED THREAD-LOCAL STORAGE - Dynamically allocated thread storage in a computing device is disclosed. The dynamically allocated thread storage is configured to work with a process including two or more threads. Each thread includes a statically allocated thread-local slot configured to store a table. Each table is configured to include a table slot corresponding with a dynamically allocated thread-local value. A dynamically allocated thread-local instance corresponds with the table slot.12-27-2012
20120331476METHOD AND SYSTEM FOR REACTIVE SCHEDULING - A method and system of scheduling demands on a system having a plurality of resources are provided. The method includes the steps of, on receipt of a new demand for resources: determining the total resources required to complete said demand and a deadline for the completion of that demand; determining a plurality of alternative resource allocations which will allow completion of the demand before the deadline; for each of said alternative resource allocations, determining whether, based on allocations of resources to existing demands, said alternative resource allocation will result in a utilization of resources which is closer to an optimum utilization of said resources; and selecting, based on said determination, one of said alternative resource allocations to complete said demand so as to optimise utilisation of resources of the system.12-27-2012
20120089986PROCESS POOL OF EMPTY APPLICATION HOSTS TO IMPROVE USER PERCEIVED LAUNCH TIME OF APPLICATIONS - Various embodiments enable a device to create a pool of at least one empty application. An empty application can be configured to contain resources that are common across one or more other applications and initialize the resources for the one or more other applications effective to reduce startup time of the other applications. In one or more embodiments, an empty application can further be populated with the one or more other applications effective to cause the one or more other applications to execute. Alternately or additionally, a device can be monitored for an idle state, and, upon determining the device is in the idle state, at least one empty application can be created.04-12-2012
20110276981RUNTIME-RESOURCE MANAGEMENT - A runtime-resource management method, system, and product for managing resources available to application components in a portable device. The method, system, and product provide for loading one or more new application components into a portable device only if maximum runtime resources required by the one or more new application components are available in the portable device assuming loaded application components within the device are using the maximum runtime resources reserved by the loaded application components, reserving maximum runtime resources required by application components when application components are loaded into the portable device, and running loaded application components using only the runtime resources reserved for the loaded application components.11-10-2011
20110276979Non-Real Time Thread Scheduling - A hard real time (HRT) thread scheduler and a non-real time (NRT) thread scheduler for allocating processor resources among HRT threads and NRT threads are disclosed. The HRT thread scheduler communicates with a HRT thread table including a plurality of entries specifying a temporal order in which execution cycles are allocated to one or more HRT threads. If a HRT thread identified by the HRT thread table is unable to be scheduled during the current execution cycle, the NRT thread scheduler accesses an NRT thread table which includes a plurality of entries specifying a temporal order for allocating execution cycles to one or more NRT threads. In an execution cycle where a HRT thread is not scheduled, the NRT thread scheduler identifies an NRT thread from the NRT thread table and an instruction from the identified NRT thread is executed during the execution cycle.11-10-2011
20110276978System and Method for Dynamic CPU Reservation - A computer readable storage medium storing a set of instructions executable by a processor. The set of instructions is operable to receive an instruction to reserve a processor of a system including a plurality of processors, receive an instruction to perform a task, determine whether the task has affinity for the reserved processor, execute the task using the reserved processor if the task has affinity for the reserved processor, execute the task using one of the processors other than the reserved processor if the task does not have affinity for the reserved processor.11-10-2011
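A minimal sketch of the dispatch rule in the CPU-reservation entry above, assuming affinity is given as a set of processor ids; the round-robin placement of non-affine tasks is an assumption, not from the abstract:

```python
from itertools import cycle

class ReservedCpuDispatcher:
    """Route tasks with affinity for the reserved processor to it;
    all other tasks run on the remaining processors."""

    def __init__(self, reserved, others):
        self.reserved = reserved
        self._others = cycle(others)  # round-robin over non-reserved CPUs

    def dispatch(self, task_affinity):
        if self.reserved in task_affinity:
            return self.reserved      # task has affinity for the reserved CPU
        return next(self._others)     # otherwise use any other processor
```

Reserving processor 0 this way keeps it free for affine work while other tasks rotate over the remaining processors.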
20110276977DISTRIBUTED WORKFLOW EXECUTION - A workflow is designated for execution across a plurality of autonomous computational entities automatically. Among other things, the cost of computation is balanced with the cost of communication among computational entities to reduce total execution time of a workflow. In other words, a balance is struck between grouping tasks for execution on a single computational entity and segmenting tasks for execution across multiple computational entities.11-10-2011
20100229178STREAM DATA PROCESSING METHOD, STREAM DATA PROCESSING PROGRAM AND STREAM DATA PROCESSING APPARATUS - Once data stagnation occurs in a query group which groups queries, a scheduler of a server apparatus calculates an estimated load value of each query forming the query group based on at least one of input flow rate information and latency information of the query. The scheduler divides the queries of the query group into a plurality of query groups so that the sum of estimated load values of queries belonging to one query group becomes substantially equal to the sum of estimated load values of queries belonging to another query group. The divided query groups are reallocated to different processors respectively. Throughput in query processing of stream data in a stream data processing system can be improved.09-09-2010
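Splitting a query group so the per-group sums of estimated load are roughly equal is a balanced-partition problem; a common greedy heuristic (an assumption here, since the abstract does not name one) sorts by load and always fills the lightest group:

```python
def split_queries(loads, groups=2):
    """Greedy split: assign each query (heaviest estimated load first)
    to the group with the smallest running load sum, so the groups'
    totals end up substantially equal."""
    bins = [[] for _ in range(groups)]
    totals = [0.0] * groups
    for load in sorted(loads, reverse=True):
        i = totals.index(min(totals))  # lightest group so far
        bins[i].append(load)
        totals[i] += load
    return bins
```

For estimated loads [4, 3, 2, 1] this yields two groups with load sums 5 and 5, which can then be reallocated to different processors.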
20100229175Moving Resources In a Computing Environment Having Multiple Logically-Partitioned Computer Systems - As needs of a computer system grow, further logically-partitioned computer systems may be added to allow for more partitions to be created. When new partitions are added, or when an entire computing environment analysis is commenced, it may be discovered that better system efficiency may be had if the resources or computational work in a first partition in a first computer is moved to a second partition in the first computer. It may also be determined that better system efficiency may be had if the resources or computational work in the first partition in the first computer is moved to a third partition in a second computer.09-09-2010
20110321055TRANSPORTATION ASSET MANAGER - Systems and methods of visualizing assets are disclosed that include registering assets into a system, creating a correlation between the location of the assets and a physical representation of the operational area, determining the status and class of the assets, selecting at least one asset, and exerting control over the at least one asset.12-29-2011
20120102499OPTIMIZING THE PERFORMANCE OF HYBRID CPU SYSTEMS BASED UPON THE THREAD TYPE OF APPLICATIONS TO BE RUN ON THE CPUs - A hybrid CPU system wherein the plurality of processors forming the hybrid system are initially undifferentiated by type or class. Responsive to the sampling of the threads of a received and loaded computer application to be executed, the function of at least one of the processors is changed so that the threads of the sampled application may be most effectively processed/run on the hybrid system.04-26-2012
20120102498RESOURCE MANAGEMENT USING ENVIRONMENTS - Apparatus, systems, and methods may operate to receive time-based reservation requests for predefined resource environments comprising resource types that include hardware, software, and data, among others. Additional activities may include detecting a conflict between at least one of the resource types in a first one of the predefined resource environments and at least one of the resource types in a second one of the predefined resource environments, and resolving the conflict in favor of the first one of the predefined resource environments by reserving additional resource elements in a cloud computing architecture and/or reserving a less capable version of the second one of the predefined resource environments. Additional apparatus, systems, and methods are disclosed.04-26-2012
20120102500NUMA AWARE SYSTEM TASK MANAGEMENT - Task management in a Non-Uniform Memory Access (NUMA) architecture having multiple processor cores is aware of the NUMA topology in task management. As a result memory access penalties are reduced. Each processor is assigned to a zone allocated to a memory controller. The zone assignment is based on a cost function. In a default mode a thread of execution attempts to perform work in a queue of the same zone as the processor to minimize memory access penalties. Additional work stealing rules may be invoked if there is no work for a thread to perform from its default zone queue.04-26-2012
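The default-zone-first rule with work stealing in the NUMA entry above can be sketched with one queue per zone; names are hypothetical, and the patent's cost function and additional stealing rules are not reproduced:

```python
from collections import deque

class ZoneScheduler:
    """NUMA-aware work queues: a thread drains its own zone's queue
    first (cheap local memory access) and steals from other zones
    only when its default queue is empty (pays the NUMA penalty)."""

    def __init__(self, zones):
        self.queues = {z: deque() for z in zones}

    def push(self, zone, work):
        self.queues[zone].append(work)

    def next_work(self, zone):
        if self.queues[zone]:
            return self.queues[zone].popleft()  # default-zone work first
        for other, q in self.queues.items():    # work-stealing fallback
            if other != zone and q:
                return q.popleft()
        return None                             # nothing to do anywhere
```

A worker assigned to zone 0 thus only touches zone 1's queue once its own is exhausted, which is the memory-access-penalty trade-off the abstract describes.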
20120291041ASSIGNING RESOURCES FOR TASKS - A processing subsystem has plural processing stages, where output of one of the plural processing stages is provided to another of the processing stages. Resources are dynamically assigned to the plural processing stages.11-15-2012
20120291040AUTOMATIC LOAD BALANCING FOR HETEROGENEOUS CORES - A system and method for efficient automatic scheduling of the execution of work units between multiple heterogeneous processor cores. A processing node includes a first processor core with a general-purpose micro-architecture and a second processor core with a single instruction multiple data micro-architecture. A computer program comprises one or more compute kernels, or function calls. A compiler computes pre-runtime information of the given function call. A runtime scheduler produces one or more work units by matching each of the one or more kernels with an associated record of data. The scheduler assigns work units either to the first or to the second processor core based at least in part on the computed pre-runtime information. In addition, the scheduler is able to change an original assignment for a waiting work unit based on dynamic runtime behavior of other work units corresponding to a same kernel as the waiting work unit.11-15-2012
20100199285VIRTUAL MACHINE UTILITY COMPUTING METHOD AND SYSTEM - An analytics engine receives real-time statistics from a set of virtual machines supporting a line of business (LOB) application. The statistics relate to computing resource utilization and are used by the analytics engine to generate a prediction of demand for the LOB application in order to dynamically control the provisioning of virtual machines to support the LOB application.08-05-2010
20130014124REDUCING CROSS QUEUE SYNCHRONIZATION ON SYSTEMS WITH LOW MEMORY LATENCY ACROSS DISTRIBUTED PROCESSING NODES - A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ.01-10-2013
20130014123DETERMINATION OF RUNNING STATUS OF LOGICAL PROCESSOR - An embodiment provides for operating an information processing system. An aspect of the invention includes allocating an execution interval to a first logical processor of a plurality of logical processors of the information processing system. The execution interval is allocated for use by the first logical processor in executing instructions on a physical processor of the information processing system. The first logical processor determines that a resource required for execution by the first logical processor is locked by another one of the other logical processors. An instruction is issued by the first logical processor to determine whether a lock-holding logical processor is currently running. The lock-holding logical processor waits to release the lock if it is currently running. A command is issued by the first logical processor to a super-privileged process for relinquishing the allocated execution interval by the first logical processor if the locking holding processor is not running.01-10-2013
20130014122METHOD AND SYSTEM FOR COMMUNICATING BETWEEN ISOLATION ENVIRONMENTS - A method and system for aggregating installation scopes within an isolation environment, where the method includes first defining an isolation environment for encompassing an aggregation of installation scopes. Associations are created between a first application and a first installation scope. When the first application requires the presence of a second application within the isolation environment for proper execution, an image of the required second application is mounted onto a second installation scope and an association between the second application and the second installation scope is created. Another association is created between the first installation scope and the second installation scope, and this third association is created within a third installation scope. Each of the first, second, and third installation scopes is stored and the first application is launched into the defined isolation environment.01-10-2013
20100131958Method, A Mechanism and a Computer Program Product for Executing Several Tasks in a Multithreaded Processor - The invention relates to a method for executing several tasks in a multithreaded (MT) processor, each task having, for every hardware shared resource from a predetermined set of hardware shared resources in the MT processor, one associated artificial time delay that is introduced when said task accesses said hardware shared resource, the method comprising step (a) of establishing, for every hardware shared resource and each task to be artificially delayed, the artificial delay to be applied to each access of said task to said hardware shared resource; and step (b) of performing the following steps (b) …05-27-2010
20120151493RELAY APPARATUS AND RELAY MANAGEMENT APPARATUS - A relay apparatus executes a reallocation process so as to transfer data received from an information processing apparatus allocated to the relay apparatus to a destination apparatus. The reallocation process includes the following operations. The relay apparatus determines reallocatability of the information processing apparatus on the basis of a status of receiving transfer data from the information processing apparatus. The reallocatability represents whether the information processing apparatus is reallocatable to another apparatus. The relay apparatus stores reallocatability information indicating the determined reallocatability in a storage unit. The relay apparatus determines whether to reallocate the information processing apparatus on the basis of the reallocatability information stored in the storage unit. The relay apparatus reallocates the information processing apparatus determined to be reallocated.06-14-2012
20120151492MANAGEMENT OF COPY SERVICES RELATIONSHIPS VIA POLICIES SPECIFIED ON RESOURCE GROUPS - Exemplary method, system, and computer program embodiments for prescribing copy services relationships for storage resources organized into a plurality of resource groups in a computing storage environment are provided. In one embodiment, at least one additional resource group attribute is defined to specify at least one policy prescribing a copy services relationship between two of the storage resources. Pursuant to a request to establish the copy services relationship between the two storage resources, each of the two storage resources exchange resource group labels corresponding to which of the plurality of resource groups the two storage resources are assigned, and each of the two storage resources validates the requested copy services relationship and the resource group label of an opposing one of the two storage resources against the individual ones of the at least one additional resource group attribute in the resource group object to determine if the copy services relationship may proceed.06-14-2012
20130019249System and Method For Managing Resources of A Portable Computing Device - A method and system for managing resources of a portable computing device is disclosed. The method includes receiving node structure data for forming a node, in which the node structure data includes a unique name assigned to each resource of the node. A node has at least one resource and it may have multiple resources. Each resource may be a hardware or software element. The system includes a framework manager which handles the communications between existing nodes within a node architecture. The framework manager also logs activity of each resource by using its unique name. The framework manager may send this logged activity to an output device, such as a printer or a display screen. The method and system may help reduce or eliminate a need for customized APIs when a new hardware or software element (or both) is added to a portable computing device.01-17-2013
20130024869Picture loading method and terminal - The disclosure provides a picture loading method and a terminal. The method includes determining a number of pictures that can be loaded according to an available memory space, acquiring, from a plurality of pictures, the determined number of pictures beginning at a starting position, and assigning resources for the acquired pictures and preloading the acquired pictures using the resources. The disclosure also provides a picture loading terminal.01-24-2013
20130024868APPARATUS AND METHOD FOR ALLOCATING A TASK - A task allocating apparatus capable of improving task processing performance is provided. The task allocating apparatus measures a core usage of a plurality of tasks that are run in multiple cores, according to predetermined periods, estimates a core usage of each task for a following period based on the measured core usages, and allocates one or more tasks to the multiple cores based on the estimated core usage.01-24-2013
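The measure-then-estimate-then-allocate cycle in this abstract can be sketched with an exponentially weighted usage estimate and a greedy least-loaded placement; both the smoothing constant and the greedy policy are illustrative choices, not taken from the filing.

```python
def estimate_next_usage(history, alpha=0.5):
    """Exponentially weighted estimate of a task's core usage for the
    following period, from per-period measurements."""
    est = history[0]
    for sample in history[1:]:
        est = alpha * sample + (1 - alpha) * est
    return est

def allocate(tasks, num_cores):
    """Greedy allocation: place each task (largest estimate first) on
    the currently least-loaded core."""
    loads = [0.0] * num_cores
    placement = {}
    for name, est in sorted(tasks.items(), key=lambda kv: -kv[1]):
        core = loads.index(min(loads))
        placement[name] = core
        loads[core] += est
    return placement
```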
20130024867Resource allocation using a library with entitlement - An entitlement vector may be used when selecting a thread for execution in a multi-threading environment in terms of aspects such as priority. An embodiment or embodiments of an information handling apparatus can comprise a library comprising a plurality of functions and components operable to handle a plurality of objects. The information handling apparatus can further comprise an entitlement vector operable to assign entitlement to at least one of a plurality of resources to selected ones of the plurality of functions and components.01-24-2013
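One way to read the entitlement vector is as a per-thread resource budget compared against consumption when picking the next thread to run. A hypothetical sketch (the gap-based tie-breaking rule is an assumption):

```python
def select_thread(entitlement, usage):
    """Pick the runnable thread with the largest entitlement-minus-usage
    gap, so under-served threads win contention for the resource."""
    return max(entitlement, key=lambda t: entitlement[t] - usage.get(t, 0.0))
```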
20130024866Topology Mapping In A Distributed Processing System - Topology mapping in a distributed processing system, the distributed processing system including a plurality of compute nodes, each compute node having a plurality of tasks, each task assigned a unique rank, including: assigning each task to a geometry defining the resources available to the task; selecting, from a list of possible data communications algorithms, one or more algorithms configured for the assigned geometry; and identifying, by each task to all other tasks, the selected data communications algorithms of each task in a single collective operation.01-24-2013
20130024870MULTICORE SYSTEM AND ACTIVATING METHOD - A multicore system includes multiple processor cores; a scheduler in each of the processor cores and allocating a process to the processor cores when having a master authority that is an authority to assign processes; and a master controller performing control to repeat until a process to be executed no longer exists, a cycle in which the schedulers transfer the master authority to another processor core after receiving the master authority and before assigning a process to the processor cores, discards the master authority after assigning the process to the processor cores, and enters a state of waiting to receive the master authority.01-24-2013
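The master-authority cycle — receive the authority, transfer it onward before assigning, assign one process, then discard — behaves like a rotating token. The sketch below is an illustrative reading of the abstract, not the patented mechanism itself:

```python
from collections import deque

def run_master_cycle(num_cores, processes):
    """Simulate the cycle: the current holder designates the next
    master (transfer before assigning), assigns one process, then
    discards its authority; repeat until no process remains."""
    pending = deque(processes)
    log = []              # (master_core, process) in assignment order
    master = 0
    while pending:
        next_master = (master + 1) % num_cores   # transfer first
        log.append((master, pending.popleft()))  # assign, then discard
        master = next_master
    return log
```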
20090089791RESOURCE ALLOCATION UNIT QUEUE - Provided is a system, deployment and program for resource allocation unit queuing in which an allocation unit associated with a task is classified. An allocation unit freed as the task ends is queued for use by another task in a queue at a selected location within the queue in accordance with the classification of said allocation unit. In one embodiment, an allocation unit is queued at a first end of the queue if classified in a first class and is queued at a second end of the queue if classified in said second class. Other embodiments are described and claimed.04-02-2009
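The front-or-back queuing of freed allocation units maps naturally onto a double-ended queue; in the sketch below the class names "hot" and "cold" are placeholders for the patent's first and second classes:

```python
from collections import deque

def requeue(queue: deque, unit, unit_class: str):
    """Return a freed allocation unit to the queue at a position chosen
    by its classification: first-class ('hot') units go to the front
    for quick reuse, everything else to the back."""
    if unit_class == "hot":
        queue.appendleft(unit)
    else:
        queue.append(unit)
```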
20080244610Method and Apparatus for Dynamic Device Allocation for Managing Escalation of On-Demand Business Processes - Resource allocation techniques are provided for use in managing escalation of on-demand business processes. For example, in one aspect of the invention, a technique for managing escalation of a business process comprises the following steps/operations. A request is obtained from a business process, the business process having one or more tasks associated therewith. The one or more tasks are mapped to one or more roles. One or more available resources are allocated for the one or more roles. At least one communication session is launched such that data associated with the business process may be transferred to the one or more allocated resources.10-02-2008
20080244608Multiprocessor system and access protection method conducted in multiprocessor system - In a conventional multiprocessor system, an access right with respect to a shared resource could not be changed in a flexible manner. The present invention provides a multiprocessor system having a first processor element (PE-A) and a second processor element (PE-B), the first processor element (PE-A) and the second processor element (PE-B) independently executing a program, in which the first processor element (PE-A) includes: a central processing unit (CPUa) for performing operation processing based upon the program; and a shared resource …10-02-2008
20080244607Economic allocation and management of resources via a virtual resource market - Allocating distributed computing resources comprises creating offers to provide the resources for use by application programs. Each offer specifies a performance characteristic and a value associated with a corresponding resource. Bids to obtain the resources for use by the application programs are created. Each bid specifies a service level required for operation of a corresponding application program and a value associated with operating the corresponding application program. Bids are matched to offers via a market exchange model by matching the service level requirement and value of each bid to the performance characteristic and value of one of the offers. Resources associated with each offer are allocated to the application program associated with a matching bid, and the application program's operations are migrated to the allocated resources. Resources are monitored to ensure compliance with the service level requirement of each bid, and non-complying resources are replaced via the market exchange model.10-02-2008
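The market-exchange step can be sketched as matching each bid to the cheapest offer that satisfies its service-level requirement without exceeding the bid's value; the dictionary keys below are illustrative, not taken from the filing:

```python
def match(bids, offers):
    """Match each bid to the cheapest compatible offer: the offer must
    meet the bid's service level and cost no more than the bid's value."""
    matches = {}
    free = sorted(offers, key=lambda o: o["value"])   # cheapest first
    for bid in bids:
        for offer in free:
            if offer["perf"] >= bid["service_level"] and offer["value"] <= bid["value"]:
                matches[bid["app"]] = offer["resource"]
                free.remove(offer)   # each offer is consumed once
                break
    return matches
```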
20080244602Method for task and resource management - A method is disclosed for managing one or more tasks or human resources. In one embodiment, the method receives one or more tasks. The method determines at least one task evaluation criteria value for each received one or more tasks. In addition, the method determines a task value associated with each received one or more tasks based on the determined at least one task evaluation criteria value.10-02-2008
20080244601Method and apparatus for allocating resources among backup tasks in a data backup system - Method and apparatus for allocating resources among backup tasks in a data backup system is described. One aspect of the invention relates to managing backup tasks in a computer network. An estimated resource utilization is established for each of the backup tasks based on a set of backup statistics. A resource reservation is allocated for each of the backup tasks based on the estimated resource utilization thereof. The resource reservation of each of the backup tasks is dynamically changed during performance thereof.10-02-2008
20080244600Method and system for modeling and analyzing computing resource requirements of software applications in a shared and distributed computing environment - An application manager for enabling multiple applications to share resources in a shared and distributed computing environment. The disclosed system provides for the specification, representation and automatic analysis of resource requirements of applications in a shared and distributed computing environment. The application manager is provided with service specifications for each application, which define the resource requirements necessary or preferred to run said application (or more precisely, its constituent application components). In addition, the resources may be required to have certain characteristics and constraints may be placed on the required resources. The application manager works in conjunction with a resource supply manager and requests the required resources be supplied for the application. If there are appropriate and sufficient available resources to meet the particular resource requirements, then the resources are allocated, and the application components mapped thereon. The disclosed system can enable the sharing of resources among multiple heterogeneous applications. The system can allow resource sharing without application source code access or any knowledge of the internal design of the application. Integration of an application can be re-used for other similar applications. Furthermore, the disclosed system enables the dynamic and efficient management of shared resources, providing an agile resource infrastructure adaptive to dynamic changes and failures.10-02-2008
20080244599Master And Subordinate Operating System Kernels For Heterogeneous Multiprocessor Systems - Systems and methods establish communication and control between various heterogeneous processors in a computing system so that an operating system can run an application across multiple heterogeneous processors. With a single set of development tools, software developers can create applications that will flexibly run on one CPU or on combinations of central, auxiliary, and peripheral processors. In a computing system, application-only processors can be assigned a lean subordinate kernel to manage local resources. An application binary interface (ABI) shim is loaded with application binary images to direct kernel ABI calls to a local subordinate kernel or to the main OS kernel depending on which kernel manifestation is controlling requested resources.10-02-2008
20080235703On-Demand Utility Services Utilizing Yield Management - Techniques for provision of on-demand utility services utilizing a yield management framework are disclosed. For example, in one illustrative aspect of the invention, a system for managing one or more computing resources associated with a computing center comprises: (i) a resource management subsystem for managing the one or more computing resources associated with the computing center, wherein the computing center is able to provide one or more computing services in response to one or more customer demands; and (ii) a yield management subsystem coupled to the resource management subsystem, wherein the yield management subsystem optimizes provision of the one or more computing services in accordance with the resource management subsystem and the one or more computing resources.09-25-2008
20080235702Componentized Automatic Provisioning And Management Of Computing Environments For Computing Utilities - The present invention provides systems, methods and apparatus for automatically provisioning and managing resources in a computing utility. Its automation procedures are based on a resource model which allows resource specific provisioning and management tasks to be encapsulated into components for reuse. These components are assembled into more complex structures and finally computing services. This invention provides a method for constructing a computing service from a set of resources given a high level specification. Once constructed, the service includes a component that provides management function, which can allow modification of its underlying set of resources.09-25-2008
20080235701ADAPTIVE PARTITIONING SCHEDULER FOR MULTIPROCESSING SYSTEM - A symmetric multiprocessing system includes multiple processing units and corresponding instances of an adaptive partition processing scheduler. Each instance of the adaptive partition processing scheduler selectively allocates the respective processing unit to run process threads of one or more adaptive partitions based on a comparison between merit function values of the one or more adaptive partitions. The merit function for a particular partition of the one or more adaptive partitions may be based on whether the adaptive partition has available budget on the respective processing unit. The merit function for a particular partition associated with an instance of the adaptive partition scheduler also, or in the alternative, may be based on whether the adaptive partition has available global budget on the symmetric multiprocessing system.09-25-2008
20080235700Hardware Monitor Managing Apparatus and Method of Executing Hardware Monitor Function - A hypervisor OS includes a monitor context table in which plural monitor contexts, each including monitor operation conditions and information concerning priority, are set in order to set a hardware monitor function for monitoring operation states of plural physical processors that execute plural processes in parallel. The hypervisor OS causes the hardware monitor function to execute on a monitor context with high priority that satisfies a monitor operation condition, acquiring monitor data and outputting the monitor data together with timing data indicating the time when the monitor operation condition is satisfied; for a monitor context that satisfies a monitor operation condition but has low priority, only the timing data is output.09-25-2008
20080235699SYSTEM FOR PROVIDING QUALITY OF SERVICE IN LINK LAYER AND METHOD USING THE SAME - A system and method of providing a quality of service (QoS) is provided. The method of providing the QoS in the link layer includes receiving, by a stream providing device, minimum and maximum resource requirement information of a stream receiving device; transmitting, by the stream providing device, a reservation message including the minimum and maximum resource requirement information; allocating a resource, by at least one bridge, based on the reservation message transmitted from the stream providing device; and receiving, by the stream receiving device, a stream transmitted from the stream providing device via the resource.09-25-2008
20110247004Information Processing Apparatus - According to one embodiment, an information processing apparatus is provided. The information processing apparatus which performs a signaling process with an external apparatus through a network and a multimedia process of data, includes: first and second CPU core groups each including one or more CPU cores; a first controller configured to allocate one of the signaling process and the multimedia process to the first CPU core group, and the other of the signaling process and the multimedia process to the second CPU core group; and a second controller configured to allocate a process which is different from the multimedia process and the signaling process to one of the first and second CPU core groups, according to process states of the first and second CPU core groups.10-06-2011
20110247003Predictive Dynamic System Scheduling - Resources of a partitionable computer system are partitioned into at least first and second partitions, in accordance with a first or second mode of operation of the partitionable computer system. The system is run in the first or second mode, partitioned in accordance with the partitioning step. Periodically, it is determined whether the computer system should be switched from one mode to the other mode. If so, the computer system is run in the other mode, partitioned in accordance with the other mode. The first and second modes of operation are defined in accordance with historical observations of the partitionable computer system. The periodic determination is carried out based on predictions in accordance with the historical observations.10-06-2011
20110247002Dynamic System Scheduling - Resources of a partitionable computer system are partitioned into: (i) a first partition for first jobs, the first jobs being at least one of small and short running; and (ii) a second partition for second jobs, the second jobs being at least one of large and long running. The computer system is run as partitioned in the partitioning step and the partitioning is periodically re-evaluated against at least one threshold for at least one of the partitions. If the periodic re-evaluation suggests that one of the first and second partitions is underutilized, the resources of the partitionable computer system are dynamically re-partitioned to reassign at least some of the resources of the partitionable computer system from the underutilized one of the first and second partitions to another one of the first and second partitions.10-06-2011
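The periodic re-evaluation against a threshold might look like the following sketch, which shifts one resource unit away from an underutilized partition; the single-unit step and the 0.3 threshold are assumed parameters, not values from the filing:

```python
def reassign_if_underutilized(partitions, threshold=0.3):
    """Periodic re-evaluation: if one partition's utilization falls
    below the threshold, move one resource unit from it to the other
    partition. Each partition is a (units_used, units_total) pair."""
    (a_used, a_size), (b_used, b_size) = partitions
    if a_size and a_used / a_size < threshold and a_size > 1:
        a_size -= 1
        b_size += 1
    elif b_size and b_used / b_size < threshold and b_size > 1:
        b_size -= 1
        a_size += 1
    return (a_used, a_size), (b_used, b_size)
```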
20110247001Resource Management In Computing Scenarios - This patent application pertains to urgency-based resource management in computing scenarios. One implementation can identify processes competing for resources on a system. The implementation can evaluate an urgency of individual competing processes. The implementation can also objectively allocate the resources among the competing processes in a manner that reduces a total of the urgencies of the competing processes.10-06-2011
20110247000Mechanism for Tracking Memory Accesses in a Non-Uniform Memory Access (NUMA) System to Optimize Processor Task Placement - A mechanism for tracking memory accesses in a non-uniform memory access (NUMA) system to optimize processor task placement is disclosed. A method of embodiments of the invention includes creating a page table (PT) hierarchy associated with a thread to be run on a processor of a computing device, collecting access bit information from the PT hierarchy associated with the thread, wherein the access bit information includes any access bits in the PT hierarchy that are set by a memory management unit (MMU) of the processor to identify a page of memory accessed by the thread, determining memory access statistics for the thread, and utilizing the memory access statistics for the thread in a determination of whether to migrate the thread to another processor.10-06-2011
20110265094LOGIC FOR SYNCHRONIZING MULTIPLE TASKS AT MULTIPLE LOCATIONS IN AN INSTRUCTION STREAM - Logic (also called “synchronizing logic”) in a co-processor (that provides an interface to memory) receives a signal (called a “declaration”) from each of a number of tasks, based on an initial determination of one or more paths (also called “code paths”) in an instruction stream (e.g. originating from a high-level software program or from low-level microcode) that a task is likely to follow. Once a task (also called “disabled” task) declares its lack of a future need to access a shared data, the synchronizing logic allows that shared data to be accessed by other tasks (also called “needy” tasks) that have indicated their need to access the same. Moreover, the synchronizing logic also allows the shared data to be accessed by the other needy tasks on completion of access of the shared data by a current task (assuming the current task was also a needy task).10-27-2011
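The declare/acquire protocol — "disabled" tasks stop blocking the shared data, while "needy" tasks gain access once the current holder finishes — can be sketched as a small gate object. All names below are illustrative; the filing describes hardware logic, not a Python class:

```python
class SharedDataGate:
    """Track per-task declarations: a task that declares no future need
    stops blocking access; a needy task acquires the data only when no
    other task currently holds it."""
    def __init__(self):
        self.needy = set()
        self.holder = None

    def declare(self, task, needs_data: bool):
        if needs_data:
            self.needy.add(task)
        else:
            self.needy.discard(task)   # task is now 'disabled'

    def try_acquire(self, task) -> bool:
        if task in self.needy and self.holder is None:
            self.holder = task
            return True
        return False

    def release(self, task):
        if self.holder == task:
            self.holder = None
            self.needy.discard(task)   # access complete
```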
20080222645Process Execution Management Based on Resource Requirements and Business Impacts - Techniques are presented for managing execution of processes on a data processing system. The data processing system comprises process instances that are each an execution of a corresponding process. Each process instance comprises activity instances. Business impacts are determined for the process instances, the activity instances, or both. Order of execution of the activity instances is managed by allocating resources to activity instances in order to achieve an objective defined in terms of the business impacts. In another embodiment, requests are received for the execution of the processes. For a given request, one or more of the operations of assigning, updating, aggregating, and weighting of first business impacts associated with the given request are performed to create second business impacts associated with the given request. Additionally, requests can be modified. Modification can include changing the process requested or process input as deemed appropriate, combining related requests into a single request, or both. Unmodified requests and any modified requests are managed.09-11-2008
20080222644RISK-MODULATED PROACTIVE DATA MIGRATION FOR MAXIMIZING UTILITY IN STORAGE SYSTEMS - The embodiments of the invention provide a method, computer program product, etc. for risk-modulated proactive data migration for maximizing utility. More specifically, a method of planning data migration for maximizing utility of a storage infrastructure that is running and actively serving at least one application includes selecting a plurality of potential data items for migration and selecting a plurality of potential migration destinations to which the potential data items can be moved. Moreover, the method selects a plurality of potential migration speeds at which the potential data items can be moved and selects a plurality of potential migration times at which the potential data items can be moved to the potential data migration destinations. The selecting of the plurality of potential migration speeds selects a migration speed below a threshold speed, wherein the threshold speed defines a maximum system utility loss permitted.09-11-2008
20080222643COMPUTING DEVICE RESOURCE SCHEDULING - Systems and methods for scheduling computing device resources include a scheduler that maintains multiple queues. Requests are placed in one of the multiple queues depending on how much resource time the requests are to receive and when they are to receive it. The queue that a request is placed into depends on a pool bandwidth defined for a pool that includes the request and a bandwidth request. A request has an importance associated therewith that is taken into account in the scheduling process. The scheduler proceeds through the queues in a sequential and circular fashion, taking a work item from a queue for processing when that queue is accessed.09-11-2008
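The sequential, circular pass over the queues can be sketched as taking one work item per non-empty queue per round; pool bandwidth and importance weighting are omitted for brevity, so this is only the skeleton of the scheme:

```python
from collections import deque

def schedule_round(queues):
    """One circular pass: visit each queue in sequence and take at most
    one work item from it for processing."""
    taken = []
    for q in queues:
        if q:
            taken.append(q.popleft())
    return taken
```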
20080222642Dynamic resource profiles for clusterware-managed resources - Allowing for resource attributes that may change dynamically while the resource is in use, provides for dynamic changes to the manner in which such resources are managed. Management of dynamic resource attributes by clusterware involves new entry points to clusterware agent modules, through which resource-specific user-specified instructions for discovering new values for resource attributes, and for performing a user-specified action in response to the new attribute values, are invoked. A clusterware policy manager may know ahead of time that a particular resource has dynamic attributes or may be notified when a resource's dynamic attribute has changed and, periodically or in response to the notification, request that the agent invoke the particular resource-specific instructions for discovering new values for attributes for the particular resource and/or for performing a user-specified action in response to the new attribute values. During the majority of this process, the resource remains available.09-11-2008
20080222641Executing applications - An application executing apparatus including at least one execution resource configured to execute at least one application is disclosed. The apparatus is provided with at least one processor configured to detect events triggering execution of the at least one application and to dynamically control use of the at least one execution resource in handling of the detected events based on a variable reflective of the operating conditions of the apparatus.09-11-2008
20130179894PLATFORM AS A SERVICE JOB SCHEDULING - Systems and methods are presented for providing resources by way of a platform as a service in a distributed computing environment to perform a job. A user may submit a work item to the system that results in a job being processed on a pool of virtual machines. The pool may be automatically established by the system in response to the work item and other information associated with the work item, the user, and/or the account. Further, it is contemplated that resources associated with the pool, such as virtual machines, may be automatically allocated based, at least in part, on information associated with the work item, the user, the account, the pool, and/or the system.07-11-2013
20130179891SYSTEMS AND METHODS FOR USE IN PERFORMING ONE OR MORE TASKS - Systems and methods for performing a task are provided. One example method includes: if the task allocation metric indicates that load balancing associated with the processor is below a first threshold, determining whether the task is a reentrant task; if the task is a reentrant task, determining whether a stopping criterion is satisfied; re-entering the task into a queue of tasks if the stopping criterion is not satisfied and the task is a reentrant task; if the task allocation metric indicates that core affinity associated with the at least one processor is below a second threshold, determining whether the task is a main task; if the task is not a main task, determining whether a stopping criterion is satisfied; and if the stopping criterion is satisfied and the task is not a main task, pulling a parent task associated with the task into the thread.07-11-2013
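The reentrant-task branch of the method above reduces to a small decision function; the dictionary-based task and the return strings are illustrative placeholders:

```python
def handle_task(task, queue, load_below_threshold, stop_satisfied):
    """If load balancing is below threshold and the task is reentrant
    with its stopping criterion unmet, re-enter it into the queue;
    otherwise the task is finished."""
    if load_below_threshold and task.get("reentrant") and not stop_satisfied:
        queue.append(task)
        return "requeued"
    return "done"
```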
20130179892PROVIDING LOGICAL PARTIONS WITH HARDWARE-THREAD SPECIFIC INFORMATION REFLECTIVE OF EXCLUSIVE USE OF A PROCESSOR CORE - Techniques for simulating exclusive use of a processor core amongst multiple logical partitions (LPARs) include providing hardware thread-dependent status information in response to access requests by the LPARs that is reflective of exclusive use of the processor by the LPAR accessing the hardware thread-dependent information. The information returned in response to the access requests is transformed if the requestor is a program executing at a privilege level lower than the hypervisor privilege level, so that each logical partition views the processor as though it has exclusive use of the processor. The techniques may be implemented by a logical circuit block within the processor core that transforms the hardware thread-specific information to a logical representation of the hardware thread-specific information or the transformation may be performed by program instructions of an interrupt handler that traps access to the physical register containing the information.07-11-2013
20130179893Adaptation of Probing Frequency for Resource Consumption - Embodiments of the invention relate to dynamically assessing and managing probing of a system for resource availability. A predicted resource usage pattern is acquired, and critical points in the pattern pertaining to predicted changes in resource consumption are identified. Probing the system for resource availability is limited to the identified critical points, or to real-time changes in the resource usage pattern.07-11-2013
20130179895PAAS HIERARCHICAL SCHEDULING AND AUTO-SCALING - In various embodiments, systems and methods are presented for providing resources by way of a platform as a service in a distributed computing environment to perform a job. The system may be comprised of a number of components, such as a task machine, a task location service machine, and a high-level location service machine that in combination are useable to accomplish functions provided herein. It is contemplated that the system performs methods for providing resources by determining resources of the system, such as virtual machines, and applying auto-scaling rules to the system to scale those resources. Based on the determination of the auto-scaling rules, the resources may be allocated to achieve a desired result.07-11-2013
20120254886Reducing Overheads in Application Processing - A method, a system and a computer program of reducing overheads in multiple applications processing are disclosed. The method includes identifying resources interacting with each of the applications from a set of applications and grouping the applications from the set of applications, resulting in at least one application cluster, in response to the identified resources, wherein overheads associated with re-initialization of agents assigned to the identified resources are reduced. The method further includes assigning an agent corresponding to each of the identified resources and initializing the agent corresponding to each of the identified resources. The method further includes identifying parameters associated with the identified resources, pre-processing the identified parameters for each of the identified resources, and also includes selecting a clustering means for the clustering.10-04-2012
20120254883DYNAMICALLY SWITCHING THE SERIALIZATION METHOD OF A DATA STRUCTURE - Embodiments of the invention comprise a method for dynamically switching a serialization method of a data structure. If use of the serialization mechanism is desired, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is in use, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is not in use, an instruction to access the data structure without obtaining the serialization mechanism is received.10-04-2012
20130139170JOB SCHEDULING TO BALANCE ENERGY CONSUMPTION AND SCHEDULE PERFORMANCE - An energy-aware backfill scheduling method combines overestimation of job run-times and processor adjustments, such as dynamic voltage and frequency scaling, to balance overall schedule performance and energy consumption. Accordingly, some scheduled jobs are executed in a manner reducing energy consumption. A computer-implemented method comprises identifying job performance data for a plurality of representative jobs and running a simulation of backfill-based job scheduling of the jobs at various combinations of run-time over-estimation values and processor adjustment values. The simulation generates data including energy consumption and job delay. The method further identifies one of the combinations of values that optimizes the mathematical product of an energy consumption parameter and a job delay parameter using the simulation generated data for the plurality of jobs. Jobs submitted to a processor are then scheduled using the identified combination of a run-time over-estimation value and a processor adjustment value.05-30-2013
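The entry above describes searching combinations of run-time over-estimation and processor (e.g., DVFS) adjustment values for the one that optimizes the product of an energy parameter and a delay parameter. A minimal illustrative sketch, not the patent's actual simulator: the `simulate` model below (quadratic power in frequency, padded reservations) and all job numbers are assumptions for demonstration.

```python
# Hypothetical sketch: grid-search (over-estimation, frequency-scaling) pairs
# with a toy backfill simulation, picking the pair minimizing energy * delay.

def simulate(jobs, overestimate, freq_scale):
    """Toy simulation returning (energy, mean_delay) for one setting.

    A lower frequency stretches run-times (more delay) but draws less power
    (power modeled as ~f^2 under DVFS), so the two terms trade off.
    """
    total_energy = 0.0
    total_delay = 0.0
    clock = 0.0
    for runtime in jobs:
        actual = runtime / freq_scale             # slower clock -> longer run
        total_energy += actual * freq_scale ** 2  # energy = time * f^2 (simplified)
        clock += actual * overestimate            # reserved slot, padded by estimate
        total_delay += clock - runtime            # wait beyond ideal finish time
    return total_energy, total_delay / len(jobs)

def best_setting(jobs, overestimates, freq_scales):
    """Return the (overestimate, freq_scale) pair minimizing energy * delay."""
    best, best_score = None, float("inf")
    for ov in overestimates:
        for fs in freq_scales:
            energy, delay = simulate(jobs, ov, fs)
            score = energy * max(delay, 1e-9)
            if score < best_score:
                best, best_score = (ov, fs), score
    return best

jobs = [4.0, 2.0, 6.0, 1.0]
choice = best_setting(jobs, [1.0, 1.5, 2.0], [0.6, 0.8, 1.0])
```

With this toy cost model the undistorted setting wins, but a model that rewards backfilling opportunities created by over-estimation would shift the optimum, which is the trade-off the abstract explores.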
20130091508SYSTEM AND METHOD FOR STRUCTURING SELF-PROVISIONING WORKLOADS DEPLOYED IN VIRTUALIZED DATA CENTERS - The system and method for structuring self-provisioning workloads deployed in virtualized data centers described herein may provide a scalable architecture that can inject intelligence and embed policies into managed workloads to provision and tune resources allocated to the managed workloads, thereby enhancing workload portability across various cloud and virtualized data centers. In particular, the self-provisioning workloads may have a packaged software stack that includes resource utilization instrumentation to collect utilization metrics from physical resources that a virtualization host allocates to the workload, a resource management policy engine to communicate with the virtualization host to effect tuning the physical resources allocated to the workload, and a mapping that the resource management policy engine references to request tuning the physical resources allocated to the workload from a management domain associated with the virtualization host.04-11-2013
20130091507OPTIMIZING DATA WAREHOUSING APPLICATIONS FOR GPUS USING DYNAMIC STREAM SCHEDULING AND DISPATCH OF FUSED AND SPLIT KERNELS - Systems and methods for managing a processor and one or more co-processors for a database application whose queries have been processed into an intermediate form (IR) containing kernels of the database application that have been fused and split; dynamically scheduling such kernels on CUDA streams and further dynamically dispatching kernels to GPU devices by estimating execution time in order to achieve high performance.04-11-2013
20130097612DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of an interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are over utilizing the resources on the node.04-18-2013
20130097611UNIFIED, WORKLOAD-OPTIMIZED, ADAPTIVE RAS FOR HYBRID SYSTEMS - A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes performing a first data computation by a first set of processors, the first set of processors having a first computer processor architecture. The method continues by performing a second data computation by a second processor coupled to the first set of processors, the second processor having a second computer processor architecture, the first computer processor architecture being different than the second computer processor architecture. Finally, the method includes dynamically allocating computational resources of the first set of processors and the second processor based on at least one metric while the first set of processors and the second processor are in operation such that the accuracy and processing speed of the first data computation and the second data computation are optimized.04-18-2013
20130097610DETERMINING SUITABLE NETWORK INTERFACE FOR PARTITION DEPLOYMENT/RE-DEPLOYMENT IN A CLOUD ENVIRONMENT - Migrating a logical partition (LPAR) from a first physical port to a first target physical port, includes determining a configuration of an LPAR having allocated resources residing on a computer and assigned to the first physical port of the computer. The configuration includes a label that specifies a network topology that is provided by the first physical port and the first target physical port has a port label that matches the label included in the configuration of the LPAR. The first target physical port with available capacity to service the LPAR is identified and the LPAR is migrated from the first physical port to the target physical port by reassigning the LPAR to the first target physical port.04-18-2013
20130097609System and Method for Determining Thermal Management Policy From Leakage Current Measurement - Various embodiments of methods and systems for determining the thermal status of processing components within a portable computing device (“PCD”) by measuring leakage current on power rails associated with the components are disclosed. One such method involves measuring current on a power rail after a processing component has entered a “wait for interrupt” mode. Advantageously, because a processing component may “power down” in such a mode, any current remaining on the power rail associated with the processing component may be attributable to leakage current. Based on the measured leakage current, a thermal status of the processing component may be determined and thermal management policies consistent with the thermal status of the processing component implemented. Notably, it is an advantage of embodiments that the thermal status of a processing component within a PCD may be established without the need to leverage temperature sensors.04-18-2013
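The idea above, reading residual rail current in a "wait for interrupt" state as leakage and mapping it to a thermal policy, can be sketched with a simple threshold table. The thresholds, status names, and mitigation actions below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: leakage current measured while the component is idle
# ("wait for interrupt") stands in for temperature, since leakage rises
# sharply with die temperature; map it to a coarse thermal status and policy.

def thermal_status(leakage_ma, thresholds=((50, "critical"),
                                           (20, "hot"),
                                           (5, "warm"))):
    """Map a leakage-current reading (mA) to a thermal status, highest first."""
    for limit, status in thresholds:
        if leakage_ma >= limit:
            return status
    return "nominal"

def policy_for(status):
    # Illustrative mitigation table, not the patent's actual policies.
    return {"critical": "throttle-now", "hot": "reduce-frequency",
            "warm": "monitor", "nominal": "none"}[status]

status = thermal_status(23.0)
action = policy_for(status)
```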
20130097608Processor With Efficient Work Queuing - Work submitted to a co-processor enters through one of multiple input queues, used to provide various quality of service levels. In-memory linked-lists store work to be performed by a network services processor in response to lack of processing resources in the network services processor. The work is moved back from the in-memory linked-lists to the network services processor in response to availability of processing resources in the network services processor.04-18-2013
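The queuing scheme above can be sketched in a few lines: per-QoS input queues, an overflow area in memory when the processor is saturated, and refill in QoS order as capacity frees up. The class, capacity, and QoS level names are assumptions for illustration only.

```python
# Hypothetical sketch: work overflows to in-memory lists when the processor
# has no free capacity and is pulled back in, highest QoS first, on completion.
from collections import deque

class WorkQueues:
    def __init__(self, capacity, levels=("high", "low")):
        self.capacity = capacity
        self.active = deque()                      # work the processor holds
        self.overflow = {q: deque() for q in levels}
        self.levels = levels

    def submit(self, item, qos):
        if len(self.active) < self.capacity:
            self.active.append(item)
        else:
            self.overflow[qos].append(item)        # park in memory

    def complete_one(self):
        done = self.active.popleft()
        for q in self.levels:                      # refill, best QoS first
            if self.overflow[q]:
                self.active.append(self.overflow[q].popleft())
                break
        return done

wq = WorkQueues(capacity=2)
wq.submit("a", "high"); wq.submit("b", "high")
wq.submit("c", "low");  wq.submit("d", "high")    # both overflow
first_done = wq.complete_one()                    # frees a slot; "d" pulled in
```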
20130104142INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A CPU executes initialization for allocating a storage area of an auxiliary storage device for a program execution area after a particular application program is loaded into the program execution area and becomes executable. Subsequently, the CPU loads a plurality of application programs into the program execution area.04-25-2013
20130104140RESOURCE AWARE SCHEDULING IN A DISTRIBUTED COMPUTING ENVIRONMENT - Systems and methods for resource aware scheduling of processes in a distributed computing environment are described herein. One aspect provides for accessing at least one job and at least one resource on a distributed parallel computing system; generating a current reward value based on the at least one job and a current value associated with the at least one resource; generating a prospective reward value based on the at least one job and a prospective value associated with the at least one resource at a predetermined time; and scheduling the at least one job based on a comparison of the current reward value and the prospective reward value. Other embodiments and aspects are also described herein.04-25-2013
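The comparison described above, a current reward value against a prospective reward value for the same job at a later time, reduces to a run-now-or-defer decision. The reward functions and the wait penalty below are invented placeholders, not the patent's actual formulas.

```python
# Hypothetical sketch: schedule a job now if its current reward beats the
# prospective reward of waiting for the resource's value to change.

def current_reward(job_value, resource_value):
    return job_value * resource_value

def prospective_reward(job_value, future_resource_value, wait_penalty):
    # Discount the future reward by the cost of delaying the job.
    return job_value * future_resource_value - wait_penalty

def schedule_decision(job_value, now_value, future_value, wait_penalty):
    """Return 'run-now' or 'defer' by comparing the two reward values."""
    if current_reward(job_value, now_value) >= prospective_reward(
            job_value, future_value, wait_penalty):
        return "run-now"
    return "defer"

decision = schedule_decision(job_value=10, now_value=0.5,
                             future_value=0.9, wait_penalty=2.0)
```

Here the resource is expected to be markedly better later (0.9 vs. 0.5) and the wait is cheap, so the job is deferred; shrink the gap or raise the penalty and the comparison flips.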
20130125132INFORMATION PROCESSING APPARATUS AND CONTROL METHOD - An information processing apparatus includes plural CPUs to operate in parallel, a logical CPU generating part to generate one or more logical CPUs from one of the CPUs, an operating frequency averaging part to change each of operating frequencies of the CPUs to match a mean of the operating frequencies, and a logical CPU allocation part to cause the logical CPU generating part to generate the logical CPU to eliminate an excess or a deficiency of a processing capability with respect to an information processing load associated with a partition to which the logical CPU belonging to the CPU is allocated, the excess or deficiency being generated due to a change in the operating frequencies of the CPUs made by the operating frequency averaging part, and to allocate the generated logical CPU to the partition associated with the excess or deficiency of the processing capability of the logical CPU.05-16-2013
20130125131MULTI-CORE PROCESSOR SYSTEM, THREAD CONTROL METHOD, AND COMPUTER PRODUCT - A multi-core processor system includes a first core configured to detect a state where a first thread that is allocated to a first core and a second thread that is allocated to a second core access a common resource; calculate, upon detecting the state and based on a first cycle for the first thread to be allocated to the first core and a second cycle for the second thread to be allocated to the second core, a contention cycle for the first and the second threads to cause access contention for the resource; and select a thread allocated at a time before or after the contention cycle of a core to which a given thread that is either the first or the second thread is allocated at the contention cycle; and a second core configured to switch the times at which the given thread and the selected thread are allocated.05-16-2013
20130125130CONSERVING POWER THROUGH WORK LOAD ESTIMATION FOR A PORTABLE COMPUTING DEVICE USING SCHEDULED RESOURCE SET TRANSITIONS - A start time to begin transitioning resources to states indicated in the second resource state set is scheduled based upon an estimated amount of processing time to complete transitioning the resources. At a scheduled start time, a process starts in which the states of one or more resources are switched from states indicated by the first resource state set to states indicated by the second resource state set. Scheduling the process of transitioning resource states to begin at a time that allows the process to be completed just in time for the resource states to be immediately available to the processor upon entering the second application state helps minimize adverse effects of resource latency. This calculation for the time that the process should be completed just in time may be enhanced when system states and transitions between states are measured accurately and stored in memory of the portable computing device.05-16-2013
20130125129GROWING HIGH PERFORMANCE COMPUTING JOBS - The preemption of running jobs by other running or queued jobs in a system that has processing resources. The system has running jobs, and queued jobs that are awaiting processing by the system. In a scheduling operation, preemptor jobs are identified, the preemptor jobs being jobs that are candidates for preempting one or more of the running jobs. The preemptor jobs include queued jobs, as well as running jobs that are capable of using more processing resource of the system. One of the other running jobs is preempted to free processing resources for the running job that was identified as a preemptor job. Accordingly, not only may queued jobs preempt running jobs, but currently running jobs may preempt other currently running jobs.05-16-2013
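The abstract above makes both queued jobs and growth-capable running jobs "preemptor" candidates. A minimal sketch of that candidate selection, with a priority scheme and victim rule that are assumptions of this example rather than the patent's method:

```python
# Hypothetical sketch: queued jobs AND running jobs that can use more
# resources are preemptor candidates; the lowest-priority other running job
# is the victim, but only if the preemptor outranks it.

def pick_preemption(running, queued):
    """running/queued: lists of dicts with 'name', 'priority', 'can_grow'.

    Returns (preemptor_name, victim_name), or None if no preemption applies.
    """
    candidates = list(queued) + [j for j in running if j.get("can_grow")]
    if not candidates or not running:
        return None
    preemptor = max(candidates, key=lambda j: j["priority"])
    victims = [j for j in running if j["name"] != preemptor["name"]]
    if not victims:
        return None
    victim = min(victims, key=lambda j: j["priority"])
    if preemptor["priority"] <= victim["priority"]:
        return None   # not worth preempting an equal- or higher-priority job
    return preemptor["name"], victim["name"]

running = [{"name": "sim", "priority": 5, "can_grow": True},
           {"name": "etl", "priority": 1, "can_grow": False}]
queued = [{"name": "urgent", "priority": 9}]
result = pick_preemption(running, queued)
```

With an empty queue the growable running job "sim" itself becomes the preemptor, which is the novelty the abstract emphasizes: running jobs preempting other running jobs.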
20110276980COMPUTING RESOURCE ALLOCATION DEVICE, COMPUTING RESOURCE ALLOCATION SYSTEM, COMPUTING RESOURCE ALLOCATION METHOD THEREOF AND PROGRAM - Provided is a computing resource allocation device capable of allocating computing resources to accommodate changing activity patterns. The device is equipped with an external environment recognition means that analyzes input values from sensors to specify the current environment, a memory means that stores a table in which the sensors required to specify the environment are correlated, a transition frequency computation means that computes the transition frequency at which a transition is made from an environment to another environment, and a computing resource allocation means that computes the amount of allocation of the computing resources to be used for the analysis based on the current environment by referencing the table and the transition frequency, and that allocates the computing resources for the analysis.11-10-2011
20130132966Video Player Instance Prioritization - A video player instance may be prioritized and decoding and rendering resources may be assigned to the video player instance accordingly. A video player instance may request use of a resource combination. Based on a determined priority a resource combination may be assigned to the video player instance. A resource combination may be reassigned to another video player instance upon detection that the previously assigned resource combination is no longer actively in use.05-23-2013
20130132968MECHANISM FOR ASYNCHRONOUS INPUT/OUTPUT (I/O) USING ALTERNATE STACK SWITCHING IN KERNEL SPACE - A mechanism for asynchronous input/output (I/O) using alternate stack switching in kernel space is disclosed. A method of the invention includes receiving, by a kernel executing in a computing device, an input/output (I/O) request from an application thread executing using a first stack, allocating a second stack in kernel space of the computing device, switching execution of the thread to the second stack, and processing the I/O request synchronously using the second stack.05-23-2013
20130132967OPTIMIZING DISTRIBUTED DATA ANALYTICS FOR SHARED STORAGE - Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server.05-23-2013
20130145375PARTITIONING PROCESSES ACROSS CLUSTERS BY PROCESS TYPE TO OPTIMIZE USE OF CLUSTER SPECIFIC CONFIGURATIONS - A system and method for virtualization and cloud security are disclosed. According to one embodiment, a system comprises a first multi-core processing cluster and a second multi-core processing cluster in communication with a network interface card and software instructions. When the software instructions are executed by the second multi-core processing cluster they cause the second multi-core processing cluster to receive a request for a service, create a new or invoke an existing virtual machine to service the request, and return a desired result indicative of successful completion of the service to the first multi-core processing cluster.06-06-2013
20080209433Adaptive Reader-Writer Lock - A method and computer system for dynamically selecting an optimal synchronization mechanism for a data structure in a multiprocessor environment. The method determines a quantity of read-side and write-side acquisitions, and evaluates the data to determine an optimal mode for efficiently operating the computer system while maintaining reduced overhead. The method incorporates data received from the individual units within a central processing system, the quantity of write-side acquisitions in the system, and data which has been subject to secondary measures, such as formatives of digital filters. The data subject to secondary measures includes, but is not limited to, a quantity of read-side acquisitions, a quantity of write-side acquisitions, and a quantity of read-hold durations. Based upon the individual unit data and the system-wide data, including the secondary measures, the operating system may select the most efficient synchronization mechanism from among the mechanisms available. Accordingly, efficiency of a computer system may be enhanced with the ability to selectively choose an optimal synchronization mechanism based upon selected and calculated parameters.08-28-2008
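The selection logic above, choosing a synchronization mechanism from read-side and write-side acquisition counts smoothed by a digital filter, can be sketched as follows. The exponential moving average, the thresholds, and the mechanism names are illustrative assumptions, not the patent's formulas.

```python
# Hypothetical sketch: smooth raw acquisition counts with a one-pole digital
# filter (the abstract's "secondary measure"), then pick a mechanism from the
# observed read/write mix.

def ema(prev, sample, alpha=0.25):
    """Exponentially weighted moving average -- a simple digital filter."""
    return (1 - alpha) * prev + alpha * sample

def choose_mechanism(read_rate, write_rate):
    """Mostly reads -> reader-writer lock; mixed -> plain mutex;
    write-heavy -> per-CPU/distributed locking (illustrative thresholds)."""
    total = read_rate + write_rate
    if total == 0:
        return "mutex"
    write_frac = write_rate / total
    if write_frac < 0.05:
        return "rwlock"
    if write_frac < 0.50:
        return "mutex"
    return "per-cpu-lock"

# Feed per-interval counts through the filter, then decide.
reads, writes = 0.0, 0.0
for r, w in [(100, 1), (120, 2), (90, 0), (110, 1)]:
    reads, writes = ema(reads, r), ema(writes, w)
mechanism = choose_mechanism(reads, writes)
```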
20080201716ON-DEMAND MULTI-THREAD MULTIMEDIA PROCESSOR - A device includes a multimedia processor that can concurrently support multiple applications for various types of multimedia such as graphics, audio, video, camera, games, etc. The multimedia processor includes configurable storage resources to store instructions, data, and state information for the applications and assignable processing units to perform various types of processing for the applications. The configurable storage resources may include an instruction cache to store instructions for the applications, register banks to store data for the applications, context registers to store state information for threads of the applications, etc. The processing units may include an arithmetic logic unit (ALU) core, an elementary function core, a logic core, a texture sampler, a load control unit, a flow controller, etc. The multimedia processor allocates a configurable portion of the storage resources to each application and dynamically assigns the processing units to the applications as requested by these applications.08-21-2008
20080201715METHOD AND SYSTEM FOR DYNAMICALLY CREATING AND MODIFYING RESOURCE TOPOLOGIES AND EXECUTING SYSTEMS MANAGEMENT FLOWS - The present invention replaces the prior art Systems Management Flow execution environments with a new Order Processing Environment. The Order Processing Environment consists of an Order Processing Container (“Container” in short), a Relationship Registry, and a Factory Registry. The Factory Registry supports creation of new resource instances. The Relationship Registry stores relationships between resources. The Container gets as input an Order and a start point address for the first resource. The Order is a document (e.g., XML) which includes a number of Tasks for each involved resource without arranging those tasks in a sequence. This differentiates Orders from workflow descriptions used by standard workflow engines. Each Task includes at least all input parameters for executing the Task. The sequence of the Task execution is derived by the Container by using the Relationship Registry which reflects all current Resource Topologies.08-21-2008
20100287560OPTIMIZING A DISTRIBUTION OF APPLICATIONS EXECUTING IN A MULTIPLE PLATFORM SYSTEM - Embodiments of the claimed subject matter are directed to methods and a system that allows the optimization of processes operating on a multi-platform system (such as a mainframe) by migrating certain processes operating on one platform to another platform in the system. In one embodiment, optimization is performed by evaluating the processes executing in a partition operating under a proprietary operating system, determining a collection of processes from the processes to be migrated, calculating a cost of migration for migrating the collection of processes, prioritizing the collection of processes in an order of migration and incrementally migrating the processes according to the order of migration to another partition in the mainframe executing a lower cost (e.g., open-source) operating system.11-11-2010
20110258633Information processing system and use right collective management method - Disclosed is an information processing system including plural information processing apparatuses that have respective hardware resources including hardware resources to be licensed, each information processing apparatus performing information processing using the licensed hardware resources in which use rights are allocated; and a management apparatus that is connected to the plural information processing apparatuses and manages the hardware resources of the plural information processing apparatuses. The management apparatus includes a use right information holding unit that holds use right information corresponding to the use rights of the hardware resources, and a use right allocation unit that allocates the use rights to the hardware resources on a hardware resource basis in accordance with the held use right information.10-20-2011
20130152101PREPARING PARALLEL TASKS TO USE A SYNCHRONIZATION REGISTER - A job may be divided into multiple tasks that may execute in parallel on one or more compute nodes. The tasks executing on the same compute node may be coordinated using barrier synchronization. However, to perform barrier synchronization, the tasks use (or attach) to a barrier synchronization register which establishes a common checkpoint for each of the tasks. A leader task may use a shared memory region to publish to follower tasks the location of the barrier synchronization register—i.e., a barrier synchronization register ID. The follower tasks may then monitor the shared memory to determine the barrier synchronization register ID. The leader task may also use a count to ensure all the tasks attach to the BSR. This advantageously avoids any task-to-task communication which may reduce overhead and improve performance.06-13-2013
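The publish/attach handshake above, a leader writing the barrier synchronization register (BSR) ID into shared memory, followers polling for it, and a count confirming that every task attached, can be imitated with threads standing in for tasks and a plain dict standing in for the shared memory region. All names and the BSR ID value are assumptions of this sketch.

```python
# Hypothetical sketch: leader publishes the BSR ID via "shared memory";
# followers poll for it and attach; a count signals when all have attached.
import threading
import time

shared = {"bsr_id": None}          # stands in for the shared memory region
attached = []                      # (task_id, bsr_id) pairs, one per attach
attach_lock = threading.Lock()
all_attached = threading.Event()
N_TASKS = 4

def attach(task_id):
    with attach_lock:
        attached.append((task_id, shared["bsr_id"]))
        if len(attached) == N_TASKS:   # the count: everyone has attached
            all_attached.set()

def leader():
    shared["bsr_id"] = 7               # publish the BSR ID
    attach(0)

def follower(task_id):
    while shared["bsr_id"] is None:    # poll shared memory for the ID
        time.sleep(0.001)
    attach(task_id)

threads = [threading.Thread(target=leader)] + [
    threading.Thread(target=follower, args=(i,)) for i in range(1, N_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note how no follower ever messages another task directly; everything flows through the shared region, which is the overhead reduction the abstract claims.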
20090019446DETERMINING A POSSIBLE LOT SIZE - The invention provides methods and apparatus, including computer program products, for determining a possible lot size of units with respect to a fixed date for a chain of at least two process steps, each process step requiring a respective assigned resource, and consuming a respective time per unit for being performed by the respective assigned resource, where the process steps are sequentially dependent on each other. This is achieved by the following: 01-15-2009
20130152103PREPARING PARALLEL TASKS TO USE A SYNCHRONIZATION REGISTER - A job may be divided into multiple tasks that may execute in parallel on one or more compute nodes. The tasks executing on the same compute node may be coordinated using barrier synchronization. However, to perform barrier synchronization, the tasks use (or attach) to a barrier synchronization register which establishes a common checkpoint for each of the tasks. A leader task may use a shared memory region to publish to follower tasks the location of the barrier synchronization register—i.e., a barrier synchronization register ID. The follower tasks may then monitor the shared memory to determine the barrier synchronization register ID. The leader task may also use a count to ensure all the tasks attach to the BSR. This advantageously avoids any task-to-task communication which may reduce overhead and improve performance.06-13-2013
20130152102RUNTIME-AGNOSTIC MANAGEMENT OF APPLICATIONS - An application may be modeled as a collection of resource usage. The model allows the application to be elastic so that additional resource usage can be added when needed. Items may be added to and/or removed from applications at any time without regard to the state of the application. Existing items in the application may also be altered at any time regardless of the application state. A set of interfaces are used to manage the resources. The interface allow for the provisioning, configuration, deployment, monitoring and diagnostics of resources in a consistent way.06-13-2013
20080263559METHOD AND APPARATUS FOR UTILITY-BASED DYNAMIC RESOURCE ALLOCATION IN A DISTRIBUTED COMPUTING SYSTEM - In one embodiment, the present invention is a method for allocation of finite computational resources amongst multiple entities, wherein the method is structured to optimize the business value of an enterprise providing computational services. One embodiment of the inventive method involves establishing, for each entity, a service level utility indicative of how much business value is obtained for a given level of computational system performance. The service-level utility for each entity is transformed into a corresponding resource-level utility indicative of how much business value may be obtained for a given set or amount of resources allocated to the entity. The resource-level utilities for each entity are aggregated, and new resource allocations are determined and executed based upon the resource-level utility information. The invention is thereby capable of making rapid allocation decisions, according to time-varying need or value of the resources by each of the entities.10-23-2008
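The two-step transformation above, service-level utility (value per unit of performance) composed into resource-level utility (value per unit of resource), followed by allocation, can be sketched with a greedy marginal-utility loop. The utility shapes and the greedy policy are assumptions of this example; the patent does not prescribe them.

```python
# Hypothetical sketch: compose service-level utility with a performance model
# to get resource-level utility, then hand out units by marginal value.

def resource_level_utility(service_utility, perf_per_unit):
    """Compose: resources -> performance -> business value."""
    return lambda units: service_utility(units * perf_per_unit)

def allocate(entities, total_units):
    """entities: {name: resource-level utility fn}. Greedy marginal allocation."""
    grant = {name: 0 for name in entities}
    for _ in range(total_units):
        # Give the next unit to whoever gains the most value from it.
        best = max(entities, key=lambda n: entities[n](grant[n] + 1)
                                           - entities[n](grant[n]))
        grant[best] += 1
    return grant

# Diminishing-returns vs. linear service utilities (illustrative shapes).
web = resource_level_utility(lambda perf: 10 * perf ** 0.5, perf_per_unit=2)
batch = resource_level_utility(lambda perf: 5 * perf, perf_per_unit=1)
grant = allocate({"web": web, "batch": batch}, total_units=5)
```

The diminishing-returns entity grabs the first units, then the linear entity takes over once the marginal value drops below its constant slope, exactly the behavior aggregated resource-level utilities are meant to capture.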
20120260259RESOURCE CONSUMPTION WITH ENHANCED REQUIREMENT-CAPABILITY DEFINITIONS - Enhanced requirement-capability definitions are employed for resource consumption and allocation. Business requirements can be specified with respect to content to be hosted, and a decision can be made as to whether, and how, to allocate resources for the content based on the business requirements and resource capabilities. Capability profiles can also be employed to hide underlying resource details while still providing information about resource capabilities.10-11-2012
20100318998System and Method for Out-of-Order Resource Allocation and Deallocation in a Threaded Machine - A system and method for managing the dynamic sharing of processor resources between threads in a multi-threaded processor are disclosed. Out-of-order allocation and deallocation may be employed to efficiently use the various resources of the processor. Each element of an allocate vector may indicate whether a corresponding resource is available for allocation. A search of the allocate vector may be performed to identify resources available for allocation. Upon allocation of a resource, a thread identifier associated with the thread to which the resource is allocated may be associated with the allocate vector entry corresponding to the allocated resource. Multiple instances of a particular resource type may be allocated or deallocated in a single processor execution cycle. Each element of a deallocate vector may indicate whether a corresponding resource is ready for deallocation. Examples of resources that may be dynamically shared between threads are reorder buffers, load buffers and store buffers.12-16-2010
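The allocate/deallocate vector mechanism above can be sketched with plain lists: a find-first-free search over the allocate vector, a thread ID recorded per entry, and out-of-order deallocation of several entries at once. The class shape and thread IDs are assumptions for illustration.

```python
# Hypothetical sketch: an allocate vector marks free entries; a search finds
# one and records the requesting thread's ID; a deallocate vector frees
# entries out of order, several per "cycle".

class ResourcePool:
    def __init__(self, size):
        self.free = [True] * size      # allocate vector: True = available
        self.owner = [None] * size     # thread ID per allocated entry

    def allocate(self, thread_id):
        for i, available in enumerate(self.free):   # find-first-free search
            if available:
                self.free[i] = False
                self.owner[i] = thread_id
                return i
        return None                    # no entry available this cycle

    def deallocate(self, deallocate_vector):
        # Free every flagged entry, regardless of allocation order.
        for i in deallocate_vector:
            self.free[i] = True
            self.owner[i] = None

pool = ResourcePool(4)
slots = [pool.allocate("t0"), pool.allocate("t1"), pool.allocate("t0")]
pool.deallocate([0, 2])               # out-of-order, multiple in one cycle
reused = pool.allocate("t2")          # freed entry 0 is handed out again
```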
20100318997ANNOTATING VIRTUAL APPLICATION PROCESSES - A virtualization system is described herein that facilitates communication between a virtualized application and a host operating system to allow the application to correctly access resources referenced by the application. When the operating system creates a virtualized application process, the virtualization system annotates a data structure associated with the process with an identifier that identifies the virtualized application environment associated with the process. When operating system components make requests on behalf of the originating virtual process, a virtualization driver checks the data structure associated with the process to determine that the helper process is doing work on behalf of the virtualized application process. Upon discovering that the thread is doing virtual process work, the virtualization driver directs the helper process's thread to the virtual application's resources, allowing the helper process to accomplish the requested work with the correct data.12-16-2010
20120284729PROCESSOR STATE-BASED THREAD SCHEDULING - Techniques for implementing processor state-based thread scheduling are described that improve processor performance or energy efficiency of a computing device. In one or more embodiments, a power configuration state of a processor is ascertained. The processor or another processor is selected to execute a thread based on the power configuration state of the processor. In other embodiments, power configuration states of processor cores are ascertained. Power configuration state criteria for the processor cores are defined based on the respective power configuration states. One of the processor cores is then selected based on the power configuration state criteria to execute a thread.11-08-2012
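The core-selection idea above, preferring a core whose power configuration state is cheapest to use, reduces to ranking cores by wake-up cost. The C-state names, cost table, and load tie-breaker below are assumptions of this sketch.

```python
# Hypothetical sketch: place a thread on the core already in the shallowest
# power state rather than waking a deeply sleeping core.

# Lower rank = cheaper to use right now (illustrative C-state ordering).
WAKE_COST = {"active": 0, "c1-halt": 1, "c3-sleep": 3, "c6-deep": 6}

def pick_core(core_states, loads=None, max_load=2):
    """core_states: {core: power state}. Returns the core to run the thread."""
    loads = loads or {c: 0 for c in core_states}
    eligible = [c for c in core_states if loads[c] < max_load]
    # Cheapest-to-wake core wins; ties broken by current load.
    return min(eligible, key=lambda c: (WAKE_COST[core_states[c]], loads[c]))

core = pick_core({"cpu0": "c6-deep", "cpu1": "active", "cpu2": "c1-halt"},
                 loads={"cpu0": 0, "cpu1": 1, "cpu2": 0})
```

Depending on the goal, the same ranking can be inverted: for energy efficiency, pack threads onto active cores (as here); for performance isolation, prefer idle cores despite the wake cost.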
20120284732Time-variant scheduling of affinity groups on a multi-core processor - Methods and systems for scheduling applications on a multi-core processor are disclosed, which may be based on association of processor cores, application execution environments, and authorizations that permits efficient and practical means to utilize the simultaneous execution capabilities provided by multi-core processors. The algorithm may support definition and scheduling of variable associations between cores and applications (i.e., multiple associations can be defined so that the cores an application is scheduled on can vary over time as well as what other applications are also assigned to the same cores as part of an association). The algorithm may include specification and control of scheduling activities, permitting preservation of some execution capabilities of a multi-core processor for future growth, and permitting further evaluation of application requirements against the allocated execution capabilities.11-08-2012
20120284731TWO-PASS LINEAR COMPLEXITY TASK SCHEDULER - A method for two-pass scheduling of a plurality of tasks generally including steps (A) to (C). Step (A) may assign each of the tasks to a corresponding one or more of a plurality of processors in a first pass through the tasks. The first pass may be non-iterative. Step (B) may reassign the tasks among the processors to shorten a respective load on one or more of the processors in a second pass through the tasks. The second pass may be non-iterative and may begin after the first pass has completed. Step (C) may generate a schedule in response to the assigning and the reassigning. The schedule generally maps the tasks to the processors.11-08-2012
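The two-pass idea in the abstract above can be sketched in Python. This is an illustrative reading, not the patented method: the cost model (one integer load per task), the greedy first pass, and the single-move second pass are all assumptions made for the sketch.

```python
def two_pass_schedule(task_costs, n_procs):
    """Map each task index to a processor in two non-iterative passes."""
    # First pass: greedily assign each task to the currently least-loaded processor.
    loads = [0] * n_procs
    assign = {}
    for task, cost in enumerate(task_costs):
        p = min(range(n_procs), key=lambda i: loads[i])
        loads[p] += cost
        assign[task] = p
    # Second pass: move a task to the least-loaded processor when doing so
    # shortens the load on the task's current processor.
    for task, cost in enumerate(task_costs):
        src = assign[task]
        dst = min(range(n_procs), key=lambda i: loads[i])
        if dst != src and loads[dst] + cost < loads[src]:
            loads[src] -= cost
            loads[dst] += cost
            assign[task] = dst
    return assign, loads
```

Both passes visit each task once, giving the linear complexity the title refers to.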
20120284730SYSTEM TO PROVIDE COMPUTING SERVICES - A system is provided. The system includes a computing device by which first and second commands are inputted, first and second resources disposed in communication with the computing device to be receptive of the first command and responsive to the first command with first and second energy demands in first and second response times, respectively, and a managing unit. The managing unit is disposed in communication with the computing device to be receptive of the first and second commands and with the first and second resources to allocate tasks associated with the first command to one of the first and second resources. The tasks are allocated in accordance with the second command and the second command is based on the first and second energy demands and the first and second response times.11-08-2012
20130160023SCHEDULER, MULTI-CORE PROCESSOR SYSTEM, AND SCHEDULING METHOD - In an embodiment, a scheduler coordinates the timings at which cores execute processes so that any two sequential processes can be executed consecutively. The processes are executed in the order scheduled by the scheduler by concentrating, on a specific core, processes that obstruct consecutive execution, such as external and internal interrupts. The scheduler does not always cause processes of another application to be executed during standby time periods: it determines whether the length of a standby time period is shorter than a predetermined value, and does not cause any process of the other application to be executed when the length is shorter than that value.06-20-2013
20130185729ACCELERATING RESOURCE ALLOCATION IN VIRTUALIZED ENVIRONMENTS USING WORKLOAD CLASSES AND/OR WORKLOAD SIGNATURES - Systems, methods, and apparatus for managing resources assigned to an application or service. A resource manager maintains a set of workload classes and classifies workloads using workload signatures. In specific embodiments, the resource manager minimizes or reduces resource management costs by identifying a relatively small set of workload classes during a learning phase, determining preferred resource allocations for each workload class, and then during a monitoring phase, classifying workloads and allocating resources based on the preferred resource allocation for the classified workload. In some embodiments, interference is accounted for by estimating and using an “interference index”.07-18-2013
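The classify-then-allocate flow in the abstract above can be sketched as follows. The signature format (a tuple of normalized metrics), the distance measure, and the class set are illustrative assumptions for the sketch, not the patented signatures or classes.

```python
def classify(signature, classes, tol=0.1):
    """Match a workload signature to the nearest learned class, or None."""
    best, best_d = None, float("inf")
    for name, centre in classes.items():
        # Chebyshev distance between the observed signature and the class centre.
        d = max(abs(a - b) for a, b in zip(signature, centre))
        if d < best_d:
            best, best_d = name, d
    # Unrecognized workloads fall back to the learning phase (None here).
    return best if best_d <= tol else None
```

During monitoring, a hit on a known class lets the manager reuse that class's precomputed preferred allocation instead of re-profiling the workload.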
20130185730MANAGING RESOURCES FOR MAINTENANCE TASKS IN COMPUTING SYSTEMS - Methods for managing resources for maintenance tasks in computing systems are provided. One system includes a controller and memory coupled to the controller, the memory configured to store a module. The controller, when executing the module, is configured to determine an amount of available resources for use by a plurality of maintenance tasks in a computing system and divide the available resources between the plurality of maintenance tasks based on a need for each maintenance task. One method includes determining, by a central controller, an amount of available resources for use by a plurality of maintenance tasks in a computing system and dividing the available resources between the plurality of maintenance tasks based on a need for each maintenance task. Computer storage mediums including a computer program product method for managing resources for maintenance tasks in computing systems are also provided.07-18-2013
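Dividing an available budget among maintenance tasks "based on a need for each maintenance task", as the abstract above describes, can be read as a proportional split. The need weights and the scalar budget are assumptions for this sketch.

```python
def divide_resources(available, needs):
    """Split an available resource budget in proportion to each task's need."""
    total = sum(needs)
    if total == 0:
        return [0.0] * len(needs)
    return [available * n / total for n in needs]
```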
20110283292ALLOCATION OF PROCESSING TASKS - Methods and systems for allocating processing tasks between a plurality of processing resources (11-17-2011
20110314478Allocation and Control Unit - An allocation and control unit for allocating execution threads for a task to a plurality of auxiliary processing units and for controlling the parallel execution of said execution threads by said auxiliary processing units, the task being executed in a sequential manner by a main processing unit. The allocation and control unit includes means for managing auxiliary logical processing units, means for managing auxiliary physical processing units each corresponding to an auxiliary processing unit, and means for managing the auxiliary processing units. The means for managing the auxiliary processing units include means for allocating an auxiliary logical processing unit to an execution thread to be executed, and means for managing the correspondence between the auxiliary logical processing units and the auxiliary physical processing units. The auxiliary processing units execute in parallel the execution threads for the task by way of the auxiliary logical processing units, which are allocated as late as possible and freed as early as possible.12-22-2011
20130191837FLEXIBLE TASK AND THREAD BINDING - A thread binding method includes generating a thread layout for processors in a computing system, allocating system resources for tasks of an application allocated to the processors, affinitizing the tasks and generating threads for the tasks. A thread count for each of the tasks is at least one and equal or unequal to that of any other of the tasks.07-25-2013
20130191838SYSTEM AND METHOD FOR SEPARATING MULTIPLE WORKLOADS PROCESSING IN A SINGLE COMPUTER OPERATING ENVIRONMENT - A computing system may use a persistent, unique identifier to authenticate the system, ensuring that software and system configurations are properly licensed while permitting hardware components to be replaced. The persistent, unique system identifier may be coupled to serial numbers or similar hardware identifiers of components within the computing system while permitting some of the hardware components to be deleted and changed. When components that are coupled to the persistent, unique identifier are removed or disabled, a predefined time period is provided to update the coupling of the persistent, unique identifier to an alternate hardware component in the system.07-25-2013
20130191840RESOURCE ALLOCATION BASED ON ANTICIPATED RESOURCE UNDERUTILIZATION IN A LOGICALLY PARTITIONED MULTI-PROCESSOR ENVIRONMENT - A method, apparatus and program product for allocating resources in a logically partitioned multiprocessor environment. Resource usage is monitored in a first logical partition in the logically partitioned multiprocessor environment to predict a future underutilization of a resource in the first logical partition. An application executing in a second logical partition in the logically partitioned multiprocessor environment is configured for execution in the second logical partition with an assumption made that at least a portion of the underutilized resource is allocated to the second logical partition during at least a portion of the predicted future underutilization of the resource.07-25-2013
20130191841Method and Apparatus For Fine Grain Performance Management of Computer Systems - A system and method to control the allocation of processor (or state machine) execution resources to individual tasks executing in computer systems is described. By controlling the allocation of execution resources to all tasks, each task may be provided with throughput and response time guarantees. This control is accomplished through workload metering and shaping, which delays the execution of tasks that have used up their workload allocation until sufficient time has passed to accumulate credit for execution (credit accrues over time to perform their allocated work), and workload prioritization, which gives preference to tasks based on configured priorities.07-25-2013
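The metering-and-shaping mechanism in the abstract above is essentially credit accumulation, which can be sketched as a token-bucket-style meter. The rate, burst, and time units are assumptions for illustration, not parameters from the patent.

```python
class WorkloadMeter:
    """Credit-accumulation shaper: a task earns `rate` units of work credit
    per time unit, capped at `burst`; execution is delayed until enough
    credit has accumulated for the requested work."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.credit = burst  # start with a full allocation
        self.last = 0.0

    def request(self, work, now):
        """Return 0 if the work may run now, else the delay until it may."""
        # Accrue credit for the time elapsed since the last request.
        self.credit = min(self.burst, self.credit + (now - self.last) * self.rate)
        self.last = now
        if work <= self.credit:
            self.credit -= work
            return 0.0
        # Not enough credit: delay until the shortfall has accrued.
        return (work - self.credit) / self.rate
```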
20130191839INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND COMPUTER-READABLE STORAGE MEDIUM - When a process starts using a resource, management information is stored. Management information includes, in association with one another, process identification information indicating the process, resource identification information indicating the resource to be used by the process, and processor identification information indicating a processor allocated to the process. When waking up the process, the processor associated with that process in the management information is allocated to it.07-25-2013
20120291039SYSTEM AND METHOD FOR MANAGING A RESOURCE - Systems and methods for managing a resource are disclosed. Resource may include vendors, suppliers, partners and the like. The systems allow users to conduct a weighted analysis of various resources and compare multiple resources on the same scale. Moreover, the systems are configured to grade various resources based on their strategic value to a business. This analysis and the resulting strategic value may be based on qualitative data provided by users and quantitative data captured from the business relationship between the business and the resource.11-15-2012
20120017219Multi-CPU Domain Mobile Electronic Device and Operation Method Thereof - A multi-CPU domain mobile electronic device includes: a first CPU domain, comprising at least a first migration agent unit, the first migration agent unit detecting a task migration condition, determining whether to migrate a migratable task, and sending an associated migration event, and a second CPU domain, comprising at least a second migration agent unit, the second migration agent unit receiving the migratable task from the first migration agent unit.01-19-2012
20120017218DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of an interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are over utilizing the resources on the node.01-19-2012
20120030685SYSTEM AND METHOD FOR PROVIDING DYNAMIC PROVISIONING WITHIN A COMPUTE ENVIRONMENT - The disclosure relates to systems, methods and computer-readable media for dynamically provisioning resources within a compute environment. The method aspect comprises analyzing a queue of jobs to determine an availability of compute resources for each job, determining an availability of a scheduler of the compute environment to satisfy all service level agreements (SLAs) and target service levels within a current configuration of the compute resources, determining possible resource provisioning changes to improve SLA fulfillment, determining a cost of provisioning, and, if provisioning changes improve overall SLA delivery, re-provisioning at least one compute resource.02-02-2012
20120030684RESOURCE ALLOCATION - At least one candidate allocation time period is determined according to a resource benefit time step function. The resource benefit does not vary with time in the at least one candidate allocation time period. Resources and relations between the resources are converted into sub-resource groups according to the resource cost time step function. Each of the sub-resource groups comprise sub-resources that correspond to the resources and relations between the sub-resources. The resource benefits and resource costs of the sub-resources do not vary with time. With respect to the at least one candidate allocation time period, the sub-resource groups are input into a resource schedule optimizer to obtain optimized results with respect to the sub-resource groups. An optimized result, with respect to the at least one candidate allocation time period, is obtained from the optimized results with respect to the sub-resource groups.02-02-2012
20120030683Method of forming a personal mobile grid system and resource scheduling thereon - The method of forming a personal mobile grid system and resource scheduling thereon provides for the formation of a personal network, a personal area network or the like having a computational grid superimposed thereon. Resource scheduling in the personal mobile grid is performed through an optimization model based upon the nectar acquisition process of honeybees.02-02-2012
20130198753FULL EXPLOITATION OF PARALLEL PROCESSORS FOR DATA PROCESSING - For full exploitation of parallel processors for data processing, a set of parallel processors is partitioned into disjoint subsets according to indices of the set of the parallel processors. The size of each of the disjoint subsets corresponds to the number of processors assigned to the processing of the data chunks at one of the layers. Each of the processors is assigned to different layers in different data chunks such that each of the processors is busy and the data chunks are fully processed within a number of time steps equal to the number of the layers. A transition function is devised from the indices of the set of the parallel processors at one time step to the indices of the set of the parallel processors at the following time step.08-01-2013
20130198754FULL EXPLOITATION OF PARALLEL PROCESSORS FOR DATA PROCESSING - Exemplary method, system, and computer program product embodiments for full exploitation of parallel processors for data processing are provided. In one embodiment, by way of example only, a set of parallel processors is partitioned into disjoint subsets according to indices of the set of the parallel processors. The size of each of the disjoint subsets corresponds to the number of processors assigned to the processing of the data chunks at one of the layers. Each of the processors is assigned to different layers in different data chunks such that each of the processors is busy and the data chunks are fully processed within a number of time steps equal to the number of the layers. A transition function is devised from the indices of the set of the parallel processors at one time step to the indices of the set of the parallel processors at the following time step.08-01-2013
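The index-based transition function described in the two abstracts above can be illustrated with a simple cyclic rotation: each time step, a processor advances to the next layer on the next data chunk, so every processor stays busy and every chunk is fully processed after one step per layer. The rotation formula is an illustrative assumption, not the patented function.

```python
def layer_of(proc_index, step, n_layers):
    """Layer worked on by a processor at a given time step (cyclic sketch)."""
    return (proc_index + step) % n_layers
```

With this rotation, at any fixed step the processors cover all layers (a partition into disjoint subsets), and over `n_layers` steps each processor visits every layer exactly once.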
20130198755APPARATUS AND METHOD FOR MANAGING RESOURCES IN CLUSTER COMPUTING ENVIRONMENT - Disclosed herein are a resource manager node and a resource management method. The resource manager node includes a resource management unit, a resource policy management unit, a shared resource capability management unit, a shared resource status monitoring unit, and a shared resource allocation unit. The resource management unit performs an operation necessary for resource allocation when a resource allocation request is received. The resource policy management unit determines a resource allocation policy based on the characteristic of the task, and generates resource allocation information. The shared resource capability management unit manages the topology of nodes, information about the capabilities of resources, and resource association information. The shared resource status monitoring unit monitors and manages information about the status of each node and the use of allocated resources. The shared resource allocation unit sends a resource allocation request to at least one of the plurality of nodes.08-01-2013
20130198758TASK DISTRIBUTION METHOD AND APPARATUS FOR MULTI-CORE SYSTEM - The present invention relates generally to a task distribution method and apparatus for systems in a real-time Operating System (OS) environment using a multi-core Central Processing Unit (CPU). The present invention is configured to set roles of multiple cores included in the multi-core system in such a way as to divide the cores into real-time cores for executing real-time tasks and non-real-time cores for executing non-real-time tasks, allocate real-time tasks to cores, a role of which has been set to that of real-time cores, and non-real-time tasks to cores, a role of which has been set to that of non-real-time cores, based on the set roles of the cores, allow the respective cores to execute the tasks allocated thereto, and collect information about a procedure of executing the tasks as task execution procedure information, and change the set roles of the cores based on the collected information.08-01-2013
20130198756TRANSFERRING A PARTIAL TASK IN A DISTRIBUTED COMPUTING SYSTEM - A method begins by a dispersed storage (DS) processing module determining that partial task processing resources of a first DST execution unit are projected to be available. The method continues with the DS processing module ascertaining that partial task processing resources of a second DST execution unit are projected to be overburdened. The method continues with the DS processing module receiving, from the second DST execution unit, a partial task assigned to the second DST execution unit in accordance with a partial task allocation transfer policy to produce an allocated partial task and executing the allocated partial task.08-01-2013
20130198757RESOURCE ALLOCATION METHOD AND APPARATUS OF GPU - A resource allocation method and apparatus utilize GPU resources efficiently by sorting General Purpose GPU (GPGPU) tasks into operations and combining the same operations into a request. The resource allocation method of a Graphic Processing Unit (GPU) according to the present disclosure includes receiving a task including at least one operation; storing the at least one operation in units of requests; merging data of the same operations per request; and allocating GPU resources according to an execution order of the requests.08-01-2013
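The merge step in the abstract above, combining identical operations across tasks into one request each, can be sketched as follows. The task representation (lists of operation-name/data pairs) is an assumption for the sketch.

```python
def merge_operations(tasks):
    """Group identical operation kinds across tasks and merge their data,
    so the GPU is dispatched once per operation kind rather than per task."""
    requests = {}
    for task in tasks:
        for op, data in task:
            requests.setdefault(op, []).extend(data)
    return requests
```

One batched kernel launch per merged request is what lets the GPU amortize dispatch overhead across tasks.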
20130104141DIVIDED CENTRAL DATA PROCESSING - A circuit configuration for a data processing system and a corresponding method for executing multiple tasks by way of a central processing unit having a processing capacity assigned to the processing unit, the circuit configuration being configured to distribute the processing capacity of the processing unit uniformly among the respective tasks, and to process the respective tasks in time-offset fashion until they are respectively executed.04-25-2013
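Uniform, time-offset sharing of one processing unit, as in the abstract above, is essentially round-robin time slicing. The quantum and the task representation (name to remaining work) are assumptions for this sketch.

```python
def round_robin(tasks, quantum):
    """Return the execution order of (task, slice) pairs under uniform
    time slicing: each task runs at most `quantum` units per turn."""
    order = []
    remaining = dict(tasks)
    while remaining:
        for name in list(remaining):
            slice_ = min(quantum, remaining[name])
            order.append((name, slice_))
            remaining[name] -= slice_
            if remaining[name] == 0:
                del remaining[name]
    return order
```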
20120066687RESOURCE MANAGEMENT SYSTEM - A resource management system for managing resources in a computing and/or communications resource infrastructure is disclosed. The system comprises a database for storing a model of the resource infrastructure. The database defines a set of resources provided by the infrastructure; a set of software applications operating within the infrastructure and utilising resources; and associations between given applications in the model and given resources to indicate utilisation of the given resources by the given applications. The model can be used to perform resource utilisation analysis and failure impact analysis.03-15-2012
20120066686DEMAND RESPONSE SYSTEM INCORPORATING A GRAPHICAL PROCESSING UNIT - A system and approach for utilizing a graphical processing unit in a demand response program. A demand response server may have numerous demand response resources connected to it. The server may have a main processor and an associated memory, and a graphic processing unit connected to the main processor and memory. The graphic processing unit may have numerous cores which incorporate processing units and associated memories. The cores may concurrently process demand response information and rules of the numerous resources, respectively, and provide signal values to the main processor. The main processor may then provide demand response signals based at least partially on the signal values, to each of the respective demand response resources.03-15-2012
20130205300METHOD AND SYSTEM FOR MANAGING RESOURCE - The present invention discloses a method and system for managing resources, wherein the method comprises: a resource editor accepts that a user adds a resource and defines an ID of the resource08-08-2013
20130205301SYSTEMS AND METHODS FOR TASK GROUPING ON MULTI-PROCESSORS - Embodiments of the present invention provide improved systems and methods for grouping instruction entities. In one embodiment, a system comprises a processing cluster to execute software, the processing cluster comprising a plurality of processing units, wherein the processing cluster is configured to execute the software as a plurality of instruction entities. The processing cluster is further configured to execute the plurality of instruction entities in a plurality of execution groups, each execution group comprising one or more instruction entities, wherein the processing cluster executes a group of instruction entities in the one or more instruction entities in an execution group concurrently. Further, the execution groups are configured so that a plurality of schedule-before relationships are established, each schedule-before relationship being established among a respective set of instruction entities by executing the plurality of instruction entities in the plurality of execution groups.08-08-2013
20130205302INFORMATION PROCESSING TERMINAL AND RESOURCE RELEASE METHOD - In an information processing terminal, a second screen activation monitoring unit that has received a focus OFF notification sends a domain switch request notification to a domain control unit, and the domain control unit that has received the notification sends a domain switch notification to a first OS. Then, the first OS sends a focus ON notification to a first screen activation monitoring unit and further sends the focus OFF notification to a first application. A resource is thereby released by the first application that is implemented to release an acquired resource upon receiving the focus OFF notification.08-08-2013
20120079492VECTOR THROTTLING TO CONTROL RESOURCE USE IN COMPUTER SYSTEMS - Embodiments are provided for managing the system performance of resources performing tasks in response to task requests from tenants. In one aspect, a system comprises at least one resource configured to perform at least one admitted task with an impact under the control of a computer system. The computer system provides services to more than one tenant. The computer system comprises a strategist configured to assess the impact of the admitted task to create a cost function vector containing multiple cost function specifications and a budget policy vector containing multiple budget policies, and an actuator. The actuator receives the cost function vector and the budget policy vector from the strategist, receives a task request from one of the more than one tenants, and calculates cost functions based upon the cost function vector to predict the impact of the task request on the resources for each of the task requests. The actuator throttles the task requests based upon the budget policies for the impact on the resources to create at least one of the admitted task performed by the resource and a delayed task request.03-29-2012
20120304189COMPUTER SYSTEM AND ITS CONTROL METHOD - It is an object of this invention to provide a computer system and its control method capable of preventing allocation of a resource(s), which is not intended by a superior administrator, to a certain storage administrator even when the superior administrator sets a certain authority to that storage administrator and intends to allocate a resource(s), which is required to enable this authority, to the storage administrator.11-29-2012
20120084787APPARATUS AND METHOD FOR CONTROLLING A RESOURCE UTILIZATION POLICY IN A VIRTUAL ENVIRONMENT - An apparatus and method for controlling a resource utilization policy in a virtual environment are provided. The apparatus may increase network throughput by dynamically adjusting the resource utilization policies of a driver domain that can directly access a shared device, and a guest driver that cannot directly access the shared device. In addition, the apparatus may improve the efficiency of the use of CPU resources by appropriately adjusting the CPU occupancy rates of the driver and guest domains.04-05-2012
20130212593Controlled Growth in Virtual Disks - A method, an apparatus and an article of manufacture for controlling growth in virtual disk size. The method includes limiting a guest virtual machine file in a hypervisor from allocating a new disk block as allocated space, wherein a virtual disk on a virtual machine is mapped to the guest virtual machine file, and facilitating the virtual disk to reuse a previously allocated and freed disk block for the allocated space to control growth in virtual disk size.08-15-2013
20130212594METHOD OF OPTIMIZING PERFORMANCE OF HIERARCHICAL MULTI-CORE PROCESSOR AND MULTI-CORE PROCESSOR SYSTEM FOR PERFORMING THE METHOD - Disclosed is a multi-core processor, and more particularly, a method of optimizing performance of a multi-core processor having a hierarchical structure and a multi-core processor system for performing the method. To this end, the method of optimizing performance of a hierarchical multi-core processor including a plurality of kernel cores, each kernel core including a plurality of cores sharing a memory, includes calculating a correlation between a plurality of threads by a thread correlation managing module within a main processor; grouping the plurality of threads into two or more groups according to information on the calculated correlation by the main processor; and allocating each of the grouped threads within an equal group to each core within an equal kernel core of the hierarchical multi-core processor by a scheduler of the main processor.08-15-2013
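The correlation-based grouping step in the abstract above can be sketched with a greedy pass: a thread joins the first group where it correlates at or above a threshold with every member, so each group can then be pinned to one kernel core. The pairwise-correlation representation and the threshold are assumptions for the sketch, not the patented grouping algorithm.

```python
def group_threads(threads, corr, threshold):
    """Greedily group threads by pairwise correlation; corr maps
    frozenset({t, u}) to a correlation score in [0, 1]."""
    groups = []
    for t in threads:
        for g in groups:
            # Join a group only if t correlates strongly with all its members.
            if all(corr.get(frozenset((t, u)), 0.0) >= threshold for u in g):
                g.append(t)
                break
        else:
            groups.append([t])  # no compatible group found; start a new one
    return groups
```

A scheduler could then map each resulting group to the cores of a single kernel core so that correlated threads share that kernel core's memory.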