
Priority scheduling

Subclass of:

718 - Electrical computers and digital processing systems: virtual machine task or process management or task management/control
  718100000 - TASK MANAGEMENT OR CONTROL
    718102000 - Process scheduling
Entries (Document number - Title - Publication date):
20130031558 - Scheduling Mapreduce Jobs in the Presence of Priority Classes (01-31-2013): Techniques for scheduling one or more MapReduce jobs in the presence of one or more priority classes are provided. The techniques include obtaining a preferred ordering for one or more MapReduce jobs, wherein the preferred ordering comprises one or more priority classes, prioritizing the one or more priority classes subject to one or more dynamic minimum slot guarantees for each priority class, and iteratively employing a MapReduce scheduler, once per priority class, in priority class order, to optimize performance of the one or more MapReduce jobs.

20130031557 - System To Profile And Optimize User Software In A Managed Run-Time Environment (01-31-2013): Method, apparatus, and system for monitoring performance within a processing resource, which may be used to modify user-level software. Some embodiments of the invention pertain to an architecture that allows a user to improve software running on a processing resource on a per-thread basis in real time and without incurring significant processing overhead.

20090133027 - Systems and Methods for Project Management Task Prioritization (05-21-2009): A project management task prioritization system is provided to refine the prioritization factors for tasks in a project based on changes to the order of performing the tasks. The initial proposed order for performing the tasks is presented by the system to the person responsible for the tasks in a graphical format that allows the person to drag and drop the tasks, adjusting them to their preferred order. A neural network comparator compares the task prioritization factors associated with each pair of tasks whose order is altered, in order to determine a relative priority. The neural network system then updates the task prioritization factors based on the changes to the order in which the tasks are to be performed.

20120266175 - METHOD AND DEVICE FOR BALANCING LOAD OF MULTIPROCESSOR SYSTEM (10-18-2012): A method and a device for balancing the load of a multiprocessor system, in the field of multiprocessor resource allocation, aim to reduce the number of remote-node memory accesses, or the amount of data copied, when processes migrated to a target Central Processing Unit (CPU) are executed. The method comprises: determining the local CPU and the target CPU in the multiprocessor system; ordering migration priorities based on the size of the memory space occupied by the processes in the queue of the local CPU, wherein the less memory space a process occupies, the higher its migration priority; and migrating the process with the highest migration priority, other than processes currently being executed in the queue of the local CPU, to the target CPU.
20130031556 - DYNAMIC REDUCTION OF STREAM BACKPRESSURE (01-31-2013): Techniques are described for eliminating backpressure in a distributed system by changing the rate at which data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current or potential backpressure is identified, the operator graph or data rates may be altered to alleviate it. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. In another embodiment, if a request to add one or more processing elements might cause future backpressure, the request may be refused.

20090193424 - METHOD OF PROCESSING INSTRUCTIONS IN PIPELINE-BASED PROCESSOR AND CORRESPONDING PROCESSOR (07-30-2009): The present invention discloses a method of processing instructions in a pipeline-based central processing unit, wherein the pipeline is partitioned into base pipeline stages and enhanced pipeline stages according to function, the base pipeline stages being always active, and the enhanced pipeline stages being activated or shut down according to the performance requirements of a workload. The present invention further discloses a method in which each pipeline stage is partitioned into a base module and at least one enhanced module, the base module being always active, and the enhanced module being activated or shut down according to the performance requirements of a workload.

20110202924 - Asynchronous Task Execution (08-18-2011): Techniques for asynchronous task execution are described. In an implementation, tasks may be initiated and executed asynchronously, thereby allowing a plurality of calls to be made in parallel. Each task may be associated with a respective timeout that triggers an end to execution of the task. If a timeout for a low-priority task expires before both the low-priority task and a relatively higher-priority task have completed, the low-priority task may use the higher-priority task to extend its execution time, in order to allow additional time to perform the low-priority task.

20120180060 - PREDICTION BASED PRIORITY SCHEDULING (07-12-2012): Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history can be represented by a historical log that monitors the receipt of high-priority task request submissions over time. This historical log, in combination with other user-defined scheduling rules, is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the priority level of the task requests they contain. The user-defined scheduling rules give scheduling priority to higher-priority task requests, and the historical log is used to predict subsequent submissions of high-priority task requests so that lower-priority task requests that would interfere with them are delayed or not scheduled for processing.

20120180059 - TIME-VALUE CURVES TO PROVIDE DYNAMIC QoS FOR TIME SENSITIVE FILE TRANSFERS (07-12-2012): A method and apparatus are shown and described that allow Quality of Service to be controlled at a temporal granularity. Time-value curves, generated for each task, ensure that mission resources are utilized in a manner that optimizes mission performance. Although the invention shows and describes time-value curves as applied to mission workflow tasks, it is not limited to this application; time-value curves may be used to optimize the delivery of any resource to any consumer by taking into account the dynamic environment of the consumer and resource.
20080256544 - Stateless task dispatch utility (10-16-2008): Computer resource management techniques involving receiving notification of an available resource, generating a set of tasks that could be performed by the resource, and dispatching one of the tasks on the resource. Related systems and software are also discussed. Some techniques can be used for automatic software building and testing.

20100077399 - Methods and Systems for Allocating Interrupts In A Multithreaded Processor (03-25-2010): A multithreaded processor capable of allocating interrupts is described. In one embodiment, the multithreaded processor includes an interrupt module and threads for executing tasks. The interrupt module can identify a priority for each thread based on the task priority of the tasks being executed by the threads, and assign an interrupt to a thread based at least on its priority.

20130081041 - Circuit arrangement for execution planning in a data processing system (03-28-2013): A circuit arrangement and method for a data processing system for executing a plurality of tasks with a central processing unit having a processing capacity allocated to the processing unit; the circuit arrangement being configured to allocate the processing unit to the specific tasks in a time-staggered manner, so that the tasks are processed in a selectable order and tasks without a current processing request are skipped; the circuit arrangement including a prioritization order control unit that determines the order in which the tasks are executed; and in response to each selection of a task for processing, the order of the tasks being redetermined and the selection being controlled so that, for a number N of tasks, a maximum of N time units elapse until an active task is once more allocated processing capacity by the processing unit.

20130081040 - MANUFACTURING PROCESS PRIORITIZATION (03-28-2013): A manufacturing process prioritization system. In one embodiment, the system includes at least one computing device adapted to prioritize a very large scale integration (VLSI) process by performing actions including: querying a database for task-based data associated with a set of manufacturing tasks; applying at least one rule to the task-based data to prioritize a first one of the set of manufacturing tasks over a second one; and providing a set of processing instructions for processing a manufactured product according to the prioritization.

20130081039 - Resource allocation using entitlements (03-28-2013): A data handling apparatus is adapted to facilitate resource allocation, allocating the resources upon which objects execute. The apparatus can comprise resource allocation logic and a scheduler. The resource allocation logic can dynamically set entitlement values for a plurality of resources comprising physical/logical resources and operational resources. The entitlement values are specified as predetermined rights, wherein a process of a plurality of processes is entitled to a predetermined percentage of operational resources. The scheduler can monitor the entitlement values and schedule the processes based on the priority of the entitlement values.

20130081042 - DYNAMIC REDUCTION OF STREAM BACKPRESSURE (03-28-2013): Techniques are described for eliminating backpressure in a distributed system by changing the rate at which data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current or potential backpressure is identified, the operator graph or data rates may be altered to alleviate it. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained.
20130036423 - SYSTEMS AND METHODS FOR BOUNDING PROCESSING TIMES ON MULTIPLE PROCESSING UNITS (02-07-2013): Embodiments of the present invention provide improved systems and methods for processing multiple tasks. In one embodiment a method comprises: selecting a processing unit as a master processing unit from a processing cluster comprising multiple processing units, the master processing unit selected to execute master instruction entities; reading a master instruction entity from memory; scheduling the master instruction entity to execute on the master processing unit; identifying an execution group containing the master instruction entity, the execution group defining a set of related entities; when the execution group contains at least one slave instruction entity, scheduling the at least one slave instruction entity to execute on a processing unit other than the master processing unit during the execution of the master instruction entity; and terminating execution of instruction entities related by the execution group when a master instruction entity is executed that is not a member of the execution group.

20130042251 - Technique of Scheduling Tasks in a System (02-14-2013): A technique for scheduling tasks in a system is provided. A method implementation of this technique comprises the steps of providing at least one association between a task and a range of priorities for the task, and using the at least one association for task scheduling. The task scheduling may be provided by a task scheduling unit having access to a memory unit.

20130042249 - Processing resource allocation within an integrated circuit supporting transaction requests of different priority levels (02-14-2013): An integrated circuit

20130042250 - METHOD AND APPARATUS FOR IMPROVING APPLICATION PROCESSING SPEED IN DIGITAL DEVICE (02-14-2013): A method and apparatus for improving application processing speed in a digital device running in an embedded environment where processor performance may not be sufficiently powerful. Speed is improved by detecting an execution request for an application; identifying the group to which the requested application belongs, among preset groups with different priorities; scheduling the requested application according to the priority assigned to the identified group; and executing the requested application based on the scheduling result.

20100043004 - METHOD AND SYSTEM FOR COMPUTER SYSTEM DIAGNOSTIC SCHEDULING USING SERVICE LEVEL OBJECTIVES (02-18-2010): A system and method for automatically scheduling health diagnostics within a computer system is disclosed. In one embodiment, a method for automatically scheduling health diagnostics using service level objectives (SLOs) includes reviewing the SLOs associated with each managed server, invoking each managed server to diagnose the computer system based on the associated SLOs, receiving diagnostic status data and computer system health data from each managed server, analyzing the received data, and implementing any needed corrective actions based on the analysis and predetermined corrective-action criteria.

20100043003 - SPEEDY EVENT PROCESSING (02-18-2010): A method for event positioning includes categorizing events into event groups based on a priority level, buffering the events in each event group into a group event queue, and determining an optimized position for events within each queue based, at least in part, on a processing time and an expected response time for each event in the group event queue.
20090158288 - METHOD AND APPARATUS FOR MANAGING SYSTEM RESOURCES (06-18-2009): A computer implemented method, apparatus, and computer usable program product for system management. The process schedules a set of application tasks to form a schedule of tasks in response to receiving the set of application tasks from a registration module. The process then performs a feasibility analysis on the schedule of tasks to identify periods of decreased system activity. Thereafter, the process schedules a set of system management tasks during the periods of decreased system activity to form a prioritized schedule of tasks.

20100107170 - GROUP WORK SORTING IN A PARALLEL COMPUTING SYSTEM (04-29-2010): A "group work sorting" technique is used in a parallel computing system that executes multiple items of work across multiple parallel processing units, where each parallel processing unit processes one or more of the work items according to their positions in a prioritized work queue that corresponds to that processing unit. When implementing the technique, a parallel processing unit receives a new work item to be placed into a first work queue that corresponds to the processing unit, and receives data indicating where one or more other parallel processing units would prefer to place the new work item in their own prioritized work queues. The parallel processing unit uses the received data as a guide in placing the new work item into the first work queue.

20100107169 - PERIODICAL TASK EXECUTION APPARATUS, PERIODICAL TASK EXECUTION METHOD, AND STORAGE MEDIUM (04-29-2010): A periodical task execution apparatus executes one or more periodical tasks to be executed in a predetermined sequence, including a comparison section configured to compare, when an activation request for any one of the one or more periodical tasks is made, priority of a task

20100107168 - Scheduling for Real-Time Garbage Collection (04-29-2010): Techniques are disclosed for schedule management. By way of example, a method for managing performance of tasks in threads associated with at least one processor comprises the following steps. One or more units of a first task type are executed, and a count of the executed units is maintained; the count represents credits accumulated by the processor for executing units of the first task type. One or more units of a second task type are then executed. During their execution, a request to execute at least one further unit of the first task type is received, and the amount of credit in the count is checked. When there is sufficient credit, the request is forgone and execution of the second task type continues; when there is insufficient credit, the further unit of the first task type is executed. The first task type may be an overhead task type, such as garbage collection, and the second task type may be an application task type.
20120185862 - Managing Scheduling of Processes (07-19-2012): A mechanism dynamically modifies the base priority of a spawned set of processes according to their actual resource utilization (CPU or I/O wait time) and to a priority class assigned to them at startup. In this way it is possible to maximize CPU and I/O resource usage without degrading the interactive experience of the users currently logged on to the system.

20090125909 - Device, system, and method for multi-resource scheduling (05-14-2009): A method, apparatus, and system for selecting and executing the highest-prioritized task for a resource from one of a first and a second expired scheduling array, where the first and second expired scheduling arrays prioritize tasks for using the resource; tasks in the first expired scheduling array may be prioritized according to a proportionality mechanism, and tasks in the second expired scheduling array may be prioritized according to an importance factor determined, for example, based on user input. Other embodiments are described and claimed.

20120167110 - INFORMATION PROCESSING APPARATUS CAPABLE OF SETTING PROCESSING PRIORITY OF ACCESS, METHOD OF CONTROLLING THE INFORMATION PROCESSING APPARATUS, PROGRAM, AND STORAGE MEDIUM (06-28-2012): An information processing apparatus that gives priority to access made by a usual manual operation for execution of the original functions of the apparatus, even when automatically programmed access for index creation from an external apparatus to the storage and access for execution of the original functions occur concurrently. A CPU causes a priority to be set for each processing operation requested by a request, executes the processing based on the set priority, and returns a processing result to the requesting source. If the received request is a specific request, the CPU calculates the number of times that the period elapsed between returning the response and receiving the next request falls within a predetermined time period, and determines whether or not to change the priority based on the calculated number of times.

20120167109 - FRAMEWORK FOR RUNTIME POWER MONITORING AND MANAGEMENT (06-28-2012): Systems and methods of managing power in a computing platform may involve monitoring the runtime power consumption of two or more of a plurality of hardware components in the platform to obtain a plurality of runtime power determinations. The method can also include exposing one or more of the runtime power determinations to an operating system associated with the platform.

20090313631 - AUTONOMIC WORKLOAD PLANNING (12-17-2009): A method of automatically optimizing workload scheduling. Target values for workload characteristics and constraint specifications are received. Generation of a first execution plan is initiated. Initial constraint values conforming to the constraint specifications are selected; each constraint value constrains tasks included in the workload. The first execution plan is executed, thereby determining measurements of workload characteristics. Contributions indicating differences between workload characteristic measurements and target values are determined and stored. Generation of a next execution plan is initiated. Modified constraint values conforming to the constraint specifications are selected. Changes in the workload characteristics based on the modified constraint values are evaluated. An optimal or acceptable sub-optimal solution in the space of solutions defined by the constraint specifications is determined, resulting in new values for the constraints. After replacing the initial values with the new values, the next execution plan is generated and executed.
20090307700 - Multithreaded processor and a mechanism and a method for executing one hard real-time task in a multithreaded processor (12-10-2009): The invention relates to a mechanism for executing one Hard Real-Time (HRT) task in a multithreaded processor, comprising: means for determining the slack time of the HRT task; means for starting the execution of the HRT task; means for verifying whether the HRT task requires a resource that is being used by at least one Non-Hard Real-Time (NHRT) task; means for determining the delay caused by the NHRT task; means for subtracting the determined delay from the slack time of the HRT task; means for verifying whether the new value of the slack time is lower than a critical threshold; and means for stopping the NHRT tasks.

20120222036 - IMAGE FORMING APPARATUS (08-30-2012): An MFP is provided with a main CPU for controlling operation of the MFP according to an operating condition set on the MFP, a job management table for sequentially registering input jobs by priority, and a job execution control portion for determining whether or not to permit execution of each job, in order of registration, starting from the highest-priority job registered in the job management table. The job execution control portion calculates, based on the job condition of a job being considered for permission, the CPU utilization associated with executing the job; restricts an operating condition of the MFP when the calculated CPU utilization exceeds a predetermined value; and permits execution of the job under the restricted operating condition when the CPU utilization under that restriction falls to the predetermined value or lower.

20120192197 - AUTOMATED CLOUD WORKLOAD MANAGEMENT IN A MAP-REDUCE ENVIRONMENT (07-26-2012): A computing device associated with a cloud computing environment identifies a first worker cloud computing device, from a group of worker cloud computing devices, with available resources sufficient to meet the required resources for the highest-priority task of a computing job comprising a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from assigning the highest-priority task to the first worker device, based upon ownership information associated with the computing job and with at least one other task already assigned to that device. The highest-priority task is assigned to the first worker device in response to determining that no ownership conflict would result from the assignment.

20120192196 - COMMAND EXECUTION DEVICE, COMMAND EXECUTION SYSTEM, COMMAND EXECUTION METHOD AND COMMAND EXECUTION PROGRAM - (07-26-2012): In order to improve processing efficiency, a command execution device includes: a behavior type decision unit, which decides a behavior type indicating the content of a data input/output operation according to the content of the data processing executed by an entered command; a command storage unit, which refers to setting information set in advance for each behavior type and stores the command in a command queue created for each priority level, based on the priority level included in the setting information; and a command execution unit, which fetches, from among the commands stored in the command queues, a command stored in the queue section having the highest priority level, and executes it.

20120192195 - SCHEDULING THREADS (07-26-2012): Scheduling threads in a multi-threaded/multi-core processor having a given instruction window, and scheduling a predefined number N of threads among a set of M active threads in each context switch interval, are provided. The actual power consumption of each running thread during a given context switch interval is determined, and a predefined priority level is associated with each of the active threads based on its measured power consumption. The power consumption expected for each active thread during the next context switch interval in the current instruction window (CIW_Power_Th) is predicted, and the set of threads to be scheduled is selected from among the active threads based on the priority level associated with each active thread and its predicted power consumption in the current instruction window.
20120192194 - Lock Free Acquisition and Release of a Semaphore in a Multi-Core Processor Environment (07-26-2012): A method for acquisition of a semaphore by a thread includes decrementing a semaphore count, storing the current thread context of the semaphore when the semaphore count is less than a first predetermined value, determining the release count of a pending queue associated with the semaphore, where the pending queue indicates unpended threads of the semaphore, and adding the thread to the pending queue when the release count is less than a second predetermined value.

20130074089 - METHOD AND APPARATUS FOR SCHEDULING RESOURCES IN SYSTEM ARCHITECTURE (03-21-2013): The present invention relates to a method and apparatus for scheduling resources in a system architecture. In one embodiment, this is accomplished by temporarily storing jobs from a plurality of queues, where a weight is set for each queue; forming a set of elements, wherein the set size is based on the weights assigned to each queue; selecting one element from the formed set in an order, which can be predefined or random; and serving at least one job from the plurality of queues, wherein the job is selected from the queue that corresponds to the chosen element of the formed set.

20130074088 - SCHEDULING AND MANAGEMENT OF COMPUTE TASKS WITH DIFFERENT EXECUTION PRIORITY LEVELS (03-21-2013): One embodiment of the present invention sets forth a technique for dynamically scheduling and managing compute tasks with different execution priority levels. The scheduling circuitry organizes the compute tasks into groups based on priority levels. The compute tasks may then be selected for execution using different scheduling schemes, such as round-robin, priority, and partitioned priority. Each group is maintained as a linked list of pointers to compute tasks that are encoded as queue metadata (QMD) stored in memory; a QMD encapsulates the state needed to execute a compute task. When a task is selected for execution by the scheduling circuitry, its QMD is removed from the group and transferred to a table of active compute tasks. Compute tasks are then selected from the active task table for execution by a streaming multiprocessor.

20130074087 - METHODS, SYSTEMS, AND PHYSICAL COMPUTER STORAGE MEDIA FOR PROCESSING A PLURALITY OF INPUT/OUTPUT REQUEST JOBS (03-21-2013): Methods, systems, and physical computer-readable storage media for processing a plurality of IO request jobs are provided. The method includes determining whether one or more request jobs are not meeting a QoS target, each such job having a corresponding priority; selecting the highest-priority job from those jobs, if any are not meeting the QoS target; determining whether the highest-priority job has a corresponding effective rate limit imposed on it and, if so, relaxing that limit; and, if not, selecting one or more lower-priority jobs and tightening their corresponding effective limits in accordance with a delay factor limit.
20110067032METHOD AND SYSTEM FOR RESOURCE MANAGEMENT USING FUZZY LOGIC TIMELINE FILLING - In one or more embodiments, a method and system for scheduling resources is provided. The method includes receiving, in a processor, a plurality of concurrent processing requests. Each concurrent processing request is associated with at least one device configured to perform one or more different tasks at a given time. The at least one device has a predefined processing capacity. If one or more of the plurality of concurrent processing requests exceeds the predefined capacity of the at least one device at the given time, the processor determines a priority score for each concurrent processing request based, at least in part, on a time value associated with each concurrent processing request and whether any one of the concurrent processing requests is currently being processed at the given time. Responsive to the determined priority score at the given time, a highest priority processing request is executed for the at least one device.03-17-2011
20130061233EFFICIENT METHOD FOR THE SCHEDULING OF WORK LOADS IN A MULTI-CORE COMPUTING ENVIRONMENT - A computer in which a single queue is used to implement all of the scheduling functionalities of shared computer resources in a multi-core computing environment. The length of the queue is determined uniquely by the relationship between the number of available work units and the number of available processing cores. Each work unit in the queue is assigned an execution token. The value of the execution token represents an amount of computing resources allocated for the work unit. Work units having non-zero execution tokens are processed using the computing resources allocate to each one of them. When a running work unit is finished, suspended or blocked, the value of the execution token of at least one other work unit in the queue is adjusted based on the amount of computing resources released by the running work unit.03-07-2013
20130061232Method And Device For Maintaining Data In A Data Storage System Comprising A Plurality Of Data Storage Nodes - A method and device for maintaining data in a data storage system, comprising a plurality of data storage nodes, the method being employed in a storage node in the data storage system and comprising: monitoring and detecting, conditions in the data storage system that imply the need for replication of data between the nodes in the data storage system; initiating replication processes in case such a condition is detected, wherein the replication processes include sending multicast and unicast requests to other storage nodes, said requests including priority flags, receiving multicast and unicast requests from other storage nodes, wherein the received requests include priority flags, ordering the received requests in different queues depending on their priority flags, and dealing with requests in higher priority queues with higher frequency than requests in lower priority queues.03-07-2013
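The per-priority queues of entry 20130061232, where higher-priority queues are dealt with at higher frequency, can be sketched as a weighted round-robin. The 4:2:1 weights and the class layout are illustrative assumptions:

```python
from collections import deque

class PriorityQueues:
    """Requests carrying priority flags land in per-priority queues;
    higher-priority queues are polled more often via weighted round-robin."""

    def __init__(self, weights=(4, 2, 1)):
        self.queues = [deque() for _ in weights]  # index 0 = highest priority
        self.weights = weights

    def enqueue(self, request, priority_flag):
        self.queues[priority_flag].append(request)

    def drain_cycle(self):
        """One service cycle: take up to `weight` requests from each queue."""
        served = []
        for q, w in zip(self.queues, self.weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
        return served
```

In one cycle the highest-priority queue is served four times for every single service of the lowest, which realizes "higher frequency" without starving the low-priority queue.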
20130061234Media Player Instance Managed Resource Reduction - Techniques and systems are disclosed for managing computer resources available to multiple running instances of a media player program. The methods include monitoring consumption of computing resources of multiple running instances of a media player program to render respective media content in a graphical user interface of a computing device. The graphical user interface is associated with an additional program configured to render additional content, different from the media content. The additional program can be a browser. The methods further include instructing the multiple instances to reduce respective portions of the computing resources consumption upon determining that a requested increase in computer resources consumption of the media player program would cause the computer resources consumption of the media player program to exceed a first predetermined level.03-07-2013
20090271796INFORMATION PROCESSING SYSTEM AND TASK EXECUTION CONTROL METHOD - An information processing system includes a master processor and a slave processor. The master processor operates in a multitasking environment capable of executing request source tasks for making processing requests to the slave processor in parallel by task scheduling based on execution priorities of the tasks. The slave processor operates in a multitasking environment capable of executing a communication processing task and child tasks created by the communication processing task for executing processing requested by the processing requests in parallel by task scheduling. The processing requests contain priority information associated with the execution priorities of the request source tasks in the master processor. The slave processor activates the communication processing task in common for the processing requests from the different request source tasks. The communication processing task creates the child tasks with execution priorities allocated corresponding to the execution priorities of the request source tasks based on the priority information.10-29-2009
20090271795Method and apparatus for scheduling the processing of commands for execution by cryptographic algorithm cores in a programmable network processor - A method and apparatus for scheduling the processing of commands by a plurality of cryptographic algorithm cores in a network processor.10-29-2009
20090271794Global avoidance of hang states in multi-node computing system - Systems, methods, and other embodiments associated with avoiding resource blockages and hang states are described. One example computer-implemented method for a clustered computing system includes determining that a first process is waiting for a resource and is in a blocked state. The resource that the first process is waiting for is identified. A blocking process that is holding the resource is then identified. A priority of the blocking process is compared with a priority of the first process. If the priority of the blocking process is lower than the priority of the first process, the priority of the blocking process is increased. In this manner, the blocking process can be scheduled for execution sooner and thus release the resource.10-29-2009
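The steps of entry 20090271794 — find the resource, find its holder, raise the holder's priority if it is lower — amount to priority inheritance. A minimal sketch, assuming processes are plain dicts and a higher number means higher priority:

```python
def boost_blocker(processes, waiting_pid, resource_holders):
    """Identify the resource the waiting process blocks on, find its
    holder, and let the holder inherit the waiter's priority if the
    holder's own priority is lower."""
    waiter = processes[waiting_pid]
    holder_pid = resource_holders[waiter["waiting_on"]]
    holder = processes[holder_pid]
    if holder["priority"] < waiter["priority"]:
        holder["priority"] = waiter["priority"]  # priority inheritance
    return holder_pid
```

Once boosted, the scheduler picks the blocker sooner, so it can run, release the resource, and unblock the waiter, avoiding the hang state.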
20090271793Mechanism for priority inheritance for read/write locks - In one embodiment, a mechanism for priority inheritance for read/write locks (RW locks) is disclosed. In one embodiment, a method includes setting a maximum number of read/write locks (RW locks) allowed to be held for read by one or more tasks, maintaining an array in each of the one or more tasks to track the RW locks held for read, linking a RW lock with the array of each of the tasks that own the RW lock, and boosting a priority of each of the tasks that own the RW lock according to a priority inheritance algorithm implemented by the RW lock.10-29-2009
20090271792METHOD AND APPARATUS FOR ALERT PRIORITIZATION ON HIGH VALUE END POINTS - A method and system for prioritizing alerts on end points include an aggregator agent that monitors a plurality of end point agents and receives a signal indicating an out of band operating tolerance from an end point. The aggregator agent locally determines the priority of the received signal based on a rules engine local to the aggregator agent. The aggregator agent transmits the priority of said signal and information associated with said signal to a remote host computer for appropriate handling.10-29-2009
20130067484INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, RECORDING MEDIUM AND INFORMATION PROCESSING SYSTEM - There is provided an information processing apparatus including a receiver configured to receive a request to perform processing related to a task, from a first information processing apparatus which functions as a client on a network; a scheduler configured to, when a rank of a priority of the scheduler of the information processing apparatus among information processing apparatuses on the network is a first predetermined rank or higher, assign the task to one or a plurality of second information processing apparatuses which function as nodes on the network; and a transmitter configured to transmit a request to execute processing related to the task assigned to the one or the plurality of second information processing apparatuses.03-14-2013
20090235264DISTRIBUTED SYSTEM - The allocation of hardware resources to distribution applications is enabled without using effective task priority available only in field devices. A distribution system makes a plurality of field devices connected with each other through a network (N) operate a plurality of distribution applications (distribution AP) in parallel. The distribution system is provided with an importance adjustment unit (…).09-17-2009
20090083746METHOD FOR JOB MANAGEMENT OF COMPUTER SYSTEM - A method for job management of a computer system, a job management system, and a computer-readable recording medium are provided. The method includes selecting, as a second job, a running job which is lower in priority than a first job and a number of computing nodes required for execution of which is not smaller than a deficient number of computing nodes due to execution of the first job when a number of free computing nodes in a cluster of the computer system is smaller than a number of computing nodes required for the first job, suspending all processes of the second job and executing the first job in the computing nodes which were used by the second job and the free computing nodes, and resuming execution of the second job after execution of the first job is completed.03-26-2009
20080295105Data processing apparatus and method for managing multiple program threads executed by processing circuitry11-27-2008
20080276241DISTRIBUTED PRIORITY QUEUE THAT MAINTAINS ITEM LOCALITY - A method of administering a distributed priority queue structure that includes removing a highest priority item from a current root node of a tree structure to create a temporary root node, determining for each subtree connected to the temporary root node a subtree priority comprising the priority of the highest priority data item in the each subtree, determining as the highest priority subtree connected to the temporary root node the subtree connected to the temporary root node having the highest subtree priority, determining whether any of the one or more data items stored at the temporary root node has a higher priority than the highest subtree priority and directing an arrow to the subtree having the highest priority or to the temporary root itself if the priority of the data items stored at temporary root is higher than the priorities of the connected subtrees.11-06-2008
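The root-removal step of entry 20080276241 can be sketched with a small tree class. The `Node` layout, integer priorities (higher = better), and the recursive subtree-priority computation are illustrative assumptions; the patent's "arrow" is modeled as a plain attribute:

```python
class Node:
    def __init__(self, items, children=()):
        self.items = list(items)        # priorities of data items at this node
        self.children = list(children)
        self.arrow = None               # points toward the current maximum

    def max_priority(self):
        """Highest priority anywhere in the subtree rooted here."""
        best = max(self.items, default=float("-inf"))
        return max([best] + [c.max_priority() for c in self.children])

def remove_highest(root):
    """Pop the root's best item, then aim the arrow at whichever subtree
    (or the root itself) now holds the highest priority."""
    top = max(root.items)
    root.items.remove(top)
    local = max(root.items, default=float("-inf"))
    best_child = max(root.children, key=Node.max_priority, default=None)
    if best_child is not None and best_child.max_priority() > local:
        root.arrow = best_child
    else:
        root.arrow = root
    return top
```

Keeping items at their original nodes and only moving the arrow is what preserves item locality in the distributed structure.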
20080288947SYSTEMS AND METHODS OF DATA STORAGE MANAGEMENT, SUCH AS DYNAMIC DATA STREAM ALLOCATION - A system and method for choosing a stream to transfer data is described. In some cases, the system reviews running data storage operations and chooses a data stream based on the review. In some cases, the system chooses a stream based on the load of data to be transferred.11-20-2008
20110283288PROCESSOR AND PROGRAM EXECUTION METHOD CAPABLE OF EFFICIENT PROGRAM EXECUTION - A processor for sequentially executing a plurality of programs using a plurality of register value groups stored in a memory that correspond one-to-one with the programs. The processor includes a plurality of register groups; a select/switch unit operable to select one of the plurality of register groups as an execution target register group on which a program execution is based, and to switch the selection target every time a first predetermined period elapses; a restoring unit operable to restore, every time the switching is performed, one of the register value groups into one of the register groups that is not selected as the execution target register group; a saving unit operable to save, prior to the restoring, register values in the register group targeted for restoring, by overwriting a register value group in the memory that corresponds to the register values; and a program execution unit operable to execute, every time the switching is performed, a program corresponding to a register value group in the execution target register group.11-17-2011
20110302588Assigning Priorities to Threads of Execution - Systems and processes may be implemented to receive threads of execution and assign priorities to the threads of execution. Threads of execution may include nonvolatile memory input/output threads, other input/output threads, and/or other non-input/output threads. A lower priority may be assigned to nonvolatile memory input/output threads than other input/output threads. An algorithm may determine an order of execution of the threads of execution. An order of execution may be at least partially based on assigned priorities.12-08-2011
20110302587INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A system-level management unit generates a system processing and makes a processing request to a task allocation unit of a user-level management unit. The task allocation unit schedules the system processing according to a procedure of an introduced user-level scheduling. A processing unit assigned to execute the system processing sends a notification of acceptability of the system processing to a main processing unit, by halting an application task at an appropriate time or when the processing of the current task is completed. When the notification is received within the time limit for execution, the system-level management unit has the processing unit start the system processing.12-08-2011
20110302586MULTITHREAD APPLICATION-AWARE MEMORY SCHEDULING SCHEME FOR MULTI-CORE PROCESSORS - A device may include a memory controller that identifies a multithread application, and adjusts a memory scheduling scheme for the multithread application based on the identification of the multithread application.12-08-2011
20080271027Fair share scheduling with hardware multithreading - An embodiment of the invention provides an apparatus and method for fair share scheduling with hardware multithreading. The apparatus and method include the acts of: executing, by a first hardware thread in a processor core, a first software thread belonging to a fair share group; and permitting a second hardware thread in the processor core to execute a second software thread if that second software thread belongs to the fair share group.10-30-2008
20100070977CONTROL OF THE RUNTIME BEHAVIOR OF PROCESSES - A method for controlling runtime behavior of processes of an automation system is provided. A priority is assigned to each of the processes, wherein an operating system of the automation system assigns runtime to the processes as a function of their priority. A scheduling service monitors starting and ending of all processes, wherein the highest priority available in the operating system is assigned to the scheduling service. Metadata is assigned to at least one process, the metadata including at least one rule on the priority of the process. The scheduling service analyzes the metadata and registers the process for monitoring when starting a process to which metadata is assigned, wherein the scheduling service monitors the registered processes for compliance with the at least one rule per process, and wherein the scheduling service modifies the priorities of the registered processes, the at least one rule of which is in non-compliance, according to the rule.03-18-2010
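The rule-enforcement loop of entry 20100070977 can be sketched as follows. Representing a rule as a predicate plus a corrective priority is an assumption made for illustration:

```python
def enforce_rules(processes, rules):
    """Scheduling-service sketch: each registered process carries a
    metadata rule on its priority; non-compliant processes have their
    priority modified according to the rule."""
    adjusted = []
    for pid, proc in processes.items():
        rule = rules.get(pid)
        if rule and not rule["check"](proc["priority"]):
            proc["priority"] = rule["set_to"]  # bring back into compliance
            adjusted.append(pid)
    return adjusted
```

Run at the scheduling service's own (highest) priority, such a loop can demote a runaway process before it starves the rest of the automation system.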
20100169890URGENCY BASED SCHEDULING - The present invention relates to a method of scheduling for multi-function radars. Specifically, the present invention relates to an efficient urgency-based scheduling method.07-01-2010
20100269117Method for Monitoring System Resources and Associated Electronic Device - A method, for monitoring resources of a system for performing a first task and a second task, includes calculating a first completion count of the first task; calculating a second completion count of the second task; and determining whether the resources of the system are exhausted according to the first completion count and the second completion count.10-21-2010
20100269115Managing Threads in a Wake-and-Go Engine - A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism detects a thread running on a first processing unit within a plurality of processing units that is waiting for an event that modifies a data value associated with a target address. The wake-and-go mechanism creates a wake-and-go instance for the thread by populating a wake-and-go storage array with the target address. The operating system places the thread in a sleep state. Responsive to detecting the event that modifies the data value associated with the target address, the wake-and-go mechanism assigns the wake-and-go instance to a second processing unit within the plurality of processing units. The operating system on the second processing unit places the thread in a non-sleep state.10-21-2010
20120233624APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - An apparatus includes a monitoring unit configured to monitor memory usage of a process in which multiple application programs are running, and a control unit configured to terminate one or more of the application programs when the memory usage of the process exceeds a first threshold.09-13-2012
20100115525METHOD FOR DYNAMICALLY ENABLING THE EXPANSION OF A COMPUTER OPERATING SYSTEM - A method for scheduling tasks in a computer operating system comprises a background task creating at least one registered service. The background task provides an execution presence and a data presence to a registered service and ranks the registered services according to the requirements of each registered service. The background task also allocates an execution presence and a data presence according to each of the registered services such that each of the registered services is given an opportunity to be scheduled in the dedicated pre-assigned time slice.05-06-2010
20100115523METHOD AND APPARATUS FOR ALLOCATING TASKS AND RESOURCES FOR A PROJECT LIFECYCLE - The present invention relates to the allocation of resources to address scope items against an iteration of a project based on a rule set described by a decision matrix and threshold values. Rather than changing work item start and end dates based on resource availability, the present invention adds, modifies, and removes content from a collection of scope items and allocates them to resources based on the skills required, the priority, estimated work and target iteration of the scope items.05-06-2010
20120233623USING A YIELD INDICATOR IN A HIERARCHICAL SCHEDULER - A method and system for scheduling the use of CPU time among processes using a scheduling tree having a yielding indicator. A scheduling tree represents a hierarchy of groups and processes that share central processing unit (CPU) time. A computer system assigns a yield indicator to a first node of the scheduling tree, which represents a first process that temporarily yields the CPU time. The computer system also assigns the yield indicator to each ancestor node of the first node in the scheduling tree. Each ancestor node represents a group to which the first process belongs. The computer system then selects a second process to run on the computer system based on the yield indicator in the scheduling tree.09-13-2012
20090150892Interrupt controller for invoking service routines with associated priorities - An interrupt controller efficiently manages execution of tasks by a multiprocessor computing system. The interrupt controller has inputs for receiving service requests for invoking service routines. The service routines have higher priorities than the tasks executed on the processors. Associated with each processor is a register for storing the priority of the task executing on the processor. A comparator coupled to the processors determines the processor executing the task having a lower priority among the priorities of the tasks executing on the processors. For each service request received, a distributor generates an interrupt request for invoking the service routine of the service request on the processor with the lower priority. The register with the lower priority is set to the higher priority of the service routine in response to the interrupt request. For each processor, the interrupt controller has an output for transmitting the interrupt request to the processor.06-11-2009
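The comparator-and-distributor logic of entry 20090150892 reduces to: find the processor whose register holds the lowest task priority, route the interrupt there, and overwrite that register with the service routine's priority. A minimal sketch, with per-CPU registers modeled as a list (higher number = higher priority, an assumption):

```python
def dispatch_interrupt(cpu_priorities, service_priority):
    """Find the CPU running the lowest-priority task, send it the
    interrupt request, and record the service routine's priority in
    that CPU's register. Returns (target CPU index, preempted priority)."""
    target = min(range(len(cpu_priorities)), key=cpu_priorities.__getitem__)
    preempted = cpu_priorities[target]
    cpu_priorities[target] = service_priority
    return target, preempted
```

Updating the register immediately matters: a second service request arriving before the first routine finishes will be steered to a different CPU, since the first target no longer looks lowest-priority.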
20100122260Preventing Delay in Execution Time of Instruction Executed by Exclusively Using External Resource - Disclosed are computer systems, a plurality of methods and a computer program for preventing a delay in execution time of one or more instructions. The computer system includes: a lock unit for executing an instruction to acquire exclusive-use of the external resource and an instruction to release the exclusive-use of the external resource in the one or more threads; a counter unit for increasing or decreasing a value of a corresponding one of counters respectively associated with the threads; and a controller for controlling an execution order of the instructions to be executed by exclusively using the external resource and instructions that cause a delay in the execution time of the instructions to be executed by exclusively using the external resource.05-13-2010
20100125849Idle Task Monitor - A system and method are provided for determining processor usable idle time in a system employing a software instruction processor. The method establishes an idle task with a lowest processor priority for a processor executing application software instructions, and uses the processor to execute an idle task. The method ceases to execute the idle task in response to the processor executing application software instructions. The amount of periodic idle task execution is determined and stored in a tangible memory medium. For example, idle time amounts can be determined per a unit of time, i.e. a percentage per second. In one aspect, the method generates an idle task report. The report can be a periodic report expressing the duration of idle task execution per time period, or a course of execution report expressing idle task start times, idle task stop times, and durations between the corresponding start and stop times.05-20-2010
20120079490DISTRIBUTED WORKFLOW IN LOOSELY COUPLED COMPUTING - A method that can be used in a distributed workflow system that uses loosely coupled computation of stateless nodes to bring computation tasks to the compute nodes is disclosed. The method can be employed in a computing system, such as cloud computing system, that can generate a computing task separable into work units and performed by a set of distributed and decentralized workers. In one example, the method arranges the work units into a directed acyclic graph representing execution priorities between the work units. The plurality of distributed and decentralized workers query the directed acyclic graph for work units ready for execution based upon the directed acyclic graph. In one example, the method is included in a computer readable storage medium as a software program.03-29-2012
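The directed-acyclic-graph dispatch of entry 20120079490 — decentralized workers querying for work units whose prerequisites are complete — can be sketched like this. The dict-of-prerequisite-sets encoding and the alphabetical tie-break are assumptions:

```python
def ready_units(dag, done):
    """Work units whose prerequisites are all complete.
    `dag` maps unit -> set of prerequisite units (the DAG's edges)."""
    return sorted(u for u, deps in dag.items()
                  if u not in done and deps <= done)

def run_all(dag):
    """Serial stand-in for the worker pool: repeatedly claim a ready
    unit and mark it done (real decentralized workers would race here)."""
    done, order = set(), []
    while len(done) < len(dag):
        unit = ready_units(dag, done)[0]
        done.add(unit)
        order.append(unit)
    return order
```

Because each worker only ever asks "what is ready now?", no central scheduler state is needed beyond the DAG and the completed set, which is what keeps the nodes stateless and loosely coupled.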
20110173627INFORMATION-PROCESSING DEVICE AND PROGRAM - When executing plural application programs in parallel, a control unit assigns a small storage area to each application program so that a part of a function implemented by execution of each application program is provided. When providing a service of high value to a user, a control unit assigns a large storage area to any one of the application programs so that a full function that is implemented by execution of the application program is provided.07-14-2011
20100083265SYSTEMS AND METHODS FOR SCHEDULING ASYNCHRONOUS TASKS TO RESIDUAL CHANNEL SPACE04-01-2010
20100088706User Tolerance Based Scheduling Method for Aperiodic Real-Time Tasks - An apparatus comprising at least one processor configured to implement a method comprising analyzing a plurality of tasks, determining a privilege level for each of the task, determining a schedule for each of the tasks, and scheduling the tasks for execution based on the privilege level and the schedule of each task. Included is a memory comprising instructions for determining a privilege level for each of a plurality of tasks, wherein the privilege levels comprise periodic real-time, aperiodic real-time, and non-real time, determining an execution time for each of the tasks, and scheduling the tasks for execution on a processor based on the privilege level and the execution time of each task.04-08-2010
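Entry 20100088706 schedules on two keys: privilege level (periodic real-time, aperiodic real-time, non-real-time) and execution time. A one-line sketch; breaking ties within a level by shorter execution time first is an assumption:

```python
# Lower level number = higher privilege, per the listed ordering.
LEVELS = {"periodic_rt": 0, "aperiodic_rt": 1, "non_rt": 2}

def schedule(tasks):
    """Order tasks by privilege level first, then by execution time
    within a level (shortest first, an assumed tie-break)."""
    return sorted(tasks, key=lambda t: (LEVELS[t["level"]], t["exec_time"]))
```

A two-key sort like this guarantees no non-real-time task ever runs ahead of a pending real-time task, regardless of how short it is.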
20100083267Multi-thread processor and its hardware thread scheduling method - A multi-thread processor in accordance with an exemplary aspect of the present invention includes a plurality of hardware threads each of which generates an independent instruction flow, a first thread scheduler that outputs a first thread selection signal designating a hardware thread to be executed in the next execution cycle, a first selector that outputs an instruction generated by the selected hardware thread according to the first thread selection signal, and an execution pipeline that executes an instruction output from the first selector, wherein whenever a hardware thread is executed in the execution pipeline, the first thread scheduler updates the priority rank of the executed hardware thread and outputs the first thread selection signal in accordance with the updated priority rank.04-01-2010
20100083264Processing Batch Database Workload While Avoiding Overload - Processing batch database workload while avoiding overload. A method for efficiently processing a database workload in a computer system comprises receiving the workload, which comprises a batch of queries directed toward the database. Each query within the batch of queries is assigned a priority. Resources of the computer system are assigned in accordance with the priority. The batch of queries is executed in unison within the computer system in accordance with the priority of each query thereby resolving a conflict within the batch of queries for the resources of the computer system, hence efficiently processing the database workload and avoiding overload of the computer system.04-01-2010
20100125848MECHANISMS TO DETECT PRIORITY INVERSION - A method, computer program product, and device are provided for detecting and identifying priority inversion. A higher priority thread and a lower priority thread are received. A debugging application for debugging is executed. The lower priority thread requests and holds a resource. A break point is hit by the lower priority thread. The lower priority thread is preempted by the higher priority thread, and debugging stops until the higher priority thread completes. The higher priority thread requests the resource being held by the lower priority thread. It is determined whether priority inversion occurs.05-20-2010
20090144740Application-based enhancement to inter-user priority services for public safety market - A system and method for application based enhancement to the traditional per-user based inter-user priority services is provided. This method includes provisioning a user's profile, not only with an assigned inter-user priority, but also with zero, one or more specified and provisioned applications that are considered as critical applications which require special preferential treatment by the access network. The method continues with accessing the inter-user priority profile associated for sessions established for the user. The system then recognizes that a session may have been assigned to at least one provisioned critical application. The system may then provide inter-user priority services operative to provide the specified preferential treatment for at least the critical applications associated with the session when the critical application(s) are activated. In this form, the critical applications are better served, including protection against congestion and availability of resources whenever they are needed. This system may grant preferential treatment on a session and/or application basis so that there will be no impact on other general applications when no critical applications are activated. This is especially useful for public safety implementation where protecting the mission-critical communication is a fundamental requirement.06-04-2009
20090282414Prioritized Resource Access Management - Middleware may dynamically restrict or otherwise allocate computer resources in response to changing demand and based on prioritized user access levels. Users associated with a relatively low priority may have their resource access delayed in response to high demand, e.g., processor usage. Users having a higher priority may experience uninterrupted access during the same period and until demand subsides.11-12-2009
20090187912Method and apparatus for migrating task in multi-processor system - A method and apparatus for migrating a task in a multi-processor system. The method includes examining whether a second process has been allocated to a second processor, the second process having a same instruction to execute as a first process and having different data to process in response to the instruction from the first process, the instruction being to execute the task; selecting a method of migrating the first process or a method of migrating a thread included in the first process based on the examining and migrating the task from a first processor to the second processor using the selected method. Therefore, cost and power required for task migration can be minimized. Consequently, power consumption can be maintained in a low-power environment, such as an embedded system, which, in turn, optimizes the performance of the multi-processor system and prevents physical damage to the circuit of the multi-processor system.07-23-2009
20090178045Scheduling Memory Usage Of A Workload - Described herein is a method for scheduling memory usage of a workload, the method comprising: receiving the workload, wherein the workload includes a plurality of jobs; determining a memory requirement to execute each of the plurality of jobs; arranging the plurality of jobs in an order of the memory requirements of the plurality of jobs such that the job with the largest memory requirement is at one end of the order and the job with the smallest memory requirement is at the other end of the order; assigning in order a unique priority to each of the plurality of jobs in accordance with the arranged order such that the job with the largest memory requirement is assigned the highest priority for execution and the job with the smallest memory requirement is assigned the lowest priority for execution; and executing the workload by concurrently executing the jobs in the workload in accordance with the arranged order of the plurality of jobs and the unique priority assigned to each of the plurality of jobs.07-09-2009
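The ordering rule of entry 20090178045 — sort jobs by memory requirement and hand out unique priorities so the largest job gets the highest — is easy to sketch. Jobs as dicts and "higher number = higher priority" are assumptions:

```python
def prioritize_by_memory(jobs):
    """Arrange jobs from largest to smallest memory requirement and
    assign each a unique priority, largest job getting the highest."""
    ordered = sorted(jobs, key=lambda j: j["mem"], reverse=True)
    for priority, job in zip(range(len(ordered), 0, -1), ordered):
        job["priority"] = priority
    return ordered
```

Running the memory-hungry jobs first, while the smaller ones fill in around them, is what lets the whole workload execute concurrently without the largest job being squeezed out late in the run.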
20100125850Method and Systems for Processing Critical Control System Functions - A method for processing critical control system functions is described. The method includes determining a level of criticality of at least one data packet and directing critical data packets to at least one of a critical computational job queue and a critical memory portion. The method also includes directing non-critical data packets to at least one of a non-critical computational job queue and a non-critical memory portion and executing control system functions corresponding to critical data packets stored in the critical computational job queue. The method also includes executing control system functions corresponding to non-critical data packets stored in the non-critical computational job queue when no critical control system functions are stored in the critical computational job queue.05-20-2010
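The two-queue discipline of entry 20100125850 — route packets by criticality, and execute non-critical work only when the critical queue is empty — can be sketched directly. The numeric criticality threshold is illustrative:

```python
from collections import deque

def route(packet, critical_q, normal_q, threshold=5):
    """Direct a packet to the critical or non-critical job queue
    based on its criticality level (threshold is an assumption)."""
    (critical_q if packet["criticality"] >= threshold else normal_q).append(packet)

def next_job(critical_q, normal_q):
    """Non-critical control functions run only when no critical
    job is queued; returns None when both queues are empty."""
    if critical_q:
        return critical_q.popleft()
    return normal_q.popleft() if normal_q else None
```

This is strict priority between exactly two classes, so a sustained flood of critical packets will starve the non-critical queue by design.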
20090089786SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE FOR REAL-TIME PROCESSING - A technology capable of efficiently performing the processes by using limited resources in an LSI where a plurality of real-time applications are processed in parallel is provided. To provide such a technology, a mechanism is provided in which a plurality of processes to be executed on a plurality of processing units in an LSI are managed throughout the LSI in a unified manner. For each process to be managed, a priority is calculated based on the state of progress of the process, and the execution of the process is controlled according to the priority. A resource management unit IRM or program that collects information such as a process state from each of the processing units executing the processes and calculates a priority for each process is provided. Also, a programmable interconnect unit and storage means for controlling a process execution sequence according to the priority are provided.04-02-2009
20080209426APPARATUS FOR RANDOMIZING INSTRUCTION THREAD INTERLEAVING IN A MULTI-THREAD PROCESSOR - A processor interleaves instructions according to a priority rule which determines the frequency with which instructions from each respective thread are selected and added to an interleaved stream of instructions to be processed in the data processor. The frequency with which each thread is selected according to the rule may be based on the priorities assigned to the instruction threads. A randomization is inserted into the interleaving process so that the selection of an instruction thread during any particular clock cycle is not based solely by the priority rule, but is also based in part on a random or pseudo random element. This randomization is inserted into the instruction thread selection process so as to vary the order in which instructions are selected from the various instruction threads while preserving the overall frequency of thread selection (i.e. how often threads are selected) set by the priority rule.08-28-2008
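Entry 20080209426 describes selecting threads with frequencies set by the priority rule while randomizing the order of selection. One way to sketch that is a weighted random draw per clock cycle, so expected frequencies follow the weights but the cycle-by-cycle pattern varies; the seeded RNG and the 3:1 weights are assumptions:

```python
import random

def interleave(threads, weights, n, seed=0):
    """Pick which thread feeds the pipeline on each of n cycles:
    selection frequency follows the priority-derived weights, but the
    order is randomized rather than a fixed repeating pattern."""
    rng = random.Random(seed)
    names = list(threads)
    return [rng.choices(names, weights=weights)[0] for _ in range(n)]
```

A fixed pattern such as T0-T0-T0-T1 gives the same 3:1 frequency, but the randomized draw avoids pathological lockstep between the interleave pattern and periodic behavior in the threads themselves.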
20110173626EFFICIENT MAINTENANCE OF JOB PRIORITIZATION FOR PROFIT MAXIMIZATION IN CLOUD SERVICE DELIVERY INFRASTRUCTURES - Systems and methods are disclosed for efficient maintenance of job prioritization for profit maximization in cloud-based service delivery infrastructures with multi-step cost structure support by breaking multiple steps in the SLA of a job into corresponding cost steps; generating a segmented cost function for each cost step; creating a cost-based-scheduling (CBS)-priority value associated with a validity period for each segment based on the segmented cost function; and choosing the job with the highest CBS priority value.07-14-2011
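The segmented cost function and validity period described in the entry above might look like this in miniature. The cost-step representation and the cost-jump-over-time-remaining urgency formula are illustrative assumptions, not taken from the patent.

```python
def cbs_priority(cost_steps, now):
    """Compute a cost-based-scheduling (CBS) priority and its validity period.

    cost_steps: list of (deadline, cumulative_cost) pairs in ascending
    deadline order, one pair per SLA cost step.
    Returns (priority, valid_until): the priority is the cost jump at the
    next uncrossed step divided by the time remaining to reach it, and it
    stays valid until that step boundary is crossed.
    """
    prev_cost = 0.0
    for deadline, cost in cost_steps:
        if now < deadline:
            urgency = (cost - prev_cost) / (deadline - now)
            return urgency, deadline
        prev_cost = cost
    return float("inf"), None  # every step already missed: maximal priority
```

A job with a 5.0-cost step at time 10 evaluated at time 8 gets priority 2.5, valid until time 10; recomputation is only needed when a validity period expires.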
20110173625Wake-and-Go Mechanism with Prioritization of Threads - A hardware private array is a thread state storage that is embedded within the processor or within logic associated with a bus or wake-and-go logic. The hardware private array and/or wake-and-go array may have a limited storage area. Therefore, each thread may have an associated priority. If there is insufficient space in the hardware private array, then the wake-and-go mechanism may compare the priority of the thread to the priorities of the threads already stored in the hardware private array and wake-and-go array. If the thread has a higher priority than at least one thread already stored in the hardware private array and wake-and-go array, then the wake-and-go mechanism may remove a lowest priority thread, meaning the thread is removed from the hardware private array and wake-and-go array and converted to a flee model.07-14-2011
20090276782RESOURCE MANAGEMENT METHODS AND SYSTEMS - Resource management methods and systems are provided. First, it is determined whether a resource is currently being used. When the resource is currently being used by a first program, a release notification is transmitted to the first program to release the resource.11-05-2009
20090288089METHOD FOR PRIORITIZED EVENT PROCESSING IN AN EVENT DISPATCHING SYSTEM - A method for dynamically prioritizing event processing in an event dispatching system includes steps of: organizing input/output requests in a plurality of activity sets ordered from most active to least active, wherein a highest priority level is associated with the most active activity set and the lowest priority level is associated with the least active activity set; organizing event descriptors corresponding to the input/output requests into event descriptor sets; creating an event descriptor cache; duplicating the event descriptor of the input/output request found to be most active into the event descriptor cache; monitoring the event descriptor cache more frequently than the event descriptor set; and invoking an event dispatching routine from the event descriptor cache.11-19-2009
20090300631DATA PROCESSING SYSTEM AND METHOD FOR CACHE REPLACEMENT - A data processing system is provided with at least one processing unit (…).12-03-2009
20080276242Method For Dynamic Scheduling In A Distributed Environment - A method and system is provided for assigning programs in a workflow to one or more nodes for execution. Prior to the assignment, a priority of execution of each program is calculated in relation to its dependency upon received and transmitted data. Based upon the calculated priority and the state of each of the nodes, the programs in the workflow are dynamically assigned to one or more nodes for execution. In addition to the priority-based node assignment, it is determined whether preemptive execution of the programs in the workflow is permitted, so that in response to the determination a program may be prevented from preemptively executing at a selected node.11-06-2008
20130219401PRIORITIZING JOBS WITHIN A CLOUD COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach to prioritize jobs (e.g., within a cloud computing environment) so as to maximize positive financial impacts (or to minimize negative financial impacts) for cloud service providers, while not exceeding processing capacity or failing to meet terms of applicable Service Level Agreements (SLAs). Specifically, under the present invention a respective income (i.e., a cost to the customer), a processing need, and set of SLA terms (e.g., predetermined priorities, time constraints, etc.) will be determined for each of a plurality of jobs to be performed. The jobs will then be prioritized in a way that: maximizes cumulative/collective income; stays within the total processing capacity of the cloud computing environment; and meets the SLA terms.08-22-2013
20100005470METHOD AND SYSTEM FOR PERFORMING DMA IN A MULTI-CORE SYSTEM-ON-CHIP USING DEADLINE-BASED SCHEDULING - A direct memory access (DMA) engine schedules data transfer requests of a system-on-chip data processing system according to both an assigned transfer priority and the deadline for completing a transfer. Transfer priority is based on a hardness representing the penalty for missing a deadline. Priorities are also assigned to zero-deadline transfer requests in which there is a penalty no matter how early the transfer completes. If desired, transfer requests may be scheduled in timeslices according to priority in order to bound the latency of lower priority requests, with the highest priority hard real-time transfers wherein the penalty for missing a deadline is severe are given the largest timeslice. Service requests for preparing a next data transfer are posted while a current transaction is in progress for maximum efficiency. Current transfers may be preempted whenever a higher urgency request is received.01-07-2010
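The two-key ordering described in the entry above, hardness class first and deadline second, can be sketched as follows. The tuple layout and names are assumptions; the real engine also handles timeslicing and preemption, which this sketch omits.

```python
import heapq

def schedule_transfers(requests):
    """Order DMA transfer requests by hardness first, then by earliest
    deadline within the same hardness class.

    requests: list of (name, hardness, deadline); larger hardness means a
    more severe penalty for missing the deadline.
    """
    # heapq is a min-heap, so negate hardness to pop the hardest first;
    # the deadline acts as the tie-breaker inside a hardness class.
    heap = [(-hardness, deadline, name) for name, hardness, deadline in requests]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order
```

A soft request with an early deadline still ranks behind every hard real-time request, matching the abstract's "penalty for missing a deadline is severe" class getting precedence.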
20090007124Method and mechanism for memory access synchronization - The present invention is a method and mechanism for synchronizing multiple processors. Calling the global memory fence (GMF) service raises an asynchronous memory fence to be executed on other processors. By guaranteeing that the asynchronous memory fence (AMF) or its equivalent on the other processors is executed within the window of the global memory fence (GMF) service call, the expensive memory-ordering semantics can be removed from the critical path of frequently executed application code. Overall performance is therefore improved on modern processor architectures.01-01-2009
20090007123Dynamic Application Scheduler in a Polling System - A dynamic scheduling system is provided that comprises a processor, a polling task, a work task, and a scheduler assistant task. The polling task is configured for execution by the processor, wherein the polling task executes during a first CPU time window and sleeps during a second CPU time window. The work task is configured for an execution during the second CPU time window. The scheduler assistant (SA) task has an execution state to indicate to the polling task a status of the execution of the work task to the polling task. The SA task is configured to run if the work task runs to completion within the second CPU time window.01-01-2009
20080250415Priority based throttling for power/performance Quality of Service - A method and apparatus for throttling power and/or performance of processing elements based on a priority of software entities is herein described. Priority aware power management logic receives priority levels of software entities and modifies operating points of processing elements associated with the software entities accordingly. Therefore, in a power savings mode, processing elements executing low priority applications/tasks are reduced to a lower operating point, i.e. lower voltage, lower frequency, throttled instruction issue, throttled memory accesses, and/or less access to shared resources. In addition, utilization logic potentially tracks utilization of a resource per priority level, which allows the power manager to determine operating points based on the effect of each priority level on each other from the perspective of the resources themselves. Moreover, a software entity itself may assign operating points, which the power manager enforces.10-09-2008
20090265712Auto-Configuring Workload Management System - A multi-partition computer system provides a configuration inspector for inspecting partitions to determine their identities and configuration information. The system also includes a policy controller for automatically setting said workload-management policies at least in part as a function of said configuration information in response to a command.10-22-2009
20100281485Method For Changing Over A System Having Multiple Execution Units - A system having multiple execution units and a method for its changeover are provided. The system has at least two execution units, and may be changed over between a performance operating mode, in which the execution units execute different programs, and a comparison operating mode, in which the execution units execute the same program. The system has a scheduler, which is called by an execution unit to ascertain the next program to be executed. The remaining execution units are prompted to also call the scheduler if the program ascertained by the first called scheduler is to be executed in a comparison operating mode. A changeover unit switches the system from the performance operating mode into the comparison operating mode if the program ascertained by the last called scheduler is to be executed in the comparison operating mode; after the changeover, this ascertained program is executed by all execution units as the program having the highest priority.11-04-2010
20090049447METHODS AND SYSTEMS FOR CARE READINESS - Provided are methods and systems for generating a care plan. The methods, which can be implemented as a Parent Care Readiness Program (PCR-P), can use information and resources to improve caregiving readiness for imminent and active care givers. In an aspect, the Parent Care Readiness program can comprise two, complementary, automated, comprehensive, evidence-based assessments of the landscape of caregiving tasks, one from adult child's and one from parent's perspective, and a tailored intervention program that care givers and care receivers can discuss and implement.02-19-2009
20080282251THREAD DE-EMPHASIS INSTRUCTION FOR MULTITHREADED PROCESSOR - A technique for scheduling execution of threads at a processor is disclosed. The technique includes executing a thread de-emphasis instruction of a thread that de-emphasizes the thread until the number of pending memory transactions, such as cache misses, associated with the thread are at or below a threshold. While the thread is de-emphasized, other threads at the processor that have a higher priority can be executed or assigned system resources. Accordingly, the likelihood of a stall in the processor is reduced.11-13-2008
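The de-emphasis rule described in the entry above, skip a thread while its pending memory transactions exceed a threshold, can be sketched as follows. The thread representation and the default threshold are assumptions for illustration.

```python
def select_runnable(threads, miss_threshold=2):
    """Return the name of the highest-priority thread that is not
    de-emphasized.

    threads: list of (name, priority, pending_misses). A thread that
    executed the de-emphasis instruction is skipped while its pending
    memory transactions (e.g. cache misses) exceed the threshold.
    """
    eligible = [t for t in threads if t[2] <= miss_threshold]
    if not eligible:
        return None  # every thread is waiting on memory: likely stall
    return max(eligible, key=lambda t: t[1])[0]
```

A high-priority thread stalled on five outstanding misses yields the core to a lower-priority thread, reducing the chance of a pipeline stall, as the abstract describes.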
20080288945METHOD AND SYSTEM FOR ANALYZING INTERRELATED PROGRAMS - A method for analyzing a program having a budget, an implementation schedule and a deployment plan includes: (a) in no particular order: (1) providing a digital representation of the budget including first entries; (2) providing a digital representation of the schedule including second entries having a first relation with the first entries; and (3) providing a digital representation of the deployment plan including third entries having at least one second relation with at least one of the first and second entries; (b) establishing an expression embodying the first and second relations; (c) exercising the expression to alter at least one altered entry of the selected first, second, and third entries; and (d) observing at least one entry of the selected first, second, and third entries other than the at least one altered entry.11-20-2008
20080288948SYSTEMS AND METHODS OF DATA STORAGE MANAGEMENT, SUCH AS DYNAMIC DATA STREAM ALLOCATION - A system and method for choosing a stream to transfer data is described. In some cases, the system reviews running data storage operations and chooses a data stream based on the review. In some cases, the system chooses a stream based on the load of data to be transferred.11-20-2008
20080295104Realtime Processing Software Control Device and Method11-27-2008
20120297394LOCK CONTROL IN MULTIPLE PROCESSOR SYSTEMS - A computer system comprising a plurality of processors and one or more storage devices. The system is arranged to execute a plurality of tasks, each task comprising threads and each task being assigned a priority from 1 to a whole number greater than 1, each thread of a task assigned the same priority as the task and each thread being executed by a processor. The system also provides lock and unlock functions arranged to lock and unlock data stored by a storage device responsive to such a request from a thread. A method of operating the system comprises maintaining a queue of threads that require access to locked data, maintaining an array comprising, for each priority, duration and/or throughput information for threads of the priority, setting a wait flag for a priority in the array according to a predefined algorithm calculated from the duration and/or throughput information in the array.11-22-2012
20080271029Thread Scheduling with Weak Preemption Policy - Thread scheduling with a weak preemption policy is provided. The scheduler receives requests from newly ready work. The scheduler adds a “preempt value” to the current work's priority so that it is somewhat increased for preemption purposes. The preempt value can be adjusted in order to make it more, or less, difficult for newly ready work to preempt the current work. A “less strict” preemption policy allows current work to complete rather than interrupting the current work and resume it at a later time, thus saving system overhead. Newly ready work that is queued with a better priority than the current work is queued in a favorable position to be executed after the current work is completed but before other work that has been queued with the same priority of the current work.10-30-2008
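The "preempt value" test described in the entry above can be sketched as follows. Parameter names are assumptions; the point is only that newly ready work must beat the current priority by a configurable margin, not merely match or slightly exceed it.

```python
def should_preempt(current_priority, new_priority, preempt_value=2,
                   higher_is_better=True):
    """Weak preemption: newly ready work interrupts the current work only
    if it beats the current priority plus a preempt bonus.

    Raising preempt_value makes preemption harder, so current work tends
    to run to completion; work that beats the raw priority but not the
    boosted one is instead queued ahead of equal-priority work.
    """
    if higher_is_better:
        return new_priority > current_priority + preempt_value
    return new_priority < current_priority - preempt_value
```

With a preempt value of 2, work at priority 7 does not interrupt current work at priority 5; only priority 8 or better does.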
20100299670SELECTIVE I/O PRIORITIZATION BY SYSTEM PROCESS/THREAD - Systems, methods, and apparatus to identify and prioritize application processes in one or more subsystems. Some embodiments identify applications and processes associated with each application executing on a system, apply one or more priority rules to the identified applications and processes to generate priority information, and transmit the priority information to a subsystem. The subsystem then matches received requests with the priority information and services the processes according to the priority information.11-25-2010
20080244592MULTITASK PROCESSING DEVICE AND METHOD - There is provided with a multitask processing device for processing a plurality of tasks by multitask, the tasks being each split into at least two sections, including: a stable set storage configured to store a stable set including one or more section combinations; a program execution state calculator configured to calculate, for each of the tasks, a program execution state including a section where execution is to start when the task is next executed and current sections of other tasks different from the task among the tasks; a distance calculating unit configured to calculate a distance between each of the program execution states and the stable set; and a task execution unit configured to select and execute a next task to be executed next based on calculated distances.10-02-2008
20120060163METHODS AND APPARATUS ASSOCIATED WITH DYNAMIC ACCESS CONTROL BASED ON A TASK/TROUBLE TICKET - In some embodiments, an apparatus includes a memory, a processing device, a task division module implemented within at least one of the memory or the processing device, and a dynamic authentication module implemented within at least one of the memory or the processing device. The task division module is operable to receive a request associated with a task to be performed and to divide the task into multiple subtasks. The dynamic authentication module is operable to provide an access right to an operator from a set of operators assigned a subtask from the multiple subtasks. The access right for the operator is an access right to complete the subtask assigned to that operator from the set of operators.03-08-2012
20100146512Mechanisms for Priority Control in Resource Allocation - Mechanisms for priority control in resource allocation are provided. With these mechanisms, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.06-10-2010
20100146511POLICY BASED DATA PROCESSING METHOD AND SYSTEM - Provided is a policy-based data processing system and method. A pattern analyzer of the data processing system generates a pattern handler based on a pattern, schedules the generated pattern handler based on a policy to filter and group data, generates a processing function, corresponding to a process type for each data type of event data, as an object module, and uses it by invoking it through a pattern handler.06-10-2010
20120198462WORKFLOW CONTROL OF RESERVATIONS AND REGULAR JOBS USING A FLEXIBLE JOB SCHEDULER - A scheduler receives at least one flexible reservation request for scheduling in a computing environment comprising consumable resources. The flexible reservation request specifies a duration and at least one required resource. The consumable resources comprise at least one machine resource and at least one floating resource. The scheduler creates a flexible job for the at least one flexible reservation request and places the flexible job in a prioritized job queue for scheduling, wherein the flexible job is prioritized relative to at least one regular job in the prioritized job queue. The scheduler adds a reservation set to a waiting state for the at least one flexible reservation request. The scheduler, responsive to detecting the flexible job positioned in the prioritized job queue for scheduling next and detecting a selection of consumable resources available to match the at least one required resource for the duration, transfers the selection of consumable resources to the reservation and sets the reservation to an active state, wherein the reservation is activated as the selection of consumable resources become available and has uninterrupted use of the selection of consumable resources for the duration by at least one job bound to the flexible reservation.08-02-2012
20120198461METHOD AND SYSTEM FOR SCHEDULING THREADS - A method for scheduling a new thread involves identifying a criticality level of the new thread, selecting a processor group according to the criticality level of the new thread and an existing assigned utilization level of the processor group to obtain a selected processor group, increasing an assigned utilization level of the selected processor group based on the new thread, and executing the new thread by the selected processor group.08-02-2012
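The group-selection step described in the entry above can be sketched as follows. The dictionary layout and the least-loaded tie-break among matching groups are assumptions for illustration.

```python
def select_group(groups, criticality, thread_load):
    """Pick a processor group for a new thread: among groups whose
    criticality level matches and whose assigned utilization leaves room
    for the thread, choose the least-loaded one, then charge the thread's
    load to the chosen group.

    groups: dict name -> {"criticality": int, "capacity": float,
                          "assigned": float}
    Returns the chosen group name, or None if no group can host the thread.
    """
    candidates = [
        (g["assigned"], name) for name, g in groups.items()
        if g["criticality"] == criticality
        and g["assigned"] + thread_load <= g["capacity"]
    ]
    if not candidates:
        return None
    _, best = min(candidates)  # least assigned utilization wins
    groups[best]["assigned"] += thread_load
    return best
```

Executing the thread then proceeds on the selected group, whose assigned utilization reflects the new thread, which is the bookkeeping step the abstract calls "increasing an assigned utilization level".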
20090100434TRANSACTION MANAGEMENT - A method and transaction processing system for managing transaction processing tasks are provided. The transaction processing system comprises a transaction log, a log management policy, a log manager and a dispatcher. The method comprises maintaining a transaction log of recoverable changes made by transaction processing tasks and storing a log management policy including at least one log threshold. Usage of the log by transaction processing tasks is then monitored to determine when a log threshold is reached. When a log threshold is reached the active task having the oldest log entry of all active tasks is identified and its dispatching priority is increased. This increases the likelihood that the identified task will be dispatched, and should mean that the task will more quickly reach normal completion.04-16-2009
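The threshold-triggered priority boost described in the entry above can be sketched as follows. The data layout and the size of the boost are assumptions; the idea is only that the task pinning the oldest log entry is helped to finish so the log can be trimmed.

```python
def relieve_log_pressure(log_size, threshold, active_tasks):
    """When the transaction log reaches a threshold, bump the dispatching
    priority of the active task holding the oldest log entry, making it
    likelier to be dispatched and to reach normal completion sooner.

    active_tasks: dict task_id -> {"oldest_entry": int, "priority": int}
    Returns the boosted task id, or None if the threshold is not reached.
    """
    if log_size < threshold or not active_tasks:
        return None
    oldest = min(active_tasks, key=lambda t: active_tasks[t]["oldest_entry"])
    active_tasks[oldest]["priority"] += 1
    return oldest
```

Monitoring code would call this whenever log usage is sampled; below the threshold it is a no-op, matching the policy-driven behavior in the abstract.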
20110209155SPECULATIVE THREAD EXECUTION WITH HARDWARE TRANSACTIONAL MEMORY - In an embodiment, if a self thread has more than one conflict, a transaction of the self thread is aborted and restarted. If the self thread has only one conflict and an enemy thread of the self thread has more than one conflict, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread and the enemy thread only conflicts with the self thread and the self thread has a key that has a higher priority than a key of the enemy thread, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread, the enemy thread only conflicts with the self thread, and the self thread has a key that has a lower priority than the key of the enemy thread, the transaction of the self thread is aborted.08-25-2011
20110209154THREAD SPECULATIVE EXECUTION AND ASYNCHRONOUS CONFLICT EVENTS - In an embodiment, asynchronous conflict events are received during a previous rollback period. Each of the asynchronous conflict events represent conflicts encountered by speculative execution of a first plurality of work units and may be received out-of-order. During a current rollback period, a first work unit is determined whose speculative execution raised one of the asynchronous conflict events, and the first work unit is older than all other of the first plurality of work units. A second plurality of work units are determined, whose ages are equal to or older than the first work unit, wherein each of the second plurality of work units are assigned to respective executing threads. Rollbacks of the second plurality of work units are performed. After the rollbacks of the second plurality of work units are performed, speculative executions of the second plurality of work units are initiated in age order, from oldest to youngest.08-25-2011
20120036512ENHANCED SHORTEST-JOB-FIRST MEMORY REQUEST SCHEDULING - In at least one embodiment of the invention, a method includes scheduling a memory request associated with a thread executing on a processing system. The scheduling is based on a job length of the thread and a priority step function of job length. The thread is one of a plurality of threads executing on the processing system. In at least one embodiment of the method, the priority step function is a function of ⌈x/2^n⌉ for x <= m and P(x) = m/2(…)02-09-2012
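The formula in the abstract above is cut off, so the following is only a shape-preserving sketch of a priority step function of job length, not the patented formula: short jobs are ranked in coarse steps of the stated ceiling term, and jobs past a cutoff all share one priority bucket.

```python
import math

def priority_step(x, n=2, m=8):
    """Illustrative priority step function of job length x (smaller result
    means shorter job, hence better priority under shortest-job-first):
    steps of ceil(x / 2**n) for x <= m, flat beyond m so very long jobs
    cannot be starved apart from one another.

    NOTE: n, m, and the x > m branch are assumptions; the abstract's exact
    formula is truncated in the source.
    """
    if x <= m:
        return math.ceil(x / 2 ** n)
    return math.ceil(m / 2 ** n)
```

The step shape means a job of length 1 and a job of length 4 share a bucket only if they fall under the same ceiling step, while lengths 9 and 900 are deliberately indistinguishable.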
20090165007TASK-LEVEL THREAD SCHEDULING AND RESOURCE ALLOCATION - Task schedulers endeavor to share computing resources, such as the CPU, among many threads. However, the task scheduler may be unable to identify the resources that will be utilized by a thread, and may allocate resources inefficiently due to incorrect predictions of resource utility. Task scheduling may be improved by identifying the rate determining factors for various thread tasks comprising a thread, e.g., a first task that is rate-limited by a communications bus, a second task that is rate-limited by the CPU, and a third task that is rate-limited by a communications network. If the tasks are so identified, the operating system may be able to schedule tasks and to allocate resources based on the resources to be utilized by the threads, which may improve efficiency and computing performance.06-25-2009
20090165008APPARATUS AND METHOD FOR SCHEDULING COMMANDS FROM HOST SYSTEMS - A scheduling apparatus and method thereof are disclosed. The scheduling apparatus includes a command-collecting module, a sorting module and a command-executing module. The command-collecting module collects the commands issued from the host systems. The sorting module sorts the collected commands from the command-collecting module based on a plurality of data addresses. The data addresses within the storage unit are associated with the commands. The command executing module executes the sorted commands from the sorting module.06-25-2009
20090288090PRIORITY CONTROL PROGRAM, APPARATUS AND METHOD - A disclosed priority control program recorded in a computer-readable medium causes a computer to execute, in job allocation for computational resources, a first step of lowering a job allocation priority of a user based on an estimated utilization amount of a job associated with the user, the job allocation priority indicating a degree of priority of the user in obtaining an allocation of the computational resource, and the estimated utilization amount being an amount of the computational resources estimated to be used for the job and being submitted to and recorded in a memory device on a job-to-job basis; and a second step of increasing the job allocation priority over time at a restoration rate which corresponds to a user-specific amount of the computational resources available for the user per unit time, the user-specific amount being recorded in the memory device on a user-to-user basis.11-19-2009
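The lower-then-restore dynamic described in the entry above can be sketched as follows. The linear restoration model and parameter names are assumptions; the abstract specifies only that priority drops with a job's estimated utilization and climbs back over time at a user-specific rate.

```python
def allocation_priority(base, estimated_use, restore_rate, elapsed):
    """A user's job-allocation priority drops by the estimated resource
    amount of a submitted job and climbs back over time at a
    user-specific restoration rate, never exceeding the base priority.

    restore_rate corresponds to the user's available computational
    resources per unit time; elapsed is the time since job submission.
    """
    restored = -estimated_use + restore_rate * elapsed
    return base + min(0.0, restored)
```

A user with base priority 100 who submits a job estimated at 40 units, with a restoration rate of 10 units per time step, is back at full priority after 4 steps and stays capped there.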
20090172682SERIALIZATION IN COMPUTER MANAGEMENT - Processes are programmatically categorized into a plurality of categories, which are prioritized. Serialization is used to control execution of the processes of the various categories. The serialization ensures that processes of higher priority categories are given priority in execution. This includes temporarily preventing processes of lower priority categories from being executed.07-02-2009
20090064154IMAGE RECONSTRUCTION SYSTEM WITH MULTIPLE PARALLEL RECONSTRUCTION PIPELINES - In a method, system, computer-readable medium and watchdog module to control a number of medical technology processes that are executed in multiple computerized pipelines according to a predetermined organizational structure, a priority is associated with an incoming process, with a high priority and multiple low priorities being provided. A process with a high priority is executed in a priority pipeline among the multiple pipelines.03-05-2009
20090064153COMMAND SELECTION METHOD AND ITS APPARATUS, COMMAND THROW METHOD AND ITS APPARATUS - When selecting one command within a processor from a plurality of command queues vested with order of priority, the order of priority assigned to the plurality of command queues is dynamically changed so as to select a command, on a priority basis, from a command queue vested with a higher priority from among the plurality of command queues in accordance with the post-change order of priority.03-05-2009
20090049446PROVIDING QUALITY OF SERVICE VIA THREAD PRIORITY IN A HYPER-THREADED MICROPROCESSOR - A method and apparatus for providing quality of service in a multi-processing element environment based on priority is herein described. Consumption of resources, such as a reservation station and a pipeline, are biased towards a higher priority processing element. In a reservation station, mask elements are set to provide access for higher priority processing elements to more reservation entries. In a pipeline, bias logic provides a ratio of preference for selection of a high priority processing element.02-19-2009
20090254915SYSTEM AND METHOD FOR PROVIDING FAULT RESILIENT PROCESSING IN AN IMPLANTABLE MEDICAL DEVICE - A system and method for providing fault resilient processing in an implantable medical device is provided. A processor and memory store are provided in an implantable medical device. Separate times on the processor are scheduled to a plurality of processes. Separate memory spaces in the memory store are managed by exclusively associating one such separate memory space with each of the processes. Data is selectively validated prior to exchange from one of the processes to another of the processes during execution in the separate processor times.10-08-2009
20110225591HYPERVISOR, COMPUTER SYSTEM, AND VIRTUAL PROCESSOR SCHEDULING METHOD - A hypervisor calculates the total number of processor cycles (the number of processor cycles of one or more physical processors) in a first length of time based on the sum of the operating frequencies of the respective physical processors and the first length of time for each first length of time (for example, a scheduling initialization cycle T…).09-15-2011
20090210879METHOD FOR DISTRIBUTING COMPUTING TIME IN A COMPUTER SYSTEM - The invention relates to a method for distributing computing time in a computer system on which run a number of partial processes or threads to which an assignment process or scheduler assigns computing time as required, priorities being associated with individual threads and the assignment of computing time being carried out according to the respective priorities. According to said method, the individual threads are respectively associated with a number of time priority levels. A first time priority level contains threads to which computing time is assigned as required at any time. A first scheduler respectively allocates a time slice to the individual time priority levels, and respectively activates one of the time priority levels for the duration of the time slice thereof. A second scheduler monitors the threads of the first time priority level and the threads of the respectively activated time priority level, and assigns computing time to said threads according to the priorities thereof.08-20-2009
20090199191Notification to Task of Completion of GSM Operations by Initiator Node - In a global shared memory (GSM) environment, a method provides local notification of completion of a global shared memory (GSM) operation processed by a first task executing at a local node of the distributed system. The system includes multiple nodes on which different tasks of a single job execute and perform GSM operations that are received from a second task via a host fabric interface (HFI) and an associated HFI window assigned to the first task. The local task initiates execution of a GSM operation on the local node. The task then monitors for and detects a completion of the execution of the GSM operation on the local node. When the task detects completion of the execution of the GSM operation, the task issues an internal notification to inform the locally-executing tasks of the completion of the GSM operation.08-06-2009
20090083745Techniques for Maintaining Task Sequencing in a Distributed Computer System - A technique for operating a distributed computer system includes receiving one or more current processing task elements. Each of the one or more respective current processing elements is associated with a different task that is currently being processed in a server cluster. A first task element is selected from the one or more respective current processing task elements and respective servers in the server cluster are requested to update pending task elements, including the one or more respective current processing task elements, based on the first task element.03-26-2009
20120291037METHOD AND APPARATUS FOR PRIORITIZING PROCESSOR SCHEDULER QUEUE OPERATIONS - A method and processor are described for implementing programmable priority encoding to track relative age order of operations in a scheduler queue. The processor may comprise a scheduler queue configured to maintain an ancestry table including a plurality of consecutively numbered row entries and a plurality of consecutively numbered columns. Each row entry includes one bit in each of the columns. Pickers are configured to pick an operation that is ready for execution based on the age of the operation as designated by the ancestry table. The column number of each bit having a select logic value indicates an operation that is older than the operation associated with the number of the row entry that the bit resides in.11-15-2012
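The ancestry-table idea described in the entry above, a bit matrix in which row i marks which entries are older than entry i, can be sketched as follows. The class interface and the Python-level representation are assumptions; the hardware would implement this with registers and priority encoders.

```python
class AgeMatrix:
    """Track relative age of scheduler-queue entries with a bit matrix:
    older[i][j] == 1 means entry j is older than entry i. The oldest
    ready entry is the one whose row shows no older entry among the
    ready columns, which a picker can find with simple bit logic.
    """
    def __init__(self, size):
        self.size = size
        self.older = [[0] * size for _ in range(size)]
        self.occupied = set()

    def insert(self, row):
        # Every currently occupied entry is older than the new arrival.
        for j in range(self.size):
            self.older[row][j] = 1 if j in self.occupied else 0
        self.occupied.add(row)

    def remove(self, row):
        self.occupied.discard(row)
        for i in range(self.size):
            self.older[i][row] = 0  # no entry treats the freed slot as older

    def pick_oldest(self, ready):
        # Pick the ready entry with no 1-bit in any other ready column.
        for r in ready:
            if not any(self.older[r][j] for j in ready if j != r):
                return r
        return None
```

Because age is stored per-pair rather than by physical position, entries can be freed and reused out of order without compacting the queue.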
20120144396Creating A Thread Of Execution In A Computer Processor - Creating a thread of execution in a computer processor, including copying, by a hardware processor opcode called by a user-level process, with no operating system involvement, register contents from a parent hardware thread to a child hardware thread, the child hardware thread being in a wait state, and changing, by the hardware processor opcode, the child hardware thread from the wait state to an ephemeral run state.06-07-2012
20090260013Computer Processors With Plural, Pipelined Hardware Threads Of Execution - Computer processors and methods of operation of computer processors that include a plurality of pipelined hardware threads of execution, each thread including a plurality of computer program instructions; an instruction decoder that determines dependencies and latencies among instructions of a thread; and an instruction dispatcher that arbitrates, in the presence of resource contention and in accordance with the dependencies and latencies, priorities for dispatch of instructions from the plurality of threads of execution.10-15-2009
20090064155TASK MANAGER AND METHOD FOR MANAGING TASKS OF AN INFORMATION SYSTEM - Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to.03-05-2009
20110231853METHOD AND APPARATUS FOR MANAGING REALLOCATION OF SYSTEM RESOURCES - A capability is provided for reallocating, to a first borrower that is requesting resources, resources presently allocated to a second borrower. A method for allocating a resource of a system includes receiving a request for a system resource allocation from a first borrower, determining a request priority of the first borrower based on a present resource allocation associated with the first borrower, determining a hold priority of a second borrower based on a present resource allocation associated with the second borrower, and determining, using the first borrower request priority and the second borrower hold priority, whether to reallocate any of the second borrower resource allocation to the first borrower.09-22-2011
20080263555Task Processing Scheduling Method and Device for Applying the Method - The invention relates to a method for scheduling the processing of tasks and to the associated device, the processing of a task comprising a step for configuring resources required for executing the task and a step for executing the task on the thereby configured resources, the method comprising a selection (10-23-2008
20110231856System and method for dynamically managing tasks for data parallel processing on multi-core system - A dynamic task management system and method for data parallel processing on a multi-core system are provided. The dynamic task management system may generate a registration signal for a task to be parallel processed, may generate a dynamic management signal used to dynamically manage at least one task, in response to the generated registration signal, and may control the at least one task to be created or cancelled in at least one core in response to the generated dynamic management signal.09-22-2011
20110231854Method and Infrastructure for Optimizing the Utilization of Computer System's Resources - The present invention optimizes the utilization of computer system resources by considering predefined performance targets of multithreaded applications using the resources. The performance and utilization information for a set of multithreaded applications is provided. Using the performance and utilization information, the invention determines overutilized resources. Using the performance information, the invention also identifies threads and corresponding applications using an overutilized resource. The priority of the identified threads using said overutilized resource is adjusted to maximize a number of applications meeting their performance targets. The adjustments of priorities are executed via a channel that provides the performance and utilization information.09-22-2011
20090100432FORWARD PROGRESS MECHANISM FOR A MULTITHREADED PROCESSOR - A processing device includes a storage component configured to store instructions associated with a corresponding thread of a plurality of threads, and an execution unit configured to fetch and execute instructions. The processing device further includes a period timer comprising an output to provide an indicator in response to a count value of the period timer reaching a predetermined value based on a clock signal. The processing device additionally includes a plurality of thread forward-progress counter components, each configured to adjust a corresponding execution counter value based on an occurrence of a forward-progress indicator while instructions of a corresponding thread are being executed. The processing device further includes a thread select module configured to select threads of the plurality of threads for execution by the execution unit based on a state of the period timer and a state of each of the plurality of thread forward-progress counter components.04-16-2009
20090106762Scheduling Threads In A Multiprocessor Computer - Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register.04-23-2009
20090254914OPTIMIZED USAGE OF COLLECTOR RESOURCES FOR PERFORMANCE DATA COLLECTION THROUGH EVEN TASK ASSIGNMENT - A method of balancing computer resources on a network of computers is provided employing a two-tier network architecture of at least one High Level Collector as a scheduler/load balancing server, and a plurality of Low Level Collectors which gather task data and execute instructions. Tasks are assigned priority and weight scores and sorted prior to assignment to Low Level Collectors. Also provided is a computer readable medium including instructions, wherein execution of the instructions by at least one computing device balances computer resources on a network of computers.10-08-2009
20120198464SAFETY CONTROLLER AND SAFETY CONTROL METHOD - The present invention relates to time partitioning to prevent a failure of processing while suppressing execution delay of interrupt processing even when the interrupt processing is executed. A safety controller includes: a processor; a system program for controlling allocation of an execution time of the processor to a safety-related task, a non-safety-related task, and an interrupt processing task; and an interrupt handler. Upon generation of an interrupt, the processor executes the interrupt handler to reserve execution of the interrupt processing task as an execution reserved task, and executes the system program to schedule the tasks in accordance with scheduling information on a safety-related TP to which the safety-related task belongs, a non-safety-related TP to which the non-safety-related task belongs, and a reservation execution TP to which the execution reserved task belongs. When execution of a task in a previous TP is finished before the period of the previous TP prior to the reservation execution TP has expired, the execution time in the previous TP is allocated to the execution reserved task.08-02-2012
20090222830Methods for Multi-Tasking on Media Players - This invention provides a method for multi-tasking on a media player in a time-slice-circular manner. The method comprises the steps of: dividing each of the different functions of the media player into a plurality of tasks by a controller unit; setting a priority to each of the tasks by the controller unit; checking the priority of said each of the tasks, and changing a state of a task from “READY” to “EXECUTING” according to the priority of the task by the controller unit; and executing the tasks alternately by using time slices associated therewith by the controller unit. Since all the tasks are executed within a short time, from the user's point of view, all the tasks are executed simultaneously. Thus, multi-tasking on the media player is achieved.09-03-2009
20090222831Scheduling network distributed jobs - A method and apparatus for scheduling processing jobs is described. In one embodiment, a scheduler receives a request to process one or more computation jobs. The scheduler generates a size metric corresponding to a size of an executable image of each computation job and a corresponding data set associated with each computation job. The scheduler adjusts a priority of each computation job based on a system configuration setting and schedules the processing of each computation job according to the priority of each computation job. In another embodiment, the scheduler distributes the plurality of computation jobs on one or more processors of a computing system, where the system configuration setting prioritizes a computation job with a smaller size metric than a computation job with a larger size metric. In another embodiment, the scheduler distributes the computation jobs across a network of computing systems with one or more computation jobs distributed over one or more computing systems, where the system configuration setting prioritizes a computation job with a smaller size metric than a computation job with a larger size metric.09-03-2009
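A minimal sketch of the size-metric prioritization this abstract describes: each job's metric combines its executable image size and associated data set size, and the configuration setting that favors smaller metrics is modeled by a flag. Field names and the additive metric are illustrative assumptions.

```python
# Hypothetical sketch: rank computation jobs by a size metric
# (executable image size + data set size); smaller metrics run first.

def size_metric(job):
    """Combined size of the job's executable image and its data set."""
    return job["image_bytes"] + job["data_bytes"]

def schedule_order(jobs, smaller_first=True):
    """Order jobs by size metric; smaller_first models the system
    configuration setting that prioritizes smaller jobs."""
    return sorted(jobs, key=size_metric, reverse=not smaller_first)
```

Flipping `smaller_first` reverses the ordering, mirroring a configuration that instead favors larger jobs.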
20120198463PIPELINE NETWORK DEVICE AND RELATED DATA TRANSMISSION METHOD - A pipeline structure having a plurality of pipelines with varying data rates is used for transmitting data between different layers in a network device. Important data is transmitted by a faster pipeline, while less important data is transmitted by a slower pipeline. The size of each pipeline may be dynamically adjusted according to the transmission status of each pipeline for improving the overall data efficiency.08-02-2012
20090254913Information Processing System - An information processing system is provided to alleviate excessive load on a master node, thereby allowing the master node to efficiently perform the process of assigning jobs to nodes. A client 10-08-2009
20090249349Power-Efficient Thread Priority Enablement - A mechanism for controlling instruction fetch and dispatch thread priority settings in a thread switch control register for reducing the occurrence of balance flushes and dispatch flushes for increased power performance of a simultaneous multi-threading data processing system. To achieve a target power efficiency mode of a processor, the illustrative embodiments receive an instruction or command from a higher-level system control to set a current power consumption of the processor. The illustrative embodiments determine a target power efficiency mode for the processor. Once the target power mode is determined, the illustrative embodiments update thread priority settings in a thread switch control register for an executing thread to control balance flush speculation and dispatch flush speculation to achieve the target power efficiency mode.10-01-2009
20100180279FIELD CONTROL DEVICE AND FIELD CONTROL METHOD - A field control device is provided. The field control device includes: a task executing unit configured to selectively and sequentially execute a control task relating to a field control and other tasks in a same control period; and a priority switching unit configured to switch a relative priority of the control task relative to the other tasks in the control period, wherein the priority is a priority of an execution sequence of tasks in the task executing unit. The priority switching unit is configured to: i) set the priority higher than a certain priority, before the control task is started; and ii) set the priority lower than the certain priority, after the control task is ended.07-15-2010
20100162255DEVICE FOR RECONFIGURING A TASK PROCESSING CONTEXT - The present invention pertains to the field of onboard flight management systems embedded in aircraft. The invention relates to a reconfiguration device (06-24-2010
20100192154SEPARATION KERNEL WITH MEMORY ALLOCATION, REMOTE PROCEDURE CALL AND EXCEPTION HANDLING MECHANISMS - A computer-implemented system (07-29-2010
20090300632WORK REQUEST CONTROL SYSTEM - A work request control system for receiving work requests from input devices provides a priority queuing mechanism for performance of tasks by a finite pool of heterogeneous resources. An input receives work requests from input devices and an attribute mechanism receives the work requests and determines the values of each of multiple attributes for each work request. A queue mechanism, considering each request as a multi-dimensional eigenvector of the multiple attributes, calculates the relative distance of each eigenvector in relation to a reference eigenvector and asserts the work requests in a priority order determined by the relative distance of each eigenvector.12-03-2009
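As an illustrative sketch of the queuing rule this abstract describes: each request's attribute values form a vector, and requests are served in order of their Euclidean distance from a reference vector. Euclidean distance is my assumption; the patent does not specify the distance measure, and the field names are invented for illustration.

```python
import math

# Hypothetical sketch: prioritize work requests by the distance of their
# attribute vector from a reference vector (smallest distance first).

def distance(vec, ref):
    """Euclidean distance between an attribute vector and the reference."""
    return math.sqrt(sum((v - r) ** 2 for v, r in zip(vec, ref)))

def prioritize(requests, reference):
    """Order requests by distance of their 'attrs' vector from reference."""
    return sorted(requests, key=lambda req: distance(req["attrs"], reference))
```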
20100192153SELECTING EXECUTING REQUESTS TO PREEMPT - Requests that are executing when an application is determined to be in an overload condition are preempted. To select the executing requests to preempt, a value for each executing request is determined. Then, executing requests are selected for preemption based on the values.07-29-2010
20100211955CONTROLLING 32/64-BIT PARALLEL THREAD EXECUTION WITHIN A MICROSOFT OPERATING SYSTEM UTILITY PROGRAM - A method of programming operating system (O/S) utility C and C++ programs within the Microsoft professional development 32/64-bit parallel threads environment includes providing a computer unit running either a 32/64-bit Microsoft PC O/S or a 32/64-bit Microsoft Server O/S, and a Microsoft development tool, which is the Microsoft Visual Studio Development Environment for C and C++ for either the 32-bit O/S or the 64-bit O/S.08-19-2010
20110239220FINE GRAIN PERFORMANCE RESOURCE MANAGEMENT OF COMPUTER SYSTEMS - Execution of a plurality of tasks by a processor system is monitored. Based on this monitoring, tasks requiring adjustment of performance resources are identified by calculating at least one of a progress error or a progress limit error for each task. Thereafter, performance resources of the processor system allocated to each identified task are adjusted. Such adjustment can comprise: adjusting a clock rate of at least one processor in the processor system executing the task, adjusting an amount of cache and/or buffers to be utilized by the task, and/or adjusting an amount of input/output (I/O) bandwidth to be utilized by the task. Related systems, apparatus, methods and articles are also described.09-29-2011
20100242041Real Time Multithreaded Scheduler and Scheduling Method - In a particular embodiment, a method is disclosed that includes receiving an interrupt at a first thread, the first thread including a lowest priority thread of a plurality of executing threads at a processor at a first time. The method also includes identifying a second thread, the second thread including a lowest priority thread of a plurality of executing threads at a processor at a second time. The method further includes directing a subsequent interrupt to the second thread.09-23-2010
20100235842WORKFLOW PROCESSING SYSTEM, AND METHOD FOR CONTROLLING SAME - According to the present invention, any deficiency caused by the use of a resource, which is in a different state from that assumed upon workflow registration, can be prevented. The workflow processing method of the present invention acquires and holds a resource or feature quantity, which is required upon workflow execution, so as to employ it upon workflow execution. In this manner, after execution of the workflow, the present invention can avoid the workflow execution result which is not intended by a user who has registered the workflow.09-16-2010
20100251250LOCK-FREE SCHEDULER WITH PRIORITY SUPPORT - Techniques for implementing a lock-free scheduler with ordering support are described herein. In addition to the foregoing, other aspects are described in the claims, drawings, and text forming a part of the present disclosure. It can be appreciated by one of skill in the art that one or more various aspects of the disclosure may include but are not limited to circuitry and/or programming for effecting the herein-referenced aspects of the present disclosure; the circuitry and/or programming can be virtually any combination of hardware, software, and/or firmware configured to effect the herein-referenced aspects depending upon the design choices of the system designer.09-30-2010
20100211954PRACTICAL CONTENTION-FREE DISTRIBUTED WEIGHTED FAIR-SHARE SCHEDULER - Embodiments of the invention provide a method, system and computer program product for scheduling tasks in a computer system. In an embodiment, the method comprises receiving a multitude of sets of tasks, and placing the tasks in one or more task queues. The tasks are taken from the one or more task queues and placed in a priority queue according to a first rule. The tasks in the priority queue are assigned to a multitude of working threads according to a second rule based, in part, on share values given to the tasks. In an embodiment, the tasks of each of the sets are placed in a respective one task queue; and all of the tasks in the priority queue from each of the task queues, are assigned as a group to one of the working threads.08-19-2010
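A hedged sketch of the weighted fair-share ordering this abstract describes: tasks drawn from several task queues enter one priority queue, and assignment to working threads is proportional to each queue's share value. The virtual-finish-time bookkeeping below is one standard way to realize proportional shares; it is my illustration, not the patent's stated rule.

```python
import heapq

# Hypothetical sketch: serve task queues in proportion to their share
# values using a priority queue keyed by virtual finish time.

def fair_share_order(shares, rounds):
    """shares: dict queue_name -> positive share weight.
    Returns the sequence of queue names served over `rounds` decisions."""
    heap = [(1.0 / w, name) for name, w in shares.items()]
    heapq.heapify(heap)
    order = []
    for _ in range(rounds):
        finish, name = heapq.heappop(heap)
        order.append(name)
        # Next virtual finish time is farther away for smaller shares.
        heapq.heappush(heap, (finish + 1.0 / shares[name], name))
    return order
```

Over six decisions with shares 2:1, queue "a" is served four times and "b" twice, matching the 2:1 share ratio.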
20100251251APPARATUS AND METHOD FOR CPU LOAD CONTROL IN MULTITASKING ENVIRONMENT - An apparatus and a method for a Central Processing Unit (CPU) load control in a portable terminal capable of multitasking are provided. The method includes determining, by an application, an expected CPU load from a load table, requesting, by the application, a determination whether the expected CPU load is acceptable by providing the expected CPU load to a CPU load manager, providing, by the CPU load manager, a response including a result indicating whether the expected CPU load is acceptable or not to the application and executing, by the CPU, the application based on the result.09-30-2010
20100199282LOW BURDEN SYSTEM FOR ALLOCATING COMPUTATIONAL RESOURCES IN A REAL TIME CONTROL ENVIRONMENT - A low processing overhead resource manager for a control system uses control system state as a proxy for processing resource capacity, making judgments about execution of asynchronous services based on empirically derived data linked to the states.08-05-2010
20120144395Inter-Thread Data Communications In A Computer Processor - Inter-thread data communications in a computer processor with multiple hardware threads of execution, each hardware thread operatively coupled for communications through an inter-thread communications controller, where inter-thread communications is carried out by the inter-thread communications controller and includes: registering, responsive to one or more RECEIVE opcodes, one or more receiving threads executing the RECEIVE opcodes; receiving, from a SEND opcode of a sending thread, specifications of a number of derived messages to be sent to receiving threads and a base value; generating the derived messages, incrementing the base value once for each registered receiving thread so that each derived message includes a single integer as a separate increment of the base value; sending, to each registered receiving thread, a derived message; and returning, to the sending thread, an actual number of derived messages received by receiving threads.06-07-2012
20110113432COMPRESSED STORAGE MANAGEMENT - Compressed storage management includes assigning a selection priority and a priority level to multiple data units stored in an uncompressed portion of a storage resource. The management can further include compressing data units and storing the compressed data units in a compressed portion of the storage resource. The data units in the compressed portion are stored in regions, which each store data units having the same selection priority or the same selection priority level.05-12-2011
20090276781SYSTEM AND METHOD FOR MULTI-LEVEL PREEMPTION SCHEDULING IN HIGH PERFORMANCE PROCESSING - A computing system configured to handle preemption events in an environment having jobs with high and low priorities. The system includes a job queue configured to receive job requests from users, the job queue storing the jobs in an order based on the priority of the jobs, and indicating whether a job is a high priority job or a low priority job. The system also includes a plurality of node clusters, each node cluster including a plurality of nodes and a scheduler coupled to the job queue and to the plurality of node clusters and configured to assign jobs from the job queue to the plurality of node clusters. The scheduler is configured to preempt a first low priority job running in a first node cluster with a high priority job that appears in the job queue after the low priority job has started and, in the event that a second low priority job from the job queue may run on a portion of the plurality of nodes in the first node cluster during a remaining processing time for the high priority job, backfill the second low priority job into the portion of the plurality of nodes and, in the event a second high priority job is received in the job queue and may run on the portion of the plurality of nodes, return the second low priority job to the job queue.11-05-2009
20090055829METHOD AND APPARATUS FOR FINE GRAIN PERFORMANCE MANAGEMENT OF COMPUTER SYSTEMS - A system and method to control the allocation of processor (or state machine) execution resources to individual tasks executing in computer systems is described. By controlling the allocation of execution resources, to all tasks, each task may be provided with throughput and response time guarantees. This control is accomplished through workload metering shaping which delays the execution of tasks that have used their workload allocation until sufficient time has passed to accumulate credit for execution (accumulate credit over time to perform their allocated work) and workload prioritization which gives preference to tasks based on configured priorities.02-26-2009
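A minimal sketch of the "workload metering shaping" idea in this abstract: a task accumulates execution credit over time and is delayed once it has spent its allocation, until enough credit accrues again. The class name, rate/burst parameters, and credit arithmetic are illustrative assumptions in the style of a token bucket.

```python
# Hypothetical sketch: delay tasks that have used their workload
# allocation until sufficient credit for execution has accumulated.

class WorkloadMeter:
    def __init__(self, rate, burst):
        self.rate = rate    # credit earned per unit of time
        self.burst = burst  # maximum credit that can accumulate
        self.credit = burst
        self.last = 0.0

    def try_run(self, now, cost):
        """Return True if the task may execute work of the given cost now."""
        self.credit = min(self.burst, self.credit + (now - self.last) * self.rate)
        self.last = now
        if self.credit >= cost:
            self.credit -= cost
            return True
        return False  # task is delayed until credit accrues
```

A task that exhausts its credit is refused immediately afterward, then admitted again once elapsed time has replenished enough credit.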
20090187914SYSTEM AND METHOD FOR LOAD SHEDDING IN DATA MINING AND KNOWLEDGE DISCOVERY FROM STREAM DATA - Load shedding schemes for mining data streams. A scoring function is used to rank the importance of stream elements, and those elements with high importance are investigated. In the context of not knowing the exact feature values of a data stream, the use of a Markov model is proposed herein for predicting the feature distribution of a data stream. Based on the predicted feature distribution, one can make classification decisions to maximize the expected benefits. In addition, there is proposed herein the employment of a quality of decision (QoD) metric to measure the level of uncertainty in decisions and to guide load shedding. A load shedding scheme such as presented herein assigns available resources to multiple data streams to maximize the quality of classification decisions. Furthermore, such a load shedding scheme is able to learn and adapt to changing data characteristics in the data streams.07-23-2009
20090320033DATA STORAGE RESOURCE ALLOCATION BY EMPLOYING DYNAMIC METHODS AND BLACKLISTING RESOURCE REQUEST POOLS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation system returns to the top of the ordered plan, so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan.12-24-2009
20090187913ORDERING MULTIPLE RESOURCES - A method of ordering multiple resources in a transaction comprising receiving a transaction for a plurality of resources; determining, for each resource, the work embodied by the transaction; ordering the resources according to the determination of the work; committing the transaction; and invoking the resources in the selected order. The step of ordering the resources may comprise specifying the resource to be invoked last. Alternatively, or additionally, the step of ordering the resources may also comprise specifying that each resource carrying out read-only work is to be invoked first.07-23-2009
20090320034DATA PROCESSING APPARATUS - A data processing apparatus has a memory element array (12-24-2009
20090320032SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PREVENTING STARVATIONS OF TASKS IN A MULTIPLE PROCESSING ENTITY SYSTEM - A system, computer program and a method for preventing starvations of tasks in a multiple-processing entity system, the method includes: examining, during each scheduling iteration, an eligibility of each task data structure out of a group of data structures to be moved from a sorted tasks queue to a ready for execution task; updating a value, during each scheduling iteration, of a queue starvation watermark value of each task data structure that is not eligible to move to a running tasks queue, until a queue starvation watermark value of a certain task data structure out of the group reaches a queue starvation watermark threshold; and generating a task starvation indication if during an additional number of scheduling iterations, the certain task data structure is still prevented from being moved to a running tasks queue, wherein the additional number is responsive to a task starvation watermark.12-24-2009
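An illustrative sketch of the starvation-watermark scheme this abstract describes: a task data structure that stays ineligible across scheduling iterations has a watermark value incremented until it reaches a threshold, and a starvation indication is generated only if the task remains blocked for an additional number of iterations. Class, field, and parameter names are assumptions for illustration.

```python
# Hypothetical sketch: detect task starvation via a queue starvation
# watermark plus an additional grace count of scheduling iterations.

class StarvationTracker:
    def __init__(self, watermark_threshold, extra_iterations):
        self.threshold = watermark_threshold
        self.extra = extra_iterations
        self.watermark = 0
        self.over_threshold_for = 0

    def iteration(self, eligible):
        """Call once per scheduling iteration; returns True on starvation."""
        if eligible:
            # Task moved to the running queue; reset all counters.
            self.watermark = 0
            self.over_threshold_for = 0
            return False
        if self.watermark < self.threshold:
            self.watermark += 1
            return False
        # Watermark threshold reached; count the additional iterations.
        self.over_threshold_for += 1
        return self.over_threshold_for >= self.extra
```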
20110107342PROCESS SCHEDULER EMPLOYING ORDERING FUNCTION TO SCHEDULE THREADS RUNNING IN MULTIPLE ADAPTIVE PARTITIONS - A system is set forth that includes a processor, one or more memory storage units, and software code stored in the one or more memory storage units. The software code is executable by the processor to generate a plurality of adaptive partitions that are each associated with one or more process threads. Each of the plurality of adaptive partitions has one or more corresponding scheduling attributes that are assigned to it. The software code further includes a scheduling system that is executable by the processor for selectively allocating the processor to run the process threads based on a comparison between ordering function values for each adaptive partition. The ordering function value for each adaptive partition is calculated using one or more of the scheduling attributes of the corresponding adaptive partition. The scheduling attributes that may be used to calculate the ordering function value include, for example, 1) the process budget, such as a guaranteed time budget, of the adaptive partition, 2) the critical budget, if any, of the adaptive partition, 3) the rate at which the process threads of an adaptive partition consume processor time, or the like. For each adaptive partition that is associated with a critical thread, a critical ordering function value also may be calculated. The scheduling system may compare the ordering function value with the critical ordering function value of the adaptive partition to determine the proper manner of billing the adaptive partition for the processor allocation used to run its associated critical threads. Methods of implementing various aspects of such a system are also set forth.05-05-2011
20090113439Method and Apparatus for Processing Data - Methods and apparatuses for processing data are provided. In one embodiment, a data processing operation which is assigned a predefined maximum duration is started. The progress of the data processing operation is checked at a predefined point in time and a priority of the data processing operation is changed on the basis of the progress of the data processing operation.04-30-2009
20090113438OPTIMIZATION OF JOB DISTRIBUTION ON A MULTI-NODE COMPUTER SYSTEM - A method and apparatus optimizes job and data distribution on a multi-node computing system. A job scheduler distributes jobs and data to compute nodes according to priority and other resource attributes to ensure the most critical work is done on the nodes that are quickest to access and with less possibility of node communication failure. In a tree network configuration, the job scheduler distributes critical jobs and data to compute nodes that are located closest to the I/O nodes. Other resource attributes include network utilization, constant data state, and class routing.04-30-2009
20120246659TECHNIQUES TO OPTIMIZE UPGRADE TASKS - Techniques to prioritize and optimize the execution of upgrade operations are described. A technique may include determining the size of data blocks that are to be copied from one storage medium to another, and the dependencies of upgrade tasks on the data blocks and on other tasks. A task may be prioritized according to a weight that includes the cumulative sizes of the data blocks that it and its dependent tasks depend on. A data block copying may be prioritized according to the cumulative weights of the tasks that depend on that data block. Some embodiments may perform several data copying and/or tasks in parallel, rather than sequentially. Other embodiments are described and claimed.09-27-2012
20080244593TASK ROSTER - A task roster. A task roster can include a visual list of component tasks, the component tasks collectively forming a high-level task; a specified sequence in which the component tasks are to be performed; and, one or more visual status indicators, each visual status indicator having a corresponding component task, each visual status indicator further indicating whether the corresponding component task has been performed in the specified sequence. The task roster also can include a component task initiator configured to launch a selected component task in the visual list of component tasks upon a user-selection of the selected component task.10-02-2008
20110119674SCHEDULING METHOD, SCHEDULING APPARATUS AND MULTIPROCESSOR SYSTEM - A thread status managing unit organizes a plurality of threads into groups and manages the status of the thread groups. A ready queue queues thread groups in a ready state or a running state in the order of priority and, within the same priority level, in the FIFO order. An assignment list generating unit sequentially retrieves the thread groups from the ready queue. The assignment list generating unit appends a retrieved thread group to a thread assignment list only when all threads belonging to the retrieved thread group are assignable to the respective processors at the same time. A thread assigning unit assigns all threads belonging to the thread groups stored in the thread assignment list to the respective processors.05-19-2011
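A hedged sketch of the gang-style assignment rule this abstract describes: thread groups are taken from the ready queue in priority/FIFO order, and a group joins the assignment list only when every one of its threads can receive a free processor at the same time. Whether a non-fitting group is skipped or blocks the scan is not stated in the abstract; skipping is assumed here, and all names are illustrative.

```python
# Hypothetical sketch: build a thread assignment list from ready thread
# groups, admitting a group only if all its threads fit simultaneously.

def build_assignment_list(ready_groups, num_processors):
    """ready_groups: list of (group_name, thread_count) already in
    priority/FIFO order. Returns names of groups whose threads all fit."""
    free = num_processors
    assigned = []
    for name, threads in ready_groups:
        if threads <= free:  # every thread in the group gets a processor
            assigned.append(name)
            free -= threads
        # otherwise the group is passed over this scheduling pass
    return assigned
```

With four processors, a 3-thread group that arrives after a 2-thread group is passed over, while a later 1-thread group still fits.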
20080229317Method for optimizing a link schedule - A method for improving a link schedule used in a communications network is disclosed. While the method applies generally to networks that operate on a scheduled communications basis, it is described in the context of a Foundation FIELDBUS. The method includes: scheduling sequences and their associated publications according to their relative priority, per application; minimizing delays between certain function blocks, and between certain function blocks and publications; and grouping certain publications. Accordingly, advantages such as latency reduction, schedule length reduction, and improved communications capacity are gained.09-18-2008
20080229316DATA PROCESSING DEVICE AND ELECTRONIC DEVICE - A data processing device includes: an execution unit; and a memory unit, wherein the memory unit stores a plurality of pre-processing data on which a processing is to be rendered at a plurality of times prior to a specified time; (1) when a value of specified pre-processing data at the specified time is in a range between a maximum value and a minimum value among values of the plurality of pre-processing data, the execution unit renders the processing on the specified pre-processing data; and (2) when the value of the specified pre-processing data is greater than the maximum value or smaller than the minimum value, the execution unit renders the processing on an arbitrary value that is deemed substantively in the range between the maximum value and the minimum value, instead of the value of the specified pre-processing data.09-18-2008
20090037918Thread sequencing for multi-threaded processor with instruction cache - Execution of the first thread of a new program is prioritized ahead of older threads for a previously running program. The new program is invoked during the execution of a thread of the previous program. The first thread of the program is prioritized ahead of the remaining threads of the previous program. In an embodiment of the invention, additional threads of the new program are also prioritized ahead of the older threads. A thread's context may include a table of constant values that can be referenced by each program and are shared by multiple threads. Changing the values in a constant table for a new thread is time intensive. To avoid changes to the constant table (and thereby save time), a higher priority status is conferred to the first thread that follows a change to the constant table.02-05-2009
20090070765XML-BASED CONFIGURATION FOR EVENT PROCESSING NETWORKS - An event server runs an event-driven application implementing an event processing network. The event processing network can include at least one processor to implement a rule and at least one input stream. Priority for parts of the event processing network can be settable by a user.03-12-2009
20130132965STATUS TOOL TO EXPOSE METADATA READ AND WRITE QUEUES - A method to expose status information is provided. The status information is associated with metadata extracted from multimedia files and stored in a metadata database. The metadata information that is extracted from the multimedia files is stored in a read queue to allow a background thread to process the metadata and populate the metadata database. Additionally, the metadata database may be updated to include user-defined metadata, which is written back to the multimedia files. The user-defined metadata is included in a write queue and is written to the multimedia files associated with the user-defined metadata. The status of the read and write queues is exposed to a user through a graphical user interface. The status may include the list of multimedia files included in the read and write queues, the priority of each multimedia file, and the number of remaining multimedia files.05-23-2013
20100306779WORKFLOW MANAGEMENT SYSTEM AND METHOD - Systems and methods improve the equitable distribution of the processing capacity of a computing device processing work items retrieved from multiple queues in a workflow system. A retrieval priority is determined for each of the plurality of queues and work items are retrieved from each of the multiple queues according to the retrieval priority. The retrieved work items are then stored in a central data structure. Multiple processing components process the work items stored in the central data structure. The number of processing components is selectively adjusted to maximize efficiency.12-02-2010
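One plausible reading of the retrieval step above, sketched in Python: queues are drained into the central data structure in ascending retrieval-priority order. The convention that a lower value means "retrieved first" is an assumption of this sketch.

```python
def fill_central_structure(queues, retrieval_priority):
    """Drain each named queue into one central list ('central data
    structure'), visiting queues in retrieval-priority order."""
    central = []
    for name in sorted(queues, key=lambda n: retrieval_priority[n]):
        central.extend(queues[name])
        queues[name].clear()
    return central
```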
20100306778LOCALITY-BASED SCHEDULING IN CONTINUATION-BASED RUNTIMES - A computer system establishes an execution environment for executing activities in a continuation based runtime including instantiating an activity scheduler configured to perform the following: scheduling activities for execution in the CBR. The activity scheduler resolves the scheduled activity's arguments and variables prior to invoking the scheduled activity using the activity's unique context. The activity scheduler also determines, based on the activity's unique context, whether the scheduled activity comprises a work item that is to be queued at the top of the execution stack and, based on the determination, queues the work item to the execution stack. The computer system executes the work items of the scheduled activity as queued in the execution stack of the established execution environment in the CBR.12-02-2010
20100325634Method of Deciding Migration Method of Virtual Server and Management Server Thereof - Occupancy amount of physical resource of a virtual server (VS) is calculated based on maximum physical resource amount indicating performance of a physical server (PS), the occupied virtual resource coefficient indicating relation of physical resource amount used by the VS to the physical resource amount allocated to the VS and the allocated physical resource coefficient indicating relation of the allocated physical resource to the maximum physical resource amount of the PS, and change value of the occupied physical resource amount from a predetermined occupied physical resource amount is calculated based on the calculated occupancy amount and the predetermined occupied physical resource amount. The migration time required of the VS is calculated based on the calculated change value, variation ratio indicating degree of influence exerted by change of the occupied virtual resource coefficient of the VS on the required migration time and reference execution time set based on the predetermined occupied physical resource amount.12-23-2010
20100325633Searching Regular Expressions With Virtualized Massively Parallel Programmable Hardware - Logic and state information suitable for execution on a programmable hardware device may be generated from a task, such as evaluating a regular expression against a corpus. Hardware capacity requirements of the logic and state information on the programmable hardware device may be estimated. Once estimated, a plurality of the logic and state information generated from a plurality of tasks may be distributed into sets such that the logic and state information of each set fits within the hardware capacity of the programmable hardware device. The tasks within each set may be configured to execute in parallel on the programmable hardware device. Sets may then be executed in series, permitting virtualization of the resources.12-23-2010
20100325635Method for correct-by-construction development of real-time-systems - Methods and implementations for constructing a real-time system are disclosed. The real-time system includes at least one module, each module having at least one mode. According to an embodiment, a method comprises: defining a mode period for each mode for a repeated execution of the respective mode by the corresponding module; for each mode, defining one or more synchronous tasks to be executed by the real-time system, whereby each synchronous task is associated with a logical execution time during which the task execution has to be completed; defining an integer number of time-slots for the mode period of each mode; assigning to each task at least one time slot during which the task is to be executed.12-23-2010
20090138880Method for organizing a multi-processor computer - The invention relates to computer engineering and can be used for developing new-architecture multiprocessor multithreaded computers. The aim of the invention is to produce a novel method for organizing a computer, devoid of the disadvantageous feature of existing multithreaded computers, i.e., overhead costs due to the reload of thread descriptors. The inventive method encompasses using a distributed presentation which does not require loading the thread descriptors in the computer multi-level virtual memory, thereby providing, together with current synchronizing hardware, the uniform representation of all independent activities in the form of threads, the multi-program control of which is associated with a priority pull-down with an accuracy of individual instructions and is totally carried out by means of hardware.05-28-2009
20100083266 METHOD AND APPARATUS FOR ACCESSING A SHARED DATA STRUCTURE IN PARALLEL BY MULTIPLE THREADS - A method of accessing a shared data structure in parallel by multiple threads in a parallel application program is disclosed, in which a lock of the shared data structure is granted to one thread of the multiple threads, an operation of the thread which acquires the lock is performed on the shared data structure, then an operation of each thread of the multiple threads which does not acquire the lock is buffered, and finally the buffered operations are performed on the shared data structure when another thread of the multiple threads subsequently acquires the lock. By using this method, the operations of other threads which do not acquire the lock of the shared data structure can be buffered automatically when the shared data structure is locked by one thread, and all the buffered operations can be performed when another thread acquires the lock. Therefore when the shared data structure is modified, the occurrences of an element shift in the shared data structure can be greatly reduced and the access performance of the multiple threads can be improved. A corresponding apparatus and program product are also disclosed.04-01-2010
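The buffering pattern above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: a thread that fails to take the lock records its append as a pending operation, and the next thread to acquire the lock drains the buffer before doing its own work.

```python
import threading

class BufferedList:
    """Appends from threads that miss the lock are buffered; whichever
    thread next acquires the lock applies the buffered operations first."""
    def __init__(self):
        self._lock = threading.Lock()          # guards the shared list
        self._pending_lock = threading.Lock()  # guards the buffer
        self._data = []
        self._pending = []

    def append(self, item):
        if self._lock.acquire(blocking=False):
            try:
                self._drain()
                self._data.append(item)
            finally:
                self._lock.release()
        else:
            with self._pending_lock:
                self._pending.append(item)

    def _drain(self):
        with self._pending_lock:
            self._data.extend(self._pending)
            self._pending.clear()
```

The single-element buffer lock keeps the sketch simple; a production version would need to address ordering guarantees between direct and buffered operations.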
20100333099MESSAGE SELECTION FOR INTER-THREAD COMMUNICATION IN A MULTITHREADED PROCESSOR - A method and circuit arrangement process a workload in a multithreaded processor that includes a plurality of hardware threads. Each thread receives at least one message carrying data to process the workload through a respective inbox from among a plurality of inboxes. A plurality of messages are received at a first inbox among the plurality of inboxes, wherein the first inbox is associated with a first thread among the plurality of hardware threads, and wherein each message is associated with a priority. From the plurality of received messages, a first message is selected to process in the first thread based on that first message being associated with the highest priority among the received messages. A second message is selected to process in the first thread based on that second message being associated with the earliest time stamp among the received messages and in response to processing the first message.12-30-2010
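The two selection rules above (highest priority first, earliest timestamp among equals) reduce to a single composite key. A sketch with assumed message fields:

```python
def select_message(inbox):
    """Pick the message with the highest priority; break ties by the
    earliest time stamp, as the abstract describes."""
    return min(inbox, key=lambda m: (-m["priority"], m["timestamp"]))
```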
20100333102Distributed Real-Time Operating System - A distributed control system and methods of operating such a control system are disclosed. In one embodiment, the distributed control system is operated in a manner in which interrupts are at least temporarily inhibited from being processed to avoid excessive delays in the processing of non-interrupt tasks. In another embodiment, the distributed control system is operated in a manner in which tasks are queued based upon relative timing constraints that they have been assigned. In a further embodiment, application programs that are executed on the distributed control system are operated in accordance with high-level and/or low-level requirements allocated to resources of the distributed control system.12-30-2010
20100333101VIRTUALISED RECEIVE SIDE SCALING - A method for receiving packet data by means of a data processing system having a plurality of processing cores and supporting a network interface device and a set of at least two software domains, each software domain carrying a plurality of data flows and each supporting at least two delivery channels, the method comprising: receiving at the network interface device packet data that is part of a particular data flow; selecting in dependence on one or more characteristics of the packet data a delivery channel of a particular one of the software domains, said delivery channel being associated with a particular one of the processing cores of the system; and mapping the incoming packet data into said selected delivery channel such that receive processing of the packet is performed by the same processing core that performed receive processing for preceding packets of that data flow.12-30-2010
20100333098DYNAMIC TAG ALLOCATION IN A MULTITHREADED OUT-OF-ORDER PROCESSOR - Various techniques for dynamically allocating instruction tags and using those tags are disclosed. These techniques may apply to processors supporting out-of-order execution and to architectures that supports multiple threads. A group of instructions may be assigned a tag value from a pool of available tag values. A tag value may be usable to determine the program order of a group of instructions relative to other instructions in a thread. After the group of instructions has been (or is about to be) committed, the tag value may be freed so that it can be re-used on a second group of instructions. Tag values are dynamically allocated between threads; accordingly, a particular tag value or range of tag values is not dedicated to a particular thread.12-30-2010
20110010721Managing Virtualized Accelerators Using Admission Control, Load Balancing and Scheduling - A system and method is shown that includes an admission control module that resides in a management/driver domain, the admission control module to admit a domain that is part of a plurality of domains, into the computer system based upon one of a plurality of accelerators satisfying a resource request of the domain. The system and method also includes a load balancer module, which resides in the management/driver domain, the load balancer to balance at least one load from the plurality of domains across the plurality of accelerators. Further, the system and method also includes a scheduler module that resides in the management/driver domain, the scheduler to multiplex multiple requests from the plurality of domains to one of the plurality of accelerators.01-13-2011
20090144739PERSISTENT SCHEDULING TECHNIQUES - Techniques for persistent scheduling are provided. A principal registers a schedule with a network-based scheduling service. The scheduling service determines when a trigger is to be sent to a client associated with the principal for purposes of having that client process a particular action. The trigger is sent when the client is detected as being online; and when the client is offline, the trigger is sent as soon as the client comes online. Furthermore, once a trigger is successfully sent, a current date and time that the trigger was sent is maintained with the schedule for the client.06-04-2009
20110010722MEMORY SWAP MANAGEMENT METHOD AND APPARATUS, AND STORAGE MEDIUM - A memory swap management method that can preferentially place in a primary storage device a process that has a high possibility of being executed next, thereby shortening the time to start executing the next process. A planned execution sequence of jobs is stored when there are a plurality of jobs waiting to be executed. A process as a swap-out candidate and a process as a swap-in candidate are determined based on the execution sequence and types of processes stored in the primary storage device. According to the determination, the process as the swap-out candidate is swapped out from the primary storage device to a secondary storage device, and the process as the swap-in candidate is swapped in from the secondary storage device into an area of the primary storage device freed as a result of the swap-out.01-13-2011
20110029979Systems and Methods for Task Execution on a Managed Node - Systems and methods for executing tasks on a managed node remotely coupled to a management node are provided. A management controller of the management node may be configured to determine at least one execution policy for a task, schedule the task for execution, receive system information data from the managed node, based at least on the received system information, determine if the received system information complies with the at least one execution policy, and if the received information complies with the at least one execution policy, forward the task from the management controller to the managed node for execution.02-03-2011
20110029978DYNAMIC MITIGATION OF THREAD HOGS ON A THREADED PROCESSOR - Systems and methods for efficient thread arbitration in a processor. A processor comprises a multi-threaded resource. The resource may include an array of entries which may be allocated by threads. A thread arbitration table corresponding to a given thread stores a high and a low threshold value in each table entry. A thread history shift register (HSR) indexes the table, wherein each bit of the HSR indicates whether the given thread is a thread hog. When the given thread has more allocated entries in the array than the high threshold of the table entry, the given thread is stalled from further allocating array entries. Similarly, when the given thread has fewer allocated entries in the array than the low threshold of the selected table entry, the given thread is permitted to allocate entries. In this manner, threads that hog dynamic resources can be mitigated such that more resources are available to other threads that are not thread hogs. This can result in a significant increase in overall processor performance.02-03-2011
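A software sketch of the threshold logic above. The abstract leaves the behavior between the two thresholds unspecified, so the "hold" outcome is this sketch's assumption, as are the register width and table layout.

```python
class ThreadArbiter:
    """High/low-threshold arbitration indexed by a history shift register;
    a set bit marks an epoch in which the thread was a hog."""
    def __init__(self, table, bits=4):
        self.table = table        # 2**bits rows of (high, low) thresholds
        self.bits = bits
        self.history = 0          # history shift register (HSR)

    def update(self, allocated):
        high, low = self.table[self.history]
        if allocated > high:
            hog, decision = 1, "stall"       # stop further allocation
        elif allocated < low:
            hog, decision = 0, "allocate"    # allocation permitted
        else:
            hog, decision = 0, "hold"        # between thresholds (assumed)
        self.history = ((self.history << 1) | hog) & ((1 << self.bits) - 1)
        return decision
```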
20100180278Resource management apparatus and computer program product - Provided is a resource management apparatus for determining allocation of a resource to be consumed or supplied by each of a plurality of applications within a predetermined unit time in a bidding process. The resource management apparatus includes a bid value calculating unit configured to calculate a bid value representing a hypothetical price of the resource, a CPU price adjusting unit configured to adjust the bid value supplied by an application, which has a smaller resource consumption amount than another application, to be greater than the bid value of the another application, and a bid managing unit configured to allocate the resource to each of the plurality of applications taking the adjusted bid value into account.07-15-2010
20110035751Soft Real-Time Load Balancer - The present disclosure is based on a multi-core or multi-processor virtualized environment that comprises both time-sensitive and non-time-sensitive tasks. The present disclosure describes techniques that use a plurality of criteria to choose a processing resource that is to execute tasks. The present disclosure further describes techniques to re-schedule queued tasks from one processing resource to another processing resource, based on a number of criteria. Through load balancing techniques, the present invention both (i) favors the processing of soft real-time tasks arising from media servers and applications, and (ii) prevents “starvation” of the non-real-time general computing applications that co-exist with the media applications in a virtualized environment. These techniques, in the aggregate, favor the processing of soft real-time tasks while also reserving resources for non-real-time tasks. These techniques manage multiple processing resources to balance the competing demands of soft real-time tasks and of non-real-time tasks.02-10-2011
20110113431Method and apparatus for scheduling tasks to control hardware devices - In a method of scheduling tasks for controlling hardware devices, a specified task having the execution right in a current time slice is terminated by depriving the execution right therefrom, when a time during which the execution right continues reaches the activation time given to the specified task. An identification process is performed when each reference cycle has been completed or each task has been terminated. In the identification process, i) when there remain, time-guaranteed tasks which have not been terminated in the current time slice, a time-guaranteed task whose priority is maximum among the remaining tasks is identified, and ii) when there remain no un-terminated time-guaranteed tasks in the current slice, of remaining non-time-guaranteed tasks which are not terminated yet in the current time slice, a non-time-guaranteed task whose priority is maximum is identified. The execution right is assigned to the identified task through the identification process.05-12-2011
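The identification process above can be sketched as a single selection function: time-guaranteed tasks always win while any remain un-terminated, and priority decides within each class. Task fields are illustrative assumptions.

```python
def identify_next(time_guaranteed, non_time_guaranteed):
    """Pick the max-priority un-terminated time-guaranteed task; only when
    none remain, fall back to the non-time-guaranteed tasks."""
    pool = time_guaranteed if time_guaranteed else non_time_guaranteed
    if not pool:
        return None
    return max(pool, key=lambda t: t["priority"])
```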
20110055840METHOD FOR MANAGING THE SHARED RESOURCES OF A COMPUTER SYSTEM, A MODULE FOR SUPERVISING THE IMPLEMENTATION OF SAME AND A COMPUTER SYSTEM HAVING ONE SUCH MODULE - The disclosure aims to solve the general problem of managing the system with multiple resources of different types. In particular, the disclosure is intended for the sharing of resources between multiple applications that can be executed on a computer platform for situations involving the addition of new resources that were not initially provided in order to achieve these objectives, conflicts are avoided between shared resources starting at the application, with access rights being allocated for each application, while an opening is maintained for the addition of new applications and resources. More specifically, according to this method for managing the resources of a computer system, that are shared between multiple applications, allocation rules are provided during the execution of the applications and the rules generate access rights for each application in relation to each shared resource in the form of successive steps. The steps are controlled for each shared resource by a specific control module and, with each command, a decision criteria module parameterization step checks the rule for allocating access rights, whereby the decision criteria can be shared between at least parts of the control modules.03-03-2011
20090106761Programmable Controller with Multiple Processors Using a Scanning Architecture - Operating a programmable controller with a plurality of processors. The programmable controller may utilize a first subset of the plurality of processors for a scanning architecture. The first subset of the plurality of processors may be further subdivided for execution of periodic programs or asynchronous programs. The programmable controller may utilize a second subset of the plurality of processors for a data acquisition architecture. Execution of the different architectures may occur independently and may not introduce significant jitter (e.g., for the scanning architecture) or data loss/response time lag (e.g., for the data acquisition architecture). However, the programmable controller may operate according to any combination of the divisions and/or architectures described herein.04-23-2009
20100269116SCHEDULING AND/OR ORGANIZING TASK EXECUTION FOR A TARGET COMPUTING PLATFORM - Techniques are generally described relating to methods, apparatuses and articles of manufactures for scheduling and/or organizing execution of tasks on a computing platform. In various embodiments, the method may include identifying successively one or more critical time intervals, and scheduling and/or organizing task execution for each of the one or more identified critical time intervals. In various embodiments, one or more tasks to be executed may be scheduled to execute based in part on their execution completion deadlines. In various embodiments, organizing one or more tasks to execute may include selecting a virtual operating mode of the platform using multiple operating speeds lying on a convexity energy-speed envelope of the platform. Intra-task delay caused by switching operating mode may be considered. Other embodiments may also be described and/or claimed.10-21-2010
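Scheduling "based in part on execution completion deadlines" suggests an earliest-deadline-first ordering; the sketch below adds a feasibility check over cumulative execution cost. This is one plausible reading, not the patent's algorithm, and ignores the operating-mode/energy aspects.

```python
def edf_order(tasks):
    """Order tasks earliest-deadline-first and check that each finishes
    (by cumulative cost) before its completion deadline."""
    order = sorted(tasks, key=lambda t: t["deadline"])
    elapsed, feasible = 0, True
    for t in order:
        elapsed += t["cost"]
        if elapsed > t["deadline"]:
            feasible = False
    return order, feasible
```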
20110126206OPERATIONS MANAGEMENT APPARATUS OF INFORMATION-PROCESSING SYSTEM - Information processing equipment and power/cooling facilities are managed together for power savings without degrading system processing performance. An operations management apparatus 05-26-2011
20100218191Apparatus and Method for Processing Management Requests - Embodiments of the present invention provide a method of processing a management request, comprising determining a priority level of the management request based upon one or more predetermined priority criteria. In some embodiments, the management requests are based on a Common Information Model (CIM) and control or monitor operation of an entity.08-26-2010
20100131955Highly distributed parallel processing on multi-core device - There is provided a highly distributed multi-core system with an adaptive scheduler. By resolving data dependencies in a given list of parallel tasks and selecting a subset of tasks to execute based on provided software priorities, applications can be executed in a highly distributed manner across several types of slave processing cores. Moreover, by overriding provided priorities as necessary to adapt to hardware or other system requirements, the task scheduler may provide for low-level hardware optimizations that enable the timely completion of time-sensitive workloads, which may be of particular interest for real-time applications. Through this modularization of software development and hardware optimization, the conventional demand on application programmers to micromanage multi-core processing for optimal performance is thus avoided, thereby streamlining development and providing a higher quality end product.05-27-2010
20090031318APPLICATION COMPATIBILITY IN MULTI-CORE SYSTEMS - Scheduling of threads in a multi-core system running various legacy applications along with multi-core compatible applications is configured such that threads from older single thread applications are assigned fixed affinity. Threads from multi-thread/single core applications are scheduled such that one thread at a time is made available to the cores based on the thread priority preventing conflicts and increasing resource efficiency. Threads from multi-core compatible applications are handled regularly.01-29-2009
20110093859MULTIPROCESSOR SYSTEM, MULTIPLE THREADS PROCESSING METHOD AND PROGRAM - Conventionally, when the amount of data to be processed increases only for a part of threads, the processing efficiency of the whole transaction degrades. A multiprocessor system of the invention includes a plurality of processors executing multiple threads to process data; and a means which, based on an amount of data to be processed for each thread, determines a condition which an order in which the plurality of processors execute the threads should satisfy and starts to execute each thread so that the condition is satisfied.04-21-2011
20100037230METHOD FOR EXECUTING A PROGRAM RELATING TO SEVERAL SERVICES, AND THE CORRESPONDING ELECTRONIC SYSTEM AND DEVICE - The invention relates to a method for executing at least one program pertaining to at least one service included in a device having at least one memory space intended to be allocated for executing at least one of the services, and at least two access points for accessing services accessible from a network external to the device. The device associates a centralizing service with at least two access points and allocates a memory space to a service for receiving a request to connect to one of the services. The centralizing service is executed, making it possible to await reception of a connection request. In the absence thereof, only the centralizing service has the use of an allocated memory space. The invention also relates to a corresponding electronic device and system.02-11-2010
20100037228THREAD CONTROLLER FOR SIMULTANEOUS PROCESS OF DATA TRANSFER SESSIONS IN A PERSONAL TOKEN - The invention relates to a personal token running a series of applications, wherein said personal token includes a thread controller which transmits data from the applications to an external device in a cyclic way, a cycle being constituted of a series of data transfers from the applications and to the external device, a cycle comprising a respective number of data transfers dedicated to each respective application which is different according to the respective application, the number of data transfers for a respective application in a cycle corresponding to a priority level of the application as taken into account by the thread controller.02-11-2010
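The cycle structure above, where each application's number of transfer slots per cycle tracks its priority level, is essentially weighted round-robin. A minimal sketch with assumed inputs:

```python
def build_cycle(apps):
    """apps: (name, priority) pairs. Each application receives as many
    data-transfer slots per cycle as its priority level."""
    cycle = []
    for name, priority in apps:
        cycle.extend([name] * priority)
    return cycle
```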
20090037919Information-Theoretic View of the Scheduling Problem in Whole-Body Computer Aided Detection/Diagnosis (CAD) - A method for automatically scheduling tasks in whole-body computer aided detection/diagnosis (CAD), including: (a) receiving a plurality of tasks to be executed by a whole-body CAD system; (b) identifying a task to be executed, wherein the task to be executed has an expected information gain that is greater than that of each of the other tasks; (c) executing the task with the greatest expected information gain and removing the executed task from further analysis; and (d) repeating steps (b) and (c) for the remaining tasks.02-05-2009
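Steps (b)-(d) above form a greedy loop: run the task with the greatest expected information gain, remove it, repeat. A sketch, with the gain function supplied by the caller:

```python
def schedule_by_gain(tasks, expected_gain):
    """Repeatedly select the task with the greatest expected information
    gain and remove it from further analysis (steps (b)-(d))."""
    remaining, order = list(tasks), []
    while remaining:
        best = max(remaining, key=expected_gain)
        order.append(best)
        remaining.remove(best)
    return order
```

For a static gain function this is equivalent to a descending sort; the loop form matters when, as in CAD pipelines, executing one task changes the expected gain of the rest.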
20100037229Method and Device for Determining a Target State - In a method for determining a target state in a system having multiple components, system states of different priorities being selectable in the system as a function of an availability of the components, the following steps are provided: ascertaining whether a highest-priority system state is selectable; determining the highest-priority system state as the target state if the highest-priority system state is selectable; and ascertaining whether a next-higher-priority system state is selectable if the highest-priority system state is not selectable, and determining the next-higher-priority system state as the target state if said state is selectable.02-11-2010
20100064290COMPUTER-READABLE RECORDING MEDIUM STORING A CONTROL PROGRAM, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - A computer-readable recording medium stores a control program that causes a computer to execute a process that includes: an obtaining procedure for obtaining work procedure manual information about a plurality of ordered works and one or more unordered works associated with a range of a predetermined order; an input step of receiving an input; a recognizing procedure for recognizing whether the first work matches a second work that is initially-ordered in unexecuted ordered works among the plurality of ordered works or a third work associated with a range including the order of the second work among the one or more unordered works; and a control procedure for allowing execution of the first work if the first work matches the second work or the third work and denying execution of the first work if the first work does not match any of the second and third works.03-11-2010
20090217278PROJECT MANAGEMENT SYSTEM - A method and apparatus for managing a project are described. According to one embodiment, the method includes the steps of ranking the plurality of tasks to produce a first list; assigning a task cost to each of the plurality of tasks; setting a planned velocity, the planned velocity determining the rate at which task costs are planned to be completed per time segment; and dynamically assigning each of the plurality of tasks to one of the sequence of time segments in the order indicated by the first list based on the planned velocity. In other embodiments, the apparatus includes a machine-readable medium that provides instructions for a processor, which when executed by the processor cause the processor to perform a method of the present invention.08-27-2009
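The dynamic assignment step above can be sketched as bin-filling: walk the ranked task list and start a new time segment whenever adding the next task's cost would exceed the planned velocity. The data shapes are assumptions.

```python
def assign_to_segments(ranked_tasks, planned_velocity):
    """ranked_tasks: (name, cost) pairs in ranked order. A segment holds at
    most planned_velocity worth of task cost."""
    segments, current, spent = [], [], 0
    for name, cost in ranked_tasks:
        if current and spent + cost > planned_velocity:
            segments.append(current)
            current, spent = [], 0
        current.append(name)
        spent += cost
    if current:
        segments.append(current)
    return segments
```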
20100050178METHOD AND APPARATUS TO IMPLEMENT SOFTWARE TO HARDWARE THREAD PRIORITY - The invention relates to a method and apparatus for execution scheduling of a program thread of an application program and executing the scheduled program thread on a data processing system. The method includes: providing an application program thread priority to a thread execution scheduler; selecting for execution the program thread from a plurality of program threads inserted into the thread execution queue, wherein the program thread is selected for execution using a round-robin selection scheme, and wherein the round-robin selection scheme selects the program thread based on an execution priority associated with the program thread; placing the program thread in a data processing execution queue within the data processing system; and removing the program thread from the thread execution queue after a successful execution of the program thread by the data processing system.02-25-2010
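A minimal sketch of round-robin selection within priority levels, as this entry describes. The class name and the higher-number-means-higher-priority convention are assumptions.

```python
from collections import deque

class PriorityRoundRobin:
    """Round-robin thread selection, highest priority level first.

    One FIFO ring per priority level; `select` pops from the highest
    non-empty level, so rotation within a level is round-robin.
    """
    def __init__(self):
        self.rings = {}                 # priority -> deque of thread ids

    def insert(self, thread_id, priority):
        self.rings.setdefault(priority, deque()).append(thread_id)

    def select(self):
        for prio in sorted(self.rings, reverse=True):
            ring = self.rings[prio]
            if ring:
                return ring.popleft()   # round-robin within the level
        return None
```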
20110252428Virtual Queue Processing Circuit and Task Processor - A queue control circuit controls the placement and retrieval of a plurality of tasks in a plurality of types of virtual queues. State registers are associated with respective tasks. Each of the state registers stores a task priority order, a queue ID of a virtual queue, and the order of placement in the virtual queue. Upon receipt of a normal placement command ENQ_TL, the queue control circuit establishes, in the state register for the placed task, QID of the virtual queue as the destination of placement and an order value indicating the end of the queue. When a reverse placement command ENQ_TP is received, QID of the destination virtual queue and an order value indicating the start of the queue are established. When a retrieval command DEQ is received, QID is cleared in the destination virtual queue.10-13-2011
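The three commands described here (ENQ_TL, ENQ_TP, DEQ) map naturally onto a double-ended queue. This sketch models one virtual queue directly rather than the patent's per-task state registers.

```python
from collections import deque

class VirtualQueue:
    """Double-ended task queue mirroring ENQ_TL / ENQ_TP / DEQ.

    ENQ_TL places a task at the end of the queue (normal placement),
    ENQ_TP at the start (reverse placement), and DEQ retrieves from
    the start of the queue.
    """
    def __init__(self, qid):
        self.qid = qid
        self.tasks = deque()

    def enq_tl(self, task):   # normal placement: end of the queue
        self.tasks.append(task)

    def enq_tp(self, task):   # reverse placement: start of the queue
        self.tasks.appendleft(task)

    def deq(self):            # retrieval from the start of the queue
        return self.tasks.popleft() if self.tasks else None
```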
20110154346TASK SCHEDULER FOR COOPERATIVE TASKS AND THREADS FOR MULTIPROCESSORS AND MULTICORE SYSTEMS - In a computer system with a multi-core processor, the execution of tasks is scheduled in that a first queue for new tasks and a second queue for suspended tasks are related to a first core, and a third queue for new tasks and a fourth queue for suspended tasks are related to a second core. The tasks have instructions, the new tasks are tasks where none of the instructions have been executed by any of the cores, and the suspended tasks are tasks where at least one of the instructions has been executed by any of the cores. New tasks are popped from the first queue to the first core; and in case the first queue is empty, tasks are popped to the first core in the following preferential order: suspended tasks from the second queue, new tasks from the third queue, and suspended tasks from the fourth queue.06-23-2011
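The preferential popping order for the first core can be sketched directly. The plain lists used as FIFOs are a simplification of the described per-core queues.

```python
def next_task(own_new, own_suspended, other_new, other_suspended):
    """Pick the next task for a core in the described preferential
    order: the core's own new tasks first, then its own suspended
    tasks, then tasks taken from the other core's queues (new before
    suspended). Queues are lists treated as FIFOs; returns None when
    every queue is empty."""
    for queue in (own_new, own_suspended, other_new, other_suspended):
        if queue:
            return queue.pop(0)
    return None
```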
20110154345Multicore Processor Including Two or More Collision Domain Networks - Implementations and techniques for multicore processors having a domain interconnection network configured to associate a first collision domain network with a second collision domain network in communication are generally disclosed.06-23-2011
20080271028INFORMATION PROCESSING APPARATUS - According to one embodiment, an information processing apparatus, executing a process including a plurality of threads for reproduction of moving image data, includes a storage unit which stores priority information indicating a priority of a process of each of threads upon executing the process of the plurality of threads, and a processing unit which reads the priority information from the storage unit, reads the read priority information as a definition file, and executes the process of each of the threads in accordance with the priority information of the definition file.10-30-2008
20110078693METHOD FOR REDUCING THE WAITING TIME WHEN WORK STEPS ARE EXECUTED FOR THE FIRST TIME - A method and a medical computer system for executing the method are disclosed for reducing the waiting time for at least one user of the computer system when they first execute at least one work step in the computer system. The method includes pre-starting a process which is not yet assigned to a user, and loading into the process the services which the applications, initiated by a user to execute the at least one work step, are very likely to call, before the user is assigned to the process.03-31-2011
20120304187DYNAMIC TASK ASSOCIATION - An apparatus, system, and method are disclosed for dynamic task association. The method includes maintaining a plurality of projects. Each project may include a plurality of tasks specific to the project. The method may also include detecting a change in a particular task of a first project that affects one or more tasks of a second project. The first project and the second project may be of the plurality of projects and the second project may be independent from the first project. The method may also include updating one or more tasks of the second project affected by the change in response to detecting the change in the particular task of the first project.11-29-2012
20120304186Scheduling Mapreduce Jobs in the Presence of Priority Classes - Techniques for scheduling one or more MapReduce jobs in a presence of one or more priority classes are provided. The techniques include obtaining a preferred ordering for one or more MapReduce jobs, wherein the preferred ordering comprises one or more priority classes, prioritizing the one or more priority classes subject to one or more dynamic minimum slot guarantees for each priority class, and iteratively employing a MapReduce scheduler, once per priority class, in priority class order, to optimize performance of the one or more MapReduce jobs.11-29-2012
20090150891RESPONSIVE TASK SCHEDULING IN COOPERATIVE MULTI-TASKING ENVIRONMENTS - Task scheduling in cooperative multi-tasking environments is accomplished by a task scheduler that evaluates the relative priority of an executing task and tasks in a queue waiting to be executed. The task scheduler may issue a suspend request to lower priority tasks so that high priority tasks can be executed. Tasks are written or compiled with checks located at opportune locations for suspending and resuming the given task. The tasks under a suspend request continue operation until they reach a check, at which point the task will suspend operation depending on specific criteria. By allowing both the task and the task scheduler to assist in determining the precise timing of the suspension, the multi-tasking environment becomes highly efficient and responsive.06-11-2009
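The suspend-only-at-checks behaviour this entry describes can be sketched as follows; the step-counting task body is an illustrative stand-in for real work.

```python
class CooperativeTask:
    """Task that honors a scheduler's suspend request only at checks.

    Work proceeds in steps; `check()` is called at opportune points
    in the task body and suspends the task if a suspend request from
    the scheduler is pending."""
    def __init__(self, steps):
        self.steps = steps
        self.done = 0
        self.suspend_requested = False
        self.suspended = False

    def check(self):
        # Opportune suspension point compiled into the task.
        if self.suspend_requested:
            self.suspended = True

    def run(self):
        while self.done < self.steps and not self.suspended:
            self.done += 1      # one unit of work
            self.check()        # honor any pending suspend request
```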
20110078691STRUCTURED TASK HIERARCHY FOR A PARALLEL RUNTIME - The present invention extends to methods, systems, and computer program products for a structured task hierarchy for a parallel runtime. The parallel execution runtime environment permits flexible spawning and attachment of tasks to one another to form a task hierarchy. Parent tasks can be prevented from completing until any attached child sub-tasks complete. Exceptions can be aggregated in an exception array such that any aggregated exceptions for a task are available when the task completes. A shield mode is provided to prevent tasks from attaching to another task as child tasks.03-31-2011
20110078694CONTROL APPARATUS, CONTROL SYSTEM AND COMPUTER PROGRAM - A system management layer replaces a current program with a program (a door lock failure diagnosis judgment program, security judgment program, door lock judgment program, keyless entry judgment program, or the like) to be executed by an application layer, in accordance with an operation mode of on-vehicle equipment. Priorities of the programs are stored in advance for each operation mode, and a priority judgment program judges the priority of an operation request based on the operation mode. Thus, the plural programs of each hierarchical layer are categorized into groups per operation mode, even though they would be complicated within a single hierarchical layer. Therefore, it is possible to prevent the priority judgment processing from becoming complicated for the operation requests output by each computer program.03-31-2011
20110078692COALESCING MEMORY BARRIER OPERATIONS ACROSS MULTIPLE PARALLEL THREADS - One embodiment of the present invention sets forth a technique for coalescing memory barrier operations across multiple parallel threads. Memory barrier requests from a given parallel thread processing unit are coalesced to reduce the impact to the rest of the system. Additionally, memory barrier requests may specify a level of a set of threads with respect to which the memory transactions are committed. For example, a first type of memory barrier instruction may commit the memory transactions to a level of a set of cooperating threads that share an L1 (level one) cache. A second type of memory barrier instruction may commit the memory transactions to a level of a set of threads sharing a global memory. Finally, a third type of memory barrier instruction may commit the memory transactions to a system level of all threads sharing all system memories. The latency required to execute the memory barrier instruction varies based on the type of memory barrier instruction.03-31-2011
20110072435PRIORITY CONTROL APPARATUS AND PRIORITY CONTROL METHOD - A priority control apparatus according to the present invention includes: an OS execution unit which executes first tasks that run on a first OS and second tasks that run on a second OS; a task priority obtainment unit which obtains the priority of an execution task which is a first task being executed by the OS execution unit and the priority of a requested task which is a second task whose execution is being requested to the OS execution unit; and a priority changing unit which, in the case where the priority of the requested task is higher than the priority of the execution task, changes the priorities of the first tasks to be lower than the priority of the requested task and higher than the next lower priority to the requested task among the second tasks, while maintaining the relative order of the priorities among the first tasks.03-24-2011
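One way to sketch the described priority change — squeezing the first-OS task priorities between the requested second-OS task and the next lower second-OS priority, while preserving their relative order — under the assumption that larger numbers mean higher priority. This is only applied after the check that the requested task outranks the execution task.

```python
def rebase_first_os_priorities(first_prios, second_prios, requested):
    """Rebase first-OS task priorities below a requested second-OS task.

    All first-OS priorities are moved into the open interval between
    the requested task's priority and the next lower second-OS
    priority, keeping their relative order.
    """
    lower = max([p for p in second_prios if p < requested],
                default=requested - 1)
    n = len(first_prios)
    order = sorted(range(n), key=lambda i: first_prios[i])
    step = (requested - lower) / (n + 1)   # spread evenly in the gap
    new = [0.0] * n
    for rank, i in enumerate(order, start=1):
        new[i] = lower + rank * step
    return new
```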
20110035752Dynamic Techniques for Optimizing Soft Real-Time Task Performance in Virtual Machines - Methods are disclosed that dynamically improve soft real-time task performance in virtualized computing environments under the management of an enhanced hypervisor comprising a credit scheduler. The enhanced hypervisor analyzes the on-going performance of the domains of interest and of the virtualized data-processing system. Based on the performance metrics disclosed herein, some of the governing parameters of the credit scheduler are adjusted. Adjustments are typically performed cyclically, wherein the performance metrics of an execution cycle are analyzed and, if need be, adjustments are applied in a later execution cycle. In alternative embodiments, some of the analysis and tuning functions are in a separate application that resides outside the hypervisor. The performance metrics disclosed herein include: a “total-time” metric; a “timeslice” metric; a number of “latency” metrics; and a “count” metric. In contrast to prior art, the present invention enables on-going monitoring of a virtualized data-processing system accompanied by dynamic adjustments based on objective metrics.02-10-2011
20110061056PORTABLE DEVICE AND METHOD FOR PROVIDING SHORTCUTS IN THE PORTABLE DEVICE - A method and a portable device provide shortcuts in an operating system of the portable device. The method displays the shortcuts in a user interface of the operating system on a display unit of the portable device when a first process is operating in the portable device. An application menu corresponding to the shortcut is displayed when the shortcut is activated, where the application menu comprises a list of a plurality of applications. The first process is executed as a background process when one of the applications is selected on the user interface as a second process, and the second process is executed as a foreground process.03-10-2011
20110041135DATA PROCESSOR AND DATA PROCESSING METHOD - A data processing method has a device control thread for each peripheral device capable of independent operation, a CPU processing thread for each data processing task performed by a CPU, and a control thread equipped with processing parts for constructing an application. The control thread checks the output from the thread associated with each processing part, gives higher execution priority to a processing part for which output data from the preceding processing part of the application exists and which is near termination, and instructs each device control thread and CPU processing thread to execute and to perform data input/output. Each device control thread and CPU processing thread processes the data according to the instructions and sends a processing result and a notification to the control thread.02-17-2011
20120204185WORKFLOW CONTROL OF RESERVATIONS AND REGULAR JOBS USING A FLEXIBLE JOB SCHEDULER - A scheduler receives flexible reservation requests for scheduling in a computing environment comprising consumable resources. The flexible reservation request specifies a duration and at least one required resource. The consumable resources comprise machine resources and floating resources. The scheduler creates a flexible job for the flexible reservation request and places the flexible job in a prioritized job queue for scheduling, wherein the flexible job is prioritized relative to at least one regular job in the prioritized job queue. The scheduler adds a reservation set to a waiting state for the flexible reservation request. The scheduler, responsive to detecting the flexible job positioned in the prioritized job queue for scheduling next and detecting a selection of consumable resources available to match the at least one required resource for the duration, transfers the selection of consumable resources to the reservation and sets the reservation to an active state.08-09-2012
20130160017Software Mechanisms for Managing Task Scheduling on an Accelerated Processing Device (APD) - Embodiments described herein provide a method for managing task scheduling on an accelerated processing device (APD). The method includes executing a first task within the APD, monitoring for an interruption of the execution of the first task, and switching to a second task when an interruption is detected.06-20-2013
20130160018METHOD AND SYSTEM FOR THE DYNAMIC ALLOCATION OF RESOURCES BASED ON A MULTI-PHASE NEGOTIATION MECHANISM - A system and method for the dynamic allocation of resources based on a multi-phase negotiation mechanism. A resource allocation decision can be made based on an index value computed by a selection index function. A negotiation process can be performed based on a schedule, a number of resources, and a price of resources. A user requesting a resource for a low priority task can negotiate based on the schedule, the user demanding the resource for a medium priority task can negotiate based on the schedule and/or the number of resources, and finally the user requesting the resource for a high priority job can successfully negotiate based on per unit resource price. The multi-phase negotiation mechanism motivates the users to be cooperative among them and improves a cooperative behavior coefficient and an overall user satisfaction rate.06-20-2013
20110252429Opportunistic Multitasking - Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. By way of example, thread priority levels may be overlapped between the foreground and background states. In addition, system resource availability may be restricted based on whether a process is receiving user input. In some instances, an application may be suspended rather than being placed into the background state. Implementation of the disclosed services may be substantially transparent to the executing user applications and, in some cases, may be performed without the user application's explicit cooperation.10-13-2011
20090241120SYSTEM AND METHOD FOR CONTROLLING PRIORITY IN SCA MULTI-COMPONENT AND MULTI-PORT ENVIRONMENT - A system for controlling priority in a SCA-based application having a plurality of components wherein each of the components has a plurality of ports, includes: a priority component scheduler, interworking with the plurality of components wherein component priority order of the components is arranged therein; and a priority port scheduler that is provided in each of the components including the plurality of the ports which are associated with connections between the components, wherein port priority order of the ports included in each of the components is arranged therein. The priority component scheduler may be generated by using domain profiles in which component priority values of the components are set and the priority port scheduler may be generated by using domain profiles in which port priority values of the ports included in each of the components are set. Further, the domain profiles may be XML files.09-24-2009
20090241119Interrupt and Exception Handling for Multi-Streaming Digital Processors - A multi-streaming processor has a plurality of streams for streaming one or more instruction threads, a set of functional resources for processing instructions from streams, and interrupt handler logic. The logic detects and maps interrupts and exceptions to one or more specific streams. In some embodiments, one interrupt or exception may be mapped to two or more streams, and in others two or more interrupts or exceptions may be mapped to one stream. Mapping may be static and determined at processor design, programmable, with data stored and amendable, or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping. Interrupts may be external interrupts generated by devices external to the processor, software (internal) interrupts generated by active streams, or conditional interrupts based on variables. After interrupts are acknowledged, streams to which interrupts or exceptions are mapped are vectored to appropriate service routines. In a synchronous method, no vectoring occurs until all streams to which an interrupt is mapped acknowledge the interrupt.09-24-2009
20080320481Method and Apparatus for Playing Dynamic Content - A method for playing dynamic content includes: allocating and occupying playing resources for playing of dynamic contents by dynamic content priority; preempting playing resources occupied by dynamic contents of lower priorities to play back dynamic contents of higher priorities in precedence. The dynamic contents of which the playing resources are preempted can be handled as appropriate in accordance with the preset processing policy. A playing apparatus for playing dynamic content includes a content receiving module, a storage unit, a play scheduling module, a content playing module, and a user configuration module. The present invention supports automatic playing of dynamic contents by priority and in accordance with the policy preset by the user, and can be implemented simply and conveniently.12-25-2008
20090113437TRANSLATING DECLARATIVE MODELS - The present invention extends to methods, systems, and computer program products for translating declarative models. Embodiments of the present invention facilitate processing declarative models to perform various operations on applications, such as, for example, application deployment, application updates, application control such as start and stop, application monitoring by instrumenting the applications to emit events, and so on. Declarative models of applications are processed and realized onto a target environment, after which they can be executed, controlled, and monitored.04-30-2009
20090100433DISK SCHEDULING METHOD AND APPARATUS - The present invention relates to a method and apparatus for scheduling requests having priorities and deadlines for an I/O operation on a disk storage medium. Requests are normally arranged and processed in deadline order, and requests whose process times based on deadlines overlap each other are processed in priority order. Therefore, it is possible to prevent processing of any requests having relatively higher priorities from being delayed due to a process based on deadline order. Further, in order to minimize seek time, the requests may also be processed in the scanning order. Furthermore, in order to minimize a time required for performing request search and arrangement in the scanning order and the deadline order, a deadline queue where requests are arranged in deadline order and a scan order queue where requests are arranged in the scanning order may be separately prepared.04-16-2009
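A rough sketch of the ordering rule this entry describes — deadline order by default, priority order where service windows overlap. The fixed `service_time` and the overlap test are simplifying assumptions.

```python
def order_requests(requests, service_time):
    """Order I/O requests by deadline; break overlaps by priority.

    `requests` is a list of (name, deadline, priority) tuples; when
    two adjacent requests' deadline-based service windows overlap,
    the higher-priority request (larger number) is served first.
    """
    reqs = sorted(requests, key=lambda r: r[1])       # deadline order
    i = 0
    while i + 1 < len(reqs):
        a, b = reqs[i], reqs[i + 1]
        # Windows overlap when the deadlines leave no room to
        # service both requests back to back.
        if b[1] - a[1] < service_time and b[2] > a[2]:
            reqs[i], reqs[i + 1] = b, a               # priority wins
            i = max(i - 1, 0)                         # re-check neighbor
        else:
            i += 1
    return [r[0] for r in reqs]
```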
20110161971Method and Data Processing Device for Processing Requests - Disclosed are a data processing device, a method and a computer program product for processing requests in the data processing device. The data processing device includes at least one processor and at least one memory. The at least one memory includes a set of data including information for processing requests received from at least one client and computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the data processing device at least to perform: notify, prior to processing a request, a client making the request to optionally update data associated with the request; and process the request based on the updated data, if the data is updated by the client.06-30-2011
20110161970METHOD TO REDUCE QUEUE SYNCHRONIZATION OF MULTIPLE WORK ITEMS IN A SYSTEM WITH HIGH MEMORY LATENCY BETWEEN COMPUTE NODES - Disclosed are a method, a system and a computer program product of operating a data processing system that can include or be coupled to multiple processor cores. The multiple processor cores can be coupled to a memory that can include multiple priority queues associated with multiple respective priorities and store multiple work items. Work items stored in the multiple priority queues can be associated with a bit mask which is associated with a respective priority queue and can be routed to respective groups of one or more processors based on the associated bit mask. In one or more embodiments, at least two groups of processor cores can include at least one processor core that is common to both of the at least two groups of processor cores.06-30-2011
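The bit-mask routing of work items to processor core groups might be sketched like this; using group indices as mask bit positions is an assumption.

```python
def route_work_item(item_mask, core_groups):
    """Route a work item to the first core group its bit mask allows.

    `item_mask` has bit g set when the item may run on group g;
    `core_groups` maps group index -> list of core ids. Groups may
    share cores, as in the entry above. Returns the chosen group's
    cores, or None when no group is allowed.
    """
    for group, cores in core_groups.items():
        if item_mask & (1 << group):
            return cores
    return None

# Example groups: core "c1" is common to both groups.
groups = {0: ["c0", "c1"], 1: ["c1", "c2"]}
```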
20080301687SYSTEMS AND METHODS FOR ENHANCING PERFORMANCE OF A COPROCESSOR - Techniques for minimizing coprocessor “starvation,” and for effectively scheduling processing in a coprocessor for greater efficiency and power. A run list is provided allowing a coprocessor to switch from one task to the next, without waiting for CPU intervention. A method called “surface faulting” allows a coprocessor to fault at the beginning of a large task rather than somewhere in the middle of the task. DMA control instructions, namely a “fence,” a “trap” and an “enable/disable context switching,” can be inserted into a processing stream to cause a coprocessor to perform tasks that enhance coprocessor efficiency and power. These instructions can also be used to build high-level synchronization objects. Finally, a “flip” technique is described that can switch a base reference for a display from one location to another, thereby changing the entire display surface.12-04-2008
20120311596INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM - A program reception task and an advertisement reception task are set. The program reception task defines an execution content which includes an execution schedule of a reception process for program data including video data and audio data of programs, and the advertisement reception task defines an execution content including an execution schedule of a reception process for advertisement data including at least one of video data, still image data, and audio data of advertisements. Then, the program reception task and the advertisement reception task are executed based on the execution schedules set in the program reception task and the advertisement reception task, respectively, to perform reception of the program data and reception of the advertisement data from a server independently from each other.12-06-2012
20120204184SIMULATION APPARATUS, METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A simulation apparatus is disclosed, including a group switching part. The group switching part refers to a priority management table, which manages priority information of priorities to assign a CPU for multiple groups of tasks stored in a storage area, and changes the priorities of the multiple groups of tasks, when an event occurs to activate a task to be executed in verifying of software by using a simulation.08-09-2012
20100333100VIRTUAL MACHINE CONTROL DEVICE, VIRTUAL MACHINE CONTROL METHOD, AND VIRTUAL MACHINE CONTROL PROGRAM - In a case where a task execution unit (12-30-2010
20110265091SYSTEM AND METHOD FOR NORMALIZING JOB PROPERTIES - This disclosure provides a system and method for normalizing job properties. In one embodiment, a job manager is operable to identify a property of a job, with the job being associated with an operating environment. The job manager is further operable to normalize the property of the job and present the normalized property of the job to a user.10-27-2011
20110265090MULTIPLE CORE DATA PROCESSOR WITH USAGE MONITORING - A data processor with a plurality of processor cores. Accumulated usage information of each of the plurality of processor cores is stored in a storage device within the data processor, wherein the accumulated usage information is indicative of accumulated usage of each processor core of the plurality of processor cores. Accumulated usage information for a core of the plurality of processor cores is updated in response to a determined use of the core.10-27-2011
20110126204SCALABLE THREAD LOCKING WITH CUSTOMIZABLE SPINNING - Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system.05-26-2011
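The described relationship — spin count as a function of the thread's turn — can be sketched as a simple bounded function; the base and cap values below are arbitrary assumptions.

```python
def spins_before_check(turn, base_spins=64, cap=4096):
    """Number of spins before a waiting thread re-checks the lock.

    A thread's `turn` is how many threads were already waiting when
    it arrived; later arrivals spin longer between checks, so threads
    nearer the front of the wait line see the released lock sooner.
    The linear growth and the cap are illustrative choices.
    """
    return min(base_spins * (turn + 1), cap)
```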
20110154347Interrupt and Exception Handling for Multi-Streaming Digital Processors - A multi-streaming processor has a plurality of streams for streaming one or more instruction threads, a set of functional resources for processing instructions from streams, and interrupt handler logic. The logic detects and maps interrupts and exceptions to one or more specific streams. In some embodiments, one interrupt or exception may be mapped to two or more streams, and in others two or more interrupts or exceptions may be mapped to one stream. Mapping may be static and determined at processor design, programmable, with data stored and amendable, or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping. Interrupts may be external interrupts generated by devices external to the processor, software (internal) interrupts generated by active streams, or conditional interrupts based on variables. After interrupts are acknowledged, streams to which interrupts or exceptions are mapped are vectored to appropriate service routines.06-23-2011
20110055841ACCESS CONTROL APPARATUS, ACCESS CONTROL PROGRAM, AND ACCESS CONTROL METHOD - When a new program is set to start processing using a resource such as a memory, and the resource has been allocated to another program, which is currently running, an access control apparatus 03-03-2011
20100115524SYSTEM AND METHOD FOR THREAD PROCESSING ROBOT SOFTWARE COMPONENTS - An apparatus for thread processing robot software components includes a data port unit for storing input data in a buffer and then processing the data in a periodic execution mode or in a dedicated execution mode; an event port unit for processing an input event in a passive execution mode; and a method port unit for processing an input method call in the passive execution mode by calling a user-defined method corresponding to the method call. In the periodic execution mode, the data is processed by using an execution thread according to a period of a corresponding component. In the dedicated execution mode, a dedicated thread for the data is created and the data is processed by using the dedicated thread.05-06-2010
20100115522MECHANISM TO CONTROL HARDWARE MULTI-THREADED PRIORITY BY SYSTEM CALL - A method, a system and a computer program product for controlling the hardware priority of hardware threads in a data processing system. A Thread Priority Control (TPC) utility assigns a primary level and one or more secondary levels of hardware priority to a hardware thread. When a hardware thread initiates execution in the absence of a system call, the TPC utility enables execution based on the primary level. When the hardware thread initiates execution within a system call, the TPC utility dynamically adjusts execution from the primary level to the secondary level associated with the system call. The TPC utility adjusts hardware priority levels in order to: (a) raise the hardware priority of one hardware thread relative to another; (b) reduce energy consumed by the hardware thread; and (c) fulfill requirements of time critical hardware sections.05-06-2010
20110088037SINGLE-STACK REAL-TIME OPERATING SYSTEM FOR EMBEDDED SYSTEMS - A real time operating system (RTOS) for embedded controllers having limited memory includes a continuations library, a wide range of macros that hide continuation point management, nested blocking functions, and a communications stack. The RTOS executes at least a first and second task and uses a plurality of task priorities. The tasks share only a single stack. The task scheduler switches control to the highest-priority task. The continuations library provides macros to automatically manage the continuation points. The yield function sets a first continuation point in the first task and yields control to the task scheduler, whereupon the task scheduler switches to the second task and wherein at a later time the task scheduler switches control back to the first task at the first continuation point. The nested blocking function invokes other blocking functions from within its body and yields control to the task scheduler.04-14-2011
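Python generators give a compact way to sketch the continuation-point idea: each `yield` is a point where a task saves its position and returns control to a priority-based scheduler, which later resumes it exactly there. This is an analogy to, not a port of, the single-stack C macros the entry describes.

```python
def task(name, log):
    """A cooperative task as a generator; each `yield` is a
    continuation point where control returns to the scheduler."""
    log.append(f"{name}:step1")
    yield                    # continuation point 1
    log.append(f"{name}:step2")
    yield                    # continuation point 2

def scheduler(tasks):
    """Always resume the highest-priority remaining task at its
    saved continuation point; remove tasks that run to completion."""
    while tasks:
        prio, t = max(tasks, key=lambda pair: pair[0])
        try:
            next(t)          # resume at the saved continuation point
        except StopIteration:
            tasks.remove((prio, t))
```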
20110093858Semi-automated reciprocal scheduling - Schedules which include reciprocal events, such as schedules for youth hockey leagues, can be created using a system in which users can invite one another to schedule games based on information selected through an interface and reciprocal dates which are automatically identified by a suitably programmed computer. Information related to games and schedules can be stored in a database which can be accessed and modified by different users depending on their roles and the permissions associated with those roles.04-21-2011
20100100883SYSTEM AND METHOD FOR SCHEDULING TASKS IN PROCESSING FRAMES - Methods and systems are provided for allocating available service capacity to a plurality of tasks in a data processing system having a plurality of processing channels, where each processing channel is utilized in accordance with a time division multiplex processing scheme. A method can include receiving in the data processing system the plurality of tasks to be allocated to the available service capacity and determining a task from among an unassigned set of the plurality of tasks having a requirement for available service capacity which is greatest. The method can also include identifying at least one of the plurality of processing channels that has an available service capacity greater than or equal to the requirement and selectively assigning the task to the processing channel having a remaining service capacity which least exceeds the requirement.04-22-2010
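The greedy rule this entry describes (largest requirement first, placed on the channel whose remaining capacity least exceeds it) is a best-fit-decreasing scheme. A minimal sketch with made-up capacity units and task names:

```python
def assign(tasks, capacities):
    """tasks: dict name -> required capacity; capacities: per-channel capacity.
    Returns dict name -> channel index; raises ValueError if a task cannot fit."""
    remaining = list(capacities)
    assignment = {}
    for name in sorted(tasks, key=tasks.get, reverse=True):  # greatest requirement first
        need = tasks[name]
        fits = [(remaining[i] - need, i)
                for i in range(len(remaining)) if remaining[i] >= need]
        if not fits:
            raise ValueError(f"no channel can hold {name}")
        _, best = min(fits)        # remaining capacity least exceeds the need
        remaining[best] -= need
        assignment[name] = best
    return assignment

alloc = assign({"a": 5, "b": 3, "c": 2}, [6, 5])
```

Task "a" (need 5) lands on the 5-unit channel rather than the 6-unit one, leaving the larger channel free for the remaining tasks.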
20110185363TASK SWITCHING APPARATUS, METHOD AND PROGRAM - A method of assigning task management blocks for first type tasks to time slot information on a one-by-one basis, assigning a plurality of task management blocks for second type tasks to time slot information, selecting a task management block according to a priority classification when switching to the time slot of the time slot information, and switching to the time slot except the time slot information. Additionally a task switching apparatus selects the task management block assigned to the time slot and executes the task.07-28-2011
20120210327Method for Packet Flow Control Using Credit Parameters with a Plurality of Limits - The present invention relates to a processor and a method for processing a data packet, the method including steps of decreasing a value of a first credit parameter when the data packet is admitted to a processor at least partly based on the value of the first credit parameter and a first limit of the first credit parameter, and increasing the value of the first credit parameter, in dependence on a data storage level in a buffer in which the data packet is stored before being admitted to the processor, the value of the first credit parameter not being increased, so as to become larger than a second limit of the first credit parameter, when the buffer is empty.08-16-2012
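A rough sketch of the two-limit credit behavior described above, assuming illustrative class and parameter names (the patent does not specify an API): admitting a packet spends credit, and replenishment is capped at the lower second limit while the buffer is empty.

```python
class CreditGate:
    def __init__(self, first_limit, second_limit):
        self.credit = first_limit         # first limit: the normal credit cap
        self.first_limit = first_limit
        self.second_limit = second_limit  # lower cap applied when the buffer is empty

    def try_admit(self, cost):
        if self.credit >= cost:
            self.credit -= cost           # admitting a packet decreases the credit
            return True
        return False

    def replenish(self, amount, buffer_empty):
        cap = self.second_limit if buffer_empty else self.first_limit
        self.credit = min(self.credit + amount, cap)

gate = CreditGate(10, 4)
admitted = gate.try_admit(6)              # spends credit: balance drops to 4
gate.replenish(10, buffer_empty=True)     # capped at the second limit (4) while empty
```

Capping the balance while the buffer is empty keeps an idle flow from banking unbounded credit and later flooding the processor.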
20090300633Method and System for Scheduling and Controlling Backups in a Computer System - A method, system, and article to manage a backup procedure of one or more backup tasks in a computing system. A backup window within which the backup tasks are to be executed is defined, and the backup tasks within the backup window are scheduled. The process of the backup procedure is controlled during execution. The process of controlling the backup procedure includes calculating the prospective duration of all actually running and all future backup tasks, and cancelling low priority backup tasks in case a higher priority backup task is projected to continue beyond an end time.12-03-2009
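The control step this entry describes can be sketched directly: project each running backup's finish time and, if a higher-priority backup is projected to run past the window's end time, cancel lower-priority backups to free capacity. Field names and the exact cancellation rule are assumptions:

```python
def control(now, t_end, tasks):
    """tasks: list of dicts with 'name', 'priority' (higher = more important)
    and 'remaining' seconds of prospective duration. Returns cancelled names."""
    overrun = [t for t in tasks if now + t["remaining"] > t_end]
    if not overrun:
        return []
    top = max(t["priority"] for t in overrun)
    # cancel every backup strictly less important than an overrunning backup
    return [t["name"] for t in tasks if t["priority"] < top]

cancelled = control(0, 100, [
    {"name": "db", "priority": 2, "remaining": 120},
    {"name": "logs", "priority": 1, "remaining": 10},
])
```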
20090293061Structural Power Reduction in Multithreaded Processor - A circuit arrangement and method utilize a plurality of execution units having different power and performance characteristics and capabilities within a multithreaded processor core, and selectively route instructions having different performance requirements to different execution units based upon those performance requirements. As such, instructions that have high performance requirements, such as instructions associated with primary tasks or time sensitive tasks, can be routed to a higher performance execution unit to maximize performance when executing those instructions, while instructions that have low performance requirements, such as instructions associated with background tasks or non-time sensitive tasks, can be routed to a reduced power execution unit to reduce the power consumption (and associated heat generation) associated with executing those instructions.11-26-2009
20100017806FINE GRAIN OS SCHEDULING - The invention relates to a method of enabling multiple operating systems to run concurrently on the same computer, the method comprising: scheduling a plurality of tasks for execution by at least first and second operating systems, wherein each task has one of a plurality of priorities; setting the priority of each operating system in accordance with the priority of the next task scheduled for execution by the respective operating system; and providing a common program arranged to compare the priorities of all operating systems and to pass control to the operating system having the highest priority. Accordingly, the invention resides in the idea that different operating systems can be run more efficiently on a single CPU by changing the priority of each operating system over time. In other words, each operating system has a flexible priority.01-21-2010
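The flexible-priority idea above reduces to a small selection rule: each guest OS's priority is set to the priority of its next scheduled task, and the common program hands the CPU to the OS with the highest priority. A minimal sketch with illustrative OS names and structures:

```python
def pick_os(run_queues):
    """run_queues: dict os_name -> list of task priorities (higher = more
    urgent), index 0 being the next scheduled task. Returns the OS to run."""
    # each OS's priority becomes the priority of its next task
    os_priority = {os: q[0] for os, q in run_queues.items() if q}
    # the common program compares priorities and passes control to the highest
    return max(os_priority, key=os_priority.get)

chosen = pick_os({"rtos": [7, 3], "linux": [5, 5, 1]})
```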
20080235693Methods and apparatus for window-based fair priority scheduling - A system provides a task scheduler to define a priority queue with at least one window and a queue-window key. Each window is an ordered collection of tasks in a task pool of the priority queue and is identified by the queue-window key. The task scheduler sets a task-window key equal to a user-window key when the user-window key is greater than the minimum queue-window key. The task scheduler can further set the task-window key equal to the minimum queue-window key when the user-window key is less than the minimum queue-window key. A maximum task limit per user for each window and a priority increment for the user-window key are further applied to ensure fair scheduling.09-25-2008
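The window-key rule this entry describes can be sketched as follows: the task-window key is the user-window key clamped below by the queue's minimum window key, and a per-user task cap pushes overflow tasks into later windows via the priority increment. The numeric choices below are assumptions:

```python
def place_task(user_key, min_queue_key, counts, user, max_per_user=2, increment=1):
    key = max(user_key, min_queue_key)   # user key, unless below the queue minimum
    while counts.get((key, user), 0) >= max_per_user:
        key += increment                 # this user's window is full: advance
    counts[(key, user)] = counts.get((key, user), 0) + 1
    return key

counts = {}
k1 = place_task(0, 0, counts, "alice")
k2 = place_task(0, 0, counts, "alice")
k3 = place_task(0, 0, counts, "alice")   # window 0 full for alice -> window 1
kb = place_task(0, 5, counts, "bob")     # clamped up to the queue minimum
```

The cap is what makes the scheduling fair: no single user can fill an entire window and starve everyone else at the same priority.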
20090172686METHOD FOR MANAGING THREAD GROUP OF PROCESS - A method for managing a thread group of a process is provided. First, a group scheduling module is used to receive an execution permission request from a first thread. When detecting that a second thread in the thread group is under execution, the group scheduling module stops the first thread, and does not assign the execution permission to the first thread until the second thread is completed; only then does the first thread retrieve a required shared resource and execute the computations. Then, the first thread releases the shared resource when completing the computations. Then, the group scheduling module retrieves a third thread with the highest priority in a waiting queue and repeats the above process until all the threads are completed. Through this method, when one thread executes a call back function, the other threads are prevented from taking this chance to use the resource required by the thread.07-02-2009
20120042318AUTOMATIC PLANNING OF SERVICE REQUESTS - A method, system, and computer usable program product for automatic planning of service requests are provided in the illustrative embodiments. At an application executing in a computer, information is located in a ticket corresponding to the service request, the information being usable for categorizing the ticket. Using the information, a set of records is selected from a ticket history repository, the set of records including data representing a set of tickets processed before the ticket. A second ticket in the set of tickets includes information corresponding to the information in the ticket being processed. A category of the second ticket is selected as a suggested category for the ticket. A priority associated with the suggested category is identified. The suggested category and the priority are recommended for the ticket.02-16-2012
20090172685SYSTEM AND METHOD FOR IMPROVED SCHEDULING OF CONTENT TRANSCODING - A method and system for improved scheduling of content transcoding is disclosed. Embodiments are capable of generating and assigning a first transcoding priority value to a piece of content, where the first transcoding priority value is based upon information about the content and at least one semi-static constraint. A second transcoding priority value may also be generated and assigned based upon the first transcoding priority value and at least one dynamic constraint. Transcoding of the content may be scheduled using the first and/or second transcoding priority values, thereby providing scheduling of content transcoding which takes into account longer-term knowledge and/or shorter-term knowledge for better assessment of the demand for transcoding of a given piece of content. Accordingly, embodiments enable transcoding of content with reduced resource load, reduced transcoding cost, and improved quality of service.07-02-2009
20090172683MULTICORE INTERFACE WITH DYNAMIC TASK MANAGEMENT CAPABILITY AND TASK LOADING AND OFFLOADING METHOD THEREOF - A multicore interface with dynamic task management capability and a task loading and offloading method thereof are provided. The method disposes a communication interface between a micro processor unit (MPU) and a digital signal processor (DSP) and dynamically manages tasks assigned by the MPU to the DSP. First, an idle processing unit of the DSP is searched, and then one of a plurality of threads of the task is assigned to the processing unit. Finally, the processing unit is activated to execute the thread. Accordingly, the communication efficiency of the multicore processor can be effectively increased while the hardware cost can be saved.07-02-2009
20090172684SMALL LOW POWER EMBEDDED SYSTEM AND PREEMPTION AVOIDANCE METHOD THEREOF - Provided are a small low power embedded system and a preemption avoidance method thereof. A method for avoiding preemption in a small low power embedded system includes fetching and running a periodic atomic task from a periodic run queue, reducing any one of the periodic atomic tasks or performing the change of a task after changing a field of the run periodic atomic task into a run standby state, according to a result value of the run of the periodic atomic task, fetching a sporadic atomic task from a sporadic run queue, and acquiring a system clock, running the fetched sporadic atomic task according to run time in the worst condition, and reducing any one of the sporadic atomic tasks or performing the change of an event after changing a field of the run sporadic atomic task into a run standby state, according to a result value of the run of the sporadic atomic task.07-02-2009
20090172681SYSTEMS, METHODS AND APPARATUSES FOR CLOCK ENABLE (CKE) COORDINATION - Embodiments of the invention are generally directed to systems, methods, and apparatuses for clock enable (CKE) coordination. In some embodiments, a memory controller includes logic to predict whether a scheduled request will be issued to a rank. The memory controller may also include logic to predict whether a scheduled request will not be issued to the rank. In some embodiments, the clock enable (CKE) is asserted or de-asserted to a rank based, at least in part, on the predictions. Other embodiments are described and claimed.07-02-2009
20090031319TASK SCHEDULING METHOD AND APPARATUS - A method of scheduling execution of a plurality of tasks by a processor, the processor having a processor memory, the processor being arranged to load into the processor memory, during execution of a current task, data for a task that is scheduled for execution after the processor has completed the current task, the method comprising the steps of scheduling a next task for execution by the processor after the processor has completed a current task, and determining whether there is a high priority task to be executed by the processor, if there is a high priority task to be executed by the processor: determining whether the processor has begun loading the data for the next task into the processor memory, and if the processor has not begun loading the data for the next task into the processor memory, scheduling the high priority task, instead of the next task, for execution by the processor after the processor has completed the current task.01-29-2009
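The decision rule in this abstract is compact enough to sketch directly: a pending high-priority task replaces the already-scheduled next task only if the processor has not yet begun loading the next task's data into processor memory. Names are illustrative:

```python
def choose_next(next_task, loading_started, high_prio_task=None):
    """Return the task to run once the current task completes."""
    if high_prio_task is not None and not loading_started:
        return high_prio_task     # safe to swap: no prefetched data is wasted
    return next_task              # keep the next task (swap too late, or no urgent work)
```

The point of the check is to avoid throwing away a prefetch already in flight, which could cost more than just letting the scheduled task run first.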
20090031317SCHEDULING THREADS IN MULTI-CORE SYSTEMS - Scheduling of threads in a multi-core system is performed using per-processor queues for each core to hold threads with fixed affinity for each core. Cores are configured to pick the highest priority thread among the global run queue, which holds threads without affinity, and their respective per-processor queue. To select between two threads with same priority on both queues, the threads are assigned sequence numbers based on their time of arrival. The sequence numbers may be weighted for either queue to prioritize one over the other.01-29-2009
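The selection rule above can be sketched per core: consider the global run queue (threads without affinity) and the core's own per-processor queue, pick the highest priority, and break priority ties with (optionally weighted) arrival sequence numbers. The tuple encodings and weighting are assumptions:

```python
def pick_thread(global_q, per_core_q, global_weight=1.0):
    """Queues hold (priority, seq) pairs; lower priority value = more urgent,
    lower seq = earlier arrival. global_weight < 1 favors the global queue on
    priority ties. Returns ('global'|'local', thread) or None."""
    candidates = []
    if global_q:
        p, s = min(global_q)
        candidates.append((p, s * global_weight, "global", (p, s)))
    if per_core_q:
        p, s = min(per_core_q)
        candidates.append((p, s, "local", (p, s)))
    if not candidates:
        return None
    _, _, source, thread = min(candidates)
    return (source, thread)
```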
20120210325Method And Apparatus Of Smart Power Management For Mobile Communication Terminals Using Power Thresholds - A method is provided for use in a mobile communication terminal configured to support a plurality of applications, wherein each application is executed by performing one or more tasks. The method includes, in response to a scheduling request from an application, obtaining an indication of power supply condition at a requested run-time of at least one of the tasks. The method further includes obtaining a prediction of a rate of energy usage by the task at the requested run-time, estimating, from the predicted rate of energy usage, a total amount of energy needed to complete the task, and making a scheduling decision for the task. The scheduling decision comprises making a selection from a group of two or more alternative dispositions for the task. The selection is made according to a criterion that relates the run-time power-supply condition to the predicted rate of energy usage by the task and to the estimate of total energy needed to complete the task.08-16-2012
20120210326Constrained Execution of Background Application Code on Mobile Devices - The subject disclosure is directed towards a technology by which background application code (e.g., provided by third-party developers) runs on a mobile device in a way that is constrained with respect to resource usage. A resource manager processes a resource reservation request for background code, to determine whether the requested resources meet constraint criteria for that type of background code. If the criteria are met and the resources are available, the resources are reserved, whereby the background code is ensured priority access to its reserved resources. As a result, a properly coded background application that executes within its constraints will not experience glitches or other problems (e.g., unexpected termination) and thereby provide a good user experience.08-16-2012
20120047510IMAGE FORMING DEVICE - An image forming device includes a priority task startup detection unit to detect that startup of a priority task is completed, a job acceptance unit configured to change a status to a job acceptable status and accept a job when it is detected that the startup of the priority task is completed, a first startup control unit to start the non-priority task when a predetermined time has elapsed since it is detected that the startup of the priority task is completed, and a second startup control unit to start the non-priority task if a job is accepted from the time it is detected that the startup of the priority task is completed to when the predetermined time has elapsed and if all processing of all jobs, including the accepted job, is terminated.02-23-2012
20120047509Systems and Methods for Improving Performance of Computer Systems - Priorities of an application and/or processes associated with an application executing on a computer is determined according to user-specific usage patterns of the application and stored for subsequent use, analysis and distribution.02-23-2012
20120005683Data Processing Workload Control - Data processing workload control in a data center is provided, where the data center includes computers whose operations consume power and a workload controller composed of automated computing machinery that controls the overall data processing workload in the data center. The data processing workload is composed of a plurality of specific data processing jobs, including scheduling, by the workload controller in dependence upon power performance information, the data processing jobs for execution upon the computers in the data center, the power performance information including power consumption at a plurality of power-conserving states for each computer in the data center that executes data processing jobs and dispatching by the workload controller the data processing jobs as scheduled for execution on computers in the data center.01-05-2012
20120005684PRIORITY ROLLBACK PROTOCOL - Mechanisms for enforcing limits to resource access are provided. In some embodiments, synchronization tools are used to reduce the worst case execution time of selected processing sequences. In one example, instructions from a first processing sequence are rolled back using rollback information stored in a data structure if a higher priority processing sequence seeks access to the resource.01-05-2012
20090055828Profile engine system and method - A system for profile record generation of input records, the system comprising: a record processor which converts the input records into data records suitable for the profile record generation; and a statistics engine for the generation of profile records based on the data records. Furthermore, system optimization can be obtained by use of a task control method that sub-divides the aggregations of profile records into units of work that can be individually performed, the method comprising: partitioning based on a pre-determined partitioning key associated with entities to be profiled, wherein the association between the partitioning key and the entities being profiled is varied in order to optimize the profiling performance.02-26-2009
20110167427COMPUTING SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM PREVENTING STARVATION - A computing system, method and computer-readable medium is provided. To prevent a starvation phenomenon from occurring in a priority-based task scheduling, a plurality of tasks may be divided into a priority-based group and other groups. The groups to which the tasks belong may be changed.07-07-2011
20120011515Resource Consumption Template Processing Model - In one embodiment, a method determines a task to execute in a computer processing system. A resource consumption template from a plurality of resource consumption templates is determined for the task. The plurality of resource consumption templates have different priorities. A computer processing system determines resources for the task based on the determined resource consumption template. Also, the computer processing system processes the task using the allocated resources. The processing of the task is prioritized based on the priority of the resource consumption template.01-12-2012
20120023501HIGHLY SCALABLE SLA-AWARE SCHEDULING FOR CLOUD SERVICES - An efficient cost-based scheduling method called incremental cost-based scheduling, iCBS, maps each job, based on its arrival time and SLA function, to a fixed point in the dual space of linear functions. Due to this mapping, in the dual space, jobs will not change their locations over time. Instead, at the time of selecting the next job with the highest priority to execute, a line with appropriate angle in the query space is used to locate the current job with the highest CBS score in logarithmic time. Because only those points that are located on the convex hull in the dual space can be chosen, a dynamic convex hull maintaining method incrementally maintains the job with the highest CBS score over time.01-26-2012
20120023500DYNAMICALLY ADJUSTING PRIORITY - A method to dynamically adjust priority may include providing a boost, by a processing device, to an element relative to at least one other element in response to a boost feature associated with the element being activated. Providing the boost to the element may include providing a predetermined longer duration of use of a shared use resource to the element relative to the at least one other element based on a boost setting associated with the element. The boost results in adjusting a priority of the element by allowing the element to complete a task in a shorter time period.01-26-2012
20120023502ESTABLISHING THREAD PRIORITY IN A PROCESSOR OR THE LIKE - In a multi-threaded processor, one or more variables are set up in memory (e.g., a register) to indicate which of a plurality of executable threads has a higher priority. Once the variable is set, several embodiments are presented for granting higher priority processing to the designated thread. For example, more instructions from the higher priority thread may be executed as compared to the lower priority thread. Also, a higher priority thread may be given comparatively more access to a given resource, such as memory or a bus.01-26-2012
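The register-controlled favoritism this entry describes can be sketched as a weighted issue policy: a variable names the favored thread, and the core issues proportionally more instructions from it per round. The 3:1 slot ratio and thread names are assumptions:

```python
def issue_schedule(threads, favored, rounds, boost=3):
    """threads: list of thread ids. `favored` mirrors the priority variable set
    in memory. Returns the per-slot instruction issue order."""
    slots = []
    for _ in range(rounds):
        for t in threads:
            # the favored thread gets `boost` issue slots per round, others get one
            slots.extend([t] * (boost if t == favored else 1))
    return slots

order = issue_schedule(["t0", "t1"], favored="t1", rounds=1)
```

The same pattern covers the resource-access variant in the abstract: weight the favored thread's share of memory or bus arbitration slots instead of issue slots.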
20120023499DETERMINING WHETHER A GIVEN DIAGRAM IS A CONCEPTUAL MODEL - Systems and methods for scheduling events in a virtualized computing environment are provided. In one embodiment, the method comprises scheduling one or more events in a first event queue implemented in a computing environment, in response to determining that the number of events in the first event queue is greater than a first threshold value, wherein the first event queue comprises a first set of events received for purpose of scheduling, wherein said first set of events remain unscheduled; mapping the one or more events in the first event queue to one or more server resources in a virtualized computing environment; receiving a second set of events included in a second event queue, wherein one or more events in the second set of events are defined as having a higher priority than one or more events in the first event queue that have or have not yet been scheduled.01-26-2012
20090165009OPTIMAL SCHEDULING FOR CAD ARCHITECTURE - A system and method for optimal scheduling of image processing jobs is provided. Requests for processing originate either from a DICOM service that receives images sent to the system, and forwards those for batch processing, or from an interactive workstation application, which requests interactive CAD processing. Each request is placed onto a queue which is sorted first by priority, and second by the time that the request is added to the queue. Requests for interactive processing from a workstation application are added to the queue with the highest priority, whereas requests for batch processing are added at a low priority. The algorithm service takes the top-most item from the queue and passes the request to the algorithms which it hosts, and when that processing is completed, it sends a message to one or more output queues.06-25-2009
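The queueing policy this entry describes (ordered first by priority, second by arrival time, with interactive CAD requests highest and batch DICOM requests lowest) maps naturally onto a heap with an arrival counter as tie-breaker. The priority values are assumptions:

```python
import heapq
import itertools

INTERACTIVE, BATCH = 0, 10        # lower value = served first; values are illustrative
_counter = itertools.count()      # arrival-order tie-breaker

def submit(queue, priority, job):
    heapq.heappush(queue, (priority, next(_counter), job))

def take(queue):
    return heapq.heappop(queue)[2]    # top-most item: highest priority, oldest first

q = []
submit(q, BATCH, "study1")            # batch request from the DICOM service
submit(q, INTERACTIVE, "cad-view")    # interactive request from a workstation
submit(q, BATCH, "study2")
first = take(q)
```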
20120060164METHOD FOR REGISTERING AND SCHEDULING EXECUTION DEMANDS - A method for registering and scheduling execution demands comprises steps of: providing an execution demand register having a plurality of execution demand registering flags describing whether an identical number of jobs are registered execution demands or not and priorities thereof; providing a lookup device, and using all possible values of the execution demand registering flags as addresses to respectively store thereinside a job sequence permutation, initial position and registering number corresponding to the job sequence permutation; when a job has to be executed successively, setting the value of the execution demand registering flag corresponding to the job; and in scheduling, using the value of the execution demand registering flag of the updated execution demand register as a lookup address to acquire the initial position and registering number from the lookup device, and finding out the job sequence permutation according to the acquired initial position and registering number to complete scheduling.03-08-2012
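The register-and-lookup scheme above can be sketched concretely: each job's execution demand is one flag bit, and a precomputed table maps every possible register value straight to a job sequence. The 3-job size and bit-order priority are assumptions, and the abstract's "initial position and registering number" are collapsed here into a directly stored sequence per register value:

```python
N_JOBS = 3

def build_lookup():
    table = {}
    for flags in range(1 << N_JOBS):
        # job i is demanded if bit i is set; lower index = higher priority
        table[flags] = [i for i in range(N_JOBS) if flags & (1 << i)]
    return table

LOOKUP = build_lookup()

def schedule(register):
    return LOOKUP[register]       # scheduling is a single table read, no sorting

order = schedule(0b101)           # jobs 0 and 2 have registered execution demands
```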
20130014117ENERGY-AWARE COMPUTING ENVIRONMENT SCHEDULER - A method includes receiving a process request, identifying a current state of a device in which the process request is to be executed, calculating a power consumption associated with an execution of the process request, and assigning an urgency for the process request, where the urgency corresponds to a time-variant parameter to indicate a measure of necessity for the execution of the process request. The method further includes determining whether the execution of the process request can be delayed to a future time or not based on the current state, the power consumption, and the urgency, and causing the execution of the process request, or causing a delay of the execution of the process request to the future time, based on a result of the determining.01-10-2013
20120159500VALIDATION OF PRIORITY QUEUE PROCESSING - A method for validating outsourced processing of a priority queue includes configuring a verifier for independent, single-pass processing of priority queue operations that include insertion operations and extraction operations and priorities associated with each operation. The verifier may be configured to validate N operations using a memory space having a size that is proportional to the square root of N using an algorithm to buffer the operations as a series of R epochs. Extractions associated with each individual epoch may be monitored using arrays Y and Z. Insertions for epoch k may be monitored using arrays X and Z. The processing of the priority queue operations may be verified based on the equality or inequality of the arrays X, Y, and Z. Hashed values for the arrays may be used to test their equality to conserve storage requirements.06-21-2012
20120159501SYNCHRONIZATION SCHEDULING APPARATUS AND METHOD IN REAL-TIME MULTI-CORE SYSTEM - A synchronization scheduling apparatus and method in a real-time multi-core system are described. The synchronization scheduling apparatus may include a plurality of cores, each having at least one wait queue, a storage unit to store information regarding a first core receiving a wake-up signal in a previous cycle among the plurality of cores, and a scheduling processor to schedule tasks stored in the at least one wait queue, based on the information regarding the first core.06-21-2012
20120159499RESOURCE OPTIMIZATION - A method may include storing information associated with a number of tasks for processing a media file, where the information includes resource information identifying resources scheduled to fulfill the tasks. The method may also include identifying a first task associated with processing the media file, identifying a first resource scheduled to fulfill the first task, and determining whether the first resource is available to fulfill the first task. The method may further include determining, when the first resource is not available, whether an alternate resource is available to fulfill the first task, and scheduling, when an alternate resource is available, the alternate resource to fulfill the first task.06-21-2012
20120159498FAST AND LINEARIZABLE CONCURRENT PRIORITY QUEUE VIA DYNAMIC AGGREGATION OF OPERATIONS - Embodiments of the invention improve parallel performance in multi-threaded applications by serializing concurrent priority queue operations to improve throughput. An embodiment uses a synchronization protocol and aggregation technique that enables a single thread to handle multiple operations in a cache-friendly fashion while threads awaiting the completion of those operations spin-wait on a local stack variable, i.e., the thread continues to poll the stack variable until it has been set or cleared appropriately, rather than relying on an interrupt notification. A technique for an enqueue/dequeue (push/pop) optimization uses re-ordering of aggregated operations to enable the execution of two operations for the price of one in some cases. Other embodiments are described and claimed.06-21-2012
20120079491THREAD CRITICALITY PREDICTOR - Each thread of a multi-threaded application is assigned a ranking, referred to as thread criticality, based on the amount of time the thread is expected to take to complete one or more operations associated with the thread. More resources are assigned to threads having a higher thread criticality, in order to increase the rate at which the thread completes its operations. Thread criticality is determined using a perceptron model, whereby the thread criticality for a thread is a weighted sum of a set of data processing device performance characteristics associated with the thread, such as the number of instruction cache misses and data cache misses experienced by the thread. The weights of the perceptron model can be repeatedly adjusted over time based on repeated measurements that indicate the relative speed with which each thread is completing its operations.03-29-2012
20100095299MIXED WORKLOAD SCHEDULER - A mixed workload scheduler and operating method efficiently handle diverse queries ranging from short less-intensive queries to long resource-intensive queries. A scheduler is configured for scheduling mixed workloads and comprises an analyzer and a schedule controller. The analyzer detects execution time and wait time of a plurality of queries and balances average stretch and maximum stretch of scheduled queries wherein query stretch is defined as a ratio of a sum of wait time and execution time to execution time of a query. The schedule controller modifies scheduling of queries according to service level differentiation.04-15-2010
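The stretch metric defined in this entry is the ratio of (wait time + execution time) to execution time; the scheduler balances the average and maximum stretch across queries. A minimal sketch with illustrative times:

```python
def stretch(wait, exec_time):
    """Query stretch = (wait + execution) / execution; 1.0 means no waiting."""
    return (wait + exec_time) / exec_time

def stretch_stats(queries):
    """queries: list of (wait, exec_time) pairs. Returns (average, maximum)."""
    values = [stretch(w, e) for w, e in queries]
    return sum(values) / len(values), max(values)

avg, worst = stretch_stats([(1, 1), (10, 2), (0, 5)])
```

Balancing both statistics matters because minimizing average stretch alone lets one long query starve (a huge maximum), while minimizing the maximum alone penalizes the many short queries.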
20110099552SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING PROCESSOR ENTITY TASKS IN A MULTIPLE-PROCESSING ENTITY SYSTEM - A system, computer program and a method, the method for scheduling processor entity tasks in a multiple-processing entity system includes: receiving task data structures from multiple processing entities; wherein a task data structure represents a task to be executed by a processing entity; and scheduling an execution of the tasks by a multiple purpose entity.04-28-2011
20090133026METHOD AND SYSTEM TO IDENTIFY CONFLICTS IN SCHEDULING DATA CENTER CHANGES TO ASSETS - An information technology services management product is provided with a change management component that identifies conflicts based on a wide range of information. When a change on a configuration item is scheduled, the change management component identifies, for example, affected business applications, affected service level agreements, resource availability, change schedule, workflow, resource dependencies, and the like. The change management component warns the user if a conflict is found. The user does not have to consult multiple sources of information and make a manual determination concerning conflicts. The change management component may also suggest a best time to schedule a change request based on the information available. The change management component provides a constrained interface such that the user cannot schedule a change request that violates any of the above requirements. The change management component also applies these requirements when changing an already scheduled change request.05-21-2009
20120124590MINIMIZING AIRFLOW USING PREFERENTIAL MEMORY ALLOCATION - One embodiment provides a method of controlling memory in a computer system. Airflow is generated through an enclosure at a variable airflow rate to cool a plurality of memory banks at different locations within the enclosure. The airflow rate is controlled as a function of the temperature of one or more of the memory banks. Memory workload is selectively allocated to the memory banks according to expected differences in airflow, such as differences in airflow temperature, at each of the different locations.05-17-2012
20120124589MATRIX ALGORITHM FOR SCHEDULING OPERATIONS - The present invention provides a method and apparatus for implementing a matrix algorithm for scheduling instructions. One embodiment of the method includes selecting a first subset of instructions so that each instruction in the first subset is the earliest in program order of instructions associated with a corresponding one of a plurality of sub-matrices of a matrix that has a plurality of matrix entries. Each matrix entry indicates the program order of one pair of instructions that are eligible for execution. This embodiment also includes selecting, from the first subset of instructions, the instruction that is earliest in program order based on matrix entries associated with the first subset of instructions.05-17-2012
20120124591 SCHEDULER AND RESOURCE MANAGER FOR COPROCESSOR-BASED HETEROGENEOUS CLUSTERS - A system and method for scheduling client-server applications onto heterogeneous clusters includes storing at least one client request of at least one application in a pending request list on a computer readable storage medium. A priority metric is computed for each application, where the computed priority metric is applied to each client request belonging to that application. The priority metric is determined based on estimated performance of the client request and load on the pending request list. The at least one client request of the at least one application is scheduled based on the priority metric onto one or more heterogeneous resources.05-17-2012
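A minimal sketch of scheduling by such a priority metric, assuming (hypothetically) that the metric combines an estimated runtime with the load on the pending request list; the field names and the exact formula are invented for illustration.

```python
def priority_metric(est_runtime, queue_load):
    """Hypothetical metric: a shorter estimated runtime and a lighter
    pending-list load yield a smaller key, i.e. a higher priority."""
    return est_runtime * (1 + queue_load)

def schedule(requests, queue_load):
    """Order client requests by the priority metric of their application."""
    keyed = sorted(requests, key=lambda r: priority_metric(r["est"], queue_load))
    return [r["app"] for r in keyed]

pending = [{"app": "A", "est": 5}, {"app": "B", "est": 1}, {"app": "C", "est": 3}]
assert schedule(pending, queue_load=2) == ["B", "C", "A"]
```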
20110126205SYSTEM AND A METHOD FOR PROCESSING SYSTEM CALLS IN A COMPUTERIZED SYSTEM THAT IMPLEMENTS A KERNEL - A computer implementing a kernel, the computer including: (a) a processor that is configured to run processes in kernel mode and to run other processes not in kernel mode, wherein the processor is configured to run in the kernel mode the following processes: (i) selecting a rule out of a group of rules that is stored in a kernel memory of the computer, in response to system call information that pertains to a system call made to a kernel entity of the kernel; (ii) assigning a priority to the system call in response to the rule selected; and (iii) selectively enabling transmission of the system call to a hardware device of the computerized system, in response to the priority assigned to the system call; (b) a memory that includes the kernel memory; and (c) the hardware device that is configured to execute the system call, wherein execution of the system call by the hardware device results in modifying a state of the hardware device.05-26-2011
20120222035Priority Inheritance in Multithreaded Systems - A method includes determining that a first task having a first priority is blocked from execution at a multithreaded processor by a second task having a second priority that is lower than the first priority. A temporary priority of the second task is set to be equal to an elevated priority, such that in response to the second task being preempted from execution by another task, the second task is rescheduled for execution based on the elevated priority identified by the temporary priority.08-30-2012
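The core of priority inheritance can be sketched in a few lines; the `Task` class and field names here are invented for illustration, with higher numbers meaning higher priority.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.temp_priority = priority   # effective priority seen by the scheduler

def inherit_priority(blocked, holder):
    """If a higher-priority task is blocked by a lower-priority holder,
    temporarily raise the holder to the blocked task's priority so the
    holder is rescheduled at the elevated priority if preempted."""
    if blocked.base_priority > holder.temp_priority:
        holder.temp_priority = blocked.base_priority

high, low = Task("high", 10), Task("low", 2)
inherit_priority(high, low)
assert low.temp_priority == 10 and low.base_priority == 2
```

When the holder releases the blocking resource, a real implementation would restore `temp_priority` to `base_priority`.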
20120131588APPARATUS AND METHOD FOR DATA PROCESSING IN HETEROGENEOUS MULTI-PROCESSOR ENVIRONMENT - An apparatus for data processing in a heterogeneous multi-processor environment is provided. The apparatus includes an analysis unit configured to analyze 1) operations to be run in connection with data processing and 2) the types and number of processors available for the data processing, a partition unit configured to dynamically partition data into a plurality of data regions having different sizes based on the analyzed operations and operation-specific processor priority information, which is stored in advance of running the operations, and a scheduling unit configured to perform scheduling by allocating the operations to be run in the data regions among the available processors.05-24-2012
20120167108Model for Hosting and Invoking Applications on Virtual Machines in a Distributed Computing Environment - The described method/system/apparatus uses intelligence to better allocate tasks/work items among the processors and computers in the cloud. A priority score may be calculated for each task/work unit for each specific processor. The priority score may indicate how well suited a task/work item is for a processor. The result is that tasks/work items may be more efficiently executed by being assigned to processors in the cloud that are better prepared to execute the tasks/work items.06-28-2012
20120216207DYNAMIC TECHNIQUES FOR OPTIMIZING SOFT REAL-TIME TASK PERFORMANCE IN VIRTUAL MACHINE - Methods to dynamically improve soft real-time task performance in virtualized computing environments under the management of an enhanced hypervisor comprising a credit scheduler. The enhanced hypervisor analyzes the on-going performance of the domains of interest and of the virtualized data-processing system. Based on the performance metrics disclosed herein, some of the governing parameters of the credit scheduler are adjusted. Adjustments are typically performed cyclically, wherein the performance metrics of an execution cycle are analyzed and adjustments may be applied in a later execution cycle. In alternative embodiments, some of the analysis and tuning functions are in a separate application that resides outside the hypervisor. The performance metrics disclosed herein include: a “total-time” metric; a “timeslice” metric; a number of “latency” metrics; and a “count” metric. In contrast to prior art, the present invention enables on-going monitoring of a virtualized data-processing system accompanied by dynamic adjustments based on objective metrics.08-23-2012
20120216206METHODS AND SYSTEMS FOR MANAGING DATA - Systems and methods for managing data, such as metadata or index databases. In one exemplary method, a notification that an existing file has been modified or that a new file has been created is received by an indexing software component, which then, in response to the notification performs an indexing operation, where the notification is either not based solely on time or user input or the notification includes an identifier that identifies the file. Other methods in data processing systems and machine readable media are also described.08-23-2012
20100205607METHOD AND SYSTEM FOR SCHEDULING TASKS IN A MULTI PROCESSOR COMPUTING SYSTEM - A multi processor computing system managing tasks based on the health index of the plurality of processors and the priority of tasks to be scheduled. The method comprise receiving the tasks to be scheduled on the computing system; preparing a queue of the tasks based on a scheduling algorithm; computing a health index value for each processor of the computing system; and scheduling the tasks on processors based on the health index value of the processors. A task from a processor with a lower health index may be moved to an available processor with a higher health index.08-12-2010
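One way to read the described scheduling is: order processors by health index, order tasks by priority, and place the highest-priority work on the healthiest processors. The sketch below is a hypothetical rendering with invented data shapes.

```python
def schedule_by_health(tasks, health):
    """Assign queued tasks (highest priority first) to processors ordered
    by descending health index; wraps around when tasks outnumber CPUs."""
    order = sorted(health, key=health.get, reverse=True)      # healthiest first
    queue = sorted(tasks, key=lambda t: t["priority"], reverse=True)
    return {t["name"]: order[i % len(order)] for i, t in enumerate(queue)}

health = {"p0": 0.4, "p1": 0.9, "p2": 0.7}
tasks = [{"name": "t1", "priority": 3}, {"name": "t2", "priority": 9}]
# The higher-priority task lands on the processor with the best health index.
assert schedule_by_health(tasks, health) == {"t2": "p1", "t1": "p2"}
```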
20120137302PRIORITY INFORMATION GENERATING UNIT AND INFORMATION PROCESSING APPARATUS - In an information processing device 05-31-2012
20120137301RESOURCE UTILIZATION MANAGEMENT FOR A COMMUNICATION DEVICE - A technique for resource utilization management for a communication device includes provisioning 05-31-2012
20110185362System and method for integrating software schedulers and hardware interupts for a deterministic system - The problem which is being addressed by this invention is the lack of determinism in mass market operating systems. This invention provides a mechanism for mass market operating systems running on mass market hardware to be extended to create a true deterministic responsive environment.07-28-2011
20120216208In-Car-Use Multi-Application Execution Device - An in-car-use multi-application execution device is provided that ensures safety while maintaining convenience by securing operation of a plurality of applications and suppressing occurrence of a termination process within a limited processing capacity without degrading a real-time feature. The in-car-use multi-application execution device dynamically predicts a processing time for each application, and schedules each application on the basis of the predicted processing time. If it is determined that an application failing to complete a process in a prescribed cycle exists as a result of the scheduling, a process is executed that terminates the application or degrades the function of the application on the basis of a preset priority order.08-23-2012
20100175067METHOD FOR PROCESSING APPLICATION COMMANDS FROM PHYSICAL CHANNELS USING A PORTABLE ELECTRONIC DEVICE AND CORRESPONDING DEVICE AND SYSTEM - The invention relates to a method for processing at least two application commands from at least two physical communication channels respectively using a portable electronic device. The method includes receiving each application command from one of the physical communication channels, determining a priority level associated with each application command, comparing priority levels and identifying the application command with the highest priority among the application commands and processing of the application command with highest priority. The invention also relates to the portable electronic device and an electronic system including a host device cooperating with such a portable electronic device.07-08-2010
20100275211Method and apparatus for scheduling the issue of instructions in a multithreaded microprocessor - There is provided a method to dynamically determine which instructions from a plurality of available instructions to issue in each clock cycle in a multithreaded processor capable of issuing a plurality of instructions in each clock cycle, comprising the steps of: determining a highest priority instruction from the plurality of available instructions; determining the compatibility of the highest priority instruction with each of the remaining available instructions; and issuing the highest priority instruction together with other instructions compatible with the highest priority instruction in the same clock cycle; wherein the highest priority instruction cannot be a speculative instruction. The effect of this is that speculative instructions are only ever issued together with at least one non-speculative instruction.10-28-2010
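A sketch of the issue rule: choose the highest-priority non-speculative instruction, then fill the issue group with compatible instructions. "Compatibility" is modeled here, purely as an assumption, as using disjoint functional units.

```python
def issue_group(available):
    """Build one cycle's issue group: the lead instruction is the
    highest-priority non-speculative one; others join only if their
    functional unit is still free (a stand-in for compatibility)."""
    nonspec = [i for i in available if not i["speculative"]]
    lead = max(nonspec, key=lambda i: i["priority"])
    group, units = [lead], {lead["unit"]}
    for instr in available:
        if instr is not lead and instr["unit"] not in units:
            group.append(instr)
            units.add(instr["unit"])
    return group

window = [
    {"name": "ld",  "unit": "mem",    "priority": 5, "speculative": False},
    {"name": "br",  "unit": "branch", "priority": 9, "speculative": True},
    {"name": "add", "unit": "alu",    "priority": 7, "speculative": False},
]
# The speculative branch cannot lead the group, despite its higher priority.
assert issue_group(window)[0]["name"] == "add"
```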
20100011363CONTROLLING A COMPUTER SYSTEM HAVING A PROCESSOR INCLUDING A PLURALITY OF CORES - Controlling a computer system having at least one processor including a plurality of cores includes establishing a core max value that sets a maximum number of the plurality of cores operating at a predetermined time period based on an operating condition, determining a core run value that is associated with a number of the plurality of cores of the at least one processor operating at the predetermined time period, and stopping at least one of the plurality of cores in the event the core run value exceeds the core max value at the predetermined time period.01-14-2010
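The core-max check reduces to comparing a run count against a limit and picking cores to stop; which cores are stopped is an implementation choice, and the sketch below (highest-numbered cores first) is only one hypothetical policy.

```python
def enforce_core_max(running_cores, core_max):
    """Return the cores to stop when the run count exceeds the core max
    value; here the highest-numbered extra cores are halted first."""
    if len(running_cores) <= core_max:
        return []
    return sorted(running_cores)[core_max:]

assert enforce_core_max([0, 1, 2, 3], core_max=2) == [2, 3]
assert enforce_core_max([0, 1], core_max=4) == []
```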
20120260256WORKLOAD MANAGEMENT OF A CONCURRENTLY ACCESSED DATABASE SERVER - Several methods and a system for workload management of a concurrently accessed database server are disclosed. In one embodiment, a method includes applying a weight to a service class. The method also includes generating a priority of the service class. In addition, the method includes selecting a group based on the weight of the service class. The method further includes determining a priority level based on the priority of the service class. The method also includes generating a characteristic of a shadow process through the weight and the priority of the service class. In addition, the method includes executing a query.10-11-2012
20120084784SYSTEM AND METHOD FOR MANAGING MEMORY RESOURCE(S) OF A WIRELESS HANDHELD COMPUTING DEVICE - A method and system for managing one or more memory resources of a wireless handheld computing device is described. The method and system may include receiving a request to initiate a web browser module and receiving input for a web address. The method and system may also include receiving a file corresponding to the web address and reviewing one or more objects present within the file. The method and system may determine if an object already exists in the one or more memory resources. And if the object does not exist in the one or more memory resources, then the method and system may calculate a priority for the object. The priority of the object may then be assigned and stored. It may also be determined if the current object will exceed the threshold of the one or more memory resources, and discarding other objects with lower priority as needed.04-05-2012
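The eviction logic this abstract outlines can be sketched as a priority-aware cache insert; the dict-based representation and the rule "never evict an object of equal or higher priority" are assumptions made for illustration.

```python
def cache_object(cache, obj, capacity):
    """Insert obj (a dict with name, size, priority) into cache, discarding
    lower-priority objects while the capacity threshold would be exceeded."""
    if obj["name"] in cache:
        return cache                    # object already resident
    def used():
        return sum(o["size"] for o in cache.values())
    while cache and used() + obj["size"] > capacity:
        victim = min(cache.values(), key=lambda o: o["priority"])
        if victim["priority"] >= obj["priority"]:
            return cache                # nothing with lower priority to discard
        del cache[victim["name"]]
    if used() + obj["size"] <= capacity:
        cache[obj["name"]] = obj
    return cache

store = {}
cache_object(store, {"name": "a", "size": 6, "priority": 1}, capacity=10)
cache_object(store, {"name": "b", "size": 6, "priority": 5}, capacity=10)
assert "b" in store and "a" not in store   # low-priority "a" was discarded
```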
20120260257SCHEDULING THREADS IN MULTIPROCESSOR COMPUTER - A computer program product for scheduling threads in a multiprocessor computer comprises computer program instructions configured to select a thread in a ready queue to be dispatched to a processor and determine whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, the computer program instructions are configured to select a processor, set a current processor priority register of the selected processor to least favored, and dispatch the thread from the ready queue to the selected processor.10-11-2012
20130174172DATACENTER DATA TRANSFER MANAGEMENT SYSTEM AND METHOD - An exemplary data transfer manager includes a datacenter configured to communicate over at least one link and a scheduler that is configured to schedule a plurality of jobs for communicating data from the datacenter. The scheduler determines a minimum bandwidth requirement of each job and determines a maximum bandwidth limit of each job. The scheduler determines a flex parameter of each job. The flex parameter indicates how much a data transfer rate can vary between adjacent data transfer periods for the job.07-04-2013
20130174173DATA PROCESSOR AND DATA PROCESSING METHOD - A data processing method uses a device control thread for each peripheral device capable of independent operation, a CPU processing thread for each data processing task performed by a CPU, and a control thread equipped with processing parts that constitute an application. The control thread checks the output from the thread associated with each processing part, gives higher priority to processing parts whose input data (the output of the preceding processing part in the application) is available and that are near termination, and instructs each device control thread and CPU processing thread to execute and to perform data input/output. Each device control thread and CPU processing thread processes data according to these instructions and sends a processing result and a notification to the control thread.07-04-2013
20110004883Method and System for Job Scheduling - Logical processors/hardware contexts are assigned to different jobs/threads in a multithreaded/multicore environment. There are provided a number of different sorting algorithms, from which one is periodically selected on the basis of whether the present algorithm is giving satisfactory results or not. The period is preferably a super-context interval. The different sorting algorithms preferably include a software/OS priority. A second sorting algorithm may include sorting according to hardware performance measurements. The judgement of satisfactory performance is preferably based on the difference between a desired number of time quantum attributed per super-context switch interval to each job/thread and a real number of time quantum attributed per super-context switch interval to each job/thread.01-06-2011
20110004882Method and system for scheduling a thread in a multiprocessor system - A method for scheduling a thread on a plurality of processors that includes obtaining a first state of a first processor in the plurality of processors and a second state of a second processor in the plurality of processors, wherein the thread is last executed on the first processor, and wherein the first state of the first processor includes the state of a cache of the first processor, obtaining a first estimated instruction rate to execute the thread on the first processor using an estimated instruction rate function and the first state, obtaining a first estimated global throughput for executing the thread on the first processor using the first estimated instruction rate and the second state, obtaining a second estimated global throughput for executing the thread on the second processor using the second state, comparing the first estimated global throughput with the second estimated global throughput to obtain a comparison result, and executing the thread, based on the comparison result, on one selected from a group consisting of the first processor and the second processor, wherein the thread performs an operation on one of the plurality of processors.01-06-2011
20110029980LOW DEPTH PROGRAMMABLE PRIORITY ENCODERS - An apparatus having a plurality of first circuits, second circuits, third circuits and fourth circuits is disclosed. The first circuits may be configured to generate a plurality of first signals in response to (i) a priority signal and (ii) a request signal. The second circuits may be configured to generate a plurality of second signals in response to the first signals. The third circuits may be configured to generate a plurality of enable signals in response to the second signals. The fourth circuits may be configured to generate collectively an output signal in response to (i) the enable signals and (ii) the request signal. A combination of the first circuits, the second circuits, the third circuits and the fourth circuits generally establishes a programmable priority encoder. The second signals may be generated independent of the enable signals.02-03-2011
20110041134PLUGGABLE COMPONENT INTERFACE - A system, method, and computer program product are provided for initiating an application in communication with a database management system via a bridge. Application memory is allocated to the application from a shared memory space within the database management system.02-17-2011
20120324463System for Managing Data Collection Processes - A system and process for managing data collection processes is disclosed. An apparatus that incorporates teachings of the present disclosure can include a data collection system having a controller element that assigns to each of the processes a query interval according to the priority level of the data collection process for requesting use of processing resources, receives one or more requests from the processes, once per respective query interval, for use of at least a portion of available processing resources, and releases at least a portion of the available processing resources to a requesting one of the processes when use of the available processing resources exceeds a utilization threshold. Additional embodiments are disclosed.12-20-2012
20120324462VIRTUAL FLOW PIPELINING PROCESSING ARCHITECTURE - A computer system for embodying a virtual flow pipeline programmable processing architecture for a plurality of wireless protocol applications is disclosed. The computer system includes a plurality of functional units for executing a plurality of tasks, a synchronous task queue and a plurality of asynchronous task queues for linking the plurality of tasks to be executed by the functional units in a priority order, and a virtual flow pipeline controller. The virtual flow pipeline controller includes a processing engine for processing a plurality of commands; a scheduler, communicatively coupled to the processing engine, for selecting a next task for processing at run time for each of the plurality of functional units; a processing engine controller, communicatively coupled to the processing engine, for providing commands and arguments to the processing engine and monitoring command completion; and a task flow manager, communicatively coupled to the processing engine controller, for activating the next task for processing. Also disclosed is a computer-implemented method for executing a plurality of wireless protocol applications embodying a virtual flow pipeline programmable processing architecture in a computer system.12-20-2012
20120324461Effective Management Of Blocked-Tasks In Preemptible Read-Copy Update - A technique for managing read-copy update readers that have been preempted while executing in a read-copy update read-side critical section. A single blocked-tasks list is used to track preempted reader tasks that are blocking an asynchronous grace period, preempted reader tasks that are blocking an expedited grace period, and preempted reader tasks that require priority boosting. In example embodiments, a first pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking a current asynchronous grace period. A second pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking an expedited grace period. A third pointer may be used to segregate the blocked-tasks list into preempted reader tasks that do and do not require priority boosting.12-20-2012
20110239221Method and Apparatus for Assigning Thread Priority in a Processor or the Like - In a multi-threaded processor, thread priority variables are set up in memory. The actual assignment of thread priority is based on the expiration of a thread precedence counter. To further augment, the effectiveness of the thread precedence counters, starting counters are associated with each thread that serve as a multiplier for the value to be used in the thread precedence counter. The value in the starting counters are manipulated so as to prevent one thread from getting undue priority to the resources of the multi-threaded processor.09-29-2011
20110239219PROTECTING SHARED RESOURCES USING SHARED MEMORY AND SOCKETS - Shared memory and sockets are used to protect shared resources in an environment where multiple operating systems execute concurrently on the same hardware. Rather than using spinlocks for serializing access to the shared resources, when a thread is unable to acquire a shared resource because that resource is already held by another thread, the thread that was unable to acquire the resource creates a socket with which it will wait to be notified that the shared resource has been released. The sockets may be network sockets or in-memory sockets that are accessible across the multiple operating systems; if sockets are not available in a particular implementation, communication technology that provides analogous services between operating systems may be used instead. In an optional aspect, fault tolerance is provided to address socket failures, in which case one or more threads may fall back (at least temporarily) to using spinlocks. As another option, a locking service may execute on each operating system to provide a programming interface through which threads can invoke operations for holding and releasing the lock.09-29-2011
20120331474REAL TIME SYSTEM TASK CONFIGURATION OPTIMIZATION SYSTEM FOR MULTI-CORE PROCESSORS, AND METHOD AND PROGRAM - Disclosed is an automatic optimization system capable of searching for an allocation with good performance from among a plurality of task allocations that can be scheduled in a target system configured with a plurality of periodic tasks. A task allocation optimization system for a multi-core processor including a plurality of cores calculates a response time for each of a plurality of tasks that are core-allocation decision targets, and outputs the accumulated response times as an evaluation function value, an index representing the quality of a task allocation. Task allocations yielding good evaluation function values are searched for based on this value, and the candidate with the best evaluation function value among the searched task allocation candidates is retained.12-27-2012
20100229173Managing Latency Introduced by Virtualization - A component manages and minimizes latency introduced by virtualization. The virtualization component determines that a currently scheduled guest process has executed functionality responsive to which the virtualization component is to execute a virtualization based operation, wherein the virtualization based operation is one that is not visible to the guest operating system. The virtualization component causes the guest operating system to de-schedule the currently scheduled guest process and schedule at least one separate guest process. The virtualization component then executes the virtualization based operation concurrently with the execution of the at least one separate guest process. Responsive to completing the execution of the virtualization based operation, the virtualization component causes the guest operating system to re-schedule the de-scheduled guest process.09-09-2010
20110258632Dynamically Migrating Channels - In one embodiment, the present invention includes a method of determining a relative priority between a first agent and a second agent, and assigning the first agent to a first channel and the second agent to a second channel according to the relative priority. Depending on the currently programmed status of the channels, information stored in at least one of the channels may be dynamically migrated to another channel based on the assignments. Other embodiments are described and claimed.10-20-2011
20120089984Performance Monitor Design for Instruction Profiling Using Shared Counters - Counter registers are shared among multiple threads executing on multiple processor cores. An event within the processor core is selected. A multiplexer in front of each of a number of counters is configured to route the event to a counter. A number of counters are assigned for the event to each of a plurality of threads running for a plurality of applications on a plurality of processor cores, wherein each of the counters includes a thread identifier in the interrupt thread identification field and a processor identifier in the processor identification field. The number of counters is configured to have a number of interrupt thread identification fields and a number of processor identification fields to identify a thread that will receive a number of interrupts.04-12-2012
20110276975AUDIO DEVICE - An audio device is provided that is arranged for communication of data and signalling with a controller, signalling from the device to the controller being made in discrete time slots, the device comprising: a plurality of nodes, each assigned a priority value and each having one or more unsolicited response sources capable of generating an unsolicited response for transmission to the controller, wherein unsolicited responses generated from a particular node are assigned the priority value of that node; and unsolicited response management means operable to hold unsolicited responses generated by the plurality of nodes that are awaiting transmission to the controller, wherein when two or more unsolicited responses are awaiting transmission to the controller in the unsolicited response management means, the device is arranged to transmit the unsolicited response with the highest assigned priority value first, in the next free time slot.11-10-2011
20110276974SCHEDULING FOR MULTIPLE MEMORY CONTROLLERS - Some embodiments of a multi processor system implement a virtual-time-based quality-of-service scheduling technique. In at least one embodiment of the invention, a method includes scheduling a memory request to a memory from a memory request queue in response to expiration of a virtual finish time of the memory request. The virtual finish time is based on a share of system memory bandwidth associated with the memory request. The method includes scheduling the memory request to the memory from the memory request queue before the expiration of the virtual finish time of the memory request if a virtual finish time of each other memory request in the memory request queue has not expired and based on at least one other scheduling rule.11-10-2011
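A virtual-finish-time scheduler of this kind can be sketched simply; the formula below (virtual time advanced by request size over bandwidth share) and the fallback of picking the earliest virtual finish time are hypothetical simplifications of the patent's rules.

```python
def virtual_finish_time(arrival_vt, size, bandwidth_share):
    """Virtual finish time advances more slowly for requesters holding a
    larger share of system memory bandwidth (assumed formulation)."""
    return arrival_vt + size / bandwidth_share

def pick_request(queue, now):
    """Prefer a request whose virtual finish time has expired; otherwise
    fall back to the earliest virtual finish time in the queue."""
    expired = [r for r in queue if r["vft"] <= now]
    pool = expired or queue
    return min(pool, key=lambda r: r["vft"])

assert virtual_finish_time(0, 8, 2) == 4.0
queue = [{"id": 1, "vft": 5.0}, {"id": 2, "vft": 3.0}]
assert pick_request(queue, now=4.0)["id"] == 2   # only request 2 has expired
```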
20110276973METHOD AND APPARATUS FOR SCHEDULING FOR MULTIPLE MEMORY CONTROLLERS - In at least one embodiment, a method includes locally scheduling a memory request requested by a thread of a plurality of threads executing on at least one processor. The memory request is locally scheduled according to a quality-of-service priority of the thread. The quality-of-service priority of the thread is based on a quality of service indicator for the thread and system-wide memory bandwidth usage information for the thread. In at least one embodiment, the method includes determining the system-wide memory bandwidth usage information for the thread based on local memory bandwidth usage information associated with the thread periodically collected from a plurality of memory controllers during a timeframe. In at least one embodiment, the method includes at each mini-timeframe of the timeframe accumulating the system-wide memory bandwidth usage information for the thread and updating the quality-of-service priority based on the accumulated system-wide memory bandwidth usage information for the thread.11-10-2011
20110276972MEMORY-CONTROLLER-PARALLELISM-AWARE SCHEDULING FOR MULTIPLE MEMORY CONTROLLERS - Some embodiments of a processing system implement a memory-controller-parallelism-aware scheduling technique. In at least one embodiment of the invention, a method of operating a processing system includes scheduling a memory request requested by a thread of a plurality of threads executing on at least one processor according to thread priority information associated with the plurality of threads. The thread priority information is based on a maximum of a plurality of local memory bandwidth usage indicators for each thread of the plurality of threads. Each of the plurality of local memory bandwidth usage indicators for each thread corresponds to a respective memory controller of a plurality of memory controllers.11-10-2011
20120331473ELECTRONIC DEVICE AND TASK MANAGING METHOD - A task managing method is configured to manage tasks processed by an electronic device. The electronic device includes a central processing unit (CPU) capable of processing a plurality of the tasks at one time. The task managing method includes the steps of: detecting whether a predetermined status occurs; analyzing a current utilization rate of the CPU; determining whether the current utilization rate is greater than or equal to a predetermined utilization rate; and reducing some tasks being processed by the CPU to keep the CPU working normally, if the current utilization rate is greater than or equal to a predetermined utilization rate.12-27-2012
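The steps above reduce to a threshold check followed by shedding low-priority work; the sketch below assumes, hypothetically, that each task carries a known CPU load and that the lowest-priority tasks are dropped first.

```python
def manage_tasks(tasks, utilization, threshold=0.9):
    """If CPU utilization reaches the threshold, drop the lowest-priority
    tasks until the projected utilization falls back below it."""
    tasks = sorted(tasks, key=lambda t: t["priority"], reverse=True)
    while utilization >= threshold and len(tasks) > 1:
        dropped = tasks.pop()           # lowest-priority task at the tail
        utilization -= dropped["load"]
    return tasks, utilization

running = [
    {"name": "a", "priority": 9, "load": 0.2},
    {"name": "b", "priority": 1, "load": 0.3},
    {"name": "c", "priority": 5, "load": 0.4},
]
kept, util = manage_tasks(running, utilization=0.95)
assert [t["name"] for t in kept] == ["a", "c"] and util < 0.9
```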
20100229174Synchronizing Resources in a Computer System - Synchronizing processes in a computer system includes creating a predictability model for a process. The predictability model establishes a predicted time slot for a resource that will be needed by the process. The method further requires establishing a predictive request for the resource at the predicted time slot. The predictive request establishes a place holder associated with the process. In addition, the method requires accessing another resource needed by the process for a period of time before the predicted time slot, submitting a request for the resource at the predicted time slot, and processing the request for the process at the resource.09-09-2010
20110321054SYSTEMS AND METHODS FOR MANAGED SERVICE DELIVERY IN 4G WIRELESS NETWORKS - Systems and methods for managed service delivery at the edge in 4G wireless networks for: dynamic QoS (Quality of Service) provisioning and prioritization of sessions based on the task (current, future) of the workflow instance; predicting the current and future network requirements based on the current and future tasks of all business process sessions and prepare session QoS accordingly; providing an audit trail of business process execution; and reporting on business process execution.12-29-2011
20110321053MULTIPLE LEVEL LINKED LRU PRIORITY - A method that includes providing LRU selection logic which controllably pass requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to LRU selection logic at the first level, when it is determined that the request is active, determining whether the request is a LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is a LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource.12-29-2011
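The two-level arbitration can be sketched as follows, with an invented encoding: each request carries an `active` flag and a timestamp, and a lower timestamp stands in for "least recently used".

```python
def select_lru(request_groups):
    """Two-level LRU arbitration: pick the LRU active request within each
    group (first level), then the LRU among the group winners (second
    level). Returns None if no request is active."""
    winners = []
    for group in request_groups:
        active = [r for r in group if r["active"]]
        if active:
            winners.append(min(active, key=lambda r: r["ts"]))
    return min(winners, key=lambda r: r["ts"]) if winners else None

groups = [
    [{"id": "a", "active": True, "ts": 5}, {"id": "b", "active": False, "ts": 1}],
    [{"id": "c", "active": True, "ts": 3}],
]
# "b" is older but inactive, so the winners are "a" and "c"; "c" is the LRU.
assert select_lru(groups)["id"] == "c"
```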
20110321052MULTI-PRIORITY COMMAND PROCESSING AMONG MICROCONTROLLERS - A method, system and computer program product for serially transmitting processor commands of different execution priority. A front-end processor, for example, serially receives processor commands. A low-priority queue coupled to the front-end processor stores low-priority commands, and a high-priority queue coupled to the front-end processor stores high-priority commands. A controller enables transmission of commands from either the low-priority queue or the high-priority queue for execution.12-29-2011
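The dual-queue arrangement in this abstract (a controller that drains a high-priority queue ahead of a low-priority one) can be sketched as follows; all names are illustrative, not taken from the patent:

```python
from collections import deque

class DualPriorityController:
    """Sketch: commands are stored in separate high- and low-priority
    queues; the controller always transmits from the high-priority queue
    first, falling back to the low-priority queue when it is empty."""
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def submit(self, cmd, high_priority=False):
        (self.high if high_priority else self.low).append(cmd)

    def next_command(self):
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

Within each queue, commands keep their serial arrival order; priority only decides which queue is served first.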
20120102497Mobile Computing Device Activity Manager - A system and a method are disclosed for an activity manager providing a centralized component for allocating resources of a mobile computing device among various activities. An activity represents work performed using computing device resources, such as processor time, memory, storage device space or network connections. An application or system service requests generation of an activity by the activity manager, causing the activity manager to associate a priority level with the activity request and identify resources used by the activity. Based on the priority level, resources used and current resource availability of the mobile computing device, the activity manager determines when the activity is allocated mobile computing device resources. Using the priority level allows the activity manager to optimize performance of certain activities, such as activities receiving data from a user.04-26-2012
20120291038METHOD FOR REDUCING INTER-PROCESS COMMUNICATION LATENCY - A method for handling a system call in an operating system executed by a processor is disclosed. The method comprises the steps of receiving the system call to a called process from a calling process; if the system call is a synchronous system call and if a priority of the calling process is higher than a priority of the called process, increasing the priority of the called process to be at least the priority of the calling process; and switching context to the called process.11-15-2012
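The boost step this abstract describes is essentially priority inheritance applied to synchronous IPC. A minimal sketch, assuming dict-based process records and "higher number = higher priority" (both are illustrative conventions, not from the patent):

```python
def handle_sync_call(caller, callee):
    """Sketch of the priority-inheritance step on a synchronous system
    call: raise the callee to at least the caller's priority so the
    caller is not delayed behind work of lower priority."""
    if caller["priority"] > callee["priority"]:
        # Remember the original priority so it can be restored after
        # the call completes (restoration is not shown here).
        callee["saved_priority"] = callee["priority"]
        callee["priority"] = caller["priority"]
    return callee  # the kernel would now context-switch to the callee
```

The point of the boost is that the scheduler will then pick the callee immediately, shortening the round-trip latency of the synchronous call.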
20100199284INFORMATION PROCESSING APPARATUS, SELF-TESTING METHOD, AND STORAGE MEDIUM - An information processing apparatus includes: a storage unit; testing units; read units that respectively read priority information, class information, and progress information from the storage unit; and an assignment unit that assigns an unexecuted testing process to a testing unit according to the read information, and that rewrites the progress information according to assignment of the unexecuted testing process. The testing units execute testing processes of the information processing apparatus. The priority information indicates a priority defined according to dependency among the testing processes in executing the testing processes. The class information associates a class with each testing process and indicates a range of the testing unit(s) to execute the associated testing process. The progress information indicates which testing process is uncompleted.08-05-2010
20100199283DATA PROCESSING UNIT - When a CPU is processing a first task by using an accelerator for use in image processing, if a request for allocating the accelerator to a process of a second task is issued, the CPU sets an interruption flag when the process of the second task is prioritized over a process of the first task, and the accelerator is allowed to be used for the process of the second task when a state in which the interruption flag is set is detected at a timing predetermined in accordance with a process stage of the accelerator for the first task. Since the timing of detecting the set interruption flag is determined in accordance with a progress state of the process of the task to be interrupted, task switching can be made at a timing of reducing overhead for save and return for the process of the task to be interrupted.08-05-2010
20100186016DYNAMIC PROCESS PRIORITY DETECTION AND MITIGATION - Described herein are techniques for dynamically monitoring and rebalancing priority levels of processes running on a computing node. Runaway processes and starved processes can be proactively detected and prevented, thereby making such a node perform significantly better and more responsively than otherwise.07-22-2010
20120144397INFORMATION PROCESSING APPARATUS, METHOD, AND RECORDING MEDIUM - An information processing apparatus includes, a storage unit that stores an image to be transmitted, an update-frequency setter that sets, for respective sections set in the image to be transmitted, update frequencies of images stored for the sections in a predetermined period of time, an association-degree setter that sets association degrees to indicate degrees of association between the sections based on the update frequencies, a priority setter that identifies the section on which an operation is performed and sets a higher priority for the identified section and the section having a highest degree of association with the identified section than priorities for other sections, and a transmitter that transmits the image, stored by the storage unit, in sequence with the images stored for the sections whose set priority is higher first.06-07-2012
20130019247Method for using a temporary object handle - A method is provided for using a temporary object handle. The method performed at a resource manager includes: receiving an open temporary handle request from an application for a resource object, wherein a temporary handle can be asynchronously invalidated by the resource manager at any time; and creating a handle control block at the resource manager for the object, including an indication that the handle is a temporary handle. The method then includes: responsive to receiving a request from an application to use a handle, which has been invalidated by the resource manager, sending a response to the application that the handle is invalidated.01-17-2013
20110161969Consolidating CPU - Cache - Memory Access Usage Metrics - A computer system is provided with a processing chip having one or more processor cores, with the processing chip in communication with an operating system having kernel space and user space. Each processor core has multiple core threads to share resources of the core, with each thread managed by the operating system to function as an independent logical processor within the core. A logical extended map of the processor core is created and supported, with the map including each of the core threads indicating usage of the operating system, including user space and kernel space, and cache, memory, and non-memory. An operating system scheduling manager is provided to schedule a routine on the processor core by allocating the routine to different core threads based upon thread availability as demonstrated in the map, and thread priority.06-30-2011
20080235698METHOD AND APPARATUS FOR ASSIGNING CANDIDATE PROCESSING NODES IN A STREAM-ORIENTED COMPUTER SYSTEM - A method of choosing jobs to run in a stream based distributed computer system includes determining jobs to be run in a distributed stream-oriented system by deciding a priority threshold above which jobs will be accepted, below which jobs will be rejected. Overall importance is maximized relative to the priority threshold based on importance values assigned to all jobs. System constraints are applied to ensure jobs meet set criteria.09-25-2008
20080235697Job scheduler, job scheduling method, and job control program storage medium - To provide a job scheduler, a job scheduling method, and a job control program that are capable of, even with an incapable CPU not equipped with a real-time OS, meeting basic real-time property that is required in a system. The job scheduler is a job scheduler 09-25-2008
20080235696ACCESS CONTROL APPARATUS AND ACCESS CONTROL METHOD - The disclosed access control apparatus and method controls an I/O device to perform processing of access requests in a predetermined order including inputting access requests from multiple tasks to cause the I/O device to perform file processing, storing and managing information about file priorities, obtaining a file priority corresponding to an access request, managing a queue having multiple queues for which the processing priorities corresponding to the file priorities are set and causing the access request to be stored in any of the queues corresponding to the file priority, and obtaining the access requests stored in the queues in an order based on the processing priorities set for the queues and sends the access requests to the I/O device.09-25-2008
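The queue structure of 20080235696 (per-priority queues keyed by file priority, drained in processing-priority order) can be sketched as below; the class, the default-to-lowest-level rule, and all names are illustrative assumptions:

```python
class FilePriorityDispatcher:
    """Sketch: access requests are stored in the queue matching the
    target file's priority and dispatched to the I/O device highest
    processing priority first (level 0 = highest here)."""
    def __init__(self, num_levels=3):
        self.queues = [[] for _ in range(num_levels)]
        self.file_priority = {}   # filename -> priority level

    def set_file_priority(self, filename, level):
        self.file_priority[filename] = level

    def submit(self, filename, request):
        # Files without a managed priority fall to the lowest level
        # (an assumption of this sketch, not stated in the abstract).
        level = self.file_priority.get(filename, len(self.queues) - 1)
        self.queues[level].append(request)

    def dispatch(self):
        for q in self.queues:       # highest processing priority first
            if q:
                return q.pop(0)     # FIFO within a level
        return None
```

Requests for the same file priority stay in arrival order; the level ordering alone determines which queue feeds the I/O device next.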
20080235695RESOURCE ALLOCATION SYSTEM FOR JOBS, RESOURCE ALLOCATION METHOD AND RESOURCE ALLOCATION PROGRAM FOR JOBS - A resource allocation system for jobs includes: a timer for notifying switch of priority jobs in a priority period based on a predetermined processor priority allocation time of each job; a dispatcher for taking out a head process from a ready queue which is a queue of a process corresponding to a job selected as a priority job and being executable by an information processing system, for each job, based on the notification, and for allocating it to an instruction execution unit; and the instruction execution unit for executing an instruction of an executing process which is an allocated process.09-25-2008
20080235694Method of Launching Low-Priority Tasks - A driver is provided to manage launching of tasks at different levels of priority and within the parameters of the firmware interface. The driver includes two anchors for managing the tasks, a dispatcher and an agent. The dispatcher operates at a medium priority level and manages communication from a remote administrator. The agent functions to receive communications from the dispatcher by way of a shared data structure and to launch lower priority level tasks in response to the communication. The shared data structure stores communications received from the dispatcher. Upon placing the communication in the shared data structure, the dispatcher sends a signal to the agent indicating that a communication is in the data structure for reading by the agent. Following reading of the communication in the data structure, the agent launches the lower priority level task and sends a signal to the data structure indicating the status of the task. Accordingly, a higher level task maintains its level of operation and spawns lower level tasks through the dispatcher in conjunction with the agent.09-25-2008
20110246999METHOD AND APPARATUS FOR ASSIGNING CANDIDATE PROCESSING NODES IN A STREAM-ORIENTED COMPUTER SYSTEM - A method of choosing jobs to run in a stream based distributed computer system includes determining jobs to be run in a distributed stream-oriented system by deciding a priority threshold above which jobs will be accepted, below which jobs will be rejected. Overall importance is maximized relative to the priority threshold based on importance values assigned to all jobs. System constraints are applied to ensure jobs meet set criteria.10-06-2011
20110246998METHOD FOR REORGANIZING TASKS FOR OPTIMIZATION OF RESOURCES - A method of reorganizing a plurality of tasks for optimization of resources and execution time in an environment is described. In one embodiment, the method includes mapping of each task to obtain a qualitative and quantitative assessment of each functional element and variable within the time frame for execution of each task, representation of data obtained from the mapping in terms of a matrix of dimensions N×N, wherein N represents the total number of tasks, and reorganizing the tasks in accordance with the represented data in the matrix for the execution, wherein reorganizing the tasks provides for both static and dynamic methodologies. It is advantageous that the present invention determines the optimal number of resources required to achieve a practical overall task completion time and can be adaptable to non-computer applications.10-06-2011
20110246997ROUTING AND DELIVERY OF DATA FOR ELECTRONIC DESIGN AUTOMATION WORKLOADS IN GEOGRAPHICALLY DISTRIBUTED CLOUDS - Electronic design automation (EDA) libraries are delivered using a geographically distributed private cloud including EDA design centers and EDA library stores. EDA projects associated with an EDA library are determined by matching information describing the EDA library with information describing the projects. A set of design centers hosting the projects is determined. A data delivery model is determined for transmitting the EDA library to the design centers. The EDA library is scheduled for delivery to the design centers based on a deadline associated with a project stage that requires the EDA library. Network links with specialized hardware for transmitting data are determined in the private cloud by measuring their deterioration in performance on increase of data transmission load. These links are used for delivering EDA libraries expected to be used urgently for a stage of an EDA project.10-06-2011
20110246996DYNAMIC PRIORITY QUEUING - Techniques are provided for dynamically re-ordering operation requests that have previously been submitted to a queue management unit. After the queue management unit has placed multiple requests in a queue to be executed in an order that is based on priorities that were assigned to the operations, the entity that requested the operations (the “requester”) sends one or more priority-change messages. The one or more priority-change messages include requests to perform operations that have already been queued. For at least one of the operations, the priority assigned to the operation in the subsequent request is different from the priority that was assigned to the same operation when that operation was initially queued for execution. Based on the change in priority, the operation whose priority has changed is placed at a different location in the queue, relative to the other operations in the queue that were requested by the same requester.10-06-2011
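Re-positioning an already-queued operation when a priority-change message arrives can be implemented with the standard lazy-invalidation heap idiom; the sketch below is one common way to do it (the data structures and names are assumptions of this sketch, not the patent's design):

```python
import heapq
import itertools

class DynamicPriorityQueue:
    """Sketch: a priority queue whose entries can be re-prioritized after
    queuing. A priority-change message is modeled as re-submitting the
    same operation; the old heap entry is marked stale rather than removed."""
    def __init__(self):
        self.heap = []
        self.entries = {}               # op id -> live heap entry
        self.counter = itertools.count()  # ties broken by arrival order

    def submit(self, op, priority):
        if op in self.entries:          # priority-change message
            self.entries[op][-1] = False  # invalidate the old entry
        entry = [priority, next(self.counter), op, True]
        self.entries[op] = entry
        heapq.heappush(self.heap, entry)

    def pop(self):
        # Lower number = higher priority in this sketch.
        while self.heap:
            priority, _, op, live = heapq.heappop(self.heap)
            if live:
                del self.entries[op]
                return op
        return None
```

Stale entries are skipped on pop, so a re-prioritized operation effectively moves to its new location in the queue without an O(n) removal.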
20110246995CACHE-AWARE THREAD SCHEDULING IN MULTI-THREADED SYSTEMS - The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization for a second thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.10-06-2011
20110265089Executing Processes Using A Profile - A management entity for managing the execution priority of processes in a computing system, the management entity being configured to, in response to activation of a pre-stored profile defining execution priorities for each of a plurality of processes, cause those processes to be executed by the computing system in accordance with the respective priorities defined in the active profile.10-27-2011
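Activating a pre-stored profile that maps processes to execution priorities, as 20110265089 describes, reduces to a lookup-and-assign pass; a minimal sketch, assuming processes are dicts, profiles are name-to-priority maps, and lower numbers mean higher priority (all illustrative choices):

```python
def apply_profile(processes, profile, default=10):
    """Sketch: on profile activation, assign each process the execution
    priority the profile defines for it; processes the profile does not
    mention get an assumed default priority."""
    for proc in processes:
        proc["priority"] = profile.get(proc["name"], default)
    # Return the processes in the order the scheduler would favor them.
    return sorted(processes, key=lambda p: p["priority"])
```

Switching profiles (say, "gaming" vs. "battery-saver") is then just another call to the same function with a different map.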
20080222640Prediction Based Priority Scheduling - Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history of task requests can be represented by a historical log that monitors the receipt of high priority task request submissions over time. This historical log in combination with other user defined scheduling rules is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the level of priority associated with the task requests contained within that queue. The user-defined scheduling rules give scheduling priority to the higher priority task requests, and the historical log is used to predict subsequent submissions of high priority task requests so that lower priority task requests that would interfere with the higher priority task requests will be delayed or will not be scheduled for processing.09-11-2008
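The prediction step in 20080222640 (using a historical log of high-priority submissions to decide whether a low-priority task would interfere) can be sketched with a simple mean-inter-arrival predictor; the predictor choice and every name here are assumptions of this sketch, not the patent's method:

```python
def should_schedule_low_priority(now, history, window=5):
    """Sketch: predict the next high-priority arrival from the mean
    inter-arrival time in the historical log, and hold back low-priority
    work when that arrival is predicted within `window` time units."""
    if len(history) < 2:
        return True  # not enough history to predict; don't delay work
    gaps = [b - a for a, b in zip(history, history[1:])]
    predicted_next = history[-1] + sum(gaps) / len(gaps)
    # Schedule low-priority work only if no high-priority submission
    # is predicted to land within the interference window.
    return not (0 <= predicted_next - now <= window)
```

A real scheduler would combine this with the user-defined rules the abstract mentions; the log-based prediction only supplies the "delay or defer" signal.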
20130152097Resource Health Based Scheduling of Workload Tasks - A computer-implemented method for allocating threads includes: receiving a registration of a workload, the registration including a workload classification and a workload priority; 06-13-2013
20120254882Controlling priority levels of pending threads awaiting processing - A data processing apparatus comprises processing circuitry arranged to process processing threads using resources accessible to the processing circuitry. A pipeline is provided for handling at least two pending threads awaiting processing by the processing circuitry. The pipeline includes at least one resource-requesting pipeline stage for requesting access to resources for the pending threads. A priority controller controls priority levels of the pending threads. The priority levels define a priority with which pending threads are granted access to resources. When a pending thread reaches a final pipeline stage, if the request resources are not yet available then the priority level of that thread is raised selectively and the thread is returned to a first pipeline stage of the pipeline. If the requested resources are available then the thread is forwarded from the pipeline.10-04-2012
20130091506MONITORING PERFORMANCE ON WORKLOAD SCHEDULING SYSTEMS - The present invention relates to the field of enterprise network computing. In particular, it relates to monitoring the workload of a workload scheduler. Information defining a plurality of test jobs of low priority is received. The test jobs have respective launch times, and are launched for execution in a data processing system in accordance with said launch times and said low execution priority. The number of test jobs executed within a pre-defined analysis time range is determined. A performance decrease warning is issued if the number of executed test jobs is lower than a predetermined threshold number. A workload scheduler discards launching of jobs having a low priority when estimating that a volume of jobs submitted with higher priority is sufficient to keep said scheduling system busy.04-11-2013
20130091505Priority Level Arbitration Method and Device - The present invention discloses a method and device for arbitrating priority levels. The method comprises: setting a plurality of first stage polling arbiters and a second stage priority level arbiter respectively, wherein the number of the first stage polling arbiters is equal to the number of priority levels contained in a plurality of source ends; receiving task request signals for requesting tasks from the plurality of source ends and assigning request tasks with the same priority level to the same first stage polling arbiter; each of the first stage polling arbiters polling the received request tasks with the same priority level respectively to obtain one request task and transmitting the request task to the second stage priority level arbiter; and the second stage priority level arbiter receiving the plurality of request tasks and outputting an output result of request tasks with the highest priority level to a destination end.04-11-2013
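The two-stage structure of 20130091505 (one polling arbiter per priority level, feeding a final strict-priority arbiter) can be sketched as below; the round-robin index, the "level 0 = highest" convention, and all names are illustrative assumptions:

```python
class TwoStageArbiter:
    """Sketch: first stage polls (round-robins) among same-priority
    requesters; second stage picks the winner of the highest non-empty
    priority level (level 0 = highest in this sketch)."""
    def __init__(self, num_levels):
        self.queues = [[] for _ in range(num_levels)]
        self.rr_index = [0] * num_levels   # per-level polling pointer

    def request(self, source, level):
        self.queues[level].append(source)

    def arbitrate(self):
        for level, q in enumerate(self.queues):
            if q:
                # First stage: poll among requesters of this level.
                i = self.rr_index[level] % len(q)
                winner = q.pop(i)
                self.rr_index[level] += 1
                # Second stage: the highest non-empty level wins outright.
                return winner
        return None
```

Splitting arbitration this way keeps fairness local to each priority level while the final stage remains a simple strict-priority comparison.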
20130104139System for Managing Data Collection Processes - A system and process for managing data collection processes is disclosed. An apparatus that incorporates teachings of the present disclosure can include a data collection system having a controller element that assigns to each of the processes a query interval according to a priority level of the data collection process for requesting use of processing resources, receives one or more requests from the processes, once per respective query interval, for use of at least a portion of available processing resources, and releases at least a portion of the available processing resources to a requesting one of the processes when the use of the available processing resources exceeds a utilization threshold. Additional embodiments are disclosed.04-25-2013
20130104138SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes.04-25-2013
20110276976EXECUTION ORDER DECISION DEVICE - An execution sequence decision device is capable of efficiently and appropriately determining the execution sequence of processing modules even when the processing modules have a closed circuit in their input/output dependencies. A dependence evaluation sub-unit and an anti-dependence evaluation sub-unit evaluate the dependence and anti-dependence of each processing module in a processing module group. A priority evaluation sub-unit determines the priority of each processing module in the processing module group based on the dependence and anti-dependence. An execution order allocation sub-unit allocates the top of the execution sequence to the one processing module that has the highest priority obtained by the priority evaluation sub-unit. An execution sequence allocation unit causes the respective sub-units to repeatedly execute the above-mentioned process every time the order of execution sequence of one processing module is determined, and then sequentially allocates the orders of execution sequence to the respective processing modules.11-10-2011
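The repeated pick-highest-priority loop in this abstract resembles a greedy topological ordering that, unlike plain topological sort, still terminates when dependencies form a cycle. A sketch under assumed simplifications (priority = number of unsatisfied dependences, ties broken by name; the patent's actual priority combines dependence and anti-dependence):

```python
def execution_order(modules, deps):
    """Sketch: repeatedly pick the highest-priority module, where priority
    favors modules whose inputs (dependences) are already satisfied.
    deps maps a module to the set of modules it depends on."""
    order, done = [], set()
    remaining = set(modules)
    while remaining:
        # Fewest unsatisfied dependences first; the name tie-break makes
        # the result deterministic even for a closed circuit (cycle).
        best = min(remaining, key=lambda m: (len(deps.get(m, set()) - done), m))
        order.append(best)
        done.add(best)
        remaining.remove(best)
    return order
```

Because one module is committed per iteration regardless of whether its dependences are all satisfied, a dependency cycle degrades the ordering rather than deadlocking it.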
20130132963Superseding of Recovery Actions Based on Aggregation of Requests for Automated Sequencing and Cancellation - Command sequencing may be provided. Upon receiving a plurality of action requests, an ordered queue comprising at least some of the requested actions may be created. The actions may then be performed in the queue's order.05-23-2013
20130132964ELECTRONIC DEVICE AND METHOD OF OPERATING THE SAME - An electronic device and a method of operating the same are provided. More particularly, in an electronic device and a method of operating the same, by recognizing a keyword of contents, reflecting the contents, and executing an application corresponding to the recognized keyword, an application execution environment corresponding to a user intention is provided.05-23-2013
20110219380MARSHALING RESULTS OF NESTED TASKS - The present invention extends to methods, systems, and computer program products for marshaling results of nested tasks. Unwrap methods are used to reduce the level of task nesting and insure that appropriate results are marshaled between tasks. A proxy task is used to represent the aggregate asynchronous operation of a wrapping task and a wrapped task. The proxy task has a completion state that is at least indicative of the completion state of the aggregate asynchronous operation. The completion state of the aggregate asynchronous operation is determined and set from one or more of the completion state of the wrapping task and the wrapped task. The completion state of the proxy task can be conveyed to calling logic to indicate the completion state of the aggregate asynchronous operation to the calling logic.09-08-2011
20080201714INFORMATION PROCESSING APPARATUS FOR CONTROLLING INSTALLATION, METHOD FOR CONTROLLING THE APPARATUS AND CONTROL PROGRAM FOR EXECUTING THE METHOD - A server apparatus manages a device driver for enabling any of a plurality of devices to which a plurality of client apparatuses are connected on a network. The server apparatus comprises a storage unit that stores, for each device, a device driver that can be installed to the device in association with the device, a generating unit that generates different tasks for any of the stored device drivers, a creating unit that creates a schedule for executing the generated tasks, and an executing unit that executes the generated tasks based on the created schedule.08-21-2008
20080201713Project Management System - A method and apparatus for managing a project are described. According to one embodiment, the method includes the steps of ranking the plurality of tasks to produce a first list; assigning a task cost to each of the plurality of tasks; setting a planned velocity, the planned velocity determining the rate at which task costs are planned to be completed per time segment; and dynamically assigning each of the plurality of tasks to one of the sequence of time segments in the order indicated by the first list based on the planned velocity. In other embodiments, the apparatus includes a machine-readable medium that provides instructions for a processor, which when executed by the processor cause the processor to perform a method of the present invention.08-21-2008
20110225590SYSTEM AND METHOD OF EXECUTING THREADS AT A PROCESSOR - A method and system for executing a plurality of threads are described. The method may include mapping a thread specified priority value associated with a dormant thread to a thread quantized priority value associated with the dormant thread if the dormant thread becomes ready to run. The method may further include adding the dormant thread to a ready to run queue and updating the thread quantized priority value. A thread quantum value associated with the dormant thread may also be updated, or a combination of the quantum value and quantized priority value may both be updated.09-15-2011
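Mapping a fine-grained, thread-specified priority onto a small set of quantized scheduler levels, as this abstract describes, is a simple range reduction; the sketch below assumes an 8-level scheduler and a 0-255 specified range (both parameter choices are illustrative, not from the patent):

```python
def quantize_priority(specified, num_levels=8, max_specified=255):
    """Sketch: clamp a thread-specified priority into range, then map it
    proportionally onto one of `num_levels` quantized levels
    (0 = lowest quantized level in this sketch)."""
    specified = max(0, min(specified, max_specified))
    return specified * (num_levels - 1) // max_specified
```

Quantizing keeps the ready-to-run queue structure small (one bucket per level) while still honoring the relative ordering of the specified values.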
20100287559ENERGY-AWARE COMPUTING ENVIRONMENT SCHEDULER - A method includes receiving a process request, identifying a current state of a device in which the process request is to be executed, calculating a power consumption associated with an execution of the process request, and assigning an urgency for the process request, where the urgency corresponds to a time-variant parameter to indicate a measure of necessity for the execution of the process request. The method further includes determining whether the execution of the process request can be delayed to a future time or not based on the current state, the power consumption, and the urgency, and causing the execution of the process request, or causing a delay of the execution of the process request to the future time, based on a result of the determining.11-11-2010
20100287558THROTTLING OF AN ITERATIVE PROCESS IN A COMPUTER SYSTEM - Throttling of an iterative process in a computer system is disclosed. Embodiments of the present invention focus on non-productive iterations of an iterative process in a computer system. The number of productive iterations of the iterative process during a current timeframe is determined while the iterative process is executing. A count of the number of process starts for the iterative process during the current timeframe is stored. The count can be normalized to obtain a number of units of work handled during the current timeframe. A throttling schedule can be calculated, and the throttling schedule can be stored in the computer system. The throttling schedule can then be used to determine a delay time between iterations of the iterative process for a new timeframe. A formula can be used to calculate the throttling schedule. The throttling schedule can be overridden in accordance with a service level agreement (SLA), as well as for other reasons.11-11-2010
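One plausible form of the throttling formula the abstract alludes to is a delay that grows with the fraction of non-productive iterations; the function below is a hypothetical instance (the linear shape and the constants are assumptions of this sketch, not the patented formula):

```python
def throttle_delay(starts, productive, base_delay=0.1, max_delay=5.0):
    """Sketch: compute the inter-iteration delay for the next timeframe
    from the current timeframe's counts. When every start was productive
    the delay stays at base_delay; when none were, it grows to max_delay."""
    if starts == 0:
        return base_delay
    wasted = (starts - productive) / starts   # fraction of idle spins
    return min(max_delay, base_delay + wasted * (max_delay - base_delay))
```

An SLA override, as the abstract notes, would simply replace the computed value with a contractual ceiling before the schedule is stored.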
20110231855APPARATUS AND METHOD FOR CONTROLLING PRIORITY - A priority control apparatus includes: a job operation information storage unit that stores, as job operation information on a per job operation basis for a plurality of job operations, a process and an object used by the process with the process mapped to the object, each job operation being executed by a plurality of processes; a delay determiner that determines a first job operation that is delayed from among the plurality of job operations; and a priority controller that identifies a second job operation sharing an object used in the first job operation by referencing the job operation information storage unit, identifies a process, using an object not used in the first job operation, from among the processes executing the second job operation identified, and lowers a priority at which the identified process is to be executed.09-22-2011
20130152099DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a sub-system for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor, for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues.06-13-2013
20130152100METHOD TO GUARANTEE REAL TIME PROCESSING OF SOFT REAL-TIME OPERATING SYSTEM - A method and apparatus are provided to guarantee real-time processing of a soft real-time operating system on a multicore platform by executing a thread while varying the core on which the thread is executed. The method includes assigning a priority to a task thread, executing the task thread, determining a core on which the task thread is to be executed, and, if the core is determined, transferring the task thread to the determined core.06-13-2013
20130152098TASK PRIORITY BOOST MANAGEMENT - According to one aspect of the present disclosure, a method and technique for task priority boost management is disclosed. The method includes: responsive to a thread executing in user mode an instruction to boost a priority of the thread, accessing a boost register, the boost register accessible in kernel mode; determining a value of the boost register; and responsive to determining that the boost register holds a non-zero value, boosting the priority of the thread.06-13-2013
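The gating check in 20130152098 (a user-mode boost instruction takes effect only if a kernel-accessible boost register holds a non-zero value) can be sketched in a few lines; the register-as-object model and the fixed boost amount are illustrative assumptions:

```python
class BoostRegister:
    """Sketch of the kernel-side check: when a thread executes the boost
    instruction in user mode, the kernel reads this register and applies
    the boost only if the register holds a non-zero value."""
    def __init__(self, value=0):
        self.value = value

    def maybe_boost(self, thread_priority, boost_amount=1):
        if self.value != 0:
            return thread_priority + boost_amount
        return thread_priority  # boost request silently ignored
```

Keeping the register readable only in kernel mode means the policy (whether user threads may self-boost at all) stays under operating-system control.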
20090100431DYNAMIC BUSINESS PROCESS PRIORITIZATION BASED ON CONTEXT - Instantiated business processes are dynamically prioritized to an execution priority level based upon a priority relevant context associated with the business process. The business process instance is further executed based upon the execution priority level. The execution priority level for the business process instance may be determined using at least one of a table lookup, a rule or an algorithm to determine the execution priority level. Moreover, the execution priority level may be set based upon available priority levels in a priority band. Still further, detected changes in the priority relevant context may trigger changing the execution priority level based upon the change in the priority relevant context. Resources allocated to implement the business process instance may also be dynamically adjusted based upon changes to the execution priority level of an associated business process instance.04-16-2009
20130205299APPARATUS AND METHOD FOR SCHEDULING KERNEL EXECUTION ORDER - A method and apparatus for guaranteeing real-time operation of an application program that performs data processing and particular functions in a computer environment using a micro architecture are provided. The apparatus estimates execution times of kernels based on an effective progress index (EPI) of each of the kernels, and determines an execution order of the kernels based on the estimated execution times of the kernels and priority of the kernels.08-08-2013
20120284728Method for the Real-Time Ordering of a Set of Noncyclical Multi-Frame Tasks - A method for real-time scheduling of an application having a plurality m of software tasks executing at least one processing operation on a plurality N of successive data frames, each of said tasks i being defined at least, for each of said frames j, by an execution time C11-08-2012
20120284727Scheduling in Mapreduce-Like Systems for Fast Completion Time - A method and system for scheduling tasks is provided. A plurality of lower bound completion times is determined, using one or more computer processors and memory, for each of a plurality of jobs, each of the plurality of jobs including a respective subset of a plurality of tasks. A task schedule is determined for each of a plurality of processors based on the lower bound completion times.11-08-2012
20130185728SCHEDULING AND EXECUTION OF COMPUTE TASKS - One embodiment of the present invention sets forth a technique for assigning a compute task to a first processor included in a plurality of processors. The technique involves analyzing each compute task in a plurality of compute tasks to identify one or more compute tasks that are eligible for assignment to the first processor, where each compute task is listed in a first table and is associated with a priority value and an allocation order that indicates the relative time at which the compute task was added to the first table. The technique further involves selecting a first compute task from the identified one or more compute tasks based on at least one of the priority value and the allocation order, and assigning the first compute task to the first processor for execution.07-18-2013
20110283287METHOD FOR ALLOCATING PRIORITY TO RESOURCE AND METHOD AND APPARATUS FOR OPERATING RESOURCE USING THE SAME - Disclosed are a method for allocating priority to resources, and a method and apparatus for operating resources using the same. The method for allocating priority to resources includes: selecting a resource block including at least one unit; determining a priority level of the selected resource block by reflecting a retrieval rate (or recovery rate) including a retrieval frequency and a retrieval period of the selected resource block; and allotting the determined priority level to the selected resource block.11-17-2011
20110283286Methods and systems for dynamically adjusting performance states of a processor - A method for dynamically adjusting performance states of a processor includes executing a workload associated with a workload mode and determining a primary thread among all processor threads executing the workload. The method also includes calculating and setting a performance state (P state) of the processor based on the workload mode.11-17-2011
20110314477FAIR SHARE SCHEDULING BASED ON AN INDIVIDUAL USER'S RESOURCE USAGE AND THE TRACKING OF THAT USAGE - Fair share scheduling divides the total amount of available resource into a finite number of shares and allocates a portion of those shares to an individual user or group of users, specifying the proportion of the resource to which that user or group is entitled. The scheduling priority of jobs for a user or group of users depends on a customizable expression of the shares allocated to and used by that user or group. The usage by the user or group of users is accumulated, and an exponential decay function is applied to it so that historic resource usage can be tracked with a single piece of data and an update timestamp.12-22-2011
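The exponential-decay bookkeeping described here can be sketched directly: one stored usage value plus a timestamp is enough, because decay is applied lazily at each update. The half-life parameter and the priority expression below are illustrative assumptions, not terms from the patent.

```python
def decayed_usage(prev_usage, elapsed, half_life):
    """Exponentially decay accumulated historic usage: after one
    half-life the remembered usage halves. `half_life` is an assumed
    tuning parameter."""
    return prev_usage * 0.5 ** (elapsed / half_life)

def schedule_priority(allocated_shares, used, elapsed, half_life):
    """Toy priority expression: entitled shares minus decayed usage.
    A higher value means the user's jobs are scheduled sooner."""
    return allocated_shares - decayed_usage(used, elapsed, half_life)

decayed_usage(100.0, 10.0, 10.0)          # one half-life later: 50.0
schedule_priority(60.0, 100.0, 10.0, 10.0)  # 60 - 50 = 10.0
```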
20110314476BROADCAST RECEIVING APPARATUS AND SCHEDULING METHOD THEREOF - A broadcast receiving apparatus and scheduling method thereof are provided. The broadcast receiving apparatus includes: a communication interface which performs an input-output operation of the broadcast receiving apparatus in response to a request for an input-output event from at least one of a plurality of operating systems; and a controller which processes the requested input-output event according to a priority given to the operating system that has requested the input-output event.12-22-2011
20110314475RESOURCE ACCESS CONTROL - Various embodiments can control access to a computing resource (e.g., a memory resource) by detecting that a high priority activity is accessing the resource and preventing a lower priority activity from accessing the resource. The lower priority activity can be allowed access to the resource after the high priority activity is finished accessing the resource. Various embodiments enable memory operations to be mapped to account for changes in data ordering that can occur when a lower priority activity is suppressed. For example, when an activity requests that data be written to a logical memory region, a mapping is created that maps the logical memory region to a physical memory region. The data can then be written to the physical memory region.12-22-2011
20130191836SYSTEM AND METHOD FOR DYNAMICALLY COORDINATING TASKS, SCHEDULE PLANNING, AND WORKLOAD MANAGEMENT - Systems and methods for dynamically coordinating a plurality of tasks are provided. Such tasks include a priority rank and at least one of a target date, a classification, an associated application, an associated action, and an associated priority rank adjustment parameter. A particular task can be processed relative to other tasks to generate a first scheduling scheme that defines a prioritized arrangement of the tasks. Based on the priority rank adjustment parameter(s), further scheduling schemes can be generated in lieu of the first scheduling scheme, thereby accounting for the respective priority rank adjustment parameters by influencing the arrangement of the tasks relative to one another. Additionally, based on a status notification, the tasks can be processed to generate a scheduling scheme that accounts for the status notification by influencing the arrangement of the first task and the stored tasks relative to one another.07-25-2013
20080288946States matrix for workload management simplification - A computer-implemented method, system and article of manufacture for managing workloads in a computer system, comprising monitoring system conditions and operating environment events that impact the operation of the computer system, using an n-dimensional state matrix to identify at least one state resulting from the monitored system conditions and operating environment events, and initiating an action in response to the identified state.11-20-2008
20120030682Dynamic Priority Assessment of Multimedia for Allocation of Recording and Delivery Resources - Techniques are provided to allocate resources used for recording multimedia or to retrieve recorded content and deliver it to a recipient. A request associated with multimedia for access to resources is received. A context associated with the multimedia is determined. Resources for the multimedia are allocated based on the context.02-02-2012
20120089985Sharing Sampled Instruction Address Registers for Efficient Instruction Sampling in Massively Multithreaded Processors - Sampled instruction address registers are shared among multiple threads executing on a plurality of processor cores. Each of a plurality of sampled instruction address registers is assigned to a particular thread running for an application on the plurality of processor cores. Each of the sampled instruction address registers is configured by storing in each of the sampled instruction address registers a thread identification of the particular thread in a thread identification field and a processor identification of a particular processor on which the particular thread is running in a processor identification field.04-12-2012
20120096472VIRTUAL QUEUE PROCESSING CIRCUIT AND TASK PROCESSOR - A queue control circuit controls the placement and retrieval of a plurality of tasks in a plurality of types of virtual queues. State registers are associated with respective tasks. Each of the state registers stores a task priority order, a queue ID of a virtual queue, and the order of placement in the virtual queue. Upon receipt of a normal placement command ENQ_TL, the queue control circuit establishes, in the state register for the placed task, QID of the virtual queue as the destination of placement and an order value indicating the end of the queue. When a reverse placement command ENQ_TP is received, QID of the destination virtual queue and an order value indicating the start of the queue are established. When a retrieval command DEQ is received, QID is cleared in the destination virtual queue.04-19-2012
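The state-register scheme in this abstract can be modeled compactly: each task's register holds a priority, a queue ID (QID), and an order value; ENQ_TL places a task at the tail, ENQ_TP at the head, and DEQ retrieves the head task and clears its QID. This is a behavioral sketch in software of what the patent describes as a circuit; the register layout is an illustrative assumption.

```python
class QueueControl:
    """Toy model: each task's state register is {"prio", "qid", "order"}."""
    def __init__(self):
        self.reg = {}   # task -> state register

    def _orders(self, qid):
        return [r["order"] for r in self.reg.values() if r["qid"] == qid]

    def enq_tl(self, task, prio, qid):          # normal placement: tail
        orders = self._orders(qid)
        self.reg[task] = {"prio": prio, "qid": qid,
                          "order": (max(orders) + 1) if orders else 0}

    def enq_tp(self, task, prio, qid):          # reverse placement: head
        orders = self._orders(qid)
        self.reg[task] = {"prio": prio, "qid": qid,
                          "order": (min(orders) - 1) if orders else 0}

    def deq(self, qid):                         # retrieve head, clear QID
        members = [(t, r) for t, r in self.reg.items() if r["qid"] == qid]
        if not members:
            return None
        task, r = min(members, key=lambda tr: tr[1]["order"])
        r["qid"] = None                         # QID cleared on retrieval
        return task

qc = QueueControl()
qc.enq_tl("t1", 1, "Q0")
qc.enq_tl("t2", 1, "Q0")
qc.enq_tp("t3", 1, "Q0")   # reverse placement: t3 jumps to the head
qc.deq("Q0")               # retrieves "t3"
```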
20120096471APPARATUS AND METHOD FOR EXECUTING COMPONENTS BASED ON THREAD POOL - An apparatus for executing components based on a thread pool includes a component executor configured to have a set priority and period, to register components having the set priority and period, and to execute the registered components. Further, the apparatus for executing the components based on the thread pool includes a thread pool configured to allocate a thread for executing the component executor; and an Operating System (OS) configured to create an event for allocating the thread to the component executor in each set period.04-19-2012
20120096470PRIORITIZING JOBS WITHIN A CLOUD COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach to prioritize jobs (e.g., within a cloud computing environment) so as to maximize positive financial impacts (or to minimize negative financial impacts) for cloud service providers, while not exceeding processing capacity or failing to meet terms of applicable Service Level Agreements (SLAs). Specifically, under the present invention a respective income (i.e., a cost to the customer), a processing need, and set of SLA terms (e.g., predetermined priorities, time constraints, etc.) will be determined for each of a plurality of jobs to be performed. The jobs will then be prioritized in a way that: maximizes cumulative/collective income; stays within the total processing capacity of the cloud computing environment; and meets the SLA terms.04-19-2012
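The selection problem this abstract sets up is a capacity-constrained knapsack variant. A simple greedy sketch, under stated assumptions (SLA-mandated jobs admitted first, then best income per unit of processing need; the job fields are illustrative names, and a real system would need to handle SLA jobs that do not fit):

```python
def prioritize_jobs(jobs, capacity):
    """Greedy sketch: admit SLA-mandated jobs first, then jobs with the
    best income per unit of processing need, never exceeding capacity."""
    ordered = sorted(jobs, key=lambda j: (not j["sla_required"],
                                          -j["income"] / j["need"]))
    chosen, used = [], 0
    for j in ordered:
        if used + j["need"] <= capacity:
            chosen.append(j["name"])
            used += j["need"]
    return chosen

jobs = [
    {"name": "A", "income": 10, "need": 5, "sla_required": False},
    {"name": "B", "income": 3,  "need": 2, "sla_required": True},
    {"name": "C", "income": 8,  "need": 4, "sla_required": False},
]
prioritize_jobs(jobs, capacity=7)   # B (SLA) then A; C would exceed capacity
```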
20120096469SYSTEMS AND METHODS FOR DYNAMICALLY SCANNING A PLURALITY OF ACTIVE PORTS FOR WORK - Systems and methods for scanning ports for work are provided. One system includes one or more processors, multiple ports, a first tracking mechanism, and a second tracking mechanism for tracking high priority work and low priority work, respectively. The processor(s) is/are configured to perform the below method. One method includes scanning the ports, finding high priority work on a port, and accepting or declining the high priority work. The method further includes changing a designation of the processor to TRUE in the first tracking mechanism if the processor accepts the high priority work such that the processor is allowed to perform the high priority work on the port. Also provided are computer storage mediums including computer code for performing the above method.04-19-2012
20120096468COMPUTE CLUSTER WITH BALANCED RESOURCES - A scheduler for a compute cluster that allocates computing resources to jobs to achieve a balanced distribution. The balanced distribution maximizes the number of executing jobs to provide fast response times for all jobs by, to the extent possible, assigning a designated minimum for each job. If necessary to achieve this minimum distribution, resources in excess of a minimum previously allocated to a job may be de-allocated, if those resources can be used to meet the minimum requirements of other jobs. Resources above those used to meet the minimum requirements of executing jobs are allocated based on a computed desired allocation, which may be developed based on respective job priorities. To meet the desired allocation, resources may be de-allocated from jobs having more than their desired allocation and re-allocated to jobs having less than their desired allocation of resources.04-19-2012
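The two-phase allocation this abstract describes (minimums first to maximize the number of running jobs, then a priority-weighted split of the surplus) can be sketched as below. Integer resource units, the field names, and floor division for the weighted split are illustrative assumptions; the abstract's de-allocation/re-allocation step is omitted for brevity.

```python
def balance(jobs, total):
    """Pass 1: grant each job's designated minimum while resources last.
    Pass 2: share the surplus among running jobs in proportion to
    their priorities."""
    alloc = {}
    remaining = total
    for j in jobs:                          # pass 1: minimums
        if j["min"] <= remaining:
            alloc[j["name"]] = j["min"]
            remaining -= j["min"]
    running = [j for j in jobs if j["name"] in alloc]
    weight = sum(j["prio"] for j in running)
    for j in running:                       # pass 2: weighted surplus
        alloc[j["name"]] += remaining * j["prio"] // weight
    return alloc

jobs = [{"name": "J1", "min": 2, "prio": 3},
        {"name": "J2", "min": 2, "prio": 1}]
balance(jobs, 8)   # minimums 2+2, surplus 4 split 3:1 -> J1: 5, J2: 3
```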
20130212591TASK SCHEDULING METHOD AND APPARATUS - An apparatus schedules execution of a plurality of tasks by a processor. Each task has an associated periodicity and an associated priority based upon the associated periodicity. The processor executes each of the plurality of tasks periodically according to the associated periodicity of the task. A scheduler, at each of a series of scheduling time points updates the priorities of the plurality of tasks and schedules the tasks that need to be executed in accordance with their priorities. The scheduler identifies an unexecuted task which, at a preceding scheduling time point, was scheduled for execution but which, since that preceding scheduling time point, has not been executed. The scheduler sets the priority of the unexecuted task as greater than the priority of other tasks that have the same periodicity as the unexecuted task and that are not themselves unexecuted tasks.08-15-2013
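The priority-update rule at the heart of this abstract (an unexecuted task is lifted strictly above peers with the same periodicity) can be sketched as follows. The rate-monotonic-style base priority, the task dictionaries, and "higher number = higher priority" are illustrative assumptions.

```python
def base_priority(period):
    """Assumed rate-monotonic style: shorter period -> higher priority."""
    return 1000 // period

def update_priorities(tasks):
    """At a scheduling time point, recompute base priorities, then set
    each unexecuted task's priority strictly above the priorities of
    executed tasks sharing its periodicity."""
    for t in tasks:
        t["prio"] = base_priority(t["period"])
    for t in tasks:
        if t["unexecuted"]:
            peers = [u["prio"] for u in tasks
                     if u["period"] == t["period"] and not u["unexecuted"]]
            if peers:
                t["prio"] = max(peers) + 1   # lift above same-period peers

tasks = [{"period": 10, "unexecuted": False},
         {"period": 10, "unexecuted": True},   # missed last time point
         {"period": 20, "unexecuted": False}]
update_priorities(tasks)   # the missed task now outranks its period-10 peer
```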
20130212592SYSTEM AND METHOD FOR TIME-AWARE RUN-TIME TO GUARANTEE TIMELINESS IN COMPONENT-ORIENTED DISTRIBUTED SYSTEMS - A method and system for achieving time-awareness in the highly available, fault-tolerant execution of components in a distributed computing system, without requiring the writer of these components to explicitly write code (such as entity beans or database transactions) to make component state persistent. It is achieved by converting the intrinsically non-deterministic behavior of the distributed system to a deterministic behavior, thus enabling state recovery to be achieved by advantageously efficient checkpoint-replay techniques. The system is made deterministic by repeating the execution of the receiving component, processing the messages in the same order as their associated timestamps, and is made time-aware by allowing adjustment of message execution based on time.08-15-2013

Patent applications in class Priority scheduling