Process scheduling

Subclass of:

718 - Electrical computers and digital processing systems: virtual machine task or process management or task management/control

718100000 - TASK MANAGEMENT OR CONTROL

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
718104000 | Resource allocation | 807
718103000 | Priority scheduling | 394
718105000 | Load balancing | 287
718107000 | Multitasking, time sharing | 153
718106000 | Dependency based cooperative processing of multiple programs working together to accomplish a larger task | 127
Entries
Document | Title | Date
20130031555SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR CONDITIONALLY EXECUTING RELATED REPORTS IN PARALLEL BASED ON AN ESTIMATED EXECUTION TIME - In accordance with embodiments, there are provided mechanisms and methods for conditionally executing related reports in parallel based on an estimated execution time. These mechanisms and methods for conditionally executing related reports in parallel based on an estimated execution time can provide parallel execution of related reports when predetermined time-based criteria are met. The ability to conditionally provide parallel execution of related reports can reduce overhead caused by such parallel execution when the time-based criteria are met.01-31-2013
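The conditional-parallelism idea above can be illustrated with a brief sketch. This is only a minimal illustration in Python, not the patented mechanism; the `estimate_runtime` and `run_report` callables and the threshold value are assumptions supplied for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical threshold (seconds): parallel execution is only worthwhile when
# each related report is expected to run at least this long.
PARALLEL_THRESHOLD = 5.0

def run_related_reports(reports, estimate_runtime, run_report):
    """Run related reports in parallel only when the time-based criterion is met."""
    estimates = [estimate_runtime(r) for r in reports]
    if reports and all(t >= PARALLEL_THRESHOLD for t in estimates):
        # Criterion met: execute the related reports concurrently.
        with ThreadPoolExecutor(max_workers=len(reports)) as pool:
            return list(pool.map(run_report, reports))
    # Criterion not met: skip the overhead of parallel execution, run serially.
    return [run_report(r) for r in reports]
```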
20080209424IRP HANDLING - An apparatus for handling IRPs, the apparatus comprising an overload determining unit (…)08-28-2008
20090165001Timer Patterns For Process Models - The subject matter disclosed herein provides methods and apparatus, including computer program products, for providing timers for tasks of process models. In one aspect, an input representative of a temporal constraint for a task of a graph-process model may be received. The temporal constraint defines at least one of a delay or a deadline. The task may be associated with the temporal constraint created based on the received input. The temporal constraint may be defined to have a placement in the graph-process model based on the type of temporal constraint. The task and the temporal constraint may be provided to configure the process model. Related systems, apparatus, methods, and/or articles are described.06-25-2009
20090193423WAKEUP PATTERN-BASED COLOCATION OF THREADS - A method of co-locating threads and corresponding system are described. The method comprises a first thread executing on a first processor awakening a second thread for execution on a second processor and assigning the second thread to execute on the first processor based on a determination that the first thread awakened the second thread at a prior awakening of the second thread.07-30-2009
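The wakeup-pattern policy above reduces to a small amount of bookkeeping per thread. Below is a toy Python model of it; the `SchedEntity` class and the `wake` helper are illustrative names, not taken from the filing:

```python
class SchedEntity:
    """Toy thread descriptor: tracks its CPU and the thread that last woke it."""
    def __init__(self, name, cpu=0):
        self.name = name
        self.cpu = cpu
        self.last_waker = None

def wake(waker, target):
    """Waker (running on waker.cpu) wakes target; co-locate on a repeated pattern."""
    if target.last_waker is waker:
        # The same thread also performed the prior awakening, so the pair is
        # likely communicating: run the awakened thread on the waker's CPU.
        target.cpu = waker.cpu
    target.last_waker = waker
    return target.cpu

# Example: after thread A wakes B a second time, B is moved to A's processor.
a, b = SchedEntity("A", cpu=0), SchedEntity("B", cpu=1)
wake(a, b)            # first wakeup: B stays on CPU 1, pattern recorded
print(wake(a, b))     # second wakeup by A: prints 0, B co-located with A
```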
20080307419Lazy kernel thread binding - Various technologies and techniques are disclosed for providing lazy kernel thread binding. User mode and kernel mode portions of thread scheduling are decoupled so that a particular user mode thread can be run on any one of multiple kernel mode threads. A dedicated backing thread is used whenever a user mode thread wants to perform an operation that could affect the kernel mode thread, such as a system call. For example, a notice is received that a particular user mode thread running on a particular kernel mode thread wants to make a system call. A dedicated backing thread that has been assigned to the particular user mode thread is woken. State is shuffled from the user mode thread to the dedicated backing thread using a state shuffling process. The particular kernel mode thread is put to sleep. The system call is executed using the dedicated backing thread.12-11-2008
20090019443METHOD AND SYSTEM FOR FUNCTION-SPECIFIC TIME-CONFIGURABLE REPLICATION OF DATA MANIPULATING FUNCTIONS - The system (…)01-15-2009
20130086591SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR CONTROLLING A PROCESS USING A PROCESS MAP - In accordance with embodiments, there are provided mechanisms and methods for controlling a process using a process map. These mechanisms and methods for controlling a process using a process map can enable process operations to execute in order without necessarily having knowledge of one another. The ability to provide the process map can avoid a requirement that the operations themselves be programmed to follow a particular sequence, which can further improve the ease with which the sequence of operations may be changed.04-04-2013
20130086589Acquiring and transmitting tasks and subtasks to interface - A system includes a task request receiving module configured to receive task data related to a request to acquire data, a related subtask acquisition module configured to acquire subtasks related to the task data received by the task request receiving module, a two-or-more discrete interface devices selection module configured to select discrete interface devices by analyzing at least one of a status and a characteristic of discrete interface devices, a two-or-more discrete interface devices subtask transmission module configured to transmit one or more subtasks acquired by the related subtask acquisition module to two or more discrete interface devices selected by the two-or-more discrete interface device selection module, and an executed subtask result data receiving module configured to receive result data from at least one of the two-or-more discrete interface devices to which the two-or-more discrete interface devices subtask transmission module transmitted one or more subtasks.04-04-2013
20130086590MANAGING CAPACITY OF COMPUTING ENVIRONMENTS AND SYSTEMS THAT INCLUDE A DATABASE - Capacity of a computing environment that includes a database can be maintained at a target capacity by regulating the usage of one or more of the resources by one or more tasks or activities (e.g., database work). Moreover, the usage of the resource(s) can be regulated based on the extent of use of the resource(s) by one or more other activities not being regulated (e.g., non-database activities that cannot be regulated by a database system). In other words, a target capacity can be maintained by effectively adjusting the extent by which one or more tasks can access one or more resources in consideration of the extent by which one or more of the resources are used by one or more other tasks or activities that are not being regulated with respect to their access of the resource(s).04-04-2013
20090100429Dual Mode Operating System For A Computing Device - A computing device which runs non-pageable real time and pageable non-real time processes is provided with non-pageable real time and pageable non-real time versions of operating system services where the necessity to page in memory would block a real-time thread of execution. In one embodiment, a real time operating system service has all its code and data locked, and only supports clients that similarly have their code and data locked. This ensures that such a service will not block due to a page fault caused by client memory being unavailable. A non-real time operating system service does not have its data locked and supports clients whose memory can be paged out. In a preferred embodiment servers which are required to provide real time behaviour are multithreaded and arrange for requests from real time and non-real time clients to be serviced in different threads.04-16-2009
20080256542Processor - In a processor including a plurality of register groups, while a task is being executed using one of the register groups, a context of a task to be executed next is restored into another one of the register groups. If the execution of the task currently being executed is suspended before the restoration starts, the task execution is continued by using one of the register groups in which a context of a task executed previously remains and executing the task.10-16-2008
20120246656SCHEDULING OF TASKS TO BE PERFORMED BY A NON-COHERENT DEVICE - A method for scheduling tasks to be processed by one of a plurality of non-coherent processing devices, at least two of the devices being heterogeneous devices and at least some of said tasks being targeted to a specific one of the processing devices. The devices process data that is stored in local storage and in a memory accessible by at least some of the devices. The method includes the steps of: for each of a plurality of non-dependent tasks to be processed by the device, determining consistency operations required to be performed prior to processing the non-dependent task; performing the consistency operations for one of the non-dependent tasks and on completion issuing the task to the device for processing; performing consistency operations for a further non-dependent task such that, on completion of the consistency operations, the device can process the further task.09-27-2012
20120246653GENERIC COMMAND PARSER - A requesting processing unit that includes a generic parser is described, which is adapted to operate together with one or more specifically configured command-files. A command-file includes one or more structured data elements descriptive of a command, which is available for execution by the processing unit. The data included in the command-file is registered in the computer memory associated with the processing unit. In general, the generic parser is configured, in response to an issued command, to search the computer memory for data comprised in the data elements registered there, including information corresponding to the command, and to use this data in order to generate a request to perform the command.09-27-2012
20120246652Processor Management Via Thread Status - Various systems, processes, and products may be used to manage a processor. In particular implementations, managing a processor may include the ability to determine whether a thread is pausing for a short period of time and place a wait event for the thread in a queue based on a short thread pause occurring. Managing a processor may also include the ability to activate a delay thread that determines whether a wait time associated with the pause has expired and remove the wait event from the queue based on the wait time having expired.09-27-2012
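A rough sketch of the queue-and-delay-thread idea above follows; the cutoff constant and the single-queue layout are assumptions made for the example:

```python
import time
from collections import deque

SHORT_PAUSE = 0.010          # assumed cutoff: pauses at or below this are "short"
wait_events = deque()        # queue of (thread_id, expiry) wait events

def note_pause(thread_id, pause_seconds):
    """Queue a wait event for a short pause; return False for long pauses."""
    if pause_seconds <= SHORT_PAUSE:
        wait_events.append((thread_id, time.monotonic() + pause_seconds))
        return True
    return False             # long pause: handled by the normal sleep path

def delay_thread_pass():
    """One sweep of the delay thread: drop events whose wait time has expired."""
    now = time.monotonic()
    for _ in range(len(wait_events)):
        thread_id, expiry = wait_events.popleft()
        if expiry > now:
            wait_events.append((thread_id, expiry))   # still pending
        # expired events are discarded, making the thread runnable again
```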
20100115521MEDIATION SERVER, TERMINALS AND DISTRIBUTED PROCESSING METHOD - A highly convenient data processing technique is provided.05-06-2010
20130081028RECEIVING DISCRETE INTERFACE DEVICE SUBTASK RESULT DATA AND ACQUIRING TASK RESULT DATA - Computationally implemented methods and systems include transmitting one or more subtasks corresponding to at least a portion of one or more tasks of acquiring data requested by a task requestor to a plurality of discrete interface devices, obtaining subtask result data corresponding to a result of the one or more subtasks carried out by two or more discrete interface devices of the plurality of discrete interface devices in an absence of information regarding the task of acquiring data and/or the task requestor, and acquiring task result data corresponding to a result of the task of acquiring data using the obtained subtask result data and information regarding the two or more discrete interface devices from which the subtask result data is obtained. In addition to the foregoing, other aspects are described in the claims, drawings, and text.03-28-2013
20130081027Acquiring, presenting and transmitting tasks and subtasks to interface devices - Computationally implemented methods and systems include acquiring one or more subtasks that correspond to portions of one or more tasks configured to be carried out by two or more discrete interface devices, presenting one or more representations corresponding to the one or more subtasks, wherein the one or more representations correspond to the one or more subtasks, and transmitting subtask data corresponding to one or more subtasks in response to selection of one of the one or more corresponding representations. In addition to the foregoing, other method aspects are described in the claims, drawings, and text.03-28-2013
20130081026PRECONFIGURED SHORT SCHEDULING REQUEST CYCLE - In communication systems, for example Long Term Evolution (LTE) of the 3rd Generation Partnership Project (3GPP), using two cycles (long and short) to configure uplink (UL) scheduling request (SR) resources, and various ways of configuring a short scheduling request cycle, may add flexibility for a network (NW) to configure scheduling request cycles, allowing a balance between latency and resource reservation. A method, according to certain embodiments, can include detecting that there is data activity associated with a user equipment and activating a short scheduling request cycle upon detecting the data activity.03-28-2013
20130081036PROVIDING AN ELECTRONIC MARKETPLACE TO FACILITATE HUMAN PERFORMANCE OF PROGRAMMATICALLY SUBMITTED TASKS - A method, system, and computer-readable medium is described for facilitating interactions between task requesters who have tasks that are available to be performed and task performers who are available to perform tasks. In some situations, the tasks to be performed are human performance tasks that use cognitive and other mental skills of human task performers, such as to employ judgment, perception and/or reasoning skills of the human task performers. In addition, in some situations the available tasks are submitted by human task requesters via application programs that programmatically invoke one or more application program interfaces of an electronic marketplace in order to request that the tasks be performed and to receive corresponding results of task performance in a programmatic manner, so that an ensemble of unrelated human agents can interact with the electronic marketplace to collectively perform a wide variety and large number of tasks.03-28-2013
20130081035Adaptively Determining Response Time Distribution of Transactional Workloads - An adaptive mechanism is provided that learns the response time characteristics of a workload by measuring the response times of end user transactions, classifies response times into buckets, and dynamically adjusts the response time distribution as response time characteristics of the workload change. The adaptive mechanism maintains the actual distribution across changes and, thus, helps the end user to understand changes of workload behavior that take place over a longer period of time. The mechanism is stable enough to suppress spikes and returns a constant view of workload behavior, which is required for long term, performance analysis and capacity planning. The mechanism distinguishes between an initial learning phase of establishing the distribution and one or multiple reaction periods. The reaction periods can be for example a fast reaction period for strong fluctuations of the workload behavior and a slow reaction period for small deviations.03-28-2013
20130081030Methods and devices for receiving and executing subtasks - Computationally implemented methods and systems include receiving subtask data including one or more subtasks that correspond to at least one portion of at least one task requested by a task requestor, wherein the one or more subtasks are configured to be carried out by two or more discrete interface devices, carrying out the one or more subtasks in an absence of information regarding the at least one task and/or the task requestor, and transmitting result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text.03-28-2013
20130081029Methods and devices for receiving and executing subtasks - Computationally implemented methods and systems include receiving subtask data including one or more subtasks that correspond to at least one portion of at least one task requested by a task requestor, wherein the one or more subtasks are configured to be carried out by two or more discrete interface devices, carrying out the one or more subtasks in an absence of information regarding the at least one task and/or the task requestor, and transmitting result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text.03-28-2013
20130081032ACQUIRING AND TRANSMITTING EVENT RELATED TASKS AND SUBTASKS TO INTERFACE DEVICES - Computationally implemented methods and systems include detecting an occurrence of an event, acquiring one or more subtasks configured to be carried out by two or more discrete interface devices, the subtasks corresponding to portions of one or more tasks of acquiring information related to the event, facilitating transmission of the one or more subtasks to the two or more discrete interface devices, and receiving data corresponding to a result of the one or more subtasks executed by two or more of the two or more discrete interface devices. In addition to the foregoing, other aspects are described in the claims, drawings, and text.03-28-2013
20130081031Receiving subtask representations, and obtaining and communicating subtask result data - Computationally implemented methods and systems include receiving one or more representations of one or more subtasks that correspond to at least one portion of at least one task of acquiring data requested by a task requestor, wherein the one or more subtasks are configured to be carried out by at least two discrete interface devices, obtaining subtask result data in an absence of information regarding the at least one task and/or the task requestor, and communicating the result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text.03-28-2013
20130081038MULTIPROCESSOR COMPUTING DEVICE - A computing device includes a first processor configured to operate at a first speed and consume a first amount of power and a second processor configured to operate at a second speed and consume a second amount of power. The first speed is greater than the second speed and the first amount of power is greater than the second amount of power. The computing device also includes a scheduler configured to assign processes to the first processor only if the processes utilize their entire timeslice.03-28-2013
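The scheduling rule above (fast core only for processes that consume their whole timeslice) can be stated in a few lines. The per-timeslice usage bookkeeping is a hypothetical stand-in for whatever the scheduler actually records:

```python
def choose_processor(recent_usage, timeslice):
    """Assign to the fast core only if the process used every full timeslice.

    recent_usage: CPU time consumed by the process in each of its recent
    timeslices (assumed bookkeeping kept by the scheduler).
    """
    if recent_usage and all(used >= timeslice for used in recent_usage):
        return "fast_core"        # compute-bound: worth the extra power
    return "slow_core"            # interactive/idle: the efficient core suffices

print(choose_processor([10, 10, 10], timeslice=10))   # fast_core
print(choose_processor([10, 3, 10], timeslice=10))    # slow_core
```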
20130081037PERFORMING COLLECTIVE OPERATIONS IN A DISTRIBUTED PROCESSING SYSTEM - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system including: determining by at least one task that a parent of the task has failed to send the task data through the tree topology; and determining whether to request the data from a grandparent of the task or a peer of the task in the same tier in the tree topology; and if the task requests the data from the grandparent, requesting the data and receiving the data from the grandparent of the task through the second networking topology; and if the task requests the data from a peer of the task in the same tier in the tree, requesting the data and receiving the data from a peer of the task through the second networking topology.03-28-2013
20130081034METHOD FOR DETERMINING ASSIGNMENT OF LOADS OF DATA CENTER AND INFORMATION PROCESSING SYSTEM - A load management system for a data center determines assignment of task loads to information processing devices. The data center includes a plurality of servers cooled by heat radiation, in a room isolated from an outdoor space, that allows air to be taken into and discharged from the room. The plurality of processes are assigned to the plurality of servers in order from a process applied with the proportionality coefficient that is smallest among the maximum proportionality coefficients (Ai-max). The proportionality coefficient (Aij) indicates the ratio of temperature of air taken in the servers (j) arranged in the room to a load on the server (i) arranged in the room, and the server (i) is compared with the respective servers (j) for the proportionality coefficient (Aij) to obtain the maximum proportionality coefficients (Ai-max).03-28-2013
20080216080Method and system to alleviate denial-of-service conditions on a server - A method is presented for processing data in a multithreaded application to alleviate impaired or substandard performance conditions. Work items that are pending processing by the multithreaded application are placed into a data structure. The work items are processed by a plurality of threads within the multithreaded application in accordance with a first algorithm, e.g., first-in first-out (FIFO). A thread within the multithreaded application is configured apart from the plurality of threads such that it processes work items in accordance with a second algorithm that differs from the first algorithm, thereby avoiding the impairing condition. For example, the thread may process a pending work item only if it has a particular characteristic. The thread restricts its own processing of work items by intermittently evaluating workflow conditions for the plurality of threads; if the workflow conditions improve or are unimpaired, then the thread does not process any work items.09-04-2008
20130081033CONFIGURING INTERFACE DEVICES WITH RESPECT TO TASKS AND SUBTASKS - Computationally implemented methods and systems include configuring a device to acquire one or more subtasks configured to be carried out by at least two discrete interface devices, said one or more subtasks corresponding to portions of one or more tasks of acquiring data requested by a task requestor, facilitating execution of the received one or more subtasks, and controlling access to at least one feature of the device unrelated to the execution of the one or more subtasks, based on successful execution of the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text.03-28-2013
20130036422OPTIMIZED DATACENTER MANAGEMENT BY CENTRALIZED TASK EXECUTION THROUGH DEPENDENCY INVERSION - A Datacenter Management Service (DMS) is provided as a platform designed to automate datacenter management tasks that are performed across multiple technology silos and datacenter servers or collections of servers. The infrastructure to perform the automation is provided by integrating heterogeneous task providers and implementations into a set of standardized adapters through dependency inversion. A platform automating datacenter management tasks may include three main components: integration of adapters into an interface allowing a common interface for datacenter task execution, an execution platform that works against the adapters, and implementation of the adapters for a given type of datacenter management task.02-07-2013
20130212586SHARED RESOURCES IN A DOCKED MOBILE ENVIRONMENT - A first and second data handling systems provides for shared resources in a docked mobile environment. The first data handling system maintains a set of execution tasks within the first data handling system having a system dock interface to physically couple to the second data handling system. The first data handling system assigns a task to be executed by the second data handling system while the two systems are physically coupled.08-15-2013
20130036421CONSTRAINED RATE MONOTONIC ANALYSIS AND SCHEDULING - A method for scheduling schedulable entities onto an execution timeline for a processing entity in a constrained environment includes determining available capacity on the execution timeline for the processing entity based on constraints on the execution timeline over a plurality of time periods, wherein schedulable entities can only be scheduled onto the execution timeline during schedulable windows of time that are not precluded by constraints. The method further includes determining whether enough available capacity exists to schedule a schedulable entity with a budget at a rate. The method further includes when enough available capacity exists to schedule the schedulable entity with the budget at the rate, scheduling the schedulable entity onto the execution timeline for the processing entity during a schedulable window of time. The method further includes when the schedulable entity is scheduled onto the execution timeline, updating available capacity to reflect the capacity utilized by the schedulable entity.02-07-2013
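The admission test above (schedule an entity with a budget at a rate only if the non-precluded windows leave enough capacity) can be sketched as a simple capacity check; the window representation and the horizon parameter are assumptions for the example:

```python
def can_schedule(available_windows, budget, period, horizon):
    """Admission check: is there enough non-precluded capacity for the entity?

    available_windows: (start, end) spans on the execution timeline that are
    not precluded by constraints.  The entity needs `budget` units of time in
    every `period`, evaluated over `horizon` units of time.
    """
    demand = budget * (horizon // period)                  # total time required
    capacity = sum(end - start for start, end in available_windows)
    return capacity >= demand

# Example: 50 ms of schedulable windows in a 100 ms horizon, entity needs
# 5 ms every 50 ms (two activations, 10 ms of demand) -> schedulable.
print(can_schedule([(0, 20), (60, 90)], budget=5, period=50, horizon=100))
```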
20130042246SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY - One or more techniques and/or systems are provided for assigning power management classifications to a process, transitioning a computing environment into a connected standby state based upon power management classifications assigned to processes, and transitioning the computing environment from the connected standby state to an execution state. That is, power management classifications, such as exempt, throttle, and/or suspend, may be assigned to processes based upon various factors, such as whether a process provides desired functionality and/or whether the process provides functionality relied upon for basic operation of the computing environment. In this way, the computing environment may be transitioned into a low power connected standby state that may continue executing desired functionality, while reducing power consumption by suspending and/or throttling other functionality. Because some functionality may still execute, the computing environment may transition into the execution state in a responsive manner to quickly provide a user with up-to-date information.02-14-2013
20130042247Starvationless Kernel-Aware Distributed Scheduling of Software Licenses - Methods, systems, and apparatuses for implementing shared-license management are provided. Shared-license management may be performed by receiving from a remote client a license request to run a process of a shared-license application; adding the process to a queue maintained for processes waiting for license grants; and reserving at least one license instance for the received license request, the at least one license instance comprising a quantum of CPU time for running the process.02-14-2013
20130042245Performing A Global Barrier Operation In A Parallel Computer - Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.02-14-2013
20130042248SYSTEM AND METHOD FOR SUPPORTING PARALLEL THREADS IN A MULTIPROCESSOR ENVIRONMENT - A method and system for supporting parallel processing of threads includes receiving a read request for a container from one or more read threads. Next, parallel read access to the container for each read thread may be controlled with a manager module that is coupled to the container. The manager module may receive a mutating request for the container from one or more mutating threads. While other read threads may be accessing the container, the manager module may provide single mutating access to the container in a series. The manager may monitor a reference count in the collection barrier for tracking a number of threads (whether read and/or mutating threads) which are accessing the collection barrier. The manager module may provide a mutex to a mutating thread for locking the container from any other mutating requests while permitting parallel read requests of the same container during the mutating operation.02-14-2013
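The manager described above behaves like a reader-tolerant, writer-serialized guard. A minimal Python analogue is sketched below, assuming that reads and mutations are passed in as callables; it is not the patented implementation:

```python
import threading

class ContainerManager:
    """Parallel reads; mutating operations serialized by a mutex."""

    def __init__(self, container):
        self._container = container
        self._mutate_lock = threading.Lock()   # one mutator at a time
        self._ref_lock = threading.Lock()
        self._ref_count = 0                    # threads currently inside

    def read(self, fn):
        with self._ref_lock:
            self._ref_count += 1
        try:
            return fn(self._container)         # many readers may run here
        finally:
            with self._ref_lock:
                self._ref_count -= 1

    def mutate(self, fn):
        with self._mutate_lock:                # mutators run one after another
            with self._ref_lock:
                self._ref_count += 1
            try:
                return fn(self._container)     # reads may still be in flight
            finally:
                with self._ref_lock:
                    self._ref_count -= 1
```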
20130139167Identification of Thread Progress Information - Embodiments relate to a method, apparatus and program product for capturing thread specific state timing information. The method includes associating a time field and a time valid field with a thread data structure and setting a current time state by determining a previous time state and updating it according to a previously identified method for setting time states. The method further includes determining the status of a time valid bit to see if it is set to valid or invalid. When the status is valid, it is made available for reporting.05-30-2013
20130139168Scaleable Status Tracking Of Multiple Assist Hardware Threads - A processor includes an initiating hardware thread, which initiates a first assist hardware thread to execute a first code segment. Next, the initiating hardware thread sets an assist thread executing indicator in response to initiating the first assist hardware thread. The set assist thread executing indicator indicates whether assist hardware threads are executing. A second assist hardware thread initiates and begins executing a second code segment. In turn, the initiating hardware thread detects a change in the assist thread executing indicator, which signifies that both the first assist hardware thread and the second assist hardware thread terminated. As such, the initiating hardware thread evaluates assist hardware thread results in response to both of the assist hardware threads terminating.05-30-2013
20100043002WORKFLOW HANDLING APPARATUS, WORKFLOW HANDLING METHOD AND IMAGE FORMING APPARATUS - A workflow handling apparatus includes an activity storage unit which stores various activities forming a workflow, a workflow configuration storage unit which stores information about an existing workflow including each of the activities, an information type storage unit which stores data on an information type used in the existing workflow, a request storage unit which stores a new workflow created on the basis of a processing request to the workflow handling apparatus, the new workflow being connected with a processing corresponding to the various activities, an information type determination unit which determines an information type used in the new workflow, a determination unit which determines a degree of similarity between the information type used in the new workflow and the information type used in the existing workflow, and a workflow extraction unit which extracts an existing workflow having the degree of similarity equal to or greater than a predetermined value.02-18-2010
20100043001METHOD FOR CREATING AN OPTIMIZED FLOWCHART FOR A TIME-CONTROLLED DISTRIBUTION COMPUTER SYSTEM - A method is described and presented for creation of an optimized schedule (P) for execution of a functionality by means of a time-controlled distributed computer system, in which the distributed computer system and the functionality have a set of (especially structural and functional) elements (e…)02-18-2010
20100043000High Accuracy Timer - Technologies for a high-accuracy timer in a tasked-based, multi-processor computing system without using dedicated hardware timer resources.02-18-2010
20090158287DYNAMIC CRITICAL PATH UPDATE FACILITY - A method is presented for dynamically selecting and updating a critical execution path. The method may include receiving a network of jobs for execution. One or more critical jobs may be included in the network of jobs. A job causing a delay in the execution of the network of jobs may be detected, where the job precedes the critical job. A critical path in the network of jobs may then be determined as a function of the job causing a delay. Determination of the critical path may be further based on a slack time associated with jobs in the network that have planned execution times preceding a planned execution time for the critical job.06-18-2009
20090158286FACILITY FOR SCHEDULING THE EXECUTION OF JOBS BASED ON LOGIC PREDICATES - A solution for scheduling execution of jobs in a data processing system is disclosed. One method for implementing such a solution may start by providing a scheduling structure for scheduling the execution of jobs. Such a scheduling structure may include a workflow plan defining a flow of execution for planned jobs and/or a workflow model defining static policies for execution of modeled jobs. A set of rules for updating the scheduling structure is provided. The method may continue by updating the scheduling structure according to the rules, such as by adding or removing jobs for rules evaluated to be true. The execution of the jobs may then be scheduled according to the updated scheduling structure. A corresponding system and computer program product are also disclosed.06-18-2009
20090158284SYSTEM AND METHOD OF PROCESSING SENDER REQUESTS FOR REMOTE REPLICATION - A system and a method of processing sender requests for remote replication are applied in local system having a plurality of network block devices (NBD). A fixed number of sender threads are created in local system to form sender thread pool. All NBDs receiving write request for corresponding remote mirror volume are serially connected to be circular linked list. A pointer is set to sequentially record latest processed NBD in circular linked list, the sender threads in the sender thread pool are allocated to actively search NBD to be processed pointed by the pointer according to a sequence in circular linked list, and processing of NBD pointed by the pointer is locked by using the sender thread, hence processing the sender request of NBD. Each time when the sender request is finished, the pointer is sequentially moved to next NBD and the sender request of corresponding NBD is performed.06-18-2009
20090158283DECOUPLING STATIC PROGRAM DATA AND EXECUTION DATA - Persisting execution state of a continuation based runtime program. The continuation based runtime program includes static program data defining activities executed by the program. One or more of the activities are parent activities including sequences of child activities. The continuation based runtime program is loaded. A child activity to be executed is identified based on scheduling defined in a parent of the child activity in the continuation based runtime program. The child activity is sent to a continuation based runtime separate from one or more other activities in the continuation based runtime program. The child activity is executed at the continuation based runtime, creating an activity instance. Continuation state information is stored separate from the static program data by storing information about the activity instance separate from one or more other activities defined in the continuation based runtime program.06-18-2009
20100107167MULTI-CORE SOC SYNCHRONIZATION COMPONENT - The present invention discloses a multi-core SOC synchronization component, which comprises a key administration module, a thread schedule unit supporting data synchronization and thread administration, and an expansion unit serving to expand the memory capacity of the key administration module. The present invention can improve interconnect traffic and prevent interconnect blocking. The present invention can function as a standard interface for different components. Thus, the present invention can solve the synchronization problem and effectively accelerate product design.04-29-2010
20100107166SCHEDULER FOR PROCESSOR CORES AND METHODS THEREOF - A data processing device assigns tasks to processor cores in a more distributed fashion. In one embodiment, the data processing device can schedule tasks for execution amongst the processor cores in a pseudo-random fashion. In another embodiment, the processor core can schedule tasks for execution amongst the processor cores based on the relative amount of historical utilization of each processor core. In either case, the effects of bias temperature instability (BTI) resulting from task execution are distributed among the processor cores in a more equal fashion than if tasks are scheduled according to a fixed order. Accordingly, the useful lifetime of the processor unit can be extended.04-29-2010
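Both placement policies above are easy to sketch. The two helpers below are illustrative only; `historical_utilization` is assumed to be per-core accumulated busy time kept by the scheduler:

```python
import random

def pick_core_pseudo_random(num_cores):
    """Pseudo-random placement spreads BTI (aging) stress across the cores."""
    return random.randrange(num_cores)

def pick_core_by_history(historical_utilization):
    """Alternative policy: place the task on the least-utilized core so far."""
    return min(range(len(historical_utilization)),
               key=lambda core: historical_utilization[core])

print(pick_core_by_history([120.0, 80.5, 200.2, 95.0]))   # prints 1
```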
20090119669User-specified configuration of scheduling services - Methods and systems for facilitating user-specified configuration of scheduling services in a manufacturing facility. In one embodiment, a workflow user interface is presented to allow a user to specify a workflow for providing a scheduling for a manufacturing facility. The workflow identifies a sequence of operations to be performed for providing the schedule. In addition, the user can specify properties for each operation in the workflow user interface. The workflow with the properties are then stored in a repository for subsequent execution in response to a workflow trigger.05-07-2009
20090119668DYNAMIC FEASIBILITY ANALYSIS FOR EVENT BASED PROGRAMMING - Embodiments of the present invention provide a method, system and computer program product for dynamic feasibility analysis of event-driven program code. In an embodiment of the invention, a method for a dynamic feasibility analysis of event-driven program code can be provided. The method can include loading multiple different tasks associated with different registered events in event-driven program code of an event-driven application, reducing overlapping ones of the registered events for different ones of the tasks to a single task of the overlapping events to produce a reduced set of tasks and corresponding events, ordering the corresponding events of the reduced set of tasks and grouping the corresponding events by time slice for the event-driven application, and reporting whether or not adding a new event to a particular time slice for the event-driven application results in a depth of events in the particular time slice exceeding a capacity of the particular time slice rendering the event-driven application infeasible.05-07-2009
20090133022Multiprocessing apparatus, system and method - An apparatus to isolate a main memory in a multiprocessor computer is provided. The apparatus includes a master processor and a management device communicating with the master processor. One or more slave processors communicate with the master processor and the management device. A volatile memory also communicates with the management device and the main memory communicates with the volatile memory. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules that allow a reader to quickly ascertain the subject matter of the disclosure contained herein. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.05-21-2009
20130047162EFFICIENT CACHE REUSE THROUGH APPLICATION DETERMINED SCHEDULING - A method of determining a thread from a plurality of threads to execute a task in a multi-processor computer system. The plurality of threads is grouped into at least one subset associated with a cache memory of the computer system. The task has a type determined by a set of instructions. The method obtains an execution history of the subset of plurality of threads and determines a weighting for each of the set of instructions and the set of data, the weightings depending on the type of the task. A suitability of the subset of the threads to execute the task based on the execution history and the determined weightings, is then determined. Subject to the determined suitability of the subset of threads, the method determining a thread from the subset of threads to execute the task using content of the cache memory associated with the subset of threads.02-21-2013
20120167107Power Managed Lock Optimization - In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor/timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock and execute the wait for event instruction if the lock is not free, after which the thread may block until the lock is free.06-28-2012
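The lock-acquisition loop above can be approximated in user space with a timed retry loop; the wakeup interval and retry count below are assumed parameters, and a timed sleep stands in for the wait-for-event instruction and timer event:

```python
import threading
import time

def acquire_power_managed(lock, wakeup_interval=0.001, max_wakeups=8):
    """Spin-with-sleep acquisition; block outright after max_wakeups tries."""
    for _ in range(max_wakeups):
        if lock.acquire(blocking=False):
            return True                      # lock was free on this wakeup
        time.sleep(wakeup_interval)          # stand-in for wait-for-event + timer
    lock.acquire()                           # give up polling and block until free
    return True

lock = threading.Lock()
acquire_power_managed(lock)
lock.release()
```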
20120167104SYSTEM AND METHOD FOR EXTENDING LEGACY APPLICATIONS WITH UNDO/REDO FUNCTIONALITY - In a system and method for recalling a state in an application, a processor may store in a memory data representing a first set of previously executed commands, the first set representing a current application state, and, for recalling a previously extant application state different than the current application state, the processor may modify the data to represent a second set of commands and may execute in sequence the second set of commands.06-28-2012
20120167101SYSTEM AND METHOD FOR PROACTIVE TASK SCHEDULING - The described implementations relate to distributed computing. One implementation provides a system that can include an outlier detection component that is configured to identify an outlier task from a plurality of tasks based on runtimes of the plurality of tasks. The system can also include a cause evaluation component that is configured to evaluate a cause of the outlier task. For example, the cause of the outlier task can be an amount of data processed by the outlier task, contention for resources used to execute the outlier task, or a communication link with congested bandwidth that is used by the outlier task to input or output data. The system can also include one or more processing devices configured to execute one or more of the components.06-28-2012
20090044193ENHANCED STAGEDEVENT-DRIVEN ARCHITECTURE - The present invention is an enhanced staged event-driven architecture (SEDA) stage. The enhanced SEDA stage can include an event queue configured to enqueue a plurality of events, an event handler programmed to process events in the event queue, and a thread pool coupled to the event handler. A resource manager further can be coupled to the thread pool and the event queue. Moreover, the resource manager can be programmed to allocate additional threads to the thread pool where a number of events enqueued in the event queue exceeds a threshold value and where all threads in the thread pool are busy.02-12-2009
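A compact sketch of such a stage follows. It simplifies the abstract by growing the pool on queue depth alone (omitting the all-threads-busy check), and the class and parameter names are assumptions for the example:

```python
import queue
import threading

class SedaStage:
    """Event queue + handler + thread pool; grows the pool under load."""

    def __init__(self, handler, initial_threads=2, queue_threshold=100,
                 max_threads=16):
        self.events = queue.Queue()
        self.handler = handler
        self.queue_threshold = queue_threshold
        self.max_threads = max_threads
        self.threads = []
        for _ in range(initial_threads):
            self._add_thread()

    def _add_thread(self):
        t = threading.Thread(target=self._worker, daemon=True)
        t.start()
        self.threads.append(t)

    def _worker(self):
        while True:
            self.handler(self.events.get())      # process events from the queue

    def enqueue(self, event):
        self.events.put(event)
        # Resource manager: allocate an additional thread when the backlog
        # exceeds the threshold and the pool has not reached its limit.
        if (self.events.qsize() > self.queue_threshold
                and len(self.threads) < self.max_threads):
            self._add_thread()
```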
20090044192OBJECT ORIENTED BASED, BUSINESS CLASS METHODOLOGY FOR GENERATING QUASI-STATIC WEB PAGES AT PERIODIC INTERVALS - A method for providing a requestor with access to dynamic data via quasi-static data requests, comprising the steps of defining a web page, the web page including at least one dynamic element, creating an executable digital code to be run on a computer and invoked at defined intervals by a scheduler component, the executable code effective to create and store a quasi-static copy of the defined web page, creating the scheduler component capable of invoking the executable code at predefined intervals, loading the executable code and the scheduler component onto a platform in connectivity with a web server and with one another, invoking execution of the scheduler component, and retrieving and returning the static copy of the defined web page in response to requests for the defined web page.02-12-2009
20090044190URGENCY AND TIME WINDOW MANIPULATION TO ACCOMMODATE UNPREDICTABLE MEMORY OPERATIONS - The variable latency associated with flash memory due to background data integrity operations is managed in order to allow the flash memory to be used in isochronous systems. A system processor is notified regularly of the nature and urgency of requests for time to ensure data integrity. Minimal interruptions of system processing are achieved and operation is ensured in the event of a power interruption.02-12-2009
20090044189PARALLELISM-AWARE MEMORY REQUEST SCHEDULING IN SHARED MEMORY CONTROLLERS - Parallelism-aware scheduling of memory requests of threads in shared memory controllers. Parallel scheduling is achieved by prioritizing threads that already have requests being serviced in the memory banks. A first algorithm prioritizes requests of the last-scheduled thread that is currently being serviced. This is accomplished by tracking the thread that generated the last-scheduled request (if the request is still being serviced), and then scheduling another request from the same thread if there is an outstanding ready request from the same thread. A second algorithm prioritizes the requests of all threads that are currently being serviced. This is accomplished by tracking threads that have at least one request currently being serviced in the banks, and assigning the highest priority to these threads in the scheduling decisions. If there are no outstanding requests from any thread having requests that are being serviced, the algorithm defaults back to a baseline scheduling algorithm.02-12-2009
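The two prioritization rules above can be expressed as a simple selection function over pending requests; the tuple layout and the fallback to the oldest request are assumptions made for the sketch:

```python
def pick_next_request(pending, last_thread, in_service_threads):
    """Pending requests are (thread_id, address) tuples, oldest first.

    Rule 1: prefer another request from the thread whose request was scheduled
    last and is still being serviced.  Rule 2: otherwise prefer any thread that
    already has a request in the banks.  Fall back to the oldest request.
    """
    for req in pending:
        if req[0] == last_thread:
            return req                       # keep that thread's bank parallelism
    for req in pending:
        if req[0] in in_service_threads:
            return req                       # some thread already being serviced
    return pending[0] if pending else None   # baseline: first-come first-served
```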
20110004881LOOK-AHEAD TASK MANAGEMENT - A method comprising receiving tasks for execution on at least one processor, and processing at least one task within one processor. To decrease the turn-around time of task processing, the method comprises, in parallel to processing the at least one task, verifying readiness of at least one next task assuming the currently processed task is finished, preparing a ready-structure for the at least one task verified as ready, and starting the at least one task verified as ready using the ready-structure after the currently processed task is finished.01-06-2011
20110004879METHOD AND APPARATUS FOR ELIMINATING WAIT FOR BOOT-UP - A method and apparatus for eliminating wait for boot-up of an apparatus while simultaneously preventing increased power usage. The method includes predicting a boot-up schedule according to a determined usage schedule, and scheduling boot-up time according to the predicted boot-up schedule, wherein said boot-up schedule eliminates wait for boot-up while simultaneously preventing increased power usage.01-06-2011
20110010719ELECTRONIC DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An electronic device includes a control information storing unit; a setting unit configured to request a user to specify, for each first program in the electronic device, a reception setting indicating whether to allow reception of a second program to be applied to the first program and to store the reception setting as control information for the first program in the control information storing unit, the second program being configured to insert a process in a process of the first program; a reception determining unit configured to determine whether to allow reception of the second program based on the control information for the first program; and a receiving unit configured to receive or refuse to receive the second program according to the determination result of the reception determining unit.01-13-2011
20090328049INFORMATION PROCESSING APPARATUS, GRANULARITY ADJUSTMENT METHOD AND PROGRAM - According to one embodiment, an information processing apparatus includes a plurality of execution modules and a scheduler which controls assignment of a plurality of basic modules to the plurality of execution modules. The scheduler includes assigning, when an available execution module which is not assigned any basic modules exists, a basic module which stands by for completion of execution of another basic module to the available execution module, measuring an execution time of processing of the basic module itself, measuring execution time of processing for assigning the basic module to the execution module, and performing granularity adjustment by linking two or more basic modules to be successively executed according to the restriction of an execution sequence so as to be assigned as one set to the execution module and redividing the linked two or more basic modules, based on the two measured execution times.12-31-2009
20090307699APPLICATION PROGRAMMING INTERFACES FOR DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS - A method and an apparatus for a parallel computing program calling APIs (application programming interfaces) in a host processor to perform a data processing task in parallel among compute units are described. The compute units are coupled to the host processor including central processing units (CPUs) and graphic processing units (GPUs). A program object corresponding to a source code for the data processing task is generated in a memory coupled to the host processor according to the API calls. Executable codes for the compute units are generated from the program object according to the API calls to be loaded for concurrent execution among the compute units to perform the data processing task.12-10-2009
20090307698INFORMATION HANDLING SYSTEM POWER MANAGEMENT DEVICE AND METHODS THEREOF - An information handling system includes a set of power and performance profiles. Based on which of the profiles has been selected, the information handling system selects a thread scheduling table for provision to an operating system. The thread scheduling table determines the sequence of processor cores at which program threads are scheduled for execution. In a power-savings mode, the corresponding thread scheduling table provides for threads to be concentrated at subset of available processor cores, increasing the frequency with which the information handling system can place unused processors in a reduced power state.12-10-2009
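A minimal sketch of profile-driven scheduling tables follows; the core counts and table contents are invented for the example:

```python
# Hypothetical scheduling tables: the power-savings profile concentrates work
# on a subset of cores so the remaining cores can enter a reduced power state.
SCHEDULING_TABLES = {
    "performance": [0, 1, 2, 3, 4, 5, 6, 7],
    "power_save":  [0, 1],
}

def next_core(profile, thread_index):
    """Pick the core for the given thread by cycling through the table."""
    table = SCHEDULING_TABLES[profile]
    return table[thread_index % len(table)]

print(next_core("power_save", 5))    # prints 1: threads stay on cores 0 and 1
```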
20090307697Method and Apparatus for Efficient Gathering of Information in a Multicore System - Methods and apparatus for gathering information from processors by using compressive sampling are presented. The invention can monitor multicore processor performance and schedule processor tasks to optimize processor performance. Using compressive sampling minimizes processor-memory bus usage by the performance monitoring function. An embodiment of the invention is a method of gathering information from a processor, the method comprising compressive sampling of information from at least one processor core. The compressive sampling produces compressed information. The processor comprises the at least one processor core, and the at least one processor core is operative to process data.12-10-2009
20090307696THREAD MANAGEMENT BASED ON DEVICE POWER STATE - Managing threads for executing on a computing device based on a power state of the computing device. A power priority value corresponding to each of the threads is compared to a threshold value associated with the power state. The threads having an assigned power priority value that violates the threshold value are suspended from executing, while the remaining threads are scheduled for execution. When the power state of the computing device changes, the threads are re-evaluated for suspension or execution. In an embodiment, the threads on a mobile computing device are managed to maintain the processor in a low power state to reduce power consumption.12-10-2009
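The suspend-or-run decision above is essentially a threshold comparison repeated on every power-state change. The sketch below assumes that a priority below the threshold counts as violating it:

```python
def partition_threads(threads, power_threshold):
    """threads: dict of thread_id -> power priority value.

    Threads whose priority falls below the current power state's threshold are
    suspended; the rest stay runnable.  Re-evaluate on every power-state change.
    """
    runnable = [tid for tid, prio in threads.items() if prio >= power_threshold]
    suspended = [tid for tid, prio in threads.items() if prio < power_threshold]
    return runnable, suspended

# Example: in a low power state (threshold 5) only high-priority threads run.
print(partition_threads({"ui": 9, "sync": 4, "indexer": 2}, power_threshold=5))
```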
20090113435INTEGRATED BACKUP WITH CALENDAR - A computer implemented method, apparatus, and computer program product for automatically scheduling execution of a process using information in a calendar. Entries in a set of electronic calendars associated with a set of users are analyzed to generate expected computer usage patterns for the set of users. A low usage time interval for a computer is identified using the expected computer usage patterns. The low usage time interval for the computer is a time interval when expected usage of the computer by the set of users does not exceed a threshold amount of usage. The process is automatically executed during the low usage time interval.04-30-2009
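The calendar-driven selection of a low-usage window can be sketched as a small histogram computation; the hour granularity and the entry format are assumptions for the example:

```python
from collections import Counter

def find_low_usage_hour(calendar_entries, usage_threshold=0):
    """calendar_entries: (start_hour, end_hour) meetings across the set of users.

    Build an expected-usage histogram per hour and return the first hour whose
    expected usage does not exceed the threshold (a candidate backup window).
    """
    usage = Counter()
    for start, end in calendar_entries:
        for hour in range(start, end):
            usage[hour] += 1
    for hour in range(24):
        if usage[hour] <= usage_threshold:
            return hour
    return None

# Example: meetings from 9-12 and 13-17; the first idle hour is midnight.
print(find_low_usage_hour([(9, 12), (13, 17)]))   # prints 0
```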
20120192193Executing An Application On A Parallel Computer - Methods, systems, and products are disclosed for executing an application on a parallel computer having a plurality of nodes. Executing an application on a parallel computer includes: booting up a first subset of a plurality of nodes in a serial processing mode; booting up a second subset of the plurality of nodes in a parallel processing mode; profiling, prior to application execution, an application to identify serial segments of the application, parallel segments of the application, and application data utilized by each of the serial segments and the parallel segments; and executing the application on the plurality of nodes, including migrating, in dependence upon the profile for the application upon encountering the parallel segments during execution, only specific portions of the application and the application data from the nodes booted up in the serial processing mode to the nodes booted up in the parallel processing mode.07-26-2012
20120192192EVENT PROCESSING - A method, a system and a computer program for parallel event processing in an event processing network (EPN) are disclosed. The EPN has at least one event processing agent (EPA). The method includes assigning an execution mode for the at least one EPA, the execution mode including a concurrent mode and a sequential mode. The execution mode for the at least one EPA is stored in the EPN metadata. The method also includes loading and initializing the EPN. The method further includes routing the event in the EPN and, when an EPA is encountered, depending on the execution mode of the encountered EPA, further processing of the event. Also disclosed are a system and a computer program for parallel event processing in an event processing network (EPN).07-26-2012
20120192191EXECUTION OF WORK UNITS IN A HETEROGENEOUS COMPUTING ENVIRONMENT - Work units are transparently offloaded from a main processor to offload processing systems for execution. For a particular work unit, a suitable offload processing system is selected to execute the work unit. This includes determining the requirements of the work unit, including, for instance, the hardware and software requirements; matching those requirements against a set of offload processing systems with an arbitrary set of available resources; and determining if a suitable offload processing system is available. If a suitable offload processing system is available, the work unit is scheduled to execute on that offload processing system with no changes to the work unit itself. Otherwise, the work unit may execute on the main processor or wait to be executed on an offload processing system.07-26-2012
20120192190Host Ethernet Adapter for Handling Both Endpoint and Network Node Communications - A host Ethernet adapter (HEA) and method of managing network communications is provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQE) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus, selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode, running a device driver on the multi-core processing system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system, selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads.07-26-2012
20120192189DYNAMIC TRANSFER OF SELECTED BUSINESS PROCESS INSTANCE STATE - Business processes that may be affected by events, conditions or circumstances that were unforeseen or undefined at modeling time (referred to as unforeseen events) are modeled and/or executed. Responsive to an indication of such an event during process execution, a transfer is performed from the process, in which selected data is stored and the process is terminated. The selected data may then be used by a target process. The target process may be, for instance, a new version of the same process, the same process or a different process. The target process may or may not have existed at the time the process was deployed.07-26-2012
20090094607PROCESSING REQUEST CONTROL DEVICE, RECORDING MEDIUM STORING PROGRAM, PROCESSING REQUEST CONTROL METHOD AND DATA SIGNAL - A processing request control device, which includes: a reception section that receives a processing request and information on a property of the processing request; a calculation section that calculates a processing time zone based on the processing request; a management section that manages the processing request and the processing time zone associated with each other; a processing implementation control section that controls to implement, based on the processing request, the processing from a processing start time; a specification section that, when a new processing request is received, specifies a processing request being managed whose processing time zone overlaps with a processing time zone of the newly received processing request; and a change section that changes at least one of the processing time zone of the specified processing request and that of the new processing request within a range based on the properties of the processing request.04-09-2009
20130074086PIPELINING PROTOCOLS IN MISALIGNED BUFFER CASES - Systems, methods and articles of manufacture are disclosed for effecting a desired collective operation on a parallel computing system that includes multiple compute nodes. The compute nodes may pipeline multiple collective operations to effect the desired collective operation. To select protocols suitable for the multiple collective operations, the compute nodes may also perform additional collective operations. The compute nodes may pipeline the multiple collective operations and/or the additional collective operations to effect the desired collective operation more efficiently.03-21-2013
20130074085SYSTEM AND METHOD FOR CONTROLLING CENTRAL PROCESSING UNIT POWER WITH GUARANTEED TRANSIENT DEADLINES - Methods, systems and devices that include a dynamic clock and voltage scaling (DCVS) solution configured to compute and enforce performance guarantees to ensure that a processor does not remain in a busy state (e.g., due to transient workloads) for more than a predetermined amount of time above that which is required for that processor to complete its pre-computed steady state workload. The DCVS may adjust the frequency and/or voltage of a processor based on a variable delay to ensure that the processing core only falls behind its steady state workload by, at most, a predefined maximum amount of work, irrespective of the operating frequency or voltage of the processor.03-21-2013
20130074084DYNAMIC OPERATING SYSTEM OPTIMIZATION IN PARALLEL COMPUTING - A method for dynamic optimization of thread assignments for application workloads in a simultaneous multi-threading (SMT) computing environment includes monitoring and periodically recording an operational status of different processor cores each supporting a number of threads of the thread pool of the SMT computing environment and also operational characteristics of different workloads of a computing application executing in the SMT computing environment. The method further can include identifying by way of the recorded operational characteristics a particular one of the workloads demonstrating a threshold level of activity. Finally, the method can include matching a recorded operational characteristic of the particular one of the workloads to a recorded status of a processor core best able amongst the different processor cores to host execution of one or more threads of the particular one of the workloads and directing the matched processor core to host execution of the particular one of the workloads.03-21-2013
20130074083SYSTEM AND METHOD FOR HANDLING STORAGE EVENTS IN A DISTRIBUTED DATA GRID - A system and method can handle storage events in a distributed data grid. The distributed data grid cluster includes a plurality of cluster nodes storing data partitions distributed throughout the cluster, each cluster node being responsible for a set of partitions. A service thread, executing on at least one of said cluster nodes in the distributed data grid, is responsible for handling one or more storage events. The service thread can use a worker thread to accomplish synchronous event handling without blocking the service thread.03-21-2013
20130074082CONTROL METHOD AND CONTROL DEVICE FOR RELEASING MEMORY - A control method and a control device for releasing memory are provided by the embodiments of the present invention. The present invention relates to the technical field of terminal device program management and is used for solving the problem of wasted memory resources in terminal devices. The present invention comprises: obtaining information on currently running programs in a terminal device; checking, according to the obtained information, which of the currently running programs are in an idle running state; and closing the idle programs and releasing the corresponding memory. According to the present invention, idle programs can be quickly found and closed, thereby saving memory and improving the user experience.03-21-2013
20130074081MULTI-THREADED QUEUING SYSTEM FOR PATTERN MATCHING - A multi-threaded processor may support efficient pattern matching techniques. An input data buffer may be provided, which may be shared between a fast path and a slow path. The processor may retire the data units in the input data buffer that are not required and thus avoid copying the data units used by the slow path. Data management and execution efficiency may be enhanced because multiple threads may be created to verify potential pattern matches in the input data stream. Also, threads that stall may exit the execution units, allowing other threads to run. Further, the problem of state explosion may be avoided by allowing the creation of parallel threads, using the fork instruction, in the slow path.03-21-2013
20130074080Timed Iterator - A computer implemented method for processing tasks is disclosed. The method includes invoking a timed iterator, during an event loop pass, without spawning a new thread, wherein the invoking includes passing a task list and a timeout constraint to the timed iterator. The method further includes executing one or more tasks in the task list for a period of time as specified in the timeout constraint, and relinquishing program control to a caller after the period of time.03-21-2013
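The timed iterator described above runs as many tasks as fit inside a timeout on the caller's own thread and then returns control. Here is a small sketch under those assumptions; the helper name `timed_iterate` and the list-of-callables task list are invented for illustration.

```python
import time

def timed_iterate(tasks, timeout_seconds):
    """Run callables from `tasks` on the current thread until the timeout elapses.

    Returns the unfinished remainder so a later event-loop pass can resume;
    no new thread is spawned.
    """
    deadline = time.monotonic() + timeout_seconds
    remaining = list(tasks)
    while remaining and time.monotonic() < deadline:
        task = remaining.pop(0)
        task()
    return remaining   # program control goes back to the caller here

# usage: leftovers = timed_iterate([lambda: print("a"), lambda: print("b")], 0.010)
```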
20130061231CONFIGURABLE COMPUTING ARCHITECTURE - A configurable computing system for parallel processing of software applications includes an environment abstraction layer (EAL) for abstracting low-level functions to the software applications; a space layer including a distributed data structure; and a kernel layer including a job scheduler for executing parallel processing programs constructing the software applications according to a configurable mode.03-07-2013
20130061230SYSTEMS AND METHODS FOR GENERATING REFERENCE RESULTS USING PARALLEL-PROCESSING COMPUTER SYSTEM - A method for debugging an application includes obtaining first and second fusible operation requests; if there is a break point between the first and the second operation request, generating a first set of compute kernels including programs corresponding to the first operation request, but not to the second operation request; and generating a second set of compute kernels including programs corresponding to the second operation request, but not to the first operation request; if there is no break point between the first and the second operation request, generating a third set of compute kernels which include programs corresponding to a merge of the first and second operation requests; and arranging for execution of either the first and second, or the third set of compute kernels, further including debugging the first or second set of compute kernels when there is a break point set between the first and second operation requests.03-07-2013
20110067029THREAD SHIFT: ALLOCATING THREADS TO CORES - Techniques are generally described for allocating a thread to heterogeneous processor cores. Example techniques may include monitoring real time computing data related to the heterogeneous processor cores processing the thread, allocating the thread to the heterogeneous processor cores based, at least in part, on the real time computing data, and/or executing the thread by the respective allocated heterogeneous processor core.03-17-2011
20090300630WAITING BASED ON A TASK GROUP - A method includes creating a first task group. A plurality of task object representations are added to the first task group. Each representation corresponds to one task object in a first plurality of task objects. A wait operation is performed on the first task group that waits for at least one of the task objects in the first plurality of task objects to complete.12-03-2009
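Waiting on a task group, as in the entry above, can be approximated with standard-library futures: collect the task representations into a group and wait for at least one of them to complete. This is only an analogy using Python's `concurrent.futures`, not the patented API.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# A task group is modelled here as a plain list of futures; the wait call
# returns as soon as at least one task object in the group completes.
with ThreadPoolExecutor() as pool:
    group = [pool.submit(pow, 2, n) for n in range(4)]   # add task representations
    done, pending = wait(group, return_when=FIRST_COMPLETED)
    print(f"{len(done)} task(s) finished, {len(pending)} still running")
```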
20090288087SCHEDULING COLLECTIONS IN A SCHEDULER - A scheduler in a process of a computer system includes a respective scheduling collection for each scheduling node in the scheduler. The scheduling collections are mapped into at least a partial search order based on one or more execution metrics. When a processing resource in a scheduling node becomes available, the processing resource first attempts to locate a task to execute in a scheduling collection corresponding to the scheduling node before searching other scheduling collections in an order specified by the search order.11-19-2009
20090271791SYSTEM AND METHOD FOR PERFORMING TIME-FLEXIBLE CALENDRIC STORAGE OPERATIONS - A system and method are provided for creating a non-standard calendar that may have customized attributes, such as number of days in a month, first day of a month, number of months in a year, first month of a year, number of years, or other customized attributes. Such non-standard calendars may be similar to non-standard calendars used by companies, enterprises or other organizations, such as a fiscal calendar, academic calendar, or other calendar. A storage management system manager may have a database of storage policies that include preferences and frequencies for performing storage operations, and associations with a non-standard calendar. The storage manager can initiate storage operations based on the storage policy using data that may be identified according to selection criteria, and determine a time to perform the storage operation according to a non-standard calendar.10-29-2009
20090083743System method and apparatus for binding device threads to device functions - A system apparatus and method for supporting one or more functions in an IO virtualization environment. One or more threads are dynamically associated with, and executing on behalf of, one or more functions in a device.03-26-2009
20120227050CHANGING A SCHEDULER IN A VIRTUAL MACHINE MONITOR - Machine-readable media, methods, and apparatus are described to change a first scheduler in a virtual machine monitor. In some embodiments, a second scheduler is loaded in the virtual machine monitor while the virtual machine monitor is running, and is then activated to handle a scheduling request for a scheduling process in place of the first scheduler while the virtual machine monitor is running.09-06-2012
20120227048FRAMEWORK FOR SCHEDULING MULTICORE PROCESSORS - A method for a framework for scheduling tasks in a multi-core processor or multiprocessor system is provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.09-06-2012
20120227047WORKFLOW VALIDATION AND EXECUTION - An apparatus, a computer program product and a computer-implemented method performed by a computerized device, comprising: receiving a description of a workflow, the workflow comprising a plurality of blocks, wherein each block comprises one or more instructions, the plurality of blocks comprising at least a first block and a second block, wherein the first block is adapted to output information and the second block is adapted to receive the information, and wherein at least one of the plurality of blocks is associated with a ratio between a number of records input into the block and a number of records output by the block; and validating, using the ratio, that the workflow can operate properly, wherein during execution each of the first block and the second block can keep an internal state and request to again receive data previously received as input.09-06-2012
20120117570INFORMATION PROCESSING APPARATUS, WORKFLOW MANAGEMENT SYSTEM, AND WORKFLOW EXECUTION METHOD - An information processing apparatus sequentially executing one or more processes of a workflow on an input document includes: a workflow-information storage unit storing workflow information; a result storage unit storing a process result; a workflow control unit receiving workflow identification information for identifying the workflow and acquiring workflow information from the workflow-information storage unit on the basis of the workflow identification information; and a result acquiring unit acquiring the process result from the result storage unit based on the result identification information when the workflow information acquired by the workflow control unit includes the result identification information. The workflow control unit acquires the process result from the result acquiring unit and transmits the process result to an apparatus that executes a process subsequent to a process corresponding to the process result in the workflow in order to execute the workflow from a process in the middle of the workflow.05-10-2012
20120117569TASK AUTOMATION FOR UNFORMATTED TASKS DETERMINED BY USER INTERFACE PRESENTATION FORMATS - Methods and systems are provided for web page task automation. In one embodiment, the method comprises the following steps: i) decomposing the high-level task into a sequence of anthropomimetic subroutines, ii) decomposing each subroutine into a series of anthropomimetic actions or steps, stored for example as unit shares of work, iii) generating computer code to interact with the content of the webpage for each unit share of work, and iv) executing the generated computer code with a web interface module and transmitting the results of the execution, steps iii) and iv) being repeated until all steps of a subroutine have been executed and the sequence of subroutines for a logical task has been completed.05-10-2012
20130067483LOCALITY MAPPING IN A DISTRIBUTED PROCESSING SYSTEM - Topology mapping in a distributed processing system that includes a plurality of compute nodes, including: initiating a message passing operation; including in a message generated by the message passing operation, topological information for the sending task; mapping the topological information for the sending task; determining whether the sending task and the receiving task reside on the same topological unit; if the sending task and the receiving task reside on the same topological unit, using an optimal local network pattern for subsequent message passing operations between the sending task and the receiving task; otherwise, using a data communications network between the topological unit of the sending task and the topological unit of the receiving task for subsequent message passing operations between the sending task and the receiving task.03-14-2013
20130067482METHOD FOR CONFIGURING AN IT SYSTEM, CORRESPONDING COMPUTER PROGRAM AND IT SYSTEM - A method designed to configure an IT system having at least one computing core for executing instruction threads, in which each computing core is capable of executing at least two instruction threads at a time in an interlaced manner, and an operating system, being executed on the IT system, capable of providing instruction threads to each computing core. The method includes a step of configuring the operating system being executed in a mode in which it provides each computing core with a maximum of one instruction thread at a time.03-14-2013
20130067481AUDIO FEEDBACK FOR COMMAND LINE INTERFACE COMMANDS - Exemplary method, system, and computer program product embodiments for audio feedback for command line interface (CLI) commands in a computing environment are provided. In one embodiment, by way of example only, auditory notifications are generated for indicating a completion of CLI commands. The auditory notifications are configurable by user preferences. Additional system and computer program product embodiments are disclosed and provide related advantages.03-14-2013
20130067480PROGRAMMABLE WALL STATION FOR AUTOMATED WINDOW AND DOOR COVERINGS - A programmable wall station system for controlling automated coverings includes at least one automated covering adapted to receive command signals, and a computer which includes a processor and a computer connection port. The processor is programmed to receive location input, position input for the automated coverings, schedule input, and generate scheduled events based on any of the received input. A wall station includes a controller and a station connection port that is linkable to the computer connection port. The controller is programmed to receive scheduled events from the processor when the station connection port and computer connection port are linked to one another and generate command signals based on the scheduled events for receipt by the automated covering to control its operation.03-14-2013
20130067479Establishing A Group Of Endpoints In A Parallel Computer - A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.03-14-2013
20110023044SCHEDULING HIGHLY PARALLEL JOBS HAVING GLOBAL INTERDEPENDENCIES - A method of scheduling highly parallel jobs with global interdependencies is provided herein. The method includes the following steps: grouping input elements, each group being associated with an interdependency tag reflecting a level of interdependency between data associated with different input elements within a group; clustering the groups into collections of groups, wherein the clustered groups are associated with an interdependency tag reflecting a level of interdependency between groups, above a specified value; applying a conflict check to the collections of groups and to active jobs of a working set, to yield a conflict level between each collection of groups and each active job, by analyzing the interdependency tags of the collections of groups vis à vis interdependency tags associated with the active jobs; and adding collections of groups into the working set, wherein added collections of groups are associated with a conflict level below an acceptable conflict level.01-27-2011
20110023043EXECUTING MULTIPLE THREADS IN A PROCESSOR - Provided are a method, system, and program for executing multiple threads in a processor. Credits are set for a plurality of threads executed by the processor. The processor alternates among executing the threads having available credit. The processor decrements the credit for one of the threads in response to executing the thread and initiates an operation to reassign credits to the threads in response to depleting all the thread credits.01-27-2011
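The credit scheme above alternates among threads that still have credit, decrements a thread's credit each time it runs, and triggers a reassignment once all credits are depleted. A toy sketch of that loop follows; the function names and the refill amount are assumptions.

```python
def reassign_credits(credits, amount=3):
    """Hypothetical refill policy: give every thread the same credit again."""
    for tid in credits:
        credits[tid] = amount

def credit_round_robin(threads, credits):
    """Alternate among threads with available credit, decrementing as they run.

    `threads` maps a thread id to a callable that performs one slice of work;
    `credits` maps the same ids to integer credit counts.
    """
    while any(c > 0 for c in credits.values()):
        for tid, run_slice in threads.items():
            if credits[tid] > 0:
                run_slice()
                credits[tid] -= 1
    reassign_credits(credits)   # all credits depleted: start a reassignment

# usage: credit_round_robin({"t0": lambda: None, "t1": lambda: None}, {"t0": 2, "t1": 1})
```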
20110023040POWER-EFFICIENT INTERACTION BETWEEN MULTIPLE PROCESSORS - A technique for processing instructions in an electronic system is provided. In one embodiment, a processor of the electronic system may submit a unit of work to a queue accessible by a coprocessor, such as a graphics processing unit. The coprocessor may process work from the queue, and write a completion record into a memory accessible by the processor. The electronic system may be configured to switch between a polling mode and an interrupt mode based on progress made by the coprocessor in processing the work. In one embodiment, the processor may switch from an interrupt mode to a polling mode upon completion of a threshold amount of work by the coprocessor. Various additional methods, systems, and computer program products are also provided.01-27-2011
20110023039THREAD THROTTLING - Techniques for scheduling a thread running in a computer system are disclosed. Example computer systems may include but are not limited to a multiprocessor having first and second cores, an operating system, and a memory bank for storing data. The example methods may include but are not limited to measuring a temperature of the memory bank and determining whether the thread includes a request for data stored in the memory bank, if the temperature of the memory bank exceeds a predetermined temperature. The methods may further include but are not limited to slowing down the execution of the thread upon determining if the thread includes a request for data.01-27-2011
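The throttling decision above has two inputs: whether the thread requests data in the monitored memory bank, and whether that bank's temperature exceeds a threshold. A compact sketch follows; the threshold value, the delay, and the callable interface are invented for illustration.

```python
import time

TEMP_LIMIT_C = 85.0        # assumed "predetermined temperature"

def maybe_throttle(requested_bank, read_bank_temp, delay=0.01):
    """Slow a thread down if it targets a memory bank that is running hot.

    `requested_bank` names the bank the thread wants to access (or None);
    `read_bank_temp` is a callable returning that bank's temperature in C.
    """
    if requested_bank is None:
        return                                  # thread does not touch the hot bank
    if read_bank_temp(requested_bank) > TEMP_LIMIT_C:
        time.sleep(delay)                       # crude throttle: delay the thread
```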
20090260012Workload Scheduling - Computer-implemented methods, computer program products and systems for a scalable workload scheduling system to accommodate increasing workloads within a heterogeneous distributed computing environment. In one embodiment, a modified average consensus method is used to evenly distribute network traffic and jobs among a plurality of computers. The user establishes a virtual network comprising a logical topology of the computers. State information from each computer is propagated to the rest of the computers by the modified average consensus method, thereby enabling the embodiment to dispense with the need for a master server, by allowing the individual computers to themselves select jobs which optimally match a desired usage of their own resources to the resources required by the jobs.10-15-2009
20090235262EFFICIENT DETERMINISTIC MULTIPROCESSING - A hardware and/or software facility for controlling the order of operations performed by threads of a multithreaded application on a multiprocessing system is provided. The facility may serialize or selectively-serialize execution of the multithreaded application such that, given the same input to the multithreaded application, the multiprocessing system deterministically interleaves operations, thereby producing the same output each time the multithreaded application is executed. The facility divides the execution of the multithreaded application code into two or more quanta, each specifying a deterministic number of operations, and the facility specifies a deterministic order in which the threads execute the two or more quanta. The deterministic number of operations may be adapted to follow the critical path of the multithreaded application. Specified memory operations may be executed regardless of the deterministic order, such as those accessing provably local data. The facility may provide dynamic bug avoidance and sharing of identified bug information.09-17-2009
20090235260Enhanced Control of CPU Parking and Thread Rescheduling for Maximizing the Benefits of Low-Power State - A system may comprise a plurality of processing units and a scheduler configured to maintain a record for each respective processing unit. Each respective record may comprise entries which may indicate 1) how long the respective processing unit has been residing in an idle state, 2) a present power-state in which the respective processing unit resides, and 3) whether the respective processing unit is a designated default (bootstrap) processing unit. The scheduler may select one or more of the plurality of processing units according to their respective records, and assign impending instructions to be executed on the selected one or more processing units. Where additional processing units are required, the scheduler may also insert an instruction to trigger an inter-processor interrupt to transition one or more processing units out of idle-state. The scheduler may then assign some impending instructions to these one or more processing units.09-17-2009
20090150890STRAND-BASED COMPUTING HARDWARE AND DYNAMICALLY OPTIMIZING STRANDWARE FOR A HIGH PERFORMANCE MICROPROCESSOR SYSTEM - Strand-based computing hardware and dynamically optimizing strandware are included in a high performance microprocessor system. The system operates in real time automatically and unobservably to parallelize single-threaded software into a plurality of parallel strands for execution by cores implemented in a multi-core and/or multi-threaded microprocessor of the system. The microprocessor executes a native instruction set tailored for speculative multithreading. The strandware directs hardware of the microprocessor to collect dynamic profiling information while executing the single-threaded software. The strandware analyzes the profiling information for the parallelization, and uses binary translation and dynamic optimization to produce native instructions to store in a translation cache later accessed to execute the produced native instructions instead of some of the single-threaded software. The system is capable of parallelizing a plurality of single-threaded software applications (e.g. application software, device drivers, operating system routines or kernels, and hypervisors).06-11-2009
20130167152MULTI-CORE-BASED COMPUTING APPARATUS HAVING HIERARCHICAL SCHEDULER AND HIERARCHICAL SCHEDULING METHOD - A computing apparatus includes a global scheduler configured to schedule a job group on a first layer, and a local scheduler configured to schedule jobs belonging to the job group according to a set guide on a second layer. The computing apparatus also includes a load monitor configured to collect resource state information associated with states of physical resources and set a guide with reference to the collected resource state information and set policy.06-27-2013
20090007122AUTOMATIC RELEVANCE FILTERING - A computer-implemented method and an apparatus for use in a computing apparatus are disclosed. The method includes determining a context and a data requirement for a candidate action to be selected, the selection specifying an action in a workflow; and filtering the candidate actions for relevance in light of the context and the data requirement. The apparatus, in a first aspect, includes a program storage medium encoded with instructions that, when executed by a computing device, performs the method. In a second aspect, the apparatus includes a computing apparatus programmed to perform the method.01-01-2009
20090007121Method And Apparatus To Enable Runtime Processor Migration With Operating System Assistance - In a method for switching to a spare processor during runtime, a processing system determines that execution should be migrated off of an active processor. An operating system (OS) scheduler and at least one device are then paused, and the active processor is put into an idle state. State data from writable and substantial non-writable stores in the active processor is loaded into the spare processor. Interrupt routing table logic for the processing system is dynamically reprogrammed to direct external interrupts to the spare processor. The active processor may then be off-lined, and the device and OS scheduler may be unpaused or resumed. Threads may then be dispatched to the spare processor for execution. Other embodiments are described and claimed.01-01-2009
20090007119METHOD AND APPARATUS FOR SINGLE-STEPPING COHERENCE EVENTS IN A MULTIPROCESSOR SYSTEM UNDER SOFTWARE CONTROL - An apparatus and method are disclosed for single-stepping coherence events in a multiprocessor system under software control in order to monitor the behavior of a memory coherence mechanism. Single-stepping coherence events in a multiprocessor system is made possible by adding one or more step registers. By accessing these step registers, one or more coherence requests are processed by the multiprocessor system. The step registers determine if the snoop unit will operate by proceeding in a normal execution mode, or operate in a single-step mode.01-01-2009
20130167151JOB SCHEDULING BASED ON MAP STAGE AND REDUCE STAGE DURATION - A plurality of job profiles is received. Each job profile describes a job to be executed, and each job includes map tasks and reduce tasks. An execution duration for a map stage including the map tasks and an execution duration for a reduce stage including the reduce tasks of each job is estimated. The jobs are scheduled for execution based on the estimated execution duration of the map stage and the estimated execution duration of the reduce stage of each job.06-27-2013
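The abstract above does not say how the per-stage duration estimates drive the schedule, so the sketch below substitutes a classical heuristic for two-stage pipelines, Johnson's rule, purely to illustrate how map-stage and reduce-stage estimates can be combined into an execution order; it should not be read as the patented method.

```python
def johnson_order(jobs):
    """Order jobs for a two-stage (map then reduce) pipeline using Johnson's rule.

    `jobs` is a list of (name, map_duration, reduce_duration) tuples; the rule
    minimises makespan for a two-machine flow shop, which is one plausible way
    to exploit per-stage duration estimates.
    """
    front, back = [], []
    for name, map_d, reduce_d in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if map_d <= reduce_d:
            front.append(name)        # short map stage: run early
        else:
            back.insert(0, name)      # short reduce stage: run late
    return front + back

print(johnson_order([("j1", 4, 7), ("j2", 8, 2), ("j3", 3, 5)]))   # ['j3', 'j1', 'j2']
```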
20110145826MECHANISM FOR PARTITIONING PROGRAM TREES INTO ENVIRONMENTS - Partitioning continuation based runtime programs. Embodiments may include differentiating activities of a continuation based runtime program between public children activities and implementation children activities. The continuation based runtime program is partitioned into visibility spaces. The visibility spaces have boundaries based on implementation children activities. The continuation based runtime program is partially processed at a visibility space granularity.06-16-2011
20080295103DISTRIBUTED PROCESSING METHOD11-27-2008
20080295102COMPUTING SYSTEM, METHOD OF CONTROLLING THE SAME, AND SYSTEM MANAGEMENT UNIT11-27-2008
20080295101ELECTRONIC DOCUMENT MANAGER11-27-2008
20080295100SYSTEM AND METHOD FOR DIAGNOSING AND MANAGING INFORMATION TECHNOLOGY RESOURCES11-27-2008
20080295099Disk Drive for Handling Conflicting Deadlines and Methods Thereof11-27-2008
20090031315Scheduling Method and Scheduling Apparatus - Thread information is retained in a main memory. The thread information includes a bit string and last executed information. Each bit of the bit string is allocated to a thread, and the number and the value of the bit indicate the number of the thread and whether or not the thread is in an executable state, respectively. The last executed information is the number of a last executed thread. A processor rotates the bit string so that a bit indicating the last executed thread comes to the end of the bit string. It searches the rotated bit string for a bit corresponding to a thread in the executable state in succession from the top, and selects the number of the first obtained bit as the number of the next thread to be executed. Then, the thread information is updated by changing the value of the bit of this number to indicate not being executable, and setting the last executed information to the number of this bit. This operation is performed by using an atomic command.01-29-2009
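The selection step above is effectively a round-robin scan of a readiness bit string starting just after the last executed thread. The sketch below reproduces that scan arithmetically instead of physically rotating the bit string, and it omits the atomic update of the thread information; the names and the integer encoding of the bit string are assumptions.

```python
def pick_next_thread(ready_bits, last_executed, num_threads):
    """Pick the next runnable thread after `last_executed`, round-robin style.

    `ready_bits` is an integer bit string where bit i set means thread i is
    in the executable state. Scanning from last_executed + 1 onwards is
    equivalent to rotating the string and scanning from the top.
    """
    for offset in range(1, num_threads + 1):
        candidate = (last_executed + offset) % num_threads
        if ready_bits & (1 << candidate):
            return candidate
    return None                       # nothing is runnable

# usage: threads 0 and 3 runnable, thread 0 ran last -> thread 3 is chosen
print(pick_next_thread(0b1001, last_executed=0, num_threads=4))
```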
20100064287Scheduling control within a data processing system - A processor 03-11-2010
20090172679CONTROL APPARATUS, STORAGE SYSTEM, AND MEMORY CONTROLLING METHOD - In order to use a cache memory more efficiently and realize improved responsiveness in a storage system, there are provided a cache memory which stores the data read from the storage apparatus, an access monitoring unit which monitors a state of access from the upper apparatus to the data stored in the storage apparatus, a schedule information creating unit which creates schedule information that determines contents to be stored in the cache memory based on the access state, and a memory controlling unit which controls record-processing of the data from the storage apparatus to the cache memory and removal-processing of the data from the cache memory based on the schedule information.07-02-2009
20120131583ENHANCED BACKUP JOB SCHEDULING - Systems and methods of enhanced backup job scheduling are disclosed. An example method may include determining a number of jobs (n) in a backup set, determining a number of tape drives (m) in the backup device, and determining a number of concurrent disk agents (maxDA) configured for each tape drive. The method may also include defining a scheduling problem based on n, m, and maxDA. The method may also include solving the scheduling problem using an integer programming (IP) formulation to derive a bin-packing schedule that minimizes makespan (S) for the backup set.05-24-2012
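The entry above formulates the n-jobs/m-drives/maxDA problem as an integer program that minimizes makespan. As a much simpler stand-in that uses the same inputs, here is a greedy longest-processing-time assignment; it is illustrative only and will generally not match the IP-optimal schedule.

```python
def greedy_backup_schedule(job_sizes, m, maxDA):
    """Assign backup jobs to m tape drives, longest job first (LPT heuristic).

    `job_sizes` maps job names to expected data volumes; each drive is capped
    at maxDA concurrent disk agents. Jobs that do not fit are deferred.
    """
    load = [0.0] * m                    # running data volume per drive
    assigned = [[] for _ in range(m)]   # jobs per drive
    pending = []                        # jobs deferred to a later round
    for job, size in sorted(job_sizes.items(), key=lambda kv: -kv[1]):
        open_drives = [d for d in range(m) if len(assigned[d]) < maxDA]
        if not open_drives:
            pending.append(job)
            continue
        d = min(open_drives, key=lambda d: load[d])
        assigned[d].append(job)
        load[d] += size
    return assigned, pending

print(greedy_backup_schedule({"db": 90, "mail": 40, "home": 60}, m=2, maxDA=2))
```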
20110283285Real Time Mission Planning - The different advantageous embodiments provide a system comprising a number of computers, a graphical user interface, first program code stored on the computer, and second program code stored on the computer. The graphical user interface is executed by a computer in the number of computers. The computer is configured to run the first program code to define a mission using a number of mission elements. The computer is configured to run the second program code to generate instructions for a number of assets to execute the mission and monitor the number of assets during execution of the mission.11-17-2011
20080282248Electronic computing device capable of specifying execution time of task, and program therefor - When an execution time of a task is short, the execution time of the task can be reliably specified and erroneous calculation of an execution time due to other processing can be prevented. A task designated in advance and a task whose execution is initiated are collated with each other. When the tasks are compared with each other, the initiation time of the executing task is recorded at the initiation of the executing task and the difference between the termination time and the initiation time is recorded as an execution time when the executing task terminates.11-13-2008
20110197195THREAD MIGRATION TO IMPROVE POWER EFFICIENCY IN A PARALLEL PROCESSING ENVIRONMENT - A method and system to selectively move one or more of a plurality of threads which are executed in parallel by a plurality of processing cores. In one embodiment, a thread may be moved from executing in one of the plurality of processing cores to executing in another of the plurality of processing cores, the moving based on a performance characteristic associated with the plurality of threads. In another embodiment of the invention, a power state of the plurality of processing cores may be changed to improve a power efficiency associated with the executing of the multiple threads.08-11-2011
20100122257ELECTRONIC DEVICE AND ELECTRONIC DEVICE CONTROL METHOD - When a request is made for execution of a new application while other application is being executed or interrupted, and a judgment unit (05-13-2010
20080282250COMPONENT INTEGRATOR - Techniques allow for communication with and management of multiple external components. A component manager communicates with one or more component adapters. Each component adapter communicates with an external component and is able to call the methods, functions, procedures, and other operations of the external component. The component manager associates these external operations with local operations, such that an application may use local operation names to invoke the external operations. Furthermore, the component manager has component definitions and operation definitions that describe the component adapters and operations, including input and output parameters and the like. The component manager is able to receive a group of data including a local operation and a list of input and output parameters and determine from the foregoing information which external operation to call and which component adapter has access to the external operation.11-13-2008
20110302582TASK ASSIGNMENT ON HETEROGENEOUS THREE-DIMENSIONAL/STACKED MICROARCHITECTURES - A method of enhancing performance of a three-dimensional microarchitecture includes determining a computational demand for performing a task, selecting an optimization criteria for the task, identifying at least one computational resource of the microarchitecture configured to meet the computational demand for performing the task, and calculating an evaluation criteria for the at least one computational resource based on the computational demand for performing the task. The evaluation criteria defines an ability of the computational resource to meet the optimization criteria. The method also includes assigning the task to the computational resource based on the evaluation criteria of the computational resource in order to proactively avoid creating a hot spot on the three-dimensional microarchitecture.12-08-2011
20110302585Techniques for Providing Improved Affinity Scheduling in a Multiprocessor Computer System - Techniques for controlling a thread on a computerized system having multiple processors involve accessing state information of a blocked thread, and maintaining the state information of the blocked thread at current values when the state information indicates that less than a predetermined amount of time has elapsed since the blocked thread ran on the computerized system. Such techniques further involve setting the state information of the blocked thread to identify affinity for a particular processor of the multiple processors when the state information indicates that at least the predetermined amount of time has elapsed since the blocked thread ran on the computerized system. Such operation enables the system to place a cold blocked thread which shares data with another thread on the same processor of that other thread so that, when the blocked thread awakens and runs, that thread is closer to the shared data.12-08-2011
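The affinity decision above hinges on whether a blocked thread is still warm: if less than a predetermined time has elapsed since it last ran, its affinity is left alone; otherwise it is pointed at the processor of the thread it shares data with. A small sketch follows, with assumed names and a made-up timeout value.

```python
import time

AFFINITY_TIMEOUT = 0.5     # assumed "predetermined amount of time", in seconds

def on_unblock(thread_state, partner_cpu, now=None):
    """Decide where a thread that just unblocked should run.

    If the thread blocked recently, keep its recorded affinity (cache is still
    warm); otherwise point it at the CPU of the thread it shares data with.
    `thread_state` is a dict holding 'last_ran' and 'affinity' fields.
    """
    now = time.monotonic() if now is None else now
    if now - thread_state["last_ran"] < AFFINITY_TIMEOUT:
        return thread_state["affinity"]           # keep current affinity
    thread_state["affinity"] = partner_cpu        # cold: follow the shared data
    return partner_cpu
```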
20110302584SYNTHESIS OF CONCURRENT SCHEDULERS FOR MULTICORE ARCHITECTURES - Systems and methods provide a high-level language for generation of a scheduling specification based on a scheduling policy, and synthesis of scheduler based on the scheduling specification. The systems and methods can permit the use of more sophisticated scheduling strategies than those afforded by conventional systems, without requiring the programmer to write explicitly parallel code. In certain embodiments, synthesis of the scheduler includes implementation of at least one rule related to the scheduling specification through definition of one or more workset objects that are concurrent, a workset object of the one or more workset objects having an addition method, a first poll method, and a second poll method. Such poll methods extend the operability of sequential poll methods. The one or more worksets satisfy a condition for correctness that is less stringent than conventional conditions for correctness.12-08-2011
20110302583SYSTEMS AND METHODS FOR PROCESSING DATA - A system, method, and computer program product for processing data are disclosed. The system includes a data processing framework configured to receive a data processing task for processing, a plurality of database systems coupled to the data processing framework, wherein the database systems are configured to perform a data processing task, and a storage component in communication with the data processing framework and the plurality of database systems, configured to store information about each partition of the data processing task being processed by each database system and the data processing framework. The data processing task is configured to be partitioned into a plurality of partitions and each database system is configured to process a partition of the data processing task assigned for processing to that database system. Each database system is configured to perform processing of its assigned partition of the data processing task in parallel with another database system processing another partition of the data processing task assigned to that other database system. The data processing framework is configured to perform at least one partition of the data processing task.12-08-2011
20110289503EXTENSIBLE TASK SCHEDULER - A parallel execution runtime allows tasks to be executed concurrently in a runtime environment. The parallel execution runtime delegates the implementation of task queuing, dispatch, and thread management to one or more plug-in schedulers in a runtime environment of a computer system. The plug-in schedulers may be provided by user code or other suitable sources and include interfaces that operate in conjunction with the runtime. The runtime tracks the schedulers and maintains control of all aspects of the execution of tasks from user code including task initialization, task status, task waiting, task cancellation, task continuations, and task exception handling.11-24-2011
20110289505INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM - The function restriction information of a designated flow executor is acquired. The acquired function restriction information is analyzed. An operation screen that identifiably displays process contents executable by the flow executor in association with setting target functions to be set in the flow is displayed on the basis of the analyzed function restriction information. Process contents of a setting target function to be set in the flow are selected on the basis of an operation in the operation screen. The flow of the flow executor is generated by combining the functions of the selected process contents.11-24-2011
20110296422Switch-Aware Parallel File System - Embodiments of the invention relate to a switch-aware parallel file system. A computing cluster is partitioned into a plurality of computing cluster building blocks comprising a parallel file system. Each computing cluster building block comprises a file system client, a storage module, a building block metadata module, and a building block network switch. The building block metadata module tracks a storage location of data allocated by the storage module within the computing cluster building block. The computing cluster further comprises a file system metadata module that tracks which of the plurality of computing cluster building blocks data is allocated to within the parallel file system. The computing cluster further comprises a file system network switch to provide the parallel file system with access to each of the plurality of computing cluster building blocks and the file system metadata module. At least one additional computing cluster building block is added to the computing cluster if resource utilization of the computing cluster exceeds a pre-determined threshold.12-01-2011
20110296420METHOD AND SYSTEM FOR ANALYZING THE PERFORMANCE OF MULTI-THREADED APPLICATIONS - A method and system to provide an analysis model to determine the specific problem(s) of a multi-threaded application. In one embodiment of the invention, the multi-threaded application uses a plurality of threads for execution and each thread is assigned to a respective one of a plurality of states based on a current state of each thread. By doing so, the specific problem(s) of the multi-threaded application are determined based on the number of transitions among the plurality of states for each thread. In one embodiment of the invention, the analysis model uses worker thread transition counters or events to determine, for each parallel region or algorithm of the multi-threaded application, which problem has happened and how much it has affected the scalability of the parallel region or algorithm.12-01-2011
20110296426METHOD AND APPARATUS HAVING RESISTANCE TO FORCED TERMINATION ATTACK ON MONITORING PROGRAM FOR MONITORING A PREDETERMINED RESOURCE - Exemplary embodiments include a method and system having resistance to a forced termination attack on a monitoring program for monitoring a predetermined resource. Aspects of the exemplary embodiment include a device that executes a predetermined process including a monitoring program that monitors a predetermined resource, wherein the predetermined process is a process for which the predetermined resource becomes unavailable in response to termination of the predetermined process; a program starting unit for starting the monitoring program in response to an execution of the predetermined process; and a terminator for terminating the predetermined process in the case where the monitoring program is forcibly terminated from the outside.12-01-2011
20110296424Synthesis of Memory Barriers - A framework is provided for automatic inference of memory fences in concurrent programs. A method is provided for generating a set of ordering constraints that prevent executions of a program violating a specification. One or more incoming avoidable transitions are identified for a state and one or more ordering constraints are refined for the state. The set of ordering constraints are generated by taking a conjunction of ordering constraints for all states that violate the specification. One or more fence locations can optionally be selected based on the generated set of ordering constraints.12-01-2011
20110296423FRAMEWORK FOR SCHEDULING MULTICORE PROCESSORS - A method, system, and computer usable program product for a framework for scheduling tasks in a multi-core processor or multiprocessor system are provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.12-01-2011
20110296421METHOD AND APPARATUS FOR EFFICIENT INTER-THREAD SYNCHRONIZATION FOR HELPER THREADS - A monitor bit per hardware thread in a memory location may be allocated, in a multiprocessing computer system having a plurality of hardware threads, the plurality of hardware threads sharing the memory location, and each of the allocated monitor bits corresponding to one of the plurality of hardware threads. A condition bit may be allocated for each of the plurality of hardware threads, the condition bit being allocated in each context of the plurality of hardware threads. In response to detecting the memory location being accessed, it is determined whether a monitor bit corresponding to a hardware thread in the memory location is set. In response to determining that the monitor bit corresponding to a hardware thread is set in the memory location, a condition bit corresponding to a thread accessing the memory location is set in the hardware thread's context.12-01-2011
20110173620Execution Context Control - A system and method for controlling the execution of notifications in a computer system with multiple notification contexts. A RunOn operator enables context hopping between notification contexts. Push-based stream operators optionally perform error checking to determine if notifications combined into a push-based stream share a common notification context. Context boxes group together notification creators and associate their notifications with a common scheduler and notification context. Operators employ a composition architecture, in which they receive one or more push-based streams and produce a transformed push-based stream that may be further operated upon. Components may be used in combinations to implement various policies, including a strict policy in which all notifications are scheduled in a common execution context, a permissive policy that provides programming flexibility, and a hybrid policy that combines flexibility with error checking.07-14-2011
20100088704META-SCHEDULER WITH META-CONTEXTS - A process in a computer system creates and uses a meta-scheduler with meta-contexts that execute on meta-virtual processors. The meta-scheduler includes a set of schedulers with scheduler-contexts that execute on virtual processors. The meta-scheduler schedules the scheduler-contexts on the meta-contexts and schedules the meta-contexts on the meta-virtual processors which execute on execution contexts associated with hardware threads.04-08-2010
20100138837ENERGY BASED TIME SCHEDULER FOR PARALLEL COMPUTING SYSTEM - A system, computer readable medium and method for reducing energy consumption in a parallel computing system that includes plural resources. The method includes receiving a computing job to be performed by the parallel computing system, determining a number of resources of the plural resources to be used for performing the computing job by searching a preset table stored in the parallel computing system, wherein the preset table is populated prior to determining the number of resources, and distributing the computing job to the determined number of resources.06-03-2010
20110191780METHOD AND APPARATUS FOR DECOMPOSING I/O TASKS IN A RAID SYSTEM - A data access request to a file system is decomposed into a plurality of lower-level I/O tasks. A logical combination of physical storage components is represented as a hierarchical set of objects. A parent I/O task is generated from a first object in response to the data access request. A child I/O task is generated from a second object to implement a portion of the parent I/O task. The parent I/O task is suspended until the child I/O task completes. The child I/O task is executed in response to an occurrence of an event that a resource required by the child I/O task is available. The parent I/O task is resumed upon an event indicating completion of the child I/O task. Scheduling of any child I/O task is not conditional on execution of the parent I/O task, and a state diagram regulates the child I/O tasks.08-04-2011
20110191778COMPUTER PROGRAM, METHOD, AND APPARATUS FOR GROUPING TASKS INTO SERIES - In an apparatus for generating a series, a task discrimination unit identifies a task executed on a first device and tasks executed on a second device, on the basis of messages exchanged between those devices. A memory stores models defining caller-callee relationships between caller tasks on the first device and callee tasks on the second device. A series grouping unit produces a series of tasks from a callee-eligible sequence of tasks executed on the second device during a processing time of the identified task on the first device. The series grouping unit achieves this by selecting one of the models that defines the identified task on the first device as a caller task and extracting a portion of the callee-eligible sequence that matches at least in part with the callee tasks defined in the selected model while excluding therefrom the tasks that cannot be the callee tasks.08-04-2011
20110191777Method and Apparatus for Scheduling Data Backups - An apparatus and computer-executed method for scheduling data backups may include accessing a specification for a backup job. The specification may include an identification of a data source, a start time and a target storage device to which backup data should be written. A first history of past backup jobs that specify the data source, and a second history of past backup jobs that specify the target storage device, may be identified. Using the first history, an expected size of the backup data may be computed. Using the second history, an expected rate at which the backup data may be written to the target storage device may be computed. Using the expected size, the expected rate and the start time, an expected completion time for the backup job may be computed.08-04-2011
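The completion-time estimate above combines an expected size (from the data source's history) with an expected write rate (from the target device's history) and the start time. Here is a direct sketch of that arithmetic, with averaging as an assumed way of summarizing the histories.

```python
from datetime import datetime, timedelta
from statistics import mean

def expected_completion(start, source_history_sizes, device_history_rates):
    """Estimate when a backup job will finish.

    Expected size comes from past jobs on the same data source, expected write
    rate from past jobs on the same target device; completion is then
    start + size / rate.
    """
    size = mean(source_history_sizes)          # e.g. gigabytes
    rate = mean(device_history_rates)          # e.g. gigabytes per hour
    return start + timedelta(hours=size / rate)

print(expected_completion(datetime(2011, 2, 1, 22, 0), [120, 130, 125], [50, 45]))
```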
20110191776LOW OVERHEAD DYNAMIC THERMAL MANAGEMENT IN MANY-CORE CLUSTER ARCHITECTURE - A semiconductor chip includes a plurality of multi-core clusters each including a plurality of cores and a cluster controller unit. Each cluster controller unit is configured to control thread assignment within the multi-core cluster to which it belongs. The cluster controller unit monitors various parameters measured in the plurality of cores within the multi-core cluster to estimate the computational demand of each thread that runs in the cores. The cluster controller unit may reassign the threads within the multi-core cluster based on the estimated computational demand of the threads and transmit a signal to an upper-level software manager that controls the thread assignment across the semiconductor chip. When an acceptable solution to thread assignment cannot be achieved by shuffling of threads within the multi-core cluster, the cluster controller unit may also report inability to solve thread assignment to the upper-level software manager to request a system level solution.08-04-2011
20110191775ARRAY-BASED THREAD COUNTDOWN - The forking of thread operations. At runtime, a task is identified as being divided into multiple subtasks to be accomplished by multiple threads (i.e., forked threads). In order to be able to verify when the forked threads have completed their task, multiple counter memory locations are set up and updated as forked threads complete. The multiple counter memory locations are evaluated in the aggregate to determine whether all of the forked threads are completed. Once the forked threads are determined to be completed, a join operation may be performed. Rather than a single memory location, multiple memory locations are used to account for thread completion. This reduces risk of thread contention.08-04-2011
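Spreading the countdown over several counter slots, and testing completion as an aggregate over all slots, can be sketched as below; the per-slot locks and the class shape are assumptions, and a production version would add a blocking join.

```python
import threading

class ArrayCountdown:
    """Track completion of forked threads with one counter slot per group of threads.

    Spreading the countdown across an array of slots (instead of one shared
    counter) reduces contention; the join test aggregates all slots.
    """
    def __init__(self, counts_per_slot):
        self.slots = list(counts_per_slot)            # e.g. [3, 3, 2] for 8 subtasks
        self.locks = [threading.Lock() for _ in self.slots]

    def task_done(self, slot):
        with self.locks[slot]:                        # contention only within one slot
            self.slots[slot] -= 1

    def all_done(self):
        return all(remaining == 0 for remaining in self.slots)
```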
20100169889MULTI-CORE SYSTEM - A multi-core system includes: a first core that writes first data by execution of a first program, wherein the first core gives write completion notice after completion of the writing; a second core that refers to the written first data by execution of a second program; and a scheduler that instructs the second core to start the execution of the second program before the execution of the first program is completed when the scheduler is given the write completion notice from the first core by the execution of the first program.07-01-2010
20100169888VIRTUAL PROCESS COLLABORATION - Methods, apparatuses, and systems are presented for automating organization of multiple processes involving maintaining a uniform record of process threads using at least one server, each process thread comprising a representation of a collaborative process capable of involving a plurality of users, enabling at least one of the plurality of users to carry out a user action while interacting with one of a plurality of different types of application programs, and modifying at least one process thread in the uniform record of process threads in response to the user action carried out by the user. Modifying the at least one process thread may comprise generating the at least one process thread as a new process thread. Alternatively or in addition, modifying the at least one process thread may comprise modifying the at least one process thread as an existing process thread. At least one of the process threads may reflect user actions carried out by more than one of the plurality of users.07-01-2010
20110219378ITERATIVE DATA PARALLEL OPPORTUNISTIC WORK STEALING SCHEDULER - The scheduling of a group of work units across multiple computerized worker processes. A group of work units is defined and assigned to a first worker. The worker uses the definition of the group of work units to determine when processing is completed on the group of work units. Stealing workers may steal work from the first worker, and steal from the group of work initially assigned to the first worker, by altering the definition of the group of work units assigned to the first worker. The altered definition results in the first worker never completing a subset of the work units originally assigned to the first worker, thereby allowing the stealing worker to complete work on that subset of work units. The process may be performed recursively in that the stealing worker may have some of its work stolen in the same way.09-08-2011
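One way to picture stealing by altering the definition of the group is a group described by an index range: the owner consumes from the front, and a thief shrinks the owner's end index so the owner never reaches the stolen tail. This range-based reading is an assumption made for illustration, not the patent's own representation.

```python
import threading

class WorkGroup:
    """A group of work units defined by [next, end); a thief steals by
    shrinking the victim's end index, so the victim never processes that tail."""
    def __init__(self, items):
        self.items = items
        self.next = 0
        self.end = len(items)
        self.lock = threading.Lock()

    def take_one(self):                  # called by the owning worker
        with self.lock:
            if self.next >= self.end:
                return None              # owner sees its (possibly shrunken) group as done
            item = self.items[self.next]
            self.next += 1
            return item

    def steal_half(self):                # called by an idle worker
        with self.lock:
            remaining = self.end - self.next
            if remaining < 2:
                return []
            self.end -= remaining // 2   # alter the victim's group definition
            return self.items[self.end:self.end + remaining // 2]
```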
20110219377DYNAMIC THREAD POOL MANAGEMENT - Dynamically managing a thread pool associated with a plurality of sub-applications. A request for at least one of the sub-applications is received. A quantity of threads currently assigned to the at least one of the sub-applications is determined. The determined quantity of threads is compared to a predefined maximum thread threshold. A thread in the thread pool is assigned to handle the received request if the determined quantity of threads is not greater than the predefined maximum thread threshold. Embodiments enable control of the quantity of threads within the thread pool assigned to each of the sub-applications. Further embodiments manage the threads for the sub-applications based on latency of the sub-applications.09-08-2011
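The core of the entry above is a per-sub-application cap: count the threads currently serving each sub-application and refuse (or queue) a request that would exceed the predefined maximum. Below is a sketch with a raw thread standing in for a pooled one; the names and the cap value are invented.

```python
import threading

MAX_THREADS_PER_SUBAPP = 4     # assumed predefined maximum thread threshold

assigned = {}                  # sub-application name -> live thread count
lock = threading.Lock()

def try_dispatch(subapp, handler):
    """Hand the request to a thread unless the sub-application is at its cap."""
    with lock:
        if assigned.get(subapp, 0) >= MAX_THREADS_PER_SUBAPP:
            return False       # caller queues or rejects the request
        assigned[subapp] = assigned.get(subapp, 0) + 1

    def run():
        try:
            handler()
        finally:
            with lock:
                assigned[subapp] -= 1

    threading.Thread(target=run).start()
    return True
```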
20090276778CONTEXT SWITCHING IN A SCHEDULER - A scheduler in a process of a computer system detects a task with an associated execution context that has not been previously invoked by the scheduler. The scheduler executes the task on a processing resource without performing a context switch if the processing resource executed a previous task to completion. The scheduler stores the execution context originally associated with the task for later use.11-05-2009
20120233622PORTABLE DEVICE AND TASK PROCESSING METHOD AND APPARATUS THEREFOR - A portable device and a task processing method and apparatus for the portable device are provided. The method comprises the steps of: obtaining task requirement information of a user; determining, from a first system and a second system, an execution system for responding to a system task corresponding to the task requirement information based on a predetermined policy; and transmitting the task requirement information to the execution system such that the execution system can execute the system task based on the task requirement information. With the present invention, it is possible to automatically determine, based on the task requirement information, an execution system for executing a system task corresponding to the task requirement information, such that the user operation can be facilitated.09-13-2012
20120233620SELECTIVE CONSTANT COMPLEXITY DISMISSAL IN TASK SCHEDULING - A strictly increasing function is implemented to generate a plurality of unique creation stamps, each of the plurality of unique creation stamps increasing over time pursuant to the strictly increasing function. A new task to be placed with the plurality of tasks is labeled with a new unique creation stamp of the plurality of unique creation stamps. One of a list of dismissal rules holds a minimal valid creation (MVC) stamp, which is updated when a dismissal action for that dismissal rule is executed. The dismissal action acts to dismiss a selection of tasks over time due to continuous dispatch.09-13-2012
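One way to read the constant-complexity dismissal idea is as a lazy, stamp-based filter: dismissing never walks the task queue, it only advances a rule's MVC stamp, and stale tasks are skipped when they are dispatched. The sketch below makes that reading concrete; it is an assumption about the mechanism, not the patented method.

```python
import itertools

_stamp = itertools.count(1)      # strictly increasing source of unique creation stamps

class Rule:
    def __init__(self):
        self.mvc = 0             # minimal valid creation (MVC) stamp

    def dismiss_now(self):
        # O(1): every task created up to this point becomes invalid for this rule.
        self.mvc = next(_stamp)

class Task:
    def __init__(self, payload):
        self.stamp = next(_stamp)
        self.payload = payload

def dispatch(task, rule):
    if task.stamp < rule.mvc:    # lazily drop dismissed tasks at dispatch time
        return None
    return task.payload

rule = Rule()
old = Task("old work")
rule.dismiss_now()
new = Task("new work")
print(dispatch(old, rule), dispatch(new, rule))   # None new work
```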
20120233618INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including a selection unit configured to, on the basis of first identification information included in a processing instruction and corresponding to a service, and first association information in which the first identification information is associated with second identification information for identifying an application, select an application to perform the service corresponding to the processing instruction, and an execution unit configured to cause the selected application to perform a process in accordance with the processing instruction.09-13-2012
20090178044FAIR STATELESS MODEL CHECKING - Techniques for providing a fair stateless model checker are disclosed. In some aspects, a schedule is generated to allocate resources for threads of a multi-thread program in lieu of having an operating system allocate resources for the threads. The generated schedule is both fair and exhaustive. In an embodiment, a priority graph may be implemented to reschedule a thread when a different thread is determined not to be making progress. A model checker may then implement one of the generated schedules in the program in order to determine if a bug or a livelock occurs during the particular execution of the program. An output by the model checker may facilitate identifying bugs and/or livelocks, or authenticate a program as operating correctly.07-09-2009
20090320030METHOD FOR MANAGEMENT OF TIMEOUTS - A method of managing a multithreaded computer system comprises instantiating, in response to each transaction initiated by a first thread of a plurality of threads, a timer object including a scheduled expiration time and a set of timeout handling information for the transaction in storage local to the first thread; registering, in response to each passing of a fixed time interval, each timer object in the storage local to the first thread for which the scheduled expiration time is earlier than the fixed time interval added to a current time in a timer processing component by adding a pointer referencing the timer object to a data structure managed by the timer processing component; and managing each timer object corresponding to a transaction initiated by the first thread that is not registered in the timer processing component in the storage local to the first thread. The timer processing component regularly processes each timer object referenced by the data structure for which the scheduled expiration time value is not earlier than the current time in accordance with the set of timeout handling information of the timer object.12-24-2009
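A rough sketch of the split the abstract describes between thread-local timers and a shared timer-processing component, assuming a fixed tick interval; the module-level names and the TICK constant are hypothetical.

```python
import threading
import time

TICK = 1.0                         # assumed fixed time interval between registrations
_local = threading.local()         # timers stay in storage local to the creating thread
_registered = []                   # data structure managed by the timer processing component
_reg_lock = threading.Lock()

def start_timer(expires_at, handler):
    """Instantiate a timer object in the current thread's local storage."""
    if not hasattr(_local, "timers"):
        _local.timers = []
    _local.timers.append((expires_at, handler))

def register_near_expiry():
    """Run once per tick: hand off only timers that expire before the next tick."""
    now = time.monotonic()
    timers = getattr(_local, "timers", [])
    near = [t for t in timers if t[0] < now + TICK]
    _local.timers = [t for t in timers if t[0] >= now + TICK]
    with _reg_lock:
        _registered.extend(near)   # the shared component fires these when due
```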
20090150888EMBEDDED OPERATING SYSTEM OF SMART CARD AND THE METHOD FOR PROCESSING THE TASK - An embedded operating system for a smart card and a method for processing tasks are disclosed. The method includes: A, initializing the system; B, creating at least one task according to the function set by the system; C, scheduling the pre-execution task according to the priority of the system; D, executing the task and returning the executing result through a data transmission channel. The invention enhances support for the data channels of the hardware platform: it not only supports the single data channel, ISO7816, of conventional smart cards, but also supports two or more data channels coexisting, so that the smart card can exchange information with device terminals more flexibly and at higher speed. The invention also enhances support for smart card applications: it not only supports the single application of conventional smart cards, but also supports several applications running simultaneously on one card, so that the smart card is utilized more efficiently.06-11-2009
20090150889INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND DEVICE AND PROGRAM USED FOR THE INFORMATION PROCESSING SYSTEM AND THE INFORMATION PROCESSING METHOD - An information processing terminal is provided with a data acquiring means for reading data from an external recording medium; a program storing means for storing a plurality of application programs; a program executing means for executing the stored application programs; and a program selecting means for selecting the application program to be executed by the program executing means. The program selecting means selects the application program to be executed from the programs stored in the program storing means, corresponding to the data acquired through the data acquiring means, and processes the data acquired through the data acquiring means by the application program selected by the program selecting means.06-11-2009
20090037917Apparatus and method capable of using reconfigurable descriptor in system on chip - An apparatus and method capable of using a reconfigurable descriptor in a System on Chip (SoC) is provided. The apparatus includes: a Central Processing Unit (CPU) for receiving parameters, each of which defines a descriptor, from a user and for providing the parameters to a controller. The controller defines the descriptor by reading target data indicated by the received parameters.02-05-2009
20090083740ASYNCHRONOUS EXECUTION OF SOFTWARE TASKS - A service broker for asynchronous execution of software. The broker functions include dynamically loading working modules from a specified directory, publishing the working module commands, receiving service requests from clients, and upon successful authentication and authorization, dispatching the requests to module command queues for scheduling and execution. The modules are invoked in separate domains so that management functions can control the modules independently. A management application facilitates interactive user scheduling of the actions being invoked. This can also be accomplished automatically according to business rules, for example. The management application also facilitates checking the progress on an action that is occurring, displaying errors that occur during the command execution, results of an action can also be displayed, and scheduling of requests.03-26-2009
20120110586THREAD GROUP SCHEDULER FOR COMPUTING ON A PARALLEL THREAD PROCESSOR - A parallel thread processor executes thread groups belonging to multiple cooperative thread arrays (CTAs). At each cycle of the parallel thread processor, an instruction scheduler selects a thread group to be issued for execution during a subsequent cycle. The instruction scheduler selects a thread group to issue for execution by (i) identifying a pool of available thread groups, (ii) identifying a CTA that has the greatest seniority value, and (iii) selecting the thread group that has the greatest credit value from within the CTA with the greatest seniority value.05-03-2012
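The two-level selection rule (most senior CTA first, then the thread group with the greatest credit within it) can be written down directly. The dictionary layout below is an assumption made for illustration, not the hardware scheduler's actual state.

```python
def pick_thread_group(available_groups):
    """available_groups: dicts with 'cta_seniority' and 'credit' keys.
    Pick the group with the greatest credit within the most senior CTA."""
    if not available_groups:
        return None
    top_seniority = max(g["cta_seniority"] for g in available_groups)
    senior = [g for g in available_groups if g["cta_seniority"] == top_seniority]
    return max(senior, key=lambda g: g["credit"])

groups = [
    {"id": 0, "cta_seniority": 3, "credit": 5},
    {"id": 1, "cta_seniority": 7, "credit": 2},
    {"id": 2, "cta_seniority": 7, "credit": 9},
]
print(pick_thread_group(groups)["id"])   # 2: most senior CTA, greatest credit within it
```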
20120110585ENERGY CONSUMPTION OPTIMIZATION IN A DATA-PROCESSING SYSTEM - A method for optimizing energy consumption in a data-processing system comprising a set of data-processing units is disclosed. In one embodiment, such a method includes indicating a set of data-processing jobs to be executed on a data-processing system during a production period. An ambient temperature expected for each data-processing unit during the production period is estimated. The method calculates an execution scheme for the data-processing jobs on the data-processing system. The execution scheme optimizes the energy consumed by the data-processing system to execute the data-processing jobs based on the ambient temperature of the data-processing units. The method then executes the data-processing jobs on the data processing system according to the execution scheme. A corresponding apparatus and computer program product are also disclosed.05-03-2012
20120110583DYNAMIC PARALLEL LOOPING IN PROCESS RUNTIME - Systems and methods for dynamic parallel looping in process runtime environment are described herein. A currently processed process-flow instance of a business process reaches a dynamic loop activity including a repetitive task to be executed with each loop cycle. A predefined expression is evaluated on top of the current data context of the process-flow instance to discover a number of loop cycles for execution within the dynamic loop activity. A number of parallel activities corresponding to the repetitive task recurrences are instantiated and executed in parallel. The results of the parallel activities are coordinated to confirm that the dynamic loop activity is completed.05-03-2012
20090150887Process Aware Change Management - A change order to be executed at a scheduled time as part of a change plan is created, wherein the change order defines a change to an Information Technology (IT) environment. The change order is validated against validation rules to simulate execution of the change order at the scheduled time, wherein other change orders scheduled to execute before the execution of the change order are included in the simulation. Breaks in change orders scheduled to execute after the change order are detected. Side effects caused by execution of the change order are determined. The results of validating the change order are output.06-11-2009
20100122256Scheduling Work in a Multi-Node Computer System Based on Checkpoint Characteristics - Efficient application checkpointing uses checkpointing characteristics of a job to determine how to schedule jobs for execution on a multi-node computer system. A checkpoint profile in the job description includes information on the expected frequency and duration of a check point cycle for the application. The checkpoint profile may be based on a user/administrator input as well as historical information. The job scheduler will attempt to group applications (jobs) that have the same checkpoint profile, on the same nodes or group of nodes. Additionally, the job scheduler may control when new jobs start based on when the next checkpoint cycle(s) are expected. The checkpoint monitor will monitor the checkpoint cycles, updating the checkpoint profiles of running jobs. The checkpoint monitor will also keep track of an overall system checkpoint profile to determine the available checkpointing capacity before scheduling jobs on the cluster.05-13-2010
20100122255ESTABLISHING FUTURE START TIMES FOR JOBS TO BE EXECUTED IN A MULTI-CLUSTER ENVIRONMENT - Start times are determined for jobs to be executed in the future in a multi-cluster environment. The start times are, for instance, the earliest start times in which the jobs may be executed. The start times are computed in logarithmic time, providing processing efficiencies for the multi-cluster environment. Processing efficiencies are further realized by employing parallel processing in determining the start times.05-13-2010
20130219398METHOD FOR EXECUTING A UTILITY PROGRAM, COMPUTER SYSTEM AND COMPUTER PROGRAM PRODUCT - A method of executing a utility program on a computer system having a system management chip includes activating a graphics memory in the computer system, downloading a memory map including the utility program to be executed by the system management chip, storing the memory map in the graphics memory by the system management chip, copying the memory map from the graphics memory to a main memory in the computer system, and executing the utility program with a processor in the computer system.08-22-2013
20100122259Multithreaded kernel for graphics processing unit - Systems and methods are provided for scheduling the processing of a coprocessor whereby applications can submit tasks to a scheduler, and the scheduler can determine how much processing each application is entitled to as well as an order for processing. In connection with this process, tasks that require processing can be stored in physical memory or in virtual memory that is managed by a memory manager. The invention also provides various techniques of determining whether a particular task is ready for processing. A “run list” may be employed to ensure that the coprocessor does not waste time between tasks or after an interruption. The invention also provides techniques for ensuring the security of a computer system, by not allowing applications to modify portions of memory that are integral to maintaining the proper functioning of system operations.05-13-2010
20100122258VERSIONING AND EFFECTIVITY DATES FOR ORCHESTRATION BUSINESS PROCESS DESIGN - Particular embodiments generally relate to the orchestration of an order fulfillment business process using effectivity dates and versioning. In one embodiment, a plurality of services in the order fulfillment business process are provided. A definition of a business process including one or more services is received from an interface. The one or more services may be defined in steps to be performed in the order fulfillment business process. An effectivity date associated with the definition is also received from the interface. For example, the effectivity date may be associated with the business process or individual steps in the business process and may specify a period of time during which the process or step can be used. The effectivity dates and versioning may then be enforced at run-time.05-13-2010
20120036511CONTROL DEVICE FOR DIE-SINKING ELECTRICAL DISCHARGE MACHINE - A program analyzing unit that extracts electrode numbers included in a plurality of processing programs, determines duplication of the electrode numbers among the processing programs to display a result of determination, and that stores correspondence between a revision electrode number that is specified by a user and an in-use electrode number that is used in the processing program for each of the processing programs and a program executing unit that executes each of the processing programs by reading the revision electrode number instead of the in-use electrode number used in each of the processing programs based on the stored correspondence at the time of execution of the processing programs are included, and duplication of the electrode numbers used among the programs is easily and certainly resolved.02-09-2012
20120036510Method for Analysing the Real-Time Capability of a System - The invention provides a method for analysing the real-time capability of a system, in particular a computer system, in which various tasks are repeatedly performed and an execution of a task is triggered by an activation of the task, the activation representing an event of the task. A plurality of descriptive elements are provided to describe the time correlation of the events as event streams, where the event streams may capture the maximum time densities and/or the minimum time densities of the events. At least one further descriptive element is provided, to which a set of event streams is assigned and which describes the time correlation of the entirety of events captured by at least two event streams.02-09-2012
20120036509APPARATUS AND METHODS TO CONCURRENTLY PERFORM PER-THREAD AS WELL AS PER-TAG MEMORY ACCESS SCHEDULING WITHIN A THREAD AND ACROSS TWO OR MORE THREADS - A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and a tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. A tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory accesses requests from the initiator IP core out of order from an initial issue order of the memory accesses requests from the initiator IP core.02-09-2012
20080271025SYSTEM AND METHOD FOR CREATING AN ASSURANCE SYSTEM IN A PRODUCTION ENVIRONMENT - An assurance system for testing the functionality of a computer system by creating an overlay of the computer system and routing selected traffic to the overlay while assessing the performance of the system. The system may be used for purposes of managing the testing of the computer system and delivery of comprehensive reports of the likely results on the computer system based on results generated by the assurance system, including such things as configuration changes to the environment, environment load and stress conditions, environment security, software installation to the environment, and environment performance levels among other things.10-30-2008
20100083263RESOURCE INFORMATION COLLECTING DEVICE, RESOURCE INFORMATION COLLECTING METHOD, PROGRAM, AND COLLECTION SCHEDULE GENERATING DEVICE - A condition storage unit (04-01-2010
20100083260METHODS AND SYSTEMS TO PERFORM A COMPUTER TASK IN A REDUCED POWER CONSUMPTION STATE - Methods and systems to perform a computer task in a reduced power consumption state, including to virtualize physical resources with respect to an operating environment and service environment, to exit the operating environment and enter the service environment, to place a first set of one or more of the physical resources in a reduced power consumption state, and to perform a task in the service environment utilizing a processor and a second set of one or more of the physical resources. A physical resource may be assigned to an operating environment upon an initialization of the operating environment, and re-assigned to the service environment to be utilized by the service environment while other physical resources are placed in a reduced power consumption state.04-01-2010
20100088705Call Stack Protection - Call stack protection, including executing at least one application program on the one or more computer processors, including initializing threads of execution, each thread having a call stack, each call stack characterized by a separate guard area defining a maximum extent of the call stack, dispatching one of the threads of the process, including loading a guard area specification for the dispatched thread's call stack guard area from thread context storage into address comparison registers of a processor; determining by use of address comparison logic in dependence upon a guard area specification for the dispatched thread whether each access of memory by the dispatched thread is a precluded access of memory in the dispatched thread's call stack's guard area; and effecting by the address comparison logic an address comparison interrupt for each access of memory that is a precluded access of memory in the dispatched thread's guard area.04-08-2010
20100083261INTELLIGENT CONTEXT MIGRATION FOR USER MODE SCHEDULING - Embodiments for performing directed switches between user mode schedulable (UMS) thread and primary threads are disclosed. In accordance with one embodiment, a primary thread user portion is switched to a UMS thread user portion so that the UMS thread user portion is executed in user mode via the primary thread user portion. The primary thread is then transferred into kernel mode via an implicit switch. A kernel portion of the UMS thread is then executed in kernel mode using the context information of a primary thread kernel portion.04-01-2010
20100083262Scheduling Requesters Of A Shared Storage Resource - To schedule workloads of requesters of a shared storage resource, a scheduler specifies relative fairness for the requesters of the shared storage resource. In response to the workloads of the requesters, the scheduler modifies performance of the scheduler to deviate from the specified relative fairness to improve input/output (I/O) efficiency in processing the workloads at the shared storage resource.04-01-2010
20100083259DIRECTING DATA UNITS TO A CORE SUPPORTING TASKS - A computer system may comprise a plurality of cores that may process the tasks determined by the operating system. A network device may direct a first set of packets to a first core using a flow-spreading technique such as receive side scaling (RSS). However, the operating system may re-provision a task from the first core to a second core to balance the load, for example, on the computer system. The operating system may determine an identifier of the second core using a new data field in the socket calls to track the identifier of the second core. The operating system may provide the identifier of the second core to a network device. The network device may then direct a second set of packets to the second core using the identifier of the second core.04-01-2010
20090249343SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR RECEIVING TIMER OBJECTS FROM LOCAL LISTS IN A GLOBAL LIST FOR BEING USED TO EXECUTE EVENTS ASSOCIATED THEREWITH - A system, method, and computer program product are provided for receiving timer objects from local lists in a global list for being used to execute events associated therewith. A plurality of execution contexts are provided for receiving timer objects. Additionally, a plurality of local lists are provided, each corresponding with one of the execution contexts, for receiving the timer objects therefrom. Furthermore, a global list is provided for receiving the timer objects from the local lists for being used to execute events associated therewith.10-01-2009
20090187909SHARED RESOURCE BASED THREAD SCHEDULING WITH AFFINITY AND/OR SELECTABLE CRITERIA - Threads may be scheduled to be executed by one or more cores depending upon whether it is more desirable to minimize power or to maximize performance. If minimum power is desired, threads may be schedule so that the active devices are most shared; this will minimize the number of active devices at the expense of performance. On the other hand, if maximum performance is desired, threads may be scheduled so that active devices are least shared. As a result, threads will have more active devices to themselves, resulting in greater performance at the expense of additional power consumption. Thread affinity with a core may also be taken into consideration when scheduling threads in order to improve the power consumption and/or performance of an apparatus.07-23-2009
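An illustrative reading of the power/performance trade-off (not the patented policy): to minimize power, pack threads onto cores whose devices are already active; to maximize performance, spread them out, with affinity as a tie-breaker. The core records and affinity scores below are hypothetical.

```python
def choose_core(thread, cores, goal):
    """cores: dicts with 'active_threads' (count) and per-thread 'affinity' scores.
    goal == 'power' -> pack onto already-active cores (devices most shared)
    goal == 'perf'  -> spread onto the least-loaded cores (devices least shared)"""
    affinity = lambda c: c["affinity"].get(thread, 0)
    if goal == "power":
        return max(cores, key=lambda c: (c["active_threads"], affinity(c)))
    return min(cores, key=lambda c: (c["active_threads"], -affinity(c)))

cores = [
    {"id": 0, "active_threads": 2, "affinity": {"t1": 1}},
    {"id": 1, "active_threads": 0, "affinity": {"t1": 0}},
]
print(choose_core("t1", cores, "power")["id"],   # 0: share the already-busy core
      choose_core("t1", cores, "perf")["id"])    # 1: take the idle core
```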
20110173623DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, STORAGE MEDIUM, AND DATA PROCESSING SYSTEM - A data processing apparatus that makes it possible for a user of a data processing apparatus to recognize whether or not descriptive contents of process definition tickets are executable on the data processing apparatus. Process definition tickets in which sequential processing flows for realizing functions are described are obtained, and it is determined whether or not the descriptive contents of the process definition tickets are executable on the data processing apparatus. A list of the process definition tickets whose descriptive contents have been determined as being executable on the data processing apparatus as a result of the determination is displayed in a manner being identifiable by the user. The user selects the process definition ticket whose descriptive contents are executable on the data processing apparatus from the list of the displayed process definition tickets, and the selection is received. The descriptive contents of the received process definition ticket are executed.07-14-2011
20110173624Process Integrated Mechanism Apparatus and Program - A method and apparatus for controlling and coordinating a multi-component system. Each component in the system contains a computing device. Each computing device is controlled by software running on the computing device. A first portion of the software resident on each computing device is used to control operations needed to coordinate the activities of all the components in the system. This first portion is known as a “coordinating process.” A second portion of the software resident on each computing device is used to control local processes (local activities) specific to that component. Each component in the system is capable of hosting and running the coordinating process. The coordinating process continually cycles from component to component while it is running. The continuous cycling of the coordinating process presents the programmer with a virtual machine in which there is a single coordinating process operating with a global view although, in fact, the data and computation remain distributed across every component in the system.07-14-2011
20110173622System and method for dynamic task migration on multiprocessor system - A multiprocessor system and a migration method of the multiprocessor system are provided. The multiprocessor system may process dynamic data and static data of a task to be operated in another memory or another processor without converting pointers, in a distributed memory environment and in a multiprocessor environment having a local memory, so that dynamic task migration may be realized.07-14-2011
20110173621PUSH-BASED OPERATORS FOR PROCESSING OF PUSH-BASED NOTIFICATIONS - A library of operators is provided for performing operations on push-based streams. The library may be implemented in a computing device. The library may be stored on a tangible machine-readable medium and may include instructions to be executed by one or more processors of a computing device. The library of operators may include groups of operators for performing various types of operations regarding push-based streams. The groups of operators may include, but not be limited to, standard sequence operators, other sequence operators, time operators, push-based operators, asynchronous operators, exception operators, functional operators, context operators, and event-specific operators.07-14-2011
20090165004Resource-aware application scheduling - In one embodiment, a method provides capturing resource monitoring information for a plurality of applications; accessing the resource monitoring information; and scheduling at least one of the plurality of applications on a selected processing core of a plurality of processing cores based, at least in part, on the resource monitoring information.06-25-2009
20100125847JOB MANAGING DEVICE, JOB MANAGING METHOD AND JOB MANAGING PROGRAM - A job managing device distributes jobs to be processed to a plurality calculation devices. The job managing device includes an information obtaining unit that obtains at least one of characteristic information or load information of the plurality of calculation devices, a job size determining unit that determines a job size to be allocated to each of the plurality of calculation devices based on the information obtained by the information obtaining unit, a job dividing unit that divides a job to be processed into divided jobs based on the job sizes determined by the job size determining unit, and a job distributing unit that distributes the divided jobs to the plurality of calculation devices.05-20-2010
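As a sketch of how job sizes might be derived from characteristic and load information before dividing a job, a proportional-share heuristic is shown below; the dictionary format and the heuristic itself are assumptions for illustration, not the claimed determination.

```python
def split_job(total_units, devices):
    """devices: {name: {'speed': relative speed, 'load': current load in 0..1}}.
    Give each calculation device a share proportional to its free capacity."""
    capacity = {n: d["speed"] * (1.0 - d["load"]) for n, d in devices.items()}
    total_cap = sum(capacity.values()) or 1.0
    shares = {n: int(total_units * c / total_cap) for n, c in capacity.items()}
    leftover = total_units - sum(shares.values())
    if shares:
        shares[max(capacity, key=capacity.get)] += leftover   # remainder to the freest device
    return shares

print(split_job(100, {"a": {"speed": 2.0, "load": 0.5},
                      "b": {"speed": 1.0, "load": 0.0}}))     # {'a': 50, 'b': 50}
```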
20090276780Method and apparatus for dynamically processing events based on automatic detection of time conflicts - A scheduling apparatus, system, and article including a machine-accessible medium, along with a method of dynamically processing events, are disclosed. The apparatus may include a receiving module capable of receiving information associated with an event. The information may include an event name and event time. The apparatus may also include a memory capable of storing the information associated with the event, and being communicatively coupled with the receiving module. The memory may be used to store a plurality of schedule items, at least one of which may be associated with an item time. The method may include selecting an event associated with a transaction and event time, determining whether a conflict exists, and adjusting the set of events stored in the memory to include the information associated with the event if no conflict is found.11-05-2009
20090089785SYSTEM AND METHOD FOR JOB SCHEDULING IN APPLICATION SERVERS - A method and a system for job scheduling in application servers. A common metadata of a job is deployed, the job being a deployable software component. An additional metadata of the job is further deployed. A scheduler task based on the additional metadata of the job is created, wherein the task is associated with a starting condition. The scheduler task is started at an occurrence of the starting condition, and, responsive to this, an execution of an instance of the job is invoked asynchronously.04-02-2009
20080209425Device Comprising a Communications Stick With A Scheduler - A scheduler is used to schedule execution of tasks by ‘engines' that perform high resource functions as requested by ‘executive' control code, the scheduler using its knowledge of the likelihood of engine request state transitions. The likelihood of engine request state transitions describes the likely sequence of engines which executives will impose: the scheduler can at run-time in effect, at the start of a time slice, look forward in time to discern a number of possible schedules (i.e. sequences of future engines), assess the merits of each possible schedule using pre-defined parameters (e.g. memory and power utilisation), then apply the schedule which is most appropriate given those parameters. The process repeats at the start of the next time slice. The scheduler therefore operates as a predictive scheduler. The present invention is particularly effective in addressing the ‘multi-mode problem': dynamically balancing the requirements of multiple communications stacks operating concurrently.08-28-2008
20080209423JOB MANAGEMENT DEVICE, CLUSTER SYSTEM, AND COMPUTER-READABLE MEDIUM STORING JOB MANAGEMENT PROGRAM - In a job management device: a request reception unit stores job-input information in a storage device on receipt of a job-execution request; and an execution instruction unit sends to one or more job-assigned calculation nodes a job-execution instruction together with execution-resource information, and stores job-assignment information in the storage device in association with a job identifier. When the contents of the job database are lost by a restart of the job management device, a reconstruction unit collects the job-input information and the job-assignment information from the storage device, collects the execution-resource information from the one or more job-assigned calculation nodes, and reconstructs the job information in the job database.08-28-2008
20080209422Deadlock avoidance mechanism in multi-threaded applications - A computer-implemented method for implementing a deadlock avoidance mechanism to prevent a plurality of threads from deadlocking in a computer system wherein a first thread of the plurality of threads request for a first resource is provided. The computer-implemented method includes employing the deadlock avoidance mechanism to intercept the request. The computer-implemented method also includes examining a status of the first resource. The computer-implemented method further includes, if the first resource is owned, identifying an owner of the first resource, analyzing the owner of the first resource to determine if the owner of the first resource is requesting a second resource, and analyzing the second resource to determine if the second resource is owned by the first thread. The computer-implemented method yet also includes, if the first thread owns the second resource, preventing deadlocking by handling a potential deadlock situation.08-28-2008
20090044191METHOD AND TERMINAL DEVICE FOR EXECUTING SCHEDULED TASKS AND MANAGEMENT TASKS - An OMA DM-based method for executing a scheduled task includes the steps of: storing the terminal resource capabilities required for executing each scheduled task in a terminal device; and executing the scheduled task after the terminal device determines that the current resource capabilities are sufficient for executing the scheduled task when it is ready to execute the scheduled task. An OMA DM-based terminal device for practicing this method includes a primary storing unit for storing the terminal resource capabilities, a judging unit for determining whether current resource capabilities of the terminal device meet the terminal resource capabilities that are required for executing the scheduled task, and a primary executing unit for executing the scheduled task when the judging unit determines that the current resource capabilities are sufficient. By determining whether current resource capabilities are sufficient before executing, the success rate of scheduled tasks and management tasks in a terminal device is improved.02-12-2009
20090282413Scalable Scheduling of Tasks in Heterogeneous Systems - Illustrative embodiments provide a computer implemented method, a data processing system and a computer program product for scalable scheduling of tasks in heterogeneous systems is provided. According to one embodiment, the computer implemented method comprises fetching a set of tasks to form a received input, estimating run times of tasks, calculating average estimated completion times of tasks, producing a set of ordered tasks from the received input to form a task list, identifying a machine to be assigned, and assigning an identified task from the task list to an identified machine.11-12-2009
20090282412MULTI-LAYER WORKFLOW ARCHITECTURE - A multi-layer workflow architecture for a print shop is disclosed. The workflow architecture includes a workflow front end, service bus, and service providers. The workflow front end provides an interface to print shop operators. The service providers are each associated with a device in the print shop. The service bus represents the layer between the workflow front end and the service providers. In operation, the service providers report device capabilities for devices to the service bus. The workflow front end receives the device capabilities from the service bus, and provides the device capabilities to a user to allow the user to define a job ticket based on the device capabilities. The service bus identifies the processes defined in the job ticket, and identifies the service providers operable to provide the processes. The service bus then routes process messages to the identified service providers to execute the processes on the devices.11-12-2009
20090282411SCHEDULING METHOD AND SYSTEM - A scheduling method and system. The method includes receiving, by a computing system, job related data associated with a plurality of jobs to be executed by said computing system, time constraint data, and maximum time shift values associated with the time constraint data. The computing system determines that a start time for execution of a first job of the plurality of jobs should be rescheduled. The computing system receives workload statistics. The computing system determines based on the workload statistics, a first start time for the first job. The computing system compares the time constraint data with the first start time to determine if the first start time is in conflict with the time constraint data. The computing system stores the first start time.11-12-2009
20090288086LOCAL COLLECTIONS OF TASKS IN A SCHEDULER - A scheduler in a process of a computer system includes a local collection of tasks for each processing resource allocated to the scheduler and at least one general collection of tasks. The scheduler assigns each task that becomes unblocked to the local collection corresponding to the processing resource that caused the task to become unblocked. When a processing resource becomes available, the processing resource attempts to execute the most recently added task in the corresponding local collection. If there are no tasks in the corresponding local collection, the available processing resource attempts to execute a task from the general collection.11-19-2009
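A compact sketch of the structure the abstract outlines: one local collection per processing resource plus a general collection, with unblocked tasks added to the unblocking resource's local collection and taken most-recently-added-first. The class below is an illustration under those assumptions, not the scheduler's actual implementation.

```python
from collections import deque

class Scheduler:
    def __init__(self, n_resources):
        self.local = [deque() for _ in range(n_resources)]  # one collection per resource
        self.general = deque()                              # general collection of tasks

    def task_unblocked(self, task, by_resource):
        # Unblocked tasks go to the local collection of the resource that unblocked them.
        self.local[by_resource].append(task)

    def submit(self, task):
        self.general.append(task)

    def next_task(self, resource):
        # Prefer the most recently added local task (LIFO), else take from the general pool.
        if self.local[resource]:
            return self.local[resource].pop()
        if self.general:
            return self.general.popleft()
        return None

s = Scheduler(2)
s.submit("g1")
s.task_unblocked("u1", 0)
s.task_unblocked("u2", 0)
print(s.next_task(0), s.next_task(1))   # u2 g1
```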
20090300626Scheduling for Computing Systems With Multiple Levels of Determinism - In a computing system, a method and system for scheduling software process execution and inter-process communication is introduced. Processes or groups of processes are assigned to execute within timeslots of a schedule according to associated execution frequencies, execution durations and inter-process communication requirements. The schedules allow development and test of the processes to be substantially decoupled from one another so that software engineering cycle time can be reduced.12-03-2009
20090158282Hardware acceleration for large volumes of channels - A method, apparatus, and system for hardware acceleration for large volumes of channels is described. In an embodiment, the invention is a method. The method includes monitoring an inbound queue for hardware jobs. The method further includes detecting an interrupt from a hardware component. The method also includes transferring a job from the inbound queue to the hardware component. The method may further include transferring a completed job from the hardware component to an outbound queue. The method may also include providing an indication of completion of a job in an outbound queue.06-18-2009
20090025003METHODS AND SYSTEMS FOR SCHEDULING JOB SETS IN A PRODUCTION ENVIRONMENT - A system of scheduling a plurality of print jobs in a document production environment may include resources and a computer-readable storage medium including programming instructions for performing a method of processing print jobs. The method may include receiving print jobs and setup characteristics corresponding to each print job. Each print job may have a corresponding job size. The print jobs may be grouped into sets based on a common characteristic and each set may be identified as a fast job set or a slow job set based on setup characteristics associated with the set and the job sizes of the print jobs in the set. The fast job set may be routed to a fast job autonomous cell and the slow job set may be routed to a slow job autonomous cell.01-22-2009
20120297392INFORMATION PROCESSING APPARATUS, COMMUNICATION METHOD, AND STORAGE MEDIUM - The invention relates to an information processing apparatus, which comprises a plurality of communication units connected to a bus in a ring shape. At least one of the plurality of communication units extends a transmission interval when it is determined that the processing unit, which is to execute the next process for received data, is the processing unit, which executes the process after the processing unit corresponding to the at least one of the plurality of communication units, and when it is detected that the process for the received data is suspended.11-22-2012
20120297391APPLICATION RESOURCE MODEL COMPOSITION FROM CONSTITUENT COMPONENTS - Techniques for composing an application resource model are disclosed. The techniques include obtaining operator-level metrics from an execution of a data stream processing application according to a first configuration, wherein the application is executed by nodes of the data stream processing system and the application includes processing elements comprised of multiple operators, wherein two or more of the operators are combined in a first combination to form a processing element according to the first configuration, generating operator-level resource functions from the first combination of operators based on the obtained operator-level metrics, and generating a processing element-level resource function using the generated operator-level resource functions to predict a model for the processing element formed by a second combination of operators, the processing element-level resource function representing an application resource model usable for predicting characteristics of the application executed according to a second configuration.11-22-2012
20120297390CREATION OF FLEXIBLE WORKFLOWS USING ARTIFACTS - Execution of flexible workflows using artifacts is described. A workflow execution engine is configured to instantiate a process execution (PE) artifact. The PE artifact includes one or more transitions. The workflow execution engine is further configured to execute the one or more transitions and determine if any of the one or more transitions are new or modified. The workflow execution engine is additionally configured to load and execute new or modified transitions, without reinstantiating the PE artifact, responsive to determining that at least one new or modified transitions exist.11-22-2012
20080216077SOFTWARE SEQUENCER FOR INTEGRATED SUBSTRATE PROCESSING SYSTEM - Embodiments of the invention generally provide apparatus and method for scheduling a process sequence to achieve maximum throughput and process consistency in a cluster tool having a set of constraints. One embodiment of the present invention provides a method for scheduling a process sequence comprising determining an initial individual schedule by assigning resources to perform the process sequence, calculating a fundamental period, detecting resource conflicts in a schedule generated from the individual schedule and the fundamental period, and adjusting the individual schedule to remove the resource conflicts.09-04-2008
20080216078Request scheduling method, request scheduling apparatus, and request scheduling program in hierarchical storage management system - A request scheduling method schedules requests to secondary recording media while minimizing the frequency of recording-medium mounting/removing events in a secondary storage unit of an HSM (hierarchical storage management) system by searching, per drive unit, for one or more requests that are being processed or are executable on that drive unit. Based on the search, one or more generated read requests to read data from a recording medium mounted on the drive unit are detected, and the drive unit is set as an exclusive drive for those read requests. A drive unit whose mounted recording medium has an elapsed mount time not exceeding a predetermined time period is scheduled to execute an executable request by priority.09-04-2008
20100005469Method and System for Defining One Flow Models with Varied Abstractions for Scalable lean Implementations - A method and system for representing one or more families of existing processes in a composite abstraction such that process improvement techniques can be implemented in a more scalable manner. The invention enables abstracting a set of pre-defined process models into a composite model that represents sufficient operational details while being compliant with process improvement techniques such as, but not limited to, Lean Six Sigma, Kaizen, and others (collectively “lean” techniques). The invention provides the ability to flexibly represent the operational and lean-related information in varied abstraction levels at different stages of the process as and when necessary. The invention provides the ability to dynamically generate and represent process models based on user-selected defining characteristics (or attributes) used for process “family” formation. This allows users to define process models based on a set of customized attributes deemed critical by that particular user, including the ability to prioritize the selected attributes.01-07-2010
20120297389SYSTEMS AND METHODS ASSOCIATED WITH A PARALLEL SCRIPT EXECUTER - According to some embodiments, a script written in a scripting programming language may be received (e.g., by a script executer). It may be determined that a first line in the script comprises a first comment, and the first comment may be interpreted as an embedded parallel part control statement. Parallel execution of a portion of the script may then be automatically arranged in accordance with the parallel part control statement.11-22-2012
20080244591INFORMATION PROCESSING SYSTEM AND STORAGE MEDIUM - An information processing system has a file memory, a schedule information memory, a reminder information memory that stores reminder information including identification information of a user, a registration deadline of the first electronic file, and a reminder submission time in connection with information indicating a registration location of the first electronic file in the file memory, a setting unit that specifies, upon arrival of the reminder submission time, a schedule item for reminding the user of the task in schedule information of the user stored in the schedule information memory as an item scheduled for the registration deadline or schedule for a day prior to the registration deadline, and a display data outputting unit that outputs, upon receipt of a request for displaying the schedule information, schedule information display data in which display information corresponding to the schedule item is associated with information on a link to the registration location.10-02-2008
20080244590METHOD FOR IMPROVING PERFORMANCE IN A COMPUTER STORAGE SYSTEM BY REGULATING RESOURCE REQUESTS FROM CLIENTS - The present invention discloses a method, apparatus and program storage device for providing non-blocking, minimum threaded two-way messaging. A Performance Monitor Daemon provides one non-blocked thread pair per processor to support a large number of connections. The thread pair includes an outbound thread for outbound communication and an inbound thread for inbound communication. The outbound thread and the inbound thread operate asynchronously.10-02-2008
20080244587Thread scheduling on multiprocessor systems - A thread scheduler may be used in a chip multiprocessor or symmetric multiprocessor system to schedule threads to processors. The scheduler may determine the bandwidth utilization of the two threads in combination and whether that utilization exceeds the threshold value. If so, the threads may be scheduled on different processor clusters that do not have the same paths between the common memory and the processors. If not, then the threads may be allocated on the same processor cluster that shares cache among processors.10-02-2008
20080244585SYSTEM AND METHOD FOR USING FAILURE CASTING TO MANAGE FAILURES IN COMPUTER SYSTEMS - A system and method for using failure casting to manage failures in computer systems. In accordance with an embodiment, the system uses a failure casting hierarchy to cast failures of one type into failures of another type. In doing this, the system allows incidents, problems, or failures to be cast into a (typically smaller) set of failures, which the system knows how to handle. In accordance with a particular embodiment, failures can be cast into a category that is considered reboot-curable. If a failure is reboot-curable then rebooting the system will likely cure the problem. Examples include hardware failures, and reboot-specific methods that can be applied to disk failures and to failures within clusters of databases. The system can even be used to handle failures that were hitherto unforeseen: failures can be cast into known failures based on the failure symptoms, rather than any underlying cause.10-02-2008
20080244589Task manager - A task list contains information related to multiple tasks to be executed in a sequential manner. A task processor is provided to execute at least one task in the task list. A task management engine retrieves information from the task list and provides task execution instructions to the task processor. The task execution instructions provided by the task management engine are based on information retrieved from the task list. The task management engine receives execution results from the task processor and provides those results to a calling program that communicates with the task management engine.10-02-2008
20090265711PROCESSING OF ELECTRONIC DOCUMENTS TO ACHIEVE MANUFACTURING EFFICIENCY - A method can be used for processing electronic documents, each of which are assigned a plurality of attributes. The documents are sorted into one or more groups based on the attributes, such that the electronic documents of each group share at least one of the attributes. The attributes of the documents in each group are analyzed to determine an appropriate processing site for each group, and then the groups are each routed to their respective processing sites determined to be appropriate therefor.10-22-2009
20110271283ENERGY-AWARE JOB SCHEDULING FOR CLUSTER ENVIRONMENTS - A job scheduler can select a processor core operating frequency for a node in a cluster to perform a job based on energy usage and performance data. After a job request is received, an energy aware job scheduler accesses data that specifies energy usage and job performance metrics that correspond to the requested job and a plurality of processor core operating frequencies. A first of the plurality of processor core operating frequencies is selected that satisfies an energy usage criterion for performing the job based, at least in part, on the data that specifies energy usage and job performance metrics that correspond to the job. The job is assigned to be performed by a node in the cluster at the selected first of the plurality of processor core operating frequencies.11-03-2011
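One plausible shape for the frequency-selection step (choose the fastest processor core operating frequency whose estimated energy for the job stays within a budget) is sketched below; the table format, the numbers, and the energy criterion are assumptions for illustration, not the patented scheduler's data.

```python
def pick_frequency(job, table, max_joules):
    """table[(job, freq)] = (seconds, watts). Choose the fastest frequency whose
    estimated energy (seconds * watts) satisfies the energy usage criterion."""
    candidates = []
    for (j, freq), (seconds, watts) in table.items():
        if j == job and seconds * watts <= max_joules:
            candidates.append((seconds, freq))
    return min(candidates)[1] if candidates else None

table = {
    ("render", 2.0e9): (100.0, 80.0),    # 8000 J estimated
    ("render", 3.0e9): (70.0, 130.0),    # 9100 J estimated
}
print(pick_frequency("render", table, 8500.0))   # 2.0e9: the only frequency within budget
```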
20090119670METHOD OF CONSTRUCTING AND EXECUTING PROCESS - Disclosed is a method of constructing and executing a process. A conventional process is minutely divided into minimum-unit subprocesses, and the divided subprocesses are classified into decision subprocesses and routine subprocesses according to whether they require decision-making. Any subprocess which is executable using the setup condition of a specific decision subprocess is classified as a routine subprocess in such a manner that the classified routine subprocess follows the specific decision subprocess. One or a series of decision subprocesses are combined with one or a series of routine subprocesses which are executable on the condition of the completion of the decision subprocesses to form one unit process, and a job-support computer program is created to allow the plurality of subprocesses included in the one unit process to be executed successively. A plurality of subprocesses which are executable in accordance with common input data are detected from the divided minimum-unit subprocesses, and a job flow is constructed to allow the respective jobs in the plurality of subprocesses to be initiated simultaneously and executed in parallel. The present invention can drastically reduce the lead time of a process while facilitating execution of the entire process with high efficiency.05-07-2009
20080282249METHOD AND SYSTEM FOR PERFORMING REAL-TIME OPERATION - An information processing system performs a real-time operation including a combination of a plurality of tasks. The system includes a plurality of processors, a unit which stores structural description information and a plurality of programs describing procedures corresponding to the tasks, the structural description information indicating a relationship in input/output between the programs and including cost information concerning time required for executing each of the programs, a unit which determines an execution start timing and execution term of each of a plurality of threads for execution of the programs based on the structural description information, and a unit which performs a scheduling operation of assigning the threads to at least one of the processors according to a result of the determining.11-13-2008
20080235687SUPPLY CAPABILITY ENGINE WEEKLY POLLER - A method for executing and polling an operational slice of a supply capability engine. The polling method queries a DB2 table, searching for a predetermined, eligible operational slice to process. When an operational slice that is ready to be processed is detected, an entry is placed on a queue, typically a second DB2 table. The operational slices on the queue are then processed sequentially. The poller monitors the duration of each operational slice and generates an alert if any of the operational slices placed on the queue exceed an allowable duration.09-25-2008
20080235688Enhanced Distance Calculation for Job Route Optimization - Systems and methods provide optimized distribution of jobs for execution among available workers. Categories are established for pairs of jobs based on a precise or estimated distance between each pair of jobs. Values are then assigned to the pairs of jobs and various decisions about job assignment and grouping can be made based upon the assigned value. The systems and methods allow certain job pairs to be excluded from consideration from grouping together, and emphasize which jobs are best suited for pairwise assignment, resulting in reduction of costs and necessary resources.09-25-2008
20100050177Method and apparatus for content based searching - The scheduling of multiple requests to be processed by a number of deterministic finite automata-based graph thread engine (DTE) workstations is handled by a novel scheduler. The scheduler may select an entry from an instruction in a content search apparatus. Using attribute information from the selected entry, the scheduler may thereafter analyze a dynamic scheduling table to obtain placement information. The scheduler may determine an assignment of the entry, using the placement information, that may limit cache thrashing and head-of-line blocking occurrences. Each DTE workstation may include normalization capabilities. Additionally, the content searching apparatus may employ an address memory scheme that may prevent memory bottleneck issues.02-25-2010
20080313636SYSTEM AND METHOD FOR SECURE AUTOMATED DATA COLLECTION - The invention provides an automated data collection system having an endpoint coupled to at least one gaming machine to collect data from the at least one gaming machine, at least one concentrator in communication with the endpoint via a personal area network to obtain the data from the endpoint, and at least one remote collection server in communication with the at least one concentrator to receive the data from the at least one concentrator, wherein the data is pushed from the endpoint to the at least one remote collection server at predefined time intervals without interrupting game play on the at least one gaming machine.12-18-2008
20080244584TASK SCHEDULING METHOD - Provided is a method for scheduling activities. The method includes partitioning tasks provided for scheduling. The partitioning is accomplished by receiving at least one task including at least one data type. The data type is reviewed to determine at least one scheduling criteria and the task is routed to a queue based on the determined scheduling criteria. Each queue also has at least one queue characteristic. The method also includes scheduling the partitioned tasks. The scheduling is accomplished by retrieving the at least one task from the queue in response to a trigger. The retrieved task is routed to at least one scheduler. In a first instance the routing is based on the queue characteristic. In a second instance the routing is based on at least one scheduler characteristic. A scheduling system for performing this method is also provided.10-02-2008
20120297393Data Collecting Method, Data Collecting Apparatus and Network Management Device - The present invention provides a data collection method and apparatus and a network management device. The method includes: a network management device collecting data files to be processed that are reported by a network element device; dividing the data files to be processed into a plurality of tasks; adding the assigned tasks into a task queue and extracting tasks from the task queue one by one for processing. According to the present invention, the task work load can be automatically adjusted according to the computer configuration and parameter configuration, and the maximum efficiency of data processing can be achieved under different scenarios.11-22-2012
20100005468BLACK-BOX PERFORMANCE CONTROL FOR HIGH-VOLUME THROUGHPUT-CENTRIC SYSTEMS - Throughput of a high-volume throughput-centric computer system is controlled by dynamically adjusting a concurrency level of a plurality of events being processed in a computer system to meet a predetermined target for utilization of one or more resources of the computer system. The predetermined target is less than 100% utilization of said one or more resources. The adjusted concurrency level is validated using one or more queuing models to check that said predetermined target is being met. Parameters are configured for adjusting the concurrency level. The parameters are configured so that said one or more resources are shared with one or more external programs. A statistical algorithm is established that minimizes the total number of samples collected. The samples may be used to measure performance, which is used to further dynamically adjust the concurrency level. A dynamic thread sleeping method is designed to handle systems that need only a very small number of threads to saturate bottleneck resources and hence are sensitive to concurrency level changes.01-07-2010
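A hedged sketch of one way a concurrency level could be steered toward a utilization target below 100%, as the abstract describes; the proportional-adjustment rule, the bounds, and the example numbers are illustrative assumptions, not the application's controller or queuing models.

```python
def adjust_concurrency(current_level, measured_util, target_util,
                       min_level=1, max_level=256):
    """One step of a simple proportional adjustment toward a utilization
    target that is deliberately held below 100%."""
    if measured_util <= 0:
        return current_level
    # Scale concurrency by the ratio of target to measured utilization.
    proposed = int(round(current_level * (target_util / measured_util)))
    return max(min_level, min(max_level, proposed))

# Example: 40 concurrent events drive utilization to 95%,
# while the operator wants to hold it near 80%.
print(adjust_concurrency(40, measured_util=0.95, target_util=0.80))  # -> 34
```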
20120144394Energy And Performance Optimizing Job Scheduling - Energy and performance optimizing job scheduling that includes queuing jobs; characterizing jobs as hot or cold, specifying a hot and a cold job sub-queue; iteratively for a number of schedules, until estimated performance and power characteristics of executing jobs in accordance with a schedule meets predefined selection criteria: determining a schedule in dependence upon a user provided parameter, the characterization of each job as hot or cold, and an energy and performance optimizing heuristic; estimating performance and power characteristics of executing the jobs in accordance with the schedule; and determining whether the estimated performance and power characteristics meet the predefined selection criteria. If the estimated performance and power characteristics do not meet the predefined selection criteria, adjusting the user-provided parameter for a next iteration and executing the plurality of jobs in accordance with the determined schedule if the estimated performance and power characteristics meet the predefined selection criteria.06-07-2012
20130191834CONTROL METHOD OF INFORMATION PROCESSING DEVICE - A method of controlling an information processing device includes selectively switching a first processor for executing a first operating system or a second processor for executing a second operating system to a user interface; storing a data table in which a first application program operating on the first operating system is associated with a second application program operating on the second operating system; sending information pertinent to activation of the first or second application program to a server device; receiving a result of a process from the server device, the process being performed by the server device for associating application programs based on the received information; updating the data table based on the received result; and activating the second application program, which is associated with the first application program being activated in the data table, in a state where the first processor has been switched to the user interface.07-25-2013
20080244586DIRECTED SAX PARSER FOR XML DOCUMENTS - A method for processing XML documents using a SAX parser, implemented in a two-thread architecture having a main thread and a parsing thread. The parsing procedure is located in a parsing thread, which implements callback functions of a SAX parser and creates and executes the SAX parser. The main thread controls the parsing thread by sending target content to be searched for and wakeup signals to the parsing thread, and receives the content found by the parsing thread for further processing. In the parsing thread, each time a callback function is invoked by the SAX parser, it is determined whether the target content has been found. If it has, the parsing thread sends the found content to the main thread with a wakeup signal, and enters a sleep mode, whereby further parsing is halted until a wakeup signal with additional target content is received from the main thread.10-02-2008
20100138839MULTIPROCESSING SYSTEM AND METHOD - A multiprocessing system executes a plurality of processes concurrently. A process execution circuit (06-03-2010
20110209152METHOD AND SYSTEM FOR SCHEDULING PERIODIC PROCESSES - A method of scheduling periodic processes for execution in an electronic system, in particular in a network, in a data processor or in a communication device, wherein the electronic system includes a controller for performing the scheduling, wherein a number of N processes P08-25-2011
20120198459ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR - A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler configured to schedule the assist thread in conjunction with the corresponding execution thread is configured to execute the assist thread ahead of the execution thread by a determinable threshold such as the number of main processor cycles or the number of code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements.08-02-2012
20110209153SCHEDULE DECISION DEVICE, PARALLEL EXECUTION DEVICE, SCHEDULE DECISION METHOD, AND PROGRAM - A schedule decision method acquires dependencies of execution sequences required for a plurality of sub tasks into which a first task has been divided; generates a plurality of sub task structure candidates that satisfy said dependencies and for which a plurality of processing devices execute said plurality of sub tasks; generates a plurality of schedule candidates by further assigning at least one second task to each of said sub task structure candidates; computes an effective degree that represents effectiveness of executions of said first task and said second task for each of said plurality of schedule candidates; and decides a schedule candidate used for the executions of said first task and said second task from said plurality of schedule candidates based on said effective degrees.08-25-2011
20090165002METHOD AND SYSTEM FOR MODULE INITIALIZATION - A method for initializing a module that includes identifying a module for initialization and performing a plurality of processing phases on the module and all modules in a dependency graph of the module. Performing the processing phases includes, for each module, executing a processing phase of the plurality of processing phases on the module, determining whether the processing phase has been executed on all modules in a dependency graph of the module, and when the processing phase has been executed for all modules in the dependency graph of the module, executing a subsequent processing phase of the plurality of processing phases on the module, wherein at least one processing phase of the plurality of processing phases includes executing custom initialization code.06-25-2009
20090165005TASK EXECUTION APPARATUS, TASK EXECUTION METHOD, AND STORAGE MEDIUM - A task execution apparatus includes an execution unit configured to execute a task on a plurality of devices, an acquisition unit configured to acquire a cause of failure in execution by the execution unit, a confirmation unit configured to confirm that each device of the plurality of devices on which the execution unit failed to execute the task does not support the task based on the cause, and a re-execution unit configured to re-execute the task on each of the plurality of devices on which the execution unit failed to execute the task, wherein the re-execution unit excludes each of the plurality of devices from a re-execution target of the task, in a case where the confirmation unit confirms that each of the plurality of devices does not support the task.06-25-2009
20090165000Multiple Participant, Time-Shifted Dialogue Management - A virtual environment server. The server manages time-shifted presentation data between multiple participants in a shared virtual environment system. The server includes a routing module configurable for coupling to multiple participants, a real-time data management module coupled to the routing module, a time-shifted data management module coupled to the routing module, and a data store module coupled to the real-time data management module and to the time-shifted data management module. Participant output presentation data is received from the participants, stored as real-time presentation data, and transferred to appropriate participants. In response to requests from a requesting participant to obtain time-shifted presentation data from a time-shifted participant and any influence participants, time-shifted presentation data is retrieved from the data store module and transferred to the requesting participant. Influence participants are participants whose input presentation data are influenced by the time-shifted participant and whose output presentation data influence the presentation environment of the requesting participant.06-25-2009
20090138878ENERGY-AWARE PRINT JOB MANAGEMENT - A printing system and method for processing print jobs in a network of printers are disclosed. The printers each have high and low operational states. A job ticket is associated with each print job. The job ticket designates one of the network printers as a target printer for printing the job and includes print job parameters related to redirection and delay for the print job. Where the target printer for the print job is in the low operational state, the print job related redirection and delay parameters for the job are identified. Based on the identified parameters, the print job may be scheduled for at least one of redirection and delay, where the parameters for redirection/delay permit, whereby the likelihood that the print job is printed sequentially with another print job on one of the network printers, without that one printer entering an intervening low operational state, is increased.05-28-2009
20090172680Discovery Directives - A mechanism for configuring and scheduling logical discovery processes in a data processing system is provided. A discovery engine communicates with information providers to collect discovery data. An information provider is a software component whose responsibility is to discover resources and relationships between the resources and write their representations in a persistent store. Discovery directives are used to coordinate the execution of information providers.07-02-2009
20090025002METHODS AND SYSTEMS FOR ROUTING LARGE, HIGH-VOLUME, HIGH-VARIABILITY PRINT JOBS IN A DOCUMENT PRODUCTION ENVIRONMENT - A system of scheduling a plurality of print jobs in a document production environment may include a plurality of print job processing resources and a computer-readable storage medium including programming instructions for performing a method of processing a plurality of print jobs. The method may include receiving a plurality of print jobs and setup characteristics corresponding to each print job, grouping each print job having a job size that exceeds a job size threshold into a large job subgroup and grouping each print job having a job size that does not exceed the job size threshold into a small job subgroup. The large job subgroup may be classified as a high setup subgroup or a low setup subgroup based on the setup characteristics corresponding to each print job in the large job subgroup. The large job subgroup may be routed to a large job autonomous cell.01-22-2009
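A short illustration of the grouping step this abstract describes: splitting print jobs into small and large subgroups by a size threshold and then classifying the large subgroup by its setup characteristics. The dict layout, the averaging rule, and the thresholds are assumptions made only for the example.

```python
def group_print_jobs(jobs, size_threshold, setup_threshold):
    """Split jobs into small/large subgroups by size, then classify the
    large subgroup by its average setup cost."""
    small = [j for j in jobs if j["size"] <= size_threshold]
    large = [j for j in jobs if j["size"] > size_threshold]
    if large:
        avg_setup = sum(j["setup"] for j in large) / len(large)
        large_class = "high_setup" if avg_setup > setup_threshold else "low_setup"
    else:
        large_class = None
    return {"small": small, "large": large, "large_class": large_class}

jobs = [{"id": 1, "size": 50,   "setup": 2},
        {"id": 2, "size": 5000, "setup": 12},
        {"id": 3, "size": 8000, "setup": 3}]
print(group_print_jobs(jobs, size_threshold=1000, setup_threshold=5))
# large subgroup -> jobs 2 and 3, classified as "high_setup"
```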
20090025000METHODS AND SYSTEMS FOR PROCESSING HEAVY-TAILED JOB DISTRIBUTIONS IN A DOCUMENT PRODUCTION ENVIRONMENT - A production printing system for processing a plurality of print jobs may include a plurality of print job processing resources and a computer-readable storage medium including one or more programming instructions for performing a method of processing a plurality of print jobs in a document production environment. The method may include identifying a print job size distribution for a plurality of print jobs in a document production environment and determining whether the print job size distribution exhibits a heavy-tail characteristic. For each print job size distribution that exhibits a heavy-tail characteristic, the plurality of print jobs may be grouped into a plurality of subgroups such that at least one of the plurality of subgroups exhibits a non-heavy-tail characteristic, and each job in the at least one of the plurality of subgroups exhibiting the non-heavy-tail characteristic may be processed by one or more print job processing resources.01-22-2009
20090024999Methods, Systems, and Computer-Readable Media for Providing an Indication of a Schedule Conflict - Methods, systems, and computer-readable media provide for providing an indication of a schedule conflict. According to embodiments, a method for providing an indication of a schedule conflict is provided. According to the method, whether one of a plurality of technicians is scheduled but not dispatched or dispatched but not scheduled is determined. In response to determining that the one of the plurality of technicians is scheduled but not dispatched or dispatched but not scheduled, an indication that the one of the plurality of technicians is scheduled but not dispatched or dispatched but not scheduled is provided.01-22-2009
20110225588REDUCING DATA READ LATENCY IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions.09-15-2011
20110225589EXCEPTION DETECTION AND THREAD RESCHEDULING IN A MULTI-CORE, MULTI-THREAD NETWORK PROCESSOR - Described embodiments provide a packet classifier of a network processor having a plurality of processing modules. A scheduler generates a thread of contexts for each task generated by the network processor corresponding to each received packet. The thread corresponds to an order of instructions applied to the corresponding packet. A multi-thread instruction engine processes the threads of instructions. A function bus interface inspects instructions received from the multi-thread instruction engine for one or more exception conditions. If the function bus interface detects an exception, the function bus interface reports the exception to the scheduler and the multi-thread instruction engine. The scheduler reschedules the thread corresponding to the instruction having the exception for processing in the multi-thread instruction engine. Otherwise, the function bus interface provides the instruction to a corresponding destination processing module of the network processor.09-15-2011
20110225587DUAL MODE READER WRITER LOCK - A method, system, and computer usable program product for a dual mode reader writer lock. A contention condition is detected in the use of an original lock. The original lock manages read and write access to a resource by several processes executing in the data processing system. The embodiment creates a set of expanded locks for use in conjunction with the original lock. The original lock and the set of expanded locks form the dual mode reader writer lock, which operates to manage the read and write access to the resource. Using an index within the original lock, each expanded lock is indexed such that each expanded lock is locatable using the index. The contention condition is resolved by distributing requests for acquiring and releasing the read access and write access to the resource by the several processes across the original lock and the set of expanded locks.09-15-2011
20090210878SYSTEM AND METHOD FOR DATA MANAGEMENT JOB PLANNING AND SCHEDULING WITH FINISH TIME GUARANTEE - A method is disclosed for scheduling data management jobs on a computer system that uses a dual level scheduling method. Macro level scheduling using a chained timer schedules the data management job for execution in the future. Micro level scheduling using an algorithm controls the actual dispatch of the component requests of a data management job to minimize impact on foreground programs.08-20-2009
20090199189Parallel Lock Spinning Using Wake-and-Go Mechanism - A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism recognizes a programming idiom that indicates that a thread is spinning on a lock. The wake-and-go mechanism updates a wake-and-go array with a target address associated with the lock and sets a lock bit in the wake-and-go array. The thread then goes to sleep until the lock frees. The wake-and-go array may be a content addressable memory (CAM). When a transaction appears on the symmetric multiprocessing (SMP) fabric that modifies the value at a target address in the CAM, the CAM returns a list of storage addresses at which the target address is stored. The wake-and-go mechanism associates these storage addresses with the threads waiting for an event at the target addresses, and may wake the thread that is spinning on the lock.08-06-2009
20090199190System and Method for Priority-Based Prefetch Requests Scheduling and Throttling - A method, processor, and data processing system for implementing a framework for priority-based scheduling and throttling of prefetching operations. A prefetch engine (PE) assigns a priority to a first prefetch stream, indicating a relative priority for scheduling prefetch operations of the first prefetch stream. The PE monitors activity within the data processing system and dynamically updates the priority of the first prefetch stream based on the activity (or lack thereof). Low priority streams may be discarded. The PE also schedules prefetching in a priority-based scheduling sequence that corresponds to the priority currently assigned to the scheduled active streams. When there are no prefetches within a prefetch queue, the PE triggers the active streams to provide prefetches for issuing. The PE determines when to throttle prefetching, based on the current usage level of resources relevant to completing the prefetch.08-06-2009
20090049445METHOD, SYSTEM AND APPARATUS FOR TASK PROCESSING IN DEVICE MANAGEMENT - The disclosure provides a method, system and apparatus for task processing in device management so that a scheduled task may be triggered and executed normally, according to a predetermined triggering condition when the execution of the task is affected by a state of a terminal device or an operation of the terminal device. The method according to the invention includes steps of determining a scheduled task when the execution of the scheduled task is affected by a state of a terminal device or an operation of the terminal device; and prompting a user to select a processing manner for the scheduled task, and processing the affected scheduled task according to the user's selection, or processing the scheduled task in a predetermined processing manner.02-19-2009
20110145828STREAM DATA PROCESSING APPARATUS AND METHOD - A stream data processing apparatus creates a plurality of partition data on the basis of stream data, and distributes the partition data to a plurality of computers. Specifically, the stream data processing apparatus acquires from the stream data a data element group whose number of data elements is based on the processing capability of the destination computer for the partition data, and decides an auxiliary data part of this data element group based on a predetermined value. The stream data processing apparatus creates partition data that include the acquired data element group and END data. The data element group is configured from the auxiliary data part and a result usage data part.06-16-2011
20110145827MAINTAINING A COUNT FOR LOCK-FREE LINKED LIST STRUCTURES - The present invention extends to methods, systems, and computer program products for maintaining a count for lock-free stack access. A numeric value representative of the total count of nodes in a linked list is maintained at the head node for the linked list. Commands for pushing and popping nodes appropriately update the total count at a new head node when nodes are added to and removed from the linked list. Thus, determining the count of nodes in a linked list is an order 1 (or O(1)) operation, and remains constant even when the size of a linked list changes.06-16-2011
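A single-threaded sketch of the bookkeeping idea only: the head node carries the running count, so push and pop install a new head with an updated count and a reader gets the size in O(1). The real mechanism depends on atomic compare-and-swap of the head pointer, which this illustration does not attempt; node layout and names are hypothetical.

```python
from collections import namedtuple

# Each node carries the value and a link; the head additionally records
# the total count of nodes currently in the list.
Node = namedtuple("Node", ["value", "next", "count"])

def push(head, value):
    new_count = (head.count if head else 0) + 1
    return Node(value, head, new_count)        # new head carries updated count

def pop(head):
    if head is None:
        return None, None
    rest = head.next
    if rest is not None:
        rest = Node(rest.value, rest.next, head.count - 1)
    return head.value, rest

def count(head):
    return head.count if head else 0           # O(1): read it off the head

head = None
for v in (1, 2, 3):
    head = push(head, v)
print(count(head))          # 3
value, head = pop(head)
print(value, count(head))   # 3 2
```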
20090064152SYSTEMS, METHODS AND COMPUTER PRODUCTS FOR CROSS-THREAD SCHEDULING - Systems, methods and computer products for cross-thread scheduling. Exemplary embodiments include a cross thread scheduling method for compiling code, the method including scheduling a scheduling unit with a scheduler sub-operation in response to the scheduling unit being in a non-multithreaded part of the code and scheduling the scheduling unit with a cross-thread scheduler sub-operation in response to the scheduling unit being in a multithreaded part of the code.03-05-2009
20090055825WORKFLOW ENGINE SYSTEM AND METHOD - Provided is a workflow engine for managing data. More specifically, the workflow engine includes a receiving subsystem that is operable to receive data. An environment evaluating subsystem is also provided and is operable to evaluate an environment and determine at least one environmental parameter. A data evaluating system is in communication with the receiving subsystem and the environment evaluating subsystem. The data evaluating system is operable to determine at least one data parameter from the received data and to receive the environmental parameter. The data evaluating system will evaluate the data parameter and environment parameter and select at least one appropriate workflow rule for use in establishing a workflow job operation for execution by a job operation subsystem. An associated method of use is also provided.02-26-2009
20090055826Multicore Processor Having Storage for Core-Specific Operational Data - An integrated circuit includes a plurality of processor cores and a readable non-volatile memory that stores information expressive of at least one operating characteristic for each of the plurality of processor cores. Also disclosed is a method to operate a data processing system, where the method includes providing a multicore processor that contains a plurality of processor cores and a readable non-volatile memory that stores information, determined during a testing operation, that is indicative of at least a maximum operating frequency for each of the plurality of processor cores. The method further includes operating a scheduler coupled to an operating system and to the multicore processor, where the scheduler is operated to be responsive at least in part to information read from the memory to schedule the execution of threads to individual ones of the processor cores for a more optimal usage of energy.02-26-2009
20090055827POLLING ADAPTER PROVIDING HIGH PERFORMANCE EVENT DELIVERY - An apparatus and method for improving event delivery efficiency in a polling adapter system is configured to poll an enterprise information system (EIS) to obtain a list of events occurring in the EIS. Each event may be associated with an object key. These events may then be allocated to multiple delivery lists wherein events associated with the same object key are allocated to the same delivery list. Multiple delivery threads may then be generated, with each delivery thread being associated with a delivery list. Each delivery thread is configured to retrieve, from the EIS, events listed in the delivery list associated with the thread and deliver the events to a client.02-26-2009
20120079486INTEGRATION OF DISSIMILAR JOB TYPES INTO AN EARLIEST DEADLINE FIRST (EDF) SCHEDULE - A system for inserting jobs into a scheduler of a processor includes the processor and the scheduler. The processor executes instructions related to a plurality of jobs. The scheduler implements an earliest deadline first (EDF) scheduling model. The scheduler also receives a plurality of jobs from an EDF schedule. The scheduler also receives a separate job from a source other than the EDF schedule. The separate job has a fixed scheduling requirement. The separate job also may be a short duration sporadic job. The scheduler also inserts the separate job into an execution plan of the processor in response to a determination that an available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job.03-29-2012
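A simplified admission check in the spirit of the abstract: before inserting a fixed-deadline job from outside the EDF schedule, verify that the processor's spare capacity up to that deadline can absorb the job's cost. This is a sufficiency test written only for illustration; the job representation and the accounting are assumptions, not the application's execution-plan logic.

```python
def can_insert_job(edf_jobs, extra_cost, extra_deadline, now=0.0):
    """Check whether a short, fixed-deadline job fits into the spare
    capacity of an EDF plan.  Each EDF job is (remaining_cost, deadline)."""
    # Demand of EDF jobs that must finish no later than the extra job's deadline.
    demand = sum(cost for cost, deadline in edf_jobs if deadline <= extra_deadline)
    available = (extra_deadline - now) - demand
    return available >= extra_cost

plan = [(2.0, 10.0), (3.0, 20.0)]        # (cost, deadline) pairs
print(can_insert_job(plan, extra_cost=1.5, extra_deadline=10.0))  # True
print(can_insert_job(plan, extra_cost=9.0, extra_deadline=10.0))  # False
```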
20090083744INFORMATION WRITING/READING SYSTEM, METHOD AND PROGRAM - An information writing/reading system includes a thread scheduler unit configured to control a sequence of execution for a plurality of threads, a thread execution unit, a device driver unit, a disk mechanism, an end time estimation unit configured to estimate an end time of execution of an issued write command, and a command management unit, wherein the thread scheduler unit is configured to temporarily suspend execution of at least one read thread of the plurality of threads if the command management unit determines that an estimated end time of execution of the issued write command is greater than an end time designated by the issued write command.03-26-2009
20080263552MULTITHREAD PROCESSOR AND METHOD OF SYNCHRONIZATION OPERATIONS AMONG THREADS TO BE USED IN SAME - The Thread Data Base 10-23-2008
20080263554Method and System for Scheduling User-Level I/O Threads - The present invention is directed to a user-level thread scheduler that employs a service that propagates at the user level, continuously as it gets updated in the kernel, the kernel-level state necessary to determine whether an I/O operation would block. In addition, the user-level thread scheduler uses systems that propagate, at the user level, other types of information related to the state and content of active file descriptors. Using this information, the user-level thread package determines when I/O requests can be satisfied without blocking and implements pre-defined scheduling policies.10-23-2008
20080263551OPTIMIZATION AND UTILIZATION OF MEDIA RESOURCES - Method for scheduling a new backup job within a backup application to optimize a utilization of a media resource of said backup application. The backup application includes one or more previously scheduled backup jobs. The backup application calculates a current load of the media resource as a function of the previously scheduled backup jobs and the media resource and predicts a load value for the new backup job as a function of job parameters associated with the new backup job. Then, the backup application schedules the new backup job as a function of the calculated current load and the predicted load value such that the resulting load on the media resource will yield a minimum peak percentage utilization of the media resource. Alternatively, the backup application schedules the new backup job and previously scheduled backup jobs as a function of the calculated current load and the predicted load value such that the resulting load on the media resource will yield a minimum peak percentage utilization of the media resource.10-23-2008
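A small sketch of the placement idea described here: given the current per-slot load on the media resource and the predicted per-slot load of the new backup job, pick the start slot that minimizes the resulting peak utilization. Slotted time, the data layout, and the exhaustive scan are illustrative simplifications, not the application's scheduler.

```python
def schedule_backup(existing_load, predicted_load, slots):
    """Pick the start slot for a new backup that minimizes the resulting
    peak load.  existing_load[t] is the current load per slot;
    predicted_load lists the per-slot load the new job would add."""
    best_start, best_peak = None, float("inf")
    span = len(predicted_load)
    for start in range(slots - span + 1):
        trial = list(existing_load)
        for i, load in enumerate(predicted_load):
            trial[start + i] += load
        peak = max(trial)
        if peak < best_peak:
            best_start, best_peak = start, peak
    return best_start, best_peak

existing = [30, 80, 50, 20, 10, 60]
print(schedule_backup(existing, predicted_load=[25, 25], slots=6))  # (2, 80)
```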
20080263550A SYSTEM AND METHOD FOR SCHEDULED DISTRIBUTION OF UPDATED DOCUMENTS - The subject application is directed to a system and method for scheduled distribution of updated documents. Document data corresponding to at least one electronic document associated with a meeting is first stored in an associated data storage. Next, identification data representing each invitee to the meeting is stored in the storage. Event data corresponding to the scheduled timing of the meeting event is then stored in the associated data storage. Document processing operation data, corresponding to one or more document processing operations to be performed on the received document data, is also stored in the associated data storage. The stored document data is then retrieved from the data storage at an appointed time in accordance with the stored event data. At least one of the associated document processing operations is then commenced on the retrieved document data based upon the stored document processing operation data.10-23-2008
20110231849Optimizing Workflow Engines - Techniques for implementing a workflow are provided. The techniques include merging a workflow to create a virtual graph, wherein the workflow comprises two or more directed acyclic graphs (DAGs), mapping each of one or more nodes of the virtual graph to one or more physical nodes, and using a message passing scheme to implement a computation via the one or more physical nodes.09-22-2011
20110231851ROLE-BASED MODERNIZATION OF LEGACY APPLICATIONS - Methods, systems, and techniques for role-based modernization of legacy applications are provided. Example embodiments provide a Role-Based Modernization System (“RBMS”), which enables the reorganization of (menu-based) legacy applications by role as a method of modernization and enables user access to such modernized applications through roles. In addition the RBMS supports the ability to enhance such legacy applications by blending them with non-legacy tasks and functions in a user-transparent fashion. In one embodiment, the RBMS comprises a client-side javascript display and control module and a java applet host interface and a server-side emulation control services module. These components cooperate to uniformly present legacy and non-legacy tasks that have been reorganized according to role modernization techniques.09-22-2011
20110231852METHOD AND SYSTEM FOR SCHEDULING MEDIA EXPORTS - Methods, systems and software components are described for exporting media in a library according to a schedule. At a first time, a user provides and a system receives export identification data including data identifying one or more media from the library to be exported and data identifying a second time at which the one or more media is scheduled to be exported. The first data may be a list of media identified by media identifiers and related data or may be a set of one or more criteria which are evaluated to determine which media in the library should be exported at the scheduled time. The export identification data is stored in a relational database table. At the second, scheduled time, the stored export identification data is used to select the one or more media to be exported and to export the selected media from the library.09-22-2011
20090100430METHOD AND SYSTEM FOR A TASK AUTOMATION TOOL - Disclosed is a method and system for receiving a task list containing a task, determining if the task must be executed based on a context of a business scenario and executing the task. After executing the task, a result of execution of the task is analyzed based on the context of the business scenario and an operation to be performed is determined based on the result of the execution.04-16-2009
20090222826System and Method for Managing the Deployment of an Information Handling System - A system and method for automated deployment of an information handling system are disclosed. A method for managing the deployment of an information handling system may include executing a deployment application on an information handling system, the deployment application including one or more tasks associated with the deployment of the information handling system. The method may further include automatically determining for a particular task whether an execution time for the particular task is within a predetermined range of execution times. The method may further include automatically performing an error-handling task in response to determining that the execution time for the particular task is not within the predetermined range of execution times.09-03-2009
20090204970DISTRIBUTED DOCUMENT HANDLING SYSTEM - Disclosed is a networked reproduction system comprising connected scanners, printers and servers. A reproduction job to be carried out includes a number of subtasks. For the execution of these subtasks, services distributed over the network are available. A service management system selects appropriate services and links them to form paths that are able to fulfill the reproduction job. The user may define additional constraints that apply to the job. A path, optimal with respect to constraints, is selected.08-13-2009
20090235261IMAGE PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS, AND CONTROL METHOD OF IMAGE PROCESSING APPARATUS - An image processing system capable of enhancing the reliability of secret leakage prevention, which includes an image processing apparatus, an access control apparatus that issues authority information on each user, and a job history management apparatus that manages job histories. Authority information on a user logging in the image processing apparatus is acquired. With reference to the authority information, whether or not a job for which an execution instruction is given by the user is executable is determined. If executable, the job is executed. If the job is not executable, whether or not the job is executable on condition that a job history is transmitted to the job history management apparatus is further determined. If conditionally executable, the job is executed, and a history of the executed job is acquired and transmitted to the job history management apparatus.09-17-2009
20090254912SYSTEM AND METHOD FOR BUILDING APPLICATIONS, SUCH AS CUSTOMIZED APPLICATIONS FOR MOBILE DEVICES - A system and method for building applications, such as applications that cause a mobile device to perform a task, is described. In some examples, the system provides one or more plugins, a framework for the plugins, and configures the plugins to build a customized application for a mobile device. The plugins may include code configured to perform a task, display one or more pages associated with performance of the task, perform a transaction during performance of the task, and so on.10-08-2009
20090254907METHOD FOR MULTITHREADING AN APPLICATION USING PARTITIONING TO ALLOCATE WORK TO THREADS - A method for assigning work to a plurality of threads using a primitive data element to partition a work load into a plurality of partitions. A first partition is assigned to a first thread and a second partition is assigned to a second thread of the plurality of threads. A method for improving the concurrency of a multithreaded program by replacing a queue structure storing a plurality of tasks to be performed by a plurality of threads with a partition function. A computer system including a processor unit configured to run a plurality of threads and a system memory coupled to the processor unit that stores a multithreaded program. The multithreaded program workload is partitioned into a plurality of partitions using a primitive data element and a first partition of the plurality of partitions is assigned to a first thread of the plurality of threads for execution.10-08-2009
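A brief illustration of replacing a shared task queue with a partition function: the workload is split into contiguous partitions and each partition is handed to one thread. Using list slicing as the "primitive data element" and summing squares as the per-partition work are assumptions made only for this sketch.

```python
import threading

def partition(items, num_threads):
    """Split a workload into contiguous partitions, one per thread."""
    size, rem = divmod(len(items), num_threads)
    parts, start = [], 0
    for i in range(num_threads):
        end = start + size + (1 if i < rem else 0)
        parts.append(items[start:end])
        start = end
    return parts

def worker(part, results, index):
    results[index] = sum(x * x for x in part)   # example per-partition work

items = list(range(1000))
parts = partition(items, 4)
results = [0] * len(parts)
threads = [threading.Thread(target=worker, args=(p, results, i))
           for i, p in enumerate(parts)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results) == sum(x * x for x in items))   # True
```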
20120198458Methods and Systems for Synchronous Operation of a Processing Device - Embodiments of the present invention provide a method of synchronous operation of a first processing device and a second processing device. The method includes executing a process on the first processing device, responsive to a determination that execution of the process on the first device has reached a serial-parallel boundary, passing an execution thread of the process from the first processing device to the second processing device, and executing the process on the second processing device.08-02-2012
20090222828MANAGEMENT PLATFORM AND ASSOCIATED METHOD FOR MANAGING SMART METERS - The present invention relates to a management platform for monitoring and managing one or more smart meters. The management platform comprises means for communicating with smart meters and a workflow handler for executing a workflow. A workflow specifies a process for management of the smart meters.09-03-2009
20090222825DATA RACE DETECTION IN A CONCURRENT PROCESSING ENVIRONMENT - A method for detecting race conditions in a concurrent processing environment is provided. The method comprises implementing a data structure configured for storing data related to at least one task executed in a concurrent processing computing environment, each task represented by a node in the data structure; and assigning to a node in the data structure at least one of a task number, a wait number, and a wait list; wherein the task number uniquely identifies the respective task, wherein the wait number is calculated based on a segment number of the respective task's parent node, and wherein the wait list comprises at least an ancestor's wait number. The method may further comprise monitoring a plurality of memory locations to determine if a first task accesses a first memory location, wherein said first memory location was previously accessed by a second task.09-03-2009
20080271026Systems and Media for Controlling Temperature in a Computer System - Systems and media for controlling temperature of a system are disclosed. More particularly, hardware, software and/or firmware for controlling the temperature of a computer system are disclosed. Embodiments may include receiving component temperatures for a group of components and selecting a component to perform an activity based at least partially on the component temperatures. In one embodiment, the lowest temperature component may be selected to perform the activity. Other embodiments may provide for determining an average temperature of the components, and if the average temperature exceeds a threshold, delaying or reducing the performance of the components. In some embodiments, components may include computer processors, memory modules, hard drives, etc.10-30-2008
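A compact sketch of the two decisions the abstract mentions: pick the coolest component for the next activity, but defer or throttle when the average temperature exceeds a threshold. The temperature values and the threshold are hypothetical.

```python
def pick_component(temperatures, avg_threshold):
    """Return the index of the coolest component, or None when the average
    temperature says the activity should be delayed or reduced."""
    average = sum(temperatures) / len(temperatures)
    if average > avg_threshold:
        return None                       # delay or reduce performance
    return min(range(len(temperatures)), key=temperatures.__getitem__)

print(pick_component([62.0, 55.5, 70.2], avg_threshold=75.0))  # 1 (coolest)
print(pick_component([82.0, 85.5, 90.2], avg_threshold=75.0))  # None (too hot)
```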
20090222827CONTINUATION BASED DECLARATIVE DEFINITION AND COMPOSITION - Declarative definition and composition of activities of a continuation based runtime. When formulating such a declarative activity of a continuation-based runtime, the activity may be formulated in accordance with a declarative activity schema and include a properties portion that declaratively defines one or more interface parameters of the declarative activity, and a body portion that declaratively defines an execution behavior of the declarative activity. The declarative activities may be hierarchically structured such that a parent declarative activity may use one or more child activities to define its behavior, where one or more of the child activities may also be defined declaratively.09-03-2009
20090235259Synchronous Adaption of Asynchronous Modules - A program disposed on a computer readable medium, having a main program with a first routine for issuing commands in an asynchronous manner and a second routine for determining whether the commands have been completed in an asynchronous manner. An auxiliary program adapts the main program to behave in a synchronous manner, by receiving control from the first routine, waiting a specified period of time with a wait routine, passing control to the second routine to determine whether any of the commands have been completed during the specified period of time, receiving control back from the second routine, and determining whether all of the commands have been completed. When all of the commands have not been completed, then the auxiliary program passes control back to the wait routine. When all of the commands have been completed, then the auxiliary program ends.09-17-2009
20090222829METHOD AND APPARATUS FOR DECOMPOSING I/O TASKS IN A RAID SYSTEM - A data access request to a file system is decomposed into a plurality of lower-level I/O tasks. A logical combination of physical storage components is represented as a hierarchical set of objects. A parent I/O task is generated from a first object in response to the data access request. A child I/O task is generated from a second object to implement a portion of the parent I/O task. The parent I/O task is suspended until the child I/O task completes. The child I/O task is executed in response to an occurrence of an event that a resource required by the child I/O task is available. The parent I/O task is resumed upon an event indicating completion of the child I/O task. Scheduling of any child I/O task is not conditional on execution of the parent I/O task, and a state diagram regulates the child I/O tasks.09-03-2009
20090254910PRINTING SYSTEM SCHEDULER METHODS AND SYSTEMS - Provided are printing system scheduler methods and systems. Specifically, a shadow scheduler is disclosed which provides alternative modular printing system configurations, relative to a base modular printing system configuration.10-08-2009
20090254908CUSTOM SCHEDULING AND CONTROL OF A MULTIFUNCTION PRINTER - A method and system for implementing custom scheduling policies including making alterations to internal task scheduling policies or firmware operating within the MFP throughout the lifetime of the MFP. Internal task scheduling policy alterations can be made either remotely or on-site at a customer location. Custom scheduling policies can be implemented for different periods of time. The MFP includes a task run-time controller to receive and process the internal task scheduling policy alterations. The task run-time controller includes a task tuner, which may implement the internal task scheduling policy alterations responsive to usage characteristics of the MFP.10-08-2009
20090249346IMAGE FORMING APPARATUS, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An image forming apparatus is provided. The image forming apparatus includes: a job reception unit configured, in response to an execution request for a job, to receive a second program in which an insertion process for a first program that executes the job is described or to receive identification information of the second program; an application unit configured to apply the second program to the first program loaded in a memory; and a job execution unit configured to execute the job based on the first program to which the second program is applied.10-01-2009
20130219400ENERGY-AWARE COMPUTING ENVIRONMENT SCHEDULER - A method includes receiving a process request, identifying a current state of a device in which the process request is to be executed, calculating a power consumption associated with an execution of the process request, and assigning an urgency for the process request, where the urgency corresponds to a time-variant parameter to indicate a measure of necessity for the execution of the process request. The method further includes determining whether the execution of the process request can be delayed to a future time or not based on the current state, the power consumption, and the urgency, and causing the execution of the process request, or causing a delay of the execution of the process request to the future time, based on a result of the determining.08-22-2013
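A toy decision function in the spirit of this abstract, combining device state, estimated power consumption, and a time-variant urgency to choose between executing now and deferring. The cutoff values and the decision rule are illustrative assumptions, not the claimed method.

```python
def decide_execution(current_state, power_cost, urgency,
                     idle_power_cost=1.0, urgency_cutoff=0.7):
    """Decide between running now and deferring, based on device state,
    estimated power cost, and a time-variant urgency in [0, 1]."""
    if urgency >= urgency_cutoff:
        return "run_now"                  # necessity outweighs energy concerns
    if current_state == "active" or power_cost <= idle_power_cost:
        return "run_now"                  # cheap to run in the current state
    return "defer"                        # wait for a cheaper or busier moment

print(decide_execution("sleep", power_cost=3.5, urgency=0.2))   # defer
print(decide_execution("sleep", power_cost=3.5, urgency=0.9))   # run_now
print(decide_execution("active", power_cost=3.5, urgency=0.2))  # run_now
```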
20130219399MECHANISM FOR INSTRUCTION SET BASED THREAD EXECUTION OF A PLURALITY OF INSTRUCTION SEQUENCERS - In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application level program. A first user-level thread is run on the second instruction sequencer and contains one or more user level instructions. A first user level instruction has at least 1) a field that makes reference to one or more instruction sequencers or 2) implicitly references with a pointer to code that specifically addresses one or more instruction sequencers when the code is executed.08-22-2013
20130219397Methods and Apparatus for State Objects in Cluster Computing - Embodiments of a mobile state object for storing and transporting job metadata on a cluster computing system may use a database as an envelope for the metadata. A state object may include a database that stores the job metadata and wrapper methods. A small database engine may be employed. Since the entire database exists within a single file, complex, extensible applications may be created on the same base state object, and the state object can be sent across the network with the state intact, along with history of the object. An SQLite technology database engine, or alternatively other single file relational database engine technologies, may be used as the database engine. To support the database engine, compute nodes on the cluster may be configured with a runtime library for the database engine via which applications or other entities may access the state file database.08-22-2013
20120198460Deadlock Detection Method and System for Parallel Programs - A deadlock detection method and computer system for parallel programs. A determination is made that a lock of the parallel programs is no longer used in a running procedure of the parallel programs. A node corresponding to the lock that is no longer used, and edges relating to the lock that is no longer used, are deleted from a lock graph corresponding to the running procedure of the parallel programs in order to acquire an updated lock graph. The lock graph is constructed according to a lock operation of the parallel programs. Deadlock detection is then performed on the updated lock graph.08-02-2012
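A minimal sketch of the lock-graph maintenance the abstract describes: when a lock is no longer used, its node and related edges are removed before deadlock detection runs on the updated graph. The adjacency-set representation and the DFS cycle check are assumptions for illustration only.

```python
def prune_lock(lock_graph, dead_lock_node):
    """Remove a lock that is no longer used, together with its edges."""
    lock_graph.pop(dead_lock_node, None)
    for edges in lock_graph.values():
        edges.discard(dead_lock_node)

def has_deadlock(lock_graph):
    """Cycle detection over the directed lock graph via DFS coloring."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in lock_graph}

    def visit(node):
        color[node] = GREY
        for nxt in lock_graph.get(node, ()):
            if color.get(nxt, WHITE) == GREY:
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(lock_graph))

graph = {"A": {"B"}, "B": {"C"}, "C": {"A"}}   # A -> B -> C -> A: a cycle
print(has_deadlock(graph))          # True
prune_lock(graph, "C")              # lock C is no longer used anywhere
print(has_deadlock(graph))          # False on the updated graph
```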
20120198457Method and apparatus for triggering workflow deployment and/or execution - A system and method for triggering deployment of a workflow are provided. The method includes issuing, to a first device (e.g., a server) from application software executing on a second device (e.g., a client computer), an instruction to execute a workflow previously deployed at the first device. The workflow is formed as a function of information associated with a graphical representation of the workflow. The application software may be, for example, software for one or more word-processing, spreadsheet, database, email, instant messenger, presentation, browser, calendar, organizer, media, image-display applications; file management programs and/or operating system shells. Alternatively, the application software may be or include a module associated with such application software. This module may include or be formed as or from one or more plug-ins, add-ons, applets, shared libraries, and/or extensions.08-02-2012
20100153956Multicore Processor And Method Of Use That Configures Core Functions Based On Executing Instructions - A multiprocessor system having plural heterogeneous processing units schedules instruction sets for execution on a selected of the processing units by matching workload processing characteristics of processing units and the instruction sets. To establish an instruction set's processing characteristics, the homogeneous instruction set is executed on each of the plural processing units with one or more performance metrics tracked at each of the processing units to determine which processing unit most efficiently executes the instruction set. Instruction set workload processing characteristics are stored for reference in scheduling subsequent execution of the instruction set.06-17-2010
20100153954Apparatus and Methods for Adaptive Thread Scheduling on Asymmetric Multiprocessor - Techniques for adaptive thread scheduling on a plurality of cores for reducing system energy are described. In one embodiment, a thread scheduler receives leakage current information associated with the plurality of cores. The leakage current information is employed to schedule a thread on one of the plurality of cores to reduce system energy usage. On chip calibration of the sensors is also described.06-17-2010
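A small illustration of leakage-aware placement: among idle cores, choose the one whose reported leakage current is lowest. The sensor values and the idle/busy bookkeeping are hypothetical stand-ins for whatever the scheduler actually receives.

```python
def choose_core(leakage_ma, busy):
    """Pick the idle core with the lowest reported leakage current (mA)."""
    candidates = [i for i in range(len(leakage_ma)) if not busy[i]]
    if not candidates:
        return None
    return min(candidates, key=lambda i: leakage_ma[i])

leakage = [12.5, 9.8, 15.1, 10.2]     # hypothetical per-core sensor readings
busy    = [False, True, False, False]
print(choose_core(leakage, busy))     # 3: core 1 is cheaper but busy
```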
20090254911INFORMATION PROCESSING APPARATUS - An information processing apparatus having a storage that stores identification information for identifying an event occurring in a forefront module and completion information for identifying a module having completed the corresponding process, an identifier that identifies, based on the completion information, an event for which any module has not completed the process, an instructor that provides the identification information related to the event identified by the identifier to the forefront module, and instructs the forefront module to execute the process related to the identified event. Each of the modules operates as a determiner that reads the completion information corresponding to the received identification information, and determines whether to skip the process of its own module, and a deliverer that delivers the identification information to the immediately succeeding module in a case where the determiner determines to skip the process of its own module.10-08-2009
20080313637PREDICTION-BASED DYNAMIC THREAD POOL MANAGEMENT METHOD AND AGENT PLATFORM USING THE SAME - The present invention relates to a prediction-based dynamic thread pool management method and an agent platform using the same. A prediction-based dynamic thread pool management method according to the present invention includes: (a) calculating a thread variation to a variation of the number of threads at a time t12-18-2008
20080313635JOB ALLOCATION METHOD FOR DOCUMENT PRODUCTION - Methods and systems of processing print jobs are disclosed. A feasible route for processing each of a plurality of jobs is determined. For each feasible route, the time to process the job via the feasible route is determined. Each job is assigned to a first feasible route. A first objective function value is determined using a time to process each job assigned to each autonomous cell. A job is selected. A second feasible route is selected for the selected job. A second objective function value is determined by substituting the second feasible route for the first feasible route for the selected job. If the first value plus a threshold exceeds the second value, the second value replaces the first value, and the second feasible route replaces the first feasible route. Selection and substitution are repeated for each job. The jobs are then processed.12-18-2008
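A compact sketch of the substitution loop in this abstract: start from a first feasible route per job, try an alternative route, and accept it when the first objective value plus a threshold exceeds the second. The job representation, the makespan objective, and a zero threshold are assumptions made for the example.

```python
def assign_routes(jobs, objective):
    """Greedy improvement over per-job route choices.  Each job holds a list
    of (route, processing_time) alternatives; `objective` maps the chosen
    processing times to a single value to minimize."""
    chosen = [job["routes"][0] for job in jobs]          # start with the first feasible route
    best_value = objective([t for _, t in chosen])
    threshold = 0.0                                      # accept only strict improvements here
    for idx, job in enumerate(jobs):
        for route in job["routes"][1:]:
            trial = list(chosen)
            trial[idx] = route
            value = objective([t for _, t in trial])
            if best_value + threshold > value:           # second value replaces the first
                chosen, best_value = trial, value
    return chosen, best_value

jobs = [{"routes": [("cellA", 5.0), ("cellB", 3.0)]},
        {"routes": [("cellA", 4.0), ("cellB", 6.0)]}]
makespan = max                                           # objective: longest processing time
print(assign_routes(jobs, makespan))   # ([('cellB', 3.0), ('cellA', 4.0)], 4.0)
```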
20090249348METHOD AND APPARATUS FOR OPERATING A THREAD - A method and apparatus for operating a thread are disclosed. The method includes: receiving a thread operation request that carries a thread operation ID and thread information related operation; and operating a thread according to the operation request. The thread in the embodiments of the present disclosure is independent of the actual content. Therefore, the thread file can be operated according to the requirements of the user, and thus the user experience is improved.10-01-2009
20090249345Operating System Fast Run Command - A fast sub-process is provided in an operating system for a digital signal processor (DSP). The fast sub-process executes a sub-process without a kernel first determining whether the sub-process resides in an internal memory, as long as certain conditions have been satisfied. One of the conditions is that a programmer determines that the sub-process has been previously loaded into internal memory and executed. Another condition is that the programmer has ensured that a process calling the sub-process has not called any other sub-process between the last execution and the current execution request. Yet another condition is that the programmer ensures that the system has not called another overlapping sub-process between the last execution and the current execution request.10-01-2009
20090249344METHOD AND APPARATUS FOR THREADED BACKGROUND FUNCTION SUPPORT - The present invention provides a computer implemented method and apparatus for a built-in function of a shell to execute in a thread of an interactive shell process. The data processing system receives a request to execute the built-in function. The data processing system determines that the request includes a thread creating indicator. The data processing system schedules a thread to execute the built-in function, in response to a determination that the request includes the thread creating indicator, wherein the thread is controlled by the interactive shell process and shares an environment of the interactive shell process. The data processing system declares a variable based on at least one instruction of the built-in function. Finally, the data processing system may access the variable.10-01-2009
20100162251SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR CLASSIFYING PROBLEM QUERIES TO REDUCE EXCEPTION PROCESSING - A system, method, and computer-readable medium that facilitate classification of database requests as problematic based on estimated processing characteristics of the request are provided. Estimated processing characteristics may include estimated skew including central processing unit skew and input/output operation skew, central processing unit duration per input/output operation, and estimated memory usage. The estimated processing characteristics are made on a request step basis. The request is classified as problematic responsive to determining one or more of the estimated characteristics of a request step exceed a corresponding threshold. In this manner, mechanisms for predicting bad query behavior are provided. Workload management of those requests may then be more successfully provided through workload throttles, filters, or even a more confident exception detection that correlates with the estimated bad behavior.06-24-2010
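A small sketch of per-step classification against thresholds, as the abstract describes for skew, CPU-per-I/O, and memory estimates; the metric names, threshold values, and dict layout are hypothetical.

```python
THRESHOLDS = {"cpu_skew": 2.0, "io_skew": 2.0, "cpu_per_io": 5.0, "memory_mb": 4096}

def classify_request(step_estimates):
    """Flag a request as problematic if any step's estimated characteristic
    exceeds its threshold; returns the offending (step, metric) pairs."""
    problems = []
    for step, estimates in enumerate(step_estimates):
        for metric, value in estimates.items():
            limit = THRESHOLDS.get(metric)
            if limit is not None and value > limit:
                problems.append((step, metric))
    return problems

steps = [{"cpu_skew": 1.2, "io_skew": 0.9, "cpu_per_io": 1.1, "memory_mb": 800},
         {"cpu_skew": 3.4, "io_skew": 1.1, "cpu_per_io": 7.2, "memory_mb": 1500}]
print(classify_request(steps))   # [(1, 'cpu_skew'), (1, 'cpu_per_io')]
```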
20080307424Scheduling Method For Polling Device Data - A dispatching method for polling device data. The method comprises: sorting managed devices according to their types, sorting various types of data of each device so as to form different modules, and assigning a priority attribute and a polling period attribute to each module; dividing the managed devices into two sets: one set consisting of devices to be polled and the other set consisting of devices whose connection states need to be detected; and polling each module in the set consisting of devices to be polled according to its priority and polling period periodically. Different polling periods can be set and different polling policies can be applied according to data changeability. Polling policies can be changed in real time and flexibly based on the condition of devices.12-11-2008
20080307421FLOW PROCESS EXECUTION METHOD, APPARATUS AND PROGRAM - A flow process executing apparatus receives an instruction specifying a position in a first flow process description document. When the process reaches the specified position during execution of a flow process in accordance with the first flow process description document, the flow process executing apparatus stops the flow process in accordance with the first flow process description document, and resumes the stopped flow process in accordance with a second flow process description document.12-11-2008
20080307423Schedule Based Cache/Memory Power Minimization Technique - A system includes a task scheduler (12-11-2008
20080307422SHARED MEMORY FOR MULTI-CORE PROCESSORS - A shared memory for multi-core processors. Network components configured for operation in a multi-core processor include an integrated memory that is suitable for, e.g., use as a shared on-chip memory. The network component also includes control logic that allows access to the memory from more than one processor core. Typical network components provided in various embodiments of the present invention include routers and switches.12-11-2008
20080307420Scheduler Supporting Web Service Invocation - The present invention proposes a method and a corresponding system for scheduling invocation of web services from a central point of control. A scheduler accesses a workload database, which associates an execution agent and a descriptor with each submitted job. The descriptor identifies a desired web service, an address of a corresponding WSDL document, and the actual content of a request message to be passed to the web service. Whenever the job is submitted for execution, the scheduler sends the job's descriptor to the associated agent. In response thereto, the agent downloads the WSDL document that specifies the structure of the messages supported by the web service. The scheduler builds a request message for the web service embedding the desired content into the structure specified in the WSDL document. The agent sends the request message to an endpoint implementing the web service, so as to cause its invocation.12-11-2008
20110113429INCIDENT MANAGEMENT METHOD AND OPERATION MANAGEMENT SERVER - An operation management server includes an incident-job relation specifying unit which, in response to an incident occurring in a business system, refers to an incident table relating the incident to hosts and to a job group definition table from a job management server in order to specify the job and job group to be executed by the host on which the incident occurred; a job execution estimation unit for specifying the job to be reexecuted due to the occurrence of the incident and the unexecuted jobs in the job group; and an impact on job execution calculation unit for determining the impact on job execution, which is the influence of the incident on the business system, by relating the incident to the specified job.05-12-2011
20100162254Apparatus and Method for Persistent Report Serving - A computer-readable medium is configured to receive a report processing request at a hierarchical report processor. The hierarchical report processor includes a parent process and at least one child process executing on a single processing unit, and is configured to process the report processing request as a task on the single processing unit.06-24-2010
20100162253Real-time scheduling method and central processing unit based on the same - A central processing unit (CPU) and a real-time scheduling method applicable in the CPU are disclosed. The CPU may determine a first task set and a second task set from among assigned tasks, schedule the determined first task set in a single core to enable the task to be processed, and schedule the determined second task set in a multi-core to enable the task to be processed.06-24-2010
20090064151METHOD FOR INTEGRATING JOB EXECUTION SCHEDULING, DATA TRANSFER AND DATA REPLICATION IN DISTRIBUTED GRIDS - Scheduling of job execution, data transfers, and data replications in a distributed grid topology are integrated. Requests for job execution for a batch of jobs are received, along with a set of job requirements. The set of job requirements includes data objects needed for executing the jobs, computing resources needed for executing the jobs, and quality of service expectations. Execution sites are identified within the grid for executing the jobs based on the job requirements. Data transfers needed for providing the data objects for executing the batch of jobs are determined, and data for replication is identified. A set of end-points is identified in the distributed grid topology for use in data replication and data transfers. A schedule is generated for data transfer, data replication and job execution in the grid in accordance with global objectives.03-05-2009
20100169887Apparatus and Method for Parallel Processing of A Query - A computer readable storage medium comprises executable instructions to receive a query. A graph is built to represent jobs associated with the query. The jobs are assigned to parallel threads according to the graph.07-01-2010
20100192152INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processing device which has a plurality of process units for performing various kinds of processes includes a detecting unit that detects the processing loads of the process units; a determining unit that determines whether a total amount of the processing loads detected by the detecting unit is equal to or larger than a specific value; a designating unit that designates a process unit having a process state to be controlled, based on the processing loads of the process units detected by the detecting unit, when the determining unit determines that the total amount is equal to or larger than the specific value; a process identifying unit that identifies a process having an execution state to be controlled among processes being performed by the process unit designated by the designating unit; and a control unit that controls the execution state of the process identified by the process identifying unit.07-29-2010
20090300629Scheduling of Multiple Tasks in a System Including Multiple Computing Elements - A method for controlling parallel process flow in a system including a central processing unit (CPU) attached to and accessing system memory, and multiple computing elements. The computing elements (CEs) each include a computational core, local memory and a local direct memory access (DMA) unit. The CPU stores in the system memory multiple task queues in a one-to-one correspondence with the computing elements. Each task queue, which includes multiple task descriptors, specifies a sequence of tasks for execution by the corresponding computing element. Upon programming the computing element with task queue information of the task queue, the task descriptors of the task queue in system memory are accessed. The task descriptors of the task queue are stored in the local memory of the computing element. The accessing and the storing of the data by the CEs is performed using the local DMA unit. When the tasks of the task queue are executed by the computing element, the execution is typically performed in parallel by at least two of the computing elements. The CPU is interrupted respectively by the computing elements only upon their fully executing the tasks of their respective task queues.12-03-2009
20100175065WORKFLOW MANAGEMENT DEVICE, WORKFLOW MANAGEMENT METHOD, AND PROGRAM - This invention is directed to a workflow execution method capable of allocating a necessary license in accordance with the workflow contents and the license states of all task processing devices capable of executing a task, and preferentially utilizing the license in task execution in a cooperative task processing system capable of executing a plurality of tasks for document data as a workflow by a plurality of task processing devices.07-08-2010
20100262968Execution Environment for Data Transformation Applications - The execution environment provides for scalability where components will execute in parallel and exploit various patterns of parallelism. Dataflow applications are represented by reusable dataflow graphs called map components, while the executable version is called a prepared map. Using runtime properties the prepared map is executed in parallel with a thread allocated to each map process. The execution environment not only monitors threads, detects and corrects deadlocks, and logs and controls program exceptions; data input and output ports of the map components are also processed in parallel to take advantage of data partitioning schemes. Port implementation supports multi-state null value tokens to more accurately report exceptions. Data tokens are batched to minimize synchronization and transportation overhead and thread contention.10-14-2010
20100262966MULTIPROCESSOR COMPUTING DEVICE - A computing device includes a first processor configured to operate at a first speed and consume a first amount of power and a second processor configured to operate at a second speed and consume a second amount of power. The first speed is greater than the second speed and the first amount of power is greater than the second amount of power. The computing device also includes a scheduler configured to assign processes to the first processor only if the processes utilize their entire timeslice.10-14-2010
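A minimal sketch of the timeslice-based placement rule in the entry above; the timeslice length and observation window are assumptions, not values from the application:

    # Hypothetical sketch: route a process to the fast (high-power) core only if its
    # recent runs consumed the entire timeslice; otherwise keep it on the slow core.
    TIMESLICE_MS = 10

    def pick_processor(recent_runtimes_ms):
        used_full_slice = all(t >= TIMESLICE_MS for t in recent_runtimes_ms)
        return "fast_core" if used_full_slice else "slow_core"

    print(pick_processor([10, 10, 10]))  # -> fast_core
    print(pick_processor([10, 3, 10]))   # -> slow_core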
20100192151Method for arranging schedules and computer using the same - A method for arranging schedules and a computer using the method are disclosed. The method comprises the steps of: recording a user behavior record in a predetermined time interval; filtering the user behavior record to generate an effective user behavior record; and generating a schedule according to the effective user behavior record.07-29-2010
20080301684Multiple instance management for workflow process models - A first instance and a second instance of an activity of a process model may be executed, the first instance, the second instance, and the activity being associated with activity state data describing one or more states thereof. A co-process associated with the first instance, the second instance, and the activity may be spawned, and the co-process may be executed based on the activity state data.12-04-2008
20090077560Strongly-Ordered Processor with Early Store Retirement - In one embodiment, a processor comprises a retire unit and a load/store unit coupled thereto. The retire unit is configured to retire a first store memory operation responsive to the first store memory operation having been processed at least to a pipeline stage at which exceptions are reported for the first store memory operation. The load/store unit comprises a queue having a first entry assigned to the first store memory operation. The load/store unit is configured to retain the first store memory operation in the first entry subsequent to retirement of the first store memory operation if the first store memory operation is not complete. The queue may have multiple entries, and more than one store may be retained in the queue after being retired by the retire unit.03-19-2009
20090077559System Providing Resources Based on Licensing Contract with User by Correcting the Error Between Estimated Execution Time from the History of Job Execution - A network system includes an application service provider (ASP) which is connected to the Internet and executes an application, and a CPU resource provider which is connected to the Internet and provides a processing service to a particular computational part (e.g., computation intensive part) of the application, wherein: when requesting a job from the CPU resource provider, the application service provider (ASP) sends information about estimated computation time of the job to the CPU resource provider via the Internet; and the CPU resource provider assigns the job by correcting this estimated computation time based on the estimated computation time sent from the application service provider (ASP).03-19-2009
20090077558Methods and apparatuses for heat management in information systems - In some embodiments, an information system is divided into sections, with one or more first computers located in a first section and one or more second computers located in a second section, including a first temperature sensor sensing a temperature condition for the first section and a second temperature sensor sensing a temperature condition for the second section. In some embodiments, when heat distribution determined from the first and second temperature conditions is not in conformance with a predetermined rule for heat distribution, the information system is configured to relocate a portion of the processing load of the first computers to the second computers, or vice versa, for bringing the heat distribution into conformance with the rule. In some embodiments, the effect of other equipment, such as storage system or switches in the sections is also considered, and loads on this equipment may also be relocated between sections.03-19-2009
20090077557METHOD AND COMPUTER FOR SUPPORTING CONSTRUCTION OF BACKUP CONFIGURATION - For a storage system which holds backup data of a first data storage extent in one or more second data storage extents in use of a first backup method, a backup status in a first backup method in a prescribed period is acquired and a first backup performance in a first backup configuration is computed based on this backup status. Meanwhile, a second backup performance in a second backup configuration is estimated based on a prescribed assumption in a prescribed period. Information is outputted based on the computed first backup performance and the estimated second backup performance.03-19-2009
20120174110AMORTIZING COSTS OF SHARED SCANS - Techniques for scheduling a plurality of jobs sharing input are provided. The techniques include partitioning one or more input datasets into multiple subcomponents, analyzing a plurality of jobs to determine which of the plurality of jobs require scanning of one or more common subcomponents of the one or more input datasets, and scheduling a plurality of jobs that require scanning of one or more common subcomponents of the one or more input datasets, facilitating a single scanning of the one or more common subcomponents to be used as input by each of the plurality of jobs.07-05-2012
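A small sketch of the shared-scan grouping idea in the entry above, assuming jobs declare the input partitions they need; all names are hypothetical:

    from collections import defaultdict

    # Hypothetical sketch: a partition required by several jobs is scanned once and
    # its contents are fanned out to every job that declared it.
    def plan_shared_scans(jobs):
        """jobs: dict mapping job name -> set of partition ids it must scan."""
        consumers = defaultdict(list)
        for job, partitions in jobs.items():
            for p in sorted(partitions):
                consumers[p].append(job)
        return consumers  # one scan per partition, shared by all its consumers

    plan = plan_shared_scans({"report_a": {"p1", "p2"}, "report_b": {"p2", "p3"}})
    for partition, consumers_of_p in sorted(plan.items()):
        print(f"scan {partition} once -> feed {consumers_of_p}")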
20100153955SAVING PROGRAM EXECUTION STATE - Techniques are described for managing distributed execution of programs. In at least some situations, the techniques include decomposing or otherwise separating the execution of a program into multiple distinct execution jobs that may each be executed on a distinct computing node, such as in a parallel manner with each execution job using a distinct subset of input data for the program. In addition, the techniques may include temporarily terminating and later resuming execution of at least some execution jobs, such as by persistently storing an intermediate state of the partial execution of an execution job, and later retrieving and using the stored intermediate state to resume execution of the execution job from the intermediate state. Furthermore, the techniques may be used in conjunction with a distributed program execution service that executes multiple programs on behalf of multiple customers or other users of the service.06-17-2010
20100251249DEVICE MANAGEMENT SYSTEM AND DEVICE MANAGEMENT COMMAND SCHEDULING METHOD THEREOF - A device management system and device management scheduling method thereof, in which a server transmits to a client a scheduling context including a device management command and a schedule for the performing of the device management command, and the client generates a device management tree using the device management scheduling context, performs the command when a specific scheduling condition is satisfied, and, if necessary, reports the command performance result to the server, whereby the server performs a device management such as requesting a command to be performed under a specific condition, dynamically varying the scheduling condition, and the like.09-30-2010
20100235840POWER MANAGEMENT USING DYNAMIC APPLICATION SCHEDULING - One embodiment provides a method of managing power in a datacenter having a plurality of servers. A number of policy settings are specified for the datacenter, including a power limit for the datacenter. The power consumption attributable to each of a plurality of applications executable as a job on one or more of the servers is determined. The power consumption attributable to each application may be further qualified according to the type of server on which the application is executed. Having determined the power consumption attributable to various applications executable as jobs, the applications may be executed on the servers as jobs such that the total power consumption attributable to the currently executed jobs remains within the selected datacenter power limit.09-16-2010
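A minimal sketch of admitting jobs against a datacenter power limit as described in the entry above; the power-profile values, the limit, and the greedy admission order are assumptions for illustration:

    # Hypothetical sketch: admit jobs while the summed per-application power estimate
    # (qualified by server type) stays within the datacenter power limit.
    POWER_LIMIT_W = 10_000
    POWER_PROFILE_W = {("indexer", "blade"): 350, ("indexer", "rack"): 500,
                       ("analytics", "rack"): 900}

    def admit_jobs(pending, running_power_w=0):
        """pending: list of (application, server_type) jobs awaiting execution."""
        admitted = []
        for app, server_type in pending:
            cost = POWER_PROFILE_W.get((app, server_type), 0)
            if running_power_w + cost <= POWER_LIMIT_W:
                admitted.append((app, server_type))
                running_power_w += cost
        return admitted, running_power_w

    print(admit_jobs([("analytics", "rack")] * 12))  # only 11 of the 12 fit under 10 kW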
20100235841INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING SAME - There is disclosed an information processing apparatus, and a corresponding method, for executing a workflow having a plurality of steps. The information processing apparatus registers the workflow having a plurality of steps and manages a start parameter indicating a condition for starting each step included in the workflow and an end parameter that is generated at the end of each step. The apparatus determines a second step to follow a first step based on the end parameter of the first step and the managed start parameters.09-16-2010
20100242040SYSTEM AND METHOD OF TASK ASSIGNMENT DISTRIBUTED PROCESSING SYSTEM - A method of task assignment in a distributed processing system including a plurality of processors is proposed. The method of task assignment includes calculating utilities of tasks to be processed in execution units included in each processor and arranging the calculated results in descending order; calculating utility difference values between the execution units included in each processor and outputting a highest difference value; comparing a utility of the task with the output highest difference value; designating the task to be assigned to the execution unit having the lowest utility in a processor in which the highest difference value is generated when the utility of the task is less than or equal to the output highest difference value; repeating the calculating, comparing, and designating in the order of the arranged tasks; and assigning the tasks to the designated targets.09-23-2010
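A rough sketch of the utility-difference assignment loop in the entry above; the data layout and tie-breaking choices are assumptions:

    # Hypothetical sketch: tasks are taken in descending utility order and placed on
    # the lowest-utility execution unit of the processor whose internal utility
    # imbalance is largest, provided the task's utility fits within that imbalance.
    def assign_tasks(task_utils, processors):
        """processors: dict name -> list of per-execution-unit utilities."""
        assignment = {}
        for task, util in sorted(task_utils.items(), key=lambda kv: -kv[1]):
            diffs = {p: max(units) - min(units) for p, units in processors.items()}
            target, max_diff = max(diffs.items(), key=lambda kv: kv[1])
            if util <= max_diff:
                units = processors[target]
                slot = units.index(min(units))
                units[slot] += util
                assignment[task] = (target, slot)
        return assignment

    print(assign_tasks({"t1": 0.2, "t2": 0.1}, {"p0": [0.5, 0.1], "p1": [0.3, 0.3]}))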
20100251246System and Method for Generating Job Schedules - A system and method for generating a test environment schedule containing an order of executing job control language (JCL) jobs in a test computing environment is provided. The system comprises a memory which stores a seed schedule containing a plurality of members having common JCL jobs appropriate for different test environments, with each member containing a plurality of JCL jobs in a predetermined order of execution. The memory also stores a parameter file containing parameters for modifying the seed schedule according to a specific test environment. The system also includes an environment schedule module executable by a processor and is adapted to convert the seed schedule to the test environment schedule to be executed in the specific test environment as specified in the stored parameter file.09-30-2010
20100211953MANAGING TASK EXECUTION - Managing task execution includes: receiving a specification of a plurality of tasks to be performed by respective functional modules; processing a flow of input data using a dataflow graph that includes nodes representing data processing components connected by links representing flows of data between data processing components; in response to at least one flow of data provided by at least one data processing component, generating a flow of messages; and in response to each of the messages in the flow of messages, performing an iteration of a set of one or more tasks using one or more corresponding functional modules.08-19-2010
20100218190PROCESS MAPPING IN PARALLEL COMPUTING - A method of mapping processes to processors in a parallel computing environment where a parallel application is to be run on a cluster of nodes wherein at least one of the nodes has multiple processors sharing a common memory, the method comprising using compiler based communication analysis to map Message Passing Interface processes to processors on the nodes, whereby at least some more heavily communicating processes are mapped to processors within nodes. Other methods, apparatus, and computer readable media are also provided.08-26-2010
20100251247CHANGE MANAGEMENT AUTOMATION TOOL - A change management system for an IT environment or other enterprise level environment may comprise a server comprising memory and a controller. A change management application comprising machine readable instructions may be stored in the memory. The change management application may be arranged to perform the following steps: receive a plurality of work orders via a network to be performed during a maintenance period, concatenate the plurality of work orders to generate a master plan for performing the work orders during the maintenance period, and receive status updates for the work orders during the maintenance period down to the individual step level. A display may display a view of the master plan during the maintenance period. The view may include information related to the work orders and a status of the work orders. The status may be updated automatically based on status updates received by the change management application.09-30-2010
20080250412Cooperative process-wide synchronization - One embodiment relates to a computer-implemented method of concurrently performing a process-wide operation in a multi-threaded process being executed on a computer system so as to result in more efficient performance of the computer system. A plurality of threads of the process concurrently participate in the process-wide operation. Finishing steps of the process-wide operation are performed by a last thread participating in the process-wide operation, regardless of whether the last thread is an initiator thread or a target thread. Other embodiments, aspects, and features are also disclosed.10-09-2008
20090158285APPARATUS AND METHOD FOR CONTROLLING RESOURCE SHARING SCHEDULE IN MULTI-DECODING SYSTEM - An apparatus for controlling a resource sharing schedule in a multi-decoding system including a multi-decoder formed of a plurality of resources, the apparatus including: a storage unit storing status information of the resources and information required in controlling the resource sharing schedule; and a controller, when a source resource requests assignment of a target resource, assigning the target resource, outputting information of the target resource to the source resource, and updating statuses of the resources, wherein the apparatus controls the resource sharing schedule while bidirectionally connected to the resources to share the resources between the multi-decoders. Accordingly, it is possible to reduce the overall decoding time and control the resource usage schedule.06-18-2009
20100138838METHOD FOR EXECUTING SCHEDULED TASK - A scheduled task executing method is used in a computer system and a peripheral device. The computer system has a time generator for generating time information and a memory. When the computer system is in a working state, a user input interface is provided, a scheduled time is set via the user input interface, and the scheduled time is automatically stored in the memory. When the computer system is in a power off state, electricity is continuously supplied to the time generator and the memory. If the time information and the scheduled time comply with a specified relation, a power control signal is generated. In response to the power control signal, the computer is switched from the power off state to the working state. When the computer system is in the working state, the peripheral device is activated to execute a scheduled task item corresponding to the scheduled time.06-03-2010
20100064289INFORMATION PROCESSING METHOD, APPARATUS, AND SYSTEM FOR CONTROLLING COMPUTER RESOURCES, CONTROL METHOD THEREFOR, STORAGE MEDIUM, AND PROGRAM - An operation request from a process or OS for computer resource(s) managed by the OS, such as a file, network, storage device, display screen, or external device, is trapped before access to the computer resource. It is determined whether an access right for the computer resource designated by the trapped operation request is present. If the access right is present, the operation request is transferred to the operating system, and a result from the OS is returned to the request source process. If no access right is present, the operation request is denied, or the request is granted by charging in accordance with the contents of the computer resource.03-11-2010
20110113430INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM STORING PROGRAM - An information processing system includes a setting unit, an obtaining unit, a calculating unit, a display information generating unit, and an updating unit. The setting unit sets activity schedule information indicating an activity schedule of a user in an evaluation target period on the basis of an activity which is selected from among plural activities. The obtaining unit obtains activity information specifying the activity that has been performed by at least one point of time within the evaluation target period. The calculating unit calculates a total environmental load value of the activity in the evaluation target period. The display information generating unit generates display information including the total environmental load value and a target value of an environmental load. The updating unit updates the activity scheduled in the activity schedule information.05-12-2011
20120144393MULTI-ISSUE UNIFIED INTEGER SCHEDULER - A method and apparatus for scheduling execution of instructions in a multi-issue processor. The apparatus includes post wake logic circuitry configured to track a plurality of entries corresponding to a plurality of instructions to be scheduled. Each instruction has at least one associated source address and a destination address. The post wake logic circuitry is configured to drive a ready input indicating an entry that is ready for execution based on a current match input. A picker circuitry is configured to pick an instruction for execution based on the ready input. A compare circuit is configured to determine the destination address for the picked instruction, compare the destination address to the source address for all entries, and drive the current match input.06-07-2012
20090254909Methods and Apparatus for Power-aware Workload Allocation in Performance-managed Computing Environments - An exemplary method of allocating a workload among a set of computing devices includes obtaining at least one efficiency model for each device. The method also includes, for each of a set of allocations of the workload among the devices, determining, for each device, the power consumption for the device to perform the workload allocated to the device by the allocation, the power consumption being determined based on the at least one efficiency model for each device; and determining a total power consumption of the devices. The method also includes selecting an allocation of the workload among the devices based at least in part on the total power consumption of the devices for each allocation. The method also includes implementing the selected allocation of the workload among the devices.10-08-2009
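A minimal sketch of picking the lowest-power allocation using simple linear efficiency models, in the spirit of the entry above; the model form and numbers are assumptions:

    # Hypothetical sketch: evaluate each candidate allocation with a per-device
    # efficiency model (watts as a function of assigned load) and keep the cheapest.
    def device_power(load, idle_w, w_per_unit):
        return idle_w + w_per_unit * load

    def pick_allocation(allocations, models):
        """allocations: list of dicts device -> load; models: device -> (idle_w, w_per_unit)."""
        def total_power(alloc):
            return sum(device_power(load, *models[dev]) for dev, load in alloc.items())
        return min(allocations, key=total_power)

    models = {"a": (50, 2.0), "b": (30, 3.5)}
    print(pick_allocation([{"a": 10, "b": 0}, {"a": 5, "b": 5}], models))  # -> {'a': 10, 'b': 0}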
20090328048Distributed Processing Architecture With Scalable Processing Layers - The present invention is a system on chip architecture having scalable, distributed processing and memory capabilities through a plurality of processing layers. In a preferred embodiment, a distributed processing layer processor comprises a plurality of processing layers, a processing layer controller, and a central direct memory access controller. The processing layer controller manages the scheduling of tasks and distribution of processing tasks to each processing layer. Within each processing layer, a plurality of pipelined processing units (PUs), specially designed for conducting a defined set of processing tasks, are in communication with a plurality of program memories and data memories. One application of the present invention is in a media gateway that is designed to enable the communication of media across circuit switched and packet switched networks. The hardware system architecture of the said novel gateway is comprised of a plurality of DPLPs, referred to as Media Engines that are interconnected with a Host Processor or Packet Engine, which, in turn, is in communication with interfaces to networks. Each of the PUs within the processing layers of the Media Engines are specially designed to perform a class of media processing specific tasks, such as line echo cancellation, encoding or decoding data, or tone signaling.12-31-2009
20090328047DEVICE, SYSTEM, AND METHOD OF EXECUTING MULTITHREADED APPLICATIONS - Device, system, and method of executing multithreaded applications. Some embodiments include a task scheduler to receive application information related to one or more parameters of at least one multithreaded application to be executed by a multi-core processor including a plurality of cores and, based on the application information and based on architecture information related to an arrangement of the plurality of cores, to assign one or more tasks of the multithreaded application to one or more cores of the plurality of cores. Other embodiments are described and claimed.12-31-2009
20090328046METHOD FOR STAGE-BASED COST ANALYSIS FOR TASK SCHEDULING - One embodiment may estimate the processing time of tasks requested by an application by maintaining a state-model for the application. The state model may include states that represent the tasks requested by the application, with each state including the average run-time of each task. In another embodiment, a state model may estimate which task is likely to be requested for processing after the current task is completed by providing edges in the state model connecting the states. Each edge in the state model may track the number of times the application transitions from one task to the next. Over time, data may be gathered representing the percentage of time that each edge is taken from a state node. Given this information, the scheduler may estimate the CPU cost of the next task based on the current state, the most likely transition, and the cost of the predicted next task. The state model may also track multiple users of the application and modify or create the state model as the users traverse through the state model.12-31-2009
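A small sketch of the state-model bookkeeping described in the entry above; the running-average update and class layout are assumptions for illustration:

    from collections import defaultdict

    # Hypothetical sketch: each state records an average run time and each edge counts
    # observed transitions; the predicted cost of the next task is the run time of the
    # most frequently observed successor of the current state.
    class StateModel:
        def __init__(self):
            self.avg_runtime = {}
            self.edge_counts = defaultdict(lambda: defaultdict(int))

        def observe(self, task, runtime, next_task=None):
            old = self.avg_runtime.get(task, runtime)
            self.avg_runtime[task] = (old + runtime) / 2  # crude running average
            if next_task is not None:
                self.edge_counts[task][next_task] += 1

        def predict_next_cost(self, current_task):
            successors = self.edge_counts.get(current_task)
            if not successors:
                return None
            likely = max(successors, key=successors.get)
            return likely, self.avg_runtime.get(likely)

    m = StateModel()
    m.observe("login", 5, next_task="search")
    m.observe("search", 40, next_task="checkout")
    print(m.predict_next_cost("login"))  # -> ('search', 40.0)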
20090276779JOB MANAGEMENT APPARATUS - When there is a job activation request accompanied with variable information in which an execution attribute and an identifier of a job are associated, a job definition in which an execution attribute is described with an arbitrary identifier is referred to, and based on the variable information, an identifier within the job definition is replaced with the execution attribute to create a job. Then, the job created in this manner is activated.11-05-2009
20110067031Information Processing Apparatus and Control Method of the Same - According to one embodiment, an information processing apparatus for executing at least one executing target program, the apparatus includes: a sensor module configured to detect whether an operator is absent or not; a log information acquiring module configured to acquire log information including information about a date and time on which whether the operator is absent or not is detected by the sensor module and information about whether the operator is absent or not; a scheduling module configured to analyze an absence time zone in which the operator is absent based on the log information acquired by the log information acquiring module and to set to execute the at least one executing target program in the absence time zone based on a result of the analysis; and a processor configured to execute the at least one executing target program in the absence time zone.03-17-2011
20110067030FLOW BASED SCHEDULING - A job scheduler may schedule concurrent distributed jobs in a computer cluster by assigning tasks from the running jobs to compute nodes while balancing fairness with efficiency. Determining which tasks to assign to the compute nodes may be performed using a network flow graph. The weights on at least some of the edges of the graph encode data locality, and the capacities provide constraints that ensure fairness. A min-cost flow technique may be used to perform an assignment of the tasks represented by the network flow graph. Thus, online task scheduling with locality may be mapped onto a network flow graph, which in turn may be used to determine a scheduling assignment using min-cost flow techniques. The costs may encode data locality, fairness, and starvation-freedom.03-17-2011
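A compact sketch of casting task assignment as min-cost flow, as in the entry above; this assumes the third-party networkx package, and the costs and capacities shown are illustrative stand-ins for locality and fairness:

    import networkx as nx

    # Hypothetical sketch: edge weights stand in for data locality (cheaper where the
    # data already resides) and node-to-sink capacities limit slots per compute node.
    def schedule(tasks, nodes, locality_cost, slots_per_node=1):
        g = nx.DiGraph()
        g.add_node("src", demand=-len(tasks))
        g.add_node("sink", demand=len(tasks))
        for t in tasks:
            g.add_edge("src", t, capacity=1, weight=0)
            for n in nodes:
                g.add_edge(t, n, capacity=1, weight=locality_cost[(t, n)])
        for n in nodes:
            g.add_edge(n, "sink", capacity=slots_per_node, weight=0)
        flow = nx.min_cost_flow(g)
        return {t: n for t in tasks for n in nodes if flow[t].get(n)}

    costs = {("t1", "n1"): 0, ("t1", "n2"): 5, ("t2", "n1"): 5, ("t2", "n2"): 0}
    print(schedule(["t1", "t2"], ["n1", "n2"], costs))  # -> {'t1': 'n1', 't2': 'n2'}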
20090235263JOB ASSIGNMENT APPARATUS, JOB ASSIGNMENT METHOD, AND COMPUTER-READABLE MEDIUM - A management node at first extracts free computation nodes executing none of jobs in order to assign a new job to any one of computation nodes, and specifies a communication target computation node when executing an execution target job. Subsequently, the management node calculates, with respect to all of the computation nodes executing none of the jobs at that point of time, a determination value V09-17-2009
20090249347VIRTUAL MULTIPROCESSOR, SYSTEM LSI, MOBILE PHONE, AND CONTROL METHOD FOR VIRTUAL MULTIPROCESSOR - A virtual multiprocessor according to the present invention includes: one or more processors that execute programs while switching between the programs at each of assigned times; a scheduling unit that performs scheduling that determines execution sequence of the programs and the one or more processors that are to execute one or more of the programs, wherein the scheduling unit performs the scheduling at a timing dependent on an assigned time associated with a corresponding one of the programs being executed by the one or more processors, in the case where a first mode is set, and performs the scheduling at a timing not dependent on the assigned time so that at least one of the one or more processors does not execute the programs, in the case where a second mode is set.10-01-2009
20090320029DATA PROTECTION SCHEDULING, SUCH AS PROVIDING A FLEXIBLE BACKUP WINDOW IN A DATA PROTECTION SYSTEM - A data protection scheduling system provides a flexible or rolling data protection window that analyzes various criteria to determine an optimal or near optimal time for performing data protection or secondary copy operations. While prior systems may have scheduled backups at an exact time (e.g., 2:00 a.m.), the system described herein dynamically determines when to perform the backups and other data protection storage operations, such as based on network load, CPU load, expected duration of the storage operation, rate of change of user activities, frequency of use of affected computer systems, trends, and so on.12-24-2009
20090320028SYSTEM AND METHOD FOR LOAD-ADAPTIVE MUTUAL EXCLUSION WITH WAITING PROCESS COUNTS - A system and associated method for mutually exclusively executing a critical section by a process in a computer system. The critical section accessing a shared resource is controlled by a lock. The method measures a detection time when a lock contention is detected, a wait time representing a duration of wait for the lock at each failed attempt to acquire the lock, and a delay representing a total lapse of time from the detection time till the lock is acquired. The delay is logged and used to calculate an average delay, which is compared with a suspension overhead time of the computer system to determine whether to spin or to suspend the process while waiting for the lock to be released. The number of processes waiting for the lock and the number of processes suspended are respectively counted to optimize the method.12-24-2009
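A minimal sketch of the spin-versus-suspend decision described in the entry above; the suspension overhead constant and averaging scheme are assumptions:

    # Hypothetical sketch: log the delay of each contended lock acquisition and spin
    # while waiting only when the average logged delay is below the cost of suspending
    # and resuming the process; otherwise block.
    SUSPENSION_OVERHEAD_US = 120.0

    class AdaptiveLockPolicy:
        def __init__(self):
            self.delays_us = []

        def record_delay(self, delay_us):
            self.delays_us.append(delay_us)

        def should_spin(self):
            if not self.delays_us:
                return True  # no history yet: spin optimistically
            return sum(self.delays_us) / len(self.delays_us) < SUSPENSION_OVERHEAD_US

    policy = AdaptiveLockPolicy()
    for delay in (40, 60, 500):
        policy.record_delay(delay)
    print(policy.should_spin())  # average 200 us > 120 us overhead -> False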
20090178043SWITCH-BASED PARALLEL DISTRIBUTED CACHE ARCHITECTURE FOR MEMORY ACCESS ON RECONFIGURABLE COMPUTING PLATFORMS - A computing architecture comprises a plurality of processing elements to perform data processing calculations, a plurality of memory elements to store the data processing results, and a reconfigurable interconnect network to couple the processing elements to the memory elements. The reconfigurable interconnect network includes a switching element, a control element, a plurality of processor interface units, a plurality of memory interface units, and a plurality of application control units. In various embodiments, the processing elements and the interconnect network may be implemented in a field-programmable gate array.07-09-2009
20090320031Power state-aware thread scheduling mechanism - A system filter is maintained to track which single-thread cores [or which multi-threaded logical CPUs] are in a low-latency power state. For at least one embodiment, low-latency power states include an active C12-24-2009
20090320027FENCE ELISION FOR WORK STEALING - Methods and systems for statistically eliding fences in a work stealing algorithm are disclosed. A data structure comprising a head pointer, tail pointer, barrier pointer and an advertising flag allows for dynamic load-balancing across processing resources in computer applications.12-24-2009
20090165006DETERMINISTIC MULTIPROCESSING - A hardware and/or software facility for controlling the order of operations performed by threads of a multithreaded application on a multiprocessing system is provided. The facility may serialize or selectively-serialize execution of the multithreaded application such that, given the same input to the multithreaded application, the multiprocessing system deterministically interleaves operations, thereby producing the same output each time the multithreaded application is executed. The facility divides the execution of the multithreaded application code into two or more quanta specifying a deterministic number of operations, and the facility specifies a deterministic order in which the threads execute the two or more quanta. The facility may operate together with a transactional memory system. When the facility operates together with a transactional memory system, each quantum is encapsulated in a transaction that may be executed concurrently with other transactions and is committed according to the specified deterministic order.06-25-2009
20090113434APPARATUS, SYSTEM AND METHOD FOR RAPID RESOURCE SCHEDULING IN A COMPUTE FARM - Disclosed herein is a method for scheduling computing jobs for a compute farm. The method includes: receiving a plurality of computing jobs at a scheduler; assigning a signature to each computing job based on at least one computing resource requirement of the computing job; storing each computing job in a signature classification corresponding to the signature of the computing job; and scheduling at least one of the plurality of computing jobs for processing in the compute farm as a function of the signature classification.04-30-2009
20090113432METHOD AND SYSTEM FOR SIMULATING A MULTI-QUEUE SCHEDULER USING A SINGLE QUEUE ON A PROCESSOR - A method and system for scheduling tasks on a processor, the tasks being scheduled by an operating system to run on the processor in a predetermined order, the method comprising identifying and creating task groups of all related tasks; assigning the tasks in the task groups to a single common run-queue; selecting a task at the start of the run-queue; determining if the task at the start of the run-queue is eligible to be run based on a pre-defined allocated timeslice and on the presence of older starving tasks on the run-queue; executing the task in the pre-defined timeslice; and associating a starving status with all unexecuted tasks and running them until all tasks in the run-queue complete execution and the run-queue becomes empty.04-30-2009
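A toy sketch of the single shared run-queue with starvation handling from the entry above; the data layout and aging rule are simplifications:

    from collections import deque

    # Hypothetical sketch: the head task runs for its timeslice unless an older task
    # has been marked starving, in which case the starving task runs first; tasks left
    # waiting are marked starving for the next pass.
    def run(tasks, timeslices):
        queue = deque(tasks)  # items: dict(name, remaining, starving)
        for _ in range(timeslices):
            if not queue:
                break
            starving = [t for t in queue if t["starving"]]
            task = starving[0] if starving else queue[0]
            queue.remove(task)
            task["remaining"] -= 1
            print("ran", task["name"])
            for waiting in queue:
                waiting["starving"] = True
            if task["remaining"] > 0:
                task["starving"] = False
                queue.append(task)

    run([{"name": "a", "remaining": 2, "starving": False},
         {"name": "b", "remaining": 1, "starving": False}], timeslices=3)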
20110107341JOB SCHEDULING WITH OPTIMIZATION OF POWER CONSUMPTION - A scheduler is provided, which takes into account the location of the data to be accessed by a set of jobs. Once all the dependencies and the scheduling constraints of the plan are respected, the scheduler optimizes the order of the remaining jobs to be run, also considering the location of the data to be accessed. Several jobs needing access to a dataset on a specific disk may be grouped together so that the grouped jobs are executed in succession, e.g., to avoid activating and deactivating the storage device several times, thus improving power consumption and also avoiding input/output performance degradation.05-05-2011
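A small sketch of grouping runnable jobs by the disk holding their data, as in the entry above; job and disk names are placeholders:

    from collections import defaultdict
    from itertools import chain

    # Hypothetical sketch: once dependencies and constraints are satisfied, runnable
    # jobs are grouped by the disk holding their dataset so each disk is activated
    # once and its jobs run back to back.
    def order_by_location(runnable_jobs):
        """runnable_jobs: list of (job_name, disk_id) pairs."""
        by_disk = defaultdict(list)
        for job, disk in runnable_jobs:
            by_disk[disk].append(job)
        return list(chain.from_iterable(jobs for _, jobs in sorted(by_disk.items())))

    print(order_by_location([("j1", "diskA"), ("j2", "diskB"), ("j3", "diskA")]))
    # -> ['j1', 'j3', 'j2'] (the diskA jobs are kept together)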
20110107340Clustering Threads Based on Contention Patterns - Techniques for grouping two or more threads based on lock contention information are provided. The techniques include determining lock contention information with respect to two or more threads, using the lock contention information with respect to the two or more threads to determine lock affinity between the two or more threads, using the lock affinity between the two or more threads to group the two or more threads into one or more thread clusters, and using the one or more thread clusters to perform scheduling of one or more threads.05-05-2011
20110107339Inner Process - Methods, systems, and products for computer processing. In one general embodiment, the method comprises running an inner process in the context of an executing thread wherein the thread has an original address space in memory and hiding at least a portion of the memory from the inner process. The inner process may run on the same credentials as the thread. Running the inner process may include creating a new address space for the inner process in the memory and assigning the new address space to the thread, so that the inner process comprises its own address space. The inner process may be allowed to access only the new address space. The kernel may maintain the thread's original address space along with the new address space, so that multiple address spaces exist for a particular thread. The kernel may pass selected data from the thread to the inner process.05-05-2011
20110107337Hierarchical Reconfigurable Computer Architecture - A reconfigurable hierarchical computer architecture having N levels, where N is an integer value greater than one, wherein said N levels include a first level including a first computation block including a first data input, a first data output and a plurality of computing nodes interconnected by a first connecting mechanism, each computing node including an input port, a functional unit and an output port, the first connecting mechanism capable of connecting each output port to the input port of each other computing node; and a second level including a second computation block including a second data input, a second data output and a plurality of the first computation blocks interconnected by a second connecting means for selectively connecting the first data output of each of the first computation blocks and the second data input to each of the first data inputs and for selectively connecting each of the first data outputs to the second data output.05-05-2011
20090113436Techniques for switching threads within routines - Various technologies and techniques are disclosed for switching threads within routines. A controller routine receives a request from an originating routine to execute a coroutine, and executes the coroutine on an initial thread. The controller routine receives a response back from the coroutine when the coroutine exits based upon a return statement. Upon return, the coroutine indicates a subsequent thread that the coroutine should be executed on when the coroutine is executed a subsequent time. The controller routine executes the coroutine the subsequent time on the subsequent thread. The coroutine picks up execution at a line of code following the return statement. Multiple return statements can be included in the coroutine, and the threads can be switched multiple times using this same approach. Graphical user interface logic and worker thread logic can be co-mingled into a single routine.04-30-2009
20090113433THREAD CLASSIFICATION SUSPENSION - The exemplary embodiments provide a computer-implemented method, apparatus, and computer-usable program code for managing memory. A notice of a shortage of real memory is received. For each active thread, the thread classification of the active thread is compared to a global hierarchy of thread classifications to determine a thread to affect. The global hierarchy of thread classifications defines the relative importance of each thread classification. An action to take for the determined thread is determined. The determined action is performed for the determined thread.04-30-2009
20080276240Reordering Data Responses - A system includes a deterministic system, and a controller electrically coupled to the deterministic system via a link, wherein the controller comprises a transaction scheduling mechanism that allows data responses from the deterministic system, corresponding to requests issued from the controller, to be returned out of order.11-06-2008
20120246658Transactional Memory Preemption Mechanism - Mechanisms for executing a transaction in the data processing system are provided. A transaction checkpoint data structure is generated in internal registers of a processor. The transaction checkpoint data structure stores transaction checkpoint data representing a state of program registers at a time prior to execution of a corresponding transaction. The transaction, which comprises a first portion of code that is to be executed by the processor, is executed. An interrupt of the transaction is received while executing the transaction and, as a result, the transaction checkpoint data is stored to a data structure in a memory of the data processing system. A second portion of code is then executed. A state of the program registers is restored using the data structure in the memory of the data processing system in response to an event occurring causing a switch of execution of the processor back to execution of the transaction.09-27-2012
20120246657EXECUTING INSTRUCTION SEQUENCE CODE BLOCKS BY USING VIRTUAL CORES INSTANTIATED BY PARTITIONABLE ENGINES - A method for executing instructions using a plurality of virtual cores for a processor. The method includes receiving an incoming instruction sequence using a global front end scheduler, and partitioning the incoming instruction sequence into a plurality of code blocks of instructions. The method further includes generating a plurality of inheritance vectors describing interdependencies between instructions of the code blocks, and allocating the code blocks to a plurality of virtual cores of the processor, wherein each virtual core comprises a respective subset of resources of a plurality of partitionable engines. The code blocks are executed by using the partitionable engines in accordance with a virtual core mode and in accordance with the respective inheritance vectors.09-27-2012
20120246655AUTOMATED TIME TRACKING - In a method for automatically tracking time, a computer receives a user identification. The computer automatically starts a first task, based on the received user identification. The computer records a start time for the first task. The computer monitors a state of the first task. The computer automatically records an end time for the first task in response to determining that the state of the first task has changed.09-27-2012
20110078689Address Mapping for a Parallel Thread Processor - A method for thread address mapping in a parallel thread processor. The method includes receiving a thread address associated with a first thread in a thread group; computing an effective address based on a location of the thread address within a local window of a thread address space; computing a thread group address in an address space associated with the thread group based on the effective address and a thread identifier associated with a first thread; and computing a virtual address associated with the first thread based on the thread group address and a thread group identifier, where the virtual address is used to access a location in a memory associated with the thread address to load or store data.03-31-2011
20080250414Dynamically Partitioning Processing Across A Plurality of Heterogeneous Processors - A program is compiled into at least two object files: one object file for each of the supported processor environments. During compilation, code characteristics, such as data locality, computational intensity, and data parallelism, are analyzed and recorded in the object file. During run time, the code characteristics are combined with runtime considerations, such as the current load on the processors and the size of the data being processed, to arrive at an overall value. The overall value is then used to determine which of the processors will be assigned the task. The values are assigned based on the characteristics of the various processors. For example, if one processor is better at handling intensive computations against large streams of data, programs that are highly computationally intensive and process large quantities of data are weighted in favor of that processor. The corresponding object is then loaded and executed on the assigned processor.10-09-2008
20120246654Constant Time Worker Thread Allocation Via Configuration Caching - Mechanisms are provided for allocating threads for execution of a parallel region of code. A request for allocation of worker threads to execute the parallel region of code is received from a master thread. Cached thread allocation information identifying prior thread allocations that have been performed for the master thread are accessed. Worker threads are allocated to the master thread based on the cached thread allocation information. The parallel region of code is executed using the allocated worker threads.09-27-2012
20110119673CROSS-CHANNEL NETWORK OPERATION OFFLOADING FOR COLLECTIVE OPERATIONS - A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more cross-channel work requests that are derived from an operation to be executed by the node. The NI includes a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the cross-channel work requests via the host interface, and to execute the cross-channel work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation.05-19-2011
20110119672Multi-Core System on Chip - A multi-core system on a chip (05-19-2011
20080229315DISTRIBUTED PROCESSING PROGRAM, SYSTEM, AND METHOD - According to an aspect of an embodiment, a method for controlling a distributed processing system comprising a management computer for managing distributed processing of a job program and a plurality of execution computers for executing a plurality of jobs, the method comprising: dividing a request of processing of the job program into a plurality of jobs by the management computer; assigning said jobs from said management computer to said execution computers; transferring processed information obtained by executing said jobs by said execution computers to said management computer; storing said processed information in said execution computers; and resuming dividing a request of processing of the job program and assigning the jobs to said execution computers by the management computer, wherein assignment of the jobs is arranged such that at least one of the jobs for which the stored processed information is available is omitted from assignment.09-18-2008
20090187911Computer device with reserved memory for priority applications - A computer device comprises a processor, a memory, and an operating system kernel. The kernel comprises instructions for managing the execution of processes and for allocating memory to such processes. The device is able to execute stored applications that can be broken down into processes. The device comprises a special instruction sequence able to create an inactive process with reservation of a certain quantity of memory, and an application launcher, arranged in such a manner as to remove the inactive process, thus freeing up the reserved memory, which is followed consecutively by commanding the launch of at least one particular application. The memory reserved beforehand is thus made quickly available for execution of the particular application.07-23-2009
20080229314STORAGE MEDIUM CONTAINING BATCH PROCESSING PROGRAM, BATCH PROCESSING METHOD AND BATCH PROCESSING APPARATUS - A batch processing program is executed in a computer. Job steps are executed in such a manner that, when the number of job steps is determined by the determining means to exceed the maximum number of processes, successive job steps defined as pipe processing objects are divided in units of a maximum number of job steps corresponding to the maximum number of processes. A pipe is used for data transfer between respective job steps within the same divided segment, and a temporary file is used for data transfer between each set of adjacent job steps each belonging to a different segment.09-18-2008
20100299669Generation of a Comparison Task List of Task Items - A computing system generates and displays a comparison task list that reports differences between a source task list for a project and a modified task list for the project. The comparison task list may enable a user to determine the implications of changes to the project by providing a comparison of the source task list and the modified task list. The computing system generates the comparison task list by generating the comparison task list as a copy of the source task list. The computing system automatically adds each task item in the modified task list that does not have an equivalent task item in the comparison task list to the comparison task list at positions that depend on whether the task items have previously-processed sibling task items in the modified task list. When the computing system has processed each task item in the modified task list, the computing system displays the comparison task list.11-25-2010
20090037916PROCESSOR - The present invention provides a processor that cyclically executes a plurality of threads in accordance with an execution time allocated to each of the threads, comprising a reconfigurable integrated circuit. The processor stores circuit configuration information sets respectively corresponding to the plurality of threads, reconfigures a part of the integrated circuit based on the circuit configuration information sets, and sequentially executes each thread using the integrated circuit that has been reconfigured based on one of the configuration information sets that corresponds to the thread. While executing a given thread, the processor selects a thread to be executed next, and reconfigures a part of the integrated circuit that is not currently used for execution of the given thread, based on a circuit configuration information set corresponding to the selected thread.02-05-2009
20090070763METHOD AND SYSTEM FOR CHARACTERIZING ELEMENTS OF A PRINT PRODUCTION QUEUING MODEL - Methods and systems for characterizing performance of resources in a production environment are disclosed. Timing information for a plurality of print jobs may be received at a resource characterization system from one or more resources. A service time distribution may be determined based on the timing information. Resource performance for the one or more resources may be characterized based on the service time distribution using a queuing model. One or more performance characteristics may be provided for the one or more resources based on the characterized resource performance.03-12-2009
20090070764Handling queues associated with web services of business processes - A method and apparatus for handling queues associated with web services of a business process. The method may include automatically generating deployment descriptors for executing a business process as a web application, and determining a default queue for the business process using a business process management (BPM) configuration file. During execution of the business process, users are allowed to monitor the message load associated with the default queue. If a user decides to re-distribute the message load, the user is allowed to specify a new set of queues for the business process to improve performance of the business process at runtime.03-12-2009
20090070762SYSTEM AND METHOD FOR EVENT-DRIVEN SCHEDULING OF COMPUTING JOBS ON A MULTI-THREADED MACHINE USING DELAY-COSTS - A computer system includes N multi-threaded processors and an operating system. The N multi-threaded processors each have O hardware threads forming a pool of P hardware threads, where N, O, and P are positive integers and P is equal to N times O. The operating system includes a scheduler which receives events for one or more computing jobs. The scheduler receives one of the events and allocates R hardware threads of the pool of P hardware threads to one of the computing jobs by optimizing a sum of priorities of the computing jobs, where each priority is based in part on the number of logical processors requested by a corresponding computing job and R is an integer that is greater than or equal to 0.03-12-2009
20130132962SCHEDULER COMBINATORS - Scheduler combinators facilitate scheduling. One or more combinators, or operators, can be applied to an existing scheduler to compose a new scheduler or decompose an existing scheduler into multiple facets.05-23-2013
20100306777WORKFLOW MESSAGE AND ACTIVITY CORRELATION - Embodiments are directed to generating trace events that are configured to report an association between a workflow activity and a message. A computer system receives a message over a communication medium, where the workflow activity includes a unique workflow activity identifier (ID) that uniquely identifies the workflow activity. The message also includes a unique message ID that uniquely identifies the message. The computer system generates a trace event that includes a combination of the unique workflow activity ID and the unique message ID. The trace event is configured to report the association between the workflow activity and the message. The computer system also stores the generated trace event in a data store.12-02-2010
20100325632Workload scheduling method and system with improved planned job duration updating scheme - A method for scheduling execution of a work unit in a data processing system comprises assigning to the work unit an expected execution duration; executing the work unit and determining an actual execution duration of the work unit; determining a difference between the actual execution duration and the expected duration; and conditionally adjusting the expected execution duration assigned to the work unit based on the measured actual execution duration, wherein the conditional adjusting includes preventing the adjustment of the expected execution duration in case said difference exceeds a predetermined threshold. The method further includes associating with the work unit a parameter having a prescribed value adapted to provide an indication of unconditional adjustment of the expected execution duration: in case said parameter takes the prescribed value, the expected duration assigned to the work unit is adjusted based on the measured actual execution duration even if the difference in durations exceeds the predetermined threshold.12-23-2010
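As an illustration of the conditional-adjustment rule described in the entry above, here is a minimal Python sketch (function and parameter names are hypothetical, not taken from the patent): the measured duration replaces the plan only when the deviation stays within a threshold, unless the work unit carries the unconditional-adjustment parameter.

    def update_expected_duration(expected, measured, threshold, force_update=False):
        # expected: currently planned duration; measured: last observed duration
        # threshold: largest tolerated |measured - expected| before adjustment is suppressed
        # force_update: mirrors the parameter indicating unconditional adjustment
        deviation = abs(measured - expected)
        if force_update or deviation <= threshold:
            return measured   # adopt the observed duration as the new plan
        return expected       # deviation too large: keep the previous plan

    # A one-off 300 s spike does not disturb a 60 s plan unless forced:
    print(update_expected_duration(60.0, 300.0, threshold=30.0))               # 60.0
    print(update_expected_duration(60.0, 300.0, 30.0, force_update=True))      # 300.0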
20130139165SYSTEM AND METHOD FOR DISTRIBUTING PROCESSING OF COMPUTER SECURITY TASKS - In a computer system, processing of security-related tasks is delegated to various agent computers. According to various embodiments, a distributed computing service obtains task requests to be performed for the benefit of beneficiary computers, and delegates those tasks to one or more remote agent computers for processing. The delegation is based on a suitability determination as to whether each of the remote agent computers is suitable to perform the processing. Suitability can be based on an evaluation of such parameters as computing capacity and current availability of the remote agent computers against the various tasks to be performed and their corresponding computing resource requirements. This evaluation can be performed according to various embodiments by the agent computers, the distributed computing service, or by a combination thereof.05-30-2013
20100325631METHOD AND APPARATUS FOR INCREASING LOAD BANDWIDTH - A method and apparatus for dual-target register allocation is described, intended to enable the efficient mapping/renaming of registers associated with instructions within a pipelined microprocessor architecture.12-23-2010
20090138879Clock Control - The present invention provides a processor comprising: an execution unit arranged to execute a plurality of program threads, clock generating means for generating first and second clock signals, and storage means for storing at least one thread-specific clock-control bit. The execution unit is configured to execute a first one of the threads in dependence on the first clock signal and to execute a second one of the threads in dependence on the second clock signal. The clock generating means is operable to generate the second clock signal with the second frequency selectively differing from the first frequency in dependence on the at least one clock-control bit. A corresponding method and computer program product are also provided.05-28-2009
20100333097METHOD AND SYSTEM FOR MANAGING A TASK - A computer readable storage medium including executable instructions for managing a task. Instructions include receiving a request. Instructions further include determining a task corresponding with the request using a request-to-task mapping. Instructions include obtaining a task entry corresponding with the task from a task store, where the task entry associates the task with an action and a predicate for performing the action. Instructions further include creating a task object in a task pool using the task entry. Instructions further include receiving an event notification at the task engine, where the event notification is associated with an event. Instructions further include determining whether the predicate for performing the action is satisfied by the event. Instructions further include placing the task object in a task queue when the predicate for performing the action is satisfied by the event.12-30-2010
20100333096Transactional Locking with Read-Write Locks in Transactional Memory Systems - A system and method for transactional memory using read-write locks is disclosed. Each of a plurality of shared memory areas is associated with a respective read-write lock, which includes a read-lock portion indicating whether any thread has a read-lock for read-only access to the memory area and a write-lock portion indicating whether any thread has a write-lock for write access to the memory area. A thread executing a group of memory access operations as an atomic transaction acquires the proper read or write permissions before performing a memory operation. To perform a read access, the thread attempts to obtain the corresponding read-lock and succeeds if no other thread holds a write-lock for the memory area. To perform a write-access, the thread attempts to obtain the corresponding write-lock and succeeds if no other thread holds a write-lock or read-lock for the memory area.12-30-2010
20110010718ELECTRONIC DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT HAVING COMPUTER-READABLE INFORMATION PROCESSING PROGRAM - An electronic device includes a status management unit detecting a change in a status of the electronic device and recognizing the status; an added program control unit applying an added program to a program of the electronic device in response to a validation request for the added program, the added program being capable of dynamically interrupting the program of the electronic device with a process; and an application determination information storage unit storing application determination information indicating whether the added program can be applied to the program of the electronic device depending on the status of the electronic device recognized by the status management unit. The added program control unit determines whether the added program can be applied based on (1) the status of the electronic device recognized by the status management unit upon reception of the validation request, and (2) the application determination information.01-13-2011
20100333095Bulk Synchronization in Transactional Memory Systems - A method and system for acquiring multiple software locks in bulk is disclosed. When multiple locks need to be acquired, such as for atomic transactions in transactional memory systems, the disclosed techniques may be applied to consolidate computationally expensive memory barrier operations across the lock acquisitions. A system may acquire multiple locks in bulk, at least in part, by modifying values in one or more fields of multiple locks and by then performing a memory barrier operation to ensure that the modified values in the multiple locks are visible to other application threads. The technique may be repeated for locks that the system fails to acquire during earlier iterations until all required locks are acquired. The described technique may be applied to various scenarios including static and/or dynamic transactional locking protocols.12-30-2010
20100333094JOB-PROCESSING NODES SYNCHRONIZING JOB DATABASES - A first node of a network updates a first job database to indicate that a first job is executing or is about to be executed on the first node. Network nodes are synchronized so that other nodes update their respective job databases to indicate that the first job is executing on said first node.12-30-2010
20090187908OPTIMIZED METHODOLOGY FOR DISPOSITIONING MISSED SCHEDULED TASKS - The present invention provides for a method and system for the disposition of tasks which failed to run during their originally scheduled time. The determination of whether to run missed or delayed tasks is based on calculated ratios rather than on fixed window sizes. A Lateness Ratio is calculated to determine if the time elapsed between the missed task and the scheduled run time is small enough to still allow a late task to run. A Closeness Ratio is calculated to determine if the next available run time for the missed task is close enough to the next scheduled execution of the task that the missed task will be run in place of the upcoming scheduled task. Each ratio is compared to a user defined ratio limit, so if the calculated ratio does not exceed the limit, then the missed task is executed at the first available opportunity.07-23-2009
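The abstract above does not define the two ratios exactly, so the Python sketch below assumes one plausible reading (names and limits are hypothetical): lateness is the elapsed delay relative to the scheduling interval, and closeness is the remaining time to the next scheduled run relative to the same interval.

    def disposition_missed_task(scheduled, now, next_scheduled, interval,
                                lateness_limit, closeness_limit):
        # All times in seconds; interval is the period between scheduled runs.
        lateness_ratio = (now - scheduled) / interval
        closeness_ratio = (next_scheduled - now) / interval
        if lateness_ratio > lateness_limit:
            return "skip this occurrence"          # too late to be worth running
        if closeness_ratio <= closeness_limit:
            return "run in place of the upcoming scheduled task"
        return "run late at the first available opportunity"

    # A task scheduled at t=0 with a 600 s period, discovered 120 s late:
    print(disposition_missed_task(0, 120, 600, 600,
                                  lateness_limit=0.5, closeness_limit=0.25))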
20110010720SYSTEM AND METHOD FOR MANAGING ELECTRONIC ASSETS - An asset management system is provided which comprises one or more controllers, which operate as main servers and can be located at the headquarters of an electronic device manufacturer to remotely control their operations at any global location. The controller can communicate remotely over the Internet or other network to control one or more secondary or remote servers, herein referred to as appliances. The appliances can be situated at different manufacturing, testing or distribution sites. The controller and appliances comprise hardware security modules (HSMs) to perform sensitive and high trust computations, store sensitive information such as private keys, perform other cryptographic operations, and establish secure connections between components. The HSMs are used to create secure end-points between the controller and the appliance and between the appliance and the secure point of trust in an asset control core embedded in a device.01-13-2011
20110010717JOB ASSIGNING APPARATUS AND JOB ASSIGNMENT METHOD - A job assigning apparatus connected to a plurality of arithmetic units for assigning a job to each of the arithmetic units, the job assigning apparatus includes a power consumption acquiring processor for acquiring power consumptions with respect to each of the arithmetic units, a selector for selecting one of the arithmetic units as a submission destination in increasing order of the power consumptions acquired by the power consumption acquiring processor, and a job submitting processor for submitting a job to the submission destination.01-13-2011
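A minimal sketch of the selection rule above (Python, with hypothetical names): the submission destination is simply the arithmetic unit with the lowest acquired power consumption.

    def assign_job(job, power_by_unit):
        # power_by_unit: unit id -> most recently acquired power reading (watts)
        destination = min(power_by_unit, key=power_by_unit.get)
        print(f"submitting {job} to {destination}")
        return destination

    assign_job("job-42", {"unit0": 35.0, "unit1": 28.5, "unit2": 31.2})   # -> unit1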
20110010716Domain Bounding for Symmetric Multiprocessing Systems - Methods and apparatuses for developing symmetric and asymmetric software applications on a single monolithic symmetric multiprocessing operating system are disclosed. An enabling framework may be provided for one or all of the following software design patterns: application work load sharing between all processors present in a multi-processor system in a symmetric fashion; application work load sharing between all processors present in a multi-processor system in an asymmetric fashion using task-to-processor soft affinity declarations; and application work load sharing between all processors present in a multi-processor system using bound computational domains. Further, a particular computational task or a set of computational tasks may be bound to a particular processing unit. Subsequently, when one such task is to be scheduled, the symmetric multiprocessing operating system ensures that the bound processing unit processes the instruction. When the bound processing unit is not processing the particular computational instruction, the bound processing unit may enter a low power or idle state.01-13-2011
20090144738Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - Apparatus for evaluating the performance of DMA-based algorithmic tasks on a target multi-core processing system includes a memory and at least one processor coupled to the memory. The processor is operative: to input a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; to evaluate performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and to provide results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system.06-04-2009
20110029977POLICY BASED INVOCATION OF WEB SERVICES - Techniques for orchestrating workflows are disclosed herein. In an embodiment, a method of orchestrating a workflow is disclosed. In an embodiment, data is stored in a policy file which associates attributes with processes. User input is received. A process associated with an attribute is selected, where the attribute is based on the user input. The selected process is performed as part of the workflow. Also, processes may be added dynamically as part of any category inside the policy file without having to recompile or redesign the logic of the BPEL project.02-03-2011
20110029976PROCESSING SINGLETON TASK(S) ACROSS ARBITRARY AUTONOMOUS SERVER INSTANCES - Large scale internet services may be implemented using multiple discrete server instances. Some tasks of the large scale internet services may be singleton tasks, which may be advantageously processed by a sub-set of the server instances (e.g., merely one instance). Accordingly, as provided herein, a singleton task may be processed in a reliable manner based upon one or more instances of a protocol executed across a set of arbitrary autonomous server instances. In one example, the protocol may determine whether a lease for a singleton task is valid or expired. If the lease is expired, then an attempt to claim the lease may be performed by updating a current lease expiration with a new lease expiration. If the attempt is successful, then the singleton task may be processed until the new lease expiration expires.02-03-2011
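A toy Python sketch of the lease idea described above; the in-memory store and its compare-and-swap stand in for whatever shared, atomically updatable location the server instances actually use, and all names are hypothetical.

    import time

    class LeaseStore:
        # Stand-in for a shared location all server instances can update
        # atomically (e.g. a database row updated with compare-and-swap).
        def __init__(self):
            self.lease_expiration = 0.0

        def try_update(self, expected, new_value):
            if self.lease_expiration == expected:   # emulated compare-and-swap
                self.lease_expiration = new_value
                return True
            return False

    def try_claim_singleton(store, lease_seconds=30.0):
        # Claim the singleton task only if the current lease has expired.
        now = time.time()
        current = store.lease_expiration
        if current > now:
            return False                            # lease still valid elsewhere
        return store.try_update(current, now + lease_seconds)

    store = LeaseStore()
    if try_claim_singleton(store):
        print("this instance runs the singleton task until its lease expires")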
20100162252SYSTEM AND METHOD FOR SHIFTING WORKLOADS ACROSS PLATFORM IN A HYBRID SYSTEM - A system and associated method for shifting workloads across platform in a hybrid system. A first kernel governing a first platform of the hybrid system starts a process that is executable in a second platform of the hybrid system. The first kernel requests a second kernel governing the second platform to create a duplicate process of the process such that the process is executed in the second platform. The process represents the duplicate process in the first platform without consuming clock cycles of the first platform. During an execution of the duplicate process in the second platform, the first kernel services an I/O request of the duplicate process that is transferred from the second kernel to the first kernel. When the duplicate process is terminated, the process in the first platform is removed first before the duplicate process releases resources.06-24-2010
20110035749Credit Scheduler for Ordering the Execution of Tasks - A method for scheduling the execution of tasks on a processor is disclosed. The purpose of the method is in part to serve the special needs of soft real-time tasks, which are time-sensitive. A parameter Δ is an estimate of the amount of time required to execute the task. Another parameter Γ is the maximum amount of time that the task is to spend in a queue before being executed. In the illustrative embodiment, the preferred wait time Γ02-10-2011
20100153957SYSTEM AND METHOD FOR MANAGING THREAD USE IN A THREAD POOL - A method and system for managing a thread pool of a plurality of first type threads and a plurality of second type threads in a computer system using a thread manager, specifically, a method for prioritizing, cancelling, balancing the work load between first type threads and second type threads, and avoiding deadlocks in the thread pool. A queue stores a first type task and a second type task, the second type task being executable by at least one of the plurality of second type threads. The availability of at least one of the plurality of first type threads is determined, and if none are available, the availability of at least one of the plurality of second type threads is determined. An available second type thread is selected to execute the first type task.06-17-2010
20110023041PROCESS MANAGEMENT SYSTEM AND METHOD FOR MONITORING PROCESS IN AN EMBEDDED ELECTRONIC DEVICE - A checking method for a process in an embedded electronic device includes the following steps. A name of an application is recorded in an application recorder. The application is executed by a system processor. An active application list is acquired from the system processor. An execution control may determine if the name of the recorded application in the application recorder exists in the active application list. If the name of the recorded application does not exist, the system processor may shut down at least one child process related to the application.01-27-2011
20110214128ONE-TIME INITIALIZATION - Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item.09-01-2011
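A compact Python sketch of the pattern described above (an illustration, not the patented implementation): the first thread to request initialization runs the init function under a lock, concurrent threads block until it finishes, and later callers take the fast path.

    import threading

    class OneTimeInit:
        # First thread to request initialization runs init_fn under the lock;
        # concurrent threads block until it finishes; later callers hit the fast path.
        def __init__(self):
            self._lock = threading.Lock()
            self._initialized = False
            self._value = None

        def get(self, init_fn):
            if self._initialized:              # fast path: already initialized
                return self._value
            with self._lock:                   # mutual exclusion for first-time init
                if not self._initialized:      # re-check: we may have lost the race
                    self._value = init_fn()
                    self._initialized = True
            return self._value

    item = OneTimeInit()
    print(item.get(lambda: "expensively computed data"))
    print(item.get(lambda: "never called again"))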
20110088035RETROSPECTIVE EVENT PROCESSING PATTERN LANGUAGE AND EXECUTION MODEL EXTENSION - A novel and useful method, system and framework for extending event processing pattern language to include constructs and patterns in the language to support historical patterns and associated retrospective event processing that enable a user to define patterns that consist of both on-line streaming and historical (retrospective) patterns. This enables entire functions to be expressed in a single pattern language and also enables event processing optimization whereby function processing is mapped to a plurality of event processing agents (EPAs). The EPAs in turn are assigned to a physical processor and to threads within the processor.04-14-2011
20090083742INTERRUPTABILITY MANAGEMENT VIA SCHEDULING APPLICATION - A system and methodology that facilitates management of user accessibility via a scheduling application is provided. A user can link or map interruptability levels to schedule entries, such as calendar entries or tasks thereby facilitating automatic communication management. Essentially, interruptability rules (and corresponding categories) can be associated to calendar entries and tasks thereby automating implementation of interruptability rules to manage communications received during calendar entries, tasks, meeting, appointments, etc.03-26-2009
20090083741Techniques for Accessing a Resource in a Processor System - A technique of accessing a resource includes receiving, at a master scheduler, resource access requests. The resource access requests are translated into respective slave state machine work orders that each include one or more respective commands. The respective commands are assigned, for execution, to command streams associated with respective slave state machines. The respective commands are then executed responsive to the respective slave state machines.03-26-2009
20110093857Multi-Threaded Processors and Multi-Processor Systems Comprising Shared Resources - An apparatus is provided comprising at least two processing entities. Shared resources are usable by a first and a second processing entity. A use of the shared resources is detected, and the execution of instructions associated with said processing entities is controlled based on the detection.04-21-2011
20110214127Strongly-Ordered Processor with Early Store Retirement - In one embodiment, a processor comprises a retire unit and a load/store unit coupled thereto. The retire unit is configured to retire a first store memory operation responsive to the first store memory operation having been processed at least to a pipeline stage at which exceptions are reported for the first store memory operation. The load/store unit comprises a queue having a first entry assigned to the first store memory operation. The load/store unit is configured to retain the first store memory operation in the first entry subsequent to retirement of the first store memory operation if the first store memory operation is not complete. The queue may have multiple entries, and more than one store may be retained in the queue after being retired by the retire unit.09-01-2011
20100223618SCHEDULING JOBS IN A CLUSTER - There is provided a method and system for scheduling a job in a cluster, the cluster comprises multiple computing nodes, and the method comprises: defining rules for constructing virtual sub-clusters of the multiple computing nodes; constructing the multiple nodes in the cluster into multiple virtual sub-clusters based on the rules, wherein one computing node can only be included in one virtual sub-cluster; dispatching a received job to a selected virtual sub-cluster; and scheduling at least one computing node for the dispatched job in the selected virtual sub-cluster. Further, the job is dispatched to the selected virtual sub-cluster based on characteristics of the job and/or characteristics of virtual sub-clusters. The present invention can increase the throughput of scheduling effectively.09-02-2010
20090313630COMPUTER PROGRAM, APPARATUS, AND METHOD FOR SOFTWARE MODIFICATION MANAGEMENT - A software modification management program is executed by a computer, whereby, when input with modification data, a modification application scheduled node decision unit generates a modification application scheduled node list. A modification applicable node selection unit successively extracts the node IDs of nodes which are not executing a job, from the modification application scheduled node list to set the extracted node IDs as modification applicable node IDs until the value of a modification-in-progress node counter indicating the number of nodes to which software modification is being applied reaches a predetermined upper limit value. A service management unit stops the service of nodes corresponding to the modification applicable node IDs. In accordance with the modification data, a modification unit modifies target software installed on the nodes whose service has been stopped.12-17-2009
20090328045TECHNIQUE FOR FINDING RELAXED MEMORY MODEL VULNERABILITIES - A system and method capable of finding relaxed memory-model vulnerabilities in a computer program caused by running on a machine having a relaxed memory model. A relaxed memory model vulnerability in a computer program includes the presence of program executions that are not sequentially consistent. In one embodiment, non-sequentially consistent executions are detected by exploring sequentially consistent executions.12-31-2009
20090106759INFORMATION PROCESSING SYSTEM AND RELATED METHOD THEREOF - An information processing system includes a first electronic device, a second electronic device and a processing module. The first electronic device processes a first task. The second electronic device processes a second task. The processing module controls, without utilizing an operating system, the second electronic device to process the second task for a first specific time period during which the first electronic device does not process the first task which was being processed before the first specific time period.04-23-2009
20090106760METHOD AND APPARATUS FOR SELECTING A WORKFLOW ROUTE - A method for selecting a workflow route that includes determining a next processing phase of the work sheet to be processed and the work sheet properties in the phase, querying a pre-configured mapping table between work sheet properties and processing owners according to the work sheet properties in the next processing phase, and obtaining a matched processing owner for the next processing phase of the work sheet to be processed. An apparatus for selecting a workflow route includes a work sheet predefining module, a processing owner matching module, an inputting module, and a matching module. The technical solution provided in an embodiment of the disclosure may solve the problem of too heavy workload and proneness to errors caused by manually selecting a processing owner for a phase of a work sheet and may also solve the problem of too many processes caused by binding processing owners with the processes.04-23-2009
20100070976METHOD FOR CONTROLLING IMAGE PROCESSING APPARATUS WITH WHICH JOB EXECUTION HISTORY IS READILY CHECKED, AND RECORDING MEDIUM - Whether a job execution instruction has been issued or not is determined. When it is determined that the job execution instruction has been issued, a job ID is issued. Contents of the job in accordance with the job execution instruction are checked. Then, whether the job is a Scan to USB memory job in an emulation mode or not is determined. When it is determined that the job is the Scan to USB memory job in the emulation mode, a sub job ID brought in correspondence with the issued job ID is issued.03-18-2010
20090031316Scheduling in a High-Performance Computing (HPC) System - In one embodiment, a method for scheduling in a high-performance computing (HPC) system includes receiving a call from a management engine that manages a cluster of nodes in the HPC system. The call specifies a request including a job for scheduling. The method further includes determining whether the request is spatial, compact, or nonspatial and noncompact. The method further includes, if the request is spatial, generating one or more spatial combinations of nodes in the cluster and selecting one of the spatial combinations that is schedulable. The method further includes, if the request is compact, generating one or more compact combinations of nodes in the cluster and selecting one of the compact combinations that is schedulable. The method further includes, if the request is nonspatial and noncompact, identifying one or more schedulable nodes and generating a nonspatial and noncompact combination of nodes in the cluster.01-29-2009
20090031313EXTENSIBLE WEB SERVICES SYSTEM - Techniques for extending a Web services system are provided. One or more Web service applications (WSA) execute on a device. Each WSA provides at least one service. A WSA implements a particular version of a Web Services (WS) specification that is previous to a current version of the WS specification. In one technique, an orchestration module is added that coordinates the interaction between the WSA and one or more extension modules. While processing the request, the WSA calls the orchestration module. The orchestration module, based on one or more attributes of a request, determines whether an extension module, that comprises logic, should be called to process a portion of the request. The logic corresponds to a difference between the previous version and the current version. After an extension module finishes processing the portion of the request, the WSA is caused to further process the request.01-29-2009
20090031314FAIRNESS IN MEMORY SYSTEMS - Architecture for a multi-threaded system that applies fairness to thread memory request scheduling such that access to the shared memory is fair among different threads and applications. A fairness scheduling algorithm provides fair memory access to different threads in multi-core systems, thereby avoiding unfair treatment of individual threads, thread starvation, and performance loss caused by a memory performance hog (MPH) application. The thread slowdown is determined by considering the thread's inherent memory-access characteristics, computed as the ratio of the real latency that the thread experiences and the latency (ideal latency) that the thread would have experienced if it had run as the only thread in the same system. The highest and lowest slowdown values are then used to generate an unfairness parameter which when compared to a threshold value provides a measure of fairness/unfairness currently occurring in the request scheduling process. The architecture provides a balance between fairness and throughput.01-29-2009
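The slowdown and unfairness computation above lends itself to a short worked example; the Python below uses hypothetical latency numbers and a hypothetical threshold.

    def unfairness(real_latency, ideal_latency, threshold):
        # real_latency / ideal_latency: thread id -> average memory latency
        # observed in the shared system vs. running alone on the same system.
        slowdown = {t: real_latency[t] / ideal_latency[t] for t in real_latency}
        unfair = max(slowdown.values()) / min(slowdown.values())
        # When unfair exceeds the threshold, the scheduler would start favoring
        # requests of the most slowed-down threads to restore fairness.
        return slowdown, unfair, unfair > threshold

    slow, unfair, act = unfairness({"A": 400, "B": 120}, {"A": 100, "B": 100}, 2.0)
    print(slow, round(unfair, 2), act)   # {'A': 4.0, 'B': 1.2} 3.33 True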
20090031312Method and Apparatus for Scheduling Grid Jobs Using a Dynamic Grid Scheduling Policy - The illustrative embodiments described herein provide a computer-implemented method, apparatus, and computer program product for scheduling grid jobs. In one embodiment, a process identifies information describing available resources on a set of nodes on a heterogeneous grid computing system to form resource availability information. The process identifies a set of static scheduling policies for a set of static schedulers that manage the set of nodes. The process also identifies a static scheduling status for a portion of the set of static schedulers. The process creates a dynamic grid scheduling policy using the resource availability information, the set of static scheduling policies, and the static scheduling status. The process also schedules a set of grid jobs for execution by the available resources using the dynamic grid scheduling policy.01-29-2009
20100262967Completion Arbitration for More than Two Threads Based on Resource Limitations - A mechanism is provided for thread completion arbitration. The mechanism comprises executing more than two threads of instructions simultaneously in the processor, selecting a first thread from a first subset of threads, in the more than two threads, for completion of execution within the processor, and selecting a second thread from a second subset of threads, in the more than two threads, for completion of execution within the processor. The mechanism further comprises completing execution of the first and second threads by committing results of the execution of the first and second threads to a storage device associated with the processor. At least one of the first subset of threads or the second subset of threads comprise two or more threads from the more than two threads. The first subset of threads and second subset of threads have different threads from one another.10-14-2010
20090313629Task processing system and task processing method - Provided are a task processing system and a task processing method that can reduce power consumption and prevent overhead or processing load from increasing even with a system which performs frequency switching frequently. A main processor determines at least one of the tasks to be executed by a sub processor in each of a plurality of time segments, each having a predetermined length, and determines, by the end of an nth (n is an integer that satisfies n≧1) time segment, a clock frequency necessary for executing the task within an (n+1)th time segment based on information of a required number of cycles for the task to be executed by the sub processor in the (n+1)th time segment. The clock generation/control circuit supplies, in the (n+1)th time segment, to the sub processor a clock signal according to the clock frequency determined by the main processor in the nth time segment.12-17-2009
20100037227METHOD FOR DIGITAL PHOTO FRAME TASK SCHEDULE - A method for executing a task schedule on a DPF is disclosed. The method includes loading a task configuration file comprising at least one task capable of being executed at any given time, reading a current time from a clock within the DPF, checking whether there is a task waiting to be executed, executing the task if one is waiting to be run, and repeating the reading of the current time, after a wait time, if no tasks have been scheduled for current execution.02-11-2010
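The schedule loop above reduces to a few lines; the Python sketch below (bounded to a fixed number of cycles so it terminates, names hypothetical) mirrors the read-clock / check / execute / wait sequence.

    import time

    def run_schedule(tasks, poll_interval=1.0, cycles=3):
        # tasks: list of (due_time, callable) pairs loaded from the configuration
        for _ in range(cycles):
            now = time.time()
            due = [entry for entry in tasks if entry[0] <= now]
            for entry in due:
                entry[1]()                  # execute the task that is waiting
                tasks.remove(entry)
            if not due:
                time.sleep(poll_interval)   # wait, then read the clock again

    run_schedule([(time.time(), lambda: print("show next photo"))])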
20100037226GROUPING AND DISPATCHING SCANS IN CACHE - A method, system, and computer program product for grouping and dispatching scans in a cache directory of a processing environment is provided. A plurality of scan tasks is aggregated from a scan wait queue into a scan task queue. The plurality of scan tasks is determined by selecting one of (1) each of the plurality of scan tasks on the scan wait queue, (2) a predetermined number of the plurality of scan tasks on the scan wait queue, and (3) a set of scan tasks of a similar type on the scan wait queue. A first scan task from the plurality of scan tasks is selected from the scan task queue. The scan task is performed.02-11-2010
20100037225WORKLOAD ROUTING BASED ON GREENNESS CONDITIONS - Workload requests are routed in response to server greenness conditions. A workload request is received for a remotely invocable computing service executing separately in different remotely and geographically dispersed host computing servers. Greenness conditions pertaining to production or conservation of energy based upon external factors for each of the different remotely and geographically dispersed host computing servers are determined. The workload request is routed to one of the different remotely and geographically dispersed host computing servers based upon the determined greenness conditions.02-11-2010
20100064288IMAGE PROCESSING APPARATUS, APPLICATION STARTUP MANAGEMENT METHOD, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR - An image processing apparatus that, when receiving a startup request, enables an application that is required to start to be started by stopping an application that is not used by a user so as to reserve available memory capacity. A first determination unit determines a memory usage of the application that receives the startup instruction. A second determination unit determines an available memory capacity. A third determination unit determines an application that is not used by a user. A stopping unit stops the application determined by the third determination unit among the executing applications when the memory usage determined by the first determination unit is more than the available memory capacity determined by the second determination unit. A starting unit starts the application that receives the startup instruction using the available memory capacity freed when the stopping unit stops the determined application.03-11-2010
20100058346Assigning Threads and Data of Computer Program within Processor Having Hardware Locality Groups - A computer program having threads and data is assigned to a processor having a processor cores and memory organized over hardware locality groups. The computer program is profiled to generate a data thread interaction graph (DTIG) representing the computer program. The threads and the data of the computer program are organized over clusters using the DTIG and based on one or more constraints. The DTIG is displayed to a user, and the user is permitted to modify the constraints such that the threads and the data of the computer program are reorganized over the clusters. Each cluster is mapped onto one of the hardware locality groups. The computer program is regenerated based on the mappings of clusters to hardware locality groups. At run-time, optimizations are performed to improve execution performance, while the computer program is executed.03-04-2010
20090217277USE OF CPI POWER MANAGEMENT IN COMPUTER SYSTEMS - A device, system, and method are directed towards managing power consumption in a computer system with one or more processing units, each processing unit executing one or more threads. Threads are characterized based on a cycles per instruction (CPI) characteristic of the thread. A clock frequency of each processing unit may be configured based on the CPI of each thread assigned to the processing unit. In a system wherein higher clock frequencies consume greater amounts of power, the CPI may be used to determine a desirable clock frequency. The CPI of each thread may also be used to assign threads to each processing unit, so that threads having similar characteristics are grouped together. Techniques for assigning threads and configuring processor frequency may be combined to affect performance and power consumption. Various specifications or factors may also be considered when scheduling threads or determining processor frequencies.08-27-2009
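A simple heuristic along the lines described above can be sketched in Python (thresholds and frequency steps are invented for illustration): a high average CPI suggests memory-bound threads that gain little from a fast clock.

    def pick_frequency(threads_cpi, freq_levels):
        # threads_cpi: CPI values of the threads assigned to one processing unit.
        # freq_levels: available clock steps, sorted slowest to fastest (MHz).
        # Heuristic: memory-bound threads (high CPI) gain little from a fast clock.
        avg_cpi = sum(threads_cpi) / len(threads_cpi)
        if avg_cpi >= 4.0:
            return freq_levels[0]                      # memory bound: save power
        if avg_cpi >= 2.0:
            return freq_levels[len(freq_levels) // 2]  # mixed behaviour
        return freq_levels[-1]                         # compute bound: full speed

    print(pick_frequency([5.1, 4.4], [800, 1600, 2400]))   # 800
    print(pick_frequency([0.9, 1.2], [800, 1600, 2400]))   # 2400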
20090217276METHOD AND APPARATUS FOR MOVING THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby a sleep time associated with the lock experienced by the first thread and the second thread is reduced below a sleep time experienced prior to the detecting cooperation step.08-27-2009
20090217275PIPELINING HARDWARE ACCELERATORS TO COMPUTER SYSTEMS - A method of pipelining hardware accelerators of a computing system includes associating hardware addresses to at least one processing unit (PU) or at least one logical partition (LPAR) of the computing system, receiving a work request for an associated hardware accelerator address, and queuing the work request for a hardware accelerator using the associated hardware accelerator address.08-27-2009
20110179421ENERGY EFFICIENT INTER-SUBSYSTEM COMMUNICATION - Control of communication in a data communication system of at least two subsystems is presented. Transfer of data from a transmitting subsystem to a receiving subsystem is scheduled. The scheduling comprises determining at least one of a plurality of transfer conditions including a level of activity of each subsystem, a point in time when each subsystem is scheduled to be active, a time limit for receiving data in the receiving subsystem, an amount of data the receiving subsystem needs, and a maximum amount of outstanding data in transfer between said subsystems. Data is then transferred from the transmitting subsystem to the receiving subsystem in dependence on at least the determined transfer conditions, the transfer being subject to a delay that depends on the determined at least one transfer condition.07-21-2011
20100064286DATA AFFINITY BASED SCHEME FOR MAPPING CONNECTIONS TO CPUS IN I/O ADAPTER - A method, system and computer program product is disclosed for scheduling data packets in a multi-processor system comprising a plurality of processor units and a multitude of multicast groups. The method comprises associating one of the processor units with each of the multicast groups, receiving a multitude of data packets from the multicast groups, and scheduling all of the data packets received from each of the multicast groups for processing by the one of the processor units associated with said each of the multicast groups. In one embodiment, scheduling is based on affinity of both transmit and received processing for multiple connections to a processor unit. In another embodiment, a system call is provided for transmitting the same data over multiple sockets. Additional system calls may be used for building multicast group socket lists.03-11-2010
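A minimal Python sketch of the affinity idea above (the hash function and group keys are illustrative): hashing the multicast group (or connection tuple) to a CPU keeps transmit and receive processing for that group on the same processor.

    import zlib

    def cpu_for_group(multicast_group, num_cpus):
        # The same group always hashes to the same CPU, preserving affinity.
        return zlib.crc32(multicast_group.encode()) % num_cpus

    for group in ("239.1.1.1", "239.1.1.2", "239.1.1.1"):
        print(group, "-> cpu", cpu_for_group(group, num_cpus=4))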
20110252427MODELING AND SCHEDULING ASYNCHRONOUS INCREMENTAL WORKFLOWS - Disclosed are methods and apparatus for scheduling an asynchronous workflow having a plurality of processing paths. In one embodiment, one or more predefined constraint metrics that constrain temporal asynchrony for one or more portions of the workflow may be received or provided. Input data is periodically received or intermediate or output data is generated for one or more of the processing paths of the workflow, via one or more operators, based on a scheduler process. One or more of the processing paths for generating the intermediate or output data are dynamically selected based on received input data or generated intermediate or output data and the one or more constraint metrics. The selected one or more processing paths of the workflow are then executed so that each selected processing path generates intermediate or output data for the workflow.10-13-2011
20100070975DETERMINING THE PROCESSING ORDER OF A PLURALITY OF EVENTS - A method for operating a multi-threading computational system includes: identifying related events; allocating the related events to a first thread; allocating unrelated events to one or more second threads; wherein the events allocated to the first thread are executed in sequence and the events allocated to the one or more second threads are executed in parallel to execution of the first thread. A system for allocating incoming events among operational groups to create a multi-threaded computation process includes: incoming events; an event processing system configured to receive the incoming events; an event key generator within the event processing system, the event key generator being configured to generate event keys at run time, the event keys being associated with the incoming events; and a thread dispatcher, the thread dispatcher allocating the incoming events among the operational groups according to the associated incoming event keys.03-18-2010
20110154343SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT - A system for parallel processing tasks by allocating the use of exclusive locks to process critical sections of a task. The system includes storing update information that is updated in response to acquisition and release of an exclusive lock. When processing a task which includes a critical section containing code affecting execution of the other task, an exclusive execution unit acquires an exclusive lock prior to processing the critical section. When the section has been processed successfully, the lock is released and the update information updated. Meanwhile, a second task, whose critical section does not contain code affecting execution of the other task, may run in parallel, without acquiring an exclusive lock, via a nonexclusive execution unit. The nonexclusive execution unit determines that the second critical section has successfully completed if the update information has not changed during processing of the second critical section.06-23-2011
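The exclusive/non-exclusive split described above can be sketched with a version counter standing in for the update information (Python, hypothetical names; retry-on-change is one plausible reading of the failure path, not necessarily the patented one).

    import threading

    class UpdateInfo:
        # Version counter bumped whenever exclusive work completes.
        def __init__(self):
            self.lock = threading.Lock()
            self.version = 0

    def run_exclusive(info, critical_section):
        # Path for a task whose critical section can affect other tasks.
        with info.lock:
            critical_section()
            info.version += 1              # record that exclusive work happened

    def run_optimistic(info, critical_section, retries=5):
        # Path for a task whose critical section cannot affect other tasks:
        # run without the lock, succeed only if the update info did not change.
        for _ in range(retries):
            before = info.version
            critical_section()
            if info.version == before:
                return True
        return False                       # give up (a real system might fall back)

    info = UpdateInfo()
    run_exclusive(info, lambda: print("exclusive critical section"))
    print(run_optimistic(info, lambda: print("non-exclusive critical section")))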
20110154344 SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DEBUGGING A SYSTEM - A system, computer program and a method for debugging a system, the method includes: controlling, by a debugger, an execution flow of a processing entity; setting, by the debugger or the processing entity, a value of a scheduler control variable accessible by the scheduler; wherein the debugger is prevented from directly controlling an execution flow of a scheduler; and determining, by the scheduler, an execution flow of the scheduler in response to a value of the scheduler control variable.06-23-2011
20090025001METHODS AND SYSTEMS FOR PROCESSING A SET OF PRINT JOBS IN A PRINT PRODUCTION ENVIRONMENT - A system and method for routing and processing print jobs within a print job set considers the setup characteristics of each print job. Each print job set may be classified as a first job processing speed set, a second job processing speed set, or another job processing speed set based on the corresponding setup characteristics. First job processing speed sets are routed to a first group of print job processing resources, while second job processing speed sets are routed to a second group of print job processing speed resources. Each resource group may include an autonomous cell.01-22-2009
20120304185INFORMATION PROCESSING SYSTEM, EXCLUSIVE CONTROL METHOD AND EXCLUSIVE CONTROL PROGRAM - Features of an information processing system include a stand-by thread count information updating means that updates stand-by thread count information showing a number of threads which wait for release of lock according to a spinlock method, according to state transition of a thread which requests acquisition of predetermined lock; and a stand-by method determining means that determines a stand-by method of a thread which requests the acquisition of the lock based on the stand-by thread count information updated by the stand-by thread count information updating means and an upper limit value of the number of threads which wait according to the predetermined spinlock method.11-29-2012
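One way to read the abstract above is as an adaptive lock that spins while few threads are waiting and blocks once the stand-by count passes an upper limit; the Python sketch below follows that reading (all names and the limit are assumptions).

    import threading
    import time

    class AdaptiveLock:
        # Spin while few threads wait; block once the stand-by count passes the limit.
        def __init__(self, max_spinners=2):
            self._lock = threading.Lock()
            self._count_lock = threading.Lock()
            self._waiters = 0                  # stand-by thread count information
            self._max_spinners = max_spinners  # upper limit on spinning waiters

        def acquire(self):
            with self._count_lock:
                self._waiters += 1
                spin = self._waiters <= self._max_spinners
            try:
                if spin:
                    while not self._lock.acquire(blocking=False):
                        time.sleep(0)          # busy-wait (yield), spinlock style
                else:
                    self._lock.acquire()       # too many spinners: sleep instead
            finally:
                with self._count_lock:
                    self._waiters -= 1

        def release(self):
            self._lock.release()

    lock = AdaptiveLock()
    lock.acquire()
    print("in critical section")
    lock.release()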
20120304184MULTI-CORE PROCESSOR SYSTEM, COMPUTER PRODUCT, AND CONTROL METHOD - A multi-core processor system includes a multi-core processor and a storage apparatus storing for each application, a reliability level related to operation, where a given core accesses the storage apparatus and is configured to extract from the storage apparatus, the reliability level for a given application that invokes a given thread; judge based on the extracted reliability level and a specified threshold, whether the given application is an application of high reliability; identify, in the multi-core processor, a core that has not been allocated a thread of an application of low reliability, when judging that the given application is an application of high reliability, and identify in the multi-core processor, a core that has not been allocated a thread of an application of high reliability, when judging that the given application is an application of low reliability; and give to the identified core, an invocation instruction for the given thread.11-29-2012
20120304183MULTI-CORE PROCESSOR SYSTEM, THREAD CONTROL METHOD, AND COMPUTER PRODUCT - A multi-core processor system includes multiple cores and memory accessible from the cores, where a given core is configured to detect among the cores, first cores having a highest execution priority level; identify among the detected first cores, a second core that caused access conflict of the memory; and control a third core that is among the cores, excluding the first cores and the identified second core, the third core being controlled to execute, for a given interval during the period in which the access conflict occurs, a thread that does not access the memory.11-29-2012
20120304180PROCESS ALLOCATION APPARATUS AND PROCESS ALLOCATION METHOD - A process allocation apparatus includes an evaluation value calculating unit, an internode total communication traffic calculating unit, and a correction evaluation value calculating unit. The evaluation value calculating unit calculates an evaluation value of process allocation in accordance with a hop count and inter-process communication traffic from a communication source node to which a process used as a communication source is allocated to a communication destination node to which a process used as a communication destination is allocated. The internode total communication traffic calculating unit specifies a communication route from the communication source node to the communication destination node and calculates internode total communication traffic indicating that the communication traffic between nodes on the specified communication route. The correction evaluation value calculating unit calculates a correction evaluation value used for the correction in accordance with the calculated evaluation value of the process allocation and the calculated internode total communication traffic.11-29-2012
20120304178CONCURRENT REDUCTION OPTIMIZATIONS FOR THIEVING SCHEDULERS - Concurrent reduction optimizations for thieving schedulers may include a thieving worker thread operable to take a task from a first worker thread's task dequeue, the thieving worker thread and the first worker thread having the same synchronization point in a program at which the thieving worker thread and the first worker thread can resume their operations. The thieving worker thread may be further operable to create a local copy of memory locations associated with the task in local memory of the thieving worker thread, and store the result of the thieving worker executing the task as the local copy. The thieving worker thread may be further operable to atomically perform a reduction operation to a master location that both the thieving worker thread and the first worker thread can access, in response to the thieving worker thread completing the task.11-29-2012
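A small Python sketch of the local-copy-then-single-reduction idea above; the lock-protected cell stands in for an atomic reduction to the master location, and all names are hypothetical.

    import threading

    class ReductionCell:
        # Master location that both the victim and the thief can update.
        def __init__(self):
            self.value = 0
            self._lock = threading.Lock()

        def add(self, delta):
            with self._lock:           # stands in for an atomic fetch-and-add
                self.value += delta

    def thief(stolen_tasks, master):
        # Stolen tasks accumulate into thread-local state; the result is merged
        # into the master location once, when the thief reaches the sync point.
        local = 0
        for task in stolen_tasks:
            local += task()            # work against the local copy, no contention
        master.add(local)              # single atomic reduction at the end

    master = ReductionCell()
    worker = threading.Thread(target=thief, args=([lambda: 2, lambda: 3], master))
    worker.start()
    worker.join()
    print(master.value)                # 5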
20110072434SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING A PROCESSING ENTITY TASK - A system, computer program and a method for scheduling a processing entity task in a multiple-processing entity system, the method includes initializing a scheduler; receiving a task data structure indicative that a prerequisite to the execution of a task to be executed by a processing entity is a completion of a peripheral task that is executed by a peripheral; wherein the peripheral updates a peripheral task completion indicator once the peripheral task is completed; wherein the peripheral task completion indicator is accessible by the scheduler; and scheduling, by the scheduler, the task in response to the peripheral task completion indicator.03-24-2011
20110072433Method to Automatically ReDirect SRB Routines to a ZIIP Eligible Enclave - A Method to redirect SRB routines from otherwise non-zIIP eligible processes on an IBM z/OS series mainframe to a zIIP eligible enclave is disclosed. This redirection is achieved by intercepting otherwise blocked operations and allowing them to complete processing without errors imposed by the zIIP processor configuration. After appropriately intercepting and redirecting these blocked operations more processing may be performed on the more financially cost effective zIIP processor by users of mainframe computing environments.03-24-2011
20110072432METHOD TO AUTOMATICALLY REDIRECT SRB ROUTINES TO A zIIP ELIGIBLE ENCLAVE - A Method to redirect SRB routines from otherwise non-zIIP eligible processes on an IBM z/OS series mainframe to a zIIP eligible enclave is disclosed. This redirection is achieved by intercepting otherwise blocked operations and allowing them to complete processing without errors imposed by the zIIP processor configuration. After appropriately intercepting and redirecting these blocked operations more processing may be performed on the more financially cost effective zIIP processor by users of mainframe computing environments.03-24-2011
20110035750PROCESSING RESOURCE APPARATUS AND METHOD OF SYNCHRONISING A PROCESSING RESOURCE - A processing resource apparatus comprises a reference processing module comprising a set of reference stateful elements and a target processing module comprising a set of target stateful elements. A scan chain having a first mode for supporting manufacture testing is also provided, the scan chain being arranged to couple the reference processing module to the target processing module. The scan chain also has a second mode capable of synchronising the set of target stateful elements with the set of reference stateful elements in response to a synchronisation signal.02-10-2011
20110061055SYSTEM AND METHOD FOR GENERATING COMPUTING SYSTEM JOB FLOWCHARTS - A system and method for automatically generating flowcharts based on jobs within a mainframe job scheduling system is disclosed. The system may be interfaced through a web browser over a network (e.g., Internet) in order to configure a job flowchart request. The system includes a job flow utility employing rules and logic to execute a Job Control Language (JCL) script thereby invoking the creation of a job schedule based from a scheduling library and generates a delimited set of data that is stored within a database or saved as a delimited text file. The system also enables a user to view a job flowchart online or download the text-delimited file to open within existing charting applications.03-10-2011
20110061054METHOD AND APPARATUS FOR SCHEDULING EVENT STREAMS - Apparatus and method for scheduling event streams. The apparatus includes (i) an interface for receiving event streams which are placed in queues and (ii) a scheduler which selects at least one event stream for dispatch depending on sketched content information data of the received event streams. The scheduler includes a sketching engine for sketching the received event streams to determine content information data and a selection engine for selecting at least one received event stream for dispatch depending on the determined content information data of the received event streams. The method includes the steps of (i) determining content information data about the content of event streams and (ii) selecting at least one event stream from the event streams for dispatch depending on the content information data. A computer program, when executed by a computer, causes the computer to perform the steps of the above method.03-10-2011
20110061053MANAGING PREEMPTION IN A PARALLEL COMPUTING SYSTEM - The present invention provides a portable user space application release/reacquire of adapter resources for a given job on a node using information in a network resource table. The information in the network resource table is obtained when a user space application is loaded by some resource manager. The present invention provides a portable solution that will work for any interconnect where adapter resources need to be freed and reacquired without having to write a specific function in the device driver. In the present invention, the preemption request is done on a job basis using a key or “job key” that was previously loaded when the user space application or job originally requested the adapter resources. This is done for each OS instance where the job is run.03-10-2011
20110041132ELASTIC AND DATA PARALLEL OPERATORS FOR STREAM PROCESSING - A method to optimize performance of an operator on a computer system includes determining whether the system is busy, decreasing a software thread level within the operator if the system is busy, and increasing the software thread level within the operator if the system is not busy and a performance measure of the system at a current software thread level of the operator is greater than a performance measure of the system when the operator has a lower software thread level.02-17-2011
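The adjustment rule above can be captured in a few lines of Python (thresholds, bounds and the busy signal are assumptions, not taken from the patent).

    def adjust_thread_level(current_threads, system_busy, throughput_now,
                            throughput_prev, min_threads=1, max_threads=32):
        # Elastic operator heuristic: back off when the host is saturated, grow
        # while adding threads still improves the measured performance.
        if system_busy and current_threads > min_threads:
            return current_threads - 1
        if (not system_busy and current_threads < max_threads
                and throughput_now > throughput_prev):
            return current_threads + 1
        return current_threads

    print(adjust_thread_level(4, system_busy=True,  throughput_now=900, throughput_prev=880))  # 3
    print(adjust_thread_level(4, system_busy=False, throughput_now=950, throughput_prev=880))  # 5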
20110041133PROCESSING OF STREAMING DATA WITH A KEYED DELAY - A keyed delay is used in the processing of streaming data to decrease the processing performed and the output provided. A first event, within a particular window, having a particular key starts a delay condition. Arriving events with the same key replace the previous arrival for that key until the delay condition is satisfied. In response thereto, the latest event with that key is output.02-17-2011
20130160016Allocating Compute Kernels to Processors in a Heterogeneous System - System and method embodiments for optimally allocating compute kernels to different types of processors, such as CPUs and GPUs, in a heterogeneous computer system are disclosed. These include comparing a kernel profile of a compute kernel to respective processor profiles of a plurality of processors in a heterogeneous computer system, selecting at least one processor from the plurality of processors based upon the comparing, and scheduling the compute kernel for execution in the selected at least one processor.06-20-2013
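The abstract above does not specify how kernel and processor profiles are compared; one plausible reading, scoring each processor by a weighted match against the kernel's profile and scheduling on the best scorers, is sketched below with hypothetical profile dictionaries.

    def select_processors(kernel_profile, processor_profiles, top_k=1):
        # kernel_profile: feature -> demand (e.g., {"branchy": 0.8, "data_parallel": 0.1})
        # processor_profiles: list of (name, {feature: strength}) pairs
        def score(profile):
            return sum(kernel_profile.get(feature, 0.0) * strength
                       for feature, strength in profile.items())
        ranked = sorted(processor_profiles, key=lambda item: score(item[1]), reverse=True)
        return [name for name, _ in ranked[:top_k]]   # processors to schedule the kernel on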
20110258631MANAGEMENT APPARATUS FOR MANAGING NETWORK DEVICES, CONTROL METHOD THEREOF, AND RECORDING MEDIUM - A control method including acquiring and storing, when generating a task in which an object and a network device to which to transmit the object are set, information about the object to be processed in the task; detecting, when executing the task, whether information about the object to be processed in the task is changed from the information about the object stored when the task is generated, according to a setting of the task or the object to be processed in the task; cancelling, when it is detected that there is a change in the information about the object, execution of the task; and transmitting, when it is detected that there is no change in the information about the object, the object processed in the task by executing the task.10-20-2011
20100281482APPLICATION EFFICIENCY ENGINE - A system and a method are provided. Performance and capacity statistics, with respect to an application executing on one or more VMs, may be accessed and collected. The collected performance and capacity statistics may be analyzed to determine an improved hardware profile for efficiently executing the application on a VM. VMs with a virtual hardware configuration matching the improved hardware profile may be scheduled and deployed to execute the application. Performance and capacity statistics, with respect to the VMs, may be periodically analyzed to determine whether a threshold condition has occurred. When the threshold condition has been determined to have occurred, performance and capacity statistics, with respect to VMs having different configurations corresponding to different hardware profiles, may be automatically analyzed to determine an updated improved hardware profile. VMs for executing the application may be redeployed with virtual hardware configurations matching the updated improved profile.11-04-2010
20100281483PROGRAMMABLE SCHEDULING CO-PROCESSOR - A scheduling co-processor for scheduling the execution of threads on a processor is disclosed. In certain embodiments, the scheduling co-processor includes one or more engines (such as lookup tables) that are programmable with a Petri-net representation of a thread scheduling algorithm. The scheduling co-processor may further include a token list to store tokens associated with the Petri-net; an enabled-thread list to indicate which threads are enabled for execution in response to particular tokens being present in the token list; and a ready-thread list to indicate which threads from the enabled-thread list are ready for execution when data and/or space availability conditions associated with the threads are satisfied.11-04-2010
20100083258SCHEDULING EXECUTION CONTEXTS WITH CRITICAL REGIONS - A scheduler in a process of a computer system detects an execution context that blocked from outside of the scheduler while in a critical region. The scheduler ensures that the execution context resumes execution on the processing resource of the scheduler on which the execution context blocked when the execution context becomes unblocked. The scheduler also prevents another execution context from entering a critical region on the processing resource prior to the blocked execution context becoming unblocked and exiting the critical region.04-01-2010
20120204183ASSOCIATIVE DISTRIBUTION UNITS FOR A HIGH FLOWRATE SYNCHRONIZER/SCHEDULE - An apparatus (08-09-2012
20120204182PROGRAM GENERATING APPARATUS AND PROGRAM GENERATING METHOD - A program generating apparatus includes a second program generating unit to generate a second program including a memory image that reproduces data used to execute a subsection by a first arithmetic unit, subsection information including initial value information at the start position of the subsection, a program controlling portion to store the memory image in a second storing unit used by a second arithmetic unit, to set the second arithmetic unit to the same state as the first arithmetic unit at the start position of the subsection, and to cause the second arithmetic unit to execute the subsection of a first program, a monitor program including a function needed to execute the first program, and a monitor program initializing portion to make settings for causing the monitor program to provide a service requested when the second arithmetic unit executes the first program.08-09-2012
20090288088PARALLEL EFFICIENCY CALCULATION METHOD AND APPARATUS - This invention is to provide a parallel efficiency calculation method, which can be applied, even in a case where a load balance is not kept, to many parallel processings including a heterogeneous computer system environment, and quantitatively correlates a parallel efficiency with a load balance contribution ratio and a virtual parallelization ratio, as parallel performance evaluation indexes, and parallel performance impediment factor contribution ratios. A parallel efficiency E11-19-2009
20080320480SYSTEM FOR DETERMINING ARRAY SEQUENCE OF A PLURALITY OF PROCESSING OPERATIONS - A method and system for determining an array sequence of processing operations to maximize the efficiency of steel plate processing. Between two processing operations, a first sequence constraint based on a first attribute of each processing operation and a second sequence constraint based on a second attribute of each processing operation are defined. A system selects, as a cluster, at least one of processing operations having a common attribute value of the first attribute, and arranged in a sequence satisfying the second sequence constraint. The system regards the first sequence constraint as a sequence constraint between a plurality of clusters, and arranges the plurality of clusters in a sequence maximizing the efficiency of processing.12-25-2008
20080320478AGE MATRIX FOR QUEUE DISPATCH ORDER - An apparatus for queue allocation. An embodiment of the apparatus includes a dispatch order data structure, a bit vector, and a queue controller. The dispatch order data structure corresponds to a queue. The dispatch order data structure stores a plurality of dispatch indicators associated with a plurality of pairs of entries of the queue to indicate a write order of the entries in the queue. The bit vector stores a plurality of mask values corresponding to the dispatch indicators of the dispatch order data structure. The queue controller interfaces with the queue and the dispatch order data structure. The queue controller excludes at least some of the entries from a queue operation based on the mask values of the bit vector.12-25-2008
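The general age-matrix idea behind the preceding abstract (pairwise indicators record write order, a mask bit vector excludes entries, and the oldest non-excluded entry wins dispatch) can be modelled in software as below. This is an illustrative model of the technique, not the claimed hardware.

    class AgeMatrixQueue:
        def __init__(self, size):
            self.size = size
            self.older = [[False] * size for _ in range(size)]   # older[i][j]: entry i written before entry j
            self.valid = [False] * size
            self.mask = [False] * size                           # True = excluded from dispatch

        def write(self, slot):
            # The new entry is younger than every currently valid entry.
            for j in range(self.size):
                self.older[slot][j] = False
                self.older[j][slot] = self.valid[j]
            self.valid[slot] = True

        def dispatch(self):
            # Pick the valid, unmasked entry that no other valid, unmasked entry predates.
            for i in range(self.size):
                if not self.valid[i] or self.mask[i]:
                    continue
                if not any(self.older[j][i] and self.valid[j] and not self.mask[j]
                           for j in range(self.size)):
                    self.valid[i] = False
                    return i
            return None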
20080320477METHOD FOR SCHEDULING AND CUSTOMIZING SURVEILLANCE TASKS WITH WEB-BASED USER INTERFACE - A customized surveillance task management system is implemented to intelligently schedule tasks for the user. Via an internet connection a user accesses a list of surveillances to be accomplished. The schedule of surveillances is created from information initially loaded into a centralized database that is subsequently analyzed by the schedule engine and written back into the database. Following execution of these surveillances the user again accesses the system to input data acquired. The user then inputs data to the database via the internet interface through preset database fields rendered to the client machine. This data is again analyzed by the scheduling engine and, with the help of the scheduling, criticality, random sampling, and surveillance method assistants, provides the user with an updated schedule list and best set of surveillance methods dependent upon pass/fail rates and criticality of failures.12-25-2008
20080229312Processor register architecture - The invention provides a processor comprising an execution unit for executing multiple threads, each thread comprising a sequence of instructions and each thread being designated to handle activity from at least one specified source. The processor also comprises a thread scheduler for scheduling a plurality of threads to be executed by the execution unit, said scheduling being based on the respective activity handled by the threads; and a plurality of sets of registers connected to the execution unit. Each set of registers is arranged to store information representing a respective one of the plurality of threads, at least a part of the information being accessible by the execution unit for use in executing the respective thread when scheduled.09-18-2008
20090187910METHOD AND SYSTEM FOR AUTOMATED SCHEDULE CONTROL - A method for automated schedule control is disclosed. When a schedule appointment process is performed, an open services gateway initiative framework of an electronic device performs an automatic schedule control operation, detecting whether an execution for a schedule is required. If required, it is determined whether the schedule is an update operation. If so, a start or stop operation for a bundle corresponding to the schedule is performed. If not, the electronic device connects to a remote database at a preset time to determine whether a new manifest for the bundle corresponding to the schedule is detected. If detected, the new manifest is retrieved from the remote database and the bundle is accordingly updated thereto.07-23-2009
20080229310Processor instruction set - The invention provides a processor comprising: an execution unit, and a thread scheduler configured to schedule a plurality of threads for execution by the execution unit in dependence on a respective runnable status for each thread. The execution unit is configured to execute thread scheduling instructions which manage the runnable statuses. The thread scheduling instructions include at least: one or more source event enable instructions each of which sets an event source to a mode in which it generates an event dependent on activity occurring at that source, and a wait instruction which sets one of said runnable statuses to suspended pending one of the events upon which continued execution of the respective thread depends. The continued execution comprises retrieval of a continuation point vector for the respective thread.09-18-2008
20090125908Hardware Port Scheduler - According to one embodiment, an apparatus is disclosed. The apparatus includes a port having a plurality of lanes and a plurality of protocol engines. Each protocol engine is associated with one of the plurality of lanes, and processes tasks to be forwarded to a plurality of remote nodes. The apparatus also includes a first port task scheduler (PTS) to manage the tasks to be forwarded to one or more of the plurality of protocol engines. The first PTS includes a register to indicate which of the plurality of protocol engines the first PTS is to support.05-14-2009
20090089784VARIABLE POLLING INTERVAL BASED ON HISTORICAL TIMING RESULTS - A method, system, and computer program product for computing an optimal time interval between polling requests to determine whether an asynchronous operation is completed, in a data processing system. A Polling Request Interval (PRI) utility determines the optimal time interval between successive polling requests, based on historical job completion results. The PRI utility first determines an average job time for previously completed operations. The PRI utility then retrieves a pair of preset configuration parameters including (1) a first parameter which provides the minimum time interval between successive polling requests; and (2) a second parameter which provides the fraction of the average task time added to the first parameter to obtain the time interval between (successive) polling requests. The PRI utility calculates the optimal time between polling requests based on the average job time and the retrieved configuration parameters.04-02-2009
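The interval computation described above reduces to one line once the two configuration parameters are in hand; the function below is a sketch using assumed parameter names.

    def polling_interval(completed_durations, min_interval, fraction):
        # completed_durations: historical completion times of prior operations (seconds)
        # min_interval: configured minimum time between successive polling requests
        # fraction: configured fraction of the average job time to add to the minimum
        if not completed_durations:
            return min_interval
        average_job_time = sum(completed_durations) / len(completed_durations)
        return min_interval + fraction * average_job_time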
20080282247Method and Server for Synchronizing a Plurality of Clients Accessing a Database - The invention relates to a method of synchronizing a plurality of clients accessing a database, each client executing a plurality of tasks on the database, wherein the method comprises for each of the clients the steps of accumulating the time of one or more tasks performed by the client after the issuance of a synchronization request and rejecting a request for the opening of a new task of the client, if the accumulated task time exceeds a maximum accumulated task time.11-13-2008
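A small sketch of the synchronization throttle described above, with invented names: task time is accumulated per client from its synchronization request onward, and a new task is refused once the accumulated time passes the configured maximum.

    class ClientSyncThrottle:
        def __init__(self, max_accumulated_seconds):
            self.max_accumulated = max_accumulated_seconds
            self.accumulated = {}                    # client id -> task seconds since sync request

        def on_sync_request(self, client):
            self.accumulated[client] = 0.0           # start accumulating at the sync request

        def on_task_finished(self, client, duration):
            if client in self.accumulated:
                self.accumulated[client] += duration

        def may_open_task(self, client):
            # Reject the request to open a new task once the limit is exceeded.
            return self.accumulated.get(client, 0.0) <= self.max_accumulated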
20080282246COMPILER AIDED TICKET SCHEDULING OF TASKS IN A COMPUTING SYSTEM - A method of scheduling tasks for execution in a computer system includes determining a dynamic worst case execution time for a non-periodic task. The dynamic worst case execution time is based on an actual execution path of the non-periodic task. An available time period is also determined, wherein the available time period is an amount of time available for execution of the non-periodic task. The non-periodic task is scheduled for execution if the dynamic worst case execution time is less than the available time period.11-13-2008
20120204181RECONFIGURABLE DEVICE, PROCESSING ASSIGNMENT METHOD, PROCESSING ARRANGEMENT METHOD, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD THEREFOR - According to the present invention, in changing the circuit configuration of a reconfigurable device, a circuit configuration change period is shortened while avoiding a dependency on processing contents without increasing the size of a circuit due to addition of a mechanism. Considering an execution order relation between a plurality of data flows, a setting change count necessary for changing the circuit configuration in changing processing is decreased within a constraint range, thereby shortening the circuit configuration change period.08-09-2012
20110161967INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING SAME, AND STORAGE MEDIUM - When an instruction to change the job execution limit information is made, a policy server determines whether or not the changed job execution limit information indicates that the execution of the job by the job execution unit is not limited. When the changed job execution limit information indicates that the execution of the job is not limited and the setting is made such that the job history information for the job is recorded on the image processing apparatus, the policy server applies the changed job execution limit information to the image processing apparatus.06-30-2011
20110161966Controlling parallel execution of plural simulation programs - A non-transitory recording medium has a scheduler program embodied therein for controlling parallel execution of plural simulation programs, the scheduler program causing a computer to perform a parallel execution procedure by which the plural simulation programs are performed in parallel during a period in which there is no data exchange between the plural simulation programs, and a sequential execution procedure by which the plural simulation programs are sequentially performed during a period in which there is data exchange between the plural simulation programs.06-30-2011
20110161965JOB ALLOCATION METHOD AND APPARATUS FOR A MULTI-CORE PROCESSOR - A method and apparatus for performing pipeline processing in a computing system having multiple cores are provided. To pipeline process an application in parallel and in a time-sliced fashion, the application may be divided into two or more stages and executed stage by stage. A multi-core processor including multiple cores may collect correlation information between the stages and allocate additional jobs to the cores based on the collected information.06-30-2011
20110161964Utility-Optimized Scheduling of Time-Sensitive Tasks in a Resource-Constrained Environment - Systems and methods implementing utility-maximized scheduling of time-sensitive tasks in a resource constrained-environment are described herein. Some embodiments include a method for utility-optimized scheduling of computer system tasks performed by a processor of a first computer system that includes determining a time window including a candidate schedule of a new task to be executed on a second computer system, identifying other tasks scheduled to be executed on the second computer system within said time window, and identifying candidate schedules that each specifies the execution times for at least one of the tasks (which include the new task and the other tasks). The method further includes calculating an overall utility for each candidate schedule based upon a task utility calculated for each of the tasks when scheduled according to each corresponding candidate schedule and queuing the new task for execution according to a preferred schedule with the highest overall utility.06-30-2011
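The schedule selection step of the preceding abstract amounts to scoring each candidate schedule by the sum of its per-task utilities and keeping the best; the sketch below assumes a caller-supplied task_utility function and dictionary-shaped schedules.

    def choose_schedule(candidate_schedules, task_utility):
        # candidate_schedules: non-empty list of {task: start_time} mappings
        # task_utility(task, start_time): utility of running the task at that time
        def overall_utility(schedule):
            return sum(task_utility(task, start) for task, start in schedule.items())
        return max(candidate_schedules, key=overall_utility)   # preferred schedule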
20110161960PROGRESS-DRIVEN PROGRESS INFORMATION IN A SERVICE-ORIENTED ARCHITECTURE - A system may include reception of the first instruction, execution of the business process in a first software work process, reception, during execution of the business process, of an indication of a business object process associated with the business process, determination of progress information associated with the business process based on the indication of the business object process, and storage of the progress information within a memory. Aspects may further include reception, at a second work process, of a request from the client application for progress information, retrieval of the progress information from the shared memory and provision of the progress information to the client application.06-30-2011
20110161962DATAFLOW COMPONENT SCHEDULING USING READER/WRITER SEMANTICS - The scheduling of dataflow components in a dataflow network. A number, if not all, of the dataflow components are created using a domain/agent model. A scheduler identifies, for a number of the components, a creation source for the given component. The scheduler also identifies an appropriate domain-level access permission (and potentially also an appropriate agent-level access permission) for the given component based on the creation source of the given component. Tokens may be used at the domain or agent level to control access.06-30-2011
20110161961METHOD AND APPARATUS FOR OPTIMIZED INFORMATION TRANSMISSION USING DEDICATED THREADS - An approach is provided for optimized information transmission using dedicated threads. A thread manager receives a request from a device for content information. The thread manager assigns the request to a worker thread for processing to generate the content information. The thread manager further determines whether the worker thread has completed the processing of the content information. The thread manager delegates the processed content information to a transmission thread based, at least in part, on the determination, wherein the transmission thread causes, at least in part, transfer of the processed content information. The thread manager releases the worker thread from the assigned request.06-30-2011
20080301686METHOD AND APPARATUS FOR EXTENDING OPERATIONS OF AN APPLICATION IN A DATA PROCESSING SYSTEM - A method, an apparatus, and computer instructions are provided for extending operations of an application in a data processing system. A primary operation is executed. All extended operations of the primary operation are cached and pre and post operation identifiers are identified. For each pre operation identifier, a pre operation instance is created and executed. For each post operation identifier, a post operation instance is created and executed.12-04-2008
20080301685Identity-aware scheduler service - In a computing environment, clients and scheduling services are arranged to coordinate time-based services. Representatively, the client and scheduler engage in an http session whereby the client creates an account (if the first usage) indicating various identities and rights of the client for use with a scheduling job. Thereafter, one or more scheduling jobs are registered including an indication of what payloads are needed, where needed and when needed. Upon appropriate timing, the payloads are delivered to the proper locations, but the scheduling of events is no longer entwined with underlying applications in need of scheduled events. Monitoring of jobs is also possible as is establishment of appropriate communication channels between the parties. Noticing, encryption, and authentication are still other aspects as are launching third party services before payload delivery. Still other embodiments contemplate publishing an API or other particulars so the service can be used in mash-up applications.12-04-2008
20080301683Performing an Allreduce Operation Using Shared Memory - Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.12-04-2008
20120311595Optimizing Workflow Engines - Techniques for implementing a workflow are provided. The techniques include merging a workflow to create a virtual graph, wherein the workflow comprises two or more directed acyclic graphs (DAGs), mapping each of one or more nodes of the virtual graph to one or more physical nodes, and using a message passing scheme to implement a computation via the one or more physical nodes.12-06-2012
20120311594PROGRAM, DEVICE, AND METHOD FOR BUILDING AND MANAGING WEB SERVICES12-06-2012
20120311593ASYNCHRONOUS CHECKPOINT ACQUISITION AND RECOVERY FROM THE CHECKPOINT IN PARALLEL COMPUTER CALCULATION IN ITERATION METHOD - A method and system to acquire checkpoints in making iteration-method computer calculations in parallel and to effectively utilize the acquired data for recovery. At the time of acquiring a checkpoint in parallel calculation that repeats an iteration method, each node independently acquires the checkpoint in parallel with the calculation without stopping the calculation. Thereby, it is possible to perform both the calculation and the checkpoint acquisition in parallel. In the case where the calculation does not impose an I/O bottleneck, checkpoint acquisition time is overlapped, and execution time is reduced. In this method, checkpoint data including values at different points of time during the acquisition process is acquired. By limiting the use to iteration-method convergence calculations, a mixture of values from different points of time in the checkpoint data is acceptable for problems in which the convergence destination does not depend on the initial value.12-06-2012
20120311592MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which a scheduling function of giving a processing order to each of a plurality of tasks is supported. The present invention includes a memory including an operating system having a scheduler configured to perform a second scheduling function on a plurality of tasks, each having a processing order first-scheduled in accordance with a first reference and a processor performing an operation related to the operating system, the processor processing a plurality of the tasks. Moreover, if a first task among a plurality of the first-scheduled tasks meets a second reference, the scheduler performs the second scheduling function by changing the processing orders to enable the first task to be preferentially processed.12-06-2012
20120311591LICENSE MANAGEMENT IN A CLUSTER ENVIRONMENT - Embodiments are directed to managing and verifying licenses in a cluster computer system environment. In an embodiment, a license management application running on a computer system cluster manager receives a job that has multiple job tasks as well as portions of job information. The license management application determines from the job information how many licenses and computer nodes are to be assigned to the job. The license management application checks out the determined number of licenses from a license distributing application on behalf of the received job. The license management application indicates to a scheduler of the computer system cluster manager that one job task is to be run per checked out license.12-06-2012
20120311590RESCHEDULING ACTIVE DISPLAY TASKS TO MINIMIZE OVERLAPPING WITH ACTIVE PLATFORM TASKS - In general, in one aspect, a mobile device display includes panel electronics, a backlight driver and a rescheduler. The panel electronics is to generate images on an optical stack of the display based on input from a processing platform of the mobile device. The backlight driver is to control operation of a backlight used to illuminate the optical stack so that the user can see the images generated on the display. The rescheduler is to determine when a timing critical task of the processing platform overlaps with a non-timing critical task of the panel electronics or the backlight driver and reschedule the non-timing critical task until the timing critical task is inactive or a visual tolerance limit has been reached. The rescheduling minimizes overlap between the timing critical tasks and non-timing critical tasks and accordingly reduces power consumption without affecting performance or impacting a user's visual experience.12-06-2012
20120311589SYSTEMS AND METHODS FOR PROCESSING HIERARCHICAL DATA IN A MAP-REDUCE FRAMEWORK - Methods and arrangements for processing hierarchical data in a map-reduce framework. Hierarchical data is accepted, and a map-reduce job is performed on the hierarchical data. This performing of a map-reduce job includes determining a cost of partitioning the data, determining a cost of redefining the job and thereupon selectively performing at least one step taken from the group consisting of: partitioning the data and redefining the job.12-06-2012
20110047552ENERGY-AWARE PROCESS ENVIRONMENT SCHEDULER - A device receives a request associated with a process, and determines one or more current states of one or more process resources used to execute the process request. The device also calculates a power consumption associated with execution of the process request by the one or more process resources, and assigns an urgency for the process request, where the urgency corresponds to a time-variant parameter that indicates a measure of necessity for the execution of the process request. The device further determines whether the execution of the process request can be delayed to a future time based on the one or more current states, the power consumption, and the urgency, and causes, based on the determination, the process request to be executed or delayed to the future time.02-24-2011
20110055839Multi-Core/Thread Work-Group Computation Scheduler - Execution units process commands from one or more command queues. Once a command is available on the queue, each unit participating in the execution of the command atomically decrements the command's work groups remaining counter by the work group reservation size and processes a corresponding number of work groups within a work group range. Once all work groups within a range are processed, an execution unit increments a work group processed counter. The unit that increments the work group processed counter to the value stored in a work groups to be executed counter signals completion of the command. Each execution unit that accesses a command also marks a work group seen counter. Once the work groups processed counter equals the work groups to be executed counter and the work group seen counter equals the number of execution units, the command may be removed or overwritten on the command queue.03-03-2011
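The counter protocol in the preceding abstract can be mimicked in software with a lock standing in for the atomic operations; the class below is a sketch with invented names, not the claimed scheduler.

    import threading

    class WorkGroupCommand:
        def __init__(self, total_work_groups, reservation_size, num_units):
            self.total = total_work_groups      # work groups to be executed counter
            self.chunk = reservation_size       # work group reservation size
            self.num_units = num_units
            self.remaining = total_work_groups  # work groups remaining counter
            self.processed = 0                  # work groups processed counter
            self.seen = 0                       # execution units that accessed the command
            self.lock = threading.Lock()

        def run(self, process_range):
            with self.lock:
                self.seen += 1                  # mark the work group seen counter
            signalled = False
            while True:
                with self.lock:
                    if self.remaining <= 0:
                        break
                    start = self.total - self.remaining
                    count = min(self.chunk, self.remaining)
                    self.remaining -= count     # reserve the next range of work groups
                process_range(start, count)
                with self.lock:
                    self.processed += count
                    signalled = self.processed == self.total   # this unit completed the command
            return signalled

        def removable(self):
            with self.lock:
                return self.processed == self.total and self.seen == self.num_units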
20110023042SCALABLE SOCKETS - A data processing system supporting a network interface device and comprising: a plurality of sets of one or more data processing cores; and an operating system arranged to support at least one socket operable to accept data received from the network, the data belonging to one of a plurality of data flows; wherein the socket is configured to provide an instance of at least some of the state associated with the data flows per said set of data processing cores.01-27-2011
20110265087Apparatus, method, and computer program product for solution provisioning - In one embodiment, an apparatus for solution provisioning includes a task manager configured to establish a provisioning task and obtain a provisioning image for the provisioning task in response to a request, and a provisioning implementer configured to execute and monitor the provisioning task established by the task manager. The task manager configures and launches the provisioning implementer based on the provisioning image obtained, and the provisioning image includes configuration information and scripts used for executing installation, and information for mapping the configuration information to the scripts. In another embodiment, a method includes establishing a provisioning task in response to a received solution provisioning request, obtaining a provisioning image for the provisioning task, configuring and launching a provisioning implementer based on the obtained provisioning image, and executing and monitoring the provisioning task using the provisioning implementer. Other systems, methods, and computer program products are described according to other embodiments.10-27-2011
20110078688Virtualizing A Processor Time Counter - In one embodiment, the present invention includes a method for determining a scaling factor between a frequency of a first processor and a frequency of a second processor after guest software is migrated from the first processor to the second processor, and executing the guest software on the second processor using a virtual counter based on a physical counter of the second processor and the scaling factor. Other embodiments are described and claimed.03-31-2011
20100293547INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS, AND PROGRAM - A loss of convenience that may occur when a process flow usable by a specific user is registered as a process flow commonly usable by multiple users is reduced. To accomplish this, an information processing apparatus includes a registration unit that registers a process flow for executing predetermined processing according to a predefined set value, the process flow being registered as a process flow that is usable by a specific user or a process flow that is commonly usable by a plurality of users, a changing unit that changes the process flow that is usable by the specific user to the process flow that is usable by the plurality of users, and a control unit that, when the changing unit changes the process flow, allows a user to change the set value to another set value.11-18-2010
20110126202Thread folding tool - A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display.05-26-2011
20110126201Event Processing Networks - A hybrid event processing network (EPN) having at least one event processing agent (EPA) consists of a first set of EPAs defined declaratively and a second set of EPAs defined dynamically at runtime via an interface. Deploying the hybrid EPN includes loading the hybrid EPN, constructing an EPN structure, and creating indexes of nodes of the EPN structure. Deploying the hybrid EPN further includes representing an event in a hybrid EPN, and, in response to the event occurrence at an event source, receiving a notification from the hybrid EPN based on the event, and publishing the notification in an event channel. Embodiments of the invention include propagating the event received within the hybrid EPN, determining a subsequent EPA associated with the event within the hybrid EPN, and propagating the event to the subsequent EPA in the hybrid EPN until the last element of the hybrid EPN is reached.05-26-2011
20110138391CONTINUOUS OPTIMIZATION OF ARCHIVE MANAGEMENT SCHEDULING BY USE OF INTEGRATED CONTENT-RESOURCE ANALYTIC MODEL - A system and associated method for continuously optimizing data archive management scheduling. A job scheduler receives, from an archive management system, inputs of task information, replica placement data, infrastructure topology data, and resource performance data. The job scheduler models a flow network that represents data content, software programs, physical devices, and communication capacity of the archive management system in various levels of vertices according to the received inputs. An optimal path in the modeled flow network is computed as an initial schedule, and the archive management system performs tasks according to the initial schedule. The operations of scheduled tasks are monitored and the job scheduler produces a new schedule based on feedbacks of the monitored operations and predefined heuristics.06-09-2011
20110138392OPERATING METHOD FOR A COMPUTER WITH PERFORMANCE OPTIMIZATION BY GROUPING APPLICATIONS - In at least one embodiment, if the pre-start level has the value empty container, the computer creates a container within the framework of the pre-start but does not load any application into the container. If the pre-start level has the value application, the computer creates a respective container within the framework of the pre-start for each application. If the pre-start level has a higher value, the computer determines within the framework of the pre-start a degree of grouping for the applications assigned to the respective pre-started unit, and groups the applications in accordance with the degree of grouping determined into at least one container group. Within the framework of the processing of the complex tasks, on switching from one application to another application, the computer terminates the application still being executed only if that application is not able to be suspended.06-09-2011
20110088036Automated Administration Using Composites of Atomic Operations - Various techniques for automatically administering software systems using composites of atomic operations are disclosed. One method, which can be performed by an automation server, involves accessing information representing an activity that includes a first operation and a second operation. The information indicates that the second operation processes a value that is generated by the first operation. The method generates a sequence number as well as an output structure, which associates the sequence number with an output value generated by the first operation, and an input structure, which associates the sequence number with an input value consumed by the second operation. The method sends a message, via a network, to an automation agent implemented on a computing device. The computing device implements a software target of the first operation. The message includes information identifying the first operation as well as the output structure.04-14-2011
20110093856Thermal-Based Job Scheduling Among Server Chassis Of A Data Center - Thermal-based job scheduling among server chassis of a data center including identifying, by a data center management module in dependence upon a threshold fan speed for each server chassis, a plurality of server chassis having servers upon which one or more compute intensive jobs are executing, the data center management module comprising a module of automated computing machinery; identifying, by the data center management module, the compute intensive jobs currently executing on the identified plurality of server chassis; and moving, by the data center management module, the execution of the compute intensive jobs to one or more servers of chassis for compute intensive jobs.04-21-2011
20120210324Extended Dynamic Optimization Of Connection Establishment And Message Progress Processing In A Multi-Fabric Message Passing Interface Implementation - In one embodiment, the present invention includes a system that can optimize message passing by, at least in part, automatically determining a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests, and preventing processing of new connection requests and data transfer requests outside of a predetermined communication pattern. Other embodiments are described and claimed.08-16-2012
20090300628LOG QUEUES IN A PROCESS - A logger in a process of a computer system creates a log queue for each execution context and/or processing resource in the process. A log is created in the log queue for each log request and log information associated with the log request is stored into the log. All logs in each log queue except for the most recently added log in each log queue are flushed prior to the process completing.12-03-2009
20090300624Tracking data processing in an application carried out on a distributed computing system - Methods, systems, and products are disclosed for tracking data processing in an application carried out on a distributed computing system, the distributed computing system including a plurality of computing nodes connected through a data communications network, the application carried out by a plurality of pluggable processing components installed on the plurality of computing nodes, the pluggable processing components including a pluggable processing provider component and a pluggable processing consumer component, that include: identifying, by the provider component, data satisfying predetermined processing criteria, the criteria specifying that the data is relevant to processing provided by the consumer component; passing, by the provider component, the data to the next pluggable processing component in the application for processing, including maintaining access to the data; receiving, by the consumer component, the data during execution of the application; and sending, by the consumer component, a receipt indicating that the consumer component received the data.12-03-2009
20090300627SCHEDULER FINALIZATION - A runtime environment allows a scheduler in a process of a computer system to be finalized prior to the process completing. The runtime environment causes execution contexts that are inducted into the scheduler and execution contexts created by the scheduler to be tracked. The runtime environment finalizes the scheduler subsequent to each inducted execution context exiting the scheduler and each created execution context being retired by the scheduler.12-03-2009
20090300625Managing The Performance Of An Application Carried Out Using A Plurality Of Pluggable Processing Components - Methods, apparatus, and products are disclosed for managing the performance of an application carried out using a plurality of pluggable processing components, the pluggable processing components executed on a plurality of compute nodes, that include: identifying a current configuration of the pluggable processing components for carrying out the application; receiving a plurality of performance indicators produced during execution of the pluggable processing components; and altering the current configuration of the pluggable processing components in dependence upon the performance indicators and one or more additional pluggable processing components.12-03-2009
20090300623METHODS AND SYSTEMS FOR ASSIGNING NON-CONTINUAL JOBS TO CANDIDATE PROCESSING NODES IN A STREAM-ORIENTED COMPUTER SYSTEM - A system and method for choosing non-continual jobs to run in a stream-based distributed computer system includes determining a total amount of resources to be consumed by non-continual jobs. A priority threshold is determined above which jobs will be accepted and below which jobs will be rejected. Overall penalties are minimized relative to the priority threshold based on estimated completion times of the jobs. System constraints are applied to ensure that jobs meet set criteria, such that a plurality of non-continual jobs are scheduled in a way that respects the system constraints and minimizes overall penalties using available resources.12-03-2009
20090293060METHOD FOR JOB SCHEDULING WITH PREDICTION OF UPCOMING JOB COMBINATIONS - A method for scheduling different combinations of jobs simultaneously running on a shared hardware platform is disclosed. Schedules may be created while executing the current set of jobs, for one or more possible sets of jobs that may occur after a change in the current set of jobs. In at least one embodiment, the present invention may be implemented in an SDR system where the jobs may correspond to radios in the SDR system. The possible combinations of radios that may occur after a change in the set of currently running radios may be determined at run time by adding or removing one radio at a time from the set of currently running radios.11-26-2009
20110191779RECORDING MEDIUM STORING THEREIN JOB SCHEDULING PROGRAM, JOB SCHEDULING APPARATUS, AND JOB SCHEDULING METHOD - A job scheduling apparatus determines an assignment order, which is the order in which jobs are assigned to a computational resource, on the basis of priority levels that are set for the jobs and associated with the assignment order. The apparatus assigns the jobs to the computational resource on the basis of the assignment order. The apparatus reduces the priority levels for the jobs that have been assigned to the computational resource. The apparatus increases the priority levels with time. Regarding a priority level among the priority levels, if, at a future time, which is a fixed time period from the start of execution of the jobs, an amount of an increase in the priority level is expected to be equal to or larger than an amount of a reduction in the priority level for a job, assignment of the job to the computational resource is executed.08-04-2011
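A toy model of the priority dynamics described above, under the simplifying assumptions of one computational resource, a uniform aging rate, and a fixed reduction per assignment; all names are illustrative.

    class AgingScheduler:
        def __init__(self, growth_rate, reduction, horizon):
            self.growth_rate = growth_rate   # priority recovered per unit of time
            self.reduction = reduction       # priority deducted when a job is assigned
            self.horizon = horizon           # fixed look-ahead period from the start of execution
            self.priorities = {}             # job -> current priority level

        def add(self, job, priority):
            self.priorities[job] = priority

        def tick(self, dt):
            for job in self.priorities:
                self.priorities[job] += self.growth_rate * dt   # priorities rise with time

        def assign_next(self):
            if not self.priorities:
                return None
            # Assign only if the increase expected by the future horizon at least
            # matches the reduction applied at assignment time.
            if self.growth_rate * self.horizon < self.reduction:
                return None
            job = max(self.priorities, key=self.priorities.get)
            self.priorities[job] -= self.reduction               # reduce the assigned job's priority
            return job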
20100031264MANAGEMENT APPARATUS AND METHOD FOR CONTROLLING THE SAME - A management apparatus for managing a production apparatus that executes a plurality of processes in accordance with a production plan detects an amount of a release-forgotten memory area that is kept allocated on a memory of the production apparatus by each process even after completion of the process. The management apparatus determines an amount of remaining memory based on the detected amount of the release-forgotten memory area and retrieves a process executable with the amount of remaining memory. The management apparatus determines a process to be executed next in accordance with either a result of the retrieval executed based on the detected amount of the release-forgotten memory area or the production plan and controls the production apparatus to execute the determined process.02-04-2010
20100031263PROCESS MODEL LEAN NOTATION - A process model lean notation provides an easy to understand way to categorize the process elements of a process using a process definition grammar. Process model lean notation allows an organization to rapidly identify the process elements of a process and the interactions between the process elements, and produces a process categorization that includes an ordered sequence of the process elements. A process categorization provides a structured presentation of the process elements and clearly indicates for each process element the task accomplished, the actor responsible for and/or performing the task, the tool that may be used to perform the task, and the work product that may result by performing the task.02-04-2010
20100031262Program Schedule Sub-Project Network and Calendar Merge - A master project file and one or more sub-project files are merged to form a merged master project file while avoiding date shifting and pointers to external files, accommodating equally named resources and calendars, and accommodating split tasks which may otherwise be caused by differing settings or defaults for files created or modified on different processors or by other incompatibility between the master project file and sub-project files. This is done by copying data reconstructed from the original settings, defaults and the like of the original sub-project file to descriptive fields in the merged master project file to resolve settings which must match between the master project file and the sub-project file, while altering names of tasks or files as necessary and validating merged task data against the copied data.02-04-2010
20100023946USER-LEVEL READ-COPY UPDATE THAT DOES NOT REQUIRE DISABLING PREEMPTION OR SIGNAL HANDLING - A user-level read-copy update (RCU) technique. A user-level RCU subsystem executes within threads of a user-level multithreaded application. The multithreaded application may include reader threads that read RCU-protected data elements in a shared memory and updater threads that update such data elements. The reader and updater threads may be preemptible and comprise signal handlers that process signals. Reader registration and unregistration components in the RCU subsystem respectively register and unregister the reader threads for RCU critical section processing. These operations are performed while the reader threads remain preemptible and with their signal handlers being operational. A grace period detection component in the RCU subsystem considers a registration status of the reader threads and determines when it is safe to perform RCU second-phase update processing to remove stale versions of updated data elements that are being referenced by the reader threads, or take other RCU second-phase update processing actions.01-28-2010
20100017805DATA PROCESSING APPARATUS, METHOD FOR CONTROLLING DATA PROCESSING APPARATUS,AND COMPUTER-READABLE STORAGE MEDIUM - When a plurality of jobs are processed using a plurality of data processing units, data formats of the jobs to be processed can be determined to distribute a data processing load of the data processing units. A method for controlling a data processing apparatus for causing a plurality of data processing units to process data of a job includes storing data of a first job in a storing unit in first and second data formats, determining whether to process the stored data of the first job in the first or second data format, and causing the plurality of data processing units to process the data in the determined data format. The determination is made based on whether processing of data of a second job by the first or second processing unit requires longer time.01-21-2010
20100017804Thread-to-Processor Assignment Based on Affinity Identifiers - For each thread of a computer program to be executed on a multiple-processor computer system, an affinity identifier is associated to the thread by the computer program. The affinity identifiers of the threads denote how closely related the threads are. For each thread, a processor of the multiple-processor computer system on which the thread is to be executed is selected based on the affinity identifiers of the threads, by an operating system being executed on the multiple-processor computer system and in relation to which the computer programs are to be executed. Each thread is then executed by the processor selected for the thread.01-21-2010
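The placement rule described above, threads sharing an affinity identifier run on the same processor, can be sketched as a simple grouping pass; the round-robin spreading of unrelated groups is an added assumption, since the abstract leaves the selection policy to the operating system.

    def place_threads(threads, num_processors):
        # threads: iterable of (thread_name, affinity_id) pairs
        placement = {}
        processor_of_affinity = {}
        next_processor = 0
        for name, affinity in threads:
            if affinity not in processor_of_affinity:
                processor_of_affinity[affinity] = next_processor % num_processors
                next_processor += 1               # spread unrelated affinity groups round-robin
            placement[name] = processor_of_affinity[affinity]
        return placement                          # thread_name -> selected processor index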
20110307896Method and Apparatus for Scheduling Plural Tasks - A method is provided for scheduling a first task and a second task, wherein the first task is to be performed repeatedly with a predetermined first repetition time interval and the second task is to be performed repeatedly with a predetermined second repetition time interval. The method includes: scheduling the first task for performing the first task at first time points and scheduling the second task for performing the second task at second time points, wherein each of the second time points is different from any of the first time points. Further an apparatus for scheduling a first task and a second task is provided.12-15-2011
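One way to realize the non-overlapping time points of the preceding abstract is to phase-shift the second task by an offset that is not a multiple of gcd(period_a, period_b); the sketch below uses that number-theoretic observation with integer periods and invented names.

    from math import gcd

    def disjoint_time_points(period_a, period_b, horizon):
        # Two periodic schedules share a time point only if the phase offset of the
        # second is a multiple of gcd(period_a, period_b); any other offset keeps
        # the two sets of firing times disjoint.
        g = gcd(period_a, period_b)
        offset = g / 2 if g > 1 else 0.5          # never a multiple of g
        first = list(range(0, horizon, period_a))
        second = []
        t = offset
        while t < horizon:
            second.append(t)
            t += period_b
        return first, second

    # Example: disjoint_time_points(10, 15, 60) -> ([0, 10, ..., 50], [2.5, 17.5, 32.5, 47.5])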
20110307897DYNAMICALLY LOADING GRAPH-BASED COMPUTATIONS - Processing data includes: receiving units of work that each include one or more work elements, and processing a first unit of work using a first compiled dataflow graph (12-15-2011
20110307894Redundant Multithreading Processor - A redundant multithreading processor is presented. In one embodiment, the processor performs execution of a thread and its duplicate thread in parallel and determines, when in a redundant multithreading mode, whether or not to synchronize an operation of the thread and an operation of the duplicate thread.12-15-2011
20120042316METHOD AND SYSTEM FOR CONTROLLING A SCHEDULING ORDER PER DAYPART CATEGORY IN A MUSIC SCHEDULING SYSTEM - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc) with respect to particular slots within the scheduling order. The broadcast day is divided into dayparts according to specific time slots. The dayparts are assigned with specific daypart categories wherein multimedia is scheduled. The scheduling order can be configured to include a slotted by daypart scheduling technique to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations.02-16-2012
20120042315METHOD AND SYSTEM FOR CONTROLLING A SCHEDULING ORDER PER CATEGORY IN A MUSIC SCHEDULING SYSTEM - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc) with respect to particular slots within the scheduling order. The scheduling order can be configured to include a forward order per category or a reverse order per category with respect to the playback of the multimedia content in order to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations. A reverse scheduling technique provides an ideal rotation of songs when a pre-programmed show interferes with a normal rotation. Any rotational compromises can be buried in off-peak audience listening hours of the programming day using the disclosed reverse scheduling technique.02-16-2012
20120042317APPLICATION PRE-LAUNCH TO REDUCE USER INTERFACE LATENCY - A device stores a plurality of applications and a list of associations for those applications. The applications are preferably stored within a secondary memory of the device, and once launched each application is loaded into RAM. Each application is preferably associated to one or more of the other applications. Preferably, no applications are launched when the device is powered on. A user selects an application, which is then launched by the device, thereby loading the application from the secondary memory to RAM. Whenever an application is determined to be associated with a currently active state application, and that associated application has yet to be loaded from secondary memory to RAM, the associated application is pre-launched such that the associated application is loaded into RAM, but is set to an inactive state.02-16-2012
20120210323DATA PROCESSING CONTROL METHOD AND COMPUTER SYSTEM - The rerunning load is reduced to lower the risk of exceeding a specified termination time after a job net ends abnormally. Even if the same data processed by jobs within a job net is replaced with split data of sub-jobs and some of the sub-jobs have been abnormally ended, the job net is continued. For each split data, a state and/or an execution server ID of each job are stored, and the progress of a job net is managed. Only split data whose state is not “normal” is to be processed by rerunning. Based on states of execution servers, on whether or not intermediate files transferred between jobs are shared among execution servers, and on whether or not an output file is deleted after ending the subsequent job, it is judged whether or not intermediate files can be referred to and from what job the rerun is to be performed.08-16-2012
20120047508Resource Tracking Method and Apparatus - The present invention is directed to a parallel processing infrastructure, which enables the robust design of task scheduler(s) and communication primitive(s). This is achieved, in one embodiment of the present invention, by decomposing the general problem of exploiting parallelism into three parts. First, an infrastructure is provided to track resources. Second, a method is offered by which to expose the tracking of the aforementioned resources to task scheduler(s) and communication primitive(s). Third, a method is established by which task scheduler(s) in turn may enable and/or disable communication primitive(s). In this manner, an improved parallel processing infrastructure is provided.02-23-2012
20120047507SELECTIVE CONSTANT COMPLEXITY DISMISSAL IN TASK SCHEDULING - Various embodiments for selective constant complexity dismissal in task scheduling of a plurality of tasks are provided. A strictly increasing function is implemented to generate a plurality of unique creation stamps, each of the plurality of unique creation stamps increasing over time pursuant to the strictly increasing function. A new task to be placed with the plurality of tasks is labeled with a new unique creation stamp of the plurality of unique creation stamps. Each dismissal rule in a list of dismissal rules holds a minimal valid creation (MVC) stamp, which is updated when a dismissal action for that dismissal rule is executed. The dismissal action acts to dismiss a selection of tasks over time due to continuous dispatch.02-23-2012
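The creation-stamp mechanism above supports dismissal without scanning the task list: a dismissal only advances the rule's MVC stamp, and each task is checked against that stamp once, at dispatch time. The sketch below uses a per-category rule as a stand-in for the list of dismissal rules; all names are illustrative.

    from collections import deque
    from itertools import count

    class DismissalScheduler:
        def __init__(self):
            self._stamps = count(1)     # strictly increasing creation stamps
            self._queue = deque()       # (stamp, category, task) in arrival order
            self._mvc = {}              # category -> minimal valid creation (MVC) stamp

        def submit(self, category, task):
            self._queue.append((next(self._stamps), category, task))   # label the new task

        def dismiss(self, category):
            # Executing the dismissal action only updates the MVC stamp; no task is scanned.
            self._mvc[category] = next(self._stamps)

        def dispatch(self):
            while self._queue:
                stamp, category, task = self._queue.popleft()
                if stamp >= self._mvc.get(category, 0):
                    return task         # still valid
                # stamp predates the MVC stamp: dismissed in O(1) during continuous dispatch
            return None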
20120005682HOLISTIC TASK SCHEDULING FOR DISTRIBUTED COMPUTING - Embodiments of the present invention provide a method, system and computer program product for holistic task scheduling in a distributed computing environment. In an embodiment of the invention, a method for holistic task scheduling in a distributed computing environment is provided. The method includes selecting a first task for a first job and a second task for a different, second job, both jobs being scheduled for processing within a node a distributed computing environment by a task scheduler executing in memory by at least one processor of a computer. The method also can include comparing an estimated time to complete the first and second jobs. Finally, the first task can be scheduled for processing in the node when the estimated time to complete the second job exceeds the estimated time to complete the first job. Otherwise the second task can be scheduled for processing in the node when the estimated time to complete the first job exceeds the estimated time to complete the second job.01-05-2012
20120005681ASSERTIONS-BASED OPTIMIZATIONS OF HARDWARE DESCRIPTION LANGUAGE COMPILATIONS - Methods and systems for assertion-based simulations of hardware description language are provided. A method may include reading hardware description models of one or more hardware circuits. The hardware description language models may be transformed into a program of instructions configured to, when executed by a processor: (a) assume assertions regarding the hardware description language models are true; (b) establish dependencies among processes of the program of instructions based on the assertions; and (c) dynamically schedule execution of the processes based on the established dependencies.01-05-2012
20110167424INTEGRATED DIFFERENTIATED OPERATION THROTTLING - A method and system for throttling a plurality of operations of a plurality of applications that share a plurality of resources. A difference between observed and predicted workloads is computed. If the difference does not exceed a threshold, a multi-strategy finder operates in normal mode and applies a recursive greedy pruning process with a look-back and look-forward optimization to select actions for a final schedule of actions that improve the utility of a data storage system. If the difference exceeds the threshold, the multi-strategy finder operates in unexpected mode and applies a defensive action selection process to select actions for the final schedule. The selected actions are performed according to the final schedule and include throttling of a CPU, network, and/or storage.07-07-2011
20110167426SMART SCHEDULER - A smart scheduler is provided to prepare a machine for a job, wherein the job has specific requirements, i.e., dimensions. One or more config jobs are identified to configure the machine to meet the dimensions of the job. Information concerning the machine's original configuration and groupings of config jobs that change the machine's configuration are cached in a central storage. The smart scheduler uses information in the central storage to identify a suitable machine and one or more config jobs to configure the machine to meet the dimensions of a job. The smart scheduler schedules a run for the config jobs on the machine.07-07-2011
20110167423Intelligent Keying Center Workflow Optimization - A system and method for an intelligent keying center workflow optimization is disclosed. In accordance with one embodiment of the present disclosure, a method comprises receiving a plurality of work units and determining one or more item attributes associated with each of the work units. The method also includes selecting one of the plurality of work units to process. The method further includes determining one or more agent attributes associated with each of a plurality of agents. Additionally, the method includes selecting, with a workflow manager, an agent from the plurality of agents to process the selected work unit, based at least in part on the determined item attributes associated with each of the received work units and the determined one or more agent attributes. The method also includes transmitting the selected work unit to the selected agent.07-07-2011
20110167425INSTRUMENT-BASED DISTRIBUTED COMPUTING SYSTEMS - An instrument-based distributed computing system is disclosed that accelerates the measurement, analysis, verification and validation of data in a distributed computing environment. A large unit of computing work can be performed in a distributed fashion using the instrument-based distributed system. The instrument-based distributed system may include a client that creates a job. The job may include one or more tasks. The client may distribute a portion of the job to one or more remote workers on a network. The client may reside in an instrument. One or more workers may also reside in instruments. The workers execute the received portion of the job and may return execution results to the client. As such, the present invention allows the use of an instrument-based distributed system on a network to conduct the job and facilitates decreasing the time for executing the job.07-07-2011
20120017217MULTI-CORE PROCESSING SYSTEM AND COMPUTER READABLE RECORDING MEDIUM RECORDED THEREON A SCHEDULE MANAGEMENT PROGRAM - A multi-core processor system has a processing order manager which manages command blocks in a lock acquired state under exclusive control, an assigner which assigns a command block managed by the processing order manager to one of the processor cores, an exclusion manager which manages command blocks in a lock acquisition waiting state under the exclusive control, and a transfer controller which, when the command block in the lock acquisition waiting state managed by the exclusion manager gets into the lock acquired state, releases the command block from the exclusion manager, and registers the command block in the processing order manager, thereby efficiently processing tasks.01-19-2012
20120023497ELECTRONIC DEVICE WITH NETWORK ACCESS FUNCTION - An electronic device with network access function includes an input unit, a storage unit, a wireless network unit and a processing unit. The processing unit includes a scheduling module, a determining module, an accessing module and a downloading module. The scheduling module is configured to receive input from a user and schedule online tasks. The determining module is configured to determine when it is time to perform a scheduled task. The accessing module is configured to navigate to the location of the desired information according to the user input when it is time for the scheduled task, and the downloading module is configured to download the desired information according to the user input, and store the desired information in the storage unit.01-26-2012
20120023498LOCAL MESSAGING IN A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for queuing tasks in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager performs a task enqueue operation for the task. The task enqueue operation includes adding the received task to an associated queue of the scheduling hierarchy, where the queue is associated with a data flow of the received task. The queue has a corresponding scheduler level M, where M is a positive integer less than or equal to N. Starting at the queue and iteratively repeating at each scheduling level until reaching the root scheduler, each node in the scheduling hierarchy maintains an actual count of tasks corresponding to the node. Each node communicates a capped task count to a corresponding parent scheduler at a relative next scheduler level.01-26-2012
20120060162SYSTEMS AND METHODS FOR PROVIDING A SENIOR LEADER APPROVAL PROCESS - Systems and methods of managing tasks within a customer relationship management system. A user with appropriate permissions who is assigned a task can create subtasks subordinate to the assigned task in order to delegate responsibility for completing the task. An owner of a task can seek input from other users by creating an approval route. A user interface is provided to display tasks assigned to a user in an approval route, and to allow a user to provide feedback on tasks assigned to them without having to sort through irrelevant information.03-08-2012
20120060161UI FRAMEWORK DISPLAY SYSTEM AND METHOD - A technique of efficiently improving the processing speed and response time of a user interface (UI) framework in a multi-core environment is provided. According to the technique, it is possible to improve both the throughput and response time of a UI by causing a plurality of workers to process a frame display command.03-08-2012
20090007120SYSTEM AND METHOD TO OPTIMIZE OS SCHEDULING DECISIONS FOR POWER SAVINGS BASED ON TEMPORAL CHARACTERISTICS OF THE SCHEDULED ENTITY AND SYSTEM WORKLOAD - In some embodiments, the invention involves a system and method to enhance an operating system's ability to schedule ready threads, specifically to select a logical processor on which to run the ready thread, based on platform policy. Platform policy may be performance-centric, power-centric, or a balance of the two. Embodiments of the present invention use temporal characteristics of the system utilization, or workload, and/or temporal characteristics of the ready thread in choosing a logical processor. Other embodiments are described and claimed.01-01-2009
20120060160COMPONENT-SPECIFIC DISCLAIMABLE LOCKS - Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource.03-08-2012
20120159496Performing Variation-Aware Profiling And Dynamic Core Allocation For A Many-Core Processor - In one embodiment, the present invention includes a processor with multiple cores each having a self-test circuit to determine a frequency profile and a leakage power profile of the corresponding core. In turn, a scheduler is coupled to receive the frequency profiles and the leakage power profiles and to schedule an application on at least some of the cores based on the frequency profiles and the leakage power profiles. Other embodiments are described and claimed.06-21-2012
20120254878MECHANISM FOR OUTSOURCING CONTEXT-AWARE APPLICATION-RELATED FUNCTIONALITIES TO A SENSOR HUB - A mechanism is described for outsourcing context-aware application-related activities to a sensor hub. A method of embodiments of the invention includes outsourcing a plurality of functionalities from an application processor to a sensor hub processor of a sensor hub by configuring the sensor hub processor, and performing one or more context-aware applications using one or more sensors coupled to the sensor hub processor.10-04-2012
20120159494WORKFLOWS AND PRESETS FOR WORKFLOWS - A system may generate a workflow identifier, create a workflow that includes a first work unit, assign the workflow identifier to the workflow, update the workflow by adding a second work unit to the workflow, receive a work order to process the workflow, decompose the workflow into constituent work units in response to the work order, instantiate tasks that correspond to the constituent work units, and execute a work unit process for each of the tasks.06-21-2012
20120159493ADVANCED SEQUENCING GAP MANAGEMENT - Systems and methods to provide advance sequencing gap management. In example embodiments, a need to generate a proxy gap order for a sequence is detected. Using one or more processors, the proxy gap order is generated based on the detected need. The generated proxy gap order is then inserted into a particular location of the sequence based on the detected need.06-21-2012
20120159497ADAPTIVE PROCESS SCHEDULING METHOD FOR EMBEDDED LINUX - Provided is an adaptive process scheduling method for embedded Linux. The adaptive process scheduling method includes calculating a central processing unit (CPU) occupancy time of each of one or more processes, determining whether or not it is necessary to perform adaptive process scheduling, calculating a predetermined weight to be applied to the CPU occupancy time of each process when it is determined that it is necessary to perform adaptive process scheduling, and applying the predetermined weight and updating the CPU occupancy time of each process when it is determined that it is necessary to perform adaptive process scheduling. Accordingly, the adaptive process scheduling method can improve performance by omitting unnecessary context switches compared to the related art and can dynamically cope with an abrupt increase in the number of processes.06-21-2012
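A minimal sketch of the flavor of bookkeeping this abstract describes is shown below: per-process CPU occupancy is accumulated, and when adaptation is judged necessary a weight is applied to the occupancy update. The trigger condition (an abrupt jump in process count) and the weight value are assumptions made for illustration only.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: weight the CPU-occupancy figure used for scheduling when adaptation
    // is triggered. The trigger condition and the weight are assumptions, not taken from the patent.
    public class AdaptiveOccupancyTracker {
        private final Map<Integer, Double> occupancyMs = new HashMap<>(); // pid -> accounted CPU time
        private int lastProcessCount = 0;

        // Decide whether adaptive scheduling is needed; here: process count jumped by more than 50%.
        boolean adaptationNeeded(int currentProcessCount) {
            boolean needed = lastProcessCount > 0
                    && currentProcessCount > lastProcessCount * 3 / 2;
            lastProcessCount = currentProcessCount;
            return needed;
        }

        // Update one process's accounted occupancy, applying a damping weight when adapting.
        void update(int pid, double measuredMs, boolean adapt) {
            double weight = adapt ? 0.5 : 1.0;          // assumed weight; damps stale history
            occupancyMs.merge(pid, measuredMs * weight, Double::sum);
        }

        public static void main(String[] args) {
            AdaptiveOccupancyTracker t = new AdaptiveOccupancyTracker();
            boolean adapt = t.adaptationNeeded(40);     // first sample, no adaptation
            t.update(1001, 12.0, adapt);
            adapt = t.adaptationNeeded(90);             // abrupt increase -> adapt
            t.update(1001, 12.0, adapt);
            System.out.println(t.occupancyMs);
        }
    }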
20120159495NON-BLOCKING WAIT-FREE DATA-PARALLEL SCHEDULER - Methods, systems, and mediums are described for scheduling data parallel tasks onto multiple thread execution units of processing system. Embodiments of a lock-free queue structure and methods of operation are described to implement a method for scheduling fine-grained data-parallel tasks for execution in a computing system. The work of one of a plurality of worker threads is wait-free with respect to the other worker threads. Each node of the queue holds a reference to a task that may be concurrently performed by multiple thread execution units, but each on a different subset of data. Various embodiments relate to software-based scheduling of data-parallel tasks on a multi-threaded computing platform that does not perform such scheduling in hardware. Other embodiments are also described and claimed.06-21-2012
20120072916FUTURE SYSTEM THAT CAN PARTICIPATE IN SYSTEMS MANAGEMENT ACTIVITIES UNTIL AN ACTUAL SYSTEM IS ON-LINE - Hardware configuration management is provided. A hardware configuration manager includes a proposed new hardware configuration item for an existing production environment and its hardware configuration management software. A detailed setup of the management of the proposed hardware configuration item is completed before the proposed hardware configuration item is available. The detailed setup includes at least configuring policies of the proposed hardware configuration item. The hardware configuration manager also comprises a device for preventing scheduled tasks from running until a predefined period has elapsed following activation of a new hardware configuration item that has the completed detailed setup and to which the proposed hardware configuration item is mapped.03-22-2012
20120110584SYSTEM AND METHOD OF ACTIVE RISK MANAGEMENT TO REDUCE JOB DE-SCHEDULING PROBABILITY IN COMPUTER CLUSTERS - Systems and methods are provided for generating backup tasks for a plurality of tasks scheduled to run in a computer cluster. Each scheduled task is associated with a target probability for execution, and is executable by a first cluster element and a second cluster element. The system classifies the scheduled tasks into groups based on resource requirements of each task. The system determines the number of backup tasks to be generated. The number of backup tasks is determined in a manner necessary to guarantee that the scheduled tasks satisfy the target probability for execution. The backup tasks are desirably identical for a given group. And each backup task can replace any scheduled task in the given group.05-03-2012
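The abstract above does not give a sizing formula, but one simple illustrative model is: if each placement of a task independently runs to completion with probability p, then k identical backups give an overall execution probability of 1 - (1 - p)^(k + 1), and the smallest k meeting the target can be found directly. The Java sketch below shows only that arithmetic; the independence model and all names are hypothetical, not taken from the patent application.

    // Illustrative arithmetic only: one simple way to size backups so a task group meets a
    // target probability of execution under an assumed independent-failure model.
    public class BackupSizing {

        // Smallest number of backups k such that 1 - (1 - p)^(k + 1) >= target,
        // where p is the probability that any single placement runs to completion.
        static int backupsNeeded(double p, double target) {
            if (p <= 0.0 || p >= 1.0) throw new IllegalArgumentException("p must be in (0,1)");
            if (target >= 1.0) throw new IllegalArgumentException("target must be < 1");
            double failAll = 1.0 - p;      // probability that the primary placement alone fails
            int backups = 0;
            while (1.0 - failAll < target) {
                failAll *= (1.0 - p);      // add one more identical backup
                backups++;
            }
            return backups;
        }

        public static void main(String[] args) {
            // Each placement completes with probability 0.9; require a 99.9% overall guarantee.
            System.out.println(backupsNeeded(0.9, 0.999) + " backup(s) per task");
        }
    }

For example, with p = 0.9 and a 99.9% target, two backups per task suffice under this model.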
20080320479TELECOM ADAPTER LAYER SYSTEM, AND METHOD AND APPARATUS FOR ACQUIRING NETWORK ELEMENT INFORMATION - A Telecom Adapter Layer (TAL) system includes a management unit and an execution unit connected via a distributed bus. In order to acquire network element (NE) information, an external service module sends a Get Info request to the execution unit according to the reference of the execution unit, and the execution unit acquires information from an NE according to the request and returns the NE information acquired from the NE to the external service module. The execution unit can be deployed in a device other than the service module or the management unit. The TAL system may be expanded to include more than one management unit and/or execution unit. By acquiring NE information from the execution unit, the TAL system is capable of performing NE management across a firewall.12-25-2008
20120254873COMMAND PATHS, APPARATUSES AND METHODS FOR PROVIDING A COMMAND TO A DATA BLOCK - Command paths, apparatuses, and methods for providing a command to a data block are described. In an example command path, a command receiver is configured to receive a command and a command buffer is coupled to the command receiver and configured to receive the command and provide a buffered command. A command block is coupled to the command buffer to receive the buffered command. The command block is configured to provide the buffered command responsive to a clock signal and is further configured to add a delay to the buffered command, the delay based at least in part on a shift count. A command tree is coupled to the command block to receive the buffered command and configured to distribute the buffered command to a data block.10-04-2012
20120254876SYSTEMS AND METHODS FOR COORDINATING COMPUTING FUNCTIONS TO ACCOMPLISH A TASK - Systems and Methods are provided for coordinating computing functions to accomplish a task. The system includes a plurality of standardized executable application modules (SEAMs), each of which is configured to execute on a processor to provide a unique function and to generate an event associated with its unique function. The system includes a configuration file that comprises a dynamic data store (DDS) and a static data store (SDS). The DDS includes an event queue and one or more response queues. The SDS includes a persistent software object that is configured to map a specific event from the event queue to a predefined response record and to indicate a response queue into which the predefined response record is to be placed. The system further includes a workflow service module, the work flow service module being configured to direct communication between the SDS, the DDS and each of the plurality of SEAMs.10-04-2012
20120254877TRANSFERRING ARCHITECTED STATE BETWEEN CORES - A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads.10-04-2012
20110078690Opcode-Specified Predicatable Warp Post-Synchronization - One embodiment of the present invention sets forth a technique for performing a method for synchronizing divergent executing threads. The method includes receiving a plurality of instructions that includes at least one set-synchronization instruction and at least one instruction that includes a synchronization command, and determining an active mask that indicates which threads in a plurality of threads are active and which threads in the plurality of threads are disabled. For each instruction included in the plurality of instructions, the instruction is transmitted to each of the active threads included in the plurality of threads. If the instruction is a set-synchronization instruction, then a synchronization token, the active mask and the synchronization point are each pushed onto a stack. Or, if the instruction is a predicated instruction that includes a synchronization command, then each active thread that executes the predicated instruction is monitored to determine when the active mask has been updated to indicate that each active thread, after executing the predicated instruction, has been disabled.03-31-2011
20120079488Execute at commit state update instructions, apparatus, methods, and systems - An apparatus including an execution logic that includes circuitry to execute instructions, and an instruction execution scheduler logic coupled with the execution logic. The instruction execution scheduler logic is to receive an execute at commit state update instruction. The instruction execution scheduler logic includes at commit state update logic that is to wait to schedule the execute at commit state update instruction for execution until the execute at commit state update instruction is a next instruction to commit. Other apparatus, methods, and systems are also disclosed.03-29-2012
20120079487Subscriber-Based Ticking Model for Platforms - A central manager receives tick subscription requests from subscribers, including a requested period and an allowable variance. The manager selects a group period for a group of requests, based on requested period(s) and allowable variance(s). In some cases, the group period is not a divisor of every requested period but nonetheless provides at least one tick within the allowable variance of each requested period. Ticks may be issued by invoking a callback function. Ticks may be issued in a priority order based on the subscriber's category, e.g., whether it is a user-interface process. An application platform may send a tick subscription request on behalf of an application process, e.g., a mobile device platform may submit subscription requests for processes which execute on a mobile computing device. Tick subscription requests may be sent during application execution, e.g., while the application's user interface is being built or modified.03-29-2012
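A small sketch of the group-period selection this abstract describes: given each subscriber's requested period and allowable variance, pick one group period such that some multiple of it lands inside every subscriber's tolerance window. The brute-force scan below (largest workable period wins) is an assumed strategy chosen for clarity, not the patent's algorithm, and all names are hypothetical.

    import java.util.List;

    // Illustrative sketch: pick a single group tick period that provides at least one tick
    // within each subscriber's allowable variance of its requested period.
    public class GroupPeriodSelector {

        record Subscription(long requestedMs, long allowedVarianceMs) {}

        // True if some multiple of groupMs lands inside [requested - variance, requested + variance].
        static boolean satisfies(long groupMs, Subscription s) {
            long lo = Math.max(1, s.requestedMs() - s.allowedVarianceMs());
            long hi = s.requestedMs() + s.allowedVarianceMs();
            long firstMultiple = ((lo + groupMs - 1) / groupMs) * groupMs;  // smallest multiple >= lo
            return firstMultiple <= hi;
        }

        // Largest period (fewest wakeups) that still satisfies every subscription.
        static long selectGroupPeriod(List<Subscription> subs) {
            long max = 1;
            for (Subscription s : subs) max = Math.max(max, s.requestedMs());
            for (long g = max; g >= 1; g--) {
                boolean ok = true;
                for (Subscription s : subs) {
                    if (!satisfies(g, s)) { ok = false; break; }
                }
                if (ok) return g;
            }
            return 1;
        }

        public static void main(String[] args) {
            List<Subscription> subs = List.of(
                    new Subscription(100, 10),   // 100 ms +/- 10 ms
                    new Subscription(150, 10));  // 150 ms +/- 10 ms
            // 50 divides both requests exactly, but the scan can return a larger period
            // that still lands inside both variance windows.
            System.out.println("group period: " + selectGroupPeriod(subs) + " ms");
        }
    }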
20110107338Selecting isolation level for an operation based on manipulated objects - Concurrency control overhead in transactional memory and main memory databases is reduced by automatically selecting the appropriate isolation level for each operation based on the objects accessed by the operation.05-05-2011
20110099551Opportunistically Scheduling and Adjusting Time Slices - Computerized methods, computer systems, and computer-readable media for governing how virtual processors are scheduled to particular logical processors are provided. A scheduler is employed to balance a load imposed by virtual machines, each having a plurality of virtual processors, across various logical processors (comprising a physical machine) that are running threads in parallel. The threads are issued by the virtual processors and often cause spin waits that inefficiently consume capacity of the logical processors that are executing the threads. Upon detecting a spin-wait state of the logical processor(s), the scheduler will opportunistically grant time-slice extensions to virtual processors that are running a critical section of code, thus, mitigating performance loss on the front end. Also, the scheduler will mitigate performance loss on the back end by opportunistically de-scheduling then rescheduling a virtual machine in a spin-wait state to render the logical processor(s) available for other work in the interim.04-28-2011
20110099550ANALYSIS AND VISUALIZATION OF CONCURRENT THREAD EXECUTION ON PROCESSOR CORES. - An analysis and visualization is used to depict how a concurrent application executes threads on processor cores over time. With the analysis and visualization, a developer can readily identify thread migrations and thread affinity bugs that can degrade performance of the concurrent application. An example receives information regarding processes or threads running during a selected period of time. The information is processed to determine which processor cores are executing which threads over the selected period of time. The information is analyzed and executing threads for each core are depicted as channel segments over time, and can be presented in a graphical display. The visualization can help a developer identify areas of code that can be modified to avoid thread migration or to reduce thread affinity bugs to improve processor performance of concurrent applications.04-28-2011
20120124587THREAD SCHEDULING ON MULTIPROCESSOR SYSTEMS - A thread scheduler may be used in a chip multiprocessor or symmetric multiprocessor system to schedule threads to processors. The scheduler may determine the bandwidth utilization of the two threads in combination and whether that utilization exceeds the threshold value. If so, the threads may be scheduled on different processor clusters that do not have the same paths between the common memory and the processors. If not, then the threads may be allocated on the same processor cluster that shares cache among processors.05-17-2012
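The placement rule sketched in this abstract reduces to a simple comparison, illustrated below: co-locate the two threads on one cluster unless their combined bandwidth demand would exceed the cluster's threshold, in which case place them on clusters with separate paths to memory. The units, threshold value and names are assumptions for illustration.

    // Illustrative sketch of the placement rule: co-locate two threads on one processor cluster
    // (shared cache) unless their combined bandwidth demand would saturate that cluster's path
    // to memory.
    public class BandwidthAwarePlacement {

        enum Placement { SAME_CLUSTER, DIFFERENT_CLUSTERS }

        static Placement place(double threadAGBps, double threadBGBps, double clusterThresholdGBps) {
            double combined = threadAGBps + threadBGBps;
            return combined > clusterThresholdGBps ? Placement.DIFFERENT_CLUSTERS
                                                   : Placement.SAME_CLUSTER;
        }

        public static void main(String[] args) {
            // Two streaming threads on a cluster rated for roughly 12 GB/s of sustained bandwidth.
            System.out.println(place(7.5, 6.0, 12.0));  // DIFFERENT_CLUSTERS
            System.out.println(place(2.0, 3.0, 12.0));  // SAME_CLUSTER (keep shared cache warm)
        }
    }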
20120124586SCHEDULING SCHEME FOR LOAD/STORE OPERATIONS - A method and apparatus are provided to control the order of execution of load and store operations. Also provided is a computer readable storage device encoded with data for adapting a manufacturing facility to create the apparatus. One embodiment of the method includes determining whether a first group, comprising at least one or more instructions, is to be selected from a scheduling queue of a processor for execution using either a first execution mode or a second execution mode. The method also includes, responsive to determining that the first group is to be selected for execution using the second execution mode, preventing selection of the first group until a second group, comprising at least one or more instructions, that entered the scheduling queue prior to the first group is selected for execution.05-17-2012
20090133025METHODS AND APPARATUS FOR BANDWIDTH EFFICIENT TRANSMISSION OF USAGE INFORMATION FROM A POOL OF TERMINALS IN A DATA NETWORK - Methods and apparatus for bandwidth efficient transmission of usage information from a pool of terminals in a data network. A device includes transceiver logic to receive usage tracking and reporting parameters, wherein the usage tracking parameters identify events to be tracked and the reporting parameters identify reporting criteria for each event, scheduling logic to track the events based on the usage tracking parameters to produce a tracking log, reporting logic to process the tracking log based on the reporting parameters to produce a reporting log, and the transceiver logic to transmit the reporting log. A server includes processing logic to generate usage tracking parameters that identify events to be tracked and reporting parameters that identify reporting criteria for each event and a transceiver to transmit the usage tracking parameters and the reporting parameters to one or more terminals.05-21-2009
20090133024Scheduling a Workload Based on Workload-Related Variables and Triggering Values - A mechanism is provided for scheduling a workload on a computer. The mechanism receives, in the computer, one or more workload-related variables. The mechanism further receives, in the computer, one or more trigger values for at least one of the one or more workload-related variables. Moreover, the mechanism determines, from the workload-related variables and their triggering values, one or more conditions under which one or more tasks are to be performed on the computer. In addition, the mechanism acquires a status value of at least one of the one or more workload-related variables at regular intervals and performs a task when a status value of a workload-related variable attains the triggering value for the task.05-21-2009
20090133023High Performance Queue Implementations in Multiprocessor Systems - Systems and methods provide a single reader single writer (SRSW) queue structure having entries that can be concurrently accessed in an atomic manner with a single memory access. The SRSW queues may be combined to create more complicated queues, including multiple reader single writer (MRSW), single reader multiple writer (SRMW), and multiple reader multiple writer (MRMW) queues.05-21-2009
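The building block named in this abstract, a single reader single writer (SRSW) queue, can be sketched as a bounded ring buffer in which the producer updates only the tail index and the consumer updates only the head index, so neither side needs a lock. The Java below is illustrative, not the patent's implementation; the combinations into MRSW, SRMW and MRMW queues mentioned in the abstract are omitted.

    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative single-reader/single-writer (SRSW) queue: exactly one thread calls offer()
    // and exactly one other thread calls poll(). Each side updates only its own index, so no
    // lock is needed. The bounded ring-buffer layout is an assumption of this sketch.
    public class SrswQueue<T> {
        private final Object[] slots;
        private final AtomicLong head = new AtomicLong();   // next slot to read  (consumer-owned)
        private final AtomicLong tail = new AtomicLong();   // next slot to write (producer-owned)

        public SrswQueue(int capacity) { slots = new Object[capacity]; }

        // Producer only. Returns false when the queue is full.
        public boolean offer(T value) {
            long t = tail.get();
            if (t - head.get() == slots.length) return false;
            slots[(int) (t % slots.length)] = value;
            tail.set(t + 1);                    // volatile write publishes the element to the consumer
            return true;
        }

        // Consumer only. Returns null when the queue is empty.
        @SuppressWarnings("unchecked")
        public T poll() {
            long h = head.get();
            if (h == tail.get()) return null;
            int i = (int) (h % slots.length);
            T value = (T) slots[i];
            slots[i] = null;                    // let the element be garbage-collected
            head.set(h + 1);                    // volatile write frees the slot for the producer
            return value;
        }

        public static void main(String[] args) {
            SrswQueue<String> q = new SrswQueue<>(4);
            q.offer("a");
            q.offer("b");
            System.out.println(q.poll() + q.poll());   // prints "ab"
        }
    }

In practice the volatile index writes are often replaced with cheaper ordered stores; the plain volatile form is used here for clarity.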
20090133021METHODS AND SYSTEMS FOR EFFICIENT USE AND MAPPING OF DISTRIBUTED SHARED RESOURCES - Methods and systems for coordinating sharing of resources among a plurality of tasks operating in parallel in a document presentation environment while host communications and task processing may be performed asynchronously with respect to one another. A mapped resource manager manages activation (addition) and deactivation (deletion) of resources shared by a plurality of tasks operating in parallel to assure that each task may continue processing with a consistent set of files as resources despite changes made by other tasks or by operator intervention.05-21-2009
20120124588Generating Hardware Accelerators and Processor Offloads - System and method for generating hardware accelerators and processor offloads. System for hardware acceleration. System and method for implementing an asynchronous offload. Method of automatically creating a hardware accelerator. Computerized method for automatically creating a test harness for a hardware accelerator from a software program. System and method for interconnecting hardware accelerators and processors. System and method for interconnecting a processor and a hardware accelerator. Computer implemented method of generating a hardware circuit logic block design for a hardware accelerator automatically from software. Computer program and computer program product stored on tangible media implementing the methods and procedures of the invention.05-17-2012
20120124585Increasing Parallel Program Performance for Irregular Memory Access Problems with Virtual Data Partitioning and Hierarchical Collectives - A method for increasing performance of an operation on a distributed memory machine is provided. Asynchronous parallel steps in the operation are transformed into synchronous parallel steps. The synchronous parallel steps of the operation are rearranged to generate an altered operation that schedules memory accesses for increasing locality of reference. The altered operation that schedules memory accesses for increasing locality of reference is mapped onto the distributed memory machine. Then, the altered operation is executed on the distributed memory machine to simulate local memory accesses with virtual threads to check cache performance within each node of the distributed memory machine.05-17-2012
20120124584Event-Based Orchestration in Distributed Order Orchestration System - A distributed order orchestration system is provided that includes an event manager configured to generate and publish a set of events based on a process state and metadata stored in a database. A set of subscribers can consume the set of events, and each subscriber can execute a task based on the consumed event.05-17-2012
20120317577Pattern Matching Process Scheduler with Upstream Optimization - Processes in a message passing system may be launched when messages having data patterns match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue.12-13-2012
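As an illustration of the dispatch idea in this abstract, the sketch below parks a blocked process together with the message pattern its pending receive expects; when a matching message arrives, the process is promoted to the front of the runnable queue. The data structures and names are assumptions for illustration, not the patent's design.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Predicate;

    // Illustrative sketch: processes blocked on a receive are parked with the predicate their
    // awaited message must satisfy; a matching message promotes the process to the runnable queue.
    public class PatternMatchScheduler {

        static final class Process {
            final String name;
            Process(String name) { this.name = name; }
        }

        private final Deque<Process> runnable = new ArrayDeque<>();
        private final Map<Process, Predicate<String>> idle = new HashMap<>();

        void blockOn(Process p, Predicate<String> awaitedPattern) {
            idle.put(p, awaitedPattern);                 // process leaves the runnable set
        }

        void deliver(String message) {
            idle.entrySet().removeIf(e -> {
                if (e.getValue().test(message)) {
                    runnable.addFirst(e.getKey());       // raise the matched process to the top
                    return true;
                }
                return false;
            });
        }

        Process next() { return runnable.pollFirst(); }

        public static void main(String[] args) {
            PatternMatchScheduler s = new PatternMatchScheduler();
            Process worker = new Process("worker-1");
            s.blockOn(worker, msg -> msg.startsWith("job:"));
            s.deliver("heartbeat");                      // no match, worker stays idle
            s.deliver("job:42");                         // match, worker becomes runnable
            System.out.println(s.next().name);           // prints worker-1
        }
    }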
20110126203Efficient Input/Output-Aware Multi-Processor Virtual Machine Scheduling - Computerized methods, computer systems, and computer-readable media for governing how virtual processors are scheduled to particular logical processors are provided. A scheduler is employed to balance a CPU-intensive workload imposed by virtual machines, each having a plurality of virtual processors supported by a root partition, across various logical processors that are running threads and input/output (I/O) operations in parallel. Upon measuring a frequency of the I/O operations performed by a logical processor that is mapped to the root partition, a hardware-interrupt rate is calculated as a function of the frequency. The hardware-interrupt rate is compared against a predetermined threshold rate to determine a level of an I/O-intensive workload being presently carried out by the logical processor. When the hardware-interrupt rate surpasses the predetermined threshold rate, the scheduler refrains from allocating time slices on the logical processor to the virtual machines.05-26-2011
20120317576method for operating an arithmetic unit - A method for operating an arithmetic unit having at least two computation cores. One signature register which has multiple inputs is assigned in each case to at least two of the at least two computation cores. At least one task is executed by the at least two of the at least two computation cores, an algorithm is computed in each task, results computed by each computation core are written into the assigned signature register, and the results written into the signature registers are compared.12-13-2012
20120317575APPORTIONING SUMMARIZED METRICS BASED ON UNSUMMARIZED METRICS IN A COMPUTING SYSTEM - A computer program product includes a computer readable storage medium containing computer code that, when executed by a computer, implements a method including receiving, by a memory device of the computing system, a log file, the log file comprising unsummarized metrics, the unsummarized metrics being related to a plurality of transactions performed by a program in the computing system, and a summarized metric, the summarized metric being related to the program, wherein the summarized metric comprises accumulated data from the plurality of transactions; selecting an unsummarized metric that reflects a distribution of the summarized metric among the plurality of transactions by a processing device of the computing system; and determining an amount of the summarized metric that belongs to a transaction of the plurality of transactions based on the selected unsummarized metric by the processing device of the computing system.12-13-2012
20120222034ASYNCHRONOUS CHECKPOINT ACQUISITION AND RECOVERY FROM THE CHECKPOINT IN PARALLEL COMPUTER CALCULATION IN ITERATION METHOD - A method and system to acquire checkpoints in making iteration-method computer calculations in parallel and to effectively utilize the acquired data for recovery. At the time of acquiring a checkpoint in a parallel calculation that repeats an iteration method, each node independently acquires the checkpoint in parallel with the calculation without stopping the calculation. Thereby, it is possible to perform both the calculation and the checkpoint acquisition in parallel. In the case where the calculation does not impose an I/O bottleneck, checkpoint acquisition time is overlapped, and execution time is reduced. In this method, checkpoint data including values at different points of time during the acquisition process is acquired. Because use is limited to iteration-method convergence calculations, in which the convergence destination does not depend on the initial value, a mixture of values from different points of time in the checkpoint data is acceptable.08-30-2012
20120222033OFFLOADING WORK UNITS FROM ONE TYPE OF PROCESSOR TO ANOTHER TYPE OF PROCESSOR - A work unit (e.g., a load module) to be executed on one processor may be eligible to be offloaded and executed on another processor that is heterogeneous from the one processor. The other processor is heterogeneous in that it has a different computing architecture and/or different instruction set from the one processor. A determination is made as to whether the work unit is eligible for offloading. The determination is based, for instance, on the particular type of instructions (e.g., particular type of service call and/or program call instructions) included in the work unit and whether those types of instructions are supported by the other processor. If the instructions of the work unit are supported by the other processor, then the work unit is eligible for offloading.08-30-2012
20120167100MANUAL SUSPEND AND RESUME FOR NON-VOLATILE MEMORY - An external controller has greater control over control circuitry on a memory die in a non-volatile storage system. The external controller can issue a manual suspend command on a communication path which is constantly monitored by the control circuitry. In response, the control circuitry suspends a task immediately, with essentially no delay, or at a next acceptable point in the task. The external controller similarly has the ability to issue a manual resume command, which can be provided on the communication path when that path has a ready status. The control circuitry can also automatically suspend and resume a task. The external controller can cause a task to be suspended by issuing an illegal read command. The external controller can cause a suspended program task to be aborted by issuing a new program command.06-28-2012
20120131587HARDWARE DEVICE FOR PROCESSING THE TASKS OF AN ALGORITHM IN PARALLEL - A hardware device for concurrently processing a fixed set of predetermined tasks associated with an algorithm which includes a number of processes, some of the processes being dependent on binary decisions, includes a plurality of task units for processing data, making decisions and/or processing data and making decisions, including source task units and destination task units. A task interconnection logic means interconnects the task units for communicating actions from a source task unit to a destination task unit. Each of the task units includes a processor for executing only a particular single task of the fixed set of predetermined tasks associated with the algorithm in response to a received request action, and a status manager for handling the actions from the source task units and building the actions to be sent to the destination task units.05-24-2012
20120131586APPARATUS AND METHOD FOR CONTROLLING RESPONSE TIME OF APPLICATION PROGRAM - The present invention relates to a multi-control method for management of the response time of a data-centric real-time application program. The present invention integrally models a first system for controlling the response time of CPU operations and a second system for controlling the response time of accessing a storage medium using a MIMO structure, and simultaneously controls the response time of the CPU operation and the response time of accessing the storage medium through the configuration produced by the integrated modeling. According to exemplary embodiments of the present invention, it is possible to control the response time more efficiently than with an existing feedback control method.05-24-2012
20120131585Apparatuses And Methods For Processing Workitems In Taskflows - At least one example embodiment discloses a method of processing a workitem including a plurality of tasks. The method includes transmitting requests for completion to the plurality of tasks, respectively, receiving processed data from a first task of the plurality of tasks in response to the request, the processed data being marked as intended for a second task of the plurality of tasks, changing a counter value associated with the second task, each of the plurality of tasks associated with a counter value, transmitting the processed data to the second task, and determining a state of the workitem based on the counter values.05-24-2012
20120131584Devices and Methods for Optimizing Data-Parallel Processing in Multi-Core Computing Systems - According to an embodiment of a method of the invention, at least a portion of data to be processed is loaded to a buffer memory of capacity (B). The buffer memory is accessible to N processing units of a computing system. The processing task is divided into processing threads. An optimal number (n) of processing threads is determined by an optimizing unit of the computing system. The n processing threads are allocated to the processing task and executed by at least one of the N processing units. After processing by at least one of N processing units, the processed data is stored on a disk defined by disk sectors, each disk sector having storage capacity (S). The storage capacity (B) of the buffer memory is optimized to be a multiple X of sector storage capacity (S). The optimal number (n) is determined based, at least in part, on N, B and S. The system and method are implementable in a multithreaded, multi-processor computing system. The stored encrypted data may later be recalled and decrypted using the same system and method.05-24-2012
20120167102TAG-BASED DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD THEREOF - A data processing apparatus and a data processing method thereof are provided. The data processing apparatus comprises the buffers, the scheduler and the process nodes. The buffer stores the processed data and unprocessed data about the process nodes. The scheduler uses a tag to indicate which process and location the data is in, and puts the data into the process. The process node actively retrieves the data from the buffer according to the tag, and processes and stores the data in the buffer. By assigning the tag of the data, the data process flow can be established to form a data process pipeline.06-28-2012
20120167106THREAD SYNCHRONIZATION METHODS AND APPARATUS FOR MANAGED RUN-TIME ENVIRONMENTS - An example method disclosed herein comprises initiating a first optimistically balanced synchronization to acquire a lock of an object, the first optimistically balanced synchronization comprising a first optimistically balanced acquisition and a first optimistically balanced release to be performed on the lock by a same thread and at a same nesting level, releasing the lock after execution of program code covered by the lock if a stored state of the first optimistically balanced release indicates that the first optimistically balanced release is still valid, the stored state of the first optimistically balanced release being initialized prior to execution of the program code to indicate that the first optimistically balanced release is valid, and throwing an exception after execution of the program code covered by the lock if the stored state of the first optimistically balanced release indicates that the first optimistically balanced release is no longer valid.06-28-2012
20120167105DETERMINING THE PROCESSING ORDER OF A PLURALITY OF EVENTS - A method for operating a multi-threading computational system includes: identifying related events; allocating the related events to a first thread; allocating unrelated events to one or more second threads; wherein the events allocated to the first thread are executed in sequence and the events allocated to the one or more second threads are executed in parallel to execution of the first thread.06-28-2012
20120167103APPARATUS FOR PARALLEL PROCESSING CONTINUOUS PROCESSING TASK IN DISTRIBUTED DATA STREAM PROCESSING SYSTEM AND METHOD THEREOF - Disclosed are an apparatus and a method for parallel processing continuous processing tasks in a distributed data stream processing system. A system for processing a distributed data stream according to an exemplary embodiment of the present invention includes a control node configured to determine whether parallel processing of continuous processing tasks for an input data stream is required and, if the parallel processing is required, to instruct that the data stream be divided and that the continuous processing tasks for processing the data streams be allocated to a plurality of distributed processing nodes; and a plurality of distributed processing nodes configured, according to the instruction of the control node, to divide the input data streams, to allocate the divided data streams and the continuous processing tasks for processing them, respectively, and to combine the processing results.06-28-2012
20120216205ENERGY-AWARE JOB SCHEDULING FOR CLUSTER ENVIRONMENTS - A job scheduler can select a processor core operating frequency for a node in a cluster to perform a job based on energy usage and performance data. After a job request is received, an energy aware job scheduler accesses data that specifies energy usage and job performance metrics that correspond to the requested job and a plurality of processor core operating frequencies. A first of the plurality of processor core operating frequencies is selected that satisfies an energy usage criterion for performing the job based, at least in part, on the data that specifies energy usage and job performance metrics that correspond to the job. The job is assigned to be performed by a node in the cluster at the selected first of the plurality of processor core operating frequencies.08-23-2012
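A minimal sketch of the selection step this abstract describes appears below: given profiled (frequency, energy, runtime) points for the job type, choose the frequency that satisfies the energy usage criterion. The specific criterion used here, lowest energy whose runtime still meets a deadline, and all names are assumptions for illustration.

    import java.util.List;

    // Illustrative sketch: pick a core operating frequency for a job from profiled
    // (frequency, energy, runtime) points.
    public class EnergyAwareFrequencyChoice {

        record Profile(double ghz, double joules, double seconds) {}

        static Profile select(List<Profile> profiles, double deadlineSeconds) {
            Profile best = null;
            for (Profile p : profiles) {
                if (p.seconds() <= deadlineSeconds
                        && (best == null || p.joules() < best.joules())) {
                    best = p;
                }
            }
            if (best == null) throw new IllegalStateException("no frequency meets the deadline");
            return best;
        }

        public static void main(String[] args) {
            List<Profile> data = List.of(
                    new Profile(1.2, 180.0, 95.0),
                    new Profile(2.0, 210.0, 70.0),
                    new Profile(2.6, 260.0, 55.0));
            System.out.println("run at " + select(data, 80.0).ghz() + " GHz");  // 2.0 GHz
        }
    }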
20120216204CREATING A THREAD OF EXECUTION IN A COMPUTER PROCESSOR - Creating a thread of execution in a computer processor, including copying, by a hardware processor opcode called by a user-level process, with no operating system involvement, register contents from a parent hardware thread to a child hardware thread, the child hardware thread being in a wait state, and changing, by the hardware processor opcode, the child hardware thread from the wait state to an ephemeral run state.08-23-2012
20120216202Restarting Data Processing Systems - Techniques are disclosed that include a computer-implemented method including transmitting a message in response to a predetermined event through a process stage including at least first and second processes being executed as one or more tasks, the message instructing abortion of execution of the one or more tasks, and initiating abortion of execution of the one or more tasks by one or more of the processes on receiving the message.08-23-2012
20120216203HOLISTIC TASK SCHEDULING FOR DISTRIBUTED COMPUTING - Embodiments of the present invention provide a method, system and computer program product for holistic task scheduling in a distributed computing environment. In an embodiment of the invention, a method for holistic task scheduling in a distributed computing environment is provided. The method includes selecting a first task for a first job and a second task for a different, second job, both jobs being scheduled for processing within a node of a distributed computing environment by a task scheduler executing in memory by at least one processor of a computer.08-23-2012
20100205606SYSTEM AND METHOD FOR EXECUTING A COMPLEX TASK BY SUB-TASKS - A system, device and method for performing a task by sub-tasks are provided. A number of sub-tasks may be selected for execution and an execution order may be determined. A prologue for a preceding sub-task and an epilogue for a subsequent task may be executed. The same prologue and epilogue may be used for a number of sub-task pairs. Executing the prologue and epilogue may enable consecutive execution of sub-tasks. Other embodiments are described and claimed.08-12-2010
20100205604SYSTEMS AND METHODS FOR EFFICIENTLY RUNNING MULTIPLE INSTANCES OF MULTIPLE APPLICATIONS - A system and method for managing multiple instances of a software application running on a single operating system is described. The system may be a server which hosts multiple copies of the same software application running in real time within a framework. The framework prevents the multiple copies of the application from interfering with one another.08-12-2010
20100205605SCHEDULING METHOD AND SYSTEM - A scheduling method and system. The method includes receiving by a computing system first data and second data associated with a user. The first data comprises associated user identification, an activity selection for an activity, and first scheduling information. The second data comprises geographical preference data. The computing system determines facilities associated with the activity. The facilities are located within boundaries specified by the geographical preference data. The computing system generates tentative reservations for the user at each facility. The computing system presents the tentative reservations data to the user. The computing system receives verification data from the user. The computing system posts the tentative reservations data in a social networking environment. The computing system stores the tentative reservations data.08-12-2010
20110179420Computer System and Method of Operation Thereof - Within a computer system, typically any server process will serve client requests either to actually use the system resource to which the server process relates (e.g. a file server process will relate to the file system, typically stored on a hard disk or the like, and provide read/write access thereto) or to respond to a client request for information on one or more properties of the system resource. For example, a client may request a file server process to report back the amount of spare capacity in the file storage system. However, if no client processes are currently requesting actual use of the resource, then there will be no changes in the system resource which will require notification in any event. Therefore, in the case that all the client programs connected to the server are connected for notification services only, rather than access services, the server will not in fact be used, and moreover no changes to the system resource will need to be notified (because there has been no use of the resource to cause any changes). In this case, therefore, the access server can be unloaded from main or higher memory, thus providing savings in memory and CPU execution cycles.07-21-2011
20120137300Information Processor and Information Processing Method - According to one embodiment, an information processor includes a plurality of execution units, a storage, a generator, and a controller. The storage stores a plurality of basic modules executable asynchronously with another module and a parallel execution control description that defines an execution rule for the basic modules. The generator generates a task graph in which nodes indicating a plurality of tasks relating to the execution of the basic modules are connected by an edge according to the execution order of the tasks, and the nodes and a node of another module in a data dependency relationship are connected by the edge. The controller controls the assignment of the basic modules to the execution units based on the execution rule. The execution units each function as the generator for a basic module to be processed according to the assignment and execute the basic module according to the task graph.05-31-2012
20120137298Managing Groups of Computing Entities - Managing groups of entities is described. In an embodiment an administrator manages operations on a plurality of entities by constructing a management scenario which defines tasks to be applied on a group of entities. In an example the management scenario includes information on dependencies between entities and information on entity attributes, for example operating system version or CPU usage. In an embodiment an entity management engine converts the tasks and dependencies in the scenario to a management plan. In an example the management plan is a list of operations and conditions to be respected in applying an operation to an entity. In an embodiment the plan can be validated to ensure there are no conflicts. In an embodiment the entity management engine also comprises a scheduler which runs tasks contained in the plan and monitors their outcome.05-31-2012
20120137299MECHANISM FOR YIELDING INPUT/OUTPUT SCHEDULER TO INCREASE OVERALL SYSTEM THROUGHPUT - A mechanism for yielding input/output scheduler to increase overall system throughput is described. A method of embodiments of the invention includes initiating a first process issuing a first input/output (I/O) operation. The first process is initiated by a first I/O scheduling entity running on a computer system. The method further includes yielding, in response to a yield call made by the first I/O scheduling entity, an I/O scheduler to a second I/O scheduling entity to initiate a second process issuing a second I/O operation to complete a transaction including the first and second processes, and committing the transaction to a storage device coupled to the computer system.05-31-2012
20110185361Interdependent Task Management - An illustrative embodiment of a computer-implemented process for interdependent task management selects a task from an execution task dependency chain to form a selected task, wherein a type selected from a set of types including “forAll,” “runOnce” and none is associated with the selected task, and determines whether there is a “forAll” task. Responsive to a determination that there is no “forAll” task, the process determines whether there is a “runOnce” task and, responsive to a determination that there is a “runOnce” task, further determines whether there is a semaphore for the selected task. Responsive to a determination that there is a semaphore for the selected task, the computer-implemented process determines whether the semaphore is “on” for the selected task and, responsive to a determination that the semaphore is “on,” sets the semaphore “off” and executes the selected task.07-28-2011
20100175066SYSTEMS ON CHIP WITH WORKLOAD ESTIMATOR AND METHODS OF OPERATING SAME - A system on chip (SOC) includes a processor circuit configured to receive instruction information from an external source and to execute an instruction according to the received instruction information and a workload estimator circuit configured to monitor instruction codes executed in the processor circuit, to generate an estimate of a workload of the processor circuit based on the monitored instruction codes and to generate power supply voltage control signal based on the estimate of the workload. The SOC may further include a power management integrated circuit (PMIC) configured to receive the control signal and to adjust a power supply voltage provided to the SOC in response to the control signal.07-08-2010
20120174111METHOD TO DETERMINE DRIVER WORKLOAD FUNCTION AND USAGE OF DRIVER WORKLOAD FUNCTION FOR HUMAN-MACHINE INTERFACE PERFORMANCE ASSESSMENT - A method of objectively measuring a driver's ability to operate a motor vehicle user interface. The method includes objectively measuring the driver's ability to perform each one of a plurality of calibration tasks of various degrees of difficulty including an easy task, a medium task, and a difficult task; generating a scale with which to evaluate the driver's ability to operate the user interface, the scale customized for the driver based on the objective measurements of the driver's ability to perform each calibration task; objectively measuring the driver's ability to operate a function of the motor vehicle user interface; and objectively evaluating the driver's ability to operate the function of the motor vehicle user interface using the scale to determine if the user interface is appropriate for the driver.07-05-2012
20100299668Associating Data for Events Occurring in Software Threads with Synchronized Clock Cycle Counters - Methods, apparatuses, and computer-readable storage media are disclosed for reducing power by reducing hardware-thread toggling in a multi-processor. In a particular embodiment, a method is disclosed that includes collecting data from a plurality of software threads being processed by a processor, where the data for each of the events includes a value of an associated clock cycle counter upon occurrence of the event. Data is correlated for the events occurring for each of the plurality of threads by starting each of a plurality of clock cycle counters associated with the software threads at a common time. Alternatively, data is correlated for the events by logging a synchronizing event within each of the plurality of software threads.11-25-2010
20120284726PERFORMING PARALLEL PROCESSING OF DISTRIBUTED ARRAYS - One or more computer-readable media store executable instructions that, when executed by processing logic, perform parallel processing. The media store one or more instructions for initiating a single programming language, and identifying, via the single programming language, one or more data distribution schemes for executing a program. The media also store one or more instructions for transforming, via the single programming language, the program into a parallel program with an optimum data distribution scheme selected from the one or more identified data distribution schemes, and allocating the parallel program to two or more labs for parallel execution. The media further store one or more instructions for receiving one or more results associated with the parallel execution of the parallel program from the two or more labs, and providing the one or more results to the program.11-08-2012
20120284725Apparatus and Method for Processing Events in a Telecommunications Network - A processing platform, for example a Java Enterprise Edition (JEE) platform comprises a JEE cluster (11-08-2012
20120284724SYNCHRONIZATION OF WORKFLOWS IN A VIDEO FILE WORKFLOW SYSTEM - A system and method for synchronization of workflows in a video file workflow system. A workflow is created that splits execution of the workflow tasks (in a single, video file workflow) across multiple Content Management Systems (CMSs). When a single workflow is split across two CMSs, which jointly perform the overall workflow, the two resulting workflows are created to essentially mirror each other so that each CMS can track the tasks being executed on the other CMS using synchronization messages. Hence, both CMSs have the same representation of the processing status of the video content at all times. This allows for dual tracking of the workflow process and for independent operations, at different CMSs, when the CMS systems require load balancing. The split-processing based synchronization can be implemented in the workflows themselves or with simple modifications to workflow templates, without requiring any modification of the software of the workflow systems.11-08-2012
20100275209READER/WRITER LOCK WITH REDUCED CACHE CONTENTION - A scalable locking system is described herein that allows processors to access shared data with reduced cache contention to increase parallelism and scalability. The system provides a reader/writer lock implementation that uses randomization and spends extra space to spread possible contention over multiple cache lines. The system avoids updates to a single shared location in acquiring/releasing a read lock by spreading the lock count over multiple sub-counts in multiple cache lines, and hashing thread identifiers to those cache lines. Carefully crafted invariants allow the use of partially lock-free code in the common path of acquisition and release of a read lock. A careful protocol allows the system to reuse space allocated for a read lock for subsequent locking to avoid frequent reallocating of read lock data structures. The system also provides fairness for write-locking threads and uses object pooling techniques to reduce costs associated with the lock data structures.10-28-2010
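The striped reader-count idea described in the entry above can be illustrated with a short sketch. This is only a loose Python illustration, not the patented implementation; the stripe count, the field names, and the spin-wait in the writer path are assumptions added for clarity.

```python
import threading

class StripedRWLock:
    """Read lock whose reader count is spread over several slots, one per hash bucket."""
    def __init__(self, stripes=8):
        # Each stripe stands in for a separately cached counter line.
        self._stripes = [{"lock": threading.Lock(), "readers": 0} for _ in range(stripes)]
        self._write_lock = threading.Lock()          # held for the duration of a write

    def _slot(self):
        # Hash the calling thread's id onto one of the counter slots.
        return self._stripes[hash(threading.get_ident()) % len(self._stripes)]

    def acquire_read(self):
        with self._write_lock:                       # blocks while a writer is active
            slot = self._slot()
            with slot["lock"]:
                slot["readers"] += 1

    def release_read(self):
        slot = self._slot()                          # same thread hashes to the same slot
        with slot["lock"]:
            slot["readers"] -= 1

    def acquire_write(self):
        self._write_lock.acquire()                   # stop new readers from entering
        for slot in self._stripes:                   # then wait for each slot to drain
            while True:                              # simple spin-wait, fine for a sketch
                with slot["lock"]:
                    if slot["readers"] == 0:
                        break

    def release_write(self):
        self._write_lock.release()
```

Readers touching different slots never contend with one another; only a writer has to visit every slot.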
20100275210Execution engine for business processes - An execution engine is disclosed for executing business processes. An executable object model is generated for a business process document. Executable object models of business processes are assigned to virtual processors.10-28-2010
20100275208Reduction Of Memory Latencies Using Fine Grained Parallelism And Fifo Data Structures - Software rendering and fine grained parallelism are utilized to reduce/avoid memory latency in a multi-processor (MP) system. According to one embodiment, the management of the transfer of data from one processor to another in the MP environment is moved into a low overhead hardware system. The low overhead hardware system may be a FIFO (“First In First Out”) hardware control. Each FIFO may be real or virtual.10-28-2010
20120180055OPTIMIZING ENERGY USE IN A DATA CENTER BY WORKLOAD SCHEDULING AND MANAGEMENT - Techniques are described for scheduling received tasks in a data center in a manner that accounts for operating costs of the data center. Embodiments of the invention generally include comparing cost-saving methods of scheduling a task to the operating parameters of completing a task—e.g., a maximum amount of time allotted to complete a task. If the task can be scheduled to reduce operating costs (e.g., rescheduled to a time when power is cheaper) and still be performed within the operating parameters, then that cost-saving method is used to create a workload plan to implement the task. In another embodiment, several cost-saving methods are compared to determine the most profitable.07-12-2012
20120180054METHODS AND SYSTEMS FOR DELEGATING WORK OBJECTS ACROSS A MIXED COMPUTER ENVIRONMENT - A method of delegating work of a computer program across a mixed computing environment is provided. The method includes: performing on one or more processors: allocating a container structure on a first context; delegating a new operation to a second context based on the container; receiving the results of the new operation; and storing the results in the container.07-12-2012
20120180058Configuring An Application For Execution On A Parallel Computer - Methods, systems, and products are disclosed for configuring an application for execution on a parallel computer that include: booting up a first subset of a plurality of nodes in a serial processing mode; booting up a second subset of the plurality of nodes in a parallel processing mode; profiling, prior to application deployment on the parallel computer, the application to identify the serial segments and the parallel segments of the application; and deploying the application for execution on the parallel computer in dependence upon the profile of the application and proximity within the data communications network of the nodes in the first subset relative to the nodes in the second subset.07-12-2012
20120180057Activity Recording System for a Concurrent Software Environment - An activity recording system for a concurrent software environment executing software threads in a computer system, the activity recording system comprising: a thread state indicator for recording an indication of a synchronization state of a software thread, the indication being associated with an identification of the software thread; a time profiler for polling values of a program counter for a processor of the computer system at regular intervals, the time profiler being adapted to identify and record one or more synchronization states of the software thread based on the polled program counter value and the recorded indication of state.07-12-2012
20120180056Heterogeneous Enqueuing and Dequeuing Mechanism for Task Scheduling - Methods, systems and computer-readable mediums for task scheduling on an accelerated processing device (APD) are provided. In an embodiment, a method comprises: enqueuing one or more tasks in a memory storage module based on the APD using a software-based enqueuing module; and dequeuing the one or more tasks from the memory storage module using a hardware-based command processor, wherein the command processor forwards the one or more tasks to the shader core.07-12-2012
20130174168POLICY-BASED SCALING OF COMPUTING RESOURCES IN A NETWORKED COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach for policy-driven (e.g., price-sensitive) scaling of computing resources in a networked computing environment (e.g., a cloud computing environment). In a typical embodiment, a workload request for a customer will be received and a set of computing resources available to process the workload request will be identified. It will then be determined whether the set of computing resources are sufficient to process the workload request. If the set of computing resources are under-allocated (or are over-allocated), a resource scaling policy may be accessed. The set of computing resources may then be scaled based on the resource scaling policy, so that the workload request can be efficiently processed while maintaining compliance with the resource scaling policy.07-04-2013
20120185861MEDIA FOUNDATION MEDIA PROCESSOR - A system and method for a media processor separates the functions of topology creation and maintenance from the functions of processing data through a topology. The system includes a control layer including a topology generating element to generate a topology describing a set of input multimedia streams, one or more sources for the input multimedia streams, a sequence of operations to perform on the multimedia data, and a set of output multimedia streams, and a media processor to govern the passing of the multimedia data as described in the topology and govern the performance of the sequence of multimedia operations on the multimedia data to create the set of output multimedia streams. The core layer includes the input media streams, the sources for the input multimedia streams, one or more transforms to operate on the multimedia data, stream sinks, and media sinks to provide the set of output multimedia streams.07-19-2012
20120185860Component Lock Tracing - Methods for lock tracing at a component level. The method includes associating one or more locks with a component of the operating system; initiating lock tracing for the component; and instrumenting the component-associated locks with lock tracing program instructions in response to initiating lock tracing. The locks are selected from a group of locks configured for use by an operating system and individually comprise locking code. The component lock tracing may be static or dynamic.07-19-2012
20120084782Method and Apparatus for Efficient Memory Replication for High Availability (HA) Protection of a Virtual Machine (VM) - High availability (HA) protection is provided for an executing virtual machine. At a checkpoint in the HA process, the active server suspends the virtual machine; and the active server copies dirty memory pages to a buffer. During the suspension of the virtual machine on the active host server, dirty memory pages are copied to a ring buffer. A copy process copies the dirty pages to a first location in the buffer. At a predetermined benchmark or threshold, a transmission process can begin. The transmission process can read data out of the buffer at a second location to send to the standby host. Both the copy and transmission processes can operate substantially simultaneously on the ring buffer. As such, the ring buffer cannot overflow because the transmission process continues to empty the ring buffer as the copy process continues. This arrangement allows for smaller buffers and prevents buffer overflows.04-05-2012
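The ring-buffer hand-off described in the entry above, where one process fills the buffer with dirty pages while another drains it for transmission, can be sketched roughly as follows. The capacity, the page type, and the condition-variable signalling are illustrative assumptions rather than the patented mechanism.

```python
import threading

class DirtyPageRing:
    """Bounded ring buffer that is drained concurrently with being filled."""
    def __init__(self, capacity=64):
        self._buf = [None] * capacity
        self._count = 0
        self._read = 0            # next slot the transmit side will read
        self._write = 0           # next slot the copy side will fill
        self._cv = threading.Condition()

    def put(self, page):
        with self._cv:
            while self._count == len(self._buf):   # copy side waits only if full
                self._cv.wait()
            self._buf[self._write] = page
            self._write = (self._write + 1) % len(self._buf)
            self._count += 1
            self._cv.notify_all()

    def get(self):
        with self._cv:
            while self._count == 0:                 # transmit side waits only if empty
                self._cv.wait()
            page = self._buf[self._read]
            self._read = (self._read + 1) % len(self._buf)
            self._count -= 1
            self._cv.notify_all()
            return page
```

Because the transmit side keeps advancing the read index while the copy side writes, the buffer can stay much smaller than the full dirty-page set.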
20120233619USING GATHERED SYSTEM ACTIVITY STATISTICS TO DETERMINE WHEN TO SCHEDULE A PROCEDURE - Provided are a method, system, and computer program product for using gathered system activity statistics to determine when to schedule a procedure. Activity information is gathered in a computer system during time slots for recurring time periods. A high activity value is an activity amount of a slot having a maximum amount of activity and a low activity value is an activity amount of a slot having a minimum amount of activity. A threshold point is determined as a function of the high activity, the low activity, and a threshold percent comprising a percentage value. A selection is made of at least one lull window having a plurality of consecutive time slots each having an activity value lower than the threshold point and the procedure in the computer system is scheduled to be performed during the time slots in the lull window in a future time period.09-13-2012
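As a rough illustration of the thresholding step described in the entry above, the sketch below takes the threshold point as a fraction of the distance between the lowest and highest observed slot activity and returns runs of consecutive slots that fall below it. The interpolation formula and the data layout are assumptions, not the claimed method.

```python
def find_lull_windows(activity, threshold_percent):
    """Return (start, end) index pairs of consecutive slots below the threshold point."""
    high, low = max(activity), min(activity)
    threshold = low + (high - low) * threshold_percent   # assumed interpretation of the percent
    windows, start = [], None
    for i, value in enumerate(activity):
        if value < threshold:
            start = i if start is None else start        # open or extend a lull window
        elif start is not None:
            windows.append((start, i - 1))               # close the window just before this slot
            start = None
    if start is not None:
        windows.append((start, len(activity) - 1))
    return windows

# Example: hourly activity samples; the procedure would be scheduled inside a window.
slots = [90, 85, 20, 15, 10, 40, 95, 80, 12, 8, 30]
print(find_lull_windows(slots, 0.25))   # -> [(2, 4), (8, 9)]
```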
20120227049JOB SCHEDULING WITH OPTIMIZATION OF POWER CONSUMPTION - A scheduler is provided, which takes into account the location of the data to be accessed by a set of jobs. Once all the dependencies and the scheduling constraints of the plan are respected, the scheduler optimizes the order of the remaining jobs to be run, also considering the location of the data to be accessed. Several jobs needing an access to a dataset on a specific disk may be grouped together so that the grouped jobs are executed in succession, e.g., to prevent activating and deactivating the storage device several times, thus improving the power consumption and also avoiding input output performances degradation.09-06-2012
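The grouping step described in the entry above, where jobs touching the same dataset or disk are run in succession, can be sketched with a simple grouping pass; the job tuple shape is an assumption made for the example.

```python
from itertools import groupby

def group_by_dataset(jobs):
    """jobs: iterable of (job_name, dataset_location); returns jobs grouped per location."""
    ordered = sorted(jobs, key=lambda job: job[1])           # bring same-location jobs together
    return [(location, [name for name, _ in group])
            for location, group in groupby(ordered, key=lambda job: job[1])]

print(group_by_dataset([("j1", "disk2"), ("j2", "disk1"), ("j3", "disk2")]))
# [('disk1', ['j2']), ('disk2', ['j1', 'j3'])] -- each disk is activated once per group
```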
20120260254VISUAL SCRIPTING OF WEB SERVICES FOR TASK AUTOMATION - Tasks are automated using assemblies of services. An interface component allows a user to collect services and to place selected services corresponding to a task to be automated onto a workspace. An analysis component performs an analysis of available data with regard to the selected services provided on the workspace and a configuration component automatically configures inputs of the selected services based upon the analysis of available data without intervention of the user. A dialog component is also provided to allow the user to contribute information to configure one or more of the inputs of the selected services. When processing is complete, an output component outputs a script that is executable to implement the task to be automated.10-11-2012
20120260253MODELING AND CONSUMING BUSINESS POLICY RULES - Concepts and technologies are described herein for modeling and consuming business policy rules. A policy server executes a policy application for modeling and storing the business policy rules. The business policy rules are modeled and stored in a data storage device according to an extensible policy framework architecture that can be tailored by administrators or other entities to support business-specific needs and/or operations. The modeled business policy rules can be used to support enforcement of business policy rules against various business operations, as well as allowing histories and/or other audits of business policy rules to be completed based upon information stored as the business policy rules.10-11-2012
20120260252SCHEDULING SOFTWARE THREAD EXECUTION - A computer-implemented method, system, and/or computer program product schedules execution of software threads. A first software thread is executed together with a second software thread as a first software thread pair. A first content, which resulted from executing the first software pair together, of at least one performance counter, is stored. The first software thread is then executed with a third software thread as a second software thread pair, and the resulting second content of the performance counter(s) is stored. An identification is made of a most efficient software thread pair from the first and second software thread pairs. Upon receiving a request to re-execute the first software thread, the first software thread is selectively matched with either the second software thread or the third software thread for execution based on whether the first software thread pair or the second software thread pair has been identified as the most efficient software thread pair.10-11-2012
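The pairing decision in the entry above can be pictured with a tiny sketch: after running a thread alongside each candidate partner and recording a performance counter, a re-execution request picks the partner from the most efficient recorded pairing. The "cycles" counter and the data structure are assumptions used only for illustration.

```python
pair_counters = {}

def record(pair, cycles):
    """Store the performance-counter content observed for a thread pair."""
    pair_counters[frozenset(pair)] = cycles

def best_partner(thread, candidates):
    """Pick the candidate whose recorded pairing with `thread` was cheapest."""
    scored = [(pair_counters.get(frozenset((thread, c)), float("inf")), c) for c in candidates]
    return min(scored)[1]

record(("t1", "t2"), 1200)
record(("t1", "t3"), 900)
print(best_partner("t1", ["t2", "t3"]))   # -> 't3', the more efficient recorded pair
```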
20090019444Information processing and control - Information processing apparatus, including occurrence number counter counting events that occurred in each of a plurality of CPUs. Apparatus performs functions of; storing accumulated occurrence number of events, which occurred while the thread is being executed by each of the CPUs, in a thread storage area of the thread associating accumulated occurrence number with CPU; storing, in the thread storage area, a value of occurrence number counter of the CPU, the value having been counted before the thread is resumed by the CPU; and adding, to accumulated occurrence number which has been stored in accumulated number storing unit while corresponding to the CPU, a difference value obtained by subtracting a counter value, which has been stored in the start-time number storing unit of the thread, from a counter value of the occurrence number counter of the CPU, in a case where the CPU terminates an execution of the thread.01-15-2009
20090019442Changing a Scheduler in a Virtual Machine Monitor - Machine-readable media, methods, and apparatus are described to change a first scheduler in the virtual machine monitor. In some embodiments, a second scheduler is loaded in a virtual machine monitor when the virtual machine monitor is running; and then is activated to handle a scheduling request for a scheduling process in place of the first scheduler, when the virtual machine monitor is running.01-15-2009
20120260255Dynamic Test Scheduling - According to one embodiment of the present invention, a system dynamically schedules performance of tasks, and comprises a computer system including at least one processor. The system determines resources required or utilized by each task for performance of that task on a target system, and compares the determined resources of the tasks to identify tasks with similar resource requirements. The identified tasks with similar resource requirements are scheduled to be successively performed on the target system. Embodiments of the present invention further include a method and computer program product for dynamically scheduling performance of tasks in substantially the same manner described above.10-11-2012
20110126200Scheduling for functional units on simultaneous multi-threaded processors - A method and system for scheduling threads on simultaneous multithreaded processors are disclosed. Hardware and operating system communicate with one another providing information relating to thread attributes for threads executing on processing elements. The operating system determines thread scheduling based on the information.05-26-2011
20120233621METHOD, PROGRAM, AND PARALLEL COMPUTER SYSTEM FOR SCHEDULING PLURALITY OF COMPUTATION PROCESSES INCLUDING ALL-TO-ALL COMMUNICATIONS (A2A) AMONG PLURALITY OF NODES (PROCESSORS) CONSTITUTING NETWORK - Optimally scheduling a plurality of computation processes including all-to-all communications (A2A) among a plurality of nodes (processors) constituting a network.09-13-2012
20130174169UPDATING WORKFLOW NODES IN A WORKFLOW - Provided are a method, system, and article of manufacture for updating workflow nodes in a workflow. A workflow program processes user input at one node in a workflow comprised of nodes and workflow paths connecting the nodes, wherein the user provides user input to traverse through at least one workflow path to reach the current node. The workflow program transmits information on a current node to an analyzer. The analyzer processes the information on the current node to determine whether there are modifications to at least one subsequent node following the current node over at least one workflow path from the current node. The analyzer transmits to the workflow program an update including modifications to the at least one subsequent node in response to determining the modifications.07-04-2013
20130174170PARALLEL COMPUTER, AND JOB INFORMATION ACQUISITION METHOD FOR PARALLEL COMPUTER - A parallel computer includes a plurality of calculation nodes and a management node. A calculation node includes a retention control unit that retains job information in a retention unit in association with an identification number, and the management node includes a retention control unit that retains the job information in a retention unit, retains, as a snapshot, job information of the same identification number in a case where the job information of the same identification number about a calculation node is detected in the retention unit. The retention unit of the calculation node includes a retention region enabling retention of job information corresponding to a plurality of periods, and the retention unit of the management node includes a retention region enabling retention of the job information corresponding to the plurality of periods with respect to each of the calculation nodes.07-04-2013
20130174171INTELLIGENT INCLUSION/EXCLUSION AUTOMATION - Methods, computer systems, and computer program products for automating tasks in a computing environment, are provided. In one such embodiment, by way of example only, if an instant task is not found in one of a list of included tasks and a list of excluded tasks, at least one of the following is performed: the instant task is compared with previous instances of the task, if any; the instant task is analyzed, including an input/output (I/O) sequence for the instant task, to determine if the instant task is similar to an existing task; and the instant task is considered as a possible candidate for automation. If the instant task is determined to be an automation candidate, the instant task is added to the list of included tasks, otherwise the instant task is added to the list of excluded tasks.07-04-2013
20130174165FAULT TOLERANT DISTRIBUTED LOCK MANAGER - A lock manager running on a machine may write a first entry for a first process to a queue associated with a resource. If the first entry is not at a front of the queue, the lock manager identifies a second entry that is at the front of the queue, and determines whether a second process associated with the second entry is operational. If the second process is not operational, the lock manager removes the second entry from the queue. Additionally, if the queue becomes unavailable, the lock manager may initiate failover to a backup copy of the queue.07-04-2013
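The queue-based lock-manager behaviour described in the entry above can be sketched briefly: each requester appends an entry, whoever sits at the front owns the lock, and a dead owner's entry is evicted so the queue cannot wedge. The liveness check is modelled here as a simple callback, which is an assumption rather than the patented mechanism.

```python
from collections import deque

class QueueLockManager:
    def __init__(self, is_alive):
        self._queue = deque()          # entries in request order; front holds the lock
        self._is_alive = is_alive      # callback: process id -> bool

    def request(self, pid):
        """Enqueue a lock request and report whether it is granted right away."""
        if pid not in self._queue:
            self._queue.append(pid)
        return self.try_grant(pid)

    def try_grant(self, pid):
        # Evict entries for processes that are no longer operational.
        while self._queue and not self._is_alive(self._queue[0]):
            self._queue.popleft()
        return bool(self._queue) and self._queue[0] == pid

    def release(self, pid):
        if self._queue and self._queue[0] == pid:
            self._queue.popleft()      # next entry, if any, becomes the owner
```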
20130174164METHOD AND SYSTEM FOR MANAGING ONE OR MORE RECURRENCIES - The present disclosure discloses methods and systems for managing one or more recurrencies. The method includes defining one or more recurrency tasks, each task having associated recurrency parameters. The method further includes identifying a recurrency period wherein the one or more recurrency tasks are disaggregated into individual scheduled events over the span of the recurrency period. Thereafter, a user-defined exclusionary schedule is applied to the disaggregated set of events. Subsequently, the edited recurrent tasks are output in a pre-defined file format.07-04-2013
20110004880System and Method for Data Transformation using Dataflow Graphs - A system and method for managing data, such as in a data warehousing, analysis, or similar applications, where dataflow graphs are expressed as reusable map components, at least some of which are selected from a library of components, and map components are assembled to create an integrated dataflow application. Composite map components encapsulate a dataflow pattern using other maps as subcomponents. Ports are used as link points to assemble map components and are hierarchical and composite allowing ports to contain other ports. The dataflow application may be executed in a parallel processing environment by recognizing the linked data processes within the map components and assigning threads to the linked data processes.01-06-2011
20110131581Scheduling Virtual Interfaces - A mechanism is provided for scheduling virtual interfaces having at least one virtual interface scheduler, a virtual interface context cache and a pipeline with a number of processing units. The virtual interface scheduler is configured to send a lock request for a respective virtual interface to the virtual interface context cache. The virtual interface context cache is configured to lock a virtual interface context of the respective virtual interface and to send a lock token to the virtual interface scheduler in dependence on said lock request. The virtual interface context cache is configured to hold a current lock token for the respective virtual interface context and to unlock the virtual interface context if a lock token of an unlock request received from the pipeline matches the held current lock token.06-02-2011
20110131580MANAGING TASK EXECUTION ON ACCELERATORS - Execution of tasks on accelerator units is managed. The managing includes multi-level grouping of tasks into groups based on defined criteria, including start time of tasks and/or deadline of tasks. The task groups and possibly individual tasks are mapped to accelerator units to be executed. During execution, redistribution of a task group and/or an individual task may occur to optimize a defined energy profile.06-02-2011
20120240122WEB-Based Task Management System and Method - A computer system configured to manage a task hierarchy has a task data store configured to store information about a plurality of tasks, the task information including a parent task and a task unique identifier, a task sheet data store configured to store information about a plurality of tasks, the task sheet information including a task sheet unique identifier, and a task to task sheet data store configured to store a plurality of relationships between tasks and task sheets, said relationships including a task unique identifier and a task sheet unique identifier.09-20-2012
20120240123Energy And Performance Optimizing Job Scheduling - Energy and performance optimizing job scheduling that includes queuing jobs; characterizing jobs as hot or cold, specifying a hot and a cold job sub-queue; iteratively for a number of schedules, until estimated performance and power characteristics of executing jobs in accordance with a schedule meets predefined selection criteria: determining a schedule in dependence upon a user provided parameter, the characterization of each job as hot or cold, and an energy and performance optimizing heuristic; estimating performance and power characteristics of executing the jobs in accordance with the schedule; and determining whether the estimated performance and power characteristics meet the predefined selection criteria. If the estimated performance and power characteristics do not meet the predefined selection criteria, adjusting the user-provided parameter for a next iteration and executing the plurality of jobs in accordance with the determined schedule if the estimated performance and power characteristics meet the predefined selection criteria.09-20-2012
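The iterate-until-acceptable loop described in the entry above can be sketched very loosely; the `make_schedule` heuristic, the `estimate` cost model, and the way the user-provided parameter is adjusted are all placeholder assumptions.

```python
def choose_schedule(jobs, make_schedule, estimate, acceptable, param=0.5, max_iters=20):
    """Iterate candidate schedules until estimated performance and power meet the criteria."""
    schedule = None
    for _ in range(max_iters):
        schedule = make_schedule(jobs, param)     # uses hot/cold characterization and the parameter
        perf, power = estimate(schedule)          # estimated characteristics of this candidate
        if acceptable(perf, power):
            return schedule                       # selection criteria met: execute this schedule
        param *= 0.9                              # otherwise adjust the parameter and retry
    return schedule                               # fall back to the last candidate examined
```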
20120240121CROSS FUNCTIONAL AREA SERVICE IDENTIFICATION - A cross-functional area service identification method and system. The method includes reading by a computing system, processes. The computing system processes process elements associated with the processes. The computing system identifies a first functional area associated with a first current process element of the process elements and a second functional area associated with a first parent process element of the first current process element. The computing system compares the first functional area to the second functional area and determines if the first functional area comprises a same functional area as the second functional area. The computing system generates and stores results indicating if the first functional area comprises a same functional area as the second functional area.09-20-2012
20120240120INFORMATION PROCESSING APPARATUS, POWER CONTROL METHOD, AND COMPUTER PRODUCT - An information processing apparatus includes a first detector that detects a scheduled starting time of an event to be corrected and executed at the current time or thereafter; a second detector that detects in processing contents differing from that of the event detected by the first detector, a scheduled starting time of each event to be executed at the current time or thereafter; a calculator that calculates the difference between the scheduled starting time detected by the first detector and each scheduled starting time detected by the second detector; a determiner that determines a target event for the event to be corrected, based on the calculated differences; and a corrector that corrects the scheduled starting time of the event to be corrected such that an interval becomes short between the scheduled starting time of the event to be corrected and the scheduled starting time of the target event.09-20-2012
20120266174METHODS AND APPARATUS FOR ACHIEVING THERMAL MANAGEMENT USING PROCESSING TASK SCHEDULING - The present invention provides apparatus and methods to perform thermal management in a computing environment. In one embodiment, thermal attributes are associated with operations and/or processing components, and the operations are scheduled for processing by the components so that a thermal threshold is not exceeded. In another embodiment, hot and cool queues are provided for selected operations, and the processing components can select operations from the appropriate queue so that the thermal threshold is not exceeded.10-18-2012
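The hot/cool queue idea in the entry above can be pictured with a short dispatcher sketch: each operation carries a rough thermal cost, and work is drawn from the cool queue whenever the component is near its thermal threshold. The cost values and threshold comparison are assumptions for illustration only.

```python
from collections import deque

class ThermalDispatcher:
    def __init__(self, threshold):
        self.hot = deque()        # operations expected to generate more heat
        self.cool = deque()       # lighter operations
        self.threshold = threshold

    def submit(self, op, thermal_cost):
        # Assumed rule: anything above a unit cost counts as a "hot" operation.
        (self.hot if thermal_cost > 1.0 else self.cool).append(op)

    def next_op(self, current_temp):
        # Near the threshold, prefer cool work so the threshold is not exceeded.
        if current_temp >= self.threshold and self.cool:
            return self.cool.popleft()
        for q in (self.hot, self.cool):
            if q:
                return q.popleft()
        return None
```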
20110041131MIGRATING TASKS ACROSS PROCESSORS - The present disclosure is directed to a method for managing tasks in a computer system having a plurality of CPUs. Each task in the computer system may be configured to indicate a migration ready indicator of the task. The migration ready indicator for a task may be given when the set of live data for that task reduces or its working set of memory changes. The method may comprise associating a migration readiness queue with each of the plurality of CPUs, the migration readiness queue having a front-end and a back-end; analyzing a task currently executing on a particular CPU, wherein the particular CPU is one of the plurality of CPUs; placing the task in the migration readiness queue of the particular CPU based on status of the task and/or the migration ready indicator of the task; and selecting at least one queued task from the front-end of the migration readiness queue of the particular CPU for migration when the particular CPU receives a task migration command.02-17-2011
20120324460Thread Execution in a Computing Environment - A technique for executing normally interruptible threads of a process in a non-preemptive manner includes in response to a first entry associated with a first message for a first thread reaching a head of a run queue, receiving, by the first thread, a first wake-up signal. In response to receiving the wake-up signal, the first thread waits for a global lock. In response to the first thread receiving the global lock, the first thread retrieves the first message from an associated message queue and processes the retrieved first message. In response to completing the processing of the first message, the first thread transmits a second wake-up signal to a second thread whose associated entry is next in the run queue. Finally, following the transmitting of the second wake-up signal, the first thread releases the global lock.12-20-2012
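The hand-off pattern described in the entry above can be sketched with ordinary threads: each worker sleeps until signalled, takes a single global lock, drains one message, wakes the next worker in the run queue, and only then releases the lock. This is an illustration of the pattern, not the patented scheduler; the class and field names are assumptions.

```python
import threading, queue, time

global_lock = threading.Lock()

class Worker(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True, name=name)
        self.wakeup = threading.Event()
        self.messages = queue.Queue()
        self.successor = None                     # next thread in the run queue, if any

    def run(self):
        while True:
            self.wakeup.wait()                    # sleep until a wake-up signal arrives
            self.wakeup.clear()
            with global_lock:                     # serialises all message handling
                message = self.messages.get()     # assumes a message is queued for this entry
                print(f"{self.name} handled {message!r}")
                if self.successor is not None:
                    self.successor.wakeup.set()   # signal the next entry before the lock drops

a, b = Worker("A"), Worker("B")
a.successor = b
a.messages.put("first message")
b.messages.put("second message")
a.start(); b.start()
a.wakeup.set()          # entry for A reaches the head of the run queue
time.sleep(0.5)         # give the daemon threads time to run in this toy example
```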
20120324459PROCESSING HIERARCHICAL DATA IN A MAP-REDUCE FRAMEWORK - Methods and arrangements for processing hierarchical data in a map-reduce framework. Hierarchical data is accepted, and a map-reduce job is performed on the hierarchical data. This performing of a map-reduce job includes determining a cost of partitioning the data, determining a cost of redefining the job and thereupon selectively performing at least one step taken from the group consisting of: partitioning the data and redefining the job.12-20-2012
20120324458SCHEDULING HETEROGENOUS COMPUTATION ON MULTITHREADED PROCESSORS - Aspects include computation systems that can identify computation instances that are not capable of being reentrant, or are not reentrant capable on a target architecture, or are non-reentrant as a result of having a memory conflict in a particular execution situation. A system can have a plurality of computation units, each with an independently schedulable SIMD vector. Computation instances can be defined by a program module, and a data element(s) that may be stored in a local cache for a particular computation unit. Each local cache does not maintain coherency controls for such data elements. During scheduling, a scheduler can maintain a list of running (or runnable) instances, and attempt to schedule new computation instances by determining whether any new computation instance conflicts with a running instance and responsively defer scheduling. Memory conflict checks can be conditioned on a flag or other indication of the potential for non-reentrancy.12-20-2012
20120324457USING COMPILER-GENERATED TASKS TO REPRESENT PROGRAMMING ELEMENTS - The present invention extends to methods, systems, and computer program products for representing various programming elements with compiler-generated tasks. Embodiments of the invention enable access to the future state of a method through a handle to a single and composable task object. For example, an asynchronous method is rewritten to generate and return a handle to an instance of a builder object, which represents one or more future states of the asynchronous method. Information about operation of the asynchronous method is then passed through the handle. Accordingly, state of the asynchronous method is trackable prior to and after completing.12-20-2012
20120324456MANAGING NODES IN A HIGH-PERFORMANCE COMPUTING SYSTEM USING A NODE REGISTRAR - A method of managing nodes in a high-performance computing (HPC) system, which includes a management subsystem and a job scheduler subsystem, includes providing a node registrar subsystem. Logical node management functions are performed with the node registrar subsystem. Other management functions are performed with the management subsystem using the node registrar subsystem. Job scheduling functions are performed with the job scheduler subsystem using the node registrar subsystem.12-20-2012
20120324455MONAD BASED CLOUD COMPUTING - Systems and methods are provided for using monads to facilitate complex computation tasks in a cloud computing environment. In particular, monads can be employed to facilitate creation and execution of data mining jobs for large data sets. Monads can allow for improved error handling for complex computation tasks. Monads can also facilitate identification of opportunities for improving the efficiency of complex computations.12-20-2012
20110055838OPTIMIZED THREAD SCHEDULING VIA HARDWARE PERFORMANCE MONITORING - A system and method for efficient dynamic scheduling of tasks. A scheduler within an operating system assigns software threads of program code to computation units. A computation unit may be a microprocessor, a processor core, or a hardware thread in a multi-threaded core. The scheduler receives measured data values from performance monitoring hardware within a processor as the one or more processors execute the software threads. The scheduler may be configured to reassign a first thread assigned to a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource. The scheduler may perform this dynamic reassignment in response to determining from the measured data values a first measured value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second measured value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold.03-03-2011
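The rebalancing rule in the entry above can be reduced to a small sketch: if the shared resource behind a thread's current computation unit exceeds a utilisation threshold and another unit's resource does not, the thread is marked for reassignment. The data shapes and the 0.8 threshold are assumptions.

```python
def rebalance(threads, units, utilisation, threshold=0.8):
    """threads: thread -> unit; units: unit -> shared resource; utilisation: resource -> load."""
    moves = {}
    for thread, current_unit in threads.items():
        if utilisation[units[current_unit]] <= threshold:
            continue                              # current shared resource is within budget
        for candidate, resource in units.items():
            if utilisation[resource] <= threshold:
                moves[thread] = candidate         # reassign to the under-utilised unit
                break
    return moves

units = {"core0": "cacheA", "core1": "cacheB"}
threads = {"t1": "core0", "t2": "core0"}
print(rebalance(threads, units, {"cacheA": 0.95, "cacheB": 0.30}))
# {'t1': 'core1', 't2': 'core1'} -- a real scheduler would also account for the added load
```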
20100251248JOB PROCESSING METHOD, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED JOB PROCESSING PROGRAM AND JOB PROCESSING SYSTEM - For data obtaining target at execution of a new task, if a data set as a processing target is beforehand allocated to a data allocation area in an allocation-target execution server as a target of allocation, a schedule server of a job processing system sets the data set as the data obtaining target; if the data set as the processing target is not beforehand allocated to the data allocation area in any one of the execution servers, the schedule server sets the data in the external storage area as the data obtaining target; and if the data set as the processing target is beforehand allocated to the data allocation area in a second execution server other than the allocation-target execution server, the schedule server sets the data set allocated to the second execution server as the data obtaining target.09-30-2010
20120278810Scheduling Cool Air Jobs In A Data Center - Scheduling cool air jobs in a data center comprising computers whose operations produce heat and require cooling, cooling resources that provide cooling for the data center, a workload controller that schedules and allocates data processing jobs among the computers, a cooling controller that schedules and allocates cooling jobs among cooling resources, including assigning data processing jobs for execution by computers in the data center; providing, to the cooling controller, information describing data processing jobs scheduled for allocation among the computers in the data center; specifying, by the cooling controller in dependence upon the physical location of the computer to which each job is allocated and the quantity of data processing represented by each job, cooling jobs to be executed by cooling resources; and assigning, by the cooling controller in accordance with the workload allocation schedule to cooling resources in the data center, cooling jobs for execution.11-01-2012
20120278809LOCK BASED MOVING OF THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby a sleep time associated with the lock experienced by the first thread and the second thread is reduced below a sleep time experienced prior to the detecting cooperation step.11-01-2012
20120331471EXECUTING MOLECULAR TRANSACTIONS - The claimed subject matter provides a method for executing molecular transactions on a distributed platform. The method includes generating a first unique identifier for executing a molecular transaction. The molecular transaction includes a first atomic action. The method further includes persisting a first work list record. The first work list record includes the first unique identifier and a step number for the first atomic action. Additionally, the method includes retrieving, by a first worker process of a runtime, the first work list record. The method also includes executing, by the first worker process, the first atomic action in response to determining that a first successful completion record for the first atomic action does not exist. Further, the method includes persisting, by the first worker process, the first successful completion record for the first atomic action in response to a successful execution of the first atomic action.12-27-2012
20110276971EXTENDING OPERATIONS OF AN APPLICATION IN A DATA PROCESSING SYSTEM - A method, an apparatus, and computer instructions are provided for extending operations of an application in a data processing system. A primary operation is executed. All extended operations of the primary operation are cached and pre and post operation identifiers are identified. For each pre operation identifier, a pre operation instance is created and executed. For each post operation identifier, a post operation instance is created and executed.11-10-2011
20110276970PROGRAMMER PRODUCTIVITY THROUGH A HIGH LEVEL LANGUAGE FOR GAME DEVELOPMENT - An object of the present invention is to provide a system and a method for a C++ based extension of the parallel VSIPL++ API that consists of a basis of game engine related operations. The invention relates to a system 11-10-2011
20110276969LOCK REMOVAL FOR CONCURRENT PROGRAMS - A system and method are disclosed for removing locks from a concurrent program. A set of behaviors associated with a concurrent program are modeled as causality constraints. The causality constraints which preserve the behaviors of the concurrent program are identified. Having identified the behavior preserving causality constraints, the corresponding lock and unlock statements in the concurrent program are identified which enforce the identified causality constraints. All identified lock and unlock statements are retained, while all other lock and unlock statements are discarded.11-10-2011
20110276968EVENT DRIVEN CHANGE INJECTION AND DYNAMIC EXTENSIONS TO A BPEL PROCESS - An extensible process design provides an ability to dynamically inject changes into a running process instance, such as a BPEL instance. Using a combination of BPEL, rules and events, processes can be designed to allow flexibility in terms of adding new activities, removing or skipping activities and adding dependent activities. These changes do not require redeployment of the orchestration process and can affect the behavior of in-flight process instances. The extensible process design includes a main orchestration process, a set of task execution processes and a set of generic trigger processes. The design also includes a set of rules evaluated during execution of the tasks of the orchestration process. The design can further include three types of events: an initiate process event, a pre-task execution event and a post-task execution event. These events and rules can be used to alter the behavior of the main orchestration process at runtime.11-10-2011
20120331472AD HOC GENERATION OF WORK ITEM ENTITY FOR GEOSPATIAL ENTITY BASED ON SYMBOL MANIPULATION LANGUAGE-BASED WORKFLOW ITEM - In one embodiment, a method comprises receiving from a user interface, by a computing device, a request for execution of at least one lambda function in an operation of a geospatial application, the geospatial application having lambda functions for operating on at least one of a workflow item or one or more entities of an ad hoc geospatial directory, the workflow item including at least one of the lambda functions for a workflow in the geospatial application; and executing by the computing device the at least one lambda function to form, in the geospatial application, a work entity that associates the workflow item with one of the entities, the work entity defining execution of the workflow on the one entity.12-27-2012
20120331470EMITTING COHERENT OUTPUT FROM MULTIPLE THREADS FOR PRINTF - One embodiment of the present invention sets forth a technique for emitting coherent output from multiple threads for the printf() function. Additionally, parallel (not divergent) execution of the threads for the printf() function is maintained when possible to improve run-time performance. Processing of the printf() function is separated into two tasks, gathering of the per thread data and formatting the gathered data according to the formatting codes for display. The threads emit a coherent stream of contiguous segments, where each segment includes the format string for the printf() function and the gathered data for a thread. The coherent stream is written by the threads and read by a display processor. The display processor executes a single thread to format the gathered data according to the format string for display.12-27-2012
20110283284DISTRIBUTED BUSINESS PROCESS MANAGEMENT SYSTEM WITH LOCAL RESOURCE UTILIZATION - Systems and methods consistent with the invention may include providing an instance of business process management suite in a sandbox of a web browser. The instance of the business process management suite may be based on an archive received from a web server. The business process management suite may be controlled using a graphical user interface in a browser. Providing a business process management suite may further include creating an instance of a database management system in the sandbox. The instance of the database management system may further store its data in the local memory of a client device.11-17-2011
20110321051TASK SCHEDULING BASED ON DEPENDENCIES AND RESOURCES - An example system identifies a set of tasks as being designated for execution, and the set of tasks includes a first task and a second task. The example system accesses task dependency data that corresponds to the second task and indicates that the first task is to be executed prior to the second task. The example system, based on the task dependency data, generates a task dependency model of the set of tasks. The dependency model indicates that the first task is to be executed prior to the second task. The example system schedules an execution of the first task, which is scheduled to use a particular data processing resource. The scheduling is based on the dependency model.12-29-2011
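The dependency model in the entry above amounts to scheduling tasks in an order that respects "must run before" edges; a minimal sketch, assuming the dependencies are expressed as a mapping from each task to the set of tasks that must execute first, is shown below.

```python
from graphlib import TopologicalSorter

# task -> set of tasks that must be executed prior to it (illustrative names)
deps = {"second_task": {"first_task"}, "report": {"second_task"}}

order = list(TopologicalSorter(deps).static_order())
print(order)   # 'first_task' is scheduled before 'second_task', which precedes 'report'
```

Each task in the resulting order can then be assigned to whatever data processing resource is available when its predecessors have finished.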
20110321050METHOD AND APPARATUS FOR PROVIDING SHARED SCHEDULING REQUEST RESOURCES - In accordance with one or more embodiments and corresponding disclosure thereof, various aspects are described in connection with providing shared scheduling request (SR) resources to devices for transmitting SRs. Identifiers related to the shared SR resources can be signaled to the devices along with indications of the shared SR resources in given time durations. Thus, devices can transmit an SR over shared SR resources related to one or more received identifiers for obtaining an uplink grant. This can decrease delay associated with receiving uplink grants since the device need not wait for dedicated SR resources before transmitting the SR. In addition, overhead can be decreased on control channels, as compared to signaling dedicated SR resources and/or uplink grants. Moreover, identifiers related to SR resources can correspond to a grouping of devices, such that a device can transmit over shared SR resources related to a group including the device.12-29-2011
20110321049Programmable Integrated Processor Blocks - An integrated processor block of the network on a chip is programmable to perform a first function. The integrated processor block includes an inbox to receive incoming packets from other integrated processor blocks of a network on a chip, an outbox to send outgoing packets to the other integrated processor blocks, an on-chip memory, and a memory management unit to enable access to the on-chip memory.12-29-2011
20110321048FACILITATING QUIESCE OPERATIONS WITHIN A LOGICALLY PARTITIONED COMPUTER SYSTEM - A facility is provided for processing to distinguish between a full conventional (or total system) quiesce request within a logically partitioned computer system, which requires all processors of the computer system to remain quiesced for the duration of the quiesce-related operation, and a new early-release conventional quiesce request, which is associated with fast-quiesce request utilization. In accordance with the facility, once all processors have quiesced responsive to a pending quiesce request sequence, the processors are allowed to block early-release conventional quiesce interrupts and to continue processing if there is no total system quiesce request in the pending quiesce request sequence.12-29-2011
20120102496RECONFIGURABLE PROCESSOR AND METHOD FOR PROCESSING A NESTED LOOP - A reconfigurable processor which merges an inner loop and an outer loop which are included in a nested loop and allocates the merged loop to processing elements in parallel, thereby reducing processing time to process the nested loop. The reconfigurable processor may extract loop execution frequency information from the inner loop and the outer loop of the nested loop, and may merge the inner loop and the outer loop based on the extracted loop execution frequency information.04-26-2012
20120102495RESOURCE MANAGEMENT IN A MULTI-OPERATING ENVIRONMENT - A method for providing user access to telephony operations in a multi-operating environment having memory resources nearly depleted that includes determining whether a predetermined first memory threshold of a computing environment has been reached and displaying a user interface corresponding to memory usage; and determining whether a predetermined second memory threshold, greater than the first, of the computing environment has been reached. Restricting computing functionality and allowing user access for telephony operations, corresponding to a mobile device, when the second memory threshold is reached is included as well. Also included is maintaining the computing restriction until the memory usage returns below the second memory threshold.04-26-2012
20120102494MANAGING NETWORKS AND MACHINES FOR AN ONLINE SERVICE - A cloud manager assists in deploying and managing networks for an online service. The cloud manager system receives requests to perform operations relating to configuring, updating and performing tasks in networks that are used in providing the online service. The management of the assets may comprise deploying machines, updating machines, removing machines, performing configuration changes on servers, Virtual Machines (VMs), as well as performing other tasks relating to the management. The cloud manager is configured to receive requests through an idempotent and asynchronous application programming interface (API) that can not rely on a reliable network.04-26-2012
20120291034TECHNIQUES FOR EXECUTING THREADS IN A COMPUTING ENVIRONMENT - A technique for executing normally interruptible threads of a process in a non-preemptive manner includes in response to a first entry associated with a first message for a first thread reaching a head of a run queue, receiving, by the first thread, a first wake-up signal. In response to receiving the wake-up signal, the first thread waits for a global lock. In response to the first thread receiving the global lock, the first thread retrieves the first message from an associated message queue and processes the retrieved first message. In response to completing the processing of the first message, the first thread transmits a second wake-up signal to a second thread whose associated entry is next in the run queue. Finally, following the transmitting of the second wake-up signal, the first thread releases the global lock.11-15-2012
20120291035PARALLELIZED PROGRAM CONTROL - A processor comprises a plurality of processing units operating in parallel. Each processing unit is associated with a time signal generator; upon its expiry, the corresponding processing unit can set the expired time signal generator to a predefined duration of time. If the end of the predefined duration of time deviates by less than a predefined amount from the scheduled expiry of a time signal generator assigned to a different processing unit, the predefined duration of time is modified.11-15-2012
20120291033THREAD-RELATED ACTIONS BASED ON HISTORICAL THREAD BEHAVIORS - Various embodiments provide techniques for managing threads based on a thread history. In at least some embodiments, a behavior associated with currently existing threads is observed and a thread-related action is performed. A result of the thread-related action with respect to the currently existing threads, resources associated with the currently existing threads (e.g., hardware and/or data resources), and/or other threads, is then observed. A thread history is recorded (e.g., as part of a thread history database) that includes the behavior associated with the currently existing threads, the thread related action that was performed, and the result of the thread-related action. The thread history can include information about multiple different thread behaviors and can be referenced to determine whether to perform thread-related actions in response to other observed thread behaviors.11-15-2012
20100199281Managing the Processing of Processing Requests in a Data Processing System Comprising a Plurality of Processing Environments - Processing requests may be routed between a plurality of runtime environments, based on whether or not program(s) required for completion of the processing requests is/are loaded in a given runtime environment. Cost measures may be used to compare costs of processing a request in a local runtime environment and of processing the request at a non-local runtime environment.08-05-2010
20100199280SAFE PARTITION SCHEDULING ON MULTI-CORE PROCESSORS - One embodiment is directed to a method of generating a set of schedules for use by a partitioning kernel to execute a plurality of partitions on a plurality of processor cores included in a multi-core processor unit. The method includes determining a duration to execute each of the plurality of partitions without interference and generating a candidate set of schedules using the respective duration for each of the plurality of partitions. The method further includes estimating how much interference occurs for each partition when the partitions are executed on the multi-core processor unit using the candidate set of schedules and generating a final set of schedules by, for at least one of the partitions, scaling the respective duration in order to account for the interference for that partition. The method further includes configuring the multi-core processor unit to use the final set of schedules to control the execution of the partitions using at least two of the cores.08-05-2010
20130014115HIERARCHICAL TASK MAPPING - Mapping tasks to physical processors in parallel computing system may include partitioning tasks in the parallel computing system into groups of tasks, the tasks being grouped according to their communication characteristics (e.g., pattern and frequency); mapping, by a processor, the groups of tasks to groups of physical processors, respectively; and fine tuning, by the processor, the mapping within each of the groups.01-10-2013
20130014114INFORMATION PROCESSING APPARATUS AND METHOD FOR CARRYING OUT MULTI-THREAD PROCESSING - For a thread where data is to be popped off of queue storage, whether or not there is data that can be popped out of the queue storage accessed is first checked and then the data, if any, is popped. When there is no such data, the thread pushes thread information, including the identification information of its own thread, on the same queue and then releases a processor and shifts to a standby state. For a thread that is to push the data, when there is the thread information in the queue, it is determined that there is a thread waiting for the data, and then the data is sent after the thread information has been popped, which in turn resumes the processing.01-10-2013
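The wait-marker idea described in the entry above can be sketched as a small hand-off queue: a consumer that finds no data leaves its own thread information in the same queue and sleeps, and a later producer that sees a waiting marker at the head hands the data directly to that thread. The marker class and locking are assumptions made for the sketch.

```python
import threading
from collections import deque

class _Waiter:
    """Thread information pushed by a consumer that found the queue empty."""
    def __init__(self):
        self.event = threading.Event()
        self.data = None

class HandoffQueue:
    def __init__(self):
        self._q = deque()
        self._lock = threading.Lock()

    def pop(self):
        with self._lock:
            if self._q and not isinstance(self._q[0], _Waiter):
                return self._q.popleft()         # data is available: take it immediately
            waiter = _Waiter()
            self._q.append(waiter)               # otherwise leave our thread info in the queue
        waiter.event.wait()                      # release the processor until data arrives
        return waiter.data

    def push(self, data):
        with self._lock:
            if self._q and isinstance(self._q[0], _Waiter):
                waiter = self._q.popleft()       # a thread is already waiting for the data
                waiter.data = data
                waiter.event.set()               # resume that thread's processing
                return
            self._q.append(data)                 # no waiter: queue the data normally
```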
20130014116UPDATING A WORKFLOW WHEN A USER REACHES AN IMPASSE IN THE WORKFLOW - Provided are a method, system, and article of manufacture for updating a workflow when a user reaches an impasse in the workflow. A workflow program processes user input at a current node in a workflow and provides user input to traverse through at least one workflow path to reach the current node. The workflow program processes user input at the current node to determine whether there is a next node in the workflow for the processed user input. The workflow program transmits information on the current node to an analyzer in response to determining that there is no next node in the workflow. If there are modifications to the current node, then the analyzer transmits to the workflow program an update including the determined modifications to the current node in response to determining the modification.01-10-2013
20100131954ELECTRONIC DEVICE AND CONTROL METHOD THEREOF, DEVICE AND CONTROL METHOD THEREOF, INFORMATION PROCESSING APPARATUS AND DISPLAY CONTROL METHOD THEREOF, IMAGE FORMING APPARATUS AND OPERATION METHOD THEREOF, AND PROGRAM AND STORAGE MEDIUM - In a device having a capability of using time data acquired from an external time information generator, a notification unit notifies a user of time information. The notification unit also notifies the user whether the notified time information is based on time data acquired from the external time information generator. Processing performed by the device is restricted depending on a status associated with time information. Although some types of processing are allowed when the device is in a status in which the time information is based on the time data acquired from the external time information generator, the same type of processing are disabled when the device is in any other status associated with time information.05-27-2010
20120151490SYSTEM POSITIONING SERVICES IN DATA CENTERS - A system and method are disclosed for managing a data center in terms of power and performance. The system includes at least one system positioning application for managing power costs and performance costs at a data center. The at least one system positioning application may determine a status of a data center in terms of power costs and performance costs or generate configurations to automatically implement a desired target state at the data center. A system configuration compiler is configured to receive a request from the system positioning application associated with a data center management task, convert the request into a set of subtasks, and schedule execution of the subtasks to implement the data center management task.06-14-2012
20120151489ARCHITECTURE FOR PROVIDING ON-DEMAND AND BACKGROUND PROCESSING - Embodiments are directed to providing schedule-based processing using web service on-demand message handling threads and to managing processing threads based on estimated future workload. In an embodiment, a web service platform receives a message from a client that is specified for schedule-based, background handling. The web service platform includes an on-demand message handling service with processing threads that are configured to perform on-demand message processing. The web service platform loads the on-demand message handling service including the on-demand message handling threads. The web service platform implements the on-demand message handling service's threads to perform background processing on the received client message. The client messages specified for background handling are thus handled as service-initiated on-demand tasks.06-14-2012
20130019245SPECIFYING ON THE FLY SEQUENTIAL ASSEMBLY IN SOA ENVIRONMENTS - A method and system for defining an interface of a service in a service-oriented architecture environment. Definitions of atomic tasks of a request or response operation included in a service are received. Unique identifiers corresponding to the atomic tasks are assigned. A sequence map required to implement the service is received. The sequence map is populated with a sequence of the assigned unique identifiers corresponding to a sequence of the atomic tasks of the operation. At runtime, an interface of the service is automatically and dynamically generated to define the service by reading the sequence of unique identifiers in the populated sequence map and assembling the sequence of the atomic tasks based on the read sequence of unique identifiers.01-17-2013
20130019246Managing A Collection Of Assemblies In An Enterprise Intelligence ('EI') Framework - Managing a collection of assemblies in an Enterprise Intelligence (‘EI’) framework, including: identifying, by an assembly collection tool, one or more processes for inclusion in a specification of an assembly, the assembly configured to carry out a business capability upon execution in the EI framework; identifying for each process, by the assembly collection tool, one or more tasks that comprise the process; identifying for each task, by the assembly collection tool, one or more steps that comprise the task; identifying, by the assembly collection tool, a sequence for executing the steps, tasks, and processes in the assembly; generating, in dependence upon the identified processes, tasks, steps, and sequence, the specification of the assembly; and storing the specification in an EI assembly repository.01-17-2013
20110161968Performing Zone-Based Workload Scheduling According To Environmental Conditions - To perform zone-based workload scheduling according to environmental conditions in a system having electronic devices, indicators of cooling efficiencies of the electronic devices in corresponding zones are aggregated to form aggregated indicators for respective zones, where the zones include respective subsets of electronic devices. Workload is assigned to the electronic devices according to the aggregated indicators.06-30-2011
20110161963METHODS, APPARATUSES, AND COMPUTER PROGRAM PRODUCTS FOR GENERATING A CYCLOSTATIONARY EXTENSION FOR SCHEDULING OF PERIODIC SOFTWARE TASKS - An apparatus for generating a cyclostationary extension for scheduling periodic software tasks may include a processor and a memory storing executable computer program code that causes the apparatus to at least perform operations including determining a time period including time periods associated with one or more radios. Each of the radios may include algorithms that are executable during respective time intervals of the time period. The computer program code may cause the apparatus to cyclically repeat each of the algorithms a number of times for the duration of the time period. In this regard, the algorithms may be executable a plurality of times during the time period. The computer program code may cause the apparatus to determine whether the algorithms are assignable to processors for execution during the respective time intervals based at least in part on a value. Corresponding computer program products and methods are also provided.06-30-2011
20130024865MULTI-CORE PROCESSOR SYSTEM, COMPUTER PRODUCT, AND CONTROL METHOD - A multi-core processor system includes a core configured to determine whether a task to be synchronized with a given task is present; identify among cores making up the multi-core processor and upon determining that a task to be synchronized with the given task is present, a core to which no non-synchronous task that is not synchronized with another task has been assigned, and identify among cores making up the multi-core processor and upon determining that a task to be synchronized with the given task is not present, a core to which no synchronous task to be synchronized with another task has been assigned; and send to the identified core, an instruction to start the given task.01-24-2013
20130024864Scalable Hardware Mechanism to Implement Time Outs for Pending POP Requests to Blocking Work Queues - Methods and apparatus for minimizing resources for handling time-outs of read requests to a work queue in a work queue memory are described. According to one embodiment of the invention, a work queue execution engine receives a first read request when the work queue is configured in a blocking mode and is empty. A time-out timer is started in response to receiving the first read request. The work queue execution engine receives a second read request while the first read request is still pending, and the work queue is still empty. When the time-out timer expires for the first read request, the work queue execution engine sends an error response for the first read request and restarts the time-out timer for the second read request taking into account an amount of time the second read request has already been pending.01-24-2013
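The abstract above describes one shared time-out timer serving several pending POP requests: when the timer expires, the oldest request receives an error response and the timer restarts for the next pending request, credited with the time that request has already waited. Below is a minimal single-threaded Python sketch of that bookkeeping; the class name, the explicit `now` parameter, and the deque-based pending list are illustrative assumptions, not the hardware mechanism of the filing.

```python
from collections import deque

class BlockingWorkQueue:
    """Sketch of a blocking work queue sharing one time-out timer across
    all pending POP requests (illustrative API, not the patent's)."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.items = deque()            # available work items
        self.pending = deque()          # (request_id, enqueue_time) of waiting POPs
        self.timer_deadline = None      # the single running timer, if any

    def pop(self, request_id, now):
        if self.items:
            return ("data", self.items.popleft())
        # queue empty: park the request and start the timer if it is idle
        self.pending.append((request_id, now))
        if self.timer_deadline is None:
            self.timer_deadline = now + self.timeout_s
        return ("pending", None)

    def on_timer_expired(self, now):
        # the oldest request fails; the timer restarts for the next one,
        # credited with the time that request has already been pending
        request_id, _enq_time = self.pending.popleft()
        if self.pending:
            _, next_enq = self.pending[0]
            self.timer_deadline = next_enq + self.timeout_s
        else:
            self.timer_deadline = None
        return ("timeout_error", request_id)
```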
20080250413Method and Apparatus for Managing Tasks - The method of managing a task provided by the present invention includes the steps of decomposing said task into at least two sub-tasks; assigning said at least two sub-tasks to at least two function modules, so that said at least two function modules respectively complete said at least two sub-tasks, wherein said at least two function modules respectively belong to at least two different equipments. By means of the present invention, a virtual equipment can be constructed more flexibly to complete specific tasks; thus not only can the resources of the equipments be used more effectively, but the user's requirements in different situations can also be met.10-09-2008
20080244588Computing the processor desires of jobs in an adaptively parallel scheduling environment - The present invention describes a system and method for scheduling jobs on a multiprocessor system. The invention includes schedulers for use in both work-sharing and work-stealing environments. Each system utilizes a task scheduler using historical usage information, in conjunction with a job scheduler to achieve its results. In one embodiment, the task scheduler measures the time spent on various activities, in conjunction with its previous processor allocation or previous desire, to determine an indication of its current processor desire. In another embodiment of the present invention, the task scheduler measures the resources used by the job on various activities. Based on these measurements, the task scheduler determines the efficiency of the job and an indication of its current processor desire. In another embodiment, the task scheduler measures the resources consumed executing the job and determines its efficiency and an indication of its current processor desire.10-02-2008
20080235692APPARATUS AND DATA STRUCTURE FOR AUTOMATIC WORKFLOW COMPOSITION - A stream processing system provides a description language for stream processing workflow composition. A domain definition data structure in the description language defines all stream processing components available to the stream processing system. Responsive to receiving a stream processing request, a planner translates the stream processing request into a problem definition. The problem definition defines stream properties that must be satisfied by property values associated with one or more output streams. The planner generates a workflow that satisfies the problem definition given the domain definition data structure.09-25-2008
20080235691SYSTEM AND METHOD OF STREAM PROCESSING WORKFLOW COMPOSITION USING AUTOMATIC PLANNING - An automatic planning system is provided for stream processing workflow composition. End users provide requests to the automatic planning system. The requests are goal-based problems to be solved by the automatic planning system, which then generates plan graphs to form stream processing applications. A scheduler deploys and schedules the stream processing applications for execution within an operating environment. The operating environment then returns the results to the end users.09-25-2008
20080235690Maintaining Processing Order While Permitting Parallelism - A system and method for maintaining processing order while permitting parallelism. Processing of a piece of work is divided into a plurality of stages. At each stage, a task advancing the work towards completion is performed. By performing processing as a sequence of tasks, processing can be done in parallel, with progress being made simultaneously on different pieces of work in different stages by a plurality of threads of execution.09-25-2008
20080235689Scheduling in a communication server - A method of operating a communication server in handling data from a plurality of channels, which includes receiving data of a plurality of channels, by the communication server, determining, for the channels, target times by which the channels should be handled in order to avoid starvation of the channels, estimating handling times required for processing sessions of the channels and repeatedly selecting, by a scheduler of the communication server, a channel whose data is to be handled responsive to the determined target times and the estimated handling times. In addition, a processor of the communication server is scheduled to perform, without interruption for handling of data of other channels, a processing session on the selected channel, in which the received data is prepared for transmission and placed in an output buffer and at least one driver of the communication server transmits the data prepared for transmission independently of the processor of the communication server.09-25-2008
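As a rough illustration of selecting a channel from target times and estimated handling times, the sketch below picks the channel with the least slack before its target; the dictionary fields and the least-slack rule are assumptions for illustration rather than the scheduler actually claimed above.

```python
def pick_next_channel(channels, now):
    """Select the channel to handle next: among channels with pending data,
    take the one whose target time, less its estimated handling time,
    leaves the smallest slack relative to 'now' (illustrative policy)."""
    ready = [c for c in channels if c["pending_bytes"] > 0]
    if not ready:
        return None
    return min(ready, key=lambda c: c["target_time"] - c["est_handling_s"] - now)
```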
20080229313Project task management system for managing project schedules over a network - A client-server based project schedule management system comprises multiple editors accessible through a web browser to perform various scheduling tasks by members of a project. Client-executable code is generated by the server for the client, which is passed to the client along with schedule-related information for populating the respective editors. The client executes the server-generated code to display the respective editor with pertinent information populated therein, and to manage and maintain any new or updated information in response to user interactions with the editor. Rows of tasks are represented by corresponding objects, where editor elements are object attributes which are directly accessible by the respective objects. Database queries are generated by the server based on constant strings containing placeholders which are replaced with information used by the query.09-18-2008
20080229311Interface processor - The invention provides a processor comprising a first port operable to generate a first indication dependent on a first activity at the first port, and a second port operable to generate a second indication dependent on a second activity at the second port. The processor also comprises an execution unit arranged to execute multiple threads; and a thread scheduler connected to receive the indications and arranged to schedule the multiple threads for execution by the execution unit based on those indications. The scheduling includes suspending the execution of a thread until receipt of the respective ready signal. The first activity and the second activity are each associated with respective corresponding threads.09-18-2008
20110246994SCHEDULING HETEROGENEOUS PARTITIONED RESOURCES WITH SHARING CONSTRAINTS - A system and method that provides an automated solution to obtaining quality scheduling for users of computing resources. The system, implemented in an enterprise software test center, collects information from test-shop personnel about test machine features and availability, test jobs, and tester preferences and constraints. The system reformulates this testing information as a system of constraints. An optimizing scheduling engine computes efficient schedules whereby all the jobs are feasibly scheduled while satisfying the users' time preferences to the greatest extent possible. The method and system achieve fairness: if all preferences cannot be met, violations of preferences are distributed as evenly as possible across the users. The test schedule is generated in two phases: the first applies a greedy algorithm that finds an initial feasible assignment of jobs; the second is a local search algorithm that improves the initial greedy solution.10-06-2011
20110265088SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DYNAMICALLY INCREASING RESOURCES UTILIZED FOR PROCESSING TASKS - Mechanisms and methods are provided for dynamically increasing resources utilized for processing tasks. These mechanisms and methods for dynamically increasing resources utilized for processing tasks can enable embodiments to adjust processing power utilized for task processing. Further, adjusting processing power can ensure that quality of service goals set for processing tasks are achieved.10-27-2011
20110265086USER AND DEVICE LOCALIZATION USING PROBABILISTIC DEVICE LOG TRILATERATION - A system and method of localizing elements (shared devices and/or their users) in a device infrastructure, such as a printing network, are provided. The method includes mapping a structure in which the elements of a device infrastructure are located, the elements comprising shared devices and users of the shared devices. Probable locations of fewer than all of the elements in the structure are mapped, with at least some of the elements being initially assigned to an unknown location. Usage logs for a plurality of the shared devices are acquired. The acquired usage log for each device includes a user identifier for each of a set of uses of the device, each of the uses being initiated from a respective location within the mapped structure by one of the users. Based on the acquired usage logs and the input probable locations of some of the elements, locations of at least some of the elements initially assigned to an unknown location are predicted. The prediction is based on a model which infers that for each of a plurality of the users, a usage of at least some of the shared devices by the user is a function of respective distances between the user and each of those devices.10-27-2011
20130174167INTELLIGENT INCLUSION/EXCLUSION AUTOMATION - Computer systems and computer program products for automating tasks in a computing environment are provided. In one such embodiment, by way of example only, if an instant task is not found in either a list of included tasks or a list of excluded tasks, at least one of the following is performed: the instant task is compared with previous instances of the task, if any; the instant task is analyzed, including an input/output (I/O) sequence for the instant task, to determine if the instant task is similar to an existing task; and the instant task is considered as a possible candidate for automation. If the instant task is determined to be an automation candidate, the instant task is added to the list of included tasks, otherwise the instant task is added to the list of excluded tasks.07-04-2013
20130179890LOGICAL DEVICE DISTRIBUTION IN A STORAGE SYSTEM - Utilization of the processor modules is monitored. A varying load pattern including at least one of a bursty behavior or an oscillatory behavior of the processor modules is identified. Distribution of logical devices between processor modules is performed.07-11-2013
20110271284Method of Simulating, Testing, and Debugging Concurrent Software Applications - Embodiments of a method of simulating, testing, and debugging of concurrent software applications are disclosed. Software code is executed by a simulator program that takes over some functions of an operating system. The simulator program according to various embodiments is capable of controlling thread spawning, preemption, operating system calls, interprocess communications, and signals. Notable advantages of the invention are its capability of testing uninstrumented user applications, independence of the high-level computer language of a user application, and machine instruction level granularity. The simulator is capable of obtaining outcomes of reproducible execution sequences, reproducing faulty behavior, and providing debugging information to a user.11-03-2011
20120254881PARALLEL COMPUTER SYSTEM AND PROGRAM - There is provided a parallel computer system for performing barrier synchronization using a master node and a plurality of worker nodes based on the time to allow for an adaptive setting of the synchronization time. When a task process in a certain worker node has not been completed by a worker determination time, the particular worker node performs a communication to indicate that the process has not been completed, to a master node. When the communication has been received by a master determination time, the master node performs a communication to indicate that the process time is extended by a correction process time, in order to adjust and extend the synchronization time. In this way, it is possible to reduce the synchronization overhead associated with the execution of an application with a relatively large variation in the process time from a synchronization point to the next synchronization point.10-04-2012
20120254880THREAD FOLDING TOOL - A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display.10-04-2012
20120254879HIERARCHICAL TASK MAPPING - Mapping tasks to physical processors in parallel computing system may include partitioning tasks in the parallel computing system into groups of tasks, the tasks being grouped according to their communication characteristics (e.g., pattern and frequency); mapping, by a processor, the groups of tasks to groups of physical processors, respectively; and fine tuning, by the processor, the mapping within each of the groups.10-04-2012
20120254875Method for Transforming a Multithreaded Program for General Execution - A technique is disclosed for executing a program designed for multi-threaded operation on a general purpose processor. Original source code for the program is transformed from a multi-threaded structure into a computationally equivalent single-threaded structure. A transform operation modifies the original source code to insert code constructs for serial thread execution. The transform operation also replaces synchronization barrier constructs in the original source code with synchronization barrier code that is configured to facilitate serialization. The transformed source code may then be conventionally compiled and advantageously executed on the general purpose processor.10-04-2012
20120254874System and Method for Job Management between Mobile Devices - A system for job management between mobile devices includes a processor configured to store in a storage messages and data pertaining to job assignments and reassignments to build a historical record concerning each job assignment and reassignment, to receive from a dispatch terminal requests containing messages and data pertaining to creating and allocating job assignments and reassignments at and between mobile devices, to communicate via a communication interface with the mobile devices, to execute the requests by creating and allocating the job assignments and reassignments at and between the mobile devices, and to output to a storage the messages and data pertaining to the allocation of job assignments and reassignments in order to add to and maintain the historical record in the storage concerning each job assignment and reassignment with respect to the mobile devices.10-04-2012
20080216079MANAGING A RESOURCE LOCK - A method of operating a resource lock for controlling access to a resource by a plurality of resource requesters, the resource lock operating in a contention efficient (heavyweight) operating mode, and the method being responsive to a request from a resource requester to acquire the resource lock, the method comprising the steps of: incrementing a count of a total number of acquisitions of the resource lock in the contention efficient operating mode; in response to a determination that access to the resource is not contended by more than one resource requester, performing the steps of: a) incrementing a count of a number of uncontended acquisitions of the resource lock in the contention efficient operating mode; b) calculating a contention rate as the number of uncontended acquisitions in the contention efficient operating mode divided by the total number of acquisitions in the contention efficient operating mode; and c) in response to a determination that the contention rate meets a threshold contention rate, causing the resource lock to change to a non-contention efficient (lightweight) operating mode.09-04-2008
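The abstract above spells out a concrete ratio: uncontended acquisitions in the contention-efficient (heavyweight) mode divided by total acquisitions in that mode, compared against a threshold to trigger the switch to the lightweight mode. A small Python sketch of that counting follows; the class name and the example threshold value are assumptions for illustration.

```python
class AdaptiveLock:
    """Sketch of the described counting scheme: while in the contention-
    efficient (heavyweight) mode, count total and uncontended acquisitions
    and switch to the lightweight mode once the uncontended ratio reaches
    a threshold. Names and the threshold are illustrative."""

    def __init__(self, threshold=0.9):
        self.mode = "heavyweight"
        self.total_acquisitions = 0
        self.uncontended_acquisitions = 0
        self.threshold = threshold

    def on_acquire(self, contended):
        if self.mode != "heavyweight":
            return
        self.total_acquisitions += 1
        if not contended:
            self.uncontended_acquisitions += 1
            # contention rate = uncontended acquisitions / total acquisitions
            rate = self.uncontended_acquisitions / self.total_acquisitions
            if rate >= self.threshold:
                self.mode = "lightweight"
```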
20130174166Efficient Sequencer - Techniques are disclosed for efficiently sequencing operations performed in multiple threads of execution in a computer system. In one set of embodiments, sequencing is performed by receiving an instruction to advance a designated next ticket value, incrementing the designated next ticket value in response to receiving the instruction, searching a waiters list of tickets for an element having the designated next ticket value, wherein searching does not require searching the entire waiters list, and the waiters list is in a sorted order based on the values of the tickets, and removing the element having the designated next ticket value from the list using a single atomic operation. The element may be removed by setting a waiters list head element, in a single atomic operation, to refer to an element in the list having a value based upon the designated next ticket value.07-04-2013
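To make the ticket and waiters-list idea concrete, here is a much-simplified in-process sketch: threads take monotonically increasing tickets, wait until the designated next-ticket value reaches theirs, and an advance operation increments that value and wakes the matching waiter. It substitutes a lock and condition variable for the single atomic head-swap described above, so it should be read as an illustration of the sequencing behavior only.

```python
import itertools
import threading

class Sequencer:
    """Simplified sketch of the ticket/waiters-list behavior described above;
    not the patent's lock-free implementation."""

    def __init__(self):
        self.next_ticket = 0
        self.waiters = []                    # kept sorted by ticket value
        self.cv = threading.Condition()
        self._counter = itertools.count()

    def take_ticket(self):
        return next(self._counter)

    def wait_for_turn(self, ticket):
        with self.cv:
            self.waiters.append(ticket)
            self.waiters.sort()              # sorted order: head holds the lowest ticket
            while ticket != self.next_ticket:
                self.cv.wait()
            self.waiters.remove(ticket)      # our turn: leave the waiters list

    def advance(self):
        with self.cv:
            self.next_ticket += 1            # advance the designated next ticket value
            self.cv.notify_all()             # only the matching waiter will proceed
```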
20130139164Business Process Optimization - The present disclosure involves systems, software, and computer implemented methods for optimizing business processes. One process includes identifying a process model to be compiled, the process model including a plurality of process steps for performing a process associated with the process model, identifying at least two sequential process steps within the process model for inclusion within a single transactional boundary, combining the identified at least two sequential process steps within the single transactional boundary, and compiling the identified process model with the identified at least two sequential process steps combined within the single transactional boundary. In some instances, the process model may be represented in a business process modeling notation (BPMN). Combining the identified sequential process steps within the single transactional boundary can include modifying the process model to enclose the sequential process steps into the single transactional boundary. The transactional boundary may be a transactional sub-process in BPMN.05-30-2013
20130139166DISTRIBUTED DATA STREAM PROCESSING METHOD AND SYSTEM - Embodiments of the present application relate to a distributed data stream processing method, a distributed data stream processing device, a computer program product for processing a raw data stream and a distributed data stream processing system. A distributed data stream processing method is provided. The method includes dividing a raw data stream into a real-time data stream and historical data streams, processing the real-time data stream and the historical data streams in parallel, separately generating respective results of the processing of the real-time data stream and the historical data streams, and integrating the generated processing results.05-30-2013
20130091504DATA FLOWS AND THEIR INTERACTION WITH CONTROL FLOWS - A method and apparatus for processing data by a computer and a method of determining data storage requirements of a computer for carrying out a data processing task.04-11-2013
20130097607METHOD, APPARATUS, AND SYSTEM FOR ADAPTIVE THREAD SCHEDULING IN TRANSACTIONAL MEMORY SYSTEMS - An apparatus and method is described herein for adaptive thread scheduling in a transactional memory environment. A number of conflicts in a thread over time are tracked. And if the conflicts exceed a threshold, the thread may be delayed (adaptively scheduled) to avoid conflicts between competing threads. Moreover, a more complex version may track a number of transaction aborts within a first thread that are caused by a second thread over a period, as well as a total number of transactions executed by the first thread over the period. From the tracking, a conflict ratio is determined for the first thread with regard to the second thread. And when the first thread is to be scheduled, it may be delayed if the second thread is running and the conflict ratio is over a conflict ratio threshold.04-18-2013
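A sketch of the bookkeeping implied by the "more complex version" above: abort counts per victim/aggressor pair and per-thread transaction totals over a period yield a conflict ratio, and a thread is delayed when a currently running thread pushes that ratio over a threshold. The class, threshold value, and method names are illustrative assumptions.

```python
class ConflictTracker:
    """Illustrative conflict-ratio tracking for adaptive thread scheduling."""

    def __init__(self, ratio_threshold=0.25):
        self.aborts_caused_by = {}     # (victim, aggressor) -> abort count in period
        self.transactions = {}         # victim -> total transactions in period
        self.ratio_threshold = ratio_threshold

    def record_transaction(self, thread):
        self.transactions[thread] = self.transactions.get(thread, 0) + 1

    def record_abort(self, victim, aggressor):
        key = (victim, aggressor)
        self.aborts_caused_by[key] = self.aborts_caused_by.get(key, 0) + 1

    def should_delay(self, thread, running_threads):
        """Delay scheduling 'thread' if its conflict ratio against any
        currently running thread exceeds the threshold."""
        total = self.transactions.get(thread, 0)
        if total == 0:
            return False
        for other in running_threads:
            aborts = self.aborts_caused_by.get((thread, other), 0)
            if aborts / total > self.ratio_threshold:
                return True
        return False
```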
20130097606Dynamic Scheduling for Frames Representing Views of a Geographic Information Environment - An exemplary method for scheduling jobs in frames representing views of a geographic information environment is disclosed. An exemplary method includes determining a remaining frame period in a frame representing a view of a geographic information environment. The exemplary method also includes identifying a dynamic job in a scheduling queue. The dynamic job has a non-preemptive section that is between a start of the job and a preemption point of the job. The exemplary method further includes determining an estimated execution time for executing the job. When the estimated execution time is not greater than the remaining frame period, the exemplary method includes executing the non-preemptive section of the job in the frame. When the estimated execution time is greater than the remaining frame period, the exemplary method includes postponing the execution of the job in the frame.04-18-2013
20130104132COMPOSING ANALYTIC SOLUTIONS - An approach for composing an analytic solution is provided. After associating descriptive schemas with web services and web-based applets, a set of input data sources is enumerated for selection. A desired output type is received. Based on the descriptive schemas that specify required inputs and outputs of the web services and web-based applets, combinations of web services and web-based applets are generated. The generated combinations achieve a result of the desired output type from one of the enumerated input data sources. Each combination is derived from available web services and web-based applets. The combinations include one or more workflows that provide an analytic solution. A workflow whose result satisfies the business objective may be saved. Steps in a workflow may be iteratively refined to generate a workflow whose result satisfies the business objective.04-25-2013
20130104136OPTIMIZING ENERGY USE IN A DATA CENTER BY WORKLOAD SCHEDULING AND MANAGEMENT - Techniques are described for scheduling received tasks in a data center in a manner that accounts for operating costs of the data center. Embodiments of the invention generally include comparing cost-saving methods of scheduling a task to the operating parameters of completing a task—e.g., a maximum amount of time allotted to complete a task. If the task can be scheduled to reduce operating costs (e.g., rescheduled to a time when power is cheaper) and still be performed within the operating parameters, then that cost-saving method is used to create a workload plan to implement the task. In another embodiment, several cost-saving methods are compared to determine the most profitable.04-25-2013
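A toy version of the trade-off described above, assuming hourly electricity prices and a completion window: among start times that still meet the window, choose the cheapest. The function name and inputs are invented for illustration and do not reproduce the workload-planning logic of the filing.

```python
def plan_start_time(now_hour, window_hours, duration_hours, price_by_hour):
    """Among start hours that still finish within the allotted completion
    window, return the start hour with the lowest total energy price.
    price_by_hour is a 24-entry list of hypothetical prices."""
    latest_start = now_hour + window_hours - duration_hours
    best_start, best_cost = None, float("inf")
    for start in range(now_hour, latest_start + 1):
        cost = sum(price_by_hour[h % 24] for h in range(start, start + duration_hours))
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost   # (None, inf) if the task cannot fit the window
```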
20130104135DATA CENTER OPERATION - In response to a map task distributed by a job tracker, a map task tracker executes the map task to generate a map output including version information. The map task tracker stores the generated map outputs. The map task tracker informs the job tracker of related information of the map output. In response to a reduce task distributed by the job tracker, the reduce task tracker acquires the map outputs for key names including given version information from the map task trackers, wherein the acquired map outputs include the map outputs with the given version information and historical map outputs with the version information prior to the given version information. The reduce task tracker executes the reduce task on the acquired map outputs.04-25-2013
20130104134COMPOSING ANALYTIC SOLUTIONS - An approach for composing an analytic solution is provided. After associating descriptive schemas with web services and web-based applets, a set of input data sources is enumerated for selection. A desired output type is received. Based on the descriptive schemas that specify required inputs and outputs of the web services and web-based applets, combinations of web services and web-based applets are generated. The generated combinations achieve a result of the desired output type from one of the enumerated input data sources. Each combination is derived from available web services and web-based applets. The combinations include one or more workflows that provide an analytic solution. A workflow whose result satisfies the business objective may be saved. Steps in a workflow may be iteratively refined to generate a workflow whose result satisfies the business objective.04-25-2013
20130104137MULTIPROCESSOR SYSTEM - A multiprocessor system including a plurality of processors, each including a task scheduler that determines a task execution order of the tasks in a task set to be executed by the processors within a task period which is defined as a period in repeated execution of the task sets, and processors that execute the respective tasks; and a scheduler management device having a command unit configured to issue a command for at least one of the task schedulers to change the task execution order, wherein each of the task schedulers, when receiving the command from the command unit, changes the task execution order of the processors.04-25-2013
20130104133CONSTRUCTING CHANGE PLANS FROM COMPONENT INTERACTIONS - Techniques for constructing change plans from one or more component interactions are provided. For example, a computer-implemented technique includes observing at least one interaction between two or more components of at least one distributed computing system, consolidating the at least one interaction into at least one interaction pattern, and using the at least one interaction pattern to construct at least one change plan useable for managing the at least one distributed computing system. In another computer-implemented technique, a partial order of two or more changes is determined from at least one component interaction and is automatically transformed into at least one ordered task, wherein the at least one ordered task is linked by at least one temporal ordering constraint and is used to generate at least one change plan useable for managing the distributed computing system, wherein the change plan is based on at least one requested change.04-25-2013
20130125128REALIZING JUMPS IN AN EXECUTING PROCESS INSTANCE - A method for realizing jumps in an executing process instance can be provided. The method can include suspending an executing process instance, determining a current wavefront for the process instance and computing both a positive wavefront difference for a jump target relative to the current wavefront and also a negative wavefront difference for the jump target relative to the current wavefront. The method also can include removing activities from consideration in the process instance and also adding activities for consideration in the process instance both according to the computed positive wavefront difference and the negative wavefront difference, creating missing links for the added activities, and resuming executing of the process instance at the jump target.05-16-2013
20130125127Task Backpressure and Deletion in a Multi-Flow Network Processor Architecture - Described embodiments generate tasks corresponding to packets received by a network processor. A source processing module sends task messages including a task identifier and a task size to a destination processing module. The destination module receives the task message and determines a queue in which to store the task. Based on a used cache counter of the queue and a number of cache lines for the received task, the destination module determines whether the queue has reached a usage threshold. If the queue has reached the threshold, the destination module sends a backpressure message to the source module. Otherwise, if the queue has not reached the threshold, the destination module accepts the received task, stores data of the received task in the queue, increments the used cache counter for the queue corresponding to the number of cache lines for the received task, and processes the received task.05-16-2013
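A compact sketch of the accept-or-backpressure decision made by the destination module described above: convert the task size into cache lines, compare against a per-queue usage threshold, and either accept the task (incrementing the used-line counter) or signal backpressure to the source. The cache-line size and threshold are placeholder values, not figures from the filing.

```python
class DestinationQueue:
    """Illustrative accept/backpressure bookkeeping for a destination queue."""

    CACHE_LINE_BYTES = 256              # placeholder line size

    def __init__(self, threshold_lines=1024):
        self.used_cache_lines = 0
        self.threshold_lines = threshold_lines
        self.tasks = []

    def on_task_message(self, task_id, task_size_bytes):
        lines_needed = -(-task_size_bytes // self.CACHE_LINE_BYTES)   # ceiling division
        if self.used_cache_lines + lines_needed > self.threshold_lines:
            return ("backpressure", task_id)    # tell the source module to hold off
        self.tasks.append(task_id)              # accept and store the task
        self.used_cache_lines += lines_needed
        return ("accepted", task_id)

    def on_task_done(self, task_id, task_size_bytes):
        self.used_cache_lines -= -(-task_size_bytes // self.CACHE_LINE_BYTES)
```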
20130125126INFORMATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a user interface, a switching unit, and a computer. The user interface is for a user that operates a first processing unit that runs a first operating system or a second processing unit that runs a second operating system. The switching unit selectively switches between the first processing unit and the second processing unit to be associated with the user interface. The computer functions as the first processing unit. The computer functions as the second processing unit. The computer runs a first application program on the first operating system. The computer activates, on the second operating system, a second application program related to the first application program, in a state in which the first processing unit is associated with the user interface. The computer controls the switching unit upon completion of the activation of the second application program.05-16-2013
20130132961MAPPING TASKS TO EXECUTION THREADS - Tasks are mapped to execution threads of a parallel processing device. Tasks are mapped from the list of tasks to execution threads of the parallel processing device that are free. The parallel processing device is allowed to perform the tasks mapped to the execution threads of the parallel processing device for a predetermined number of execution cycles. When the parallel processing device has performed the tasks mapped to the execution threads of the parallel processing device for the predetermined number of execution cycles, the parallel processing device is suspended from further performing the tasks to allow the parallel processing device to determine which execution threads have completed performance of mapped tasks and are therefore free.05-23-2013
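The map/run/suspend cycle described above can be sketched as below, where `step` advances a task by one execution cycle and returns True when it finishes; freeing of threads is deferred to the suspension point, as in the abstract. The thread records (dicts created as `{"task": None}`) and the callback are assumptions for illustration.

```python
def run_round(pending_tasks, threads, cycles_per_round, step):
    """One round of the map / run / suspend loop. 'threads' is a list of
    dicts, each initialized as {"task": None}; 'step(task)' advances a task
    by one cycle and returns True when the task has finished."""
    # map tasks from the task list onto currently free execution threads
    for th in threads:
        if th["task"] is None and pending_tasks:
            th["task"] = pending_tasks.pop(0)
    # let the device run for the predetermined number of execution cycles
    for _ in range(cycles_per_round):
        for th in threads:
            if th["task"] is not None and not th.get("done"):
                th["done"] = step(th["task"])
    # suspend: only now determine which threads completed and are free again
    for th in threads:
        if th.get("done"):
            th["task"], th["done"] = None, False
```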
20130145372EMBEDDED SYSTEMS AND METHODS FOR THREADS AND BUFFER MANAGEMENT THEREOF - Embedded systems are provided, which include a processing unit and a memory. The processing unit simultaneously executes a first thread having a flag for performing a data acquisition operation and a second thread for performing a data process and output operation for the acquired data in the data acquisition operation. The flag is used for indicating whether a state of the first thread is in an execution state or a sleep state. The memory, which is coupled to the processing unit, provides a shared buffer for the first and second threads. Before executing the second thread, the flag is checked to determine whether to execute the second thread, wherein the second thread is executed when the flag indicates the sleep state while execution of the second thread is suspended when the flag indicates the execution state.06-06-2013
20110219379ONE-TIME INITIALIZATION - Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item.09-08-2011
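As a reference point for the behavior described above (the first requester initializes under mutual exclusion, concurrent requesters block, and later requesters see the already-initialized value), here is a minimal Python sketch; it is a generic double-checked pattern, not the patent's mechanism.

```python
import threading

class OnceCell:
    """Minimal sketch of synchronized one-time initialization of a data item."""

    def __init__(self):
        self._lock = threading.Lock()
        self._initialized = False
        self._value = None

    def get_or_init(self, initializer):
        if self._initialized:                 # fast path once initialization is done
            return self._value
        with self._lock:                      # first thread wins; others block here
            if not self._initialized:
                self._value = initializer()   # program-provided initialization code
                self._initialized = True
        return self._value
```

A caller would construct one `OnceCell` per data item and pass the same initializer from every thread; only one invocation of the initializer ever runs.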
20100281484SHARED JOB SCHEDULING IN ELECTRONIC NOTEBOOK - Architecture that synchronizes a job to a shared notebook, eliminating the need for user intervention and guaranteeing that only one instance of the notebook client performs the task. A job tracking component creates and maintains tracking information of jobs processed against shared notebook information. A scheduling component synchronizes a new job against the shared notebook information based on the tracking information. The tracking information can be a file or cells stored at a root level of a hierarchical data collection that represents the electronic notebook. The file includes properties related to a job that has been processed. The properties are updated as new jobs are processed. Job scheduling includes whole file updates and/or incremental updates to the shared notebook information.11-04-2010
20100287557METHOD FOR THE MANAGEMENT OF TASKS IN A DECENTRALIZED DATA NETWORK - In a method for the management of tasks in a decentralized data network with a plurality of nodes for carrying out the tasks, resources are distributed based on a mapping rule, in particular a hash function. A task that is to be suspended is distributed by dividing the process image of the task into segments and by distributing the segments over the nodes using the mapping rule. Thus, a distributed swap space is created so that tasks can also be carried out on nodes whose swap space is not sufficient on its own. The method can be used in embedded systems with a limited storage capacity and/or in a voltage distribution system, wherein the nodes are, for example, switching units in the voltage distribution system. The method can also be used in any other technical systems such as, for example, a power generation system, an automation system and the like.11-11-2010
20100287556Computer System, Control Apparatus For A Machine, In Particular For An Industrial Robot, And Industrial Robot - The invention relates to a computer system (11-11-2010
20100287555USING COMPOSITE SYSTEMS TO IMPROVE FUNCTIONALITY - Systems and methods are provided for enabling communication between a composite system providing additional functionality not contained in existing legacy systems and other existing systems using different commands, variables, protocols, methods, or instructions, when data may be located on more than one system. In an embodiment, multiple software layers are used to independently manage different aspects of an application. A business logic layer may be used in an embodiment to facilitate reading/writing operations on data that may be stored locally and/or on external systems using different commands, variables, protocols, methods, or instructions. A backend abstraction layer may be used in an embodiment in conjunction with the business logic layer to facilitate communication with the external systems. A user interface layer may be used in an embodiment to manage a user interface, a portal layer to manage a user context, and a process logic layer to manage a workflow.11-11-2010
20110231850BLOCK-BASED TRANSMISSION SCHEDULING METHODS AND SYSTEMS - Block-based transmission scheduling methods and systems are provided. First, a plurality of packets corresponding to at least one data flow is received. The packets of the data flow are accumulated to form a data block. Then, the data block of the data flow is scheduled and transmitted according to a transmission scheduling algorithm based on the unit of a block. In some embodiments, when the length of the accumulated data block is equal to or greater than a predefined or dynamically calculated block length threshold, the data block is scheduled and transmitted according to the transmission scheduling algorithm. In some embodiments, when the current time is equal to a specific time point derived from a dynamically calculated or a fixed time duration, the data block is scheduled and transmitted according to the transmission scheduling algorithm.09-22-2011
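A small sketch of the per-flow accumulation in the threshold-based variant: packets are appended to the flow's current block, and once the accumulated length reaches the block-length threshold the block is handed to whatever block-granularity transmission scheduler is in use. The callback and threshold are illustrative assumptions.

```python
class FlowAccumulator:
    """Illustrative per-flow packet accumulation into blocks."""

    def __init__(self, block_threshold_bytes, schedule_block):
        self.block_threshold_bytes = block_threshold_bytes
        self.schedule_block = schedule_block      # e.g. enqueue into a block scheduler
        self.blocks = {}                          # flow_id -> list of packets
        self.block_len = {}                       # flow_id -> accumulated bytes

    def on_packet(self, flow_id, packet):
        self.blocks.setdefault(flow_id, []).append(packet)
        self.block_len[flow_id] = self.block_len.get(flow_id, 0) + len(packet)
        # once the block reaches the threshold, hand it to the scheduler
        if self.block_len[flow_id] >= self.block_threshold_bytes:
            self.schedule_block(flow_id, self.blocks.pop(flow_id))
            self.block_len[flow_id] = 0
```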
20130145373INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - There is provided with an information processing apparatus for controlling execution of a plurality of threads which run on a plurality of calculation cores connected to a memory including a plurality of banks. A first selection unit is configured to select a thread as a continuing thread which receives data from other thread, out of threads which process a data group of interest, wherein the number of accesses for a bank associated with the selected thread is less than a predetermined count. A second selection unit is configured to select a thread as a transmitting thread which transmits data to the continuing thread, out of the threads which process the data group of interest.06-06-2013
20110239218METHOD AND SYSTEM OF LAZY OUT-OF-ORDER SCHEDULING - A method and system to schedule out of order operations without the requirement to execute compare, ready and pick logic in a single cycle. A lazy out-of-order scheduler splits each scheduling loop into two consecutive cycles. The scheduling loop includes a compare stage, a ready stage and a pick stage. The compare stage and the ready stage are executed in a first of the two consecutive cycles and the pick stage is executed in a second of the two consecutive cycles. By splitting each scheduling loop into two consecutive cycles, selecting the oldest operation by default and checking the readiness of the oldest operation, it relieves the system of timing requirements and avoids the need for power hungry logic. Every execution of an operation does not appear as one extra cycle longer and the lazy out-of-order scheduler retains most of the performance of a full out-of-order scheduler.09-29-2011
20130152096APPARATUS AND METHOD FOR DYNAMICALLY CONTROLLING PREEMPTION SECTION IN OPERATING SYSTEM - An apparatus for dynamically controlling a preemption section includes a preemption manager configured to monitor whether a system context has changed, and if the system context has changed, set a current preemptive mode according to the changed system context to dynamically control a preemption section of a kernel. Therefore, even when an application requiring real-time processing, such as a health-care application, co-exists with a normal application, optimal performance may be ensured.06-13-2013
20130152095Expedited Module Unloading For Kernel Modules That Execute Read-Copy Update Callback Processing Code - A technique for expediting the unloading of an operating system kernel module that executes read-copy update (RCU) callback processing code in a computing system having one or more processors. According to embodiments of the disclosed technique, an RCU callback is enqueued so that it can be processed by the kernel module's callback processing code following completion of a grace period in which each of the one or more processors has passed through a quiescent state. An expediting operation is performed to expedite processing of the RCU callback. The RCU callback is then processed and the kernel module is unloaded.06-13-2013
20130152094ERROR CHECKING IN OUT-OF-ORDER TASK SCHEDULING - One embodiment of the present invention sets forth a technique for error-checking a compute task. The technique involves receiving a pointer to a compute task, storing the pointer in a scheduling queue, determining that the compute task should be executed, retrieving the pointer from the scheduling queue, determining via an error-check procedure that the compute task is eligible for execution, and executing the compute task.06-13-2013
20130152093Multi-Channel Time Slice Groups - A time slice group (TSG) is a grouping of different streams of work (referred to herein as “channels”) that share the same context information. The set of channels belonging to a TSG are processed in a pre-determined order. However, when a channel stalls while processing, the next channel with independent work can be switched to fully load the parallel processing unit. Importantly, because each channel in the TSG shares the same context information, a context switch operation is not needed when the processing of a particular channel in the TSG stops and the processing of a next channel in the TSG begins. Therefore, multiple independent streams of work are allowed to run concurrently within a single context increasing utilization of parallel processing units.06-13-2013
20130152092GENERIC VIRTUAL PERSONAL ASSISTANT PLATFORM - A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user.06-13-2013
20130152091Optimized Judge Assignment under Constraints - Described is a technology by which an assignment model is computed to distribute labeling tasks among judging entities (judges). The assignment model is optimized by obtaining accuracy-related data of the judges, e.g., by probing the judges with labeling tasks having a gold standard label and evaluating the judges' labels against the gold standard labels, and optimizing for accuracy. Optimization may be based upon one or more other constraints, such as per-judge cost and/or quota.06-13-2013
20130152090Resolving Resource Contentions - A computer-implemented method for managing access to a shared resource of a process may include identifying a plurality of process steps, each process step of the plurality of process steps, when executed, accessing the shared resource at a same time. The method may also include rearranging at least one of the process steps of the plurality of process steps to access the shared resource at a different time.06-13-2013
20100293548METHOD AND COMPUTER SYSTEM FOR ADMINISTRATION OF MEDICAL APPLICATIONS EXECUTING IN PARALLEL - A method and a computer system are disclosed for administration of medical applications running in parallel. At least one embodiment of the method includes creation of a number of application components as a result of the beginning of a number of user actions; provision of a module for parallel execution and/or for coordination of the previously created application components; provision of at least one communication interface for exchanging messages and/or data between an application component and a command which is of interest to the application component and which has been initiated by one of the user actions; and removal of the application component created by a user action after the user action has ended.11-18-2010
20090165003SYSTEM AND METHOD FOR ALLOCATING COMMUNICATIONS TO PROCESSORS AND RESCHEDULING PROCESSES IN A MULTIPROCESSOR SYSTEM - In a multiprocessor system, a system and method assigns communications to processors, processes, or subsets of types of communications to be processed by a specific processor without using a locking mechanism specific to the resources required for assignment. The system and method can reschedule processes to run on the processor on which the assignment is made.06-25-2009
20100318996METHODS AND SYSTEMS FOR SHARING COMMON JOB INFORMATION - Apparatus and methods are provided for utilizing a plurality of processing units. A method comprises selecting a pending job from a plurality of unassigned jobs based on a plurality of assigned jobs for the plurality of processing units and assigning the pending job to a first processing unit. Each assigned job is associated with a respective processing unit, wherein the pending job is associated with a first segment of information that corresponds to a second segment of information for a first assigned job. The method further comprises obtaining the second segment of information that corresponds to the first segment of information from the respective processing unit associated with the first assigned job, resulting in an obtained segment of information and performing, by the first processing unit, the pending job based at least in part on the obtained segment of information.12-16-2010
20100318995THREAD SAFE CANCELLABLE TASK GROUPS - A scheduler in a process of a computer system schedules tasks of a task group for concurrent execution by multiple execution contexts. The scheduler provides a mechanism that allows the task group to be cancelled by an arbitrary execution context or an asynchronous error state. When a task group is cancelled, the scheduler sets a cancel indicator in each execution context that is executing tasks corresponding to the cancelled task group and performs a cancellation process on each of the execution contexts where a cancel indicator is set. The scheduler also creates local aliases to allow task groups to be used without synchronization by execution contexts that are not directly bound to the task groups.12-16-2010
20080263553Dynamic Service Level Manager for Image Pools - An embodiment of the present invention relates to the field of computer technology; in particular, it relates to a method for provisioning images for virtual machines, wherein for a predefined application type a pool of at least one image of a virtual machine performing said application is loaded in the main memory of the computer.10-23-2008
20130185725SCHEDULING AND EXECUTION OF COMPUTE TASKS - One embodiment of the present invention sets forth a technique for selecting a first processor included in a plurality of processors to receive work related to a compute task. The technique involves analyzing state data of each processor in the plurality of processors to identify one or more processors that have already been assigned one compute task and are eligible to receive work related to the one compute task, receiving, from each of the one or more processors identified as eligible, an availability value that indicates the capacity of the processor to receive new work, selecting a first processor to receive work related to the one compute task based on the availability values received from the one or more processors, and issuing, to the first processor via a cooperative thread array (CTA), the work related to the one compute task.07-18-2013
20130185726Method for Synchronous Execution of Programs in a Redundant Automation System - A method for synchronous execution of programs in a redundant automation system comprising at least two subsystems, wherein at least one request for execution of one of the programs is taken as a basis for starting a scheduling pass, and during this scheduling pass a decision is taken as to whether this one program is executed on each of the subsystems. Suitable measures are proposed which allow all programs a fair and deterministic share of the program execution based on their priorities.07-18-2013
20130185727METHOD FOR MANAGING TASKS IN A MICROPROCESSOR OR IN A MICROPROCESSOR ASSEMBLY - This method includes steps for the parallel management of a first list and of a second list. The first list corresponds to a list of tasks to be carried out. The second list corresponds to a list of variables indicating the presence or absence of tasks to be carried out. The list of tasks is managed in a “FIFO” manner, that is to say that the first task inputted into the list is the first task to be executed. A task interruption is managed using a “Test And Set” function executed on the elements of the second list, the “Test And Set” function being a function which cannot be interrupted and including the following steps: reading the value of the element in question, storing the read value in a local memory, and assigning a predetermined value to the element which has just been read.07-18-2013
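The two parallel lists and the uninterruptible "Test And Set" function can be sketched as below; a Python lock emulates the hardware atomicity (read the old value, store it locally, write the predetermined value), and the FIFO deque carries the tasks themselves. The structure and names are illustrative assumptions, not the patent's layout.

```python
import threading
from collections import deque

_flag_guard = threading.Lock()

def test_and_set(flags, index, new_value):
    """Uninterruptible read-then-write on one flag; a lock stands in for the
    hardware atomicity the method relies on."""
    with _flag_guard:
        old = flags[index]           # read the value of the element in question
        flags[index] = new_value     # assign the predetermined value
        return old                   # the value stored in local memory

class TaskManager:
    """Illustrative pairing of a FIFO task list with a presence-flag list."""

    def __init__(self, n_producers):
        self.tasks = deque()                 # first task in is the first executed
        self.flags = [0] * n_producers       # 1 = tasks pending from that producer

    def submit(self, producer, task):
        """'task' is any callable to be executed later."""
        self.tasks.append(task)
        self.flags[producer] = 1

    def on_interrupt(self, producer):
        # the handler atomically checks and clears the flag before draining tasks
        if test_and_set(self.flags, producer, 0):
            while self.tasks:
                self.tasks.popleft()()       # execute in FIFO order
```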
20110289504IMAGE PROCESSING APPARATUS - When a user inputs an image addition instruction from a UI input unit, a job registration unit registers a job corresponding to the instruction in a job list for each type of processing. When undo is input from the UI input unit, a current position pointer prepared for each type of processing returns to the immediately preceding job. When redo is input, the current position pointer moves to the immediately succeeding job. When a processing execution instruction is input, out of jobs registered in the job list, the job indicated by the current position pointer and preceding jobs are executed in a predetermined order.11-24-2011
20110296425Management apparatus, management system, and recording medium for recording management program - A management apparatus includes a job definition information storage section for storing, at each period, a job definition file including execution characteristics indicating the start condition of each job in the execution schedule, the estimated execution time of the job, the state of the job, and a restriction at the time of setting the start schedule of the job; an exclusion information storage section for storing exclusion definition information indicating the jobs to be executed exclusively from each other; a reset job specifying section for acquiring a first job definition file of a schedule to be executed and a second job definition file of an executed schedule, then extracting, as a reset job, an abnormally terminated job from the second job definition file, and extracting a job using the reset job and the issue message of the reset job as start conditions, to store the extracted jobs in a related job set table; an execution possible time zone calculating section for searching, as an execution possible time zone, a time zone enabling execution of the job stored in the related job set table, from the first job definition file based on the second job definition file and the exclusion definition information; a start schedule adjusting section for setting the start schedule of the job stored in the related job set table based on the execution possible time zone; and a start schedule time setting section for setting the start time of the job set in the first job definition file, based on the start schedule of the job stored in the related job set table.12-01-2011
20110307895Managing Requests Based on Request Groups - A request management component receives requests to perform an operation. Each of the requests is assigned, based on one or more criteria, to one of multiple different request groups. Based at least in part on execution policies associated with the request groups, determinations are made as to when to submit the requests to one or more recipients. Each of the multiple requests is submitted to one of the recipients when it is determined that the request is to be submitted.12-15-2011
20110314474HETEROGENEOUS JOB DASHBOARD - This disclosure provides a system and method for summarizing jobs for a user group. In one embodiment, a job manager is operable to identify a state of a first job, the first job associated with a first job scheduler. A state of a second job is identified. The second job is associated with a second job scheduler. The first job scheduler and the second job scheduler are heterogeneous. A summary of information associated with at least the first job scheduler and the second job scheduler is determined using, at least in part, the first job state and the second job state. The summary is presented to a user through a dashboard.12-22-2011
20110314473SYSTEM AND METHOD FOR GROUPING MULTIPLE PROCESSORS - A distributed multi-processor out-of-order system includes multiple processors, an arbiter, a data dispatcher, a memory controller, a storage unit, multiple memory access requests issued by the multiple processors, and multiple data units that provide the results of the multiple memory access requests. Each of the multiple memory access requests includes a tag that identifies the priority of the processor that issued the memory access request, a processor identification number that identifies the processor that issued the request, and a processor access sequence number that identifies the order that the particular one of the processors issued the request. Each of the data units also includes a tag that specifies the processor identification number, the processor access sequence number, and a data sequence number that identifies the order of the data units satisfying the corresponding one of the memory requests. Using the tags, a distributed arbiter and data dispatcher can execute the requests out-of-order, handle simultaneous memory requests, order the memory requests based on, for example, the priority, return the data units to the processor that requested it, and reassemble the data units.12-22-2011
20130191835DISTRIBUTED PROCESSING DEVICE AND DISTRIBUTED PROCESSING SYSTEM - A distributed processing device includes an object storage unit that stores a continuation object including at least one of plural processes constituting a task and containing data of the task that is being processed, a processing unit that executes the continuation object retrieved from the object storage unit, and a storage processing unit that stores, in an execution state file, data stored in the object storage unit.07-25-2013
20130191833SYSTEM AND METHOD FOR ASSURING PERFORMANCE OF DATA SCRUBBING OPERATIONS - A method may include determining, based on at least one data scrubbing parameter associated with at least one storage resource, that the at least one storage resource is scheduled for a data scrubbing operation. The method may also include causing the at least one storage resource to transition from a low-power mode to a normal-power mode in order to perform a data scrubbing operation in response to a determination that the at least one storage resource is scheduled for a data scrubbing operation. The method may additionally include determining, based on the at least one data scrubbing parameter, that the data scrubbing operation is scheduled to cease. The method may further comprise causing the at least one storage resource to transition from the normal-power mode to the low-power mode in response to a determination that the data scrubbing operation is scheduled to cease.07-25-2013
20130191832MANAGEMENT OF THREADS WITHIN A COMPUTING ENVIRONMENT - Threads of a computing environment are managed to improve system performance. Threads are migrated between processors to take advantage of single thread processing mode, when possible. As an example, inactive threads are migrated from one or more processors, potentially freeing-up one or more processors to execute an active thread. Active threads are migrated from one processor to another to transform multiple threading mode processors to single thread mode processors.07-25-2013
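One way to read 20130191832 is as a rebalancing pass: inactive threads are packed onto fewer processors so that a processor left with exactly one active thread can drop into single-thread mode. The sketch below only illustrates that idea; the Thread and Processor structures and the single "parking" processor are assumptions, not the patent's mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    tid: int
    active: bool

@dataclass
class Processor:
    pid: int
    threads: list = field(default_factory=list)

def consolidate_for_single_thread_mode(processors):
    """Migrate inactive threads onto one 'parking' processor so that a
    processor left with a single active thread can use single-thread mode."""
    if len(processors) < 2:
        return
    parking = processors[-1]                     # assumed destination for inactive threads
    for cpu in processors[:-1]:
        for t in [t for t in cpu.threads if not t.active]:
            cpu.threads.remove(t)                # migrate the inactive thread away
            parking.threads.append(t)
    for cpu in processors[:-1]:
        if len(cpu.threads) == 1 and cpu.threads[0].active:
            print(f"CPU {cpu.pid} can enter single-thread mode for thread {cpu.threads[0].tid}")

cpus = [Processor(0, [Thread(1, True), Thread(2, False)]), Processor(1, [])]
consolidate_for_single_thread_mode(cpus)         # CPU 0 now holds only the active thread
```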
20120291036SAFETY CONTROLLER AND SAFETY CONTROL METHOD - Upon occurrence of an abnormality, a safety control can be executed more rapidly. An OS partially includes a partition scheduler that selects and decides a time partition to be subsequently scheduled according to a scheduling pattern including TP11-15-2012
20120017216DYNAMIC MACHINE-TO-MACHINE COMMUNICATIONS AND SCHEDULING - A method may include obtaining traffic loading and resource utilization information associated with a network for the network time domain; obtaining machine-to-machine resource requirements for machine-to-machine tasks using the network; receiving a target resource utilization value indicative of a network resource limit for the network time domain; calculating a probability for assigning each machine-to-machine task to the network time domain, wherein the probability is based on a difference between the target resource utilization value and the traffic loading and resource utilization associated with the network; calculating a probability density function based on an independent and identically distributed random variable; generating a schedule of execution of the machine-to-machine tasks within the network time domain based on the probabilities associated with the machine-to-machine tasks and the probability density function; and providing the schedule of execution of the machine-to-machine tasks.01-19-2012
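The abstract of 20120017216 assigns each machine-to-machine task to a network time domain with a probability derived from the gap between a target resource utilization and the observed loading. The sketch below uses a simple linear headroom model for that probability, which is my assumption and not the patent's formula; the probability-density-function step is omitted.

```python
import random

def assignment_probability(target_util, current_util, task_demand):
    """Probability of admitting a task into this time domain, proportional
    to the remaining headroom (linear model assumed for illustration)."""
    headroom = max(0.0, target_util - current_util)
    if task_demand <= 0:
        return 1.0
    return min(1.0, headroom / task_demand)

def schedule_m2m_tasks(tasks, target_util, current_util, rng=random.random):
    """tasks: list of (name, demand). Returns the tasks admitted into the
    time domain; utilization is updated as tasks are admitted."""
    admitted = []
    for name, demand in tasks:
        p = assignment_probability(target_util, current_util, demand)
        if rng() < p:
            admitted.append(name)
            current_util += demand
    return admitted

# Example: 70% target utilization, 40% already consumed by existing traffic.
print(schedule_m2m_tasks([("meter-read", 0.1), ("firmware-sync", 0.3)], 0.7, 0.4))
```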
20120030681HIGH PERFORMANCE LOCKS - Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component.02-02-2012
20120030680System and Method of General Service Management - A system and method is provided for servicing service management requests via a general service management framework that supports a plurality of platforms (for example, Windows®, UNIX®, Linux, Solaris™, and/or other platforms), and that manages local and/or remote machine services at system and/or application level.02-02-2012
20130198751INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating.08-01-2013
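20130198751 (and the identically worded 20130198752 below) smooths the number of destaging tasks toward a desired target instead of jumping to it, which keeps long read I/Os from being starved. A minimal sketch of one possible smoothing policy, assuming a fixed per-interval step size of my own choosing:

```python
def smooth_destage_tasks(current_tasks, desired_tasks, max_step=2):
    """Move the number of active destaging tasks toward the desired number,
    changing by at most max_step per recomputation interval."""
    delta = desired_tasks - current_tasks
    if delta > 0:
        return current_tasks + min(delta, max_step)
    return current_tasks - min(-delta, max_step)

# Each interval the destage controller ramps gradually: 4 -> 6 -> 8 -> 10.
n = 4
for _ in range(3):
    n = smooth_destage_tasks(n, desired_tasks=10)
    print(n)
```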
20130198750WIZARD-BASED SYSTEM FOR BUSINESS PROCESS SPECIFICATION - Methods and systems assist non-programmer users in specifying business processes. Users submit high-level descriptions of simple, incomplete, or incorrect business processes in softcopy form illustrating the orchestration of services (the control flow), and are prompted with suggestions to specify the services' data flow. The methods and systems herein not only assist in specifying the data flowing between services but also detect missing edges and services, for which they likewise provide data flow suggestions. The suggestions are computed and ranked using heuristics and displayed to the user through a wizard.08-01-2013
20130198752INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating.08-01-2013
20080256543Replicated State Machine - A replicated state machine includes multiple state machine replicas. In response to a request from a client, the state machine replicas can execute a service for the request in parallel. Each of the state machine replicas is provided with a request manager instance. The request manager instance includes a distributed consensus means and a selection means. The distributed consensus means commits a stimulus sequence of requests to be processed by each of the state machine replicas. The selection means selects requests to be committed to the stimulus sequence. The selection is based on an estimated service time of the request from the client. The estimated service time of the request from the client is based on a history of service times from the client provided by a feedback from the state machine replicas. As such, requests from multiple clients are serviced fairly.10-16-2008
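20080256543 selects which client requests to commit to the stimulus sequence using an estimated service time built from each client's history, so that no single client monopolizes the replicas. The sketch below is one plausible fairness heuristic under that description; the exponential moving average and the pick-the-cheapest-client rule are my assumptions:

```python
from collections import defaultdict

class FairSelector:
    """Pick the next request from the client with the lowest estimated
    service time, updating estimates from replica feedback."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = defaultdict(lambda: 1.0)   # default estimate per client

    def record_feedback(self, client, observed_time):
        est = self.estimate[client]
        self.estimate[client] = (1 - self.alpha) * est + self.alpha * observed_time

    def select(self, pending):
        """pending: dict client -> list of requests; returns (client, request)."""
        client = min((c for c in pending if pending[c]),
                     key=lambda c: self.estimate[c])
        return client, pending[client].pop(0)

sel = FairSelector()
sel.record_feedback("client-a", 0.2)
sel.record_feedback("client-b", 2.0)
print(sel.select({"client-a": ["req1"], "client-b": ["req2"]})[0])   # client-a
```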
20120066685SCHEDULING REALTIME INFORMATION STORAGE SYSTEM ACCESS REQUESTS - Access requests (03-15-2012
20120066684CONTROL SERVER, VIRTUAL SERVER DISTRIBUTION METHOD - When plural virtual servers are distributed to plural physical servers, efficient distribution is performed in terms of the processing capacity of the physical servers and their power consumption. First, a second (future) load of each virtual server is predicted based on a first load observed for each of the plural virtual servers over a prescribed time period up to the present. Next, a schedule is determined for distributing the plural virtual servers to the plural physical servers based on the second load of each virtual server, such that the total of the second loads of the one or more virtual servers assigned to a physical server falls within a prescribed proportion of that physical server's processing capacity. Finally, the distribution (execution of redistribution) is instructed in accordance with the schedule.03-15-2012
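The distribution step in 20120066684 amounts to packing predicted virtual-server loads onto physical servers so that each host stays within a prescribed fraction of its capacity. A first-fit-decreasing sketch, assuming the predicted loads are already available; the packing heuristic is my choice and the abstract does not name one:

```python
def distribute_virtual_servers(predicted_loads, host_capacities, max_fraction=0.8):
    """predicted_loads: dict vm -> predicted load; host_capacities: dict host -> capacity.
    Returns dict host -> list of vms, packed first-fit-decreasing within max_fraction."""
    placement = {host: [] for host in host_capacities}
    used = {host: 0.0 for host in host_capacities}
    for vm, load in sorted(predicted_loads.items(), key=lambda kv: kv[1], reverse=True):
        for host, cap in host_capacities.items():
            if used[host] + load <= max_fraction * cap:
                placement[host].append(vm)
                used[host] += load
                break
        else:
            raise RuntimeError(f"no host can accommodate {vm}")
    return placement

print(distribute_virtual_servers({"vm1": 30, "vm2": 20, "vm3": 45},
                                 {"hostA": 100, "hostB": 60}))
```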
20120066683BALANCED THREAD CREATION AND TASK ALLOCATION - Methods for balancing thread creation and task scheduling are provided for predictable tasks. A list of tasks is sorted according to a predicted completion time for each task. Then tasks are assigned to threads in order of total predicted completion time, and the threads are scheduled to execute the tasks assigned to the threads on a processor.03-15-2012
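The allocation described in 20120066683 is essentially a longest-processing-time-first greedy: sort tasks by predicted completion time and always hand the next task to the thread with the smallest total predicted work. A small sketch using a heap; the function name and tuple layout are illustrative:

```python
import heapq

def balance_tasks(predicted_times, num_threads):
    """predicted_times: list of (task, predicted_seconds).
    Returns one task list per thread, balanced by total predicted time."""
    heap = [(0.0, i, []) for i in range(num_threads)]   # (total_time, thread_id, tasks)
    heapq.heapify(heap)
    for task, t in sorted(predicted_times, key=lambda kv: kv[1], reverse=True):
        total, tid, tasks = heapq.heappop(heap)          # thread with the least work so far
        tasks.append(task)
        heapq.heappush(heap, (total + t, tid, tasks))
    return [tasks for _, _, tasks in sorted(heap, key=lambda entry: entry[1])]

print(balance_tasks([("a", 7), ("b", 5), ("c", 4), ("d", 3)], num_threads=2))
```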
20130205298APPARATUS AND METHOD FOR MEMORY OVERLAY - A memory overlay apparatus includes an internal memory that includes a dirty bit indicating a changed memory area, a memory management unit that controls an external memory to store only changed data so that only data actually being used by a task during overlay is stored and restored, and a direct memory access (DMA) management unit that confirms the dirty bit when the task is changed and that moves a data area of the task between the internal memory and the external memory.08-08-2013
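20130205298 saves transfer bandwidth on a task switch by moving only the memory areas whose dirty bits are set. A toy model of that save path, with page-level granularity assumed purely for illustration:

```python
class OverlayRegion:
    """Internal-memory region tracked at page granularity with dirty bits."""
    def __init__(self, pages):
        self.pages = list(pages)               # internal memory contents
        self.dirty = [False] * len(pages)

    def write(self, index, value):
        self.pages[index] = value
        self.dirty[index] = True               # mark the changed area

    def save_to_external(self, external):
        """Copy out only the pages whose dirty bit is set, then clear the bits."""
        for i, is_dirty in enumerate(self.dirty):
            if is_dirty:
                external[i] = self.pages[i]
                self.dirty[i] = False

region = OverlayRegion(pages=[0, 0, 0, 0])
external_backup = {}
region.write(2, 42)
region.save_to_external(external_backup)
print(external_backup)   # {2: 42}: only the changed page was moved
```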
20120079489Method, Computer Readable Medium And System For Dynamic, Designer-Controllable And Role-Sensitive Multi-Level Properties For Taskflow Tasks Using Configuration With Persistence - The method includes determining whether or not a processing level of the task flow is an intermediate processing level; generating a new property upon determining that the processing level is an intermediate processing level; and associating a relatively lower level property with the new property and publishing the new property to a relatively higher processing level in the task flow.03-29-2012
20120304182CONTINUOUS OPTIMIZATION OF ARCHIVE MANAGEMENT SCHEDULING BY USE OF INTEGRATED CONTENT-RESOURCE ANALYTIC MODEL - A system and associated method for continuously optimizing data archive management scheduling. A job scheduler receives, from an archive management system, inputs of task information, replica placement data, infrastructure topology data, and resource performance data. The job scheduler models a flow network that represents data content, software programs, physical devices, and communication capacity of the archive management system in various levels of vertices according to the received inputs. An optimal path in the modeled flow network is computed as an initial schedule, and the archive management system performs tasks according to the initial schedule. The operations of scheduled tasks are monitored and the job scheduler produces a new schedule based on feedbacks of the monitored operations and predefined heuristics.11-29-2012
20120304181SCHEDULING COMPUTER JOBS FOR EXECUTION - A method, system, and apparatus to divide a computing job into micro-jobs and allocate the execution of the micro-jobs to times when needed resources comply with one or more idleness criteria is provided. The micro-jobs are executed on an ongoing basis, but only when the resources needed by the micro-jobs are not needed by other jobs. A software program utilizing this methodology may be run at all times while the computer is powered up without impacting the performance of other software programs running on the same computer system.11-29-2012
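20120304181 runs micro-jobs only while the resources they need satisfy an idleness criterion, so the divided job never competes with foreground work. The sketch below gates each micro-job on CPU idleness alone, using psutil; the 20% threshold and the single-resource criterion are my assumptions:

```python
import time
import psutil

def run_micro_jobs(micro_jobs, idle_threshold=20.0, poll_seconds=1.0):
    """Execute each micro-job only when system CPU usage is below the
    idleness threshold; otherwise wait and re-check."""
    for job in micro_jobs:
        while psutil.cpu_percent(interval=poll_seconds) >= idle_threshold:
            time.sleep(poll_seconds)            # resource is busy; defer the micro-job
        job()                                    # resource idle: run one small unit of work

# Example: a larger job divided into tiny callable units.
run_micro_jobs([lambda i=i: print(f"micro-job {i} done") for i in range(3)])
```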
20120304179WORKLOAD-TO-CLOUD MIGRATION ANALYSIS BASED ON CLOUD ASPECTS - Methods and systems for evaluating compatibility of a cloud of computers to perform one or more workload tasks. One or more computing solution aspects are determined that corresponding to one or more sets of workload factors, where the workload factors characterize one or more workloads, to characterize one or more computing solutions. The workload factors are compared to the computing solution aspects in a rule-based system to exclude computing solutions that cannot satisfy the workload factors. A computing solution is selected that has aspects that accommodate all of the workload factors to find a solution that accommodates the one or more individual workloads.11-29-2012
20120084783AUTOMATED OPERATION LIST GENERATION DEVICE, METHOD AND PROGRAM - Selection of operations in a desired order and, as necessary, input of processing parameters by the user are received. For each operation corresponding to the received input, operation information is obtained that classifies the operation in advance as either a non-routine operation, which requires input of a processing parameter during execution of an automated operation list, or a routine operation other than the non-routine operation. An automated operation list is then generated based on the obtained operation information: if the operation corresponding to the input is a routine operation, it is registered in the automated operation list together with, as necessary, the processing parameter the operation requires; if the operation corresponding to the input is a non-routine operation, it is registered in the automated operation list as is.04-05-2012
20120096467MICROPROCESSOR OPERATION MONITORING SYSTEM - A microprocessor operation monitoring system in which, for each of the tasks constituting the program, the task number of the task that is to be started next is associated beforehand, and an abnormality of microprocessor operation is detected by comparing the announced task with the task actually being started and determining whether or not they match.04-19-2012
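The monitoring scheme in 20120096467 is straightforward to express: each task announces the number of the task that should start next, and the monitor flags an abnormality when the task actually started differs from the announced one. A minimal sketch, with hypothetical method names:

```python
class TaskSequenceMonitor:
    """Detects abnormal control flow by comparing the announced next task
    against the task that actually starts."""
    def __init__(self):
        self.expected_next = None

    def announce_next(self, task_number):
        self.expected_next = task_number

    def task_started(self, task_number):
        if self.expected_next is not None and task_number != self.expected_next:
            raise RuntimeError(
                f"abnormal operation: expected task {self.expected_next}, got {task_number}")

monitor = TaskSequenceMonitor()
monitor.announce_next(2)
monitor.task_started(2)     # OK: matches the announced task
monitor.announce_next(3)
monitor.task_started(5)     # raises: sequence violated
```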
20120096466METHOD, SYSTEM AND PROGRAM FOR DEADLINE CONSTRAINED TASK ADMISSION CONTROL AND SCHEDULING USING GENETIC APPROACH - Disclosed is an admission control and scheduling method of deadline constrained tasks. The method comprises: buffering new arriving tasks into a waiting queue; pre-scheduling a new task and a previously admitted task; producing multiple pre-schedules; using the most feasible pre-schedule as an executive schedule; and dispatching the tasks in the executive schedule.04-19-2012
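20120096466 admits a new task only if some pre-schedule of the new task together with the previously admitted ones remains feasible, and keeps the most feasible pre-schedule as the executive schedule. A full genetic search is beyond a short sketch, so the stand-in below scores a handful of candidate orderings (earliest-deadline-first plus random shuffles) and admits the task when any candidate meets every deadline:

```python
import random

def missed_deadlines(order):
    """order: list of (name, duration, deadline). Count deadline misses
    when tasks run back-to-back in the given order."""
    t, missed = 0.0, 0
    for _, duration, deadline in order:
        t += duration
        if t > deadline:
            missed += 1
    return missed

def try_admit(admitted, new_task, candidates=20):
    """Return (admitted?, best_schedule). Candidates are EDF plus random
    shuffles, standing in for the genetic search of the abstract."""
    pool = admitted + [new_task]
    edf = sorted(pool, key=lambda task: task[2])          # earliest deadline first
    trials = [edf] + [random.sample(pool, len(pool)) for _ in range(candidates)]
    best = min(trials, key=missed_deadlines)
    return missed_deadlines(best) == 0, best

ok, schedule = try_admit([("a", 2, 4), ("b", 3, 9)], ("c", 1, 6))
print(ok, [name for name, _, _ in schedule])
```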
20130212590LOCK RESOLUTION FOR DISTRIBUTED DURABLE INSTANCES - A command log selectively logs commands that have the potential to create conflicts based on instance locks. Lock times can be used to distinguish cases where the instance is locked by the application host at a previous logical time from cases where the instance is concurrently locked by the application host through a different name. A logical command clock is also maintained for commands issued by the application host to a state persistence system, with introspection to determine which issued commands may potentially take a lock. The command processor can resolve conflicts by pausing command execution until the effects of potentially conflicting locking commands become visible and examining the lock time to distinguish among copies of a persisted state storage location.08-15-2013
20130212588PERSISTENT DATA STORAGE TECHNIQUES - A database is maintained that stores data persistently. Tasks are accepted from task sources. At least some of the tasks have competing requirements for use of regions of the database. Each of the regions includes data that is all either locked or not locked for writing at a given time. Each of the regions is associated with an available processor. For each of the tasks, jobs are defined each of which requires write access to regions that are to be accessed by no more than one of the processors. Jobs are distributed for concurrent execution by the associated processors.08-15-2013
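In 20130212589, each task is decomposed into jobs such that a job only writes regions owned by a single processor, which makes concurrent execution safe without cross-processor write conflicts. A sketch of that partitioning step, assuming a static region-to-processor ownership map; all names are illustrative:

```python
from collections import defaultdict

def split_task_into_jobs(task_regions, region_owner):
    """task_regions: regions the task must write; region_owner: region -> processor.
    Returns processor -> list of regions, i.e. one job per owning processor."""
    jobs = defaultdict(list)
    for region in task_regions:
        jobs[region_owner[region]].append(region)
    return dict(jobs)

region_owner = {"accounts": 0, "orders": 1, "inventory": 1}
print(split_task_into_jobs(["accounts", "orders", "inventory"], region_owner))
# {0: ['accounts'], 1: ['orders', 'inventory']} - each job stays on one processor
```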
20130212589Method and System for Controlling a Scheduling Order Per Category in a Music Scheduling System - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc.) with respect to particular slots within the scheduling order. The scheduling order can be configured to include a forward order per category or a reverse order per category with respect to the playback of the multimedia content in order to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations. A reverse scheduling technique provides an ideal rotation of songs when a pre-programmed show interferes with a normal rotation. Any rotational compromises can be buried in off-peak audience listening hours of the programming day using the disclosed reverse scheduling technique.08-15-2013
20130212584METHOD FOR DISTRIBUTED CACHING AND SCHEDULING FOR SHARED NOTHING COMPUTER FRAMEWORKS - In a distributed caching and scheduling method for a shared nothing computing framework, the framework includes an aggregator node and multiple computing nodes with local processor, storage unit and memory. The method includes separating a dataset into multiple data segments; distributing the data segments across the local storage units; and for each computing node, copying the data segment from the storage unit to the memory; processing the data segment to compute a partial result; and sending the partial result to the aggregator node. The method includes determining the data segment stored in local memory of computing nodes; and coordinating additional computing jobs based on the determination of the data segment stored in local memory. Coordinating can include scheduling new computing jobs using the data segment already stored in local memory, or to maximize the use of the data segments already stored in local memories.08-15-2013
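The coordination step of 20130212584 schedules new computing jobs onto nodes that already hold the needed data segment in local memory, falling back to another node otherwise. A small sketch of that placement decision; the data structures and the least-loaded fallback are assumptions for illustration:

```python
def place_job(segment_id, nodes, cached_segments):
    """nodes: list of node names; cached_segments: node -> set of segment ids in memory.
    Prefer a node that already caches the segment to avoid re-reading from storage."""
    for node in nodes:
        if segment_id in cached_segments.get(node, set()):
            return node                       # reuse the in-memory copy
    return min(nodes, key=lambda n: len(cached_segments.get(n, set())))  # fallback: least loaded

cached = {"node1": {0, 3}, "node2": {1}, "node3": set()}
print(place_job(3, ["node1", "node2", "node3"], cached))   # node1: segment already cached
print(place_job(7, ["node1", "node2", "node3"], cached))   # node3: no cache hit, least loaded
```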
20130212585DATA PROCESSING SYSTEM OPERABLE IN SINGLE AND MULTI-THREAD MODES AND HAVING MULTIPLE CACHES AND METHOD OF OPERATION - In some embodiments, a data processing system includes a processing unit, a first load/store unit LSU and a second LSU configured to operate independently of the first LSU in single and multi-thread modes. A first store buffer is coupled to the first and second LSUs, and a second store buffer is coupled to the first and second LSUs. The first store buffer is used to execute a first thread in multi-thread mode. The second store buffer is used to execute a second thread in multi-thread mode. The first and second store buffers are used when executing a single thread in single thread mode.08-15-2013
20130212587SHARED RESOURCES IN A DOCKED MOBILE ENVIRONMENT - Sharing resources in a docked mobile environment comprises maintaining a set of execution tasks within a first data handling system having a system dock interface to physically couple to a second data handling system and assigning a task to be executed by the second data handling system while the two systems are physically coupled. The described method further comprises detecting a physical decoupling of the first and second data handling systems and displaying an execution result of the task via a first display element of the first data handling system in response to such a detection.08-15-2013