Multitasking, time sharing

Subclass of:

718 - Electrical computers and digital processing systems: virtual machine task or process management or task management/control

718100000 - TASK MANAGEMENT OR CONTROL

718102000 - Process scheduling

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
718108000 - Context switching - 73
Entries
Document | Title | Date
20110202931 - METHOD, COMPUTER PROGRAM AND DEVICE FOR SUPERVISING A SCHEDULER FOR MANAGING THE SHARING OF PROCESSING TIME IN A MULTI-TASK COMPUTER SYSTEM - The invention in particular has as an object supervising a scheduler for the management of processing time sharing in a multitask data-processing system comprising a computation unit having a standard execution mode and a preferred execution mode for executing a plurality of applications. The execution time for the said plurality of applications is divided into a plurality of periods and a minimal time for access per period to the said computation unit is determined for at least one application of the said plurality of applications. For at least one period, the said preferred execution mode is associated with the said at least one application and the said at least one application is executed according to at least the said minimal time for access to the said computation unit. For the said at least one period, the said standard execution mode is associated with the applications of the said plurality of applications and at least any one of the applications of the said plurality of applications is executed. (08-18-2011)
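
The period structure described in this abstract can be sketched as a toy model: the preferred-mode application first receives its guaranteed minimal access time, and the rest of the period is shared by the standard-mode applications. The equal split of the remainder is an illustrative assumption, not part of the patent.

```python
def schedule_period(period, preferred, minimal_time, others):
    """Model one scheduling period: the preferred application first gets
    its guaranteed minimal access time; the remainder is shared equally
    (an assumed policy) among the standard-mode applications."""
    timeline = [(preferred, minimal_time)]          # preferred execution mode
    remaining = period - minimal_time
    share = remaining // len(others) if others else 0
    timeline += [(app, share) for app in others]    # standard execution mode
    return timeline

print(schedule_period(100, "A", 40, ["B", "C"]))
# [('A', 40), ('B', 30), ('C', 30)]
```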
20130081053 - Acquiring and transmitting tasks and subtasks to interface devices - A computationally implemented method includes receiving request data including a request to carry out a task of acquiring data, acquiring one or more subtasks related to the task of acquiring data, selecting two or more discrete interface devices based on at least one of a status of the two or more discrete interface devices and a characteristic of the two or more discrete interface devices, transmitting at least one of the one or more subtasks to at least two of the two or more discrete interface devices, and receiving result data corresponding to a result of at least one subtask of the one or more subtasks executed by at least one of the two or more discrete interface devices. In addition to the foregoing, other method aspects are described in the claims, drawings, and text. (03-28-2013)
20130081054 - Method for Enabling Sequential, Non-Blocking Processing of Statements in Concurrent Tasks in a Control Device - A method for enabling sequential, non-blocking processing of statements in concurrent tasks in a control device having an operating system capable of multi-tasking, in particular a programmable logic controller, is disclosed. At least one operating system call, which causes the operating system to interrupt the particular task according to an instruction output by the statement in favor of another task, is associated with at least one statement. (03-28-2013)
20090144747 - COMPUTATION OF ELEMENTWISE EXPRESSION IN PARALLEL - An exemplary embodiment provides methods, systems and mediums for executing arithmetic expressions that represent elementwise operations. An exemplary embodiment provides a computing environment in which elementwise expressions may be executed in parallel by multiple execution units. In an exemplary embodiment, multiple execution units may reside on a network. (06-04-2009)
20120210332 - ASYNCHRONOUS PROGRAMMING EXECUTION - One or more techniques and/or systems are disclosed for improving asynchronous programming execution at runtime. Asynchronous programming code can comprise more than one level of hierarchy, such as in an execution plan. Respective aggregation operations in a portion of the asynchronous programming code are unrolled, to create a single level iterative execution, by combining elements of the multi-level iterative execution of the asynchronous programming code. In this way, the aggregation operations are concatenated to local logic code for the aggregation operations. Thread context switching in the unrolled portion of asynchronous programming code is performed merely at an asynchronous operation, thereby mitigating unnecessary switches. Exceptions thrown during execution of the programming code can be propagated up to a top of a virtual callstack for the execution. (08-16-2012)
20100107175 - INFORMATION PROCESSING APPARATUS - In a cellular phone applicable to an information processing apparatus according to the present invention, a CPU of a main control unit executes monitor threads (04-29-2010)
20090083753 - DYNAMIC THREAD GENERATION AND MANAGEMENT FOR IMPROVED COMPUTER PROGRAM PERFORMANCE - The performance of an executing computer program is dynamically enhanced by creating one or more additional threads of execution and then intercepting function calls generated by the executing computer program and executing such function calls within one of the one or more additional threads. Each thread may be associated with a different processing resource, thereby allowing for concurrent execution of the multiple threads. This technique may be used, for example, to improve the performance of a single-threaded computer program, such as a single-threaded video game program, by allowing multi-threaded techniques to be used to execute the computer program even though the computer program was not designed to use such techniques. (03-26-2009)
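
The call-interception idea above can be sketched with Python threads standing in for the additional execution threads; the decorator shape, names, and thread-pool mechanism are assumptions for illustration, not the patented implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def make_interceptor(pool):
    """Return a decorator that redirects intercepted calls to a worker
    thread, so the original single-threaded caller does not block."""
    def intercept(fn):
        def wrapper(*args, **kwargs):
            # The intercepted call runs on one of the additional threads;
            # the caller receives a future instead of the direct result.
            return pool.submit(fn, *args, **kwargs)
        return wrapper
    return intercept

pool = ThreadPoolExecutor(max_workers=2)

@make_interceptor(pool)
def heavy(x):
    return x * x          # stands in for an expensive intercepted call

future = heavy(6)
print(future.result())    # 36
```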
20120167114 - PROCESSOR - Provided is a processor that can maintain a dependency relationship between a plurality of instructions and one read instruction. The processor comprises: a setting unit configured to set, when an instruction that exists at a location ensuring that writing into a memory area has been completed is executed, usage information indicating whether writing into the memory area has been completed such that the usage information indicates that writing into a memory area during execution of one thread has been completed; and a control unit configured to (i) perform execution of a read instruction to read data stored in the memory area when the usage information indicates that writing into the memory area during execution of the one thread has been completed, and (ii) suppress execution of the read instruction when the usage information indicates that writing into the memory area during execution of the one thread has not been completed. (06-28-2012)
20090044198 - Method and Apparatus for Call Stack Sampling in a Data Processing System - A computer implemented method, apparatus, and computer usable program code for sampling call stack information. An event is monitored during execution of a plurality of threads executed by a plurality of processors. In response to an occurrence of the event, a thread is identified in the plurality of threads to form an identified thread. A plurality of sampling threads is woken, wherein a sampling thread within the plurality of sampling threads is associated with each processor in the plurality of processors and wherein one sampling thread in the plurality of sampling threads obtains call stack information for the identified thread. (02-12-2009)
20090307707 - SYSTEM AND METHOD FOR DYNAMICALLY ADAPTIVE MUTUAL EXCLUSION IN MULTI-THREADED COMPUTING ENVIRONMENT - A system and associated method for mutually exclusively executing a critical section by a process in a computer system. The critical section accessing a shared resource is controlled by a lock. The method measures a detection time when a lock contention is detected, a wait time representing a duration of wait for the lock at each failed attempt to acquire the lock, and a delay representing a total lapse of time from the detection time till the lock is acquired. The delay is logged and used to calculate an average delay, which is compared with a suspension overhead time of the computer system on which the method is executed to determine whether to spin or to suspend the process while waiting for the lock to be released. (12-10-2009)
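
The spin-or-suspend decision in this abstract reduces to comparing a running average of logged lock-acquisition delays against the system's suspension overhead. A minimal sketch follows; the class name, the fixed overhead value, and the sample delays are illustrative assumptions.

```python
class AdaptiveLockPolicy:
    """Decide whether a waiter should spin or suspend, based on the
    average observed delay versus the cost of suspending/resuming."""

    def __init__(self, suspension_overhead):
        self.suspension_overhead = suspension_overhead
        self.delays = []  # logged lapses from contention detection to acquisition

    def record_delay(self, delay):
        self.delays.append(delay)

    def average_delay(self):
        return sum(self.delays) / len(self.delays) if self.delays else 0.0

    def should_spin(self):
        # Spin only while the expected wait is cheaper than a suspend/resume.
        return self.average_delay() < self.suspension_overhead

policy = AdaptiveLockPolicy(suspension_overhead=0.005)
for d in (0.001, 0.002, 0.003):   # short contention delays observed so far
    policy.record_delay(d)
print(policy.should_spin())        # True: average delay is below the overhead
```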
20090282418 - Method and system for integrated scheduling and replication in a grid computing system - A method for scheduling a plurality of computation jobs to a plurality of data processing units (DPUs) in a grid computing system (11-12-2009)
20090271800 - SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DERIVING INTELLIGENCE FROM ACTIVITY LOGS - Techniques for segregating one or more logs of at least one multitasking user to derive at least one behavioral pattern of the at least one multitasking user are provided. The techniques include obtaining at least one of at least one action log, configuration information, domain knowledge, at least one task history and open task repository information, correlating the at least one of at least one action log, configuration information, domain knowledge, at least one task history and open task repository information to determine a task associated with each of one or more actions and segregate the one or more logs based on the one or more actions, and using the one or more logs that have been segregated to derive at least one behavioral pattern of the at least one multitasking user. Techniques are also provided for deriving intelligence from at least one activity log of at least one multitasking user to provide information to the at least one user. (10-29-2009)
20100031270 - HEAP MANAGER FOR A MULTITASKING VIRTUAL MACHINE - A multitasking virtual machine is described. The multitasking virtual machine may comprise an execution engine to concurrently execute a plurality of tasks. The multitasking virtual machine may further comprise a heap organization coupled to the execution engine. The heap organization may comprise a system heap to store system data accessible by the plurality of tasks; and a plurality of task heaps. Each of the plurality of task heaps may be assigned to a respective one of the plurality of tasks to store task data accessible by the assigned task. The multitasking virtual machine may further comprise a heap manager to manage the heap organization. The heap manager may comprise a heap size controller to control heap size of the system heap. (02-04-2010)
20110283294 - DETERMINING MULTI-PROGRAMMING LEVEL USING DIMINISHING-INTERVAL SEARCH - A method of determining a multiprogramming level (MPL) for a first computer subsystem may be implemented on a second computer subsystem. The method may include selecting an initial MPL interval having endpoints that bound a local extremum of a computer-system operation variable that is a unimodal function of the MPL. For each interval having a length more than a threshold, operation-variable values for two intermediate MPLs in the interval may be determined. The interval may be diminished by the section of the interval between the one of the intermediate MPLs having an operation-variable value further from the extremum, and the interval endpoint adjacent to the one intermediate MPL. The operating MPL may be set equal to the other intermediate MPL when the interval has a length that is not more than the threshold. (11-17-2011)
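
The diminishing-interval search described here is essentially a ternary-style search over a unimodal operation variable. A minimal sketch follows, assuming the goal is to maximize throughput as a function of the MPL; the probe function, threshold, and final scan are illustrative choices, not the patent's exact procedure.

```python
def find_operating_mpl(throughput, lo, hi):
    """Shrink [lo, hi] around the maximum of a unimodal function by
    probing two intermediate MPLs and discarding the section adjacent
    to the worse-performing one."""
    while hi - lo > 2:                    # assumed threshold: interval length 2
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if throughput(m1) < throughput(m2):
            lo = m1 + 1                   # maximum cannot lie in [lo, m1]
        else:
            hi = m2                       # maximum cannot lie in (m2, hi]
    # Interval is now at or below the threshold; pick the best MPL in it.
    return max(range(lo, hi + 1), key=throughput)

print(find_operating_mpl(lambda m: -(m - 17) ** 2, 1, 64))  # 17
```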
20090210882 - SYSTEM AND METHODS FOR ASYNCHRONOUSLY UPDATING INTERDEPENDENT TASKS PROVIDED BY DISPARATE APPLICATIONS IN A MULTI-TASK ENVIRONMENT - A computer-based system for updating interdependent tasks in a multi-task environment is provided. The system includes one or more processors for processing processor-executable code and an input/output interface communicatively linked to at least one processor. The system further includes a brokering module configured to execute on the at least one processor. The brokering module can be configured to interconnect a plurality of event-responsive interdependent tasks in response to an event generated while one of the tasks is being processed. Different tasks can be provided by different applications. The brokering module is configured to initiate an asynchronous updating of the tasks, wherein the asynchronous updating comprises a background process of the multi-task environment performed for each task not being currently processed and wherein the updating is performed while the one task is being processed. The brokering module, moreover, is further configured to provide through the interface a status notification of the updating of each of the tasks. (08-20-2009)
20080235707 - Data processing apparatus and method for performing multi-cycle arbitration - A data processing apparatus and method are provided for arbitrating between multiple access requests seeking to access a plurality of resources sharing a common access path. At least one logic element issues access requests requesting access to the resources, and each access request identifies which of the resources is to be accessed. Arbitration circuitry performs a multi-cycle arbitration operation to arbitrate between multiple access requests to be passed over the common access path, the arbitration circuitry having a plurality of pipeline stages to allow a corresponding plurality of multi-cycle arbitration operations to be in progress at any one time. Filter circuitry is provided which has a plurality of filter states, the number of filter states being dependent on the number of pipeline stages of the arbitration circuitry, and each resource being associated with one of the filter states. For a new multi-cycle arbitration operation to be performed by the arbitration circuitry, the filter circuitry selects one of the filter states that has not been selected for any other multi-cycle arbitration operation already in progress within the pipeline stages of the arbitration circuitry. Then, it determines as candidate access requests for the new multi-cycle arbitration operation those access requests that are seeking to access a resource associated with the selected filter state. Such an approach allows efficient multi-cycle arbitration to take place even where the resources may have inter-access timing parameters associated therewith which prevent them from being able to receive access requests every clock cycle. (09-25-2008)
20090276788 - INFORMATION PROCESSING APPARATUS - In an information processing apparatus according to the present invention, a control unit notifies each application program of a key input event in a multi-window system. If the state of a first application program is inactive, the control unit determines whether or not the event notified to the first application program is a key input event caused by a key other than an active switching key. If it is determined that the event is a key input event caused by a key other than the active switching key, the control unit causes a clock circuit to time a predetermined time period, and performs control so as to omit part of processing by the first application program, or to provide a predetermined wait time in between the processing by the first application program, until the predetermined time period is timed out. (11-05-2009)
20080209437 - MULTITHREADED MULTICORE UNIPROCESSOR AND A HETEROGENEOUS MULTIPROCESSOR INCORPORATING THE SAME - A uniprocessor that can run multiple threads (programs) simultaneously is achieved by use of a plurality of low-frequency minicore processors, each minicore for receiving a respective thread from a high-frequency cache and processing the thread. A superscalar processor may be used in conjunction with the uniprocessor to process threads requiring high throughput. (08-28-2008)
20100115529 - Memory management apparatus and method - A memory management apparatus and a memory management method may divide an external memory area assigned to a task into a first area and a second area, and load data stored in the first area into an internal memory of a processor while the task is performed by the processor. (05-06-2010)
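
The split-and-preload scheme above can be modeled in a few lines; the list-based "memories" and the function name are purely illustrative stand-ins for the apparatus described.

```python
def prepare_task_memory(external, split_at):
    """Divide a task's external memory area into two areas and load the
    first area into a (simulated) internal memory of the processor."""
    first_area = external[:split_at]
    second_area = external[split_at:]
    internal_memory = list(first_area)   # loaded while the task executes
    return internal_memory, second_area

internal, remaining = prepare_task_memory([10, 20, 30, 40], split_at=2)
print(internal, remaining)   # [10, 20] [30, 40]
```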
20110173630 - Central Repository for Wake-and-Go Mechanism - A wake-and-go mechanism is provided with a central repository wake-and-go array for a multiple processor data processing system. The wake-and-go mechanism recognizes a programming idiom that indicates that a thread running on a processor within the multiple processor data processing system is waiting for an event. The wake-and-go mechanism updates a central repository wake-and-go array with a target address associated with the event. Each entry in the central repository wake-and-go array may include a thread identification (ID), a central processing unit (CPU) ID, the target address, the expected data, a comparison type, a lock bit, a priority, and a thread state pointer, which is the address at which the thread state information is stored. (07-14-2011)
20090288097 - METHOD AND SYSTEM FOR CONCURRENTLY EXECUTING AN APPLICATION - A method for executing an application that includes instantiating, by a first thread, a first executable object and a second executable object, creating a first processing unit and a second processing unit, instantiating an executable container object, spawning a second thread, associating the first executable object and the second executable object with the executable container object, processing the executable container object to generate a result, and storing the result. Processing the executable container object includes associating the first executable object with the first processing unit, and associating the second executable object with the second processing unit, wherein the first thread processes executable objects associated with the first processing unit, wherein the second thread processes executable objects associated with the second processing unit, and wherein the first thread and the second thread execute concurrently. (11-19-2009)
20090007135 - APPARATUS AND METHOD FOR SERVER NETWORK MANAGEMENT TO PROVIDE ZERO PLANNED RETROFIT DOWNTIME - Methods and systems are presented for updating software applications in a processor cluster, in which the cluster is divided into first and second processor groups and the first group is isolated from clients and from the second group with respect to network and cluster communications by application of IP filters. The first group of processors is updated or retrofitted with the new software and brought to a ready-to-run state while the second group is active to serve clients. The first group is then transitioned to an in-service state after isolating the then-active service providing application on the second group. Thereafter, the second group of processors is offlined, updated or retrofitted, and transitioned to an in-service state to complete the installation of the new application version across the cluster with reduced or zero downtime and without requiring backward software compatibility. (01-01-2009)
20100138841 - System and Method for Managing Contention in Transactional Memory Using Global Execution Data - Transactional Lock Elision (TLE) may allow threads in a multi-threaded system to concurrently execute critical sections as speculative transactions. Such speculative transactions may abort due to contention among threads. Systems and methods for managing contention among threads may increase overall performance by considering both local and global execution data in reducing, resolving, and/or mitigating such contention. Global data may include aggregated and/or derived data representing thread-local data of remote thread(s), including transactional abort history, abort causal history, resource consumption history, performance history, synchronization history, and/or transactional delay history. Local and/or global data may be used in determining the mode by which critical sections are executed, including TLE and mutual exclusion, and/or to inform concurrency throttling mechanisms. Local and/or global data may also be used in determining concurrency throttling parameters (e.g., delay intervals) used in delaying a thread when attempting to execute a transaction and/or when retrying a previously aborted transaction. (06-03-2010)
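
The mode decision sketched in this abstract — choosing between TLE and mutual exclusion from both local and global execution data — can be illustrated as a simple threshold rule. The function, parameter names, and limits are assumptions for illustration; the patent does not specify these values.

```python
def choose_execution_mode(local_aborts, global_abort_rate,
                          local_limit=3, global_limit=0.5):
    """Fall back from speculative lock elision (TLE) to mutual exclusion
    when either this thread's own abort count or the aggregated abort
    rate of remote threads signals heavy contention (assumed thresholds)."""
    if local_aborts >= local_limit or global_abort_rate >= global_limit:
        return "mutual exclusion"
    return "TLE"

print(choose_execution_mode(local_aborts=1, global_abort_rate=0.1))  # TLE
```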
20080250422 - EXECUTING MULTIPLE THREADS IN A PROCESSOR - Provided are a method, system, and program for executing multiple threads in a processor. Credits are set for a plurality of threads executed by the processor. The processor alternates among executing the threads having available credit. The processor decrements the credit for one of the threads in response to executing the thread and initiates an operation to reassign credits to the threads in response to depleting all the thread credits. (10-09-2008)
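
The credit scheme above can be sketched in a few lines: alternate among threads with remaining credit, decrement on execution, and reassign credits once all are depleted. The round-robin order and the fixed step count are illustrative assumptions.

```python
def run_with_credits(threads, initial_credit, steps):
    """Simulate credit-based alternation among threads for `steps`
    execution slots, reassigning credits when all are depleted."""
    credits = {t: initial_credit for t in threads}
    schedule = []
    i = 0                                   # round-robin cursor (assumed policy)
    for _ in range(steps):
        if all(c == 0 for c in credits.values()):
            credits = {t: initial_credit for t in threads}   # reassign credits
        while credits[threads[i % len(threads)]] == 0:
            i += 1                          # skip threads with no credit left
        t = threads[i % len(threads)]
        schedule.append(t)                  # "execute" the chosen thread
        credits[t] -= 1                     # decrement its credit
        i += 1
    return schedule

print(run_with_credits(["A", "B"], initial_credit=2, steps=6))
# ['A', 'B', 'A', 'B', 'A', 'B']
```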
20080271041 - PROGRAM PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - According to one embodiment, a program processing method includes converting parallel execution control description into graph data structure generating information, extracting a program module based on preceding information included in the graph data structure generating information when input data is given, generating a node indicating an execution unit of the program module for the extracted program module, adding the generated node to a graph data structure configured based on preceding and subsequent information defined in the graph data structure generating information, executing a program module corresponding to a node included in a graph data structure existing at that time, by setting values for the parameter, based on performance information of the node when all nodes indicating a program module defined in the preceding information have been processed, and obtaining and saving performance information of the node when a program module corresponding to the node has been executed. (10-30-2008)
20090165016 - Method for Parallelizing Execution of Single Thread Programs - A method and apparatus for speculatively executing a single threaded program within a multi-core processor which includes identifying an idle core within the multi-core processor, performing a look ahead operation on the single thread instructions to identify speculative instructions within the single thread instructions, and allocating the idle core to execute the speculative instructions. (06-25-2009)
20090144748 - METHODS AND APPARATUS FOR PARALLEL PIPELINING AND WIDTH PROCESSING - Computer apparatus for use with a database management system and database, the apparatus comprising a CPU and a memory, the apparatus configured to provide at least two task processes, each process being apportioned a section of the memory when in use. In response to the database management system or apparatus being instructed to carry out a first task, such as reading, and a second task, such as decryption, on a section of data in series, a first task process is configured to begin the first task on a first part of the section of data in the database and, after the first process on the first part of the section of the data is complete, a second task process is instructed to carry out the first task on a second part of the section of data which begins where the first part ends; when the first task is complete, the first task process is switched to carry out the second task on data on which the first task has already been carried out, or the second process is instructed to carry out the second task on the first part whilst the first process switches to carry out the first task on the second part of the data, or the second task process is instructed to carry out the first task on a second part of the section of data while the first task process is switched to pipeline the second task to a third task process. (06-04-2009)
20110145834 - CODE EXECUTION UTILIZING SINGLE OR MULTIPLE THREADS - A program is executed utilizing a main hardware thread. During execution, an instruction specifies to execute a portion utilizing a worker hardware thread. If a processor state indicator is set to multi-threaded, the specified portion is executed utilizing the worker hardware thread. However, if the processor state indicator is set to single-threaded, the specified portion is executed utilizing the main hardware thread as a subroutine. The main hardware thread may pass parameter data to the worker hardware thread by copying the parameter data register or memory location for the main hardware thread to an equivalent parameter data register or memory location for the worker hardware thread. Similarly, the worker hardware thread may pass return values to the main hardware thread by copying a return value register or memory location for the worker hardware thread to an equivalent return value register or memory location for the main hardware thread. (06-16-2011)
20100050184 - MULTITASKING PROCESSOR AND TASK SWITCHING METHOD THEREOF - A multitasking processor and a task switching method thereof are provided. The task switching method includes following steps. A first task is executed by the multitasking processor, wherein the first task contains a plurality of switching-point instructions. An interrupt event occurs. Accordingly, the multitasking processor temporarily stops executing the first task and starts to execute a second task. The multitasking processor executes a handling process of the interrupt event and sets a switching flag. After finishing the handling process of the interrupt event, the multitasking processor does not perform task switching but continues to execute the first task, and the multitasking processor only performs task switching to execute the second task when it reaches a switching-point instruction in the first task. (02-25-2010)
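
The deferred-switch behavior above can be modeled as a toy trace: the interrupt handler sets a switching flag, the first task keeps running, and the switch happens only at the next switching-point instruction. The instruction names and the single-switch simplification are illustrative assumptions.

```python
def run_until_switch(task, interrupt_at):
    """Trace execution of `task` (a list of instruction labels): an
    interrupt at index `interrupt_at` sets a switching flag, but the
    actual switch is deferred to the next SWITCH_POINT instruction."""
    switching_flag = False
    trace = []
    for idx, instr in enumerate(task):
        if idx == interrupt_at:
            trace.append("ISR")          # handle the interrupt event
            switching_flag = True        # remember that a switch is pending
        trace.append(instr)              # keep executing the first task
        if switching_flag and instr == "SWITCH_POINT":
            trace.append("switch to task2")   # switch only at a switching point
            switching_flag = False
    return trace

print(run_until_switch(["i0", "i1", "SWITCH_POINT", "i2"], interrupt_at=1))
# ['i0', 'ISR', 'i1', 'SWITCH_POINT', 'switch to task2', 'i2']
```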
20090007136 - Time management control method for computer system, and computer system - In a time management control method of a computer system for managing each individual time of a plurality of virtual systems, a service processor retains an overall system time and a difference time between the overall system time and a virtual system time for each virtual system, and a firmware in the virtual system acquires the overall system time and the difference time, calculates a difference time between the overall system time and the change time of the virtual system, adds the both difference times, and informs the service processor. Accordingly, the virtual system time can be changed without time management hardware in each virtual system. Further, since the service processor performs update processing only, it is also possible to prevent a time set error caused by delayed calculation processing etc. (01-01-2009)
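
The bookkeeping described here, one overall system time plus a per-virtual-system difference, can be sketched as follows; the class shape and integer time units are illustrative assumptions, not the patented hardware/firmware split.

```python
class ServiceProcessor:
    """Keep one overall system time plus a difference time for each
    virtual system; changing a virtual system's time only updates its
    stored difference, never the overall time."""

    def __init__(self, overall_time):
        self.overall_time = overall_time
        self.offsets = {}                 # virtual system -> difference time

    def virtual_time(self, vs):
        return self.overall_time + self.offsets.get(vs, 0)

    def set_virtual_time(self, vs, new_time):
        # The firmware computes the new difference and informs the SP.
        self.offsets[vs] = new_time - self.overall_time

    def tick(self, dt):
        self.overall_time += dt           # all virtual times advance together

sp = ServiceProcessor(overall_time=1000)
sp.set_virtual_time("vs1", 1500)          # vs1 runs 500 units ahead
sp.tick(10)
print(sp.virtual_time("vs1"), sp.virtual_time("vs2"))  # 1510 1010
```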
20100218196 - SYSTEM, METHODS AND APPARATUS FOR PROGRAM OPTIMIZATION FOR MULTI-THREADED PROCESSOR ARCHITECTURES - Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims. (08-26-2010)
20100218195 - Software filtering in a transactional memory system - A method and apparatus for utilizing hardware mechanisms of a transactional memory system is herein described. Various embodiments relate to software-based filtering of operations from read and write barriers and read isolation barriers during transactional execution. Other embodiments relate to software-implemented read barrier processing to accelerate strong atomicity. Other embodiments are also described and claimed. (08-26-2010)
20100211959 - ADAPTIVE CLUSTER TIMER MANAGER - Described herein are techniques for adaptively managing timers that are used in various layers of a node. In many cases, the number of timers that occur in the system is reduced by proactively and reactively adjusting values of the timers based on conditions affecting the system, thereby making such a system perform significantly better and more resiliently than otherwise. (08-19-2010)
20100122263 - METHOD AND DEVICE FOR MANAGING THE USE OF A PROCESSOR BY SEVERAL APPLICATIONS, CORRESPONDING COMPUTER PROGRAM AND STORAGE MEANS - A method of managing processor usage time includes: associating each application with a slice of the processor time and with a first or second class; and managing the processor time as a function of the processor time slices and classes. The processor time slice associated with an application of the first class is reserved for the application even if the application does not use it fully. An application of the second class has priority for using the processor during its associated time slice, wherein if part of the associated time slice is not used by the application, the unused part may be used by another application of the second class, the application being able to use more than its associated time slice by using an unused part of a time slice associated with another application of the second class or a part of a time slice associated with no application. (05-13-2010)
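
The two-class policy above can be illustrated with a single accounting pass: first-class slices are reserved even when underused, while unused second-class time is pooled and redistributed among second-class applications that want more. The data shapes and single-round redistribution are illustrative assumptions.

```python
def grant_time(apps, demands):
    """apps: {name: (time_slice, cls)} with cls 1 or 2;
    demands: {name: requested processor time}. Returns granted time."""
    granted = {}
    surplus = 0
    needy = []
    for name, (time_slice, cls) in apps.items():
        use = min(time_slice, demands.get(name, 0))
        granted[name] = use
        if cls == 1:
            continue                    # class-1 reserve is lost if unused
        surplus += time_slice - use     # pool unused class-2 time
        if demands.get(name, 0) > time_slice:
            needy.append(name)          # class-2 apps wanting more than a slice
    for name in needy:                  # redistribute the class-2 surplus
        extra = min(demands[name] - granted[name], surplus)
        granted[name] += extra
        surplus -= extra
    return granted

print(grant_time({"a": (30, 1), "b": (40, 2), "c": (30, 2)},
                 {"a": 10, "b": 60, "c": 10}))
# {'a': 10, 'b': 60, 'c': 10}
```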
20120246662 - Automatic Verification of Determinism for Parallel Programs - Automatic verification of determinism in structured parallel programs includes sequentially establishing whether code for each of a plurality of tasks of the structured parallel program is independent, outputting sequential proofs corresponding to the independence of the code for each of the plurality of tasks and determining whether all memory locations accessed by parallel tasks of the plurality of tasks are independent based on the sequential proofs. (09-27-2012)
20110119682 - METHODS AND APPARATUS FOR MEASURING PERFORMANCE OF A MULTI-THREAD PROCESSOR - Disclosed are methods and apparatus for measuring performance of a multi-thread processor. The method and apparatus determine loading of a multi-thread processor through execution of an idle task in individual threads of the multi-thread processor during predetermined time periods. The idle task is configured to loop and run when no other task is running on the threads. Loop executions of the idle task on each thread are counted over each of the predetermined time periods. From these counts, loading of each of the threads of the multi-thread processor may then be determined. The loading may be used to develop a processor profile that may then be displayed in real-time.  (05-19-2011)
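
The idle-task measurement above boils down to one ratio per thread: fewer idle-loop iterations in a window means more time spent on real work. A minimal sketch, assuming a calibrated maximum idle count from an unloaded run:

```python
def thread_loads(idle_counts, calibrated_max):
    """idle_counts: idle-task loop iterations per thread over one window;
    calibrated_max: iterations observed on a fully idle thread (assumed
    known from calibration). Returns the fractional load per thread."""
    return {t: 1.0 - (c / calibrated_max) for t, c in idle_counts.items()}

loads = thread_loads({"t0": 800, "t1": 200}, calibrated_max=1000)
print(loads)   # t0 is 20% loaded, t1 is 80% loaded
```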
20090037926 - METHODS AND SYSTEMS FOR TIME-SHARING PARALLEL APPLICATIONS WITH PERFORMANCE ISOLATION AND CONTROL THROUGH PERFORMANCE-TARGETED FEEDBACK-CONTROLLED REAL-TIME SCHEDULING - Certain embodiments of the present invention provide systems and methods for time-sharing parallel applications with performance isolation and control through feedback-controlled real-time scheduling. Certain embodiments provide a computing system for time-sharing parallel applications. The system includes a controller adapted to determine a scheduling constraint for each thread of execution for an application based at least in part on a target execution rate for the application. The system also includes a local scheduler executing on a node in the computing system. The local scheduler schedules execution of a thread of execution for the application based on the scheduling constraint received from the controller. The local scheduler provides feedback regarding a current execution rate for the application thread to the controller, and the controller modifies the scheduling constraint for the local scheduler based on the feedback. (02-05-2009)
20110029985 - METHOD AND APPARATUS FOR COORDINATING RESOURCE ACCESS - An approach is provided for coordinating resource access. A resource access coordinating application determines the conflict condition among a plurality of queries from a respective plurality of applications for access to an identical resource in an information space. The resource access coordinating application then orders the queries based on one or more characteristics (e.g., read, write, update, delete, read-only, read-update, write-update, write-add, etc.) of the queries irrespective of the applications. Thereafter, the resource access coordinating application selects one of the queries based on the order. (02-03-2011)
20110041137Methods And Apparatus For Concurrently Executing A Garbage Collection Process During Execution of A Primary Application Program - A wireless mobile communication device has an application program and a garbage collection program stored in memory. The garbage collection program is configured to identify a root set of referenced objects of the application program with use of a reference indicator array and to perform a mark and sweep process based on the root set of referenced objects. The reference indicator array has a plurality of reference indicators, where each reference indicator corresponding to a referenced object is set as referenced. The application program is configured to be executed during execution of a mark and sweep process of the garbage collection program, such that information received or provided via the user interface during the execution of the mark and sweep process is received or provided without suspension or delay. The application program has computer instructions which are based on an instruction set defined by a plurality of opcodes or native codes, including a single predefined opcode or a single predefined native code which is a “get object reference” instruction. Each “get object reference” instruction is associated with a target object and is defined to retrieve a reference from the target object and also set one of the reference indicators corresponding to the target object as referenced in the reference indicator array.02-17-2011
20110083136DISTRIBUTED PROCESSING SYSTEM - A distributed processing system for executing an application includes a processing element capable of performing parallel processing, a control unit, and a client that makes a request for execution of the application to the control unit. The processing element has, at least at the time of executing the application, one or more processing blocks that process respectively one or more tasks to be executed by the processing element, a processing block control section for calculating the number of parallel processes based on an index for controlling the number of parallel processes received from the control unit, a division section that divides data to be processed input to the processing blocks by the processing block control section in accordance with the number of parallel processes, and an integration section that integrates processed data output from the processing blocks by the processing block control section in accordance with the number of parallel processes.04-07-2011
20100037234DATA PROCESSING SYSTEM AND METHOD OF TASK SCHEDULING - A data processing system in a multi-tasking environment is provided. The data processing system comprises at least one processing unit (02-11-2010
20090089795Information processing apparatus, control method of information processing apparatus, and control program of information processing apparatus - According to an embodiment of the invention, a computer readable storage medium stores a software program causing a computer system to perform a scheduling process for executing a plurality of application programs in every processor cycle. The scheduling process includes: allocating, during a current processor cycle, processor times of a next processor cycle to each of the application programs to be executed in the next processor cycle; storing the allocated processor times of the next processor cycle; determining whether or not the application programs executed in the current processor cycle include an uncompletable application program; calculating processor idle time of the next processor cycle; and allocating an additional processor time of the next processor cycle to the uncompletable application program, the additional processor time being set not to exceed the calculated processor idle time of the next processor cycle.04-02-2009
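The key constraint in this abstract — extra time for an uncompletable application is capped by the next cycle's calculated idle time — reduces to a one-line rule. The function and parameter names below are invented for illustration:

```python
def allocate_additional(allocated, idle_time, extra_needed):
    """Grant extra next-cycle time to an uncompletable application,
    never exceeding the cycle's idle time (and never going negative)."""
    return allocated + min(extra_needed, max(idle_time, 0.0))
```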
20110252430Opportunistic Multitasking - Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. By way of example, an application that may normally be placed into the background state may instead be terminated if it controls a lock on a shared system resource.10-13-2011
20090328058PROTECTED MODE SCHEDULING OF OPERATIONS - The present invention extends to methods, systems, and computer program products for protected mode scheduling of operations. Protected mode (e.g., user mode) scheduling can facilitate the development of programming frameworks that better reflect the requirements of the workloads through the use of workload-specific execution abstractions. In addition, the ability to define scheduling policies tuned to the characteristics of the hardware resources available and the workload requirements has the potential of better system scaling characteristics. Further, protected mode scheduling decentralizes the scheduling responsibility by moving significant portions of scheduling functionality from supervisor mode (e.g., kernel mode) to an application.12-31-2009
20110161981USING PER TASK TIME SLICE INFORMATION TO IMPROVE DYNAMIC PERFORMANCE STATE SELECTION - Methods and apparatus for using per task time slice information to improve dynamic performance state selection are described. In one embodiment, a new performance state is selected for a process based on one or more previous execution time slice values of the process. Other embodiments are also described.06-30-2011
20080301699Apparatus and methods for workflow management and workflow visibility - A system for viewing and managing work flow. The system includes at least one processor and memory configured to track time requirements for each of a plurality of jobs, compile and display the time requirements relative to current time in a plurality of managerial-level views, and in each view, indicate status of the jobs relative to the time requirements.12-04-2008
20080301698SERVICE ENGAGEMENT MANAGEMENT USING A STANDARD FRAMEWORK - A solution for managing a service engagement is provided. A service delivery model for the service engagement is defined within an engagement framework. The engagement framework, and consequently the service delivery model, can include a hierarchy that comprises a service definition, a set of service elements for the service definition, and a set of element tasks for each service element. The set of element tasks can be selected from a set of base tasks, each of which defines a particular task along with its input(s), output(s), and related asset(s). As a result, service engagements can be managed in a consistent manner using a data structure that promotes reuse and is readily extensible.12-04-2008
20120311608METHOD AND APPARATUS FOR PROVIDING MULTI-TASKING INTERFACE - A method and an apparatus for providing a multi-tasking interface of a device such as a portable communication device are provided. The method for providing a multi-tasking interface of a terminal preferably includes: receiving a background switch input for switching a display of an application being executed in a foreground to a background; switching the display of the application to the background when the background switch input is received; displaying a background control interface; and switching the display of the application to the foreground when a preset switch input is received through the background control interface.12-06-2012
20120311607DETERMINISTIC PARALLELIZATION THROUGH ATOMIC TASK COMPUTATION - A method for deterministic locking in a parallel computing environment is provided. The method includes creating a data structure in memory of a computer for a shared resource. The data structure encapsulates a reference to an owner of a lock for the shared resource and a queue of threads able to seek exclusive access to the shared resource. The queue in turn includes different entries, each entry including an identifier for a corresponding one of the threads and a deterministic time computed for the corresponding one of the threads from a count of memory accesses occurring in the corresponding one of the threads. Consequently, a thread can be selected from the queue to receive ownership of the lock and exclusive access to the shared resource based upon a deterministic time for the selected thread as compared to other deterministic times for others of the threads in the queue, for example, a lowest deterministic time.12-06-2012
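A minimal sketch of the ordered-queue idea above (not the patented implementation): waiters carry a deterministic time derived from their memory-access counts, and lock ownership goes to the waiter with the lowest one. Python's `heapq` stands in for the patent's queue structure, and all names are invented:

```python
import heapq

class DeterministicLock:
    """Lock whose waiters are ordered by a deterministic time computed
    from each thread's count of memory accesses."""
    def __init__(self):
        self.owner = None
        self._queue = []          # entries: (deterministic_time, thread_id)

    def request(self, thread_id, memory_accesses):
        # The memory-access count serves as the deterministic time.
        heapq.heappush(self._queue, (memory_accesses, thread_id))

    def grant(self):
        # Hand ownership to the waiter with the lowest deterministic time.
        if self._queue:
            _, self.owner = heapq.heappop(self._queue)
        return self.owner
```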
20120311606System and Method for Implementing Hierarchical Queue-Based Locks Using Flat Combining - The system and methods described herein may be used to implement a scalable, hierarchal, queue-based lock using flat combining. A thread executing on a processor core in a cluster of cores that share a memory may post a request to acquire a shared lock in a node of a publication list for the cluster using a non-atomic operation. A combiner thread may build an ordered (logical) local request queue that includes its own node and nodes of other threads (in the cluster) that include lock requests. The combiner thread may splice the local request queue into a (logical) global request queue for the shared lock as a sub-queue. A thread whose request has been posted in a node that has been combined into a local sub-queue and spliced into the global request queue may spin on a lock ownership indicator in its node until it is granted the shared lock.12-06-2012
20120311605PROCESSOR CORE POWER MANAGEMENT TAKING INTO ACCOUNT THREAD LOCK CONTENTION - A method maintains, for each processing element in a processor, a count of threads waiting in a data structure for hand-off locks in order to execute on the processing element. The method maintains the processing element in a first power state if the count of threads waiting for hand-off locks is greater than zero. The method puts the processing element in a second power state if the count of threads waiting for hand-off locks is equal to zero and no thread is ready to be processed by the processing element. The method returns the processing element to the first power state if the count of threads becomes greater than zero, or if a thread becomes ready to be processed by the processing element.12-06-2012
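The two-state policy in this abstract is essentially a predicate on the hand-off-lock wait count; a toy version with invented state names follows (the patent's actual power states are not specified here):

```python
def next_power_state(waiting_for_locks, runnable_ready):
    """Pick a core power state: stay in the first (active) state while
    threads wait for hand-off locks or work is ready; otherwise drop to
    the second (low-power) state."""
    if waiting_for_locks > 0 or runnable_ready:
        return "active"
    return "sleep"
```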
20120311604DETERMINISTIC PARALLELIZATION THROUGH ATOMIC TASK COMPUTATION - A method for deterministic locking in a parallel computing environment is provided. The method includes creating a data structure in memory of a computer for a shared resource. The data structure encapsulates a reference to an owner of a lock for the shared resource and a queue of threads able to seek exclusive access to the shared resource. The queue in turn includes different entries, each entry including an identifier for a corresponding one of the threads and a deterministic time computed for the corresponding one of the threads from a count of memory accesses occurring in the corresponding one of the threads. Consequently, a thread can be selected from the queue to receive ownership of the lock and exclusive access to the shared resource based upon a deterministic time for the selected thread as compared to other deterministic times for others of the threads in the queue, for example, a lowest deterministic time.12-06-2012
20110138398LOCK RESOLUTION FOR DISTRIBUTED DURABLE INSTANCES - The present invention extends to methods, systems, and computer program products for resolving lock conflicts. For a state persistence system, embodiments of the invention can employ a logical lock clock for each persisted state storage location. Lock times can be incorporated into bookkeeping performed by a command processor to distinguish cases where the instance is locked by the application host at a previous logical time from cases where the instance is concurrently locked by the application host through a different name. A logical command clock is also maintained for commands issued by the application host to a state persistence system, with introspection to determine which issued commands may potentially take a lock. The command processor can resolve conflicts by pausing command execution until the effects of potentially conflicting locking commands become visible and examining the lock time to distinguish among copies of a persisted state storage location.06-09-2011
20100138842Multithreading And Concurrency Control For A Rule-Based Transaction Engine - The subject matter disclosed herein provides methods and apparatus, including computer program products for rules-based processing. In one aspect there is provided a method. The method may include, for example, evaluating rules to determine whether to enable or disable one or more actions in a ready set of actions. Moreover, the method may include scheduling the ready set of actions, each of which is scheduled for execution and executed, the execution of each of the ready set of actions using a separate, concurrent thread, the concurrency of the actions controlled using a control mechanism. Related systems, apparatus, methods, and/or articles are also described.06-03-2010
20090300645Virtualization with In-place Translation - In a computing system having virtualization software including a guest operating system (OS), a method for executing guest OS instructions that includes: replacing each of one or more guest OS instructions with: (a) a translated instruction, which translated instruction is a one-to-one translation, or (b) a trap instruction.12-03-2009
20110197199INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER READABLE MEDIUM - An information processing apparatus includes: a reliability determination unit that determines reliability required for processing a processing target based on the processing target; a processing determination unit that makes a comparison between the reliability determined by the reliability determination unit and reliability of a processing main body and determines whether or not the processing main body can be caused to process the processing target; a processing target change unit that changes the processing target so as to change the reliability of the processing target if the processing determination unit determines that the processing main body cannot be caused to process the processing target; and a processing request unit that requests the processing main body to process the processing target changed by the processing target change unit.08-11-2011
20100031269Lock Contention Reduction - Illustrative embodiments provide a computer implemented method, a data processing system and a computer program product for lock contention reduction. In one illustrative embodiment, the computer implemented method provides a lock to an active thread, increments a lock counter, receives a request to de-schedule the active thread, and determines whether the lock is held by the active thread. The computer implemented method, responsive to a determination that the lock is held by the active thread, adds a first pre-determined amount to a time slice of the active thread.02-04-2010
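The de-scheduling decision described in this abstract can be sketched as follows; the time-slice bonus value, like all names here, is an assumption rather than a detail from the patent:

```python
class Thread:
    def __init__(self):
        self.locks_held = 0
        self.time_slice = 10

def acquire(thread):
    # Increment the lock counter when the active thread takes a lock.
    thread.locks_held += 1

def on_deschedule_request(thread, bonus=2):
    """If the thread holds a lock, extend its time slice instead of
    de-scheduling it in the middle of a critical section."""
    if thread.locks_held > 0:
        thread.time_slice += bonus
        return "kept_running"
    return "descheduled"
```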
20120042324Memory management method and device in a multitasking capable data processing system - A method for memory space management in a multitasking capable data processing system including a data processing device and software running thereon. The data processing device includes at least one central processing unit (CPU) and at least one user memory, and the software running on the CPU includes a first computer program application and at least a second computer program application which respectively jointly access the user memory used by both computer program applications during execution. Information of the first computer program application is stored in at least a portion of the memory space of the user memory in a temporary manner, and the integrity of the contents memory space is checked after interrupting the execution of the first computer program application. The first computer program application is only executed further when the memory integrity is confirmed through the checking or when the memory integrity has been reestablished.02-16-2012
20120047515TERMINAL DEVICE, COMMUNICATION METHOD USED IN THE TERMINAL DEVICE AND RECORDING MEDIUM - The present invention relates to a terminal device that has an operating system and is capable of simultaneously using, on that operating system, a first application program for real time communication and a second application program for another purpose. The terminal device is characterized by being provided with a means for setting an interval between system calls, which calculates a frequency of system call executions when the second application program issues system calls to the operating system during real time communication by the first application program and, when the execution frequency has exceeded a predetermined threshold, sets an execution interval time between the system calls to a given length of time or more.02-23-2012
20120005687SYSTEM ACTIVATION METHOD IN MULTI-TASK SYSTEM - When a multi-task system is powered on, the following steps are respectively executed: a first step in which hardware components are initialized; a second step in which sections are initialized; and a third step in which an operating system is initialized. In the third step, a task/object is statically generated when an initial access time of the task/object is at most a predefined threshold value but the task/object is dynamically generated after activation of the multi-task system is completed when the initial access time of the task/object is larger than the predefined threshold value.01-05-2012
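The static/dynamic split by initial access time in this abstract amounts to a threshold partition over the tasks/objects; a sketch with hypothetical task names and access times:

```python
def plan_generation(tasks, threshold):
    """Split tasks into those generated statically during activation
    (initial access time at most the threshold) and those generated
    dynamically after activation completes.

    tasks: mapping of task/object name -> initial access time.
    """
    static = [name for name, t in tasks.items() if t <= threshold]
    dynamic = [name for name, t in tasks.items() if t > threshold]
    return static, dynamic
```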
20100095306ARITHMETIC DEVICE - An arithmetic device simultaneously processes a plurality of threads and can continue processing while minimizing degradation of overall performance even when a hardware error occurs. An arithmetic device 04-15-2010
20100095305SIMULTANEOUS MULTITHREAD INSTRUCTION COMPLETION CONTROLLER - In a system that executes a program by simultaneously running a plurality of threads, the entries in a CSE 04-15-2010
20100011372Method and system for synchronizing the execution of a critical code section - The invention concerns a method for synchronizing the execution of at least one critical code section (C01-14-2010
20090019451ORDER-RELATION ANALYZING APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT THEREOF - An order-relation analyzing apparatus collects assigned destination processor information, a synchronization process order and synchronization information, determines a corresponding element associated with a program among a plurality of elements indicating an ordinal value of the program based on the assigned destination processor information, when an execution of the program is started, and calculates the ordinal value indicated by the corresponding element for each segment based on the synchronization information, when the synchronization process occurs while executing the program. When a first corresponding element associated with a second program, of which the execution starts after the execution of a first program associated with the first corresponding element finishes, is determined, the ordinal value of the second program is calculated by calculating the ordinal value indicated by the first corresponding element.01-15-2009
20130174179MULTITASKING METHOD AND APPARATUS OF USER DEVICE - A multitasking method and apparatus of a user device is provided for intuitively and swiftly switching between background and foreground tasks running on the user device. The multitasking method includes receiving an interaction requesting task-switching in a state where an execution screen of a certain application is displayed, displaying a stack of tasks that are currently running, switching a task selected from the stack to a foreground task, and presenting an execution window of the foreground task.07-04-2013
20110131586Method and System for Efficiently Sharing Array Entries in a Multiprocessing Environment - A method and a system efficiently and effectively share array entries among multiple threads of execution in a multiprocessor computer system. The invention comprises a method and an apparatus for array creation, a method and an apparatus for array entry data retrieval, a method and an apparatus for array entry data release, a method and an apparatus for array entry data modification, a method and an apparatus for array entry data modification release, a method and an apparatus for multiple array entry atomic release-and-renew, a method and an apparatus for array destruction, a method and an apparatus for specification of array entry discard strategy, a method and an apparatus for specification of array entry modification update strategy, and finally a method and an apparatus for specification of user-provided array entry data construction method.06-02-2011
20120240132CONTROL APPARATUS, SYSTEM PROGRAM, AND RECORDING MEDIUM - A control apparatus capable of updating a user program while processing is being performed in a multitasking manner is provided. A processor includes a memory that stores a user program containing a program organization unit as well as a central processing unit executing a task containing the user program and also updating the program organization unit stored in the memory. The central processing unit is configured to execute a plurality of tasks concurrently and to execute each task with a period corresponding to the task. Moreover, the central processing unit is configured to update the program organization unit stored in the memory during the period of time from when a plurality of tasks to be executed have been finished until when the plurality of tasks are executed again.09-20-2012
20120324473Effective Management Of Blocked-Tasks In Preemptible Read-Copy Update - A technique for managing read-copy update readers that have been preempted while executing in a read-copy update read-side critical section. A single blocked-tasks list is used to track preempted reader tasks that are blocking an asynchronous grace period, preempted reader tasks that are blocking an expedited grace period, and preempted reader tasks that require priority boosting. In example embodiments, a first pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking a current asynchronous grace period. A second pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking an expedited grace period. A third pointer may be used to segregate the blocked-tasks list into preempted reader tasks that do and do not require priority boosting.12-20-2012
20100229181SMART SCHEDULING OF AUTOMATIC PARTITION MIGRATION BY THE USE OF TIMERS - Partition migrations are scheduled between virtual partitions of a virtually partitioned data processing system. The virtually partitioned data processing system is a tickless system in which a periodic timer interrupt is not guaranteed to be sent to the processor at a defined time interval. A request is received for a partition migration. Gaps between scheduled timer interrupts are identified. The partition migration is then scheduled to occur within the largest gap.09-09-2010
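Finding the largest gap between scheduled timer interrupts — the window into which the abstract schedules the migration — is a simple scan over the sorted interrupt times. This sketch assumes the times are given as plain numbers:

```python
def largest_gap(interrupt_times):
    """Return (start, length) of the widest gap between scheduled timer
    interrupts, where a partition migration can run undisturbed."""
    times = sorted(interrupt_times)
    best_start, best_len = times[0], 0.0
    for a, b in zip(times, times[1:]):
        if b - a > best_len:
            best_start, best_len = a, b - a
    return best_start, best_len
```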
20110321059STACK OVERFLOW PREVENTION IN PARALLEL EXECUTION RUNTIME - A parallel execution runtime prevents stack overflow by maintaining an inline counter for each thread executing tasks of a process. Each time that the runtime determines that inline execution of a task is desired on a thread, the runtime determines whether the inline counter for the corresponding thread indicates that stack overflow may occur. If not, the runtime increments the inline counter for the thread and allows the task to be executed inline. If the inline counter indicates a risk of stack overflow, then the runtime performs additional one or more checks using a previous stack pointer of the stack (i.e., a lowest known safe watermark), the current stack pointer, and memory boundaries of the stack. If the risk of stack overflow remains after all checks have been performed, the runtime prevents inline execution of the task.12-29-2011
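The layered checks in this abstract (cheap inline counter first, stack-pointer comparisons only when the counter signals risk) can be sketched like this; the headroom constant, the downward-growing stack, and all names are assumptions for illustration:

```python
def may_inline(inline_count, max_inline, stack_ptr, safe_watermark,
               stack_limit, headroom=4096):
    """Decide whether a task may execute inline on the current thread.

    The counter is the cheap first check; pointer checks are the
    fallback when the counter suggests stack overflow may occur.
    Stack grows downward here: lower addresses are deeper.
    """
    if inline_count < max_inline:
        return True
    # Counter says risky: not deeper than a known-safe watermark is fine.
    if stack_ptr >= safe_watermark:
        return True
    # Otherwise require headroom above the stack's memory boundary.
    return stack_ptr - stack_limit > headroom
```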
20080222649Method and computer program for managing man hours of multiple individuals working one or more tasks - A method and computer program are provided for managing man hours of multiple individuals working one or more tasks during a predefined time period that include selectively opening a plurality of tasks of differing task characteristic and task type, selectively associating one or more individuals to one of the open plurality of tasks, selectively unassociating at least one of the associated one or more individuals, maintaining at least one timer for each of the open plurality of tasks, selectively closing one or more of the open plurality of tasks, and selectively outputting an invoice for the closed plurality of tasks based on the bid price of each of the open plurality of tasks. One or more of the individuals are associated and unassociated prior to completion of the open one or more tasks. The at least one timer maintains a total time for all of the associated one or more individuals for each of the open plurality of tasks.09-11-2008
20130179896Multi-thread processing of an XML document - An indication to process an Extensible Markup Language (XML) document that includes a hierarchy of nodes is received. A set of one or more page nodes to be processed is obtained, where the set of page nodes are part of the hierarchy of nodes. A plurality of threads is created. One of the set of page nodes and those nodes, if any, in the hierarchy that descend from that node are assigned to one of the plurality of threads to be processed by that thread. Processing, by said one of the plurality of threads, of the assigned page node and those nodes that descend from that page node is initiated.07-11-2013
20120254889Application Programming Interface for Managing Time Sharing Option Address Space - A method includes receiving a start request from a client at a launcher application programming interface (API), determining whether an existing time sharing option (TSO) address space associated with a user of the client is available, retrieving security environment data associated with the user from a security product responsive to determining that no existing TSO address space associated with a user of the client is available, saving the retrieved security environment data as a security object, generating a message queue, generating a terminal status block (TSB) and saving the terminal status block, creating a TSO address space in a processor, sending an instruction to an operating system to start the TSO address space, and sending a message queue identifier associated with the message queue and an address space token associated with the TSO address space to the client.10-04-2012
20120254888PIPELINED LOOP PARALLELIZATION WITH PRE-COMPUTATIONS - Embodiments of the invention provide systems and methods for automatically parallelizing loops with non-speculative pipelined execution of chunks of iterations with pre-computation of selected values. Non-DOALL loops are identified and divided the loops into chunks. The chunks are assigned to separate logical threads, which may be further assigned to hardware threads. As a thread performs its runtime computations, subsequent threads attempt to pre-compute their respective chunks of the loop. These pre-computations may result in a set of assumed initial values and pre-computed final variable values associated with each chunk. As subsequent pre-computed chunks are reached at runtime, those assumed initial values can be verified to determine whether to proceed with runtime computation of the chunk or to avoid runtime execution and instead use the pre-computed final variable values.10-04-2012
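The verification step at the heart of this scheme — use a chunk's pre-computed final values only if its assumed initial values match the actual runtime state, otherwise fall back to runtime execution — can be sketched in a few lines (all names invented):

```python
def run_chunk(chunk_fn, state, precomputed):
    """Execute one chunk of a parallelized loop, reusing a pre-computed
    result when its assumed initial values verify against `state`."""
    assumed, final = precomputed
    if assumed == state:
        return final            # verification succeeded: skip execution
    return chunk_fn(state)      # mismatch: compute the chunk at runtime
```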
20130104144Application Switching in a Graphical Operating System - A method for application switching in an operating system may be provided. The method may comprise providing at least two active applications on the operating system, and providing a first list of actions related to the first active application, via a first interface, to an application switching manager, and providing a second list of actions related to the second active application, via a second interface, to the application switching manager. Additionally, the method may further comprise selecting an active application out of the at least two active applications together with selecting an action from the first list of actions for the first application or a second action from the second list for the second application using a graphical user interface.04-25-2013
20130152104HANDLING OF SYNCHRONOUS OPERATIONS REALIZED BY MEANS OF ASYNCHRONOUS OPERATIONS - The present invention extends to methods, systems, and computer program products for handling synchronous operations by means of asynchronous operations. Upon completion of an asynchronous operation, a state flag is accessed. The state flag indicates whether or not a sync-over-async wrapper/adapter requested execution of the asynchronous operation. The sync-over-async wrapper/adapter is currently blocked awaiting notice of completion of the asynchronous operation. Based on the state flag, results of the asynchronous operation are stored at a location accessible by the sync-over-async wrapper. A completion signal is sent to the sync-over-async wrapper.06-13-2013
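The completion-signal pattern this abstract describes — a blocked synchronous wrapper waiting for an asynchronous operation to store its result and signal completion — maps naturally onto an event plus a result slot. A minimal Python sketch, not the patented implementation:

```python
import threading

def sync_over_async(start_async):
    """Block until an async operation completes and return its result.

    start_async(callback) must invoke callback(result) exactly once,
    on any thread.
    """
    done = threading.Event()
    box = {}

    def callback(result):
        box["result"] = result    # store where the blocked wrapper can read it
        done.set()                # completion signal to the waiting wrapper

    start_async(callback)
    done.wait()                   # wrapper blocks awaiting completion
    return box["result"]
```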
20120297397IMAGE FORMING APPARATUS, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM AND STORAGE MEDIUM - An image forming apparatus has a plurality of application execution environments, and includes a control part in each of the application execution environments, configured to control an application executed in a corresponding application execution environment. The control part in an application execution environment controls an application executed in an other application execution environment via the control part of the other application execution environment.11-22-2012
20130205304APPARATUS AND METHOD FOR PERFORMING MULTI-TASKING IN PORTABLE TERMINAL - A multi-tasking execution apparatus and a method for easily controlling applications running in a portable terminal are provided. The apparatus includes a display and a controller. The display displays an application-containing image in which at least one specific image representing at least one application running in a background is contained and arranged. The controller operatively displays at least one specific image representing at least one application running in the background, so as to be contained in the application-containing image, and controls the at least one application running in the background by controlling the specific image based on a specific gesture.08-08-2013
20120304196ELECTRONIC DEVICE WORKSPACE RESTRICTION - Some embodiments include a method that includes receiving an indication of a first of a plurality of tasks. The method includes accessing a policy associated with the first of the plurality of tasks. The method also includes determining that a restricted activity state is to be imposed on an electronic device workspace based on the policy that is associated with the first of the plurality of tasks and an application related activity. The application related activity comprises at least one of accumulation of a first time period of a user working with the first set of one or more applications and expiration of a lack of activity second time period for the second set of one or more applications. The method includes restricting the electronic device workspace to the first set of one or more applications.11-29-2012
20120096474Systems and Methods for Performing Multi-Program General Purpose Shader Kickoff - Systems and methods for thread group kickoff and thread synchronization are described. One method is directed to synchronizing a plurality of threads in a general purpose shader in a graphics processor. The method comprises determining an entry point for execution of the threads in the general purpose shader, performing a fork operation at the entry point, whereby the plurality of threads are dispatched, wherein the plurality of threads comprise a main thread and one or more sub-threads. The method further comprises performing a join operation whereby the plurality of threads are synchronized upon the main thread reaching a synchronization point. Upon completion of the join operation, a second fork operation is performed to resume parallel execution of the plurality of threads.04-19-2012

Patent applications in class Multitasking, time sharing

Patent applications in all subclasses Multitasking, time sharing