Entries
Document | Title | Date |
20080201721 | PARALLEL PROGRAMMING INTERFACE - A computing device-implemented method includes receiving a program created by a technical computing environment, analyzing the program, generating multiple program portions based on the analysis of the program, dynamically allocating the multiple program portions to multiple software units of execution for parallel programming, receiving multiple results associated with the multiple program portions from the multiple software units of execution, and providing the multiple results or a single result to the program. | 08-21-2008 |
20080209435 | Scalable workflow management system - A scalable workflow management system is provided that includes queues for storing work items to be processed. Work items may be placed into the queues by front-end services executing within the workflow management system. When a work item is placed on a queue, it remains on the queue until an appropriate back-end service is available to de-queue the work item, validate the de-queued work item, and process the de-queued work item. Separate queues are provided for storing normal work items, work items generated according to a time schedule, and work items generated by job launching services. The state of operation of the workflow management system may be controlled by an administrative console application. | 08-28-2008 |
20080209436 | Automated testing of programs using race-detection and flipping - In accordance with one or more aspects, one or more programs having multiple actors is executed following a first execution path. A race condition among different ones of the multiple actors in the first execution path is identified, and an order in which two events involved in the race condition are executed is flipped so as to create a second execution path. The multiple actors are then executed following the second execution path, and any errors identified in the first execution path or the second execution path are reported. | 08-28-2008 |
20080222648 | DATA PROCESSING SYSTEM AND METHOD OF DATA PROCESSING SUPPORTING TICKET-BASED OPERATION TRACKING - A data processing system includes a plurality of processing units coupled by a plurality of communication links for point-to-point communication such that at least some of the communication between multiple different ones of the processing units is transmitted via intermediate processing units among the plurality of processing units. The communication includes operations having a request and a combined response representing a system response to the request. At least each intermediate processing unit includes one or more masters that initiate first operations, a snooper that receives at least second operations initiated by at least one other of the plurality of processing units, a physical queue that stores master tags of first operations initiated by the one or more masters within that processing unit, and a ticketing mechanism that assigns to second operations observed at the intermediate processing unit a ticket number indicating an order of observation with respect to other second operations observed by the intermediate processing unit. The ticketing mechanism provides the ticket number assigned to an operation to the snooper for processing with a combined response of the operation. | 09-11-2008 |
20080229322 | Method and Apparatus for a Multidimensional Grid Scheduler - A method and apparatus for scheduling execution of a grid project in accordance with multiple dimensions of dynamic load factors. The present invention provides a mechanism for determining grid node availability based on both processor load and network traffic loads on the nodes in a grid of computing devices. This availability information is used to determine scheduling of the running of grid projects. | 09-18-2008 |
20080235706 | Workflow Decision Management With Heuristics - Methods, systems, and computer program products are provided for workflow decision management. Embodiments typically include maintaining a device state history; identifying a device usage pattern in dependence upon the device state history; identifying a derived scenario in dependence upon the device usage pattern; and selecting a heuristic in dependence upon the derived scenario. In typical embodiments, the heuristic has a tolerance. Embodiments also include identifying a workflow in dependence upon the selected heuristic and executing the workflow in dependence upon the tolerance. | 09-25-2008 |
20080256548 | Method for the Interoperation of Virtual Organizations - A cooperative data stream processing system is provided that utilizes a plurality of independent, autonomous and possibly heterogeneous sites in a cooperative arrangement to process user-defined job requests over dynamic, continuous streams of data. A method is provided to organize the distributed sites into a plurality of virtual organizations that can be further combined and virtualized into virtualized virtual organizations. These virtualized virtual organizations can also include additional distributed sites and existing virtualized virtual organizations and all members of a given virtualized virtual organization can share data and processing resources in order to process jobs on either a task-based or goal-based allocation mechanism. The virtualized virtual organization is created dynamically using ad-hoc collaborations among the members and is arranged in either a federated or cooperative architecture. Collaborations between members are either tightly coupled or loosely coupled. Flexible management of resources is provided with resources being provided under exclusive control or based on best-effort access. | 10-16-2008 |
20080256549 | System and Method of Planning for Cooperative Information Processing - A cooperative data stream processing system is provided that utilizes a plurality of independent, autonomous and possibly heterogeneous sites in a cooperative arrangement to execute jobs derived from user-defined inquiries over dynamic, continuous streams of data. A method is provided for cooperative planning for the execution of the jobs across the distributed plurality of sites. An identification of the resources available for sharing from each one of the plurality of sites is communicated to one or more planners disposed on the distributed sites. These planners use the resource information to generate planning domains in which the jobs can be processed. Upon receipt of an inquiry at one of the sites, the inquiry is communicated to one of the planners that uses the planning domain to create at least one distributed plan for the inquiry. Processing of the inquiry is conducted in accordance with the distributed plan. Planning can take advantage of the structure of virtual organizations including cooperative and federated virtual organizations. The distributed plans can make use of the resources within a single virtual organization or across multiple organizations. | 10-16-2008 |
20080256550 | Parallel processing system by OS for single processor - The present invention relates to a parallel processing system by an OS for single processor capable of operating an OS for single processor and an existing application on a multiprocessor and achieving parallel processing by a multiprocessor with respect to the application, wherein the multiprocessor is logically divided into two groups, i.e., a first processor side and a second processor side, and units of work that are parallelizable within the application operating on the processors on the first processor side are controlled as new units of work on the processors on the second processor side. | 10-16-2008 |
20080271040 | Method for managing message flow in a multithreaded, message flow environment - In one form, a method for managing message flow includes processing messages concurrently by processing nodes in a computer software application. The processing nodes include at least one set of lock acquisition, system resource access and lock release nodes interconnected in a flow path. In such a set, the nodes are interconnected in a flow path and process a respective one of the messages in a sequence defined by the flow path. The processing includes granting access to a system resource exclusively for the set's respective message responsive to the lock acquisition node processing its respective message. The system resource is accessed for the message responsive to the set's system resource node processing the message. The accessing of the resource for the message includes changing a certain system state. The exclusive accessing of the system resource is released responsive to the set's lock release node processing the message. | 10-30-2008 |
20080288953 | INFORMATION PROCESSING DEVICE AND METHOD - An information processing device to execute programs performing encoding processing configured from a plurality of processes, includes: a program storage unit to store a plurality of encoding programs wherein an allocation pattern for a computation processing unit as to each of a plurality of processes comprising the encoding processes or the disposal pattern of memory utilized in the plurality of processes are each different; a program selecting unit to select an encoding program to be utilized in the event of executing encoding processing from a plurality of encoding programs stored with the program storage unit, as a utilized encoding program; and a program executing unit to execute the encoding processing employing a utilized encoding program selected with the program selecting unit. | 11-20-2008 |
20080307428 | IMAGE FORMING APPARATUS, APPLICATION EXECUTION METHOD, AND STORAGE MEDIUM - A disclosed image forming apparatus includes a storage unit storing a linkage application and processing applications, each of the processing applications being implemented by a combination of software components for inputting, processing, and outputting image data. The linkage application is configured to execute a combination of the processing applications in sequence. | 12-11-2008 |
20090007134 | SHARED PERFORMANCE MONITOR IN A MULTIPROCESSOR SYSTEM - A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device for generating signals representing occurrences of events in the processor device, and, a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and, a plurality of input devices for receiving the event signals from one or more processor devices of the plurality of processor units, the plurality of input devices programmable to select event signals for receipt by one or more of the plurality of performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices. | 01-01-2009 |
20090044195 | METHOD FOR PERFORMING TASKS BASED ON DIFFERENCES IN MACHINE STATE - A task generation system and method produces tasks to be executed on a target non-volatile data system based on state differences between the target system and a source non-volatile data system as found in state files and a state difference list. The tasks are generated by state difference translators according to differences between state files of the source and target systems. | 02-12-2009 |
20090044196 | METHOD OF USING PARALLEL PROCESSING CONSTRUCTS - A computing device-implemented method includes receiving a program, analyzing and transforming the program, determining an inner context and an outer context of the program based on the analysis of the program, and allocating one or more portions of the inner context of the program to two or more labs for parallel execution. The method also includes receiving one or more results associated with the parallel execution of the one or more portions from the two or more labs, and providing the one or more results to the outer context of the program. | 02-12-2009 |
20090044197 | DEVICE FOR USING PARALLEL PROCESSING CONSTRUCTS - A device, for performing parallel processing, includes a processor to receive one or more portions of an inner context of a program created for a technical computing environment, and allocate one or more portions of the inner context of the program to two or more labs for parallel execution. The processor is also configured to receive one or more results associated with the parallel execution of the one or more portions from the two or more labs, and provide the one or more results to an outer context of the program. | 02-12-2009 |
20090064171 | UPDATING WORKFLOW NODES IN A WORKFLOW - Provided are a method, system, and article of manufacture for updating workflow nodes in a workflow. A workflow program processes user input at one node in a workflow comprised of nodes and workflow paths connecting the nodes, wherein the user provides user input to traverse through at least one workflow path to reach the current node. The workflow program transmits information on a current node to an analyzer. The analyzer processes the information on the current node to determine whether there are modifications to at least one subsequent node following the current node over at least one workflow path from the current node. The analyzer transmits to the workflow program an update including modifications to the at least one subsequent node in response to determining the modifications. | 03-05-2009 |
20090064172 | SYSTEM AND METHOD FOR TASK SCHEDULING - A computer-based method for task scheduling is disclosed. The method includes: scheduling one or more scheduled tasks, creating a scheduled task list which contains the one or more scheduled tasks, reading parameters of each of the scheduled tasks, comparing the current tasks in the memory with the scheduled tasks in the scheduled task list according to the unique task IDs if the memory contains current tasks, adding the scheduled tasks that are present in the scheduled task lists and not in the memory into the memory, and removing the current tasks that are present in the memory and not present in the scheduled task lists according to the comparison. | 03-05-2009 |
20090070772 | MULTIPROCESSOR SYSTEM - In case of a task scheduling processing that assigns plural divided execution program tasks to plural processor units, a multiprocessor system using SOI/MOS transistors employs two processes; one process is to determine an order to execute those tasks so as to reduce the program execution time and the other process is to control the system power upon task scheduling so as to control the clock signal frequency and the body-bias voltage to temporarily speed up the operation of a processor unit that processes another task that might affect the processing performance of one object task if there is dependency among those tasks. | 03-12-2009 |
20090070773 | METHOD FOR EFFICIENT THREAD USAGE FOR HIERARCHICALLY STRUCTURED TASKS - A system and method for dividing complex tasks into sub-tasks for the purpose of improving performance in completing the task. Sub-tasks are arranged hierarchically and if a sub-task is unable to obtain a thread for execution it is executed in the thread of the parent task. Should a thread become free it is returned to a thread pool for use by any task. Should a parent task be waiting on the completion of one or more sub-tasks, the thread it uses is returned to the thread pool for use by any other task as needed. | 03-12-2009 |
20090077563 | Systems And Methods For Grid Enabling Computer Jobs - Systems and methods for executing a computer program within a multiple processor grid computing environment. Execution behavior of the computer program is captured while the computer program is sequentially executing. The captured execution behavior is linked with steps contained in the source code version of the computer program. The captured execution behavior that is linked with the supplemented source code version is analyzed in order to determine dependencies between a step of the computer program and one or more other steps of the computer program. A determination is made of which task or tasks within the computer program can be processed through the grid computing environment based upon the determined dependencies. | 03-19-2009 |
20090083752 | EVENT PROCESSING APPARATUS AND METHOD BASED ON OPERATING SYSTEM - An event processing apparatus has an event queue that accumulates a plurality of events that occurred temporally. The apparatus has event queue optimization means for executing filtering processes to delete one or more events based on optimization definition information, and/or for executing chunking processes to integrate a plurality of events into a single event, for a plurality of events accumulated in the event queue. | 03-26-2009 |
20090113443 | Transactional Memory Computing System with Support for Chained Transactions - A computing system that processes memory transactions for parallel processing of multiple threads of execution provides execution of multiple atomic instruction groups (AIGs) on multiple systems to support a single large transaction that requires operations on multiple threads of execution and/or on multiple systems connected by a network. The support provides a Transaction Table in memory and fast detection of potential conflicts between multiple transactions. Special instructions may mark the boundaries of a transaction and identify memory locations applicable to a transaction. A ‘private to transaction’ (PTRAN) tag, directly addressable as part of the main data storage memory location, enables a quick detection of potential conflicts with other transactions that are concurrently executing on another thread. The tag indicates whether (or not) a data entry in memory is part of a speculative memory state of an uncommitted transaction that is currently active in the system. | 04-30-2009 |
20090125912 | High performance memory and system organization for digital signal processing - An innovative approach for constructing optimum, high-performance, efficient DSP systems may include a system organization to match compute execution and data availability rate and to organize DSP operations as loop iterations such that there is maximal reuse of data between multiple consecutive iterations. Independent set up and preparation of data before it is required through suitable mechanisms such as data pre-fetching may be used. This technique may be useful and important for devices that require cost-effective, high-performance, power consumption efficient VLSI IC. | 05-14-2009 |
20090133032 | Contention management for a hardware transactional memory - A hardware transactional memory | 05-21-2009 |
20090138889 | METHOD AND APPARATUS FOR EXPOSING INFORMATION TECHNOLOGY SERVICE MANAGEMENT PROBLEMS TO A SYSTEMS MANAGEMENT SYSTEM - The illustrative embodiments described herein provide a computer implemented method, apparatus, and computer program product for exposing information technology service management problems associated with a particular data processing system to a systems management system. A ticket identifier is received by a data processing system. The ticket identifier identifies a reported problem associated with the data processing system. Responsive to receiving the ticket identifier, the data processing system identifies a systems management system application associated with the data processing system. A generated message containing the reported problem associated with the data processing system is sent to the systems management system. | 05-28-2009 |
20090138890 | Contention management for a hardware transactional memory - A hardware transactional memory | 05-28-2009 |
20090150899 | System and methods for dependent trust in a computer system - A method for dependent trust in a computer system is provided. In this method, trust dependency relationships are defined among components of the computer system, specifying, for a component, which components it relies on in ensuring the integrity or confidentiality of its code or data. Subsequently, trust dependencies are resolved and the results are used in performing certain operations described in Trusted Computing Group standards including generating an attestation reply, sealing data, and unsealing data. In addition, methods for computing an integrity measurement for a Core Root of Trust for Measurement of a trust-dependent component are included. A system for dependent trust in a computer system is also described. | 06-11-2009 |
20090150900 | WORKFLOW TASK RE-EVALUATION - An occurrence of a workflow re-evaluation event during execution of tasks in a workflow is identified. In response to the workflow re-evaluation event, it is determined for each task previously executed in the workflow whether such task needs to be executed again. Those tasks in the workflow for which it was determined that the corresponding task needs to be executed again are executed again, while the tasks in the workflow that were previously executed and for which it was not determined that the corresponding task needs to be executed again are skipped. Related apparatus, systems, techniques and articles are also described. | 06-11-2009 |
20090150901 | Data processing device and method of controlling the same - A data processing device includes an instruction executing part executing a normal task and a management task scheduling an execution order of the normal task with switching the normal task and the management task, a counter measuring an execution state of the normal task being executed in the instruction executing part, and a state controller controlling the counter based on the normal task being executed in the instruction executing part. The instruction executing part determines whether the normal task to be executed next of a plurality of normal tasks scheduled by the management task is a measurement object or not, and outputs an operation signal notifying the state controller of the determination result. The state controller operates the counter in accordance with the branch operation. | 06-11-2009 |
20090158292 | USE OF EXTERNAL SERVICES WITH CLUSTERS - A method, apparatus, and system are directed toward managing a system that includes a cluster and an external resource. The external resource may be part of a second cluster that is collocated on the same hardware platforms as the cluster. A proxy resource is used to enforce a dependency relationship between a native resource of the cluster and an external resource, such that a dependency with the proxy resource serves to enforce a dependency with the external resource. The cluster framework may maintain states of the proxy resource, including an offline state, an online-standby state, and an online state. The online-standby state indicates that the proxy has been started but has not yet determined that the associated external resource is enabled. The proxy may determine whether the external resource is enabled or disabled and, in response, notify the cluster framework. | 06-18-2009 |
20090158293 | Information processing apparatus - A program rewriting time is reduced when a large scale process is executed while a program having reconfigurable hardware is being rewritten. When the large scale process is processed by being divided into a smaller process unit, even if the process content is dynamically changed, the program will be flexibly rewritten, and the schedule of execution will be managed, thereby ensuring that an efficient process can be executed. Scheduler | 06-18-2009 |
20090158294 | DYNAMIC CRITICAL-PATH RECALCULATION FACILITY - A method for dynamically recalculating a critical path in a job scheduling system is disclosed. In selected embodiments, the method may include determining when a first job associated with a critical path is substantially complete. The method may further include identifying a successor job of the first job and identifying multiple predecessor jobs of the successor job. The method may then determine whether there is at least one predecessor job that has not completed. In the event there is at least one predecessor job that has not completed, the method may recalculate the critical path. A corresponding apparatus and computer program product for implementing the above-stated method are also disclosed. | 06-18-2009 |
20090165015 | MANAGING DEPENDENCIES AMONG APPLICATIONS USING SATISFIABILITY ENGINE - A solution for managing dependencies among multiple applications is disclosed. A method may comprise: receiving dependency information of each application; receiving state information of a first application of the multiple applications; determining a state transition for a second different application of the multiple applications based on the dependency information and the state information; and outputting the determined state transition information to manage the second different application. | 06-25-2009 |
20090178054 | CONCOMITANCE SCHEDULING COMMENSAL THREADS IN A MULTI-THREADING COMPUTER SYSTEM - A method and an apparatus for concomitance scheduling a work thread and assistant threads associated with the work thread in a multi-threading processor system. The method includes: searching one or more assistant threads associated with the running of the work thread when preparing to run/schedule the work thread; running the one or more assistant threads that are searched; and running the work thread after all of the one or more assistant threads associated with the running of the work thread have run. | 07-09-2009 |
20090178055 | COLLABORATIVE PLANNING ACTIONS AND RECIPES - The complexities of actions and recipes used in collaborative planning are defined using set theory and an accompanying formalization. The formalizations presented can be used as a basis for making decisions in relation to choosing recipes, and other activities concerning collaborative task execution in a multi-agent environment. Introducing the notion of the complexity of a recipe and an action provides a measure of the difficulty of a task, based upon which decisions regarding the use of particular recipes and contractors can be made. | 07-09-2009 |
20090199201 | Mechanism to Provide Software Guaranteed Reliability for GSM Operations - In a global shared memory (GSM) environment, an initiating task at a first node with a host fabric interface (HFI) uses epochs to provide reliability of transmission of packets via a network fabric to a target task. The HFI generates a packet for the initiating task addressed to the target task, and automatically inserts a current epoch of the initiating task into the packet. A copy of the current epoch is maintained by the target task, which accepts for processing only packets having the correct epoch, unless the packet is tagged for guaranteed-once delivery. When a packet delivery is accepted, the target task sends a notification to the initiating task. If the initiating task does not receive the notification of delivery for the issued packet, the initiating task updates the epoch at both the target node and the initiating node and re-transmits the packet. | 08-06-2009 |
20090217289 | SYNCHRONIZATION SYSTEM FOR ENTITIES MAINTAINED BY MULTIPLE APPLICATIONS - A synchronization system provides a generic synchronization mechanism in which copies of data of an entity maintained by different applications can be synchronized through application-specific entity adapters. An entity adapter for an application receives from the synchronization system a synchronization request to synchronize an entity of the application and interacts with the application to ensure the entity is synchronized as requested. Each application that takes an action on an entity provides a transaction to the synchronization system. Upon receiving a transaction, the synchronization system stores an indication of the transaction in a synchronization store. The synchronization system waits until any sent synchronization transactions for an entity complete before sending subsequent synchronization transactions for that entity to ensure that the same transaction ordering is used by the target applications. | 08-27-2009 |
20090249353 | COMPUTER OPERATIONS CONTROL BASED ON PROBABLISTIC THRESHOLD DETERMINATIONS - Decisions whether or not to initiate certain types of computer operations, such as Just In Time (JIT) compiling or garbage collection can be made using a probabilistic threshold monitor. A decision whether to drive a threshold indicator bit to a set state is made on the detection of each of a certain kind of event occurring over a predetermined interval. The probability that the bit will be driven to a set state upon the detection of any given event is controlled. At the end of the predetermined interval, a computer operation is initiated if the threshold indicator bit is found to be in its set state. | 10-01-2009 |
20090249354 | RECORDING MEDIUM HAVING RECORDED THEREIN VIRTUAL MACHINE MANAGEMENT PROGRAM, MANAGEMENT SERVER APPARATUS AND VIRTUAL MACHINE MANAGEMENT METHOD - A virtual machine managing method includes: virtual machine list generation step of detecting a plurality of virtual machines deployed on a physical machine; a dependency list generation step of detecting dependencies among the virtual machines deployed on the physical machine; a power-off order generation step of, based on table contents of the virtual machine list table and the dependency list table, generating a power-off order management table which manages a power-off order in which the same virtual machines are sequentially powered off in descending order of dependency, in units of the physical machine; and a target presentation step of, when an instruction for selecting the power-off target physical machine is detected, reading the power-off order corresponding to this power-off target physical machine from the power-off order management table, and visually presenting this read power-off order. | 10-01-2009 |
20090260018 | METHOD FOR COMPUTATION-COMMUNICATION OVERLAP IN MPI APPLICATIONS - A computer implemented method is provided for optimizing, at the time of compiling, a program that employs a message-passing interface (MPI). The method includes: detecting an MPI application source file; identifying a non-blocking communication within the MPI application source file; and overlapping independent computation concurrently with the non-blocking communication. A system is also provided. | 10-15-2009 |
20090271799 | Executing A Distributed Java Application On A Plurality Of Compute Nodes - Methods, systems, and products are disclosed for executing a distributed Java application on a plurality of compute nodes. The Java application includes a plurality of jobs distributed among the plurality of compute nodes. The plurality of compute nodes are connected together for data communications through a data communication network. Each of the plurality of compute nodes has installed upon it a Java Virtual Machine (‘JVM’) capable of supporting at least one job of the Java application. Executing a distributed Java application on a plurality of compute nodes includes: tracking, by an application manager, a just-in-time (‘JIT’) compilation history for the JVMs installed on the plurality of compute nodes; and configuring, by the application manager, the plurality of jobs for execution on the plurality of compute nodes in dependence upon the JIT compilation history for the JVMs installed on the plurality of compute nodes. | 10-29-2009 |
20090300643 | USING HARDWARE SUPPORT TO REDUCE SYNCHRONIZATION COSTS IN MULTITHREADED APPLICATIONS - A processor configured to synchronize threads in multithreaded applications. The processor includes first and second registers. The processor stores a first bitmask in the first register and a second bitmask in the second register. For each bitmask, each bit corresponds with one of multiple threads. A given bit in the first bitmask indicates the corresponding thread has been assigned to execute a portion of a unit of work. A corresponding bit in the second bitmask indicates the corresponding thread has completed execution of its assigned portion of the unit of work. The processor receives updates to the second bitmask in the second register and provides an indication that the unit of work has been completed in response to detecting that for each bit in the first bitmask that corresponds to a thread that is assigned work, a corresponding bit in the second bitmask indicates its corresponding thread has completed its assigned work. | 12-03-2009 |
20090300644 | Method to Detect a Deadlock Condition by Monitoring Firmware Inactivity During the System IPL Process - A method for managing deadlock in a data processing system during an IPL process includes monitoring the usage of locks in the Hardware Object Model (HOM) of the data processing system. The process further includes detecting a deadlock condition in response to an indication of the IPL process in the data processing system entering a hung state when at least one lock is in use. The process further includes handling the deadlock condition by performing one or more of the following: recording error information for the deadlock condition, and terminating the IPL process. | 12-03-2009 |
20090328057 | SYSTEM AND METHOD FOR RESERVATION STATION LOAD DEPENDENCY MATRIX - A device and method may fetch an instruction or micro-operation for execution. An indication may be made as to whether the instruction is dependent upon any source values corresponding to a set of previously fetched instructions. A value may be stored corresponding to each source value from which the first instruction depends. An indication may be made for each of the set of sources of the instruction, whether the source depends on a previously loaded value or source, where indicating may include storing a value corresponding to the indication. The instruction may be executed after the stored values associated with the instruction indicate the dependencies are satisfied. | 12-31-2009 |
20100031268 | Thread ordering techniques - Techniques are described that can be used to ensure ordered computation and/or retirement of threads in a multithreaded environment. Threads may contain bundled instances of work, each with unique ordering restrictions relative to other instances of work packaged in other threads in the system. When applied to 3D graphics, video, and image processing domains, the techniques allow unrestricted processing of threads until the threads reach their critical sections. Ordering may be required prior to executing critical sections and beyond. | 02-04-2010 |
20100050183 | WORKFLOW DEVELOPING APPARATUS, WORKFLOW DEVELOPING METHOD, AND COMPUTER PRODUCT - A computer-readable recording medium stores therein a workflow developing program that causes a computer to execute acquiring a workflow for a sequence of applications, each of which requires user authentication processing prior to execution and is on an application server; detecting a description position of a first application to be executed first in the workflow acquired at the acquiring; inserting one description of the user authentication processing into the workflow so that the user authentication processing is executed before the first application at the description position detected at the detecting; and storing, in a management server controlling the application servers, the workflow after insertion at the inserting. | 02-25-2010 |
20100070979 | Apparatus and Methods for Parallelizing Integrated Circuit Computer-Aided Design Software - A system for parallelizing software in computer-aided design (CAD) software for logic design includes a computer. The computer is configured to identify dependencies among a set of tasks. The computer is also configured to perform the set of tasks in parallel such that a solution of a problem is identical to a solution produced by performing the set of tasks serially. | 03-18-2010 |
20100100889 | ACCELERATING MUTUAL EXCLUSION LOCKING FUNCTION AND CONDITION SIGNALING WHILE MAINTAINING PRIORITY WAIT QUEUES - A synchronization library of mutex functions and condition variable functions for threads which are compatible with pthread library functions conforming to a (POSIX) standard. The library can utilize a mutex data structure and a condition variable data structure both including lockwords and queuing anchors. In the library, Compare Swap (CS) instruction processing can be used to protect shared resource. The synchronization library can support priority queuing of threads and can have an ability to yield control when CS spin lock iterations exceed a set limit. | 04-22-2010 |
20100153967 | PERSISTENT LOCAL STORAGE FOR PROCESSOR RESOURCES - Local storage may be allocated for each processing resource in a process of a computer system. Each processing resource may be virtualized and may have a one-to-one or a many-to-one correspondence with physical processors. The contents of each local storage persist across various execution contexts that are executed by a corresponding processing resource. Each local storage may be accessed without synchronization (e.g., locks) by each execution context that is executed on a corresponding processing resource. The local storages provide the ability to segment data and store and access the data without synchronization. The local storages may be used to implement lock-free techniques such as a generalized reduction where a set of values is combined through an associative operator. | 06-17-2010 |
20100162262 | Split Scheduler - In an embodiment, a scheduler implements a first dependency array that tracks dependencies on instruction operations (ops) within a distance N of a given op and which are short execution latency ops. Other dependencies are tracked in a second dependency array. The first dependency array may evaluate quickly, to support back-to-back issuance of short execution latency ops and their dependent ops. The second array may evaluate more slowly than the first dependency array. | 06-24-2010 |
20100169894 | REGISTERING A USER-HANDLER IN HARDWARE FOR TRANSACTIONAL MEMORY EVENT HANDLING - A method and apparatus for registering a user-handler in hardware for transactional memory is herein described. A user-accessible register is to hold a reference to a transactional handler. An event register may also be provided to specify handler events, which may be done utilizing user-level software, privileged software, or by hardware. When an event is detected, execution vectors to the transaction handler based on the reference to the transactional handler held in the user-accessible register. The transactional handler handles the event and then execution returns to normal flow. | 07-01-2010 |
20100199286 | METHOD AND APPARATUS FOR BUILDING A PROCESS OF ENGINES - The embodiments of the present invention disclose a method and apparatus for building a process of engines. The method can comprise: obtaining a sequence relationship between every two engines based on a historical process of engines; and building a process of engines according to the sequence relationship between every two engines. Automatic engine integration can be implemented by using the method and the apparatus according to the present invention to facilitate user's use. | 08-05-2010 |
20100242049 | Real-Time Page and Flow Compositions - Task flows are utilized for real-time page compositions, real-time flow compositions, or both. At design time, a plurality of task flows are provided as a database or library. A manager, or other type of user, can associate task flows with dynamic regions in an application page being designed. The application page can include one or more dynamic regions that act as a container for task flows. Metadata is generated from the customization of input parameters. At runtime, application pages are generated on-the-fly for display in a user interface. The application pages are composed according to the task flows embedded therein. The application pages are presented to the user according to an application flow. Through a user interface, the user can enter and retrieve information related to governance, risk, and compliance (GRC) activities, or other types of activities. | 09-23-2010 |
20100275215 | METHOD AND APPARATUS FOR LINKING SERVICES - An approach is provided for linking services. A linked services enabler seamlessly links multiple services together by initiating a service and then invoking one or more linked services for the user to perform an action in one or more linked services. The linked services enabler then resumes the service after the user performs the action. | 10-28-2010 |
20100275216 | Making Hardware Objects and Operations Thread-Safe - Performance in object-oriented systems may be improved by allowing multiple concurrent hardware control and diagnostic operations to run concurrently on the system while preventing race conditions, state/data corruption, and hangs due to deadlock conditions. Deadlock prevention rules may be employed to grant or deny request for hardware operation locks, hardware communication locks, and/or data locks. | 10-28-2010 |
20100281488 | DETECTING NON-REDUNDANT COMPONENT DEPENDENCIES IN WEB SERVICE INVOCATIONS - Relationships between components in an application and the services they provide are identified, including redundant caller-callee sequences. Specific components of interest are instrumented to obtain data when they execute. Data structures are created which identify the components and their dependencies on one another. To avoid excessive overhead costs, redundant dependencies are identified. A dependency data structure can be provided for each unique dependency. When repeated instances of a dependency are detected, the associated dependency data structure can be augmented with correlation data of the repeated instances, such as transaction identifiers and sequence identifiers. Sequence identifiers identify an order in which a component is called. A flag can be used to force the creation of a new dependency data structure, and a calling component's name can be used instead of a sequence identifier. Agents report the dependency data structures to a manager to provide graph data in a user interface. | 11-04-2010 |
20100281489 | Method and system for dynamically parallelizing application program - Provided is a method and system for dynamically parallelizing an application program. Specifically, provided is a method and system having multi-core control that may verify a number of available threads according to an application program and dynamically parallelize data based on the verified number of available threads. The method and system for dynamically parallelizing the application program may divide a data block to be processed according to the application program based on a relevant data characteristic and dynamically map the threads to division blocks, and thereby enhance a system performance. | 11-04-2010 |
20100333106 | REORGANIZATION PROCESS MANAGER - Systems and methods provide a platform for facilitating organizational restructuring. The system receives an organizational chart representing the organizational restructuring and applies business processes defining tasks that implement the organizational restructuring. The system manages communication among organizational entities based upon a task dependency model for the various tasks that implement the reorganization. The system then uses the task dependency model to determine that the organizational restructuring is complete. | 12-30-2010 |
20100333107 | LOCK-FREE BARRIER WITH DYNAMIC UPDATING OF PARTICIPANT COUNT - A method of executing an algorithm in a parallel manner using a plurality of concurrent threads includes generating a lock-free barrier that includes a variable that stores both a total participants count and a current participants count. The total participants count indicates a total number of threads in the plurality of concurrent threads that are participating in a current phase of the algorithm, and the current participants count indicates a total number of threads in the plurality of concurrent threads that have completed the current phase. The barrier blocks the threads that have completed the current phase. The total participants count is dynamically updated during execution of the current phase of the algorithm. The generating, blocking, and dynamically updating are performed by at least one processor. | 12-30-2010 |
20100333108 | PARALLELIZING LOOPS WITH READ-AFTER-WRITE DEPENDENCIES - Some embodiments provide a system that increases parallelization in a computer program. During operation, the system obtains a binary associative operator and an ordered set of elements associated with a prefix operation in the computer program. Next, the system divides the elements into multiple sets of contiguous iterations based on a number of processors used to execute the computer program. The system then performs, in parallel on the processors, a set of local reductions on the contiguous iterations using the binary associative operator. Afterwards, the system calculates a set of boundary prefixes between the contiguous iterations using the local reductions. Finally, the system applies, in parallel on the processors, the boundary prefixes to the contiguous iterations using the binary associative operator to obtain a set of prefixes for the prefix operation. | 12-30-2010 |
20100333109 | SYSTEM AND METHOD FOR ORDERING TASKS WITH COMPLEX INTERRELATIONSHIPS - One or more embodiments of the invention enable a system and method for ordering tasks with complex interrelationships. The present invention as described herein may be used to produce a linear ordering of tasks with complex interrelationships including dependencies and constraints. In one or more embodiments optional tasks may be permitted such that a given task may or may not be added to the execution queue depending on the scheduling of earlier tasks following evaluation of their dependencies—that is, the system of the invention supports the management of optional tasks in a task ordering operation where some or all of tasks have complex interdependencies. | 12-30-2010 |
20100333110 | DEADLOCK DETECTION METHOD AND SYSTEM FOR PARALLEL PROGRAMS - A deadlock detection method and computer system for parallel programs. A determination is made that a lock of the parallel programs is no longer used in a running procedure of the parallel programs. A node corresponding to the lock that is no longer used, and edges relating to the lock that is no longer used, are deleted from a lock graph corresponding to the running procedure of the parallel programs in order to acquire an updated lock graph. The lock graph is constructed according to a lock operation of the parallel programs. Deadlock detection is then performed on the updated lock graph. | 12-30-2010 |
20110029984 | COUNTER AND TIMER CONSTRAINTS - A method and system for scheduling tasks using a counter constraint. A method may include identifying multiple tasks to be performed, receiving dependency data indicating that scheduling of at least one task is dependent on whether a counter satisfies a threshold in relation to an additional condition, and upon determining, during scheduling, that the counter satisfies the threshold in relation to the additional condition, triggering a scheduling action with respect to at least one task. | 02-03-2011 |
20110035755 | METHOD AND SYSTEM FOR APPLICATION MIGRATION USING PER-APPLICATION PERSISTENT CONFIGURATION DEPENDENCY - A system and method for determining application dependent components includes capturing interactions of an application stored in memory of a first environment with other components at runtime. The interactions are parsed and categorized to determine dependency information. The application is migrated to a new environment using the dependency information to reconfigure the application after migration without application-specific knowledge. | 02-10-2011 |
20110047556 | SYNCHRONIZATION CONTROL METHOD AND INFORMATION PROCESSING DEVICE - Provided are a synchronization control section that executes a current thread and a reference thread in parallel, a waiting time calculation section that calculates the time needed for the reference thread to reach a second synchronization point as a waiting time of the current thread when the reference thread does not reach the second synchronization point at a time when the current thread reaches a first synchronization point, a quality difference calculation section that estimates a quality difference between data that the current thread generates by referring to processing data at the second synchronization point of the reference thread and data that the current thread generates without referring to the processing data, and a synchronization determination section that determines whether to make the current thread wait until the reference thread reaches the second synchronization point depending on the waiting time and the magnitude of the quality difference. | 02-24-2011 |
20110107345 | MULTIPROCESSOR CIRCUIT USING RUN-TIME TASK SCHEDULING - Tasks are executed in a multiprocessing system with a master processor core ( | 05-05-2011 |
20110119680 | POLICY-DRIVEN SCHEMA AND SYSTEM FOR MANAGING DATA SYSTEM PIPELINES IN MULTI-TENANT MODEL - Methods and apparatus are described for managing data flows in a high-volume system. Jobs are grouped into pipelines of related tasks. A pipeline controller accepts schemas defining the jobs in a pipeline, their dependencies, and various policies for handling the data flow. Pipelines may be smoothly upgraded with versioning techniques and optional start/stop times for each pipeline. Late data or job dependencies may be handled with a number of strategies. The controller may also mediate resource usage in the system. | 05-19-2011 |
20110119681 | RUNTIME DEPENDENCY ANALYSIS FOR CALCULATED PROPERTIES - Techniques for determining and tracking dependent properties for a calculated property are provided. A request for a value of a first property is received. The value for the first property is calculated, including accessing values for one or more properties used to calculate the value for the first property. The accessing of the values for the one or more properties may be detected, and the one or more properties may be tracked as dependent properties for the first property in a first set of dependent properties. A change in the value of a second property may subsequently be detected. If the second property is determined to be included in the first set of dependent properties, the value of the first property is invalidated. | 05-19-2011 |
20110138397 | PROCESSING TIME ESTIMATION METHOD AND APPARATUS - A processing time estimation method for estimating a processing time of each of a plurality of jobs, the processing time estimation method including determining, executed by a computer, whether each job has a preceding job thereof on the basis of previous execution data including previous information of a plurality of previous start times and previous finish times of respective jobs of the plurality of jobs, the preceding job of each job being included in the plurality of jobs and at least having the previous finish time earlier than the previous finish time of that job. | 06-09-2011 |
20110145833 | Multiple Mode Mobile Device - In one or more embodiments, one or more methods and/or systems described can perform displaying, on a handheld device, multiple icons associated with multiple segments; receiving first user input indicating a first segment of the multiple segments; executing a first virtual machine associated with the first segment on the handheld device; executing a first application on the first virtual machine; receiving second user input indicating a second segment of the multiple segments; executing a second virtual machine associated with the second segment on the handheld device; and executing a second application on the second virtual machine. In one or more embodiments, one or more methods and/or systems described can further perform before executing the second virtual machine, receiving authentication information and determining that the user is authenticated. In one or more embodiments, the authentication information can include at least one of a user name, a password, and/or biometric information. | 06-16-2011 |
20110154359 | HASH PARTITIONING STREAMED DATA - The present invention extends to methods, systems, and computer program products for partitioning streaming data. Embodiments of the invention can be used to hash partition a stream of data and thus avoids unnecessary memory usage (e.g., associated with buffering). Hash partitioning can be used to split an input sequence (e.g., a data stream) into multiple partitions that can be processed independently. Other embodiments of the invention can be used to hash repartition a plurality of streams of data. Hash repartitioning converts a set of partitions into another set of partitions with the hash partitioned property. Partitioning and repartitioning can be done in a streaming manner at runtime by exchanging values between worker threads responsible for different partitions. | 06-23-2011 |
20110154360 | JOB ANALYZING METHOD AND APPARATUS - A job analyzing method includes classifying jobs in log data in accordance with a time segment to which an end time of each of the jobs belongs; generating, for first jobs included in a first time segment, first data indicating an execution sequence relation between the first jobs based on end time of the jobs, and generating, for second jobs included in a second time segment succeeding the first time segment, second data indicating an execution sequence relation between the second jobs based on end time of the second jobs; and analyzing an execution sequence relation between the first and second jobs based on the end time of the first jobs and the end time of the second jobs, and generating data indicating the execution sequence relation between the first and second jobs across the first and second time segments. | 06-23-2011 |
20110154361 | APPARATUS AND METHOD OF COORDINATING OPERATION ACTION OF ROBOT SOFTWARE COMPONENT - Provided are an apparatus and a method of controlling the execution of components without an additional port or messaging for applying the dependency among the components. The apparatus comprises: a profile analyzing unit analyzing execution dependency information of components defined in an execution coordination profile; a component managing unit arranging the components in accordance with the execution sequence of the components caused by the execution dependency information; an execution coordination managing unit determining whether or not each of the components executes the operation on the basis of the execution dependency information of the corresponding component managed by the execution coordination units allocated to the components, respectively; and an operation executing unit executing the operation of each of the components in accordance with the determination result of the execution coordination manager. | 06-23-2011 |
20110167428 | BUSY-WAIT TIME FOR THREADS - Method to selectively assign a reduced busy-wait time to threads is described. The method comprises determining whether at least one thread is spinning on a mutex lock associated with a condition variable and assigning, when the at least one thread is spinning on the mutex lock, a predetermined reduced busy-wait time for a subsequent thread spinning on the mutex lock. | 07-07-2011 |
20110173629 | Thread Synchronization - A method of processing threads is provided. The method includes receiving a first thread that accesses a memory resource in a current state, holding the first thread, and releasing the first thread responsive to receiving a final thread that accesses the memory resource in the current state. | 07-14-2011 |
20110202929 | Method and system for parallelizing database requests - Methods and systems are described for applying the use of shards within a single memory address space. A database request is processed by providing the request from a client to a processor, the processor then distributing the request to multiple threads within a single process but executing in a shared memory address environment, wherein each thread performs the request on a distinct shard, and aggregating the results of the multiple threads and returning a final result to the client. By parallelizing operations in this way, the request response time can be reduced and the total amount of communication overhead can be reduced. | 08-18-2011 |
20110202930 | RESOURCE EXCLUSION CONTROL METHOD AND EXCLUSION CONTROL SYSTEM IN MULTIPROCESSORS AND TECHNOLOGY ASSOCIATED WITH THE SAME - When locking of a lock object by a process is attempted, whether the locking succeeded is determined. Having determined that the locking succeeded, the process is executed at a relatively high processing speed in an interval during which the locking is valid as compared to an interval during which the locking is invalid. | 08-18-2011 |
20110214130 | DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND DATA PROCESSING PROGRAM - The present invention provides a system which executes processes in steps so as to increase the speed of processing and implement links with other systems easily. The data processing system includes: an AP execution unit which executes operational processing while referring to/updating an in-memory DB and a disk type DB; a buffer storage unit which stores output data of the operational processing in a data buffer; a response transmission unit which issues a processing end notice of the operational processing; a temporary file storage unit which stores, in a temporary file, the output data stored in the data buffer; a request transmission unit which transmits a commitment request with respect to the disk type DB; a disk type DB commitment unit which commits the disk type DB by updating a control table; a normal file storage unit which changes the temporary file to a normal file, and an in-memory DB commitment unit which commits the in-memory DB. | 09-01-2011 |
20110225595 | TASK EXECUTION CONTROLLER AND RECORDING MEDIUM ON WHICH TASK EXECUTION CONTROL PROGRAM IS RECORDED - A slot calculation unit calculates a current slot number and stores it in a slot storage unit. When each of control tasks of a recognition processing portion, a vehicle speed calculation portion, a brake control portion, and a steering control portion is activated, a slot number at the time of output of an execution result used as input data is obtained from a task table storage unit, and it is determined whether a time constraint is violated based on a permissible slot number for the input data, stored in a constraint table storage unit. When an execution result of each control task is output, the stored current slot number is read, and it is determined whether a time constraint is violated based on a permissible slot number for the output of the execution result, stored in the constraint table storage unit. | 09-15-2011 |
20110265097 | COUPLED SYMBIOTIC OPERATING SYSTEM - A single application can be executed across multiple execution environments in an efficient manner if at least a relevant portion of the virtual memory assigned to the application was equally accessible by each of the multiple execution environments. A request by a process in one execution environment can, thereby, be directed to an operating system, or other core software, in another execution environment and can be made by a shadow of the requesting process in the same manner as the original request was made by the requesting process itself. Because of the memory invariance between the execution environments, the results of the request will be equally accessible to the original requesting process even though the underlying software that responded to the request may be executing in a different execution environment. A similar thread invariance can be maintained to provide for accurate translation of requests between execution environments. | 10-27-2011 |
20110271286 | SYSTEM AND METHOD FOR APPLICATION FUNCTION CONSOLIDATION - Systems and methods that facilitate keeping or improving the current/prior level of complexity in a software package, despite enhancement package additions. To keep the current number of business functions (e.g., some software configuration or functionality), new packages may have to consolidate older ones. Consolidating business functions may include dissolving those functions into the core set of functions (e.g., those functions that are “always on”) or to merge them with other business functions (e.g., to be switched on or off as a set). Additionally, if a function is simply not used, and will never be used again, the function may be dissolved completely. Regardless, disruption to the customer should be minimized by any consolidation of functions. Systems and methods identify functions that can be automatically consolidated, and facilitate the consolidation of any remaining functions. | 11-03-2011 |
20110289509 | METHODS AND SYSTEMS FOR AUTOMATING DEPLOYMENT OF APPLICATIONS IN A MULTI-TENANT DATABASE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for automating deployment of applications in a multi-tenant database environment. For example, in one embodiment, mechanisms include managing a plurality of machines operating as a machine farm within a datacenter by executing an agent provisioning script at a control hub, instructing the plurality of machines to download and instantiate a lightweight agent; pushing a plurality of URL (Uniform Resource Locator) references from the control hub to the instantiated lightweight agent on each of the plurality of machines specifying one or more applications to be provisioned and one or more dependencies for each of the applications; and loading, via the lightweight agent at each of the plurality of machines, the one or more applications and the one or more dependencies for each of the one or more applications into memory of each respective machine. | 11-24-2011 |
20110289510 | ATOMIC-OPERATION COALESCING TECHNIQUE IN MULTI-CHIP SYSTEMS - A cache-coherence protocol distributes atomic operations among multiple processors (or processor cores) that share a memory space. When an atomic operation that includes an instruction to modify data stored in the shared memory space is directed to a first processor that does not have control over the address(es) associated with the data, the first processor sends a request, including the instruction to modify the data, to a second processor. Then, the second processor, which already has control of the address(es), modifies the data. Moreover, the first processor can immediately proceed to another instruction rather than waiting for the address(es) to become available. | 11-24-2011 |
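The abstract describes a hardware cache-coherence protocol between chips; purely as a software analogy, the sketch below forwards an atomic update on an address to the worker that owns that address range and lets the caller continue immediately rather than wait for ownership. The region-to-owner mapping, the single-threaded executors, and the class name are assumptions used only to illustrate the forwarding idea, not the patented protocol.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.LongUnaryOperator;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class OwnerForwardingAtomics {
    private final long[] memory;
    private final List<ExecutorService> owners;   // one single-threaded "owner" per address region

    public OwnerForwardingAtomics(int words, int ownerCount) {
        memory = new long[words];
        owners = IntStream.range(0, ownerCount)
                .mapToObj(i -> Executors.newSingleThreadExecutor())
                .collect(Collectors.toList());
    }

    // Forward the modification to the owning worker; the caller does not block on it
    // and can immediately proceed to its next instruction.
    public void atomicUpdate(int address, LongUnaryOperator op) {
        int owner = address % owners.size();
        owners.get(owner).execute(() -> memory[address] = op.applyAsLong(memory[address]));
    }
}
```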
20110289511 | Symmetric Multi-Processor System - The present invention relates generally to computer operating systems, and more specifically, to operating system calls in a symmetric multiprocessing (SMP) environment. Existing SMP strategies either use a single lock or multiple locks to limit access to critical areas of the operating system to one thread at a time. These strategies suffer from a number of performance problems including slow execution, large software and execution overheads and deadlocking problems. The invention applies a single lock strategy to a micro kernel operating system design which delegates functionality to external processes. The micro kernel has a single critical area, the micro kernel itself, which executes very quickly, while the external processes are protected by proper thread management. As a result, a single lock may be used, overcoming the performance problems of the existing strategies. | 11-24-2011 |
20110314479 | THREAD QUEUING METHOD AND APPARATUS - In some embodiments, a method includes receiving a request to generate a thread and supplying a request to a queue in response at least to the received request. The method may further include fetching a plurality of instructions in response at least in part to the request supplied to the queue and executing at least one of the plurality of instructions. In some embodiments, an apparatus includes a storage medium having stored therein instructions that when executed by a machine result in the method. In some embodiments, an apparatus includes circuitry to receive a request to generate a thread and to queue a request to generate a thread in response at least to the received request. In some embodiments, a system includes circuitry to receive a request to generate a thread and to queue a request to generate a thread in response at least to the received request, and a memory unit to store at least one instruction for the thread. | 12-22-2011 |
20120042323 | JOB EXECUTION APPARATUS, IMAGE FORMING APPARATUS, COMPUTER READABLE MEDIUM AND JOB EXECUTION SYSTEM - A job execution apparatus includes: a receiving unit configured to receive each job; a calculation unit configured to calculate an index value representing a load needed to execute each of a pre-processing sub-job and a post-processing sub-job when each of n jobs (n is a natural number equal to or more than 2) is decomposed into a pre-processing sub-job for generating information, and a post-processing sub-job for causing an output unit to output information generated by executing the pre-processing sub-job; a pre-processing execution unit configured to sequentially execute pre-processing sub-jobs which are respectively included in jobs received by the receiving unit and registered in a pre-processing sub-job queue; and a post-processing execution unit configured to sequentially execute, upon completion of the pre-processing sub-jobs, post-processing sub-jobs which are respectively included in the received jobs and which are registered in a post-processing sub-job queue. | 02-16-2012 |
20120047514 | SCHEDULING SYSTEM AND METHOD OF EFFICIENTLY PROCESSING APPLICATIONS - A scheduling technique for use in a multicore system, which can be shared by a plurality of applications, is provided. According to the scheduling technique, it is possible to perform dependency resolving and a runnable work search in parallel with the execution of cores. | 02-23-2012 |
20120066690 | System and Method Providing Run-Time Parallelization of Computer Software Using Data Associated Tokens - A system and method of parallelizing programs assigns write tokens and read tokens to data objects accessed by computational operations. During run time, the write sets and read sets for computational operations are resolved and the computational operations executed only after they have obtained the necessary tokens for data objects corresponding to the resolved write and read sets. A data object may have unlimited read tokens but only a single write token and the write token may be released only if no read tokens are outstanding. Data objects provide a wait list which serves as an ordered queue for computational operations waiting for tokens. | 03-15-2012 |
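The token rules in this abstract (unlimited read tokens, a single write token, and the write token granted only when no read tokens are outstanding) behave much like a fair reader-writer lock, whose internal queue plays the role of the ordered wait list. The sketch below models one data object with those token semantics; the class and method names are assumptions, and a real run-time parallelizer would acquire tokens for an operation's whole resolved read and write sets before executing it.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A data object that hands out unlimited read tokens but a single write token,
// granting the write token only when no read tokens are outstanding.
public class TokenGuardedObject<T> {
    private final ReadWriteLock tokens = new ReentrantReadWriteLock(true); // fair: acts as an ordered wait list
    private T value;

    public TokenGuardedObject(T initial) { this.value = initial; }

    public T readWithToken() {
        tokens.readLock().lock();              // acquire a read token
        try { return value; }
        finally { tokens.readLock().unlock(); } // release the read token
    }

    public void writeWithToken(T newValue) {
        tokens.writeLock().lock();             // acquire the single write token
        try { this.value = newValue; }
        finally { tokens.writeLock().unlock(); }
    }
}
```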
20120079502 | DEPENDENCY-ORDERED RESOURCE SYNCHRONIZATION - A synchronization system is described herein that synchronizes resource objects in an order based on their dependency relationships so that a referenced object is available by the time an object that references it is synchronized. Reference attributes present in resources define the dependency relationship among resources. Using these relationships, the system builds a dependency tree and orders synchronization operations for environment reconciliation by precedence so that referential integrity is preserved while still synchronizing reference attributes. The system can deterministically create a change list that guarantees referential integrity, and perform change list processing in parallel. The synchronization system attempts to order the synchronization based on references available to ensure that the system creates and updates dependent resources before their parent resources. Thus, the synchronization system provides a fast, reliable update mechanism for synchronizing two related data environments. | 03-29-2012 |
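The precedence ordering described above amounts to a topological sort of resources by their reference attributes, so that a referenced resource appears in the change list before anything that refers to it. Below is a small sketch using Kahn's algorithm; the map-of-references input format and the method name are assumptions made for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DependencyOrderer {
    public static List<String> order(Map<String, List<String>> references) {
        Map<String, Integer> pending = new HashMap<>();          // unsatisfied references per resource
        Map<String, List<String>> dependents = new HashMap<>();  // referenced -> resources that reference it
        for (Map.Entry<String, List<String>> e : references.entrySet()) {
            pending.putIfAbsent(e.getKey(), 0);
            for (String ref : e.getValue()) {
                pending.putIfAbsent(ref, 0);
                pending.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(ref, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        pending.forEach((resource, count) -> { if (count == 0) ready.add(resource); });
        List<String> changeList = new ArrayList<>();
        while (!ready.isEmpty()) {
            String resource = ready.poll();
            changeList.add(resource);                             // safe to create/update now
            for (String d : dependents.getOrDefault(resource, Collections.emptyList())) {
                if (pending.merge(d, -1, Integer::sum) == 0) {
                    ready.add(d);                                 // all of d's references now exist
                }
            }
        }
        return changeList;                                        // referenced resources precede their referrers
    }
}
```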
20120131596 | Method and System for Synchronizing Thread Wavefront Data and Events - Systems and methods for synchronizing thread wavefronts and associated events are disclosed. According to an embodiment, a method for synchronizing one or more thread wavefronts and associated events includes inserting a first event associated with a first data output from a first thread wavefront into an event synchronizer. The event synchronizer is configured to release the first event before releasing events inserted subsequent to the first event. The method further includes releasing the first event from the event synchronizer after the first data is stored in the memory. Corresponding system and computer readable medium embodiments are also disclosed. | 05-24-2012 |
20120144398 | DELAYED EXPANSION OF VALUES IN CONTEXT - Application context changes associated with instantiated applications are monitored at a context tracking device. In response to each application context change, relationship context dependency properties between the instantiated applications and application resources associated with the instantiated applications are evaluated. At least one relationship context dependency property that is used by at least one of the instantiated applications is determined to have changed as a result of an application context change. The at least one relationship context dependency property is updated during runtime based upon the application context change. | 06-07-2012 |
20120144399 | APPARATUS AND METHOD FOR SYNCHRONIZATION OF THREADS - A method and apparatus for thread synchronization is provided. The apparatus for thread synchronization includes a reader configured to generate a data read request, a writer configured to generate a data write request, a register file configured to have a full status indicating that the register file stores data and an empty status indicating that the register file stores no data, and a controller configured to receive the data read request from the reader or the data write request from the writer, and to process the received data read request or the received data write request while stalling or releasing the reader or the writer according to whether the register file is in the full status or in the empty status and according to an operating status of the reader or the writer. | 06-07-2012 |
20120151495 | SHARING DATA AMONG CONCURRENT TASKS - A “Concurrent Sharing Model” provides a programming model based on revisions and isolation types for concurrent revisions of states, data, or variables shared between two or more concurrent tasks or programs. This model enables revisions of shared states, data, or variables to maintain determinacy despite nondeterministic scheduling between concurrent tasks or programs. More specifically, the Concurrent Sharing Model provides various techniques wherein shared states, data, or variables are conceptually replicated on forks, and only copied or written if necessary, then deterministically merged on joins such that concurrent tasks or programs can work with independent local copies of the shared states, data, or variables while ensuring automated conflict resolution. This model is applicable to a wide variety of system architectures, including applications that execute tasks on a CPU or GPU, applications that run, in full or in part, on multi-core processors without full shared-memory guarantees, and applications that run within cloud computing environments. | 06-14-2012 |
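The fork/merge behaviour described here (conceptually replicate shared state on fork, let each task work on its own copy, and merge deterministically on join) can be sketched roughly as follows. The Revision class name, the use of CompletableFuture for forked tasks, and the caller-supplied merge function are assumptions; the sketch also assumes the shared value is immutable or otherwise copied, so handing a forked task the current reference behaves like a replica.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BinaryOperator;
import java.util.function.UnaryOperator;

// Each task works on its own copy of the shared value; a deterministic merge
// function reconciles the copies at the join, regardless of scheduling order.
public class Revision<T> {
    private T main;
    private final BinaryOperator<T> merge;

    public Revision(T initial, BinaryOperator<T> merge) {
        this.main = initial;
        this.merge = merge;
    }

    public CompletableFuture<T> fork(UnaryOperator<T> task) {
        final T snapshot = main;                         // conceptual replication on fork
        return CompletableFuture.supplyAsync(() -> task.apply(snapshot));
    }

    public synchronized void join(CompletableFuture<T> revision) {
        main = merge.apply(main, revision.join());       // deterministic merge on join
    }

    public T value() { return main; }
}
```

For instance, new Revision<>(0, Integer::sum) with forked tasks that each return a locally computed increment merges their contributions at join time rather than racing on a shared counter.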
20120151496 | LANGUAGE FOR TASK-BASED PARALLEL PROGRAMING - It is an object of the present invention to provide a program in which dependencies between tasks can be input simply. | 06-14-2012 |
20120159511 | METHOD AND SYSTEM FOR PROCESSING WORK ITEMS - A method and system for processing work items. Information identifying work items from a server responsible for handling work items based on a set of configuration rules is retrieved and stored in a cache that includes N containers. Responsive to a work item request from an application, searching is performed for matching work items in the cache. The work item request specifies n+1 criteria for work items such that n+1 is at least 2, wherein N is a product over a cardinality C | 06-21-2012 |
20120180067 | INFORMATION PROCESSING APPARATUS AND COMPUTER PROGRAM PRODUCT - According to an embodiment, based on task border information, and first-type dependency relationship information containing N number of nodes corresponding to data accesses to one set of data, containing edges representing dependency relationship between the nodes, and having at least one node with an access reliability flag indicating reliability/unreliability of corresponding data access; task border edges, of edges extending over task borders, are identified that have an unreliable access node linked to at least one end, and presentation information containing unreliable access nodes is generated. According to dependency existence information input corresponding to the set of data, conversion information indicating absence of data access to the unreliable access nodes is output. According to the conversion information, the first-type dependency relationship information is converted into second-type dependency relationship information containing M number of nodes (0≦M≦N) corresponding to data accesses to the set of data and containing edges representing inter-node dependency relationship. | 07-12-2012 |
20120180068 | SCHEDULING AND COMMUNICATION IN COMPUTING SYSTEMS - The present invention provides a particular efficient system of scheduling of tasks for parallel processing, and data communication between tasks running in parallel in a computer system. A particular field of application of the present invention is the platform-independent simulation of decomposition/partitioning of an application, in order to obtain an optimal implementation for parallel processing. | 07-12-2012 |
20120180069 | Scheduling Start-Up and Shut-Down of Mainframe Applications using Topographical Relationships - The illustrative embodiments provide for a computer-implemented method for representing actions in a data processing system. A table is generated. The table comprises a plurality of rows and columns. Ones of the columns represent corresponding ones of computer applications that can start or stop in parallel with each other in a data processing system. Ones of the rows represent corresponding ones of sequences of actions within a corresponding column. Additionally, the table represents a definition of relationships among memory address spaces, wherein the table represents when each particular address space is started or stopped during one of a start-up process, a recovery process, and a shut-down process. The resulting table is stored. | 07-12-2012 |
20120204189 | Runtime Dependence-Aware Scheduling Using Assist Thread - A runtime dependence-aware scheduling of dependent iterations mechanism is provided. Computation is performed for one or more iterations of computer executable code by a main thread. Dependence information is determined for a plurality of memory accesses within the computer executable code using modified executable code using a set of dependence threads. Using the dependence information, a determination is made as to whether a subset of a set of uncompleted iterations in the plurality of iterations is capable of being executed ahead-of-time by the one or more available threads in the data processing system. If the subset of the set of uncompleted iterations in the plurality of iterations is capable of being executed ahead-of-time, the main thread is signaled to skip the subset of the set of uncompleted iterations and the set of assist threads is signaled to execute the subset of the set of uncompleted iterations. | 08-09-2012 |
20120222043 | Process Scheduling Using Scheduling Graph to Minimize Managed Elements - A process scheduler may use a scheduling graph to determine which processes, threads, or other execution elements of a program may be scheduled. Those execution elements that have not been invoked or may be waiting for input may not be considered for scheduling. A scheduler may operate by scheduling a current set of execution elements and attempting to schedule a number of generations linked to the currently executing elements. As new elements are added to the scheduled list of execution elements, the list may grow. When the scheduling graph indicates that an execution element will no longer be executed, the execution element may be removed from consideration by a scheduler. In some embodiments, a secondary scan of all available execution elements may be performed on a periodic basis. | 08-30-2012 |
20120222044 | METHOD AND SYSTEM FOR POLLING NETWORK CONTROLLERS - Techniques for improving the performance of multitasking processors are provided. For example, a subset of M processors within a Symmetric Multi-Processing System (SMP) with N processors is dedicated for a specific task. The M (M>0) of the N processors are dedicated to the task, thus leaving (N−M) processors for running the normal operating system (OS). The processors dedicated to the task may have their interrupt mechanism disabled to avoid interrupt handler switching overhead. Therefore, these processors run in an independent context and can communicate and cooperate with the normal OS to achieve higher network performance. | 08-30-2012 |
20120227055 | Workflow Processing System and Method with Database System Support - Methods and apparatus, including computer program products, implementing and using techniques for automatic workflow processing in a workflow processing computer system. A data management system support module receives a data management activity description, determines a set of set references associated with the data management activity, determines a set of data sources associated with the set of set references within a data management system, determines whether the data management system includes infrastructure for accessing the references and for accessing the data sources, in response to determining that the infrastructure is not included, automatically creates the infrastructure from information in a metadata repository coupled to the data management system, replaces in the data management activity description references to set references and references to data sources by references to the infrastructure in the data management system, and delivers the data management activity description for execution by the system. | 09-06-2012 |
20120240131 | SYNCHRONISATION OF EXECUTION THREADS ON A MULTI-THREADED PROCESSOR - Method and apparatus are provided for synchronising execution of a plurality of threads on a multi-threaded processor. Each thread is provided with a number of synchronisation points corresponding to points where it is advantageous or preferable that execution should be synchronised with another thread. Execution of a thread is paused when it reaches a synchronisation point until at least one other thread with which it is intended to be synchronised reaches a corresponding synchronisation point. Execution is subsequently resumed. Where an executing thread branches over a section of code which includes a synchronisation point, execution is paused at the end of the branch until the at least one other thread reaches the synchronisation point or the end of the corresponding branch. | 09-20-2012 |
20120246661 | DATA ARRANGEMENT CALCULATING SYSTEM, DATA ARRANGEMENT CALCULATING METHOD, MASTER UNIT AND DATA ARRANGING METHOD - A data arrangement calculating system including a master unit and a plurality of slave units connected with said master unit. The master unit includes a data arranging section and a job allocating section. The data arranging section includes a data dividing section and an arranging section configured to arrange a first block of the blocks in a first slave unit of the plurality of slave units as an owner block, and arrange the replica block of a second block of the blocks next to the first block in the first slave unit. The first slave unit includes a data retaining section configured to retain said first block and the replica block of said second block and a job executing section. The job executing section executes the sliding window calculation by using the first block and the replica block of the second block. | 09-27-2012 |
20120254887 | PARALLELIZING SCHEDULER FOR DATABASE COMMANDS - A system, method, and computer-readable medium, is described that enables a parallelizing scheduler to analyze database instructions, determine data dependencies among instructions, and provide a multi-threaded approach to running instructions in parallel while preserving data dependencies. | 10-04-2012 |
20120260260 | Managing Job Execution - Various embodiments involve monitoring the execution of jobs in a work plan. For example, a risk level associated with a critical job may be maintained to represent whether the execution of a job preceding the critical job has a problem, and a list associated with the critical job may be maintained so as to quickly identify the preceding job which may cause a delay to the critical job execution. | 10-11-2012 |
20120266182 | SYSTEM AND METHOD FOR THREAD PROTECTED TESTING - A method performed by a system including one or more data processing systems. The method includes receiving a plurality of requesting process calls for a target process from one or more requesting processes, and identifying dependencies between the requesting process calls. The method includes sending the requesting process calls to the target process for execution on multiple threads, including sending thread execution parameters corresponding to the requesting process calls. The method includes receiving results, corresponding to the requesting process calls, from the target process. The method includes sending the results to the requesting processes corresponding to the respective requesting process calls. | 10-18-2012 |
20120291045 | REGISTRATION AND EXECUTION OF HIGHLY CONCURRENT PROCESSING TASKS - A dependency datastructure represents a processing task. The dependency datastructure comprises a plurality of components, each component encapsulating a code unit. The dependency datastructure may include dependency arcs representing inter-component dependencies. Dependencies that are not satisfied by components within the dependency datastructure may be represented as pseudo-components. An execution environment identifies components that can be executed (e.g., have satisfied dependencies), using the dependency datastructure and/or concurrency state metadata. The execution environment may identify and exploit concurrencies in the processing task, allowing for multiple components to be executed in parallel. | 11-15-2012 |
20120304194 | Data processing apparatus and method for processing a received workload in order to generate result data - A data processing apparatus and method are provided for processing a received workload in order to generate result data. A thread group generator generates from the received workload a plurality of thread groups to be executed to process the received workload. Each thread group consists of a plurality of threads, and at least one thread group has an inter-thread dependency existing between the plurality of threads. Each thread may be either an active thread whose output is required to form the result data, or a dummy thread required to resolve the inter-thread dependency for one of the active threads but whose output is not required to form the result data. The thread group generator identifies for each thread group any dummy thread within that thread group. A thread execution unit then executes each thread within a thread group received from the thread group generator by executing a predetermined program comprising a plurality of program instructions. Execution flow modification circuitry is responsive to the received thread group having at least one dummy thread, to cause the thread execution unit to selectively omit at least part of the execution of at least one of the plurality of instructions when executing each dummy thread, in dependence on control information associated with the predetermined program. In one particular embodiment the received workload is a graphics rendering workload and the thread execution unit performs graphics rendering operations in order to generate as the result data pixel values and associated control values. Such an approach can yield significant improvements in performance, as well as reducing power consumption. | 11-29-2012 |
20120304195 | DYNAMIC ATTRIBUTE RESOLUTION FOR ORCHESTRATED MANAGEMENT - A method is provided herein for managing a plurality of computing entities. The method includes sending a dynamic attribute dependency to one or more of the computing entities. The dynamic attribute dependency specifies a constraint for performing the management operation based on a dynamic attribute of each of the one or more computing entities. Additionally, the method includes scheduling, based on the plan, an atomic task configured to perform the management operation on each of the one or more computing entities based on whether the constraint is resolved. The method further includes performing the atomic task if the constraint is resolved. | 11-29-2012 |
20120324472 | TRANSACTIONAL COMPUTATION ON CLUSTERS - Computations are performed on shared datasets in a distributed computing cluster using aggressive speculation and a distributed runtime that executes code transactionally. Speculative transactions are conducted with currently available data on the assumption that no dependencies exist that will render the input data invalid. For those specific instances where this assumption is found to be incorrect—that the input data did indeed have a dependency (thereby impacting the correctness of the speculated transaction)—the speculated transaction is aborted and its results (and all transactions that relied on its results) are rolled-back accordingly for re-computation using updated input data. In operation, shared state data is read and written using only the system's data access API which ensures that computations can be rolled-back when conflicts stemming from later-determined dependencies are detected. | 12-20-2012 |
20130019250 | Interdependent Task Management - An illustrative embodiment of a computer-implemented process for interdependent task management selects a task from an execution task dependency chain to form a selected task, wherein a type selected from a set of types including “forAll,” “runOnce” and none is associated with the selected task and determines whether there is a “forAll” task. Responsive to a determination that there is no “forAll” task, determines whether there is a “runOnce” task and responsive to a determination that there is a “runOnce” task further determines whether there is a semaphore for the selected task. Responsive to a determination that there is a semaphore for the selected task, the computer-implemented process determines whether the semaphore is “on” for the selected task and responsive to a determination that the semaphore is “on,” sets the semaphore “off” and executes the selected task. | 01-17-2013 |
20130031564 | Asynchronously Refreshing, Networked Application with Single-Threaded User Interface - An invention is disclosed for updating a networked, single-threaded application's data model without blocking the application's entire user interface. In embodiments of the invention, a client executes a networked application with a single-threaded user interface that communicates with a server to refresh its data model. The client sends a message to the server that requests a refresh of the data model. Before the data model has been refreshed, the client receives local user input to perform an action on the data model. The client sends a message to the server to cancel the refresh. When the client receives an acknowledgement from the server that the refresh has been cancelled, the client performs the action. After performing the action, the client sends a second message to the server that requests a refresh of the data model, and then refreshes the data model upon receiving the refreshed data model from the server. | 01-31-2013 |
20130036425 | USING STAGES TO HANDLE DEPENDENCIES IN PARALLEL TASKS - Technologies are described herein for using stages for managing dependencies between tasks executed in parallel. A request for permission to execute a task from a group or batch of tasks is received. The specified task is retrieved from a task definition list defining a task ID, stage, and maximum stage for each task in the group. If another pending or currently running task exists with a stage and maximum stage less than the stage defined for the retrieved task, then the retrieved task is not allowed to run. If no other pending or currently running task exists with a stage and maximum stage less than the stage defined for the retrieved task, then the permission to execute the specified task is given. | 02-07-2013 |
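The permission rule quoted in this abstract (a task is not allowed to run while any other pending or currently running task has both a stage and a maximum stage less than the candidate's stage) can be captured by a small gate like the one below. The TaskDef record and the method names are assumptions; only the comparison inside tryStart reflects the rule itself.

```java
import java.util.ArrayList;
import java.util.List;

public class StageGate {
    public record TaskDef(String id, int stage, int maxStage) {}

    private final List<TaskDef> pendingOrRunning = new ArrayList<>();

    // Rule from the abstract: refuse to run the candidate if any other pending or
    // currently running task has both stage and maximum stage below the candidate's stage.
    public synchronized boolean tryStart(TaskDef candidate) {
        for (TaskDef other : pendingOrRunning) {
            if (other.stage() < candidate.stage() && other.maxStage() < candidate.stage()) {
                return false;            // an earlier-stage task must complete first
            }
        }
        pendingOrRunning.add(candidate); // permission granted: track it as running
        return true;
    }

    public synchronized void finish(TaskDef task) {
        pendingOrRunning.remove(task);
    }
}
```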
20130042254 | Performing A Local Barrier Operation - Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value of the counter, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value. | 02-14-2013 |
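The arithmetic in this abstract is concrete enough to sketch directly: each task reads the counter's present value, derives the base value the counter had before any task joined (here by rounding the present value down to a multiple of the task count, which is one way to realize the description), computes a target of base plus the total number of tasks, atomically increments the counter to join, and spins until the counter is no less than the target. This is a minimal sketch assuming exactly totalTasks participants share the barrier, with a Java AtomicLong standing in for the compute node's shared counter.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LocalBarrier {
    private final AtomicLong counter = new AtomicLong();

    public void await(int totalTasks) {
        long present = counter.get();
        long base = (present / totalTasks) * totalTasks; // counter value before any task joined this round
        long target = base + totalTasks;                 // counter value once every task has joined
        counter.incrementAndGet();                       // join the barrier
        while (counter.get() < target) {
            Thread.onSpinWait();                         // repeat until all tasks have joined
        }
    }
}
```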
20130055284 | MANAGING SHARED COMPUTER RESOURCES - Various systems, processes, and products may be used to manage shared computer resources. In particular implementations, managing shared computer resources may include the ability to execute a first process on a first central processing unit and execute a second process on a second central processing unit, wherein the first process and the second process are operable to access a first resource, and to determine at a mutex controller which of the first process and the second process is allowed to access the first resource at a given time. | 02-28-2013 |
20130081049 | Acquiring and transmitting tasks and subtasks to interface devices - Computationally implemented methods and systems include acquiring one or more subtasks that correspond to portions of a task of acquiring data requested by a task requestor, wherein the task of acquiring data is configured to be carried out by two or more discrete interface devices, transmitting at least one of the one or more subtasks to at least two of the two or more discrete interface devices, wherein the one or more subtasks are configured to be carried out in an absence of information regarding the task requestor and/or the task of acquiring data, and receiving result data corresponding to a result of an executed one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081050 | Acquiring and transmitting tasks and subtasks to interface devices - Computationally implemented methods and systems include acquiring one or more subtasks that correspond to portions of a task of acquiring data requested by a task requestor, wherein the task of acquiring data is configured to be carried out by two or more discrete interface devices, transmitting at least one of the one or more subtasks to at least two of the two or more discrete interface devices, wherein the one or more subtasks are configured to be carried out in an absence of information regarding the task requestor and/or the task of acquiring data, and receiving result data corresponding to a result of an executed one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081051 | Acquiring tasks and subtasks to be carried out by interface devices - Computationally implemented methods and systems include receiving a request to carry out a task of acquiring data requested by a task requestor, acquiring one or more subtasks related to the task of acquiring data, determining a set of two or more discrete interface devices that are configured to carry out the one or more subtasks at a particular time and in an absence of information regarding the at least one task and/or the task requestor, and facilitating a transmission of one or more subtasks to two or more of the set of two or more discrete interface devices. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081052 | Acquiring tasks and subtasks to be carried out by interface devices - Computationally implemented methods and systems include receiving a request to carry out a task of acquiring data requested by a task requestor, acquiring one or more subtasks related to the task of acquiring data, determining a set of two or more discrete interface devices that are configured to carry out the one or more subtasks at a particular time and in an absence of information regarding the at least one task and/or the task requestor, and facilitating a transmission of one or more subtasks to two or more of the set of two or more discrete interface devices. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130111496 | PERFORMING A LOCAL BARRIER OPERATION | 05-02-2013 |
20130125134 | SYSTEM AND CONTROL METHOD - If a specific task is to perform a process at the time of processing a specific job, the specific task performs a process specified in the specific task by accessing a third party system using a stored access token and receiving a third party service. | 05-16-2013 |
20130145378 | Determining Collective Barrier Operation Skew In A Parallel Computer - Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time. | 06-06-2013 |
20130145379 | DETERMINING COLLECTIVE BARRIER OPERATION SKEW IN A PARALLEL COMPUTER - Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time. | 06-06-2013 |
20130160025 | RUNTIME OPTIMIZATION OF AN APPLICATION EXECUTING ON A PARALLEL COMPUTER - Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session. | 06-20-2013 |
20130191846 | DATA PROCESSING METHOD, DATA PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING DATA PROCESSING PROGRAM - A data processing method according to the present invention includes executing a third thread for performing a series of procedures (reception, operation, storage, and transmission), in which the series of procedures includes receiving a control signal transmitted from a first thread that supplies input data, then executing an operation using the input data, storing a result of the operation to a data region specified by the control signal, and transmitting the control signal to a second thread that uses the result. This guarantees exclusive data access without locking/unlocking data at the time of executing threads with data dependency and also reduces data transfer cost. | 07-25-2013 |
20130198760 | AUTOMATIC DEPENDENT TASK LAUNCH - One embodiment of the present invention sets forth a technique for automatic launching of a dependent task when execution of a first task completes. Automatically launching the dependent task reduces the latency incurred during the transition from the first task to the dependent task. Information associated with the dependent task is encoded as part of the metadata for the first task. When execution of the first task completes a task scheduling unit is notified and the dependent task is launched without requiring any release or acquisition of a semaphore. The information associated with the dependent task includes an enable flag and a pointer to the dependent task. Once the dependent task is launched, the first task is marked as complete so that memory storing the metadata for the first task may be reused to store metadata for a new task. | 08-01-2013 |
20130205303 | Efficient Checking of Pairwise Reachability in Multi-Threaded Programs - Disclosed is a simple yet effective strategy to check pairwise reachability in an online analysis under a general locking scheme where locks may be acquired in recursive, non-nested, or nested manner. Under data abstraction, such an approach guarantees true positives and negatives for a two-threaded system. For more than two threads, it guarantees either true positives or true negatives (but not both). It uses time stamped lock/unlock events to identify and avoid redundant and inconsistent sequences. Importantly, the approach is incremental and reduces the amortized cost of checking multiple pairwise reachability problems. The worst case complexity is quadratic in the length of the history; in practice, however, the running cost is linear in the length of the history. Such an approach improves the accuracy of the race prediction for general locking styles that include recursive and nesting/non-nesting locks, thereby improving the overall runtime verification. | 08-08-2013 |
20130227586 | Recording Activity of Software Threads in a Concurrent Software Environment - The present disclosure provides a method, computer program product, and activity recording system for identifying idleness in a processor via a concurrent software environment. A thread state indicator records an indication of a synchronization state of a software thread that is associated with an identification of the software thread. A time profiler identifies a processor of the computer system being idle and records an indication that the processor is idle. A dispatch monitor identifies a dispatch of the software thread to the processor. In response to the dispatch monitor determining the indication identifies that the processor is idle and the indication of a synchronization state of the software thread indicating the software thread ceases to execute in the processor, the dispatch monitor generates a record attributing the idleness of the processor to the software thread and the indicated synchronization state. | 08-29-2013 |
20130239120 | CONCURRENT ASSERTION - A concurrency assertions system disclosed herein provides for atomic evaluation of an assertion expression by locking an assertion lock upon initiating an assertion and thereby protecting the assertion evaluation from concurrent modifications to the variables in the assertion expressions. When a violation of an assertion is detected, the concurrency assertions system ensures that the exception statistics at the time of the assertion violation represent a program state where the assertion is violated, thus improving analysis of assertion violations. Furthermore, the concurrency assertions system continuously evaluates an expression for an assertion for a time period while other threads in the program are being executed. | 09-12-2013 |
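The core mechanism, taking an assertion lock before evaluating the assertion expression so that concurrent writers to the referenced variables cannot invalidate the expression mid-evaluation, can be sketched as below. The class name and the convention that writers go through guardedUpdate are assumptions; the continuous-evaluation and violation-statistics aspects of the abstract are not modelled.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.BooleanSupplier;

// Atomic assertion evaluation: the same lock guards both the variables referenced
// by the assertion expression and the evaluation of the expression itself.
public class ConcurrentAssert {
    private final ReentrantLock assertionLock = new ReentrantLock();

    // Writers to the asserted-over variables are expected to run under this lock.
    public void guardedUpdate(Runnable update) {
        assertionLock.lock();
        try { update.run(); }
        finally { assertionLock.unlock(); }
    }

    public void check(BooleanSupplier expression, String message) {
        assertionLock.lock();
        try {
            if (!expression.getAsBoolean()) {
                // The state observed here is exactly the state that violated the assertion.
                throw new AssertionError(message);
            }
        } finally {
            assertionLock.unlock();
        }
    }
}
```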
20130239121 | UNIFIED NETWORK ARCHITECTURE FOR SCALABLE SUPER-CALCULUS SYSTEMS - A network architecture is used for the communication between elementary calculus units or nodes of a supercomputer to execute a super-calculus processing application, partitionable and scalable at the level of calculus power in the range of PetaFLOPS. The supercomputer comprises i) a plurality of modular structures, each of which comprises a plurality of elementary calculus units or nodes defined by node cards, a backplane, a root card, and a node communication network of the switched fabric fat tree type; ii) a synchronization architecture comprising a plurality of distinct node communication networks, configured for the communication of specific synchronization information different from network to network and with different characteristics; iii) a re-configurable Programmable Network Processor that implements the nodes both of the n-toroidal network and those of the synchronization networks. | 09-12-2013 |
20130268945 | IDENTIFYING GLOBALLY CONSISTENT STATES IN A MULTITHREADED PROGRAM - In a method of identifying a globally consistent state in a multithreaded program, a plurality of locally consistent states is identified, in which a locally consistent state of a thread comprises a set of memory locations and their corresponding data values accessed between points in the multithreaded program where no locks are held. Globally consistent states are identified based at least in part on the locally consistent states. | 10-10-2013 |
20130275994 | METHOD AND APPARATUS FOR ACTIVITY MANAGEMENT ACROSS MULTIPLE DEVICES - A method, apparatus and computer program product are provided to synchronize multiple devices. In regards to a method, an indication is received that a view of a task is presented by a first device. The method causes state information to be provided to a second device to permit the second device to be synchronized with the first device and to present a different view of the task than that presented by the first device. The method also receives information relating to a change in state of the task that is provided by one of the devices while a first view of the task is presented thereupon. Further, the method causes updated state information to be provided to another one of the devices to cause the other device to remain synchronized and to update a second view of the task, different than the first view of the task, that is presented. | 10-17-2013 |
20130275995 | Synchronizing Multiple Threads Efficiently - In one embodiment, the present invention includes a method of assigning a location within a shared variable for each of multiple threads and writing a value to a corresponding location to indicate that the corresponding thread has reached a barrier. In such manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed. | 10-17-2013 |
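A minimal sketch of the per-thread-location idea, assuming each thread is assigned its own slot in a shared structure: writing the slot announces arrival at the barrier, and synchronization is established once every slot has been written. An AtomicIntegerArray stands in for the shared (cache-resident) variable here, and the barrier is single-use; both are simplifying assumptions.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class SlotBarrier {
    private final AtomicIntegerArray arrived;

    public SlotBarrier(int threadCount) {
        arrived = new AtomicIntegerArray(threadCount);  // one location per participating thread
    }

    public void await(int threadIndex) {
        arrived.set(threadIndex, 1);        // write this thread's location to signal arrival
        for (int i = 0; i < arrived.length(); i++) {
            while (arrived.get(i) == 0) {
                Thread.onSpinWait();        // wait until every other location has been written
            }
        }
    }
}
```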
20130275996 | SYNCHRONIZATION METHOD - A synchronization method of multiple threads is executed by a computer. The synchronization method includes determining a type of a synchronization process of a first thread performing the synchronization process for synchronization with a second thread; starting time measurement when the type of the synchronization process of the first thread is a first type; performing the synchronization process of the first thread and a synchronization process of the second thread based on a synchronization process history of the second thread when the measured time exceeds a permitted response period of the first thread; and updating the permitted response period and performing the synchronization processes of the first thread and the second thread based on the synchronization process history of the second thread, when another processing request is received. | 10-17-2013 |
20130305258 | METHOD AND SYSTEM FOR PROCESSING NESTED STREAM EVENTS - One embodiment of the present disclosure sets forth a technique for enforcing cross stream dependencies in a parallel processing subsystem such as a graphics processing unit. The technique involves queuing waiting events to create cross stream dependencies and signaling events to indicate completion to the waiting events. A scheduler kernel examines a task status data structure from a corresponding stream and updates dependency counts for tasks and events within the stream. When each task dependency for a waiting event is satisfied, an associated task may execute. | 11-14-2013 |
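Purely as a software analogy of queuing a wait on an event in one stream and signaling that event from another, the sketch below uses two single-threaded executors as "streams" and a CountDownLatch as the event. Those choices, and the class name, are assumptions; the patent itself targets GPU command streams with a scheduler kernel that maintains dependency counts.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CrossStreamEvents {
    public static void main(String[] args) {
        ExecutorService streamA = Executors.newSingleThreadExecutor();
        ExecutorService streamB = Executors.newSingleThreadExecutor();
        CountDownLatch event = new CountDownLatch(1);   // the event that is signaled and waited on

        streamA.submit(() -> {
            System.out.println("A: produce data");
            event.countDown();                          // signal completion to waiting streams
        });
        streamB.submit(() -> {
            try {
                event.await();                          // queued wait: B's task cannot start early
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("B: consume data produced by A");
        });
        streamA.shutdown();
        streamB.shutdown();
    }
}
```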
20130305259 | HARDWARE CONTROL METHOD AND APPARATUS - A hardware control method for multitasking drivers under a user mode is provided. The control method includes steps of: receiving a request for access to a hardware device from a current process under the user mode; determining whether the current process has obtained a mutual exclusion (mutex) of the hardware device; if affirmative, determining whether an identification of the current process and an identification of a previous process that accessed the hardware device are the same; if negative, performing a context switch on the current process and the previous process that accessed the hardware device to allow the current process to access the hardware device. Accordingly, when accessing complicated hardware devices, the disclosure significantly enhances driver performance under a user mode while also implementing secured random access to hardware devices in a multitasking environment. | 11-14-2013 |
20130312007 | SYSTEMS AND METHODS FOR PROVIDING SEMAPHORE-BASED PROTECTION OF SYSTEM RESOURCES - Embodiments include systems and methods that implement semaphore-based protection of various system resources. In an embodiment, a job scheduling module receives a job execution request from a requesting module (e.g., a CPU or other autonomous module). In response to receiving the job execution request, the job scheduling module identifies a descriptor, where the descriptor includes code configured to access a semaphore-protected resource. The job scheduling module causes a descriptor controller module to execute the descriptor. More specifically, execution of the descriptor includes the descriptor controller module performing a semaphore-based access of the protected resource. The job scheduling module also may coordinate sharing the descriptor among multiple descriptor controller modules (e.g., allowing parallel execution of portions of the descriptor). In various embodiments, using protection status flags or tokens that are accessed by the descriptor, semaphore-based protection of the resource is enforced even while the descriptor is being shared. | 11-21-2013 |
20130312008 | Integrated Network System - An embodiment of the present invention establishes a neural network of handheld devices with a master server so that the master server may parcel out a large task into many smaller tasks to be assigned to one or more networked and subservient handheld devices. The handheld devices will then use their computing power to process the assigned smaller tasks and send the output to the master server for its compilation of the output data for producing an answer to the large task. | 11-21-2013 |
20130318540 | DATA FLOW GRAPH PROCESSING DEVICE, DATA FLOW GRAPH PROCESSING METHOD, AND DATA FLOW GRAPH PROCESSING PROGRAM - A data flow graph processing device that transforms a data flow graph including a loop structure into a pipeline operation capable of determining node execution order and judging whether or not executable, comprises: a delay node divider that divides a delay node included in the data flow graph into a value update node and a value output node; a dependency relation adder that adds dependency relations from the start node of the data flow graph to the value output node; and a hidden dependency relation adder that adds hidden dependency relations, indicating previous iteration and current iteration dependencies, from the value update node to the value output node. | 11-28-2013 |
20130326536 | SYSTEMS AND METHODS FOR DETECTING CONFLICTING OPERATIONS AND PROVIDING RESOLUTION IN A TASKING SYSTEM - A mechanism for detecting conflicting operations and providing resolutions in a tasking system is disclosed. A method includes receiving, by a processing device in a tasking system, a request for a call including at least one operation to be executed on at least one resource of a plurality of resources that are managed by the tasking system. The method also includes detecting an occurrence of a conflict between the at least one operation on the call request and queued operations associated with the plurality of resources. The method also includes generating at least one of a task or an error report for the at least one operation in the call request based on the conflict. The method further includes identifying task dependencies associated with the at least one task and executing the at least one task only after execution of the task dependencies. | 12-05-2013 |
20130326537 | DEPENDENCY MANAGEMENT IN TASK SCHEDULING - A task is marked as dependent upon a preceding task. When the marked task is attempted to be taken for execution from the head of a pending task queue, it is deferred. The deferred task is removed from the pending task queue and placed in a deferred task queue. The deferred task is reinserted into the pending task queue for execution upon determining that the preceding tasks are completed. | 12-05-2013 |
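The pending-queue/deferred-queue mechanics read naturally as the small scheduler below: a marked task taken from the head of the pending queue whose predecessor has not completed is moved to the deferred queue, and it is reinserted into the pending queue once that predecessor completes. The Task record with a single dependsOn field and the completed-set bookkeeping are assumptions made to keep the sketch short.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class DeferringScheduler {
    public record Task(String id, String dependsOn) {}   // dependsOn may be null (no predecessor)

    private final Deque<Task> pending = new ArrayDeque<>();
    private final Deque<Task> deferred = new ArrayDeque<>();
    private final Set<String> completed = new HashSet<>();

    public void submit(Task t) { pending.addLast(t); }

    // Take the next runnable task from the head of the pending queue; a task whose
    // marked predecessor has not completed is moved to the deferred queue instead.
    public Task takeRunnable() {
        while (!pending.isEmpty()) {
            Task head = pending.pollFirst();
            if (head.dependsOn() == null || completed.contains(head.dependsOn())) {
                return head;
            }
            deferred.addLast(head);
        }
        return null;    // nothing runnable right now
    }

    // On completion, reinsert any deferred task whose predecessor just finished.
    public void onTaskCompleted(String id) {
        completed.add(id);
        deferred.removeIf(t -> {
            if (id.equals(t.dependsOn())) { pending.addLast(t); return true; }
            return false;
        });
    }
}
```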
20130332939 | DATA PROCESSING APPARATUS AND METHOD FOR PROCESSING A RECEIVED WORKLOAD IN ORDER TO GENERATE RESULT DATA - A data processing apparatus and method are provided for processing a received workload in order to generate result data. A thread group generator generates from the received workload a plurality of thread groups to be executed to process the received workload. Each thread group consists of a plurality of threads, and at least one thread group has an inter-thread dependency existing between the plurality of threads. Each thread may be either an active thread whose output is required to form the result data, or a dummy thread required to resolve the inter-thread dependency for one of the active threads but whose output is not required to form the result data. The thread group generator identifies for each thread group any dummy thread within that thread group. A thread execution unit then executes each thread within a thread group received from the thread group generator by executing a predetermined program comprising a plurality of program instructions. Execution flow modification circuitry is responsive to the received thread group having at least one dummy thread, to cause the thread execution unit to selectively omit at least part of the execution of at least one of the plurality of instructions when executing each dummy thread, in dependence on control information associated with the predetermined program. In one particular embodiment the received workload is a graphics rendering workload and the thread execution unit performs graphics rendering operations in order to generate as the result data pixel values and associated control values. Such an approach can yield significant improvements in performance, as well as reducing power consumption. | 12-12-2013 |
20130347001 | COMPUTER-READABLE RECORDING MEDIUM, EXCLUSION CONTROL APPARATUS, AND EXCLUSION CONTROL METHOD - An exclusion control method includes setting, for at least one piece of operation information defining operations for an information processing apparatus and included in a plurality of pieces of workflow information that indicate operation procedures, exclusive sections that indicate units of exclusion control performing an exclusive lock; calculating priorities of the exclusive sections using operation importance level information that indicates importance levels of the operations according to types of the operation information and operation urgency level information that indicates urgency levels of the operations, when the operations are executed based on the operation information corresponding to the exclusive sections of the plurality of pieces of workflow information; and executing the exclusion control in the exclusive sections for the plurality of workflows based on the priorities when a conflict regarding the exclusive lock occurs between the exclusive sections. | 12-26-2013 |
20140007133 | SYSTEM AND METHOD TO PROVIDE SINGLE THREAD ACCESS TO A SPECIFIC MEMORY REGION | 01-02-2014 |
20140007134 | MANAGING COMPUTING RESOURCES THROUGH AGGREGATED CORE MANAGEMENT | 01-02-2014 |
20140026148 | LOW POWER EXECUTION OF A MULTITHREADED PROGRAM - Technologies for low power execution of one or more threads of a multithreaded program by one or more processing elements are generally disclosed. | 01-23-2014 |
20140033224 | METHOD FOR MONITORING THE COORDINATED EXECUTION OF SEQUENCED TASKS BY AN ELECTRONIC CARD COMPRISING AT LEAST TWO PROCESSORS SYNCHRONIZED TO ONE AND THE SAME CLOCK - A method for monitoring the coordinated execution of sequenced tasks by an electronic card including at least one first processor (PP | 01-30-2014 |
20140033225 | METHOD FOR MONITORING THE COORDINATED EXECUTION OF SEQUENCED TASKS BY AN ELECTRONIC CARD COMPRISING AT LEAST TWO PROCESSORS SYNCHRONIZED TO TWO DIFFERENT CLOCKS - A method for monitoring the coordinated execution of sequenced tasks by an electronic device including a main electronic card including at least one main processor synchronized to a main clock and at least one auxiliary electronic card including at least one auxiliary processor synchronized to an auxiliary clock, includes | 01-30-2014 |
20140053163 | THREAD PROCESSING METHOD AND THREAD PROCESSING SYSTEM - A thread processing method that is executed by a multi-core processor, includes supplying a command to execute a first thread to a first processor; judging a dependence relationship between the first thread and a second thread to be executed by a second processor; comparing a first threshold and a frequency of access of any one among shared memory and shared cache memory by the first thread; and changing a phase of a first operation clock of the first processor when the access frequency is greater than the first threshold and upon judging that no dependence relationship exists. | 02-20-2014 |
20140059562 | COMPUTER-READABLE RECORDING MEDIUM ON WHICH SCHEDULE MANAGEMENT PROGRAM IS RECORDED, SCHEDULE MANAGEMENT APPARATUS AND SCHEDULE MANAGEMENT METHOD - A processor registers a scheduled start timing and a scheduled end timing for each of a plurality of processes in advance into a storage unit, decides, based on the scheduled start timings and the scheduled end timings registered in the storage unit, whether or not the processes have a dependency relationship therebetween, and extracts a plurality of schedule paths by connecting those of the processes decided to have the dependency relationship therebetween to each other. A schedule path of the processes can be produced in a simplified manner without significantly impairing the accuracy of the time relationship among the processes. | 02-27-2014 |
20140059563 | DEPENDENCY MANAGEMENT IN TASK SCHEDULING - A task is marked as dependent upon a preceding task. The task that is attempted to be taken for execution from a head of a pending task queue that is marked is deferred. The deferred task is removed from the pending task queue and placed in a deferred task queue. The deferred task is reinserted back into the pending task queue for execution upon determining that the preceding tasks are completed. | 02-27-2014 |
20140059564 | METHOD AND SYSTEM FOR PROCESSING ADMINISTRATION COMMANDS IN A CLUSTER - The disclosure relates in particular to the processing of commands targeting at least one element of a cluster including a plurality of elements, the at least one element having a link of dependency according to the at least one command with at least one other element. After having identified the at least one element and at least one dependency rule from the at least one command, a dependency graph is generated from the at least one identified element, by applying the at least one identified dependency rule, the dependency graph including peaks representing at least the element and the at least one other element, an action linked with the at least one command being associated with the peaks of the dependency graph. A sequence of instructions is then generated from the dependency graph. | 02-27-2014 |
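As a rough illustration of turning a dependency graph into a sequence of instructions, the sketch below builds the graph from hypothetical dependency rules and emits actions in topological order, so an action on an element is generated only after the actions on the elements that must precede it. All names and the example command are assumptions, not taken from the application.

```python
# Illustrative sketch: build a dependency graph for the elements targeted by
# an administration command, then derive an instruction sequence by
# topological ordering of the graph.
from graphlib import TopologicalSorter

def instruction_sequence(action, elements, dependency_rules):
    # dependency_rules: {element: set of elements it depends on}
    graph = {e: dependency_rules.get(e, set()) for e in elements}
    order = TopologicalSorter(graph).static_order()
    return [f"{action} {element}" for element in order]

# "stop" a compute node only after the services that depend on it are handled
print(instruction_sequence(
    "stop",
    ["node1", "lustre", "slurm"],
    {"node1": {"lustre", "slurm"}}))
# ['stop lustre', 'stop slurm', 'stop node1']  (order of the first two may vary)
```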
20140075449 | Method and Apparatus for Synchronous Processing Based on Multi-Core System - Embodiments of the present invention relate to the field of communications network technologies and provide a method and an apparatus for synchronization processing based on a multi-core system, which can improve efficiency in system scheduling and consume fewer resources. According to the solutions provided in the present invention, an initialization setting is sent by any processing device in a first group of processing devices that synchronously process a same current task and initialization is performed; then a notification message sent by any processing device in the first group of processing devices is received and 1 is subtracted from a value of a counting semaphore; and when the value of the counting semaphore is 0, a control message is sent to a second group of processing devices through a message sending interface. The solutions provided in the present invention are applicable to processing synchronization and communication between multiple modules. | 03-13-2014 |
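The counting-semaphore coordination in the entry above can be sketched in a few lines. In this illustrative Python version (all names assumed), the counter starts at the size of the first group of processing devices; each device decrements it when its share of the task completes, and the device that brings the count to zero sends the control message to the second group.

```python
# Illustrative sketch of group synchronization via a counting semaphore:
# the last device of the first group to finish triggers the control message.
import threading

class GroupSync:
    def __init__(self, group_size, notify_second_group):
        self._count = group_size
        self._lock = threading.Lock()
        self._notify = notify_second_group

    def task_done(self, device_id):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                self._notify(f"control message (last device: {device_id})")

sync = GroupSync(3, notify_second_group=print)
for i in range(3):
    threading.Thread(target=sync.task_done, args=(i,)).start()
```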
20140101673 | DYNAMIC DEPENDENCY EVALUATION FOR COMPUTING TASK EXECUTION - The subject disclosure is directed towards scheduling computing task execution by dynamically evaluating dependencies between computing tasks. After executing independent computing tasks in parallel, one or more dependent tasks are scheduled for execution. Unless a task failed, dependencies between remaining tasks are examined to identify one or more computing tasks that do not correspond to a dependency and/or one or more computing tasks that depend upon successfully completed tasks. Dynamic dependency evaluation may be applied to improve self-healing task execution. | 04-10-2014 |
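A compact sketch of dynamic dependency evaluation, under assumed names and data structures: independent tasks run in parallel first, and after every round the remaining tasks are re-examined so that any task whose dependencies all completed successfully becomes eligible, while tasks that depend on a failed task are never scheduled.

```python
# Illustrative sketch of scheduling by dynamically re-evaluating dependencies
# between rounds of parallel execution.
from concurrent.futures import ThreadPoolExecutor

def schedule(tasks, deps):
    # tasks: {name: callable returning True on success}; deps: {name: set of names}
    done, failed, remaining = set(), set(), dict(tasks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [n for n in remaining
                     if deps.get(n, set()) <= done and not (deps.get(n, set()) & failed)]
            if not ready:
                break                      # nothing runnable (failure or cycle)
            futures = {n: pool.submit(remaining.pop(n)) for n in ready}
            for name, fut in futures.items():
                (done if fut.result() else failed).add(name)
    return done, failed

done, failed = schedule(
    {"a": lambda: True, "b": lambda: True, "c": lambda: True},
    {"c": {"a", "b"}})
print(done, failed)   # a and b run in parallel, then c
```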
20140109106 | CODE DEPENDENCY CALCULATION - Generation of a dependency graph for code that includes code portions such as resources or functions or both. For some or all of the nodes, the dependency is calculated by determining that the given node, a depending node, depends on an affecting node. The dependency is recorded so as to be associated with the node. Furthermore, the dependency calculation method is recorded so as to be associated with the dependency. The code may perhaps include portions within two different domains, in which the mechanism for calculating dependencies may differ. In some cases, the dependency graph may be constructed in stages, and perhaps additional properties may be associated with the node, and metadata of the properties may also be recorded. | 04-17-2014 |
20140115604 | METHODS AND SYSTEMS TO IDENTIFY AND REPRODUCE CONCURRENCY VIOLATIONS IN MULTI-THREADED PROGRAMS - Methods and systems to identify and reproduce concurrency violations in multi-threaded programs are disclosed. An example method disclosed herein comprises determining whether a condition is met and serializing an operation of a first thread of a multi-threaded program relative to an operation of a second thread of the multi-threaded program. The serialization of the operations of the first and second threads results in a concurrency violation or bug, thereby causing the multi-threaded program to crash. In this way, the operations of the first and second threads of the multi-threaded program that are responsible for the concurrency violation are identified and can be revised to remove the bug. | 04-24-2014 |
20140130059 | Lattice Computing - This invention relates to a machine implemented method of executing CPU instructions on a plurality of computers in one or more locations, logically arranged in a weighted, lattice-like structure representing information about CPUs, CPU cores, operating system threads, network interconnects, and computer locations in a many-to-many relationship. This approach, by weighting nodes and costing edges, provides a natural method for commoditizing the execution of a workload. Furthermore, this approach lends itself to a means of determining the incremental value (or cost) of additional nodes. Consequently, the creation of a virtual crowd-sourcing market—in which either CPUs singularly or lattices as a whole are market participants—is a natural extension of the method. | 05-08-2014 |
20140157286 | SYSTEM AND METHOD FOR THREAD PROTECTED TESTING - A method performed by a system including one or more data processing systems. The method includes receiving a plurality of requesting process calls for a target process from one or more requesting processes and identifying dependencies between the requesting process calls. The method includes sending the requesting process call to the target process for execution on multiple threads, including sending thread execution parameters corresponding to the requesting process calls, the thread execution parameters indicating that the requesting process calls can be executed by the target process simultaneously and independently, that the requesting process calls must be processed in a specific order based on the dependencies, or that the requesting process calls are to be executed with shared process objects. The method includes receiving results from the target process. The method includes sending the results to the requesting processes corresponding to the respective requesting process calls. | 06-05-2014 |
20140173625 | TASK COMPLETION THROUGH INTER-APPLICATION COMMUNICATION - Among other things, one or more techniques and/or systems for facilitating task completion through inter-application communication and/or for registering a target application for contextually aware task execution are provided. That is, a current application may display content comprising an entity (e.g., a mapping application may display a restaurant entity). One or more actions capable of being performed on the entity may be exposed (e.g., a reserve table action). Responsive to selection of an action, one or more target applications capable of performing the action on the entity may be presented. Responsive to selection of a target application, contextual information for the entity and/or the action may be passed to the target application so that the target application may be launched in a contextually relevant state to facilitate completion of a task. For example, a dining application may be launched to a table reservation form for the restaurant entity. | 06-19-2014 |
20140173626 | BROADCASTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for broadcasting shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer detecting, by a runtime optimizer of the parallel computer, a change in SVD information within an SVD associated with a first thread. Embodiments also include a runtime optimizer identifying a plurality of threads requiring notification of the change in the SVD information. Embodiments also include the runtime optimizer in response to detecting the change in the SVD information, broadcasting to each thread of the identified plurality of threads, a broadcast message header and update data indicating the change in the SVD information. | 06-19-2014 |
20140173627 | REQUESTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION FROM A PLURALITY OF THREADS IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for requesting shared variable directory (SVD) information from a plurality of threads in a parallel computer are provided. Embodiments include a runtime optimizer detecting that a first thread requires a plurality of updated SVD information associated with shared resource data stored in a plurality of memory partitions. Embodiments also include a runtime optimizer broadcasting, in response to detecting that the first thread requires the updated SVD information, a gather operation message header to the plurality of threads. The gather operation message header indicates an SVD key corresponding to the required updated SVD information and a local address associated with the first thread to receive a plurality of updated SVD information associated with the SVD key. Embodiments also include the runtime optimizer receiving at the local address, the plurality of updated SVD information from the plurality of threads. | 06-19-2014 |
20140181835 | HYBRID DEPENDENCY ANALYSIS USING DYNAMIC AND STATIC ANALYSES - A method, computer program product, and system for performing a hybrid dependency analysis is described. According to an embodiment, a method may include computing, by one or more computing devices, one or more dynamic hints based on a finite set of executions of a computer program. The method may further include performing, by the one or more computing devices, a hybrid dependence analysis of one or more statements of the computer program. | 06-26-2014 |
20140181836 | HYBRID DEPENDENCY ANALYSIS USING DYNAMIC AND STATIC ANALYSES - A method, computer program product, and system for performing a hybrid dependency analysis is described. According to an embodiment, a method may include computing, by one or more computing devices, one or more dynamic hints based on a finite set of executions of a computer program. The method may further include performing, by the one or more computing devices, a hybrid dependence analysis of one or more statements of the computer program. | 06-26-2014 |
20140189711 | COOPERATIVE THREAD ARRAY GRANULARITY CONTEXT SWITCH DURING TRAP HANDLING - Techniques are provided for restoring thread groups in a cooperative thread array (CTA) within a processing core. Each thread group in the CTA is launched to execute a context restore routine. Each thread group, executes the context restore routine to restore from a memory a first portion of context associated with the thread group, and determines whether the thread group completed an assigned function prior to executing the context restore routine. If the thread group completed an assigned function prior to executing the context restore routine, then the thread group exits the context restore routine. If the thread group did not complete the assigned function prior to executing the context restore routine, then the thread group executes one or more operations associated with a trap handler routine. One advantage of the disclosed techniques is that the trap handling routine operates efficiently in parallel processors. | 07-03-2014 |
20140196057 | Managing Job Execution - This disclosure describes monitoring the execution of jobs in a work plan. In an embodiment, a system maintains a risk level associated with a critical job to represent whether the execution of a job preceding the critical job has a problem, and it maintains a list associated with the critical job so as to quickly identify the preceding job which may cause a delay to the critical job execution. | 07-10-2014 |
20140201759 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND PROCESS EXECUTION METHOD - An information processing system includes a managing unit that sorts a process execution request based on a type of process of the process execution request; a storing unit that stores the sorted process execution request according to the type of process of the process execution request; and a plurality of executing units that are configured to execute a process corresponding to the process execution request stored in the storing unit. At least one executing unit of the plurality of executing units is configured to split the process corresponding to the process execution request stored in the storing unit into a plurality of processes to be executed by at least two other executing units of the plurality of executing units and store in the storing unit a split process execution request including the split processes for prompting the other executing units to cooperatively execute the split processes. | 07-17-2014 |
20140215487 | OPTIMIZING EXECUTION AND RESOURCE USAGE IN LARGE SCALE COMPUTING - A method for tuning workflow settings in a distributed computing workflow comprising sequential interdependent jobs includes pairing a terminal stage of a first job and a leading stage of a second, sequential job to form an optimization pair, in which data segments output by the terminal stage of the first job comprise data input for the leading stage of the second job. The performance of the optimization pair is tuned by determining, with a computational processor, an estimated minimum execution time for the optimization pair and increasing the minimum execution time to generate an increased execution time. The method further includes calculating a minimum number of data segments that still permit execution of the optimization pair within the increased execution time. | 07-31-2014 |
20140250441 | DEADLOCK PREVENTING APPARATUS, DEADLOCK PREVENTING METHOD, AND PROGRAM - A deadlock preventing apparatus includes a deadlock detecting section | 09-04-2014 |
20140259024 | Computer System and Method for Runtime Control of Parallelism in Program Execution - A computer system and method are provided to assess a proper degree of parallelism in executing programs to obtain efficiency objectives, including but not limited to increases in processing speed or reduction in computational resource usage. This assessment of proper degree of parallelism may be used to actively moderate the requests for threads by application processes to control parallelism when those efficiency objectives would be furthered by this control. | 09-11-2014 |
20140259025 | METHOD AND APPARATUS FOR PARALLEL COMPUTING - The present invention relates to a method and apparatus for parallel computing. According to one embodiment of the present invention, there is provided a job parallel processing method, the job processing at least comprising executing an upstream task in a first phase and executing a downstream task in a second phase. The method comprises: quantitatively determining data dependence between the upstream task and the downstream task; and selecting time for initiating the downstream task at least partially based on the data dependence. There is further disclosed a corresponding apparatus. According to embodiments of the present invention, it is possible to more accurately and quantitatively determine data dependence between tasks during different phases and thus select the right time to initiate a downstream task. | 09-11-2014 |
20140282599 | COLLECTIVELY LOADING PROGRAMS IN A MULTIPLE PROGRAM MULTIPLE DATA ENVIRONMENT - Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program. | 09-18-2014 |
20140282600 | EXECUTING ALGORITHMS IN PARALLEL - Among other things, a machine-based method comprises receiving an application specification comprising one or more algorithms. Each algorithm is not necessarily suitable for concurrent execution on multiple nodes in parallel. One or more different object classes are grouped into one or more groups, each being appropriate for executing the one or more algorithms of the application specification. The executing involves data that is available in objects of the object classes. A user is enabled to code an algorithm of the one or more algorithms for one group in a single threaded environment without regard to concurrent execution of the algorithm on multiple nodes in parallel. A copy of the coded algorithm is distributed to each of the multiple nodes, without needing additional coding. The coded algorithm is caused to be executed on each node in association with at least one instance of a group independently of and in parallel to executing the other copies of the coded algorithm on the other nodes. | 09-18-2014 |
20140282601 | METHOD FOR DEPENDENCY BROADCASTING THROUGH A BLOCK ORGANIZED SOURCE VIEW DATA STRUCTURE - A method for dependency broadcasting through a block organized source view data structure. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks; using a plurality of register templates to track instruction destinations and instruction sources by populating the register template with block numbers corresponding to the instruction blocks, wherein the block numbers corresponding to the instruction blocks indicate interdependencies among the blocks of instructions; populating a block organized source view data structure, wherein the source view data structure stores sources corresponding to the instruction blocks as recorded by the plurality of register templates; upon dispatch of one block of the instruction blocks, broadcasting a number belonging to the one block to a column of the source view data structure that relates to that block and marking the column accordingly; and updating the dependency information of remaining instruction blocks in accordance with the broadcast. | 09-18-2014 |
20140282602 | GENERIC WAIT SERVICE: PAUSING A BPEL PROCESS - A method of pausing a plurality of service-oriented application (SOA) instances may include receiving, from an instance of an SOA entering a pause state, an initiation message. The initiation message may include an exit criterion that identifies a business condition that must be satisfied before the instance of the SOA exits the pause state. The method may also include receiving a notification from an event producer, the notification comprising a status of a business event and determining whether the status of the business event satisfies the business condition of the exit criterion. The method may additionally include sending, in response to a determination that the status of the business event satisfies the business condition of the exit criterion, an indication to the instance of the SOA that the business condition has been satisfied such that the instance of the SOA can exit the pause state. | 09-18-2014 |
20140304713 | METHOD AND APPARATUS FOR DISTRIBUTED PROCESSING TASKS - Methods and apparatuses for realizing and enabling a requested computational main task by operation of a task manager and a service node. The task manager defines a set of sub-tasks that accomplish the main task, and sends a source code to the service node including a device instruction for a device connected to the service node, to fetch and execute a sub-task from the task manager. The service node then sends the source code to the device which accordingly fetches and executes the sub-task according to the instruction. When the task manager has received enough sub-task results such that the main task has been completed, it returns an aggregated total result of the main task in response to the main task request. | 10-09-2014 |
20140317636 | Method And Apparatus For Exploiting Data Locality In Dynamic Task Scheduling - A method for scheduling tasks to processor cores of a parallel computing system may include the steps of processing a source code which comprises at least one parallel lambda function having a function body called by a task and having a capture list specifying the data structures accessed in the function body of said parallel lambda function and used to derive data location information; executing the task calling said function body on the processor core which is associated to a memory unit of the parallel computing system where the data of the data structures specified by said capture list is stored, wherein the memory unit is selected or localized based on the derived data location information. | 10-23-2014 |
20140325525 | Monitoring of Computer Events - A data processing system ( | 10-30-2014 |
20140337854 | DATABASE DISPATCHER - Example methods and systems are directed to dispatching database tasks. An application may access data associated with a task. The data may indicate features (e.g., processing functionality) that will be used to complete the task. The application may determine whether all such features are implemented in the database layer. The application may dispatch the task to the database layer if all features are implemented therein. The application may perform the task in the application layer if one or more of the features are not available in the database layer. In some example embodiments, the task involves materials requirements planning. Such a task may include determining, for a given bill of materials (“BOM”), the quantity of materials available on-hand, the quantity available from suppliers, the transport or delivery time for the various quantities, and other data regarding the BOM. | 11-13-2014 |
20140351825 | SYSTEMS AND METHODS FOR DIRECT MEMORY ACCESS COHERENCY AMONG MULTIPLE PROCESSING CORES - A multi-core system configured to execute a plurality of tasks and having a semaphore engine and a direct memory access (DMA) engine capable of selecting, by a task scheduler of a first core, a first task for execution by the first core. In response to a semaphore lock request, the task scheduler of the first core switches the first task to an inactive state and selects a next task for execution by the first core. After the semaphore engine acquires the semaphore lock of the first semaphore, a data transfer request is provided to the DMA engine. In response to the data transfer request, the DMA engine transfers data associated with the locked first semaphore to the entry of the workspace of the first core. | 11-27-2014 |
20140366036 | Class-Based Mutex - When different types of shared resources need mutex protection, the shared resources can be organized into classes. Each class of shared resources can have multiple types of resources. A mutex pool can store multiple mutex objects, each mutex object corresponding to a class of resources. The mutex object can be used to protect each shared resource in the corresponding class. | 12-11-2014 |
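The class-based mutex pool lends itself to a very small sketch. The Python fragment below is an illustration only (names are invented): one mutex object is created per resource class, and that single mutex protects every shared resource belonging to the class.

```python
# Illustrative sketch of a class-based mutex pool: one lock per resource class.
import threading

class MutexPool:
    def __init__(self, classes):
        self._mutexes = {cls: threading.Lock() for cls in classes}

    def lock_for(self, resource_class):
        return self._mutexes[resource_class]

pool = MutexPool(["connection", "buffer", "file-handle"])
with pool.lock_for("buffer"):       # protects every resource of class "buffer"
    pass  # ... touch any shared resource belonging to that class ...
```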
20140366037 | Planning Execution of Tasks with Dependency Resolution - A computer-implemented method, program product and system for planning execution of a plurality of tasks according to a plurality of dependencies includes receiving an indication of a task type and execution time, ordering the tasks into a task list according to a primary ordering criterion, receiving an indication of a dependency type for the task type and an indication of a dependency time for the execution time of a predecessor one of the tasks, ordering the dependencies into a dependency list according to the primary ordering criterion, scanning the dependency list for resolving each current one of the dependencies, identifying the predecessor task as a current one of the tasks having the task type meeting the dependency type and the execution time meeting the dependency time, and planning the execution of the tasks according to the resolved dependencies. | 12-11-2014 |
20140373027 | APPLICATION LIFETIME MANAGEMENT - One or more techniques and/or systems are provided for facilitating lifetime management of dynamically created child applications and/or for managing dependencies between a set of applications of an application package. In an example, a parent application may dynamically create a child application. A child lifetime of the child application may be managed independently and/or individually from lifetimes of other applications with which the child application does not have a dependency relationship. In another example, an application within an application package may be identified as a dependency application that may provide functionality depended upon by another application, such as a first application, within the application package. A dependency lifetime of the dependency application may be managed according to a first lifetime of the first application. In this way, lifetimes (e.g., initialization, execution, suspension, termination, etc.) of applications may be managed to take into account dynamically created child applications and/or dependency relationships. | 12-18-2014 |
20140373028 | Software Only Inter-Compute Unit Redundant Multithreading for GPUs - A system, method and computer program product to execute a first and a second work-group, and compare the signature variables of the first work-group to the signature variables of the second work-group via a synchronization mechanism. The first and the second work-group are mapped to an identifier via software. This mapping ensures that the first and second work-groups execute exactly the same data for exactly the same code without changes to the underlying hardware. By executing the first and second work-groups independently, the underlying computation of the first and second work-groups can be verified. Moreover, system performance is not substantially affected because the execution results of the first and second work-groups are compared only at specified comparison points. | 12-18-2014 |
20140373029 | Recording Activity of Software Threads in a Concurrent Software Environment - The present disclosure provides a method for identifying idleness in a processor via a concurrent software environment. A thread state indicator records an indication of a synchronization state of a software thread that is associated with an identification of the software thread. A time profiler identifies a processor of the computer system being idle and records an indication that the processor is idle. A dispatch monitor identifies a dispatch of the software thread to the processor. In response to the dispatch monitor determining the indication identifies that the processor is idle and the indication of a synchronization state of the software thread indicating the software thread ceases to execute in the processor, the dispatch monitor generates a record attributing the idleness of the processor to the software thread and the indicated synchronization state. | 12-18-2014 |
20150026699 | INFORMATION ACQUISITION METHOD AND INFORMATION ACQUISITION APPARATUS - A storage unit stores information indicating the past execution records of a plurality of related tasks. An operating unit obtains information indicating the execution status of a plurality of tasks. After obtaining the information, the operating unit obtains the execution status of an unexecuted task included in the plurality of related tasks, which was not yet executed, at an estimated time by which the unexecuted task is completed and which is estimated based on the information stored in the storage unit. | 01-22-2015 |
20150046929 | USING-SUB-PROCESSES ACROSS BUSINESS PROCESSES IN DIFFERENT COMPOSITES - A system and method for facilitating reuse of a portion of process logic by different processes. An example method includes providing a subprocess that is adapted to perform the process logic in a file accessible to a composite system, wherein the subprocess is adapted to be called by a first parent process via a subprocess extension to a business process language employed to encode the first parent process; using a call activity defined as part of the subprocess extension, and included in a scope of the first parent process to facilitate access to functionality of the subprocess by the parent process; and employing a business process engine to facilitate instantiating the subprocess, resulting in an instantiated subprocess in response thereto; and using a second parent process to share use of the instantiated subprocess with the first parent process. | 02-12-2015 |
20150052537 | BARRIER SYNCHRONIZATION WITH DYNAMIC WIDTH CALCULATION - A sequencer of a processing unit determines, at runtime, a barrier width of a barrier operation for a group threads, wherein the barrier width is smaller than a total number of threads in the group of threads, and wherein threads in the group of threads execute data parallel code on one or more compute units. In response to each thread in a subgroup of the group of threads having executed the barrier operation, the subgroup including a same number of threads as the barrier width, the sequencer may enable the subgroup of the group of threads to execute on the one or more processors past the barrier operation without waiting for other threads in the group of threads to execute the barrier operation, wherein the subgroup of the group of threads is smaller than the total number of threads in the group of threads. | 02-19-2015 |
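A barrier whose width is smaller than the thread group can be sketched as follows. This is an illustrative Python version (not the disclosed sequencer): every `width` arrivals form a subgroup that is released together, without waiting for the rest of the group to reach the barrier.

```python
# Illustrative sketch of a barrier with a width smaller than the thread group:
# each subgroup of `width` arrivals is released independently.
import threading

class SubgroupBarrier:
    def __init__(self, width):
        self._width = width
        self._cond = threading.Condition()
        self._arrived = 0
        self._generation = 0

    def wait(self):
        with self._cond:
            gen = self._generation
            self._arrived += 1
            if self._arrived == self._width:      # subgroup complete: release it
                self._arrived = 0
                self._generation += 1
                self._cond.notify_all()
            else:
                while gen == self._generation:
                    self._cond.wait()

barrier = SubgroupBarrier(width=2)   # group of 4 threads, barrier width 2
def worker(i):
    barrier.wait()
    print("thread", i, "past the barrier")

for i in range(4):
    threading.Thread(target=worker, args=(i,)).start()
```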
20150058865 | MANAGEMENT OF BOTTLENECKS IN DATABASE SYSTEMS - Management is provided for threads of a database system that is subject to a plurality of disparate bottleneck conditions for resources. A monitor thread retrieves, from a first thread, first monitor data for first bottleneck condition of a first type. The monitor thread compares the first monitor data to a trigger level for the first bottleneck condition and then determines, in response to the comparison of the first monitor data to the trigger level, a potential source of the first bottleneck condition. A potential blocker thread is identified based upon the potential source of the first bottleneck condition. The monitor thread retrieves, from the potential blocker thread, second monitor data for a second type of bottleneck condition that is different from the first type of bottleneck condition. Based upon monitor data, a blocking thread is identified, and a particular blocking solution is applied to the blocking thread. | 02-26-2015 |
20150067697 | COMPUTER SYSTEM AND PROGRAM - Provided is a computer system comprising a management computer to be coupled to a management subject resource managed by the management computer, which includes at least one of a server apparatus, a storage apparatus or a network apparatus, and a display computer coupled to the management computer. The management computer includes a memory storing at least one workflow program including a work procedure, and a CPU configured to execute the at least one workflow program. The work procedure changes a configuration of the management subject resource, and acquires information from the management subject resource. The CPU executes prior verification processing of verifying an operation environment of the management subject resource to operate the work procedure included in the at least one workflow program before execution of the at least one workflow program, and displays an execution result of the prior verification processing on the display computer. | 03-05-2015 |
20150067698 | METHOD AND APPARATUS FOR PERSISTENT ORCHESTRATED TASK MANAGEMENT - A method and apparatus for a rapid, scalable unified infrastructure system management platform are disclosed, comprising: discovery of compute nodes and network components across data centers, both public and private, for a user; assessment of the type, capability, VLAN, security, and virtualization configuration of the discovered unified infrastructure nodes and components; configuration of nodes and components covering add, delete, modify, and scale operations; and rapid roll-out of nodes and components across data centers, both public and private. | 03-05-2015 |
20150293786 | METHOD FOR PROCESSING CR ALGORITHM BY ACTIVELY UTILIZING SHARED MEMORY OF MULTI-PROCESSOR, AND PROCESSOR USING THE SAME - A method for processing a CR algorithm by actively utilizing a shared memory of a multi-processor, and a processor using the same are provided. A processor includes: a first multi-processor configured to process a first group of elements of a matrix in accordance with an algorithm; a second multi-processor configured to process a second group of the elements of the matrix in accordance with the algorithm; and a third multi-processor configured to process a third group which comprises some of the elements of the first group, some of the elements of the second group, and some of the elements which are not comprised in the first group and the second group, in accordance with the algorithm. Accordingly, a TDM having many elements can be calculated fast. | 10-15-2015 |
20150301860 | TECHNIQUES FOR GENERATING INSTRUCTIONS TO CONTROL DATABASE PROCESSING - An apparatus includes a task selector to receive an indication of a database task to be performed, wherein the database task includes a set of subtasks; a source selector to receive an indication of a source device to perform the set of subtasks, and to retrieve from the source device an indication of a processing environment currently available within the source device that includes an identity and version level of a database routine of the source device; and an instruction generator to determine a set of languages able to be interpreted by the database routine based on the identity and version level, select a language of the set of languages in which to generate instructions for each subtask based on the processing environment, and generate and transmit the instructions to the source device. | 10-22-2015 |
20150301870 | Systems and Methods for Reordering Sequential Actions - Systems and methods for reordering sequential actions in a process or workflow by determining which actions are required to enable another action in the process or workflow. | 10-22-2015 |
20150309833 | METHODS AND SYSTEMS FOR AUTOMATING DEPLOYMENT OF APPLICATIONS IN A MULTI-TENANT DATABASE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for automating deployment of applications in a multi-tenant database environment. For example, in one embodiment, mechanisms include managing a plurality of machines operating as a machine farm within a datacenter by executing an agent provisioning script at a control hub, instructing the plurality of machines to download and instantiate a lightweight agent; pushing a plurality of URL (Uniform Resource Locator) references from the control hub to the instantiated lightweight agent on each of the plurality of machines specifying one or more applications to be provisioned and one or more dependencies for each of the applications; and loading, via the lightweight agent at each of the plurality of machines, the one or more applications and the one or more dependencies for each of the one or more applications into memory of each respective machine. | 10-29-2015 |
20150309845 | SYNCHRONIZATION METHOD - A synchronization method in a computer system with multiple cores, wherein a group of threads executes in parallel on a plurality of cores, the group of threads being synchronised using barrier synchronisation in which each thread in the group waits for all the others at a barrier before progressing; the group of threads executes until a first thread reaches the barrier; the first thread enters a polling state, repeatedly checking for a release condition indicating the end of the barrier; subsequent threads to reach the barrier are moved to the core on which the first thread is executing; and other cores are powered down as the number of moved threads increases; and wherein when the first thread detects the release condition, the powered down cores are powered up and are available for use by the threads. | 10-29-2015 |
20150317190 | Method and system for converting a single-threaded software program into an application-specific supercomputer - The invention comprises (i) a compilation method for automatically converting a single-threaded software program into an application-specific supercomputer, and (ii) the supercomputer system structure generated as a result of applying this method. The compilation method comprises: (a) Converting an arbitrary code fragment from the application into customized hardware whose execution is functionally equivalent to the software execution of the code fragment; and (b) Generating interfaces on the hardware and software parts of the application, which (i) Perform a software-to-hardware program state transfer at the entries of the code fragment; (ii) Perform a hardware-to-software program state transfer at the exits of the code fragment; and (iii) Maintain memory coherence between the software and hardware memories. If the resulting hardware design is large, it is divided into partitions such that each partition can fit into a single chip. Then, a single union chip is created which can realize any of the partitions. | 11-05-2015 |
20150324241 | LEVERAGING PATH INFORMATION TO GENERATE PREDICTIONS FOR PARALLEL BUSINESS PROCESSES - Systems and methods for determining a representation of an execution trace include identifying at least one execution trace of a business process model, the business process model including parallel paths where a path influences an outcome of a decision. Path information of the business process model is determined using a processor, the path information including at least one of task execution order for each parallel path, task execution order across parallel paths, and dependency between parallel paths. A path representation for the at least one execution trace is selected based upon the path information to determine a representation of the at least one execution trace. | 11-12-2015 |
20150347188 | DELEGATED BUSINESS PROCESS MANAGEMENT FOR THE INTERNET OF THINGS - A delegated business process management system for the Internet of Things is provided. The system includes a plurality of platform levels, wherein a business process is created and managed in a first platform level. The first platform level can delegate the ability and authorization to perform a task of a created business process to a second platform level, wherein the ability and authorization to perform the task includes the ability and authorization to communicate with other systems to obtain needed data. Further, the ability and authorization to perform the task includes the ability and authorization to delegate sub-tasks to a third platform level, wherein the ability and authorization to perform the sub-task includes the ability and authorization to communicate with other systems to obtain needed data. The ability to delegate to additional platform levels may occur for any number of possible platform levels that are available. | 12-03-2015 |
20150363230 | PARALLELISM EXTRACTION METHOD AND METHOD FOR MAKING PROGRAM - A method of extracting parallelism of an original program by a computer includes: a process of determining whether or not a plurality of macro tasks to be executed after a condition of one conditional branch included in the original program is satisfied are executable in parallel; and a process of copying the conditional branch regarding which the macro tasks are determined to be executable in parallel, to generate a plurality of conditional branches. | 12-17-2015 |
20150363242 | METHODS AND APPARATUS TO MANAGE CONCURRENT PREDICATE EXPRESSIONS - Methods, apparatus, systems and articles of manufacture are disclosed to manage concurrent predicate expressions. An example method includes inserting a first condition hook into a first thread, the first condition hook associated with a first condition, inserting a second condition hook into a second thread, the second condition hook associated with a second condition, preventing the second thread from executing until the first condition is satisfied, and identifying a concurrency violation when the second condition is satisfied. | 12-17-2015 |
20160004572 | METHODS FOR SINGLE-OWNER MULTI-CONSUMER WORK QUEUES FOR REPEATABLE TASKS - There are provided methods for single-owner multi-consumer work queues for repeatable tasks. A method includes permitting a single owner thread of a single owner, multi-consumer, work queue to access the work queue using atomic instructions limited to only a single access and using non-atomic operations. The method further includes restricting the single owner thread from accessing the work queue using atomic instructions involving more than one access. The method also includes synchronizing amongst other threads with respect to their respective accesses to the work queue. | 01-07-2016 |
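The single-owner, multi-consumer discipline can be approximated in a short sketch. In the illustrative Python fragment below (names assumed), the owner thread pushes and pops at one end of its queue without taking a lock, while consumer threads synchronize only among themselves when stealing from the other end; CPython's thread-safe deque operations stand in for the single-access atomic instructions described in the entry.

```python
# Illustrative sketch of a single-owner, multi-consumer work queue:
# owner operations take no lock; consumers synchronize amongst themselves.
import threading
from collections import deque

class OwnerQueue:
    def __init__(self):
        self._items = deque()
        self._steal_lock = threading.Lock()   # shared only by consumer threads

    # --- owner-thread-only operations: no lock taken ---
    def push(self, item):
        self._items.append(item)

    def pop(self):
        try:
            return self._items.pop()
        except IndexError:
            return None

    # --- any consumer thread ---
    def steal(self):
        with self._steal_lock:
            try:
                return self._items.popleft()
            except IndexError:
                return None
```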
20160019091 | AUTOMATION OF WORKFLOW CREATION AND FAILURE RECOVERY - A system includes a processor and a non-transitory computer-readable medium. The non-transitory computer-readable medium comprises instructions executable by the processor to cause the system to perform a method. The method comprises receiving a first job to execute and executing the first job. A plurality of data associated with the first job is determined. The plurality of data comprises data associated with (i) a second job executed immediately prior to the first job, (ii) a third job executed immediately after the first job, (iii) a determination of whether the first job failed or executed successfully and (iv) a type of data associated with the first job. The determined plurality of data is stored. | 01-21-2016 |
20160019100 | Method, Apparatus, and Chip for Implementing Mutually-Exclusive Operation of Multiple Threads - Multiple lock assemblies are distributed on a chip, each lock assembly manage a lock application message for applying for a lock and a lock release message for releasing a lock that are sent by one small core. Specifically, embodiments include receiving a lock message sent by a small core, where the lock message carries a memory address corresponding to a lock requested by a first thread in the small core; calculating, using the memory address of the requested lock, a code number of a lock assembly to which the requested lock belongs; and sending the lock message to the lock assembly corresponding to the code number, to request the lock assembly to process the lock message. | 01-21-2016 |
20160034304 | DEPENDENCE TRACKING BY SKIPPING IN USER MODE QUEUES - A system and methods embodying some aspects of the present embodiments for maintaining compact in-order queues are provided. The queue management method includes requesting a work pointer from a primary queue, wherein the work pointer points to a work assignment comprising an indirect queue and a dependency list; responsive to the dependency list not being cleared, invalidating the work pointer in the primary queue and adding a new pointer to the end of the primary queue, the new pointer configured to point to the work assignment; and responsive to the dependency list being clear, removing the work pointer from the primary queue and performing work in the indirect queue. | 02-04-2016 |
20160055029 | Programmatic Decoupling of Task Execution from Task Finish in Parallel Programs - A computing device may be configured to commence or begin executing a first task via a first thread (e.g., in a first processor or core), begin executing a second task via a second thread (e.g., in a second processor or core), identify an operation of the second task as being dependent on the first task finishing execution, and change an operating state of the second task to “executed” prior to the first task finishing execution so as to allow the computing device to enforce task-dependencies while the second thread continues to process additional tasks. The computing device may begin executing a third task via the second thread (e.g., in a second processing core) prior to the first task finishing execution, and change the operating state of the second task to “finished” after the first task finishes. | 02-25-2016 |
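Decoupling the "executed" state from the "finished" state can be shown with a minimal sketch. The Python fragment below is illustrative (names are hypothetical): a worker marks a dependent task executed and moves on, and the task transitions to finished only once the task it depends on has itself finished.

```python
# Illustrative sketch of separating "executed" from "finished" task states.
EXECUTED, FINISHED = "executed", "finished"

class Task:
    def __init__(self, name, depends_on=None):
        self.name, self.depends_on, self.state = name, depends_on, None

    def execute(self):
        print("running body of", self.name)
        self.state = EXECUTED             # worker thread is now free to move on
        self._maybe_finish()

    def _maybe_finish(self):
        if self.state == EXECUTED and (self.depends_on is None
                                       or self.depends_on.state == FINISHED):
            self.state = FINISHED
            print(self.name, "finished")

    def predecessor_finished(self):
        self._maybe_finish()

first = Task("first")
second = Task("second", depends_on=first)
second.execute()              # second is "executed" but not yet "finished"
first.execute()
second.predecessor_finished() # now second can transition to "finished"
```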
20160062806 | METHOD AND DEVICE FOR DETECTING A RACE CONDITION AND A COMPUTER PROGRAM PRODUCT - A method is provided for detecting a race condition of a parallel task when accessing a shared resource in a multi-core processing system. The method requires that a core have only read access to the data set of another core, thereby ensuring better decoupling of the tasks. In an initialisation phase, initial values of global variables are assigned; in an activation phase, each core determines whether the other core has written new values to the variables and, if so, detects a race condition. Initial values are restored for each variable in a deactivation phase. | 03-03-2016 |
20160070593 | Coordinated Garbage Collection in Distributed Systems - Fast modern interconnects may be exploited to control when garbage collection is performed on the nodes (e.g., virtual machines, such as JVMs) of a distributed system in which the individual processes communicate with each other and in which the heap memory is not shared. A garbage collection coordination mechanism (a coordinator implemented by a dedicated process on a single node or distributed across the nodes) may obtain or receive state information from each of the nodes and apply one of multiple supported garbage collection coordination policies to reduce the impact of garbage collection pauses, dependent on that information. For example, if the information indicates that a node is about to collect, the coordinator may trigger a collection on all of the other nodes (e.g., synchronizing collection pauses for batch-mode applications where throughput is important) or may steer requests to other nodes (e.g., for interactive applications where request latencies are important). | 03-10-2016 |
20160070595 | ASYNCHRONOUS TASK MULTIPLEXING AND CHAINING - The described technology is directed towards sharing asynchronous (async) tasks between task chains, including in a way that prevents cancellation of lower-level chain entity from cancelling a shared async task. A shared async task is wrapped in multiplexer code that maintains lower-level entity identities as a set of listeners of the shared async task, and when a listener cancels, only removes that listener from the set of listeners so that the shared async task does not cancel as long as one listener remains in the set. Also described is optimization to share an async task, and wrapping tasks in cancel-checking code that prevents the task from running its work if the task is intended to be cancelled but is queued to run before the cancel request is queued to run. | 03-10-2016 |
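The listener-set idea behind sharing an async task across task chains can be sketched briefly. In this illustrative asyncio version (names assumed), cancelling a listener only removes it from the set; the wrapped task is cancelled only when no listeners remain.

```python
# Illustrative sketch of multiplexing one shared async task across listeners.
import asyncio

class SharedTask:
    def __init__(self, coro):
        self._task = asyncio.ensure_future(coro)
        self._listeners = set()

    def listen(self, listener_id):
        self._listeners.add(listener_id)
        return self._task

    def cancel(self, listener_id):
        self._listeners.discard(listener_id)
        if not self._listeners:           # last listener gone: cancel for real
            self._task.cancel()

async def main():
    shared = SharedTask(asyncio.sleep(0.1, result="done"))
    task = shared.listen("chain-A")
    shared.listen("chain-B")
    shared.cancel("chain-A")              # chain A cancels; chain B keeps the task alive
    print(await task)                     # -> "done"

asyncio.run(main())
```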
20160077889 | SYSTEM AND METHOD FOR SUPPORTING WAITING THREAD NOTIFICATION OFFLOADING IN A DISTRIBUTED DATA GRID - A system and method for waiting-thread notification offloading supports thread notification offloading in a multi-threaded messaging system such as a distributed data grid. Pending notifiers are maintained in a pending notifier collection. A service thread adds pending notifiers to the collection instead of signaling the notifiers on the service thread. An active thread associated with the service thread determines that it is ready to enter a wait state. Before entering the wait state or instead of entering the wait state, the active thread retrieves pending notifiers from the pending notifier collection, signals the retrieved pending notifiers, and wakes the waiting threads associated with the pending notifiers, thereby offloading the notifier signaling overhead from the service thread to the active thread. Such waiting-thread notification offloading of notifier processing from the service thread improves performance of the service thread with respect to other tasks thereby improving performance of the service thread and the multi-threaded messaging system. | 03-17-2016 |
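Notification offloading can be illustrated with a small sketch. In the Python fragment below (all names assumed), the service thread merely appends notifiers to a pending collection, and an active thread drains that collection and performs the actual wake-ups just before it would go idle, keeping the signaling overhead off the service thread.

```python
# Illustrative sketch of offloading waiter notification from a service thread
# to an active thread that is about to enter a wait state.
import queue, threading

pending_notifiers = queue.SimpleQueue()

def service_thread_work(event_for_waiter):
    pending_notifiers.put(event_for_waiter)   # cheap: no wake-up performed here

def active_thread_about_to_wait():
    # Drain and signal pending notifiers instead of (or before) blocking.
    while True:
        try:
            notifier = pending_notifiers.get_nowait()
        except queue.Empty:
            break
        notifier.set()                         # wake the waiting thread

done = threading.Event()
waiter = threading.Thread(target=done.wait)
waiter.start()
service_thread_work(done)          # service thread records a pending notifier
active_thread_about_to_wait()      # active thread performs the actual signaling
waiter.join()
```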
20160092280 | Adaptive Lock for a Computing System having Multiple Runtime Environments and Multiple Processing Units - A method for operating a lock in a computing system having plural processing units and running under multiple runtime environments is provided. When a requester thread attempts to acquire the lock while the lock is held by a holder thread, determine whether the holder thread is suspendable or non-suspendable. If the holder thread is non-suspendable, put the requester thread in a spin state regardless of whether the requester thread is suspendable or non-suspendable; otherwise determines whether the requester thread is suspendable or non-suspendable unless the requester thread quits acquiring the lock. If the requester thread is non-suspendable, arrange the requester thread to attempt acquiring the lock again; otherwise add the requester thread to a wait queue as an additional suspended thread. Suspended threads stored in the wait queue are allowable to be resumed later for lock acquisition. The method is applicable for the computing system with a multicore processor. | 03-31-2016 |
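The adaptive decision described above reduces to a short rule, sketched here with invented names: spin whenever the current holder cannot be suspended; otherwise a non-suspendable requester retries the acquisition later, while a suspendable requester is parked on a wait queue to be resumed when the lock is released.

```python
# Illustrative sketch of the adaptive lock decision based on whether the
# holder and requester threads are suspendable.
import threading
from collections import deque

class AdaptiveLock:
    def __init__(self):
        self._guard = threading.Lock()
        self.holder = None                 # (thread_name, suspendable) or None
        self.wait_queue = deque()

    def try_acquire(self, name, suspendable):
        with self._guard:
            if self.holder is None:
                self.holder = (name, suspendable)
                return "acquired"
            holder_suspendable = self.holder[1]
        if not holder_suspendable:
            return "spin"                  # spin regardless of requester type
        if not suspendable:
            return "retry"                 # attempt acquisition again later
        with self._guard:
            self.wait_queue.append(name)   # suspend; resumed when lock released
        return "queued"

lock = AdaptiveLock()
print(lock.try_acquire("interrupt-handler", suspendable=False))  # acquired
print(lock.try_acquire("worker", suspendable=True))              # spin
```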
20160098295 | INCREASED CACHE PERFORMANCE WITH MULTI-LEVEL QUEUES OF COMPLETE TRACKS - Exemplary method, system, and computer program product embodiments for increased cache performance using multi-level queues by a processor device. The method includes distributing to each one of a plurality of central processing units (CPUs) workload operations for creating complete tracks from partial tracks, creating sub-queues of the complete tracks for distributing to each one of the CPUs, and creating demote scan tasks based on workload of the CPUs. Additional system and computer program product embodiments are disclosed and provide related advantages. | 04-07-2016 |
20160103715 | ISSUE CONTROL FOR MULTITHREADED PROCESSING - A multithreaded data processing system performs processing using resource circuitry which is a finite resource. A saturation signal is generated to indicate when the resource circuitry is no longer able to perform processing operations issued to it. This saturation signal may be used to select a scheduling algorithm to be used for further scheduling, such as switching to scheduling from a single thread as opposed to round-robin scheduling from all of the threads. Re-execution queue circuitry is used to queue processing operations which have been enabled to be issued so as to permit other processing operations which may not be blocked by the lack of use of circuitry to attempt issue. | 04-14-2016 |
20160139965 | METHOD AND APPARATUS FOR A HIERARCHICAL SYNCHRONIZATION BARRIER IN A MULTI-NODE SYSTEM - A hierarchical barrier synchronization of cores and nodes on a multiprocessor system, in one aspect, may include providing, by each of a plurality of threads on a chip, an input bit signal to a respective bit in a register, in response to reaching a barrier; determining whether all of the plurality of threads reached the barrier by electrically tying bits of the register together and “AND”ing the input bit signals; determining whether only on-chip synchronization is needed or whether inter-node synchronization is needed; in response to determining that all of the plurality of threads on the chip reached the barrier, notifying the plurality of threads on the chip, if it is determined that only on-chip synchronization is needed; and after all of the plurality of threads on the chip reached the barrier, communicating the synchronization signal to outside of the chip, if it is determined that inter-node synchronization is needed. | 05-19-2016 |
20160139966 | ALMOST FAIR BUSY LOCK - Managing exclusive control of a shareable resource includes publishing a claim non-atomically to a lock by a thread that is next to own the lock in an ordered set of threads that have requested to own the lock. The claim includes a structure capable of being read and written only in a single memory access. A determination is made of whether the next owning thread has been pre-empted. Responsive to the determination, the next owning thread of the lock acquires the lock if the next owning thread has not been pre-empted and retries acquisition of the lock if the next owning thread has been pre-empted. Responsive to the next owning thread being pre-empted, a subsequent owning thread acquires the lock unfairly and atomically, and consistently modifies the lock such that the next lock owner can determine that it has been preempted. | 05-19-2016 |
20160147577 | SYSTEM AND METHOD FOR ADAPTIVE THREAD CONTROL IN A PORTABLE COMPUTING DEVICE (PCD) - Systems and methods for adaptive thread control in a portable computing device (PCD) are provided. During operation a plurality of parallelized tasks for an application on the PCD are created. The application is executed with at least one processor of the PCD processing at least one main thread of the application. A determination is made whether a portion of the application being executed includes one or more of the parallelized tasks. A determination is made whether to perform the parallelized tasks in parallel. Based on the determination whether to perform the parallelized tasks in parallel, the parallelized tasks are executed with the at least one main thread of the application if the determination is not to perform the parallelized tasks in parallel, or if the determination is to perform the parallelized tasks in parallel, at least one worker thread is activated to execute the parallelized task in parallel with the main thread. | 05-26-2016 |
20160170799 | MULTIPLE-THREAD PROCESSING METHODS AND APPARATUSES | 06-16-2016 |
20160170811 | JOB SCHEDULING AND MONITORING | 06-16-2016 |
20160170813 | TECHNOLOGIES FOR FAST SYNCHRONIZATION BARRIERS FOR MANY-CORE PROCESSING | 06-16-2016 |
20160179570 | Parallel Computing Without Requiring Antecedent Code Deployment | 06-23-2016 |
20160179574 | WORK-EFFICIENT, LOAD-BALANCED, MERGE-BASED PARALLELIZED CONSUMPTION OF SEQUENCES OF SEQUENCES | 06-23-2016 |
20160188381 | METHOD AND SYSTEM FOR ENSURING INTEGRITY OF CRITICAL DATA - A method and system for ensuring integrity of manipulatable critical data, including a processor configured to execute at least one restartable processing thread module, and a shared memory communicatively coupled with the processor and having at least some manipulatable critical data, wherein when a request to restart the at least one restartable processing thread module is received, the at least one restartable processing thread module is restarted. | 06-30-2016 |
20160196162 | Devices and Methods Implementing Operations for Selective Enforcement of Task Dependencies | 07-07-2016 |
20160253219 | DATA STREAM PROCESSING BASED ON A BOUNDARY PARAMETER | 09-01-2016 |
20160253221 | PAGERANK ALGORITHM LOCK ANALYSIS | 09-01-2016 |
20180024870 | PRECONDITION EXCLUSIVITY MAPPING OF TASKS TO COMPUTATIONAL LOCATIONS | 01-25-2018 |