Batch or transaction processing

Subclass of:

718 - Electrical computers and digital processing systems: virtual machine task or process management or task management/control

718100000 - TASK MANAGEMENT OR CONTROL

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries
Document | Title | Date
20090199188INFORMATION PROCESSING SYSTEM, COMPUTER READABLE RECORDING MEDIUM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND COMPUTER DATA SIGNAL - An information processing system includes: an administrator command restricting execution unit that executes an administrator command with a restriction, when a user not having administrative authority requests execution of the administrator command that can be executed by an administrator having the administrative authority; an execution history memory that stores the execution history of the administrator command executed by the administrator command restricting execution unit; and a state changing unit that, upon receipt of an acceptance of the execution history, puts the result of execution of the administrator command shown in the execution history and executed by the administrator command restricting execution unit, into the state that is observed where the administrator command shown in the execution history is executed without the restriction.08-06-2009
20090064147TRANSACTION AGGREGATION TO INCREASE TRANSACTION PROCESSING THROUGHPUT - Provided are techniques for increasing transaction processing throughput. A transaction item with a message identifier and a session identifier is obtained. The transaction item is added to an earliest aggregated transaction in a list of aggregated transactions in which no other transaction item has the same session identifier. A first aggregated transaction in the list of aggregated transactions that has met execution criteria is executed. In response to determining that the aggregated transaction is not committing, the aggregated transaction is broken up into multiple smaller aggregated transactions and a target size of each aggregated transaction is adjusted based on measurements of system throughput.03-05-2009
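
The aggregation rule above (one item per session per aggregate, split the aggregate and shrink the target size when a commit fails) can be illustrated with a small sketch. The Python below is only an illustration of that idea; the class and field names are placeholders, not anything defined by the patent.

```python
from collections import namedtuple

TransactionItem = namedtuple("TransactionItem", ["message_id", "session_id", "payload"])

class TransactionAggregator:
    """Sketch: group transaction items so that no aggregate holds two items
    from the same session, as described in the abstract above."""

    def __init__(self, target_size=8):
        self.target_size = target_size   # adjusted from throughput measurements
        self.aggregates = []             # list of lists of TransactionItem

    def add(self, item):
        # Add to the earliest aggregate with no item from the same session.
        for agg in self.aggregates:
            if all(existing.session_id != item.session_id for existing in agg):
                agg.append(item)
                return
        self.aggregates.append([item])   # otherwise start a new aggregate

    def ready(self):
        # Execution criterion assumed here: the aggregate reached its target size.
        return [agg for agg in self.aggregates if len(agg) >= self.target_size]

    def split(self, agg):
        # If the aggregate fails to commit, break it up and shrink the target size.
        half = max(1, len(agg) // 2)
        self.target_size = max(1, self.target_size // 2)
        return [agg[:half], agg[half:]]
```
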
20090193422UNIVERSAL SERIAL BUS DRIVING DEVICE AND METHOD - A universal serial bus (USB) driving device electrically coupled to a data receiver is configured for driving a USB to forward data requests from the data receiver to a data transmitter for processing the data requests. The USB driving device may preset a maximum active transaction number, initialize an active transaction number, and determine if the active transaction number is less than the maximum active transaction number. The USB driving device may drive the USB to forward a data request from the data receiver to the data transmitter if the active transaction number is less than the maximum active transaction number and increase the active transaction number after the USB driving device forwards a data request to the data transmitter. A USB driving method is also provided.07-30-2009
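
A minimal sketch of the counting scheme described above (preset maximum, forward only while below the maximum, increment after forwarding) might look as follows; `transmitter` stands for any object with a `submit` method and is an assumption, not part of the patent.

```python
class UsbRequestForwarder:
    """Sketch: forward data requests only while the active transaction count
    stays below a preset maximum."""

    def __init__(self, max_active=4):
        self.max_active = max_active   # preset maximum active transaction number
        self.active = 0                # initialized active transaction number

    def try_forward(self, request, transmitter):
        if self.active < self.max_active:
            transmitter.submit(request)   # forward the request to the data transmitter
            self.active += 1              # increase the count after forwarding
            return True
        return False                      # hold the request until a transaction completes

    def on_transaction_complete(self):
        if self.active > 0:
            self.active -= 1
```
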
20090193421Method For Determining The Impact Of Resource Consumption Of Batch Jobs Within A Target Processing Environment - Exemplary embodiments of the present invention provide a solution that comprises the capability to dispatch jobs to target system according to the declared resource consumption by providing a way for automatically calculating the resource consumption at a target processing system. The algorithmic solution provided can also be utilized by standalone reporting tools to calculate resource consumption offline and show resource impact based upon database query results in the event that data samples are available. The solution provided by exemplary embodiments of the present invention is obtained by reducing the resource consumption problem to an optimization problem involving a set of linear equations.07-30-2009
20090193420METHOD AND SYSTEM FOR BATCH PROCESSING FORM DATA - The input and batch processing of data for insertion in a database. In one aspect of the invention, processing input data includes receiving data for insertion into a database, the data including data fields holding data entries. At least one of the data fields is determined to be a standard field having a standard data entry, and at least one different data field is determined to have been designated a batch mode field, where each batch mode field has a plurality of associated batch mode data entries. A data record is created for each batch mode data entry of the batch mode field, where each data record includes a different batch mode data entry, and each data record includes a copy of the standard data entry.07-30-2009
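
The record-expansion step (copy the standard entries into one record per batch-mode entry) is simple enough to sketch; the field names used here are illustrative only.

```python
def expand_batch_records(standard_fields, batch_field, batch_entries):
    """Sketch: create one insertable record per batch-mode entry, each record
    carrying a copy of the standard (single-valued) field entries."""
    records = []
    for entry in batch_entries:
        record = dict(standard_fields)   # copy of the standard data entries
        record[batch_field] = entry      # a different batch-mode entry per record
        records.append(record)
    return records

# Example: one standard field and a batch-mode field with three entries
# yields three records, each with department="sales".
rows = expand_batch_records({"department": "sales"}, "employee_id", [101, 102, 103])
```
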
20130086588System and Method of Using Transaction IDS for Managing Reservations of Compute Resources Within a Compute Environment - A system and method for reserving resources within a compute environment such as a cluster or grid are disclosed. The method aspect of the disclosure includes receiving a request for resource availability in a compute environment from a requestor, associating a transaction identification with the request and resources within the compute environment that can meet the request and presenting the transaction identification to the requestor. The transaction ID can also be associated with a time frame in which resources are available and can also be associated with modifications to the resources and supersets of resources that could be drawn upon to meet the request. The transaction ID can also be associated with metrics that identify how well the resource fit with the request and modifications that can make the resources better match the workload which would be submitted under the request.04-04-2013
20080256541METHOD AND SYSTEM FOR OPTIMAL BATCHING IN A PRODUCTION ENVIRONMENT - A method for processing a plurality of jobs in a production environment may include receiving a plurality of jobs and receiving one or more instructions into a workflow management system to process the plurality of jobs. The one or more instructions may include a setup characteristic. The method may also include clustering, by the workflow management system, the plurality of jobs into super-groups based on the setup characteristic, determining, by the workflow management system, a processing sequence based on the clustering and processing the jobs according to the determined processing sequence.10-16-2008
20100077398Using Idempotent Operations to Improve Transaction Performance - An apparatus for optimizing a transaction comprising an initial sequence of computer operations, the apparatus includes a processing unit which identifies one or more idempotent operations comprised within the initial sequence, and which reorders the initial sequence to form a reordered sequence comprising a first sub-sequence of the computer operations followed by a second sub-sequence of the computer operations, the second sub-sequence comprising only the one or more idempotent operations.03-25-2010
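
One way to read the reordering step is shown below: the idempotent operations are moved into a trailing sub-sequence. This sketch ignores any data dependencies between operations, which a real implementation would have to respect.

```python
def reorder_transaction(operations, is_idempotent):
    """Sketch: form a reordered sequence whose second sub-sequence contains
    only the idempotent operations."""
    first = [op for op in operations if not is_idempotent(op)]
    second = [op for op in operations if is_idempotent(op)]   # idempotent only
    return first + second
```
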
20130081025Adaptively Determining Response Time Distribution of Transactional Workloads - An adaptive mechanism is provided that learns the response time characteristics of a workload by measuring the response times of end user transactions, classifies response times into buckets, and dynamically adjusts the response time distribution as response time characteristics of the workload change. The adaptive mechanism maintains the actual distribution across changes and, thus, helps the end user to understand changes of workload behavior that take place over a longer period of time. The mechanism is stable enough to suppress spikes and returns a constant view of workload behavior, which is required for long-term performance analysis and capacity planning. The mechanism distinguishes between an initial learning phase of establishing the distribution and one or multiple reaction periods. The reaction periods can be for example a fast reaction period for strong fluctuations of the workload behavior and a slow reaction period for small deviations.03-28-2013
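
A toy version of the bucketing and slow adjustment could look like this; the bucket layout and the exponential adjustment factor are assumptions made for the sketch.

```python
class AdaptiveResponseTimeDistribution:
    """Sketch: classify response times into buckets and slowly re-centre the
    bucket boundaries as the workload's behavior changes."""

    def __init__(self, bucket_count=10, initial_midpoint_ms=100.0, reaction=0.05):
        self.bucket_count = bucket_count
        self.midpoint_ms = initial_midpoint_ms   # learned "typical" response time
        self.reaction = reaction                 # small = slow reaction, large = fast
        self.counts = [0] * bucket_count

    def record(self, response_time_ms):
        # Exponentially adjust the midpoint so the distribution follows the workload.
        self.midpoint_ms += self.reaction * (response_time_ms - self.midpoint_ms)
        bucket_width = self.midpoint_ms / (self.bucket_count / 2)
        index = min(int(response_time_ms / bucket_width), self.bucket_count - 1)
        self.counts[index] += 1
```
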
20100042999TRANSACTIONAL QUALITY OF SERVICE IN EVENT STREAM PROCESSING MIDDLEWARE - Computer implemented method, system and computer usable program code for achieving transactional quality of service in a transactional object store system. A transaction is received from a client and is executed, wherein the transaction comprises reading a read-only derived object, or reading or writing another object, and ends with a decision to request committing the transaction or a decision to request aborting the transaction. Responsive to a decision to request committing the transaction, wherein the transaction comprises writing a publishing object, events are delivered to event stream processing queries, and are executed in parallel with executing of the transaction. Responsive to a decision to request committing a transaction that comprises reading a read-only derived object, a validation is performed to determine whether the transaction can proceed to be committed, whether the transaction should abort, or whether the validation should delay waiting for one or more event stream processing queries to complete.02-18-2010
20100042998ONLINE BATCH EXECUTION - Online batch processing. A job request is received from a user for processing. The job request includes a job configuration and a plurality of operations to process the data. The job configuration is extracted from the job request and stored in a configuration cache. A metadata configuration code is extracted from the job configuration and stored in a code cache. A runtime configuration code is extracted from the job configuration and stored in an instance cache. This allows information to be obtained from the configuration cache, the code cache and the instance cache for processing subsequent job requests with a similar job configuration and the plurality of operations. The data is fetched from at least one of the job request and an external storage device. The plurality of operations is executed on the data to generate a result. The result is provided to the user through at least one of an output stream and the external storage device.02-18-2010
20090158281INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A disclosed information processing apparatus includes data processing components configured to process data; a workflow execution unit configured to request the data processing components to process the data according to a workflow defining processes to be executed to process the data and the order of the processes; and a processing component selection unit. The processing component selection unit is configured to receive a request to execute an undefined process not defined in the workflow from the workflow execution unit, the request including identification information for identifying the undefined process; to search a list for the identification information, the list associating the identification information with one of the data processing components for executing the undefined process; and if the identification information is found in the list, to request the workflow execution unit to request the one of the data processing components associated with the identification information to process the data.06-18-2009
20090158280Automated Execution of Business Processes Using Dual Element Events - Systems and methods for providing interaction of multiple business process events by using management and transactional events, where the management event accepts initial transaction information, maintains state information, and initiates one or more of the transactional events. One of the transactional events receives initial transactional information and state information from the management event, performs a transaction based upon the initial transactional information and the state information, and provides resulting transactional information to the management event. The management event then completes execution of the business process based upon the resulting transactional information.06-18-2009
20120167099Intelligent Retry Method Using Remote Shell - Method for issuing and monitoring a remote batch job, method for processing a batch job, and system for processing a remote batch job. The method for issuing and monitoring a remote batch job includes formatting a command to be sent to a remote server to include a sequence identification composed of an issuing server identification and a time stamp, forwarding the command from the issuing server to the remote server for processing, and determining success or failure of the processing of the command at the remote server. When the failure of the processing of the command at the remote server is determined, the method further includes instructing the remote server to retry the command processing.06-28-2012
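
The sequence-identification and retry idea above translates naturally into a short script. The sketch below uses plain ssh and passes the sequence ID as an environment variable; the exact transport and retry policy in the patent may differ.

```python
import socket
import subprocess
import time

def run_remote_batch_job(remote_host, command, retries=3):
    """Sketch: tag the command with an issuing-server ID plus timestamp,
    forward it to the remote server, and retry on failure."""
    sequence_id = f"{socket.gethostname()}-{int(time.time())}"
    remote = ["ssh", remote_host, f"SEQ_ID={sequence_id} {command}"]
    for attempt in range(1, retries + 1):
        result = subprocess.run(remote)
        if result.returncode == 0:        # remote server reported success
            return sequence_id
        time.sleep(2 ** attempt)          # back off, then instruct a retry
    raise RuntimeError(f"remote job {sequence_id} failed after {retries} attempts")
```
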
20090307695APPARATUS, AND ASSOCIATED METHOD, FOR HANDLING CONTENT PURSUANT TO TRANSFER BETWEEN ENTERPRISE CONTENT MANAGEMENT REPOSITORIES - An apparatus, and an associated method, for facilitating bulk transfer of large volumes of data-center, ECM repository-stored content. Multiple, simultaneous threads or tasks are concurrently run both to import and to export content, as desired. A controller controls the running of the tasks and is connected to a thread container that runs the tasks by way of a TCP/IP socket or other suitable communication connection.12-10-2009
20120192188Resource Allocator With Knowledge-Based Optimization - An automated resource allocation technique for scheduling a batch computer job in a multi-computer system environment. According to example embodiments, resource allocation processing may be performed when receiving a batch computer job that needs to be run by a software application executable on more than one computing system in the multi-computer system environment. The job may be submitted for pre-processing analysis by the software application. A pre-processing analysis result comprising job evaluation information may be received from the software application and the result may be evaluated to select a computing system in the multi-computer system environment that is capable of executing the application to run the job. The job may be submitted to the selected computing system to have the software application run the job to completion.07-26-2012
20120192187Customizing Automated Process Management - Embodiments of an event-driven process management and automation system are disclosed. Such system may be particularly appropriate for a multi-tenant environment so that a single process handling flow may be generated for a given process. Because in a multi-tenant environment many different entities may desire to customize or optimize this process handling flow for their particular usage, modifications to the process flow may be easily handled by a non-technical user to realize process modification without incurring additional development costs. Using a multi-level hierarchical inheritance model in accordance with an embodiment of the present invention, a process may be standardized, with focused customization available on a macro and/or micro level.07-26-2012
20130074079SYSTEM AND METHOD FOR FLEXIBLE DATA TRANSFER - A method and system for flexibly transferring data from one or more data sources to one or more data destinations within an information network where each of the one or more data sources has data in a particular source format and each of the one or more data destinations has data in the same or another particular destination format using a parameter database that includes parameters to control the transfer of data, a scheduler that initiates the transfer of data, and a data loader in communications with the parameter database and scheduler that, upon initiation by the scheduler, extracts data from the one or more data sources, manipulates the extracted source data into one or more destination formats associated with the one or more data destinations, and inserts the data into one or more data destinations according to the parameters within the parameter database.03-21-2013
20090031308Method And Apparatus For Executing Multiple Simulations on a Supercomputer - A supercomputer processing system is provided that is configured to execute a plurality of simulations through transaction processing. The supercomputer processing system includes a supercomputer configured to execute a first simulation of the plurality of simulations and generate an output based upon execution of the first simulation, and a transaction hub. The transaction hub includes a relational database configured to store the output of the first simulation, and an application server having a service-oriented architecture (SOA) that supports an event triggering service. The event triggering service is configured to detect the output of the first simulation and automatically trigger the supercomputer to execute a second simulation of the plurality of simulations using the output of the first simulation stored in the relational database.01-29-2009
20130067478RESOURCE MANAGEMENT SYSTEM - Provided are: information acquisition unit that periodically acquires usage state information of resource by load; user terminal that creates permitted usage period data; period setting unit that sets each load's permitted usage period based on permitted usage period data; determination unit that determines whether each load's resource usage is within permitted usage period; and display unit that distinctively displays whether resource usage period is within permitted usage period based on determination result by determination unit. User terminal creates single batch permitted usage period data. Period setting unit includes batch setting unit that performs batch setting whereby batch permitted usage period is set as permitted usage periods of all loads.03-14-2013
20110023038BATCH SCHEDULING WITH SEGREGATION - In accordance with the disclosed subject matter there are described techniques for segregating requests issued by threads running in a computer system.01-27-2011
20110023037APPLICATION SELECTION OF MEMORY REQUEST SCHEDULING - The present disclosure generally describes systems, methods and devices for operating a computer system with memory based scheduling. The computer system may include one or more of an application program and a memory controller in communication with memory banks. The memory controller may include a scheduler for scheduling requests. The application program may select a scheduling algorithm for scheduling requests from a plurality of scheduling algorithms. The application program may instruct the scheduler to schedule requests using the selected scheduling algorithm.01-27-2011
20090024997Batch processing apparatus - There are provided a batch processing apparatus and a batch processing method capable of significantly reducing the burden on a system designer, a system administrator, and an operator operating the system as well as significantly reducing the development cost. The batch processing apparatus acquires from a repository the metadata defined as information on at least data item name, input, processing content, and output, as well as information stored and registered in advance in the predetermined repository, inputs input data according to a declaration process of the acquired metadata, creates output data by processing the input data, and outputs the output data. Herein, the batch processing apparatus creates the output data by changing all the output data related to the metadata according to change of the metadata.01-22-2009
20090007118Native Virtualization on a Partially Trusted Adapter Using PCI Host Bus, Device, and Function Number for Identification - A mechanism that allows a single physical I/O adapter, such as a PCI, PCI-X, or PCI-E adapter, to perform I/O transactions using the PCI host bus, device, and function numbers to validate that an I/O transaction originated from the proper host is provided. Additionally, a method for facilitating identification of a transaction source partition is provided. An input/output transaction that is directed to a physical adapter is originated from a system image of a plurality of system images. The host data processing system adds an identifier of the system image to the input/output transaction. The input/output transaction is then conveyed to the physical adapter for processing of the input/output transaction.01-01-2009
20080295098System Load Based Dynamic Segmentation for Network Interface Cards11-27-2008
20090037914AUTOMATIC CONFIGURATION OF ROBOTIC TRANSACTION PLAYBACK THROUGH ANALYSIS OF PREVIOUSLY COLLECTED TRAFFIC PATTERNS - A system and method which accesses or otherwise receives collected performance data for at least one server application, where the server application is capable of performing a plurality of transactions with client devices and the client devices are geographically dispersed from the server in known geographical locales, which automatically determines from the performance data which of the transactions are utilized by users of the client devices, which selects utilized transactions according to at least one pre-determined selection criterion, which automatically generates a transaction playback script for each of the selected transactions substituting test information in place of user-supplied or user-unique information in the transactions, which designates each script for execution from a geographical locale corresponding to the locale of the clients which execute said utilized transactions, which deploys the playback scripts to robotic agents geographically co-located with client devices according to the locale designation, and which executes the playback scripts from the robotic agents in order to exercise the server application across similar network topologies and under realistic conditions.02-05-2009
20110283283DETERMINING MULTIPROGRAMMING LEVELS - A method of managing the execution of a workload of transactions of different transaction types on a computer system. Each transaction type may have a different resource requirement. The method may include intermittently, during execution of the workload, determining the performance of each transaction type. A determination may be made of whether there is an overloaded transaction type in which performance is degraded with an increase in the number of transactions of the transaction type. If there is an overloaded transaction type, the number of transactions of at least one transaction type may be changed.11-17-2011
20110296419EVENT-BASED COORDINATION OF PROCESS-ORIENTED COMPOSITE APPLICATIONS - A process model specified using, for example, UML activity diagrams can be translated into an event-based model that can be executed on top of a coordination middleware. For example, a process model may be encoded as a collection of coordinating objects that interact with each other through a coordination middleware including a shared memory space. This approach is suitable for undertaking post-deployment adaptation of process-oriented composite applications. In particular, new control dependencies can be encoded by dropping new (or enabling existing) coordinating objects into the space and/or disabling existing ones.12-01-2011
20090300622DISTRIBUTED TRANSACTION PROCESSING SYSTEM - An infrastructure and method for processing a transaction using a plurality of target systems. A method is disclosed including: generating a request from a source system, wherein the request includes an initial identifier and a counter value; submitting the request to at least two target systems; processing the request at a first target system and ignoring the request at a second target system based on the initial identifier; submitting a resubmitted request to the at least two target systems if a timely response is not received by the source system, wherein the resubmitted request includes an incremented counter value; and processing the resubmitted request by only one of the first and second target systems based on the incremented counter value.12-03-2009
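
The identifier-plus-counter arbitration can be sketched as below: both targets see every submission, but only the one selected by the identifier and the current counter value processes it, and incrementing the counter on resubmission shifts ownership to the other target. The selection rule (a hash modulo the number of targets) is an assumption made for the sketch.

```python
import uuid

class TargetSystem:
    """Sketch: decide from the request identifier and counter whether this
    target, rather than its peer, should process the request."""

    def __init__(self, index, total_targets=2):
        self.index = index
        self.total = total_targets
        self.processed = set()

    def handle(self, request_id, counter, work):
        owner = (hash(request_id) + counter) % self.total
        if owner != self.index or (request_id, counter) in self.processed:
            return False                 # ignore: not ours, or already handled
        self.processed.add((request_id, counter))
        work()
        return True

def submit_with_retry(targets, work, attempts=3):
    request_id = str(uuid.uuid4())       # initial identifier from the source system
    for counter in range(attempts):      # counter is incremented on each resubmission
        for target in targets:
            if target.handle(request_id, counter, work):
                return True
    return False                         # no timely response from any target

# Usage: submit_with_retry([TargetSystem(0), TargetSystem(1)], lambda: print("processed"))
```
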
20130219396TRANSACTION PROCESSING SYSTEM AND METHOD - According to one example of the present invention, there is provided a transaction processing system. The transaction processing system comprises a transaction analyzer for determining characteristics of a received transaction, a processing agent selector for selecting, based on the determined characteristics, a processing agent for processing the received transaction, and a dispatcher for dispatching the received transaction and the selected processing agent to a processing resource to cause the transaction to be processed in accordance with the selected processing agent on at least one of the computing devices.08-22-2013
20100115520COMPUTER SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR MANAGING BATCH JOB - A computer system for managing batch jobs is described. The computer system includes a storage unit for storing at least one job template, and an execution unit for creating or updating a job net definition following a condition defined in the at least one job template, creating or updating a job net, or executing a discovery of a job conflict using at least one attribute or relationship in a set of data including at least one predetermined attribute of a configuration item, and a relationship between the configuration item and another configuration item, the set of data being stored in a repository and updatable through a discovery for detecting information about a configuration item. The present invention further provides a method and computer program product for managing batch jobs.05-06-2010
20100269114INSTANT MESSENGER AND METHOD FOR DISPATCHING TASK WITH INSTANT MESSENGER - Embodiments of the present invention provide an Instant Messenger (IM) and a method for dispatching tasks by the IM. The method includes: presetting task information in a start-up program configuration table, and dispatching, by the IM, tasks in batches according to the task information in the start-up program configuration table. Preferably, the task information includes the execution delay information and priority information of the tasks. The IM includes a logging-on flow management module and a task dispatching management module. The logging-on flow management module is adapted to store the start-up program configuration table, which is configured with the task information. The task dispatching management module is adapted to dispatch the tasks in batches according to the task information in the start-up program configuration table. With embodiments of the invention, the start-up delay of the IM may be reduced.10-21-2010
20120110582REAL-TIME COMPUTING RESOURCE MONITORING - Techniques used to enhance the execution of long-running or complex software application instances and jobs on computing systems are disclosed herein. In one embodiment, a real time, self-predicting job resource monitor is employed to predict inadequate system resources on the computing system and failure of a job execution on the computing system. This monitor may not only determine if inadequate resources exist prior to execution of the job, but may also detect in real time if inadequate resources will be encountered during the execution of the job for cases where resource availability has unexpectedly decreased. If a resource deficiency is predicted on the executing computer system, the system may pause the job and automatically take corrective action or alert a user. The job may resume after the resource deficiency is met. Additional embodiments also integrate this resource monitoring capability with the adaptive selection of a computer system or application execution environment based on resource capability predictions and benchmarks.05-03-2012
20100122254BATCH AND APPLICATION SCHEDULER INTERFACE LAYER IN A MULTIPROCESSOR COMPUTING ENVIRONMENT - A multiprocessor computer system batch system interface between an application level placement scheduler and one or more batch systems comprises a predefined protocol operable to convey processing node resource request and availability data between the application level placement scheduler and the one or more batch systems.05-13-2010
20130219395BATCH SCHEDULER MANAGEMENT OF TASKS - A request from a client to perform a task is received. The client has a predetermined limit of compute resources. The task is dispatched from a batch scheduler to a compute node as a non-speculative task if a quantity of compute resources is available at the compute node to process the task, and the quantity of compute resources in addition to a total quantity of compute resources being utilized by the client is less than or equal to the predetermined limit, such that the non-speculative task is processed without being preempted by an additional task requested by an additional client. The task is dispatched, from the batch scheduler to the compute node, as a speculative task if the quantity of compute resources is available to process the task, and the quantity of compute resources in addition to the total quantity of compute resources is greater than the predetermined limit.08-22-2013
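
A compact sketch of the speculative/non-speculative dispatch decision follows; the data classes and the single-node view are simplifications made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    required: int                 # compute resources needed by the task

@dataclass
class Client:
    limit: int                    # the client's predetermined resource limit
    in_use: int = 0

@dataclass
class ComputeNode:
    capacity: int
    running: list = field(default_factory=list)

    def free(self):
        return self.capacity - sum(t.required for t, _ in self.running)

    def run(self, task, speculative):
        self.running.append((task, speculative))

def dispatch(task, client, node):
    """Sketch: non-speculative while the client stays within its limit,
    speculative (and therefore preemptible) once the limit is exceeded."""
    if node.free() < task.required:
        return None                                        # node cannot host the task now
    speculative = client.in_use + task.required > client.limit
    node.run(task, speculative)
    client.in_use += task.required
    return speculative
```
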
20110197194TRANSACTION-INITIATED BATCH PROCESSING - A system and method is provided for initiating batch processing on a computer system from a terminal. The method generates a message from the terminal, where the message defines a transaction to be performed on a computer system. The transaction schedules and runs a program that extracts data from the message. The message is then transmitted to the computer system. The data is then used to generate batch job control language and a batch job is run on the computer system. The output of the batch job is then routed back to the terminal.08-11-2011
20100088702CHECKING TRANSACTIONAL MEMORY IMPLEMENTATIONS - A transactional memory implementation is tested using an automatically generated test program and a locking memory model implementation which defines atomicity semantics. Schedules of the test program specify different interleavings of read operations and write operations of the test program threads. Executing the schedules under the locking memory model implementation provides legal final states of the shared variable(s). Executing the schedules under the transactional memory implementation produces candidate final states of the shared variable(s). If the candidate final states are also legal final states, then the transactional memory implementation passes the test.04-08-2010
20100088703Multi-core system with central transaction control - There is provided a multi-core system that includes a lower-subsystem including a first processor and a number of slave processing cores. Each of the slave processing cores can be a coprocessor or a digital signal processor. The first processor is configured to control processing on the slave processing cores and includes a system dispatcher configured to control transactions for execution on the slave processing cores. The system dispatcher is configured to generate the transactions to be executed on the slave processing cores. The first processor can include a number of hardware drivers for receiving the transactions from the system dispatcher and providing the transactions to the slave processing cores for execution. The multi-core system can further include an upper sub-system in communication with the lower-subsystem and including a second processor configured to provide protocol processing.04-08-2010
20100083256TEMPORAL BATCHING OF I/O JOBS - Batching techniques are provided to maximize the throughput of a hardware device based on the saturation point of the hardware device. A balancer can determine the saturation point of the hardware device and determine the estimated time cost for IO jobs pending in the hardware device. A comparison can be made and, if the estimated time cost total is lower than the saturation point, one or more IO jobs can be sent to the hardware device.04-01-2010
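
The comparison between pending cost and saturation point could be expressed roughly as follows; the cost estimates and the completion callback are assumptions made for the sketch.

```python
class IoBalancer:
    """Sketch: release queued IO jobs to the device only while the estimated
    cost of the work already pending stays below the saturation point."""

    def __init__(self, saturation_point_ms):
        self.saturation_point_ms = saturation_point_ms
        self.pending_cost_ms = 0.0       # estimated cost of jobs already in the device

    def submit(self, queued_jobs, device_queue):
        # queued_jobs: iterable of (job, estimated_cost_ms) pairs
        sent = []
        for job, cost_ms in queued_jobs:
            if self.pending_cost_ms + cost_ms > self.saturation_point_ms:
                break                    # hold the remaining jobs until the device drains
            device_queue.append(job)
            self.pending_cost_ms += cost_ms
            sent.append(job)
        return sent

    def on_job_done(self, cost_ms):
        self.pending_cost_ms = max(0.0, self.pending_cost_ms - cost_ms)
```
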
20090064150Process Manager - A process manager (03-05-2009
20110173619Apparatus and method for optimized application of batched data to a database - A computer readable medium storing executable instructions includes executable instructions to: receive a continuous stream of database transactions; form batches of database transactions from the continuous stream of database transactions; combine batches of database transactions with similar operations to form submission groups; identify dependencies between submission groups to designate priority submission groups; and apply priority submission groups to a database target substantially synchronously with the receipt of the continuous stream of database transactions.07-14-2011
20090276777Multiple Programs for Efficient State Transitions on Multi-Threaded Processors - A system and method to optimize processor performance and minimize average thread latency by selectively loading a cache when a program state, resources required for execution of a program, or the program itself change, is described. An embodiment of the invention supports a “cache priming program” that is selectively executed for a first thread/program/sub-routine of each process. Such a program is optimized for situations when instructions and other program data are not yet resident in cache(s), and/or whenever resources required for program execution or the program itself changes. By pre-loading the cache with two resources required for two instructions for only a first thread, average thread latency is reduced because the resources are already present in the cache. Since such a mechanism is carried out only for one thread in a program cycle, pitfalls of a conventional general pre-fetch scheme that involves parsing of the program in advance to determine which resources and instructions will be needed at a later time, are avoided.11-05-2009
20090282410Systems and Methods for Supporting Software Transactional Memory Using Inconsistency-Aware Compilers and Libraries - Systems and methods to reduce overhead associated with read set consistency validation in software transactional memory implementations are disclosed. These systems and methods may employ an inconsistency-aware compiler-library technique, in which an inconsistency-aware compiler communicates to various inconsistency-aware library functions knowledge about whether a given transaction has read consistent values to date. The inconsistency-aware library functions may exploit this information to avoid the need to validate the transaction, or portions thereof. If read set values are known to be consistent prior to the function call, the compiler may pass a parameter value to the function indicating as much. Otherwise, it may pass a value indicating that the read set values may be inconsistent. An inconsistency-aware function may determine that it will not perform a dangerous action, even though its parameters may not be consistent. Otherwise, the inconsistency-aware function may invoke a validation operation, or may perform other error avoidance operations.11-12-2009
20090282409METHOD, SYSTEM AND PROGRAM PRODUCT FOR GROUPING RELATED PROGRAM SEQUENCES - The invention resides in a method, system and program product for grouping related program sequences for performing a task. The method includes establishing, using a first code for grouping, one or more groups that can be formed between one or more related group-elements obtained from a plurality of groupable program flow documents, and executing, using a group program sequence engine, the groupable program flow documents, wherein each group-element considered an ancestor group-element of a group established and validated by the first code is executed before executing a related group-element obtained from the group, and wherein the related group-element of the group is executed only once during execution of the groupable program flow documents for performing the task. In an embodiment, the establishing step includes identifying a name attribute specified in the one or more related group-elements for establishing the one or more groups.11-12-2009
20100138836System and Method for Reducing Serialization in Transactional Memory Using Gang Release of Blocked Threads - Transactional Lock Elision (TLE) may allow multiple threads to concurrently execute critical sections as speculative transactions. Transactions may abort due to various reasons. To avoid starvation, transactions may revert to execution using mutual exclusion when transactional execution fails. Because threads may revert to mutual exclusion in response to the mutual exclusion of other threads, a positive feedback loop may form in times of high congestion, causing a “lemming effect”. To regain the benefits of concurrent transactional execution, the system may allow one or more threads awaiting a given lock to be released from the wait queue and instead attempt transactional execution. A gang release may allow a subset of waiting threads to be released simultaneously. The subset may be chosen dependent on the number of waiting threads, historical abort relationships between threads, analysis of transactions of each thread, sensitivity of each thread to abort, and/or other thread-local or global criteria.06-03-2010
20080250411RULE BASED ENGINE FOR VALIDATING FINANCIAL TRANSACTIONS - A method and system for checking whether customer orders for transactions of financial instruments conform to business logic rules. Executable rule files are created and stored in a repository. New executable rule files can be created by scripting the new business logic rules in a script file which is converted into a corresponding source code file written in a computer programming language. The source code file is compiled to create an individual executable rule file. A rule selection repository contains identification of groups of selected executable rule files. The invention determines the category of the customer order and reads, from the rule selection repository, a group of executable rule files that correspond to the identified category of the customer order. The selected executable rule files are executed to check the conformance of the customer order. Execution results are stored in a status repository for subsequent retrieval and analysis.10-09-2008
20110209151AUTOMATIC SUSPEND AND RESUME IN HARDWARE TRANSACTIONAL MEMORY - An apparatus and method is disclosed for a computer processor configured to access a memory shared by a plurality of processing cores and to execute a plurality of memory access operations in a transactional mode as a single atomic transaction and to suspend the transactional mode in response to determining an implicit suspend condition, such as a program control transfer. As part of executing the transaction, the processor marks data accessed by the speculative memory access operations as being speculative data. In response to determining a suspend condition (including by detecting a control transfer in an executing thread) the processor suspends the transactional mode of execution, which includes setting a suspend flag and suspending marking speculative data. If the processor later detects a resumption condition (e.g., a return control transfer corresponding to a return from the control transfer), the processor is configured to resume the marking of speculative data.08-25-2011
20090265710Mechanism to Enable and Ensure Failover Integrity and High Availability of Batch Processing - A method, system and computer program product for managing a batch processing job is presented. The method includes partitioning a batch processing job for execution by a cluster of computers. One of the computers from the cluster of computers is designated as a primary command server that oversees and coordinates execution of the batch processing job. Stored in an object data grid structure in the primary command server is an alarm setpoint, boundaries, waiting batch processes and executing batch process states. The object data grid structure is replicated and stored as a replicated object grid structure in a failover command server. If the primary command server fails, the failover command server freezes all of the currently executing batch processes, interrogates processing states of the cluster of computers, and restarts execution of the batch processes in the cluster of computers in accordance with the processing states of the cluster of computers.10-22-2009
20110271282Multi-Threaded Sort of Data Items in Spreadsheet Tables - To sort data items in a spreadsheet table, data items in the spreadsheet table are divided into a plurality of blocks. Multiple threads are used to sort the data items in the blocks. After the data items in the blocks are sorted, multiple merge threads are used to generate a final result block. The final result block contains each of the data items in the spreadsheet table. Each of the merge threads is a thread that merges two source blocks to generate a result block. Each of the source blocks is either one of the sorted blocks or one of the result blocks generated by another one of the merge threads. A sorted version of the spreadsheet table is then displayed. The data items in the sorted version of the spreadsheet table are ordered according to an order of the data items in the final result block.11-03-2011
20080282245Media Operational Queue Management in Storage Systems - A method for media operational queue management in disk storage systems evaluates a plurality of pending storage operations requiring a destage storage operation. A first set of the plurality of pending storage operations is organized in a first array queue grouping (AQG). The AQG is structured such that all of the storage operations are completed within a predefined latency period. A computer-implemented method manages a plurality of pending storage operations in a disk storage system. A pending operation queue is examined to determine a plurality of read and write operations for a first array. A first set of the plurality of read and write operations is grouped into a first array queue grouping (AQG). The first set of the plurality of read and write operations is sent to a redundant array of independent disks (RAID) controller adapter for processing.11-13-2008
20090119667METHOD AND APPARATUS FOR IMPLEMENTING TRANSACTION MEMORY - A method and apparatus for implementing transactional memory (TM). The method includes: allocating a hardware-based transaction footprint recorder to the transaction, for recording footprints of the transaction when a transaction is begun; determining that the transaction is to be switched out; and switching out the transaction, where the footprints of the switched-out transaction are still kept in the hardware-based transaction footprint recorder. According to the present invention, transaction switching is supported by TM, and the cost of conflict detection between an active transaction and a switched-out transaction is greatly reduced since the footprints of the switched-out transaction are still kept in the hardware-based transaction footprint recorder.05-07-2009
20090328044Transfer of Event Logs for Replication of Executing Programs - A mechanism for replicating programs executing on a computer system having a first storage means is provided. The mechanism identifies the events corresponding to requests from one executing program, which may be different from the executing program to be replicated, which are non-deterministic and identifies the ‘Non Abortable Events’ (NAE's), which change irremediably the state of the external world that need to be reproduced in the replay of the programs. These events are immediately transferred for replay and the executing program is blocked until the transfer is acknowledged. For the other non-deterministic events, they are logged and sent to the executing program, the executing programs remaining blocked only if the log is full and/or if a timer between two NAEs expires, in this case a log transfer to the standby machine is performed to prepare replication before unblocking of the executing program.12-31-2009
20090031311PROCESSING TECHNIQUES FOR SERVERS HANDLING CLIENT/SERVER TRAFFIC AND COMMUNICATIONS - The present invention relates to a system for handling client/server traffic and communications pertaining to the delivery of hypertext information to a client. The system includes a central server which processes a request for a web page from a client. The central server is in communication with a number of processing/storage entities, such as an annotation means, a cache, and a number of servers which provide identification information. The system operates by receiving a request for a web page from a client. The cache is then examined to determine whether information for the requested web page is available. If such information is available, it is forwarded promptly to the client for display. Otherwise, the central server retrieves the relevant information for the requested web page from the pertinent server. The relevant information is then processed by the annotation means to generate additional relevant computer information that can be incorporated to create an annotated version of the requested web page which includes additional displayable hypertext information. The central server then relays the additional relevant computer information to the client so as to allow the annotated version of the requested web page to be displayed. In addition, the central server can update the cache with information from the annotated version. The central server can also interact with different servers to collect and maintain statistical usage information. In handling its communications with various processing/storage entities, the operating system running behind the central server utilizes a pool of persistent threads and an independent task queue to improve the efficiency of the central server. A task needs to have a thread assigned to it before the task can be executed. The pool of threads are continually maintained and monitored by the operating system. Whenever a thread is available, the operating system identifies the next executable task in the task queue and assigns the available thread to such task so as to allow it to be executed. Upon conclusion of the task execution, the assigned thread is released back into the thread pool. An additional I/O queue for specifically handling input/output tasks can also be used to further improve the efficiency of the central server.01-29-2009
20090164999Job execution system, portable terminal apparatus, job execution apparatus, job data transmission and receiving methods, and recording medium - A job execution system has a portable terminal apparatus and a job execution apparatus capable of being interconnected. Job data stored in a storage of the portable terminal apparatus is automatically transmitted to the job execution apparatus, if establishment of a connection between the portable terminal apparatus and the job execution apparatus is detected on the portable terminal apparatus, or alternatively, if establishment of a connection between the portable terminal apparatus and the job execution apparatus is detected on the job execution apparatus and then a request for the job data is transmitted to the portable terminal apparatus from the job execution apparatus.06-25-2009
20090049444Service request execution architecture for a communications service provider - A service request execution architecture promotes acceptance and use of self-service provisioning by consumers, leading to increased revenue and cost savings for the service provider as consumers order additional services. The architecture greatly reduces the technical burden of managing exceptions that occur while processing requests for services. The architecture accelerates the process of fulfilling requests for services by efficiently and effectively reducing the system resources needed to process exceptions by eliminating redundant exceptions corresponding to related service requests.02-19-2009
20090178042Managing A Workload In A Database - Described herein is a workload manager for managing a workload in a database that includes: an admission controller operating to divide the workload into a plurality of batches, with each batch having at least one workload process to be performed in the database, and each batch having a memory requirement based on the available memory for processing workloads in the database; a scheduler operating to assign a unique priority to each of the at least one workload process in each of the plurality of batches, the unique priority provides an order in which each workload process is executed in the database; and an execution manager operating to execute the at least one workload process in each of the plurality of batches in accordance with the unique priority assigned to each workload process.07-09-2009
20090172677Efficient State Management System - The present invention provides an efficient state management system for a complex ASIC, and applications thereof. In an embodiment, a computer-based system executes state-dependent processes. The computer-based system includes a command processor (CP) and a plurality of processing blocks. The CP receives commands in a command stream and manages a global state responsive to global context events in the command stream. The plurality of processing blocks receive the commands in the command stream and manage respective block states responsive to block context events in the command stream. Each respective processing block executes a process on data in a data stream based on the global state and the block state of the respective processing block.07-02-2009
20090187906SEMI-ORDERED TRANSACTIONS - Embodiments of the present invention provide a system that facilitates transactional execution in a processor. The system starts by executing program code for a thread in a processor. Upon detecting a predetermined indicator, the system starts a transaction for a section of the program code for the thread. When starting the transaction, the system executes a checkpoint instruction. If the checkpoint instruction is a WEAK_CHECKPOINT instruction, the system executes a semi-ordered transaction. During the semi-ordered transaction, the system preserves code atomicity but not memory atomicity. Otherwise, the system executes a regular transaction. During the regular transaction, the system preserves both code atomicity and memory atomicity.07-23-2009
20090064149Latency coverage and adoption to multiprocessor test generator template creation - A multi-core multi-node processor system has a plurality of multiprocessor nodes, each including a plurality of microprocessor cores. The plurality of microprocessor nodes and cores are connected and form a transactional communication network. The multi-core multi-node processor system has further one or more buffer units collecting transaction data relating to transactions sent from one core to another core. An agent is included which calculates latency data from the collected transaction data, processes the calculated latency data to gather transaction latency coverage data, and creates random test generator templates from the gathered transaction latency coverage data. The transaction latency coverage data indicates at least the latencies of the transactions detected during collection of the transaction data having a pre-determined latency, and includes, for example, four components for transaction type latency, transaction sequence latency, transaction overlap latency, and packet distance latency. Thus, random test generator templates may be created using latency coverage.03-05-2009
20090055824TASK INITIATOR AND METHOD FOR INITIATING TASKS FOR A VEHICLE INFORMATION SYSTEM - Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to.02-26-2009
20110225586Intelligent Transaction Merging - An apparatus and methods are disclosed for intelligently determining when to merge transactions to backup storage. In particular, in accordance with the illustrative embodiment, queued transactions may be merged based on a variety of criteria, including, but not limited to, one or more of the following: the number of queued transactions; the rate of growth of the number of queued transactions; the calendrical time; estimates of the time required to execute the individual transactions; a measure of importance of the individual transactions; the transaction types of the individual transactions; a measure of importance of one or more data updated by the individual transactions; a measure of availability of one or more resources; a current estimate of the time penalty associated with shadowing a page of memory; and the probability of rollback for the individual transactions, and for the merged transaction.09-15-2011
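
A few of the listed criteria can be combined into a simple merge decision, as sketched below. The specific thresholds and the weighting are assumptions, not values taken from the patent.

```python
def should_merge(est_costs_ms, growth_rate_per_s, shadow_penalty_ms,
                 rollback_probabilities,
                 depth_threshold=100, growth_threshold=10.0, risk_threshold=0.05):
    """Sketch: merge queued backup transactions when the queue is under
    pressure, the shadowing penalty saved is real, and the combined rollback
    risk of the merged transaction stays acceptable."""
    pressure = (len(est_costs_ms) > depth_threshold
                or growth_rate_per_s > growth_threshold)
    # Merging pays the page-shadowing penalty once instead of once per transaction.
    time_saved_ms = shadow_penalty_ms * max(0, len(est_costs_ms) - 1)
    # But a rollback of the merged transaction discards all of its members.
    prob_no_rollback = 1.0
    for p in rollback_probabilities:
        prob_no_rollback *= (1.0 - p)
    merged_rollback_risk = 1.0 - prob_no_rollback
    return pressure and time_saved_ms > 0 and merged_rollback_risk < risk_threshold
```
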
20090260011COMMAND LINE TRANSACTIONS - A computer system with a command shell that supports execution of commands within transactions. The command shell responds to commands that start, complete or undo transactions. To support transactions, the command shell may maintain and provide transaction state information. The command shell may interact with a transaction manager that interfaces with resource managers that process transacted instructions within transacted task modules to commit or roll back transacted instructions from those task modules based on transaction state information maintained by the shell. Parameters associated with commands can control behavior in association with transaction process, including supporting nesting transactions and non-nested transactions and bypassing transacted processing in some instances of a command.10-15-2009
20090064148Linking Transactions with Separate Systems - Methods and apparatuses enable linking stateful transactions with multiple separate systems. The first and second stateful transactions are associated with a transaction identifier. Real time data from each of the multiple systems is concurrently presented within a single operation context to provide a transparent user experience. Context data may be passed from one system to another to provide a context in which operations in the separate systems can be linked.03-05-2009
20110231848FORECASTING SYSTEMS AND METHODS - Improved methods and systems are provided for asynchronously updating forecast rollup numbers. The asynchronicity is achieved by decoupling the source data change from further manipulations of the source data, for example in calculating and updating forecast rollup numbers by user role hierarchy, layer by layer. An event message queue implementation can be used for asynchronous processing. The process works by dequeuing a batch of event messages and then deduping and sorting them before applying forecast logic. Forecast numbers are updated based on target data and then rolled up the user role levels by aggregating forecast numbers for all subordinate forecast data entries.09-22-2011
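
The dequeue-dedupe-sort-rollup pipeline might be prototyped as follows; the event fields and the way the role hierarchy is represented (a child-to-parent map) are assumptions made for the sketch.

```python
from collections import defaultdict

def process_forecast_events(event_batch, forecasts, parent_of):
    """Sketch: dedupe and sort a batch of dequeued change events, apply them to
    the per-user forecasts, then roll the numbers up the role hierarchy."""
    # Dedupe: keep only the latest event per (user, period).
    latest = {}
    for event in sorted(event_batch, key=lambda e: e["timestamp"]):
        latest[(event["user"], event["period"])] = event
    for (user, period), event in sorted(latest.items()):
        forecasts[(user, period)] = event["amount"]

    # Roll up layer by layer: each user's number is added to every ancestor role.
    rollup = defaultdict(float)
    for (user, period), amount in forecasts.items():
        node = user
        while node is not None:
            rollup[(node, period)] += amount
            node = parent_of.get(node)
    return dict(rollup)
```
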
20090254905FACILITATING TRANSACTIONAL EXECUTION IN A PROCESSOR THAT SUPPORTS SIMULTANEOUS SPECULATIVE THREADING - Embodiments of the present invention provide a system that executes a transaction on a simultaneous speculative threading (SST) processor. In these embodiments, the processor includes a primary strand and a subordinate strand. Upon encountering a transaction with the primary strand while executing instructions non-transactionally, the processor checkpoints the primary strand and executes the transaction with the primary strand while continuing to non-transactionally execute deferred instructions with the subordinate strand. When the subordinate strand non-transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate the first strand ID. When the primary strand transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate a second strand ID.10-08-2009
20090222824Distributed transactions on mobile phones - A message is received by a mobile phone via a messaging service provided by a mobile network operator, wherein the messaging service is supported by the mobile phone. It is determined whether the message is associated with a distributed transaction. The message is forwarded to a resource manager resident on the mobile phone if the message is associated with the distributed transaction. The resource manager performs an action upon receiving the message based on contents of the message, wherein the action is associated with the distributed transaction.09-03-2009
20090222822Nested Queued Transaction Manager - A method and apparatus that manages transactions during a data migration. The transfer of data from an old database to a new database is structured as a set of small transactions. The transactions can be structured in a hierarchy of dependent transactions such that the transactions are nested or similarly hierarchical. A migration manager includes a set of transaction management methods or processes that enable the processing of the nested transactions thereby providing a higher level of granularity in transaction size and providing the ability to rollback small individual transactions as well as affected related transactions. The transaction management methods and processes manage a set of queues that are utilized by the migration manager to generate and execute the nested transactions.09-03-2009
20090222821Non-Saturating Fairness Protocol and Method for NACKing Systems - Processing transaction requests in a shared memory multi-processor computer network is described. A transaction request is received at a servicing agent from a requesting agent. The transaction request includes a request priority associated with a transaction urgency generated by the requesting agent. The servicing agent provides an assigned priority to the transaction request based on the request priority, and then compares the assigned priority to an existing service level at the servicing agent to determine whether to complete or reject the transaction request. A reply message from the servicing agent to the requesting agent is generated to indicate whether the transaction request was completed or rejected, and to provide reply fairness state data for rejected transaction requests.09-03-2009
20090235258Multi-Thread Peripheral Processing Using Dedicated Peripheral Bus - One embodiment of the present invention performs peripheral operations in a multi-thread processor. A peripheral bus is coupled to a peripheral unit to transfer peripheral information including a command message specifying a peripheral operation. A processing slice is coupled to the peripheral bus to execute a plurality of threads. The plurality of threads includes a first thread sending the command message to the peripheral unit.09-17-2009
20090222823QUEUED TRANSACTION PROCESSING - A method, system, and computer program product for processing a transaction between a client and an application server asynchronously in a distributed transaction processing environment having at least one transaction queue manager. An application request is received from a client to initiate a transaction. The request is placed in a transaction request queue by the transaction queue manager. The request is processed at the application server asynchronously relative to the receipt of the request. A response to the request is determined, and the response is placed in a transaction response queue for retrieval by the client.09-03-2009
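A minimal Python sketch of the queued, asynchronous request/response pattern described in the entry above; the thread layout, queue names and the submit helper are illustrative assumptions rather than the patented design:

    import queue
    import threading

    request_queue = queue.Queue()      # transaction request queue
    response_queues = {}               # client_id -> transaction response queue

    def application_server():
        # Drains requests independently of when the client submitted them.
        while True:
            client_id, txn = request_queue.get()
            if txn is None:            # shutdown sentinel
                break
            result = f"processed {txn}"             # stand-in for real transaction work
            response_queues[client_id].put(result)

    def submit(client_id, txn):
        # Client side: enqueue the request and return immediately.
        response_queues.setdefault(client_id, queue.Queue())
        request_queue.put((client_id, txn))

    server = threading.Thread(target=application_server)
    server.start()
    submit("client-1", "debit account 42")
    print(response_queues["client-1"].get())         # client retrieves the response later
    request_queue.put((None, None))
    server.join()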
20100275207GATHERING STATISTICS IN A PROCESS WITHOUT SYNCHRONIZATION - Each processing resource in a scheduler of a process executing on a computer system maintains counts of the number of tasks that arrive at the processing resource and the number of tasks that complete on the processing resource. The counts are maintained in storage that is only writeable by the corresponding processing resource. The scheduler collects and sums the counts from each processing resource and provides statistics based on the summed counts and previous summed counts to a resource manager in response to a request from the resource manager. The scheduler does not reset the counts when the counts are collected and stores copies of the summed counts for use with the next request from the resource manager. The counts may be maintained without synchronization and with thread safety to minimize the impact of gathering statistics on the application.10-28-2010
20120272246DYNAMICALLY SCALABLE PER-CPU COUNTERS - Embodiments include a multiprocessing method including obtaining a local count of a processor event at each of a plurality of processors in a multiprocessor system. A total count of the processor event is dynamically updated to include the local count at each processor having reached an associated batch size. The batch size associated with one or more of the processors is dynamically varied according to the value of the total count.10-25-2012
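A small Python sketch of the per-processor batched counting idea in the entry above; the scaling rule for the batch size (one per cent of the current total) is an assumption made for illustration:

    import threading

    total = 0
    total_lock = threading.Lock()

    class PerCpuCounter:
        def __init__(self):
            self.local = 0
            self.batch_size = 1

        def add(self, n=1):
            global total
            self.local += n
            if self.local >= self.batch_size:
                with total_lock:                       # shared counter touched once per batch
                    total += self.local
                    # assumed rule: grow the batch size as the total grows
                    self.batch_size = max(1, total // 100)
                self.local = 0

    counters = [PerCpuCounter() for _ in range(4)]     # one per "CPU"
    for i in range(10_000):
        counters[i % 4].add()
    print(total)    # close to 10,000; the remainder still sits in the local counts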
20090249342SYSTEMS AND METHODS FOR TRANSACTION QUEUE ANALYSIS - A method and system for determining a wait time for a transaction queue is disclosed. In the method, video data related to a first transaction queue is received. The video data is processed to determine a number of items presented by a first entity for a transaction in the first transaction queue. A total transaction time is estimated for the first entity based on the number of items presented by the first entity and a transaction time for each of the number of items. A wait time for the first transaction queue is determined based on the estimated total transaction time for the first entity. If the wait time for the first transaction queue is greater than a first threshold, then the availability of a second transaction queue is indicated to a second entity.10-01-2009
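The wait-time estimate reduces to simple arithmetic once item counts are known; a back-of-the-envelope Python version follows, in which the per-item time, per-entity overhead and threshold are assumed numbers:

    PER_ITEM_SECONDS = 3.0
    PER_ENTITY_OVERHEAD = 20.0      # payment, bagging, etc.
    WAIT_THRESHOLD = 180.0          # point at which a second queue is indicated

    def queue_wait_time(item_counts):
        # item_counts: items presented by each entity already in the queue
        return sum(PER_ENTITY_OVERHEAD + n * PER_ITEM_SECONDS for n in item_counts)

    wait = queue_wait_time([12, 5, 30])
    if wait > WAIT_THRESHOLD:
        print(f"estimated wait {wait:.0f}s - indicate availability of a second queue")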
20100162248COMPLEX DEPENDENCY GRAPH WITH BOTTOM-UP CONSTRAINT MATCHING FOR BATCH PROCESSING - Architecture that includes a batch framework engine incorporated into the server and that supports a rich set of dependencies between tasks in a single batch job. A bottom-up approach is employed where analysis is performed to determine whether a task can run based on its parent tasks. The framework runs batch jobs without the need of a client, and provides the ability to create dependencies between tasks, which allow the execution of tasks in parallel or in sequence. Using an AND/OR relationship engine, a task can require that all parent tasks (logical AND) meet requirements to run, or that only one parent (logical OR) is required to meet its requirements in order to run. Clean-up or non-important tasks can have a flag set where even if such tasks fail when executing, the batch job will ignore these tasks when defining the final status of the job.06-24-2010
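A toy Python version of the bottom-up AND/OR readiness check described above; the task names, status values and the ignore_on_failure flag are hypothetical:

    def can_run(task, status):
        # AND: every parent must have finished; OR: any one finished parent suffices.
        parents = task["parents"]
        if not parents:
            return True
        done = [status.get(p) == "finished" for p in parents]
        return all(done) if task["relation"] == "AND" else any(done)

    status = {"load": "finished", "validate": "failed"}
    report = {"parents": ["load", "validate"], "relation": "OR", "ignore_on_failure": False}
    print(can_run(report, status))   # True: an OR task needs only one finished parent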
20080307418Enabling and Disabling Byte Code Inserted Probes Based on Transaction Monitoring Tokens - A method of enabling transaction probes used to monitor a transaction or modify a primary application handling the transaction. The method begins with retrieving a token associated with the transaction. The token contains information regarding which transaction probes from a plurality of transaction probes will be enabled with respect to the transaction. The token is then read to determine the set of transaction probes from the plurality of transaction probes that will be enabled. The determined set of transaction probes is then enabled.12-11-2008
20080307417Document registration system, information processing apparatus, and computer usable medium therefor - A document registration system for registering a plurality of electronic documents is provided. The document registration system includes an information processing apparatus having a display unit and a storage unit. The information processing apparatus is provided with a registration unit, which can be operated to perform a registration process to register the electronic documents in an interactive processing mode, wherein the electronic documents are registered manually, and in a batch processing mode, wherein the electronic documents are registered automatically in a batch, and a first switching unit to mutually switch activation of the interactive processing mode and the batch processing mode.12-11-2008
20100186015METHOD AND APPARATUS FOR IMPLEMENTING A TRANSACTIONAL STORE SYSTEM USING A HELPER THREAD - A method, apparatus, and computer readable article of manufacture for executing a transaction by a processor apparatus that includes a plurality of hardware threads. The method includes the steps of: creating a main software thread for executing the transaction; creating a helper software thread for executing a barrier function; executing the main software thread and the helper software thread using the plurality of hardware threads; deciding whether the execution of the barrier function is required; executing the barrier function by the helper software thread; and returning to the main software thread. The step of executing the barrier function includes: stalling the main software thread; activating the helper software thread; and exiting the helper software thread in response to completion of the execution.07-22-2010
20100162250OPTIMIZATION FOR SAFE ELIMINATION OF WEAK ATOMICITY OVERHEAD - A method and apparatus for optimizing weak atomicity overhead is herein described. A state table is maintained either during static or dynamic compilation of code to track data non-transactionally accessed. Within execution of a transaction, such as at transactional memory accesses or within a commit function, it is determined if data associated with memory access within the transaction is to be conflictingly accessed outside the transaction from the state table. If the data is not accessed outside the transaction, then the transaction potentially commits without weak atomicity safety mechanisms, such as privatization. Furthermore, even if data is accessed outside the transaction, optimized safety mechanisms may be performed to ensure isolation between the potentially conflicting accesses, while eliding the mechanisms for data not accessed outside the transaction.06-24-2010
20100162249OPTIMIZING QUIESCENCE IN A SOFTWARE TRANSACTIONAL MEMORY (STM) SYSTEM - A method and apparatus for optimizing quiescence in a transactional memory system is herein described. Non-ordering transactions, such as read-only transactions, transactions that do not access non-transactional data, and write-buffering hardware transactions, are identified. Quiescence in weak atomicity software transactional memory (STM) systems is optimized through selective application of quiescence. As a result, transactions may be decoupled from dependency on quiescing/waiting on previous non-ordering transaction to increase parallelization and reduce inefficiency based on serialization of transactions.06-24-2010
20100162247METHODS AND SYSTEMS FOR TRANSACTIONAL NESTED PARALLELISM - Methods and systems for executing nested concurrent threads of a transaction are presented. In one embodiment, in response to executing a parent transaction, a first group of one or more concurrent threads including a first thread is created. The first thread is associated with a transactional descriptor comprising a pointer to the parent transaction.06-24-2010
20100162246USE OF ANALYTICS TO SCHEDULE, SUBMIT OR MONITOR PROCESSES OR REPORTS - Embodiments of the invention provide for executing a batch process on a repository of information. According to one embodiment, executing a batch process can comprise presenting one or more aspects of records of the repository and receiving a selection of a criteria for at least one aspect of the records. Records matching the selected criteria can be identified and a summary of the information can be presented. The batch process can comprise one of a plurality of batch processes. In such a case, a selection of the batch process can be received and parameters of the batch process can be populated with the selected criteria. The batch process can then be executed with the parameters. For example, executing the batch process can comprise generating a report based on the parameters and the records of the repository.06-24-2010
20100169886DISTRIBUTED MEMORY SYNCHRONIZED PROCESSING ARCHITECTURE - A data processing system comprises a plurality of processors, where each processor is coupled to a respective dedicated memory. The data processing system also comprises a voter module that is disposed between the plurality of processors and one or more peripheral devices such as a network interface, output device, input device, or the like. Each processor provides an I/O transaction to the voter module and the voter module determines whether a majority (or predominant) transaction is present among the I/O transactions received from each of the processors. If a majority transaction is present, the voter module releases the majority transaction to the peripheral. However, if no majority transaction is determined, the system outputs a no majority transaction signal (or raises an exception). Also, a processor error signal (or exception) is output for any processor providing an I/O transaction not corresponding to the majority transaction. The error signal may also optionally prompt the recovery of any or all processors with methods such as but not limited to reboot/reset based upon predetermined or emergent criteria.07-01-2010
20120198456REDUCING THE NUMBER OF OPERATIONS PERFORMED BY A PERSISTENCE MANAGER AGAINST A PERSISTENT STORE OF DATA ITEMS - Method, apparatus, and computer program product for reducing the number of operations performed by a persistence manager against a persistent store of data items. A plurality of requests from an application are received. Each request is mapped into a transaction for performance against the persistent store, each transaction having at least one operation. Transactions are accumulated and preprocessed to reduce the number of operations for performance against the persistent store.08-02-2012
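One plausible form of the preprocessing step, assuming that "reducing the number of operations" means coalescing repeated writes to the same data item so the persistent store sees a single operation per item; a Python sketch:

    def coalesce(operations):
        # operations: list of (item_key, value) writes in arrival order
        latest = {}
        for key, value in operations:
            latest[key] = value          # a later write supersedes an earlier one
        return list(latest.items())      # one store operation per distinct item

    ops = [("cust:1", "A"), ("cust:2", "B"), ("cust:1", "C")]
    print(coalesce(ops))                 # [('cust:1', 'C'), ('cust:2', 'B')] - 2 ops instead of 3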
20090077555TECHNIQUES FOR IMPLEMENTING SEPARATION OF DUTIES USING PRIME NUMBERS - A technique for implementing separation of duties for transactions includes determining a current task assignment number of an entity. The technique also includes determining whether the entity can perform a new task based upon the current task assignment number and a task transaction number (which is based on at least one prime number) assigned to the new task.03-19-2009
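One plausible reading of the prime-number scheme, with the encoding details assumed for illustration: duties in the same conflict group share a prime, an entity's task assignment number is the product of the primes of duties it has performed, and a new duty is refused when its prime already divides that product:

    DUTY_PRIMES = {"create_payment": 2, "approve_payment": 2, "audit": 5}   # conflicting duties share a prime

    def can_perform(assignment_number, duty):
        return assignment_number % DUTY_PRIMES[duty] != 0

    def perform(assignment_number, duty):
        if not can_perform(assignment_number, duty):
            raise PermissionError(f"separation of duties violated for {duty}")
        return assignment_number * DUTY_PRIMES[duty]

    n = 1                                    # fresh entity, no duties performed yet
    n = perform(n, "create_payment")         # allowed; n becomes 2
    print(can_perform(n, "approve_payment")) # False - the same entity may not also approve
    print(can_perform(n, "audit"))           # True  - auditing is in a different conflict group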
20090077554APPARATUS, SYSTEM, AND METHOD FOR DYNAMIC ADDRESS TRACKING - An apparatus, system, and method are disclosed for dynamic address tracking. A token module creates a token for a job that accesses data in a storage system comprising a plurality of storage devices. The token comprises a job name. The job is a batch job. A storage module stores location information for the data accessed by the job in a token table. The location information is indexed by the token. In addition, the location information includes an input/output device name, an address space, a data set name, and a storage device name. A communication module receives a diagnostic command comprising the job name. The token module reconstructs the token using the job name. The storage module retrieves the location information indexed by the token in response to the diagnostic command.03-19-2009
20090077556IMAGE MEDIA MODIFIER - A method and apparatus for back-end processing of a recordable media production job after it has been generated and sent to a recordable media production system is described that intercepts the image file generation at a low level within the recordable media production system and allows for addition, deletion and modification of the underlying data files and/or modification of the production job itself under control of an external user defined process, such as an application, DLL, script or plug-in. This interception of the image file generation occurs before the final image is assembled and handed off to the media recorder/producer to be written to the recordable media and is invoked at multiple stages of reading the production job edit list, allowing changes to occur at each stage of the imaging or pre-mastering (file system creation) process.03-19-2009
20100162245RUNTIME TASK WITH INHERITED DEPENDENCIES FOR BATCH PROCESSING - A batch job processing architecture that dynamically creates runtime tasks for batch job execution and to optimize parallelism. The task creation can be based on the amount of processing power available locally or across batch servers. The work can be allocated across multiple threads in multiple batch server instances as there are available. A master task splits the items to be processed into smaller parts and creates a runtime task for each. The batch server picks up and executes as many runtime tasks as the server is configured to handle. The runtime tasks can be run in parallel to maximize hardware utilization. Scalability is provided by splitting runtime task execution across available batch server instances, and also across machines. During runtime task creation, all dependency and batch group information is propagated from the master task to all runtime tasks. Dependencies and batch group configuration are honored by the batch engine.06-24-2010
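A Python sketch of the split step only; the chunk size, worker count and the dependency fields copied from the master task are illustrative assumptions:

    from concurrent.futures import ThreadPoolExecutor

    def make_runtime_tasks(items, chunk_size, master_info):
        # The master task splits the items and stamps its dependency/batch-group
        # information onto every runtime task it creates.
        for i in range(0, len(items), chunk_size):
            yield {"items": items[i:i + chunk_size], **master_info}

    def run_task(task):
        return sum(task["items"])            # stand-in for the real per-item work

    master_info = {"batch_group": "nightly", "depends_on": ["load-step"]}
    tasks = list(make_runtime_tasks(list(range(100)), chunk_size=25, master_info=master_info))
    with ThreadPoolExecutor(max_workers=4) as pool:   # as many as the server is configured to handle
        print(list(pool.map(run_task, tasks)))        # [300, 925, 1550, 2175]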
20100146509SELECTION OF TRANSACTION MANAGERS BASED ON TRANSACTION METADATA - One or more transaction managers are automatically selected from a plurality of transaction managers for use in processing a transaction. The selection is based on types of resources used by the transaction and supported resource types of the transaction managers. The selection of the one or more transaction managers enables less than all of the transaction managers of an application server to be used in transaction commit processing, thereby improving performance.06-10-2010
20100211952BUSINESS EVENT PROCESSING - Techniques for business event processing are presented. Producer services produce events that are managed and distributed by a transport service. Consumer services acquire events from the transport service and perform actions in response to the events. The production, distribution, and processing of the events and actions may be asynchronously and concurrently performed.08-19-2010
20100146510Automated Scheduling of Mass Data Run Objects - Techniques are described in which indication of a computer application to be configured for use in a particular business enterprise is received. A mass data run object is identified. The mass data run object defines a computer operation to be performed by the computer application to transform business transaction data as part of a business process. The mass data run object identifies i) selection parameters to select business transaction data to be transformed by the computer operation defined by the mass data run object and ii) instructions, that when executed, perform the computer operation to transform the selected business transaction data. A mass data run object instance corresponding to the identified mass data run object is generated and scheduled for execution.06-10-2010
20090031309System and Method for Split Hardware Transactions - A split hardware transaction may split an atomic block of code to be executed using multiple hardware transactions, while logically taking effect as a single atomic transaction. A split hardware transaction may use software to combine the multiple hardware transactions into one logically atomic operation. In some embodiments, a split hardware transaction may allow execution of atomic blocks including non-hardware-transactionable (NHT) operations without resorting to exclusively software transactions. A split hardware transaction may maintain a thread-local buffer that logs all memory accesses performed by the split hardware transaction. A split hardware transaction may use a hardware transaction to copy values read from shared memory locations into a local memory buffer. To execute a non-hardware-transactionable operation, the split hardware transaction may commit the active hardware transaction, execute the non-hardware-transactionable operation, and then initiate a new hardware transaction to execute the rest of the atomic block.01-29-2009
20100083257ARRAY OBJECT CONCURRENCY IN STM - A software transactional memory system is provided that creates an array of transactional locks for each array object that is accessed by transactions. The system divides the array object into non-overlapping portions and associates each portion with a different transactional lock. The system acquires transactional locks for transactions that access corresponding portions of the array object. By doing so, different portions of the array object can be accessed by different transactions concurrently. The system may use a shared shadow or undo copy for accesses to the array object.04-01-2010
20100083255NOTIFICATION BATCHING BASED ON USER STATE - Batching messages such as notifications intended for a user to preserve battery life on a computing device associated with the user. A server such as a proxy server receives the messages from one or more service providers. The proxy server maintains a state of the user. If the state indicates that the user is idle, the messages are stored at the proxy server unless the messages correspond to activating messages. The activating messages are sent to the user upon receipt. The stored messages are sent when the state changes to an active state or when a defined duration of time elapses. In some embodiments, the messages are presence notifications in an instant messaging session on a mobile computing device. By reducing the frequency of sent notifications, the battery life of the mobile computing device is preserved.04-01-2010
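A rough Python sketch of the proxy-side policy described above; the state names, the timer interval and the activating flag handling are assumptions:

    import time

    class NotificationProxy:
        def __init__(self, flush_after=300):
            self.state = "idle"
            self.pending = []
            self.flush_after = flush_after
            self.last_flush = time.monotonic()

        def deliver(self, message):
            print("sent to device:", message)      # stand-in for a push to the handset

        def on_message(self, message, activating=False):
            # Activating messages and messages for an active user go out immediately;
            # everything else is held to avoid waking the device's radio.
            if self.state == "active" or activating:
                self.deliver(message)
            else:
                self.pending.append(message)
                if time.monotonic() - self.last_flush >= self.flush_after:
                    self.flush()

        def set_state(self, state):
            self.state = state
            if state == "active":
                self.flush()

        def flush(self):
            for m in self.pending:
                self.deliver(m)
            self.pending.clear()
            self.last_flush = time.monotonic()

    proxy = NotificationProxy()
    proxy.on_message("alice is online")     # buffered: the user is idle
    proxy.set_state("active")               # the buffered notification is flushed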
20090328043INFRASTRUCTURE OF DATA SUMMARIZATION INCLUDING LIGHT PROGRAMS AND HELPER STEPS - A method of summarizing data includes providing a multi-method summarization program including instructions for summarizing data for a transaction processing system. At least one functional aspect of the transaction processing system for which a summarization of a subset of the data is desired is determined. The functional subset is exposed to a user as a light summarization program. The dependencies of the functional subset can be enforced at runtime, allowing packaging flexibility. A method for efficient parallel processing involving requests for help that are not necessarily filled is also provided.12-31-2009
20110067028DISTRIBUTED SERVICE POINT TRANSACTION SYSTEM - A device for processing electronic transactions is disclosed. The device includes a processor configured to receive, from a client processing device, a request for information to complete an electronic transaction by a user at an access device affiliated with an educational institution. The processor is further configured to transmit, to the client processing device, a response to the request, the response configured to be transmitted by the client processing device to the access device. The request for information is triggered at the access device by an identification carrier. The response to the request includes at least one of a permission or a denial of whether to provide the user with access to an educational space or item or access to electronic educational information, or a determination of at least one of the price and availability of an educational item for the user. A client-side device is also disclosed. Methods and machine-readable mediums are also disclosed.03-17-2011
20090164998Management of speculative transactions - Circuitry for receiving transaction requests from a plurality of masters and the masters themselves are disclosed. The circuitry comprises: an input port for receiving said transaction requests, at least one of said transaction requests received comprising an indicator indicating if said transaction is a speculative transaction; an output port for outputting a response to said master said transaction request was received from; and transaction control circuitry; wherein said transaction control circuitry is responsive to a speculative transaction request to determine a state of at least a portion of a data processing apparatus said circuitry is operating within and in response to said state being a predetermined state said transaction control circuitry generates a transaction cancel indicator and outputs said transaction cancel indicator as said response, said transaction cancel indicator indicating to said master that said speculative transaction will not be performed.06-25-2009
20090113430HARDWARE DEVICE INTERFACE SUPPORTING TRANSACTION AUTHENTICATION - A hardware device interface supporting transaction authentication is described herein. At least some illustrative embodiments include a device, including an interconnect interface, and processing logic (coupled to the interconnect interface) that provides access to a plurality of functions of the device through the interconnect interface. A first transaction received by the device, and associated with a function of the plurality of functions, causes a request identifier within the first transaction to be assigned to the function. Access to the function is denied if a request identifier of a second transaction, subsequent to the first transaction, does not match the request identifier assigned to the function.04-30-2009
20090113431METHOD FOR DETERMINING PARTICIPATION IN A DISTRIBUTED TRANSACTION - A method and system for determining whether a plurality of participants who are participating in a distributed transaction have registered their intention to commit their part of the transaction with a transaction manager, the method comprising the steps of: receiving a message from a participant, the message comprising a character sequence identifying the participant and the part of the transaction which the participant is processing; analyzing the character sequence to determine whether the character sequence further comprises an identifier for identifying whether a subsequent message is to be received by a second participant; and in dependence on the identifier identifying that there are no further subsequent messages to be received, informing each of the participants to commit their part of the transaction.04-30-2009
20110055834Enrollment Processing - A system for enrollment processing optimization for controlling batch job processing traffic transmitted to a mainframe computer includes an enrollment data input operations system operatively coupled to the mainframe computer and configured to provide a universal front end for data entry of enrollment information. Enrollment records based on the enrollment information are then created. A database system stores the enrollment records, and a workflow application module operatively coupled to the database system is configured to manage processing of the enrollment records and manage transmission of the enrollment records to the mainframe computer for batch processing. A batch throttling control module operatively coupled to the workflow application module and to the mainframe computer controls the rate and the number of enrollment records transmitted by the workflow application module to the mainframe computer for batch processing.03-03-2011
20090024998INITIATION OF BATCH JOBS IN MESSAGE QUEUING INFORMATION SYSTEMS - A method, system, and computer program product for initiating batch jobs in a message queuing information system are provided. The method, system, and computer program product provide for monitoring a message queue in the message queuing information system, detecting a predetermined condition in the message queue, determining whether a member name is associated with the predetermined condition, determining whether a server is available responsive to a member name being associated with the predetermined condition, and sending the member name to the server for the server to attach a batch job to load or unload one or more messages in the message queue based on information included in the member name responsive to a server being available.01-22-2009
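A Python sketch of the monitoring step; the queue-depth trigger, the member-name lookup and the server call are illustrative assumptions about how the pieces fit together:

    import queue

    class Server:
        def submit_batch_job(self, member_name):
            print("attaching batch job from member", member_name)   # loads/unloads messages

    def monitor(msg_queue, depth_threshold, condition_members, servers):
        # Detect the predetermined condition and, if a member name is associated
        # with it and a server is available, hand the member name to the server.
        if msg_queue.qsize() >= depth_threshold:
            member = condition_members.get("depth_exceeded")
            if member and servers:
                servers[0].submit_batch_job(member)

    q = queue.Queue()
    for i in range(250):
        q.put(i)
    monitor(q, depth_threshold=200,
            condition_members={"depth_exceeded": "UNLOAD01"},
            servers=[Server()])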
20120246651SYSTEM AND METHOD FOR SUPPORTING BATCH JOB MANAGEMENT IN A DISTRIBUTED TRANSACTION SYSTEM - A system and method can support batch job management in a distributed system using a queue system with a plurality of queues and one or more job management servers. The queue system can represent a life cycle for executing a job by a job execution component, with each queue in the queue system adapted to receive one or more messages that represent a job status in the life cycle for executing the job. The one or more job management servers in the distributed system can direct the job execution component to execute the job, with each job management server monitoring one or more queues in the queue system, and performing at least one operation on the one or more messages in the queue system corresponding to a change of a job status for executing the job.09-27-2012
20090037915Staging block-based transactions - In one embodiment, the present invention includes a method for converting a write request from a file system transaction to a transaction record, forwarding the transaction record to a non-volatile storage for storage, where the transaction record has a different protocol than the file system transaction, and later forwarding it to the target storage. Other embodiments are described and claimed.02-05-2009
20090037913METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS - Automated techniques are disclosed for coordinating request or transaction processing in a data processing system. For example, a technique for handling requests in a data processing system comprises the following steps. A compound request comprising at least two individual requests of different types is received. An individual request r02-05-2009
20100306776DATA CENTER BATCH JOB QUALITY OF SERVICE CONTROL - A machine-controlled method can include determining an extended interval quality of service (QoS) specification for a batch job and determining a remaining data center resource requirement for the batch job based on the extended interval QoS specification. The machine-controlled method can also include determining an immediate QoS specification for the batch job based on the remaining data center resource requirement.12-02-2010
20090241117METHOD FOR INTEGRATING FLOW ORCHESTRATION AND SCHEDULING FOR A BATCH OF WORKFLOWS - Techniques for executing a batch of one or more workflows on one or more domains are provided. The techniques include receiving a request for workflow execution, sending at least one of one or more individual jobs in each workflow and dependency information to a scheduler, computing, by the scheduler, one or more outputs, wherein the one or more outputs are based on one or more performance objectives, and integrating orchestration of one or more workflows and scheduling of at least one of one or more jobs and one or more data transfers, wherein the integrating is used to execute a batch of one or more workflows based on at least one of one or more outputs of the scheduler, static information and run-time information.09-24-2009
20100333093FACILITATING TRANSACTIONAL EXECUTION THROUGH FEEDBACK ABOUT MISSPECULATION - One embodiment provides a system that facilitates the execution of a transaction for a program in a hardware-supported transactional memory system. During operation, the system records a misspeculation indicator of the transaction during execution of the transaction using hardware transactional memory mechanisms. Next, the system detects a transaction failure associated with the transaction. Finally, the system provides the recorded misspeculation indicator to the program to facilitate a response to the transaction failure by the program.12-30-2010
20110016470Transactional Conflict Resolution Based on Locality - Mechanisms are provided for handling conflicts in a transactional memory system. The mechanisms execute threads in a data processing system in a first conflict resolution mode of operation in which threads execute conflicting transactional blocks speculatively. The mechanisms determine, for a transactional block, if the first conflict resolution mode of operation is to be transitioned to a second conflict resolution mode of operation in which threads accessing conflicting transactional blocks are executed serially and non-speculatively. Moreover, the mechanisms execute a thread that accesses the transactional block using the second conflict resolution mode of operation in response to the determination indicating that the first conflict resolution mode of operation is to be transitioned to the second conflict resolution mode of operation.01-20-2011
20110055836METHOD AND DEVICE FOR REDUCING POWER CONSUMPTION IN APPLICATION SPECIFIC INSTRUCTION SET PROCESSORS - A method and device for converting first program code into second program code, such that the second program code has an improved execution on a targeted programmable platform, is disclosed. In one aspect, the method includes grouping operations on data for joint execution on a functional unit of the targeted platform, scheduling operations on data in time, and assigning operations to an appropriate functional unit of the targeted platform. Detailed word length information, rather than the typically used approximations like powers of two, may be used in at least one of the grouping, scheduling or assigning operations.03-03-2011
20090083739NETWORK RESOURCE ACCESS CONTROL METHODS AND SYSTEMS USING TRANSACTIONAL ARTIFACTS - Methods and systems are provided for use with digital data processing systems to control or otherwise limit access to networked resources based, at least in part, on transactional artifacts and/or derived artifacts.03-26-2009
20110078686METHODS AND SYSTEMS FOR HIGHLY AVAILABLE COORDINATED TRANSACTION PROCESSING - Embodiments of the invention provide a coordinated transaction processing system capable of providing primary-primary high availability as well as minimal response time to queries via utilization of a virtual reply system between partner nodes. One or more global queues ensure peer nodes are synchronized.03-31-2011
20100153952METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR MANAGING BATCH OPERATIONS IN AN ENTERPRISE DATA INTEGRATION PLATFORM ENVIRONMENT - Methods, systems, and computer program products for managing batch operations are provided. A method includes defining a window of time in which a batch will run by entering a batch identifier into a batch table, the batch identifier specifying a primary key of the batch table and being configured as a foreign key to a batch schedule table. The time is entered into the batch schedule table. The method further includes entering extract-transform-load (ETL) information into the batch table. The ETL information includes a workflow identifier, a parameter file identifier, and a location in which the workflow resides. The method includes retrieving the workflow from memory via the workflow identifier and location, retrieving the parameter file, and processing the batch according to the process, workflow, and parameter file.06-17-2010
20100131953Method and System for Hardware Feedback in Transactional Memory - Multi-threaded, transactional memory systems may allow concurrent execution of critical sections as speculative transactions. These transactions may abort due to contention among threads. Hardware feedback mechanisms may detect information about aborts and provide that information to software, hardware, or hybrid software/hardware contention management mechanisms. For example, they may detect occurrences of transactional aborts or conditions that may result in transactional aborts, and may update local readable registers or other storage entities (e.g., performance counters) with relevant contention information. This information may include identifying data (e.g., information outlining abort relationships between the processor and other specific physical or logical processors) and/or tallied data (e.g., values of event counters reflecting the number of aborted attempts by the current thread or the resources consumed by those attempts). This contention information may be accessible by contention management mechanisms to inform contention management decisions (e.g. whether to revert transactions to mutual exclusion, delay retries, etc.).05-27-2010
20110078687SYSTEM AND METHOD FOR SUPPORTING RESOURCE ENLISTMENT SYNCHRONIZATION - A system uses a transaction manager for supporting resource enlistment synchronization on an application server with a plurality of threads. This system also includes a plurality of wrapper objects, each of which wraps a resource object associated with the application server. Upon receiving a request from a thread to enlist a resource object in a transaction, the transaction manager first checks with the wrapper object that wraps the resource object to see if there is a lock being held on the resource object by another said thread in another said transaction. If there is a lock, the transaction manager allows the thread to wait and signals the thread once the lock is freed by another said thread in another said transaction. Otherwise, the transaction manager grants a lock to the thread and holds the lock until an owner of the thread delists the resource object.03-31-2011
20090031310System and Method for Executing Nested Atomic Blocks Using Split Hardware Transactions - Split hardware transaction techniques may support execution of serial and parallel nesting of code within an atomic block to an arbitrary nesting depth. An atomic block including child code sequences nested within a parent code sequence may be executed using separate hardware transactions for each child, but the execution of the parent code sequence, the child code sequences, and other code within the atomic block may appear to have been executed as a single transaction. If a child transaction fails, it may be retried without retrying the parent code sequence or other child code sequences. Before a child transaction is executed, a determination of memory consistency may be made. If a memory inconsistency is detected, the child transaction may be retried or control may be returned to its parent. Memory inconsistencies between parallel child transactions may be resolved by serializing their execution before retrying at least one of them.01-29-2009
20090204969TRANSACTIONAL MEMORY WITH DYNAMIC SEPARATION - Strong semantics are provided to programs that are correctly synchronized in their use of transactions by using dynamic separation of objects that are accessed in transactions from those accessed outside transactions. At run-time, operations are performed to identify transitions between these protected and unprotected modes of access. Dynamic separation permits a range of hardware-based and software-based implementations which allow non-conflicting transactions to execute and commit in parallel. A run-time checking tool, analogous to a data-race detector, may be provided to test dynamic separation of transacted data and non-transacted data. Dynamic separation may be used in an asynchronous I/O library.08-13-2009
20090313628DYNAMICALLY BATCHING REMOTE OBJECT MODEL COMMANDS - A client-server architecture provides mechanisms to assist in minimizing round trips between a client and server. The architecture exposes an object model for client use that is structured similarly to the server based object model. The client batches commands and then determines when to execute the batched commands on the server. Proxy objects act as proxies for objects and serve as a way to suggest additional data retrieval operations for objects which have not been retrieved. Conditional logic and exceptions may be handled on the server without requiring additional roundtrips between the client and server.12-17-2009
20090217274APPARATUS AND METHOD FOR LOG BASED REPLICATION OF DISTRIBUTED TRANSACTIONS USING GLOBALLY ACKNOWLEDGED COMMITS - A computer readable storage medium includes executable instructions to read source node transaction logs to capture transaction data, including local transaction data, global transaction identifiers and participating node data. The global transaction identifiers and participating node data are stored in target node queues. The target node queues are accessed to form global transaction data. Target tables are constructed based upon the local transaction data and the global transaction data.08-27-2009
20090217273CONTROLLING INTERFERENCE IN SHARED MEMORY SYSTEMS USING PARALLELISM-AWARE BATCH SCHEDULING - A “request scheduler” provides techniques for batching and scheduling buffered thread requests for access to shared memory in a general-purpose computer system. Thread-fairness is provided while preventing short- and long-term thread starvation by using “request batching.” Batching periodically groups outstanding requests from a memory request buffer into larger units termed “batches” that have higher priority than all other buffered requests. Each “batch” may include some maximum number of requests for each bank of the shared memory and for some or all concurrent threads. Further, average thread stall times are reduced by using computed thread rankings in scheduling request servicing from the shared memory. In various embodiments, requests from higher ranked threads are prioritized over requests from lower ranked threads. In various embodiments, a parallelism-aware memory access scheduling policy improves intra-thread bank-level parallelism. Further, rank-based request scheduling may be performed with or without batching.08-27-2009
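A much-simplified Python model of the batching and ranking idea; the per-bank/per-thread cap and the ranking rule are assumptions, and real schedulers work on hardware request buffers rather than Python lists:

    from collections import defaultdict

    def form_batch(buffered_requests, max_per_bank_thread=2):
        # Mark at most N outstanding requests per (bank, thread) as the current batch.
        taken = defaultdict(int)
        batch = []
        for thread_id, bank in buffered_requests:
            if taken[(bank, thread_id)] < max_per_bank_thread:
                taken[(bank, thread_id)] += 1
                batch.append((thread_id, bank))
        return batch

    def schedule(batch, thread_rank):
        # Serve the marked batch before anything else, higher-ranked threads first.
        return sorted(batch, key=lambda r: thread_rank[r[0]])

    buffered = [(0, "bank0"), (1, "bank0"), (0, "bank0"), (0, "bank0"), (1, "bank1")]
    print(schedule(form_batch(buffered), thread_rank={0: 1, 1: 0}))
    # [(1, 'bank0'), (1, 'bank1'), (0, 'bank0'), (0, 'bank0')]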
20090217272Method and Computer Program Product for Batch Processing - A method and computer program product for batch processing, the method includes: receiving a representation of a batch job that comprises a business logic portion and a non business logic portion; generating in real time business logic batch transactions in response to the representation of the batch job; and executing business logic batch transactions and online transactions; wherein the executing of business logic batch transactions is responsive to resource information and timing information.08-27-2009
20100058344ACCELERATING A QUIESCENCE PROCESS OF TRANSACTIONAL MEMORY - A method to perform validation of a read set of a transaction is presented. In one embodiment, the method compares a read signature of a transaction to a plurality of write signatures associated with a plurality of transactions. The method determines based on the result of comparison, whether to update a local value of the transaction to a commit value of another transaction from the plurality of the transactions.03-04-2010
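A toy Python version of the signature comparison, with signatures modelled as plain sets (real systems use compact hash signatures that can report false positives):

    def must_synchronize(read_signature, write_signatures):
        # Return the ids of transactions whose writes may overlap our reads; only
        # these need to be waited on or have their commit values adopted.
        return [txn_id for txn_id, writes in write_signatures.items()
                if read_signature & writes]

    reads = {"x", "y"}
    others = {"t1": {"z"}, "t2": {"y", "w"}}
    print(must_synchronize(reads, others))    # ['t2'] - t1 can be ignored entirely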
20100070974SUPPORT APPARATUS FOR INFORMATION PROCESSING APPARATUS, SUPPORT METHOD AND COMPUTER PROGRAM - A support apparatus that supports an information processing apparatus is provided. The support apparatus comprises: a storage unit configured to associate and store settings of an executed job, a leakage amount of a memory leak, and a peak amount of memory; an acquisition unit configured to acquire a job group and settings for executing each job; a prediction unit configured to compare the settings stored in the storage unit and the settings acquired by the acquisition unit, and predict a leakage amount and a peak amount when the job is executed by the information processing apparatus; and a determination unit configured to determine whether there is a job in the job group in which a total value of the predicted peak amount of the job and the predicted leakage amount of a job executed preceding the job exceeds a memory capacity of the information processing apparatus.03-18-2010
20100058345AUTOMATIC AND DYNAMIC DETECTION OF ANOMALOUS TRANSACTIONS - Anomalous transactions are identified and reported. Transactions are monitored from the server at which they are performed. A baseline is dynamically determined for transaction performance based on recent performance data for the transaction. The more recent performance data may be given a greater weight than less recent performance data. Anomalous transactions are then identified based on comparing the actual transaction performance to the baseline for the transaction. An agent installed on an application server performing the transaction receives monitoring data, determines baseline data, and identifies anomalous transactions. For each anomalous transaction, transaction performance data and other data is reported.03-04-2010
20110252426PROCESSING BATCH TRANSACTIONS - A batch data stream, which comprises inputs to a serial batch application program, is received. Batch code from the serial batch application program is translated into parallel code that is executable in parallel by multiple execution units. Checkpoints are applied to the batch data stream that has been received, and data between the checkpoints defines multiple threads. The multiple threads are stored in an input queue that feeds data inputs to multiple execution units. The parallel code is then executed in the multiple execution units by using the multiple threads as inputs.10-13-2011
20110078685SYSTEMS AND METHODS FOR MULTI-LEG TRANSACTION PROCESSING - Embodiments of the invention broadly contemplate systems, methods and arrangements for processing multi-leg transactions. Embodiments of the invention process multi-leg transactions while allowing later-arriving orders to be processed, using a look-ahead mechanism, during the time when an earlier, tradable multi-leg transaction is pending, without violating any relevant timing or exchange rules.03-31-2011
20110035748DATA PROCESSING METHOD, DATA PROCESSING PROGRAM, AND DATA PROCESSING SYSTEM - An execution system executes an update batch according to an update batch execution request from a terminal device and gives a batch execution command to each standby system. Each system stores the content of updated data in its update buffer; and subject to termination of the update batch by each system, the post-update data content is reflected in a database. While the above processing is performed, the execution system and the standby systems accept a reference request from the terminal device; and in a case of “batch not executed” or “batch in execution”, each system searches the database and then returns the pre-update data content to the terminal device; and in a case of “update content being reflected”, each system searches the database or the update buffer and then returns the post-update data content to the terminal device.02-10-2011
20110258630METHODS AND SYSTEMS FOR BATCH PROCESSING IN AN ON-DEMAND SERVICE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for batch processing in an on-demand service environment. For example, in one embodiment, mechanisms include receiving a processing request for a multi-tenant database, in which the processing request specifies processing logic and a processing target group within the multi-tenant database. Such an embodiment further includes dividing or chunking the processing target group into a plurality of processing target sub-groups, queuing the processing request with a batch processing queue for the multi-tenant database among a plurality of previously queued processing requests, and releasing each of the plurality of processing target sub-groups for processing in the multi-tenant database via the processing logic at one or more times specified by the batch processing queue.10-20-2011
20120204180MANAGING JOB EXECUTION - A method, system or computer usable program product for managing jobs scheduled for execution on a target system in which some jobs may spawn additional jobs scheduled for execution on the target system including intercepting jobs scheduled for execution in the target system, determining whether there is resource sufficiency in the target system for executing jobs, responsive to an affirmative determination of resource sufficiency, releasing previously intercepted jobs for execution in the target system, computing a limit of a number of jobs which can be concurrently scheduled by an external system to the target system, and transmitting the computed limit to the external system.08-09-2012
20090254906METHOD AND APPARATUS FOR ENABLING ENTERPRISE PROJECT MANAGEMENT WITH SERVICE ORIENTED RESOURCE AND USING A PROCESS PROFILING FRAMEWORK - A service-oriented architecture for enterprise project management integrates business processes, human resources and project management within an enterprise or across the value chain network. A representation having direction and attributes is provided to show the dependencies between a business value layer and a project-portfolio layer, and between the project-portfolio layer and resources. The representation is mapped to a Web Services representation in UDDI, Web Services interfaces, and Web Services based business processes through rope hyper-linking.10-08-2009
20090241118SYSTEM AND METHOD FOR PROCESSING INTERFACE REQUESTS IN BATCH - A batch messaging management system configured to process incoming request messages and provide reply messages in an efficient manner is disclosed. Instead of treating individual requests as individual transactions, the system reduces processing overhead within a mainframe computing environment by storing requests within a queue, spawning batch jobs according to the queue and processing multiple transactions using batch job processing.09-24-2009
20080320476VARIOUS METHODS AND APPARATUS TO SUPPORT OUTSTANDING REQUESTS TO MULTIPLE TARGETS WHILE MAINTAINING TRANSACTION ORDERING - A method, apparatus, and system are described, which generally relate to an integrated circuit having an interconnect that implements internal controls. The interconnect in an integrated circuit communicates transactions between initiator Intellectual Property (IP) cores and target IP cores coupled to the interconnect. The interconnect implements logic configured to support multiple transactions issued from a first initiator IP core to the multiple target IP cores while maintaining an expected execution order within the transactions. The logic supports a second transaction to be issued from the first initiator IP core to a second target IP core before a first transaction issued from the same first initiator IP core to a first target IP core has completed while ensuring that the first transaction completes before the second transaction and while ensuring an expected execution order within the first transaction and second transaction are maintained. The logic does not include any reorder buffering.12-25-2008
20090187907RECORDING MEDIUM IN WHICH DISTRIBUTED PROCESSING PROGRAM IS STORED, DISTRIBUTED PROCESSING APPARATUS, AND DISTRIBUTED PROCESSING METHOD - A master calculator assigns a series of processing groups to a communicable worker calculator. The master receives information about an execution time and a waiting time from the worker calculator for the series of processing groups. The master acquires the time elapsed between transmitting the processing group to the worker calculator and receiving the execution result of the processing group from the worker calculator. The master calculates the communication time required for communication with the worker calculator on the basis of the information received and the elapsed time acquired. The master calculates the number of processings to be assigned to the worker calculator on the basis of the communication time calculated. The master generates a processing group to be assigned to the worker calculator on the basis of the number of processings calculated, and transmits the processing group generated to the worker calculator.07-23-2009
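The timing arithmetic is straightforward; a worked Python example follows, in which the figures and the final sizing rule (keep communication below a fixed fraction of the work) are assumptions:

    elapsed = 12.0        # seconds between sending a group and receiving its results
    execution = 9.0       # execution time reported by the worker calculator
    waiting = 1.0         # waiting time reported by the worker calculator
    prev_group_size = 30  # processings in the previously assigned group

    communication = elapsed - (execution + waiting)    # 2.0 s attributable to communication
    per_item_exec = execution / prev_group_size        # 0.3 s of execution per processing

    target_overhead = 0.05                             # keep communication under ~5% of the work
    n_next = int(communication / (target_overhead * per_item_exec))
    print(communication, n_next)                       # 2.0 133 -> assign about 133 processings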
20090125907SYSTEM AND METHOD FOR THREAD HANDLING IN MULTITHREADED PARALLEL COMPUTING OF NESTED THREADS - An Explicit Multi-Threading (XMT) system and method is provided for processing multiple spawned threads associated with SPAWN-type commands of an XMT program. The method includes executing a plurality of child threads by a plurality of TCUs including a first TCU executing a child thread which is allocated to it; completing execution of the child thread by the first TCU; announcing that the first TCU is available to execute another child thread; executing by a second TCU a parent child thread that includes a nested spawn-type command for spawning additional child threads of the plurality of child threads, wherein the parent child thread is related in a parent-child relationship to the child threads that are spawned in conjunction with the nested spawn-type command; assigning a thread ID (TID) to each child thread, wherein the TID is unique with respect to the other TIDs; and allocating a new child thread to the first TCU.05-14-2009
20080276239RECOVERY AND RESTART OF A BATCH APPLICATION - A method of operating a data processing system comprises executing a batch application, the executing comprising reading one or more inputs from one or more data files, performing updates on one or more records according to the or each input read from a data file, and issuing a syncpoint when said updates are completed. During the execution of the batch application, syncpoints are periodically issued and checkpoints are less frequently issued. Following detection of a failure of the batch application, the batch application is restarted with the last issued checkpoint, and the batch application is executed by reading one or more inputs from one or more data files, but not performing updates on said records, until the last issued syncpoint is reached.11-06-2008
20080301682Inserting New Transactions Into a Transaction Stream - In an embodiment, a selection of an original transaction is received. In response to the selection of the original transaction, a call stack of the application that sends the original transaction during a learn mode of the application is saved. A specification of a new transaction and a location of the new transaction with respect to the original transaction in a transaction stream is received. During a production mode of the application, the original transaction is received from the application. A determination is made that the call stack of the application during the production mode matches the saved call stack of the application during the learn mode. In response to the determination, the new transaction is inserted at the location into a transaction stream that is sent to a database.12-04-2008
20120311588FAULT TOLERANT BATCH PROCESSING - Among other aspects disclosed are a method and system for processing a batch of input data in a fault tolerant manner. The method includes reading a batch of input data including a plurality of records from one or more data sources and passing the batch through a dataflow graph. The dataflow graph includes two or more nodes representing components connected by links representing flows of data between the components. At least one but fewer than all of the components includes a checkpoint process for an action performed for each of multiple units of work associated with one or more of the records. The checkpoint process includes opening a checkpoint buffer stored in non-volatile memory at the start of processing for the batch.12-06-2012
20110055835AIDING RESOLUTION OF A TRANSACTION - A method for aiding resolution of a transaction for use with a transactional processing system comprising a transaction coordinator and a plurality of grouped and inter-connected resource managers, the method comprising the steps of: in response to a communications failure between the transaction coordinator and a first resource manager causing a transaction to have an in-doubt state, connecting, by the transaction coordinator, to a second resource manager; in response to the connecting step, sending by the transaction coordinator to the second resource manager, a resolve request comprising a resolution for the in-doubt transaction; in response to the resolve request, obtaining at the first resource manager, by the second resource manager, a lock to data associated with the in-doubt transaction; and in response to the obtaining step, determining, by the second resource manager, whether the transaction is associated with the first resource manager.03-03-2011
20100325630PARALLEL NESTED TRANSACTIONS - A system for managing transactions, including a first reference cell associated with a starting value for a first variable, a first thread having an outer atomic transaction including a first instruction to write a first value to the first variable, a second thread, executing in parallel with the first thread, having an inner atomic transaction including a second instruction to write a second value to the first variable, where the inner atomic transaction is nested within the outer atomic transaction, a first value node created by the outer atomic transaction and storing the first value in response to execution of the first instruction, and a second value node created by the inner atomic transaction, storing the second value in response to execution of the second instruction, and having a previous node pointer referencing the first value node.12-23-2010
20090199187CONCURRENT EXECUTION OF MULTIPLE PRIMITIVE COMMANDS IN COMMAND LINE INTERFACE - A method to concurrently execute multiple primitive commands in a command line interface (CLI) is provided. Each of a plurality of signal parameters is designated for each of a plurality of primitive commands. The plurality of primitive commands is encapsulated into a header CLI command. The CLI command is executed.08-06-2009
20100115519METHOD AND SYSTEM FOR SCHEDULING IMAGE ACQUISITION EVENTS BASED ON DYNAMIC PROGRAMMING - A method and system for scheduling events into a set of opportunities is presented. The method includes 1) dividing a path of an image acquisition device so that there is at least a first portion and a second portion at any given moment, wherein each of the first portion and the second portion includes at least one state and the first portion includes a null state in which no image is taken; 2) combining each state in the first portion with at least one state in the second portion one by one to generate a series of updated sequences; and 3) selecting at least one of the updated sequences based on a merit value associated with each of the updated sequences. The invention uses only two groups out of all the relevant opportunities for most calculations, and is especially applicable to situations like satellite pass scheduling.05-06-2010
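A rough Python sketch of the two-portion combination step described above; the state encoding, the merit function, and the keep parameter are assumptions, and the full dynamic programming over a moving path is not reproduced here.

# Illustrative sketch; the state encoding and merit function are assumptions.
def combine_portions(first_states, second_states, merit, keep=1):
    """Combine every state of the first portion (including the null 'no image' state)
    with each state of the second portion and keep the best-scoring sequences."""
    NULL = None
    candidates = []
    for a in [NULL] + list(first_states):
        for b in second_states:
            seq = ([a] if a is not NULL else []) + [b]
            candidates.append((merit(seq), seq))
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [seq for _, seq in candidates[:keep]]

# usage: combine_portions(["target-A"], ["target-B", "target-C"], merit=len, keep=2)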
20110093855MULTI-THREAD REPLICATION ACROSS A NETWORK - A replicated set of data is processed by receiving at a target, from one of a plurality of replication processing threads, a received batch of one or more non-synchronization tasks. It is determined that the received batch comprises a next batch to be performed at the target and the non-synchronization tasks included in the batch are performed in a task order.04-21-2011
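The "next batch to be performed at the target" check can be sketched as follows; the sequence numbering, the OrderedTarget name, and the use of callables for tasks are assumptions rather than anything stated in the application.

# Illustrative sketch; sequence numbers and callable tasks are assumptions.
import threading

class OrderedTarget:
    """Applies batches arriving from many replication threads strictly in sequence order."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}                 # batches that arrived ahead of their turn
        self.cond = threading.Condition()

    def receive(self, seq, tasks):
        with self.cond:
            self.pending[seq] = tasks
            while self.next_seq in self.pending:   # determined to be the next batch to perform
                for task in self.pending.pop(self.next_seq):
                    task()                          # perform non-synchronization tasks in task order
                self.next_seq += 1
                self.cond.notify_all()

# usage: t = OrderedTarget(); t.receive(1, [lambda: print("late")]); t.receive(0, [lambda: print("first")])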
20100100882INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF - When a plurality of objects are subjected to batch processing by an object selection unit and a batch processing execution unit, and an input is made to one object among the plurality of objects, an information processing apparatus controls the processing execution unit to execute processing on that object based on the input, thereby executing a processing of moving all of the selected objects simultaneously with a processing of moving an arbitrary object separately from the other selected objects.04-22-2010
20100023945EARLY ISSUE OF TRANSACTION ID - Early issue of transaction ID is disclosed. An apparatus comprises a decoder to generate a first node ID indicative of the destination of a cache transaction from a caching agent, transaction ID allocation logic coupled to and operating in parallel with the decoder to select a transaction ID (TID) for the transaction based on the first node ID, and a packet creation unit to create a packet that includes the transaction, the first node ID, the TID, and a second node ID corresponding to the requestor.01-28-2010
20090106758File system reliability using journaling on a storage medium - Improving file system reliability in storage mediums after a data-corrupting event using file system journaling is described. In one embodiment, a method includes scanning beyond an active transactions region within the file system journal to locate additional valid transactions for replay to bring the storage medium into a consistent state; the scanning is performed until an invalid transaction is reached.04-23-2009
20080235686Method and apparatus for improving thread posting efficiency in a multiprocessor data processing system - A computer implemented method, a data processing system, and computer usable program code for improving thread posting efficiency in a multiprocessor data processing system are provided. Aspects of the present invention first receive a set of threads from an application. The aspects of the present invention then group the set of threads with a plurality of processors based on a last execution of the set of threads on the plurality of processors to form a plurality of groups. The threads in each group in the plurality of groups are all last executed on a same processor. The aspects of the present invention then wake up the threads in the plurality of groups in any order.09-25-2008
20110154342METHOD AND APPARATUS FOR PROVIDING REMINDERS - A method and computing device for providing task reminder data associated with event data stored in a database is provided. The computing device comprises a processing unit interconnected with a memory device. A list of tasks associated with the event data is received, each respective task in the list of tasks associated with task data. Respective reminder times for each task are determined at the processing unit, such that a display device can be controlled to provide respective representations of the task data, in association with the event data, at respective times substantially similar to each respective reminder time. The list of tasks is stored in the database in association with the event data. Input data is received, indicative that at least one of a start time and an end time of an event associated with the event data has changed to a respective new start time and new end time. For each task in the list of tasks, a given respective reminder time is changed to a new given respective reminder time based on at least one of the new start time and the new end time when the given respective reminder time comprises a time relative to at least one of the start time and the end time.06-23-2011
20110154341SYSTEM AND METHOD FOR A TASK MANAGEMENT LIBRARY TO EXECUTE MAP-REDUCE APPLICATIONS IN A MAP-REDUCE FRAMEWORK - An improved system and method for a task management library to execute map-reduce applications is provided. A map-reduce application may be operably coupled to a task manager library and a map-reduce library on a client device. The task manager library may include a wrapper application programming interface that provides application programming interfaces invoked by a wrapper to parse data input values of the map-reduce application. The task manager library may also include a configurator that extracts data and parameters of the map-reduce application from a configuration file to configure the map-reduce application for execution, a scheduler that determines an execution plan based on input and output data dependencies of mappers and reducers, a launcher that iteratively launches the mappers and reducers according to the execution plan, and a task executor that requests the map-reduce library to invoke execution of mappers on mapper servers and reducers on reducer servers.06-23-2011
20110307893ROLE-BASED AUTOMATION SCRIPTS - A computer performs an action called for by a script. The computer determines how to perform the action based in part on a role template not included in the script and based in part on a role-template extension included in the script.12-15-2011
20090172678Method And System For Controlling The Functionality Of A Transaction Device - A method and system of controlling the functionality of a transaction device, the method includes providing a computing device for accessing an account corresponding to the transaction device. The computing device generates a list of a plurality of transaction functions associated with the transaction device. The method includes providing an option to disable and enable one or more of the transaction functions in response to a user input. In response to the user input disabling one of the transaction functions, an instruction is generated preventing the transaction device from being used for the disabled transaction function.07-02-2009
20090172676Conditional batch buffer execution - A batch computer or batch processor may implement conditional execution at the command level of the batch processor or higher. Conditional execution may involve execution of one batch buffer depending on the results achieved upon execution by another batch buffer.07-02-2009
20090172675Re-Entrant Atomic Signaling - Systems are presented for context switching a requestor engine during an atomic process without corrupting the atomic process. Typically an atomic process cannot be interrupted prior to completion; if it is interrupted, the process terminates abnormally, resulting in a corrupted transaction. The systems presented allow a controlled interruption of an atomic process, with subsequent context switching, without such corruption. The system consists of a context-switchable requestor engine, a context switch controller, a shared resource synchronizer, and a shared resource system. The system may also contain multiple local and remote context-switchable requestor engines as well as multiple local and remote shared resource systems. A method for context switching a requestor engine during an atomic process without corrupting the atomic process is also presented.07-02-2009
20090172674MANAGING THE COMPUTER COLLECTION OF INFORMATION IN AN INFORMATION TECHNOLOGY ENVIRONMENT - The collection of information in an Information Technology environment is dynamically managed. Processing associated with a batch of requests executed to obtain information is adjusted in real-time based on whether responses to the requests executed within an allotted time frame were received. The adjustments may include adjusting the time allotted to execute a batch of requests, adjusting the number of requests in a batch, and/or adjusting the execution priority of the requests within a batch.07-02-2009
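One way to picture the real-time adjustment described above is the small sketch below; the halve-and-grow policy and the parameter names are assumptions, not the claimed adjustment rules.

# Illustrative sketch; the halve-and-grow policy is an assumption, not the claimed rule.
def adjust_batch(batch_size, time_allotted, requests_sent, responses_received):
    """Shrink the batch and allow more time when responses missed the allotted time frame;
    grow the batch again when everything came back in time."""
    if responses_received < requests_sent:        # some requests did not answer in time
        batch_size = max(1, batch_size // 2)
        time_allotted = time_allotted * 1.5
    else:
        batch_size += 1
    return batch_size, time_allotted

# usage: adjust_batch(batch_size=20, time_allotted=5.0, requests_sent=20, responses_received=17)
#        -> (10, 7.5)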
20090172673METHOD AND SYSTEM FOR MANAGING TRANSACTIONS - A method and system for managing transactions is provided. A transaction is initiated on a first data by a first entity with the first data being comprised in a basis memory. A change in the first data is moved as a second data to a transaction memory. The second data is read from the transaction memory if a request for reading the first data is received from the first entity. The first data is read from the basis memory if the request for reading the first data is received from a second entity. The write access of the second entity to the first data is locked.07-02-2009
20120042314METHOD AND DEVICE ENABLING THE EXECUTION OF HETEROGENEOUS TRANSACTION COMPONENTS - The invention especially relates to the execution of at least one transaction in a transaction processing system comprising a transaction-oriented monitor.02-16-2012
20120005680PROCESSING A BATCHED UNIT OF WORK - A batched unit of work is associated with a plurality of messages for use with a data store. A backout count is associated with the number of instances in which work associated with the batched unit of work has been backed out. A backout threshold is associated with the backout count. A commit count is associated with committing the batched unit of work in response to successful commits for a predefined number of the plurality of messages. A checker checks whether the backout count is greater than zero and less than the backout threshold. An override component, responsive to the backout count being greater than zero and less than the backout threshold, overrides the commit count and commits the batched unit of work for a subset of the plurality of messages.01-05-2012
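The checker/override interaction above can be sketched roughly as follows; the function signature and the convention of returning the remaining uncommitted messages are assumptions made for illustration.

# Illustrative sketch; the signature and return convention are assumptions.
def maybe_commit(processed_messages, commit_count, backout_count, backout_threshold, commit):
    """Normally commit only after commit_count messages; if some work has been backed out
    but the backout threshold is not yet reached, override the commit count and commit the
    subset of messages processed so far."""
    if 0 < backout_count < backout_threshold:      # the checker's condition holds
        commit(processed_messages)                 # commit a smaller subset immediately
        return []
    if len(processed_messages) >= commit_count:    # the normal batched commit point
        commit(processed_messages)
        return []
    return processed_messages                      # keep accumulating messages

# usage: maybe_commit(["m1", "m2"], commit_count=10, backout_count=1,
#                     backout_threshold=3, commit=print)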
20120072915Shared Request Grouping in a Computing System - A queuing module is configured to determine the presence of at least one shared request in a request queue, and in the event at least one shared request is determined to be present in the queue: determine the presence of a waiting exclusive request located in the queue after the at least one shared request, and in the event a waiting exclusive request is determined to be located in the queue after the at least one shared request: determine whether grouping a new shared request with the at least one shared request violates a deferral limit of the waiting exclusive request; and, in the event grouping the new shared request with the at least one shared request does not violate the deferral limit of the waiting exclusive request, group the new shared request with the at least one shared request.03-22-2012
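A possible shape for the grouping decision is sketched below; the Request dataclass, the per-request deferred counter, and grouping with the first eligible shared request are assumptions, not the claimed mechanism.

# Illustrative sketch; the Request fields and the deferral counter are assumptions.
from dataclasses import dataclass, field

@dataclass
class Request:
    name: str
    shared: bool
    deferral_limit: int = 0    # exclusive requests: how many times they may be passed over
    deferred: int = 0
    group: list = field(default_factory=list)

def enqueue_shared(queue, new_req):
    """Group a new shared request with an earlier shared request unless that would exceed
    the deferral limit of an exclusive request waiting behind it."""
    for i, req in enumerate(queue):
        if not req.shared:
            continue
        blockers = [r for r in queue[i + 1:] if not r.shared]
        if all(b.deferred + 1 <= b.deferral_limit for b in blockers):
            for b in blockers:
                b.deferred += 1
            req.group.append(new_req)    # served together with the earlier shared request
            return queue
    queue.append(new_req)                # otherwise the new request waits its turn
    return queue

# usage: q = [Request("s1", shared=True), Request("x1", shared=False, deferral_limit=1)]
#        enqueue_shared(q, Request("s2", shared=True))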
20120159492REDUCING PROCESSING OVERHEAD AND STORAGE COST BY BATCHING TASK RECORDS AND CONVERTING TO AUDIT RECORDS - Systems, methods and articles of manufacture are disclosed for processing documents for electronic discovery. A request may be received to perform a task on documents, each document having a distinct document identifier. A task record may be generated to represent the requested task. The task record may include information specific to the requested task. However, the task record need not include any document identifiers. At least one batch record may be generated that includes the document identifier for each of the documents. The task record may be associated with the at least one batch record. The requested task may be performed according to the task record and the at least one batch record. An audit record may be generated for the performed task. The audit record may be associated with the at least one batch record.06-21-2012
20110093854SYSTEM COMPRISING A PLURALITY OF PROCESSING UNITS MAKING IT POSSIBLE TO EXECUTE TASKS IN PARALLEL, BY MIXING THE MODE OF EXECUTION OF CONTROL TYPE AND THE MODE OF EXECUTION OF DATA FLOW TYPE - A system including a plurality of processing units for executing tasks in parallel and a communication network. The processing units are organized into clusters of units, each cluster comprising a local memory. The system includes means for statically allocating tasks to each cluster of units, so that a task of an application is processed by the same cluster of units from one execution to another. Each cluster includes cluster management means for allocating tasks to each of its processing units and space in the local memory for executing them, so that a given task of an application may not be processed by the same processing unit from one execution to another. The cluster management means includes means for managing the tasks, means for managing the processing units, means for managing the local memory and means for managing the communications involving its processing units. The management means operate simultaneously and cooperatively.04-21-2011
20120222032MONITORING REAL-TIME COMPUTING RESOURCES - Techniques are described for enhancing the execution of long-running or complex software application instances and jobs on computing systems. In one embodiment, inadequate system resources and failure of a job execution on the computing system may be predicted. A determination may be made as to whether inadequate resources exist prior to execution of the job, and resource requirements may be monitored to detect in real time if inadequate resources will be encountered during the job execution for cases where, for example, resource availability has unexpectedly decreased. If a resource deficiency is predicted on the executing computer system, the job may be paused and corrective action may be taken or a user may be alerted. The job may resume after the resource deficiency is resolved. Additional embodiments may integrate resource monitoring with the adaptive selection of a computer system or application execution environment based on resource capability predictions and benchmarks.08-30-2012
20120131582System and Method for Real-Time Batch Account Processing - The present disclosure describes a technique for real-time batch account processing. In one aspect, a method includes: (1) receiving, by an account processing center, a marked request for batch processing; (2) caching the marked request; (3) pre-processing sub-requests of a type relating to an account that are in the marked request, including merging operations of a type for processing for the account; and (4) processing the marked request, including the pre-processed sub-requests, to provide a processing result to a corresponding client. The request for batch processing can be directly submitted at the client or submitted by a client through an interface that is provided to the client for submitting a request including the request for batch processing. When submitting the request for batch processing, the client can wait for the processing result online, and obtain the processing result in real time. Further, when receiving the request for batch processing, the account processing center can pre-process it, e.g., by merging operations for the same account, and thus increase the efficiency of batch processing.05-24-2012
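The pre-processing (merging) step can be illustrated in a few lines; treating each sub-request as an (account, amount) pair and summing amounts is an assumption made for the sketch.

# Illustrative sketch; treating each sub-request as an (account, amount) pair is an assumption.
from collections import defaultdict

def merge_sub_requests(sub_requests):
    """Merge same-typed operations on the same account before processing, so each account
    is updated once rather than once per sub-request."""
    merged = defaultdict(int)
    for account, amount in sub_requests:
        merged[account] += amount
    return list(merged.items())

# usage: merge_sub_requests([("acct-1", 10), ("acct-1", -4), ("acct-2", 7)])
#        -> [("acct-1", 6), ("acct-2", 7)]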
20120167098Distributed Transaction Management Using Optimization Of Local Transactions - A computer-implemented method, a computer program product, and a system are provided. A transaction master for each of a plurality of transactions of a database is provided. Each transaction master is configured to communicate with at least one transaction slave to manage execution of a transaction in the plurality of transactions. A transaction token that specifies data to be visible for the transaction on the database is generated. The transaction token includes a transaction identifier for identifying whether the transaction is a committed transaction or an uncommitted transaction. The transaction master is configured to update the transaction token after execution of the transaction. A determination whether the transaction can be executed on the at least one transaction slave without accessing data specified by the transaction token is made. The transaction is executed on the at least one transaction slave using a transaction token stored at the at least one transaction slave.06-28-2012
20120167097ADAPTIVE CHANNEL FOR ALGORITHMS WITH DIFFERENT LATENCY AND PERFORMANCE POINTS - A method for processing requests in a channel can include receiving a first request in the channel, running calculations on the first request in a processing time T06-28-2012
20120137297MODIFYING SCHEDULED EXECUTION OF OBJECT MODIFICATION METHODS ASSOCIATED WITH DATABASE OBJECTS - An original schedule module is configured to receive an original schedule configured to trigger execution of a first original batch of entries including a set of object modification methods and a corresponding set of database objects before triggering execution of a second original batch of entries including a set of object modification methods and a corresponding set of database objects. An analysis module can be configured to determine logic for execution of each entry from the first original batch of entries based on the original schedule. A schedule generator can be configured to define, based on the logic for execution and based on the original schedule, a modified schedule configured to trigger parallel execution of a first modified batch of entries including less than all of the first original batch of entries, and a second modified batch of entries including less than all of the second original batch of entries.05-31-2012
20110185360MULTIPROCESSING TRANSACTION RECOVERY MANAGER - A multiprocessing transaction recovery manager, operable with a transactional application manager and a resource manager, comprises a threadsafety indicator for receiving and storing positive and non-positive threadsafety data of at least one transactional component managed by one of the transactional application manager and the resource manager; a commit protocol component for performing commit processing for the at least one transactional component; and a thread selector responsive to positive threadsafety data for selecting a single thread for the commit processing to be performed by the commit protocol component. The thread selector is further operable to select plural threads for the commit processing to be performed by the commit protocol component responsive to non-positive threadsafety data.07-28-2011
20110185359Determining A Conflict in Accessing Shared Resources Using a Reduced Number of Cycles - Illustrated is a system and method for identifying a potential conflict, using a conflict determination engine, between a first transaction and a second transaction stored in a conflict hash map, the potential conflict based upon a potential accessing of a shared resource common to both the first transaction and the second transaction. The system and method further includes determining an actual conflict, using the conflict determination engine to access the combination of the conflict hash map and the read set hash map, between the first transaction and the second transaction, where a time stamp value of only selected shared locations has changed relative to a previous time stamp value, the time stamp value stored in the read set hash map and accessed using the first transaction.07-28-2011
20100186014DATA MOVER FOR COMPUTER SYSTEM - In a computer system with a disk array that has physical storage devices arranged as logical storage units and is capable of carrying out hardware storage operations on a per logical storage unit basis, data movement operations can be carried out on a per-file basis. A data mover software component for use in a computer or storage system enables cloning and initialization of data to provide high data throughput without moving the data between the kernel and application levels.07-22-2010
20120174109PROCESSING A BATCHED UNIT OF WORK - A batched unit of work is associated with a plurality of messages for use with a data store. A backout count is associated with the number of instances in which work associated with the batched unit of work has been backed out. A backout threshold is associated with the backout count. A commit count is associated with committing the batched unit of work in response to successful commits for a predefined number of the plurality of messages. A checker checks whether the backout count is greater than zero and less than the backout threshold. An override component, responsive to the backout count being greater than zero and less than the backout threshold, overrides the commit count and commits the batched unit of work for a subset of the plurality of messages.07-05-2012
20090125906METHODS AND APPARATUS TO EXECUTE AN AUXILIARY RECIPE AND A BATCH RECIPE ASSOCIATED WITH A PROCESS CONTROL SYSTEM - Example methods and apparatus to execute an auxiliary recipe and a batch recipe are disclosed. A disclosed example method involves executing a first recipe, and before completion of execution of the first recipe, receiving an auxiliary recipe. The example method also involves determining whether the first recipe has reached an entry point at which the auxiliary recipe can be executed. The auxiliary recipe is then executed in response to determining that the first recipe has reached the entry point.05-14-2009
20120284723TRANSACTIONAL UPDATING IN DYNAMIC DISTRIBUTED WORKLOADS - A workload manager is operable with a distributed transaction processor having a plurality of processing regions and comprises: a transaction initiator region for initiating a transaction; a transaction router component for routing an initiated transaction to one of the plurality of processing regions; an affinity controller component for restricting transaction routing operations to maintain affinities; the affinity controller component characterised in comprising a unit of work affinity component operable with a resource manager at the one of the plurality of processing regions to activate an affinity responsive to completion of a recoverable data operation at the one of the plurality of processing regions.11-08-2012
20120180053CALL STACK AGGREGATION AND DISPLAY - A call stack aggregation mechanism aggregates call stacks from multiple threads of execution and displays the aggregated call stack to a user in a manner that visually distinguishes between the different call stacks in the aggregated call stack. The multiple threads of execution may be on the same computer system or on separate computer systems.07-12-2012
20120284721SYSTEMS AND METHOD FOR DYNAMICALLY THROTTLING TRANSACTIONAL WORKLOADS11-08-2012
20110131579BATCH JOB MULTIPLEX PROCESSING METHOD - A batch job multiplex processing method which solves the problem that a system which performs multiplex processing including parallel processing on plural nodes cannot cope with a sudden increase in the volume of data to be batch-processed using a predetermined value of multiplicity, for example, in securities trading in which the number of transactions may suddenly increase on a particular day. The method dynamically determines the value of multiplicity of processing including parallel processing in execution of a batch job on plural nodes. More specifically, in the method, multiplicity is determined depending on the node status (node performance and workload) and the status of an input file for the batch job.06-02-2011
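A toy version of a dynamic multiplicity calculation in the spirit of the abstract above; the sizing formula, the records_per_task parameter, and the node capacity fields are assumptions, not the patented method.

# Illustrative sketch; the sizing formula is an assumption, not the patented calculation.
def choose_multiplicity(record_count, records_per_task, nodes):
    """Pick how many parallel job instances to start from the day's input volume and the
    nodes' spare capacity, instead of using a fixed multiplicity."""
    wanted = max(1, -(-record_count // records_per_task))   # ceiling division
    spare = sum(max(0, n["max_tasks"] - n["running_tasks"]) for n in nodes)
    return min(wanted, max(1, spare))

# usage: choose_multiplicity(1_000_000, 50_000,
#                            [{"max_tasks": 8, "running_tasks": 3},
#                             {"max_tasks": 8, "running_tasks": 6}])
#        -> 7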
20110055837HYBRID HARDWARE AND SOFTWARE IMPLEMENTATION OF TRANSACTIONAL MEMORY ACCESS - Embodiments of the invention relate to a hybrid hardware and software implementation of transactional memory accesses in a computer system. A processor including a transactional cache and a regular cache is utilized in a computer system that includes a policy manager to select one of a first mode (a hardware mode) or a second mode (a software mode) to implement transactional memory accesses. In the hardware mode the transactional cache is utilized to perform read and write memory operations and in the software mode the regular cache is utilized to perform read and write memory operations.03-03-2011
20120102493ORDERED SCHEDULING OF SUSPENDED PROCESSES BASED ON RESUMPTION EVENTS - A method includes receiving a plurality of resumption events associated with a plurality of suspended processes. Each resumption event is associated with a suspended process. Each resumption event also includes an execution time and a resumption time window. The method includes determining resumption deadlines for the suspended processes and determining a resumption order based on the resumption deadlines. The resumption deadline for a suspended process is based on the execution time and the resumption time window of the corresponding resumption event. The suspended processes are scheduled for execution in accordance with the resumption order.04-26-2012
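The deadline-based ordering can be sketched briefly; taking the deadline as the window end minus the execution time is an assumption about how the two fields of a resumption event combine.

# Illustrative sketch; 'deadline = window end minus execution time' is an assumption
# about how the two fields of a resumption event combine.
def resumption_order(events):
    """Order suspended processes so that each can still finish inside its resumption window."""
    def deadline(ev):
        return ev["window_end"] - ev["execution_time"]   # latest moment the process may resume
    return sorted(events, key=deadline)

# usage: resumption_order([{"pid": 1, "execution_time": 5, "window_end": 20},
#                          {"pid": 2, "execution_time": 2, "window_end": 10}])
#        -> process 2 (deadline 8) is scheduled before process 1 (deadline 15)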
20100153953UNIFIED OPTIMISTIC AND PESSIMISTIC CONCURRENCY CONTROL FOR A SOFTWARE TRANSACTIONAL MEMORY (STM) SYSTEM - A method and apparatus for unified concurrency control in a Software Transactional Memory (STM) is herein described. A transaction record associated with a memory address referenced by a transactional memory access operation includes optimistic and pessimistic concurrency control fields. Access barriers and other transactional operations/functions are utilized to maintain both fields of the transaction record, appropriately. Consequently, concurrent execution of optimistic and pessimistic transactions is enabled.06-17-2010
20120151488Measuring Transaction Performance Across Application Asynchronous Flows - A mechanism modifies a deployment descriptor of each application component including at least one producer application component or consumer application component, by adding, for each producer application component or consumer application component, an application component identifier, a producer or consumer type, and a recipient identifier of a recipient the application component uses. Responsive to determining a match exists and the given application component is of producer type, the application server virtual machine logs an identifier of a recipient containing a message sent by the given application component, a correlation identifier of the given application component, and an execution start time. Responsive to determining a match exists and the given application component is of consumer type, the application server virtual machine logs an identifier of the recipient resource containing a message processed by the given application component, a correlation identifier of the given application component, and an execution end time.06-14-2012
20110161959 Batch Job Flow Management - Systems and methods for improved batch flow management are described. At least some embodiments include a computer system for managing a job flow including a memory storing a plurality of batch queue jobs grouped into Services each including a job and a predecessor job. A time difference is the difference between a scheduled job start time and an estimated predecessor job end time. Jobs with a preceding time gap include jobs immediately preceded only by non-zero time differences. The job start depends upon the predecessor job completion. The computer system further includes a processing unit that identifies jobs preceded by a time gap, selects one of the Services, and traverses in reverse chronological order a critical path of dependent jobs within the Service until a latest job with a preceding time gap is identified or at least those jobs along the critical path preceded by another job are traversed.06-30-2011
20130024863SYSTEM AND METHOD FOR PROVIDING DYNAMIC TRANSACTION OPTIMIZATIONS - A system and method for providing dynamic transaction optimizations, such as dynamic XA transaction optimizations. In accordance with an embodiment, the system enables monitoring of transactional behavior in an application during runtime, in order to provide a feedback loop. The application/transaction information in the feedback loop can be analyzed by a transaction manager to determine an indication as to whether a particular optimization, such as an isSameRM optimization, will provide a benefit or not. The optimization can then be applied accordingly. In accordance with various embodiments, such determination can be made transparently, so that its enablement is not detectable to, e.g., an end-application, or a system administrator, even though the distribution and type of XA calls may be detected through system monitoring. The feature can be used to improve the performance of transaction processing in a transaction-based system.01-24-2013
20080244583CONFLICTING SUB-PROCESS IDENTIFICATION METHOD, APPARATUS AND COMPUTER PROGRAM - A technique for identifying conflicting sub-processes easily in a computer system that processes a plurality of transactions in parallel is provided.10-02-2008
20110246993System and Method for Executing a Transaction Using Parallel Co-Transactions - The transactional memory system described herein may implement parallel co-transactions that access a shared memory such that at most one of the co-transactions in a set will succeed and all others will fail (e.g., be aborted). Co-transactions may improve the performance of programs that use transactional memory by attempting to perform the same high-level operation using multiple algorithmic approaches, transactional memory implementations and/or speculation options in parallel, and allowing only the first to complete to commit its results. If none of the co-transactions succeeds, one or more may be retried, possibly using a different approach and/or transactional memory implementation. The at-most-one property may be managed through the use of a shared “done” flag. Conflicts between co-transactions in a set and accesses made by transactions or activities outside the set may be managed using lazy write ownership acquisition and/or a priority-based approach. Each co-transaction may execute on a different processor resource.10-06-2011
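The at-most-one-winner behaviour can be approximated with ordinary threads; the sketch below uses a shared Event as the "done" flag, does not abort losing attempts in flight, and uses no transactional memory, so it only illustrates the race-to-commit idea.

# Illustrative sketch using ordinary threads; losing attempts are not aborted in flight
# and no transactional memory is involved, so only the race-to-commit idea is shown.
import threading

def run_co_transactions(approaches, *args):
    """Run several approaches to the same operation in parallel; only the first to finish
    commits its result via the shared 'done' flag, the rest discard theirs."""
    done = threading.Event()
    result = {}
    lock = threading.Lock()

    def attempt(fn):
        value = fn(*args)
        with lock:
            if not done.is_set():       # at most one co-transaction commits
                result["value"] = value
                done.set()

    threads = [threading.Thread(target=attempt, args=(fn,)) for fn in approaches]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result.get("value")

# usage: run_co_transactions([sorted, lambda xs: sorted(xs, reverse=True)[::-1]], [3, 1, 2])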
20080222639Method and System Configured for Facilitating Management of International Trade Receivables Transactions - A receivables transaction management platform is configured for facilitating management of international trade receivables transactions. The platform includes a task manager layer and a platform functionality layer. The task manager layer is configured for facilitating management of transaction information workflow tasks and export receivables tasks. The platform functionality layer is accessible by at least a portion of the managers and is configured for enabling facilitation of the transaction information workflow tasks and the export receivables tasks. Managing the transaction information workflow tasks and export receivables tasks includes facilitating preparation of a document and data portfolio required for settlement of an international trade receivables transaction, facilitating electronic submission of the document and data portfolio to a designated recipient and facilitating acceptance of the document and data portfolio. The platform functional components are configured for enabling user workflow functionality, data mapping functionality, data analysis functionality, data storage functionality and third party access functionality.09-11-2008
20130179889MANAGING JOB EXECUTION - A method for managing jobs scheduled for execution on a target system, in which some jobs may spawn additional jobs scheduled for execution on the target system, includes intercepting jobs scheduled for execution in the target system, determining whether there is resource sufficiency in the target system for executing jobs, releasing previously intercepted jobs for execution in the target system in response to an affirmative determination of resource sufficiency, computing a limit on the number of jobs which can be concurrently scheduled by an external system to the target system, and transmitting the computed limit to the external system.07-11-2013
20130179888Application Load Balancing Utility - Methods, computer readable media, and apparatuses for balancing the number of transaction requests with the number of applications running and processing information for those transaction requests are presented. According to one or more aspects, a message queue receives one or more messages, each including a transaction request, from a computing device. The message queue sends a trigger message to a trigger queue. The load balancing utility monitors the number of messages in the message queue and determines a number of transaction requests to process and starts a number of additional applications to process the additional transaction requests. The applications process the transaction requests and send a response for each of the transaction requests to the message queue. The message queue sends the response back to the computing device.07-11-2013
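The scaling decision made by such a load balancing utility can be pictured with a one-function sketch; the messages_per_app ratio and the max_apps ceiling are assumptions.

# Illustrative sketch; the messages-per-application ratio and the ceiling are assumptions.
def applications_to_start(queue_depth, running_apps, messages_per_app=50, max_apps=20):
    """Decide how many additional applications to start so the queued transaction
    requests can be drained without exceeding a configured ceiling."""
    wanted = min(max_apps, max(1, -(-queue_depth // messages_per_app)))   # ceiling division
    return max(0, wanted - running_apps)

# usage: applications_to_start(queue_depth=430, running_apps=3)  -> 6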
20130104131LOAD CONTROL DEVICE - A load control device 04-25-2013
20130132960USB REDIRECTION FOR READ TRANSACTIONS - Methods and systems for conducting a transaction between a virtual USB device driver and a USB device are provided. A virtual USB manager of a hypervisor receives one or more data packets from a client. The virtual USB manager stores the one or more data packets in a buffer. The virtual USB manager dequeues a data packet from the buffer. The virtual USB manager transmits the data packet to the virtual USB device driver for processing.05-23-2013
20130145371BATCH PROCESSING OF BUSINESS OBJECTS - A service consumer may define batch jobs (batch containers) in which business object methods can be invoked on business object instances. The invocations may be recorded. The service consumer may trigger batch execution to cause the business object instances to be modified in accordance with the recorded invocations. The batch job can be executed as a single transaction in a single process. The batch job can be partitioned into multiple transactions and processed by respective multiple processes.06-06-2013
20080209421SYSTEM AND METHOD FOR SUSPENDING TRANSACTIONS BEING EXECUTED ON DATABASES - A database management system managing one or more databases to suspend access to at least one selected database by one or more processes or applications (e.g., message processing programs, batch messaging programs, etc.). In some instances, the one or more databases may include one or more IMS databases. Access to the at least one selected database may be suspended to enable one or more operations to be performed on the at least one selected database by the database management system and/or an outside entity (e.g., a user, an external application, etc.). For example, the one or more operations may include an imaging operation, a loading operation, an unloading operation, a start operation, a stop operation, and/or other operations. In some instances, access to the at least one selected database may be suspended without canceling transactions being executed by the one or more processes or applications on the selected at least one database.08-28-2008
20080201712Method and System for Concurrent Message Processing - A method and system are provided for concurrent message processing. The system includes: an input queue capable of receiving multiple messages in a given order; an intermediary for processing the messages; and an output queue for releasing the messages from the intermediary. Means are provided for retrieving a message from an input queue for processing at the intermediary and starting a transaction under which the message is to be processed. The intermediate logic processes the transactions in parallel and a transaction management means ensures that the messages are released to the output queue in the order of the messages in the input queue.08-21-2008
20100287554PROCESSING SERIALIZED TRANSACTIONS IN PARALLEL WHILE PRESERVING TRANSACTION INTEGRITY - A method, system, and apparatus are disclosed for processing serialized transactions in parallel while preserving transaction integrity. The method includes receiving a transaction comprising at least two keys and accessing a serialization-independent key (“SI-Key”) and a serialization-dependent key (“SD-Key”) from the transaction. A value for the SI-Key identifies the transaction as independent of transactions having a different value for the SI-Key. Furthermore, a value for the SD-Key governs a transaction execution order for each transaction having a SI-Key value that matches the SI-Key value associated with the SD-Key value. The method also includes assigning the transaction to an execution group based on a value for the SI-Key. The method also includes scheduling the one or more transactions in the execution group in an order defined by the SD-Key. The execution group may execute in parallel with one or more additional execution groups.11-11-2010
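The SI-Key/SD-Key scheduling rule lends itself to a compact sketch; representing each transaction as a dict with si_key and sd_key fields is an assumption made for illustration.

# Illustrative sketch; representing a transaction as a dict with 'si_key' and 'sd_key'
# fields is an assumption made for illustration.
from collections import defaultdict

def build_execution_groups(transactions):
    """Transactions with different SI-Key values are independent and may run in parallel;
    within one SI-Key group the SD-Key fixes the execution order."""
    groups = defaultdict(list)
    for txn in transactions:
        groups[txn["si_key"]].append(txn)                   # assign to its execution group
    for si_key in groups:
        groups[si_key].sort(key=lambda t: t["sd_key"])      # preserve the serialization order
    return groups

# usage: build_execution_groups([{"si_key": "acct-7", "sd_key": 2, "op": "debit"},
#                                {"si_key": "acct-7", "sd_key": 1, "op": "open"},
#                                {"si_key": "acct-9", "sd_key": 1, "op": "credit"}])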
20100287553SYSTEM, METHOD, AND SOFTWARE FOR CONTROLLED INTERRUPTION OF BATCH JOB PROCESSING - This disclosure provides various embodiments of software, systems, and techniques for controlled interruption of batch job processing. In one instance, a tangible computer readable medium stores instructions for managing batch jobs, where the instructions are operable when executed by a processor to identify an interruption event associated with a batch job queue. The instructions trigger an interruption of an executing batch job within the job queue such that the executed portion of the job is marked by a restart point embedded within the executable code. The instructions then restart the interrupted batch job at the restart point.11-11-2010
20120284722METHOD FOR DYNAMICALLY THROTTLING TRANSACTIONAL WORKLOADS11-08-2012
20120284720HARDWARE ASSISTED SCHEDULING IN COMPUTER SYSTEM - Apparatus and methods for hardware assisted scheduling of software tasks in a computer system are disclosed. For example, a computer system comprises a first pool for maintaining a set of executable software threads, a first scheduler, a second pool for maintaining a set of active software threads, and a second scheduler. The first scheduler assigns a subset of the set of executable software threads to the set of active software threads and the second scheduler dispatches one or more threads from the set of active software threads to a set of hardware threads for execution. In one embodiment, the first scheduler is implemented as part of the operating system of the computer system, and the second scheduler is implemented in hardware.11-08-2012
20120284719DISTRIBUTED MULTI-PHASE BATCH JOB PROCESSING - A distributed job-processing environment including a server, or servers, capable of receiving and processing user-submitted job queries for data sets on backend storage servers. The server identifies computational tasks to be completed on the job as well as a time frame to complete some of the computational tasks. Computational tasks may include, without limitation, preprocessing, parsing, importing, verifying dependencies, retrieving relevant metadata, checking syntax and semantics, optimizing, compiling, and running. The server performs the computational tasks, and once the time frame expires, a message is transmitted to the user indicating which tasks have been completed. The rest of the computational tasks are subsequently performed, and eventually, job results are transmitted to the user.11-08-2012
20120030679Resource Allocator With Knowledge-Based Optimization - An automated resource allocation technique for scheduling a batch computer job in a multi-computer system environment. According to example embodiments, resource allocation processing may be performed when receiving a batch computer job that needs to be run by a software application executable on more than one computing system in the multi-computer system environment. The job may be submitted for pre-processing analysis by the software application. A pre-processing analysis result comprising job evaluation information may be received from the software application and the result may be evaluated to select a computing system in the multi-computer system environment that is capable of executing the application to run the job. The job may be submitted to the selected computing system to have the software application run the job to completion.02-02-2012
20120030678Method and Apparatus for Tracking Documents - A method and apparatus are provided for tracking documents. The documents are tracked by simultaneously monitoring each document's electronic processing status and physical location. Determinations are made whether specific combinations of electronic processing states and physical locations are valid and whether specific movements of documents are permitted. Invalid combinations or movements are reported to a reporting station. The preparation of batches of documents prior to scanning may be monitored and operator metrics related to the batch prep process may be tracked. Exception documents rejected during document processing may be monitored to enable retrieval of such documents.02-02-2012
20130198749SPECULATIVE THREAD EXECUTION WITH HARDWARE TRANSACTIONAL MEMORY - In an embodiment, if a self thread has more than one conflict, a transaction of the self thread is aborted and restarted. If the self thread has only one conflict and an enemy thread of the self thread has more than one conflict, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread and the enemy thread only conflicts with the self thread and the self thread has a key that has a higher priority than a key of the enemy thread, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread, the enemy thread only conflicts with the self thread, and the self thread has a key that has a lower priority than the key of the enemy thread, the transaction of the self thread is aborted.08-01-2013
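The conflict-resolution rule in the abstract above maps naturally onto a small decision function; the conflict-set bookkeeping and the convention that a numerically lower key means higher priority are assumptions.

# Illustrative sketch of the decision rule above; the conflict-set bookkeeping and the
# convention that a numerically lower key means higher priority are assumptions.
def resolve_self_transaction(self_conflicts, enemy_conflicts, self_key, enemy_key):
    """Return 'abort' or 'commit' for the self thread's speculative transaction."""
    if len(self_conflicts) > 1:
        return "abort"                    # more than one conflict: abort and restart
    if len(enemy_conflicts) > 1:
        return "commit"                   # the enemy thread is the one that must retry
    # mutual single conflict: the keys break the tie
    return "commit" if self_key < enemy_key else "abort"

# usage: resolve_self_transaction({"enemy"}, {"self"}, self_key=3, enemy_key=7)  -> 'commit'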

Patent applications in class Batch or transaction processing