Patent application number | Description | Published |
20080209179 | Low-Impact Performance Sampling Within a Massively Parallel Computer - An apparatus, program product and method sample, at different times, nodes that are performing similar work. Performance data associated with first and second node subsets performing the similar work are sampled at different times, e.g., in a round-robin fashion, and in accordance with a given sampling rate. The performance data is analyzed. Nodes whose performance suffers as a result of a sampling operation may be identified and removed from a subsequent operation. | 08-28-2008 |
20080209278 | INITIALIZING DIAGNOSTIC FUNCTIONS WHEN SPECIFIED RUN-TIME ERROR CRITERIA ARE SATISFIED - A run-time monitor allows defining sets of run-time error criteria and corresponding diagnostic action to take when the run-time error criteria is satisfied. One way to define the run-time error criteria is to take a baseline measurement of run-time errors that occur during normal processing conditions. A run-time error criteria may then be defined that is based on the baseline measurement. In this manner, a rate of run-time errors that normally occur may be ignored, while a rate of run-time errors in excess of the run-time error criteria could automatically initiate diagnostic action. In this manner, the ability of a programmer to debug run-time errors is significantly enhanced. | 08-28-2008 |
20080215532 | DATABASE OPTIMIZATION THROUGH SCHEMA MODIFICATION - A database optimizer collects statistics regarding applications accessing a database, and makes one or more changes to the database schema to optimize performance according to the collected statistics. In a first embodiment, the optimizer detects when a certain type of application accesses the database a percentage of time that exceeds a predefined threshold level, and if the data in the database is stored in a less-than-optimal format for the application, the data type of one or more columns in the database is changed to a more optimal format for the application. In a second embodiment, the optimizer detects when one type of application accesses a column a percentage of time that exceeds a first predefined threshold level and is less than a second predefined threshold level, and creates a new column in the database so the data is present in both formats. | 09-04-2008 |
20080215537 | DATA ORDERING FOR DERIVED COLUMNS IN A DATABASE SYSTEM - Optimized query execution is disclosed for queries that return data sorted by a derived column. The query optimizer is used to determine if the data is already sorted or if existing database metadata can be utilized to provide the sort. The optimizer will examine the query field being derived and attempt to determine if there are existing index structures that can be used to sort the data. The optimizer can also look at the values of the data in the column to determine what existing structures can be used to sort the data. | 09-04-2008 |
20080215538 | DATA ORDERING FOR DERIVED COLUMNS IN A DATABASE SYSTEM - Optimized query execution is disclosed for queries that return data sorted by a derived column. The query optimizer is used to determine if the data is already sorted or if existing database metadata can be utilized to provide the sort. The optimizer will examine the query field being derived and attempt to determine if there are existing index structures that can be used to sort the data. The optimizer can also look at the values of the data in the column to determine what existing structures can be used to sort the data. | 09-04-2008 |
20080215539 | DATA ORDERING FOR DERIVED COLUMNS IN A DATABASE SYSTEM - Optimized query execution is disclosed for queries that return data sorted by a derived column. The query optimizer is used to determine if the data is already sorted or if existing database metadata can be utilized to provide the sort. The optimizer will examine the query field being derived and attempt to determine if there are existing index structures that can be used to sort the data. The optimizer can also look at the values of the data in the column to determine what existing structures can be used to sort the data. | 09-04-2008 |
20080216054 | Storing and Restoring Snapshots of a Computer Process - A method to trace a variable or other expression through a computer program is disclosed. A user determines the variable and the conditions upon which activity of the variable will be monitored. As a result of the invention, every time that variable is referenced in a memory operation or other activity by the program and the conditions set forth by the user are satisfied, the state of that variable is saved as a snapshot without interrupting or stopping execution of the program. The snapshots are accumulated in a history table. The history table can be retrieved and the state of the variable in any given snapshot can be restored. Other variables and expressions can be attached to the trigger variable and the states of these other variables at the time of the activity of the trigger variable may also be saved in the snapshot. The method may be incorporated into a program as a tracing device or a program product separate from the logical processing device executing the program. | 09-04-2008 |
20080275970 | AUTONOMICALLY ADJUSTING CONFIGURATION PARAMETERS FOR A SERVER WHEN A DIFFERENT SERVER FAILS - A load balancer detects a server failure, and sends a failure notification message to the remaining servers. In response, one or more of the remaining servers may autonomically adjust their configuration parameters, thereby allowing the remaining servers to better handle the increased load caused by the server failure. One or more of the servers may also include a performance measurement mechanism that measures performance before and after an autonomic adjustment of the configuration parameters to determine whether and how much the autonomic adjustments improved the system performance. In this manner server computer systems may autonomically compensate for the failure of another server computer system that was sharing the workload. | 11-06-2008 |
20080276054 | MONITORING PERFORMANCE OF A STORAGE AREA NETWORK - A performance monitor reports SAN performance so that issues within the SAN are not masked from the client. Accesses to the SAN may be grouped into the categories of SAN logical or SAN physical. In one specific embodiment, the ranges of service times for accesses to the SAN are determined by monitoring service times of accesses to the SAN from the client perspective. In another specific embodiment, the ranges of service times for the SAN are determined by the SAN returning data with each request that indicates the service time from the SAN perspective. This allows reporting not only SAN logical and SAN physical accesses, but also allows reporting SAN service time. By specifying SAN service time, the client is able to better determine network delays. In yet another embodiment, information is returned by the SAN to indicate whether the access is SAN logical or SAN physical. | 11-06-2008 |
20080276118 | AUTONOMICALLY ADJUSTING CONFIGURATION PARAMETERS FOR A SERVER WHEN A DIFFERENT SERVER FAILS - A load balancer detects a server failure, and sends a failure notification message to the remaining servers. In response, one or more of the remaining servers may autonomically adjust their configuration parameters, thereby allowing the remaining servers to better handle the increased load caused by the server failure. One or more of the servers may also include a performance measurement mechanism that measures performance before and after an autonomic adjustment of the configuration parameters to determine whether and how much the autonomic adjustments improved the system performance. In this manner server computer systems may autonomically compensate for the failure of another server computer system that was sharing the workload. | 11-06-2008 |
20080276119 | AUTONOMICALLY ADJUSTING CONFIGURATION PARAMETERS FOR A SERVER WHEN A DIFFERENT SERVER FAILS - A load balancer detects a server failure, and sends a failure notification message to the remaining servers. In response, one or more of the remaining servers may autonomically adjust their configuration parameters, thereby allowing the remaining servers to better handle the increased load caused by the server failure. One or more of the servers may also include a performance measurement mechanism that measures performance before and after an autonomic adjustment of the configuration parameters to determine whether and how much the autonomic adjustments improved the system performance. In this manner server computer systems may autonomically compensate for the failure of another server computer system that was sharing the workload. | 11-06-2008 |
20080281934 | Assisting the response to an electronic mail message - A method, article of manufacture and apparatus for assisting with an electronic mail (e-mail) response message by providing e-mail messages related to an open e-mail message. Specifically, the method determines whether an available e-mail message is related to the open e-mail message. Available e-mail messages may include unopened, previously opened, or incoming e-mail messages. As such, the user is warned of all relevant e-mail messages before responding with a reply message or a forward message. | 11-13-2008 |
20080282176 | Dynamic web page arrangement - A browser renders a page for display according to user habits. When a user interacts with a page associated with a network address, an entry is made in a file that associates the element on the page of the user interaction with the network address. When the page is visited again, the file is checked to see if any entry exists. If an entry exists and the stored user interaction is still relevant for that page, the page is rendered so that the location the user interacted with is provided at the top of the display, or the element is re-arranged, as in the case of a table, or both re-positioning and re-arranging occurs. Such page rendering reduces the need for the user to scroll through the page to view the desired information. | 11-13-2008 |
20080288942 | Monitoring performance of a logically-partitioned computer - An apparatus, system, and storage medium that, in an embodiment, collect a performance metric of a first partition in a logically-partitioned computer. If the difference between the performance metric and an expected performance metric exceeds a threshold, then a job or another partition is shut down or suspended. The expected performance metric is calculated based on the performance that is expected if the first partition is the only partition. | 11-20-2008 |
20080301802 | Trust-Based Link Access Control - An apparatus, program product and method control access to linked documents on a computer based on a calculated determination of the trustworthiness of such linked documents, so that user navigation to untrusted documents from a document with which such untrusted documents are linked can be deterred. Basing link access control on document trustworthiness permits owners, authors, developers, publishers, etc. of documents, for example, to avoid potential difficulties such as embarrassment, confusion or legal liability as a result of the content of linked-to documents under the control of third parties. | 12-04-2008 |
20090006425 | Method, Apparatus, and Computer Program Product for Dynamically Allocating Space for a Fixed Length Part of a Variable Length Field in a Database Table - An enhanced space allocation mechanism (ESAM) for dynamically allocating space for a fixed length part of variable length fields, such as VARCHAR fields, in database tables. Each record in such a variable length field has a fixed length part, a variable length part, and a pointer to the variable length part. The ESAM determines how much space to allocate based on the data that was historically put into these tables. In one embodiment, a database management system (DBMS) maintains a historical record that includes fields identifying the table, column and application ID, as well as fields that track a count and a total length. For each variable length field in a Structured Query Language (SQL) statement such as CREATE table or ALTER table, the DBMS finds a matching historical record, determines an estimated optimal fixed portion length based on the matching historical record, and sets a space allocation length for the fixed length part of the variable length field based on the estimated optimal fixed portion length. This dynamic space allocation approach is especially advantageous in situations where an empty table will be loaded with a massive amount of data. | 01-01-2009 |
20090007125 | Resource Allocation Based on Anticipated Resource Underutilization in a Logically Partitioned Multi-Processor Environment - A method, apparatus and program product for allocating resources in a logically partitioned multiprocessor environment. Resource usage is monitored in a first logical partition in the logically partitioned multiprocessor environment to predict a future underutilization of a resource in the first logical partition. An application executing in a second logical partition in the logically partitioned multiprocessor environment is configured for execution in the second logical partition with an assumption made that at least a portion of the underutilized resource is allocated to the second logical partition during at least a portion of the predicted future underutilization of the resource. | 01-01-2009 |
20090029717 | NOTIFYING A USER OF A PORTABLE WIRELESS DEVICE - A method, apparatus and system for notifying a user of a portable communication device. In one embodiment, a location of a first portable communication device is determined for a first user and the location of a second portable communication device is determined for a second user. A determination is made as to whether the location of the second portable communication device is within a same region containing the first portable communication device. If the second portable communication device is within the same region as the first portable communication device, then the first user is notified of the presence of the second user. | 01-29-2009 |
20090037372 | CREATING PROFILING INDICES - A database engine and optimizer framework support creation of a series of profiling indices over a column having character string data, such as a traditional “varchar” data type. The profiling indices result in a reduction of the number of records that are searched when searching for a sub-string match within that column. In some embodiments, the series of indices are created over a column that is typically searched using the LIKE predicate or some similar technique; these indices indicate for each record whether certain sub-strings may exist in that record's value in the column. Thus, the indices are used to find the rows that may match one or more portions of the particular term being queried or, in other words, eliminate those records that do not have at least a portion of the term to be matched. The number of records actually retrieved and searched for the query sub-string is thereby reduced. | 02-05-2009 |
20090037512 | MULTI-NODAL COMPRESSION TECHNIQUES FOR AN IN-MEMORY DATABASE - Embodiments of the invention may be used to distribute a database across a plurality of compute nodes of a parallel computing system. That is, embodiments provide a method for creating a fully in-memory database on the parallel computing system. Further, data compression techniques may be used to increase the performance of the in-memory database by compressing some portions of the database to fit within a single node or a logically or physically related group of nodes. | 02-05-2009 |
20090043734 | Dynamic Partial Uncompression of a Database Table - A database dynamic partial uncompression mechanism determines when to dynamically uncompress one or more compressed portions of a database table that also includes uncompressed portions. A query may include an express term that specifies whether or not to skip compressed portions. In addition, a query may include associated information that specifies whether or not to skip compressed portions, and one or more thresholds that may be used to determine if the system is too busy to perform uncompression. A display mechanism may also determine whether or not to display compressed portions. The uncompression may occur at the database server or at a client. The database dynamic partial uncompression mechanism thus performs dynamic uncompression in a way that preferably uncompresses one or more compressed portions of a partially compressed database table only when the compressed portions satisfy a query and/or need to be displayed. | 02-12-2009 |
20090043792 | Partial Compression of a Database Table Based on Historical Information - A database partial compression mechanism compresses only part of a database table based on historical information regarding how the database table has been accessed in the past. The function of the database partial compression mechanism may also be governed by a user-specified partial compression policy. When the historical information indicates a portion of a table is not frequently used, the portion of the table is compressed without compressing other portions of the table. The result is a table that is uncompressed for portions that are accessed often and compressed for portions that are accessed less often. | 02-12-2009 |
20090043793 | Parallel Uncompression of a Partially Compressed Database Table - A multiprocessing uncompression mechanism takes advantage of existing multiprocessing capability within a database to perform dynamic uncompression of portions of a partially compressed database table that satisfy a query using processes that may be executed in parallel. Historical information is gathered for each query. Uncompression advice includes user-specified parameters that determine how the multiprocessing uncompression mechanism functions. The multiprocessing uncompression mechanism processes the historical information and uncompression advice to determine an appropriate task count for performing uncompression in parallel processes. The existing multiprocessing capability within the database then processes the tasks that perform the uncompression in parallel. | 02-12-2009 |
20090083276 | INSERTING DATA INTO AN IN-MEMORY DISTRIBUTED NODAL DATABASE - A method and apparatus loads data to an in-memory database across multiple nodes in a parallel computing system. A database loader uses SQL flags, historical information gained from monitoring prior query execution times and patterns, and node and network configuration to determine how to effectively cluster data attributes across multiple nodes. The database loader may also allow a system administrator to force placement of database structures in particular nodes. | 03-26-2009 |
20090083277 | NODAL DATA NORMALIZATION - Embodiments of the invention may be used to normalize data stored in an in-memory database on a parallel computer system. The data normalization may be used to achieve memory savings, thereby reducing the number of compute nodes required to store an in-memory database. Thus, as a result, faster response times may be achieved when querying the data. In one embodiment, normalization may be performed in a manner to avoid datasets that cross physical or logical boundaries of the compute nodes of a parallel system. | 03-26-2009 |
20090100114 | Preserving a Query Plan Cache - A method, apparatus, and program product are provided for preserving a query plan cache on a backup system having a dataspace containing a copy of data and a copy of a query plan cache from a production system. An update is initiated of at least a portion of the copy of the data on the backup system with a portion of the data on the production system. A merge is initiated of updated query plans in the query plan cache from the production system with corresponding query plans in the copy of the query plan cache on the backup system. Objects are correlated in the updated query plans in the copy of the query plan cache with the updated copy of the data on the backup system. | 04-16-2009 |
20090112792 | Generating Statistics for Optimizing Database Queries Containing User-Defined Functions - Embodiments of the invention provide techniques for generating statistics for optimizing database queries containing user-defined functions (UDFs). In general, the statistics may be generated based on output values produced during past executions of a UDF. The statistics may also be generated based on input values received during past executions of the UDF. Additionally, the statistics may include input and output value pairs, such that a UDF output may be determined based on a UDF input. The generated statistics may be used by a query optimizer to determine an efficient query plan for executing the database query. | 04-30-2009 |
20090112799 | Database Statistics for Optimization of Database Queries Containing User-Defined Functions - Embodiments of the invention provide techniques for generating statistics for optimizing database queries containing user-defined functions (UDFs). In general, the statistics may be generated based on output values produced during past executions of a UDF. The statistics may also be generated based on input values received during past executions of the UDF. Additionally, the statistics may include input and output value pairs, such that a UDF output may be determined based on a UDF input. The generated statistics may be used by a query optimizer to determine an efficient query plan for executing the database query. | 04-30-2009 |
20090112953 | ENHANCED GARBAGE COLLECTION IN A MULTI-NODE ENVIRONMENT - Embodiments of the invention enhance a garbage collection process running on a parallel system or distributed computing environment. Using a garbage collector in such an environment allows a more in-depth analysis to be performed than is possible on other systems. This is because the number of compute nodes present in many parallel systems, and the connections between them, allows the overhead of doing advanced analysis to be spread across the nodes and the results of that analysis to be shared among the nodes. | 04-30-2009 |
20090113438 | OPTIMIZATION OF JOB DISTRIBUTION ON A MULTI-NODE COMPUTER SYSTEM - A method and apparatus optimizes job and data distribution on a multi-node computing system. A job scheduler distributes jobs and data to compute nodes according to priority and other resource attributes to ensure the most critical work is done on the nodes that are quickest to access and with less possibility of node communication failure. In a tree network configuration, the job scheduler distributes critical jobs and data to compute nodes that are located closest to the I/O nodes. Other resource attributes include network utilization, constant data state, and class routing. | 04-30-2009 |
20090125616 | OPTIMIZED PEER-TO-PEER FILE TRANSFERS ON A MULTI-NODE COMPUTER SYSTEM - A method and apparatus performs peer-to-peer file transfers on a High Performance Computing (HPC) cluster such as a Beowulf cluster. A peer-to-peer file tracker (PPFT) allows operating system, application and data files to be moved from a pre-loaded node to another node of the HPC cluster. A peer-to-peer (PTP) client is loaded into the nodes to facilitate PTP file transfers to reduce loading on networks, network switches and file servers to reduce the time needed to load the nodes with these files to increase overall efficiency of the multi-node computing system. The selection of the nodes participating in file transfers can be based on network topology, network utilization, job status and predicted network/computer utilization. This selection can be dynamic, changing during the file transfers as resource conditions change. The policies used to choose resources can be configured by an administrator. | 05-14-2009 |
20090132541 | MANAGING DATABASE RESOURCES USED FOR OPTIMIZING QUERY EXECUTION ON A PARALLEL COMPUTER SYSTEM - Embodiments of the invention may be used to increase query processing parallelism of an in-memory database stored on a parallel computing system. A group of compute nodes each store a portion of data as part of the in-memory database. Further, a pool of compute nodes may be reserved to create copies of data from the compute nodes of the in-memory database as part of query processing. When a query is received for execution, the query may be evaluated to determine whether portions of the in-memory database should be duplicated to allow multiple elements of the query (e.g., multiple query predicates) to be evaluated in parallel. | 05-21-2009 |
20090132609 | REAL TIME DATA REPLICATION FOR QUERY EXECUTION IN A MASSIVELY PARALLEL COMPUTER - Embodiments of the invention may be used to increase query processing parallelism of an in-memory database stored on a parallel computing system. A group of compute nodes each store a portion of data as part of the in-memory database. Further, a pool of compute nodes may be reserved to create copies of data from the compute nodes of the in-memory database as part of query processing. When a query is received for execution, the query may be evaluated to determine whether portions of the in-memory database should be duplicated to allow multiple elements of the query (e.g., multiple query predicates) to be evaluated in parallel. | 05-21-2009 |
20090138764 | Billing Adjustment for Power On Demand - An apparatus, program product and method for determining a cost for using a standby resource that accounts for the cause of the resource's usage. A standby resource, such as a processor, is activated in response to a resource requirement. The cause of the resource requirement is automatically determined. The result of that automatic determination is used to determine a charge indicator for using the standby resource. For instance, performance code associated with a failure may be associated with a charge indicator. A user may later be billed according to the determined charge indicator, i.e., according to their actual use of the standby resource and/or their usage status. | 05-28-2009 |
20090144337 | COMMITMENT CONTROL FOR LESS THAN AN ENTIRE RECORD IN AN IN-MEMORY DATABASE IN A PARALLEL COMPUTER SYSTEM - In a networked computer system that includes multiple interconnected nodes, a commitment control mechanism allows designating certain portions of a record in an in-memory database as mandatory and other portions of the record as secondary, and performs mandatory commitment control once all the mandatory portions are available even if one or more secondary portions are not yet available. The secondary portions may be under separate commitment control that is asynchronous to the commitment control for the mandatory portions, or may be under no commitment control at all. The result is a commitment control mechanism that performs commitment control for portions of a record that are marked mandatory even when one or more of the portions marked secondary are not available. | 06-04-2009 |
20090158276 | DYNAMIC DISTRIBUTION OF NODES ON A MULTI-NODE COMPUTER SYSTEM - A method and apparatus dynamically distribute I/O nodes on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job and then dynamically adjusts the I/O node distribution based on that performance. | 06-18-2009 |
20090182711 | String Searches in a Computer Database - An apparatus and method for a query optimizer improve string searches in a computer database that sequentially search for a string in a database record. The query optimizer optimizes the query to search records of a database from a specified start position other than the beginning of the record. The specified start position of the search may be determined from historical information stored from previous searches. Alternatively, the query optimizer determines the specified start position of the search based on an overriding starting position provided by a system administrator. The query optimizer may also direct that the database record be reorganized to more efficiently search for strings in the record. | 07-16-2009 |
20090204566 | Processing of Deterministic User-Defined Functions Using Multiple Corresponding Hash Tables - A deterministic UDF processing mechanism processes user-defined functions (UDFs) using multiple hash tables. Data access patterns for a UDF are collected, and an appropriate hash table set is then determined for the UDF from the data access patterns. If a UDF accesses some similar columns and some disjoint columns, the similar columns are grouped together, and one or more hash tables are allocated to the similar columns. Disjoint columns are allocated their own hash tables. In addition, the allocation of hash tables may be adjusted based on historical access patterns collected over time. By dynamically allocating and adjusting sets of hash tables to a deterministic UDF, the performance of the UDF is greatly increased. | 08-13-2009 |
20090222859 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR IMPLEMENTING AUTOMATIC UPDATE OF TIME SHIFT CONTENT - A method, apparatus, and computer program product implement automatic update of time shift content. Time sensitive information recorded on a client recording device is automatically updated responsive to updated content becoming available. Updating time sensitive information is enabled by a remote broadcast signal. The remote broadcast signal includes an embedded signal indicating sensitive information content. | 09-03-2009 |
20090300752 | UTILIZING VIRTUAL PRIVATE NETWORKS TO PROVIDE OBJECT LEVEL SECURITY ON A MULTI-NODE COMPUTER SYSTEM - The disclosure herein provides data security on a parallel computer system using virtual private networks connecting the nodes of the system. A mechanism sets up access control data in the nodes that describes a number of security classes. Each security class is associated with a virtual network. Each user on the system is associated with one of the security classes. Each database object to be protected is given an attribute of a security class. Database objects are loaded into the system nodes that match the security class of the database object. When a query executes on the system, the query is sent to a particular class or set of classes such that the query is only seen by those nodes that are authorized by the equivalent security class. In this way, the network is used to isolate data from users that do not have proper authorization to access the data. | 12-03-2009 |
20090307287 | Database Journaling in a Multi-Node Environment - A database spread over multiple nodes allows each node to store a journal recording changes made to the database and also allows a journaling component to manage the memory space available for journaling. Two threshold size values may be specified for the journal. The first threshold value specifies a journal size at which to begin pruning the journal on a given node. A journal pruning algorithm may be used to identify journal entries that may be removed. For example, once a given transaction completes (i.e., commits) the journal entries related to that transaction may be pruned from the journal. The second threshold value specifies the maximum size of the journal. After reaching this size, journal entries may be written to disk instead of the in-memory journal. | 12-10-2009 |
20090307290 | Database Journaling in a Multi-Node Environment - A database spread over multiple nodes allows each node to store a journal recording changes made to the database and also allows a journaling component to manage the memory space available for journaling. Two threshold size values may be specified for the journal. The first threshold value specifies a journal size at which to begin pruning the journal on a given node. A journal pruning algorithm may be used to identify journal entries that may be removed. For example, once a given transaction completes (i.e., commits) the journal entries related to that transaction may be pruned from the journal. The second threshold value specifies the maximum size of the journal. After reaching this size, journal entries may be written to disk instead of the in-memory journal. | 12-10-2009 |
20090307466 | Resource Sharing Techniques in a Parallel Processing Computing System - A method, apparatus, and program product share a resource in a computing system that includes a plurality of computing cores. A request from a second execution context (“EC”) to lock the resource currently locked by a first EC on a first core causes replication of the second EC as a third EC on a third core. The first and third ECs are executed substantially concurrently. When the first EC modifies the resource, the third EC is restarted after the resource has been modified. Alternatively, a first EC is configured in a first core and shadowed as a second EC in a second core. In response to a blocked lock request, the first EC is halted and the second EC continues. After granting a lock, it is determined whether a conflict has occurred, and the first and second ECs are selectively synchronized to each other in response to that determination. | 12-10-2009 |
20090313452 | MANAGEMENT OF PERSISTENT MEMORY IN A MULTI-NODE COMPUTER SYSTEM - A method and apparatus creates and manages persistent memory (PM) in a multi-node computing system. A PM Manager in the service node creates and manages pools of nodes with various sizes of PM. A node manager uses the pools of nodes to load applications to the nodes according to the size of the available PM. The PM Manager can dynamically adjust the size of the PM according to the needs of the applications based on historical use or as determined by a system administrator. The PM Manager works with an operating system kernel on the nodes to provide persistent memory for application data and system metadata. The PM Manager uses the persistent memory to load applications to preserve data from one application to the next. Also, the data preserved in persistent memory may be system metadata such as file system data that will be available to subsequent applications. | 12-17-2009 |
20090320003 | Sharing Compiler Optimizations in a Multi-Node System - Embodiments of the invention enable application programs running across multiple compute nodes of a highly-parallel system to compile source code into native instructions, and subsequently share the optimizations used to compile the source code with other nodes. For example, determining what optimizations to use may consume significant processing power and memory on a node. In cases where multiple nodes exhibit similar characteristics, it is possible that these nodes may use the same set of optimizations when compiling similar pieces of code. Therefore, when one node compiles source code into native instructions, it may share the optimizations used with other similar nodes, thereby removing the burden for the other nodes to figure out which optimizations to use. Thus, while one node may suffer a performance hit for determining the necessary optimizations, other nodes may be saved from this burden by simply using the optimizations provided to them. | 12-24-2009 |
20090320008 | Sharing Compiler Optimizations in a Multi-Node System - Embodiments of the invention enable application programs running across multiple compute nodes of a highly-parallel system to compile source code into native instructions, and subsequently share the optimizations used to compile the source code with other nodes. For example, determining what optimizations to use may consume significant processing power and memory on a node. In cases where multiple nodes exhibit similar characteristics, it is possible that these nodes may use the same set of optimizations when compiling similar pieces of code. Therefore, when one node compiles source code into native instructions, it may share the optimizations used with other similar nodes, thereby removing the burden for the other nodes to figure out which optimizations to use. Thus, while one node may suffer a performance hit for determining the necessary optimizations, other nodes may be saved from this burden by simply using the optimizations provided to them. | 12-24-2009 |
20100186019 | DYNAMIC RESOURCE ADJUSTMENT FOR A DISTRIBUTED PROCESS ON A MULTI-NODE COMPUTER SYSTEM - A method dynamically adjusts the resources available to a processing unit of a distributed computer process executing on a multi-node computer system. The resources for the processing unit are adjusted based on the data other processing units handle or the execution path of code in an upstream or downstream processing unit in the distributed process or application. | 07-22-2010 |
20100205137 | Optimizing Power Consumption and Performance in a Hybrid Computer Environment - A method for optimizing efficiency and power consumption in a hybrid computer system is disclosed. The hybrid computer system may comprise one or more front-end nodes connected to a multi-node computer system. Portions of an application may be offloaded from the front-end nodes to the multi-node computer system. By building historical profiles of the applications running on the multi-node computer system, the system can analyze the trade-offs between power consumption and performance. For example, if running the application on the multi-node computer system cuts the run time by 5% but increases power consumption by 20%, it may be more advantageous to simply run the entire application on the front-end. | 08-12-2010 |
20100205170 | Distribution of Join Operations on a Multi-Node Computer System - A method and apparatus distributes database query joins on a multi-node computing system. In the illustrated examples, a join execution unit utilizes various factors to determine where to best perform the query join. The factors include user controls in a hints record set up by a system user and properties of the system such as database configuration and system resources. The user controls in the hints record include a location flag and a determinicity flag. The properties of the system include the free space on the node and the size of the join, the data traffic on the networks and the data traffic generated by the join, the time to execute the join, and nodes that already have optimized code. The join execution unit also determines whether to use collector nodes to optimize the query join. | 08-12-2010 |
20100205323 | Timestamp Synchronization for Queries to Database Portions in Nodes That Have Independent Clocks in a Parallel Computer System - A parallel computer system has multiple nodes that have independent clocks, where the different nodes may include different database portions that are referenced by a query. A timestamp parameter in a query is synchronized across the different nodes that are referenced by the query to assure the timestamps in the different nodes are consistent with each other notwithstanding the independent clocks used in each node. As a result, a database may be scaled to a parallel computer system with multiple nodes in a way that assures the timestamps for different nodes referenced during a query have identical values. | 08-12-2010 |
20100241881 | Environment Based Node Selection for Work Scheduling in a Parallel Computing System - A method, apparatus, and program product manage scheduling of a plurality of jobs in a parallel computing system of the type that includes a plurality of computing nodes and is disposed in a data center. The plurality of jobs are scheduled for execution on a group of computing nodes from the plurality of computing nodes based on the physical locations of the plurality of computing nodes in the data center. The group of computing nodes is further selected so as to distribute at least one of a heat load and an energy load within the data center. The plurality of jobs may be additionally scheduled based upon an estimated processing requirement for each job of the plurality of jobs. | 09-23-2010 |
20100241884 | Power Adjustment Based on Completion Times in a Parallel Computing System - A method, apparatus, and program product optimize power consumption in a parallel computing system that includes a plurality of computing nodes by selectively throttling performance of selected nodes to effectively slow down the completion of quicker executing parts of a workload of the computing system when those parts are dependent upon or otherwise associated with the completion of other, slower executing parts of the same workload. Parts of the workload are executed on the computing nodes, including concurrently executing a first part on a first computing node and a second part on a second computing node. The first node is selectively throttled during execution of the first part to decrease power consumption of the first node and conform a completion time for the first node in completing the first part of the workload with a completion time for the second node in completing the second part. | 09-23-2010 |
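A hedged sketch of the completion-time matching idea in the abstract above: slow the faster node so both parts of the workload finish together. The function names and the linear time-versus-performance model are illustrative assumptions:

```python
def throttle_fraction(fast_time, slow_time):
    """Fraction of full performance at which the faster node should run so
    its completion time matches the slower node's (linear scaling model)."""
    if fast_time >= slow_time:
        return 1.0  # already the slower (or equal) part: run at full speed
    return fast_time / slow_time

def throttled_completion(fast_time, fraction):
    # With performance scaled to `fraction`, runtime scales by 1/fraction,
    # so a part that took fast_time now completes in fast_time / fraction.
    return fast_time / fraction
```

Under this model, a part that finishes in 5 units alongside a 10-unit part would run at half performance, saving power while completing at the same time as its partner.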
20110022585 | MULTI-PARTITION QUERY GOVERNOR IN A COMPUTER DATABASE SYSTEM - An apparatus and method for a multi-partition query governor in a partitioned computer database system. In preferred embodiments a query governor uses data of a query governor file that is associated with multiple partitions to determine how the query governor manages access to the database across multiple partitions. Also, in preferred embodiments, the query governor in a local partition that receives a query request communicates with a query governor in a target partition to accumulate the total resource demands of the query on the local and target partitions. In preferred embodiments, a query governor estimates whether resources to execute a query will exceed a threshold over all or a combination of database partitions. | 01-27-2011 |
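The accumulation step in the governor abstract above can be sketched as follows; the per-partition cost map and threshold comparison are assumptions standing in for the governor file's actual contents:

```python
def governor_allows(partition_estimates, threshold):
    """Accumulate estimated resource demands of a query across the local and
    target partitions and allow execution only if the total stays within
    the governor threshold."""
    total_demand = sum(partition_estimates.values())
    return total_demand <= threshold
```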
20120144132 | MANAGEMENT OF PERSISTENT MEMORY IN A MULTI-NODE COMPUTER SYSTEM - A method and apparatus creates and manages persistent memory (PM) in a multi-node computing system. A PM Manager in the service node creates and manages pools of nodes with various sizes of PM. A node manager uses the pools of nodes to load applications to the nodes according to the size of the available PM. The PM Manager can dynamically adjust the size of the PM according to the needs of the applications based on historical use or as determined by a system administrator. The PM Manager works with an operating system kernel on the nodes to provide persistent memory for application data and system metadata. The PM Manager uses the persistent memory to load applications to preserve data from one application to the next. Also, the data preserved in persistent memory may be system metadata such as file system data that will be available to subsequent applications. | 06-07-2012 |
20120151573 | UTILIZING VIRTUAL PRIVATE NETWORKS TO PROVIDE OBJECT LEVEL SECURITY ON A MULTI-NODE COMPUTER SYSTEM - The disclosure herein provides data security on a parallel computer system using virtual private networks connecting the nodes of the system. A mechanism sets up access control data in the nodes that describes a number of security classes. Each security class is associated with a virtual network. Each user on the system is associated with one of the security classes. Each database object to be protected is given an attribute of a security class. Database objects are loaded into the system nodes that match the security class of the database object. When a query executes on the system, the query is sent to a particular class or set of classes such that the query is only seen by those nodes that are authorized by the equivalent security class. In this way, the network is used to isolate data from users that do not have proper authorization to access the data. | 06-14-2012 |
20120154412 | RUN-TIME ALLOCATION OF FUNCTIONS TO A HARDWARE ACCELERATOR - An accelerator work allocation mechanism determines at run-time which functions to allocate to a hardware accelerator based on a defined accelerator policy, and based on an analysis performed at run-time. The analysis includes reading the accelerator policy, and determining whether a particular function satisfies the accelerator policy. If so, the function is allocated to the hardware accelerator. If not, the function is allocated to the processor. | 06-21-2012 |
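An illustrative sketch of the run-time policy check described above. The policy fields (a set of accelerated function names and a minimum data size) are assumptions, not the patent's actual criteria:

```python
def allocate(function_name, data_size, policy):
    """Return 'accelerator' if the function satisfies the accelerator policy
    read at run-time, otherwise allocate the function to the processor."""
    if (function_name in policy["accelerated_functions"]
            and data_size >= policy["min_data_size"]):
        return "accelerator"
    return "cpu"
```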
20120180053 | CALL STACK AGGREGATION AND DISPLAY - A call stack aggregation mechanism aggregates call stacks from multiple threads of execution and displays the aggregated call stack to a user in a manner that visually distinguishes between the different call stacks in the aggregated call stack. The multiple threads of execution may be on the same computer system or on separate computer systems. | 07-12-2012 |
20120290618 | METHODS AND APPARATUS FOR PROCESSING A DATABASE QUERY - In a first aspect, a method is provided that includes the steps of (1) pre-computing a query result for each of a plurality of whole segments of data included in a database; (2) receiving a query specifying a defined range of data in the database; (3) determining if any of the whole segments are within the defined range; (4) performing the query on any partial segments of data within the defined range; and (5) determining the result of the query based on the pre-computed query results for any whole segments determined to be within the defined range and the result of the query on any partial segments within the defined range. Numerous other aspects are provided. | 11-15-2012 |
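A minimal sketch of the whole/partial-segment idea in the abstract above, using a sum query over fixed-size segments. The segment size and the sum aggregate are assumptions chosen for illustration:

```python
def build_segment_sums(data, seg_size):
    """Pre-compute the query result (here, a sum) for each whole segment."""
    return [sum(data[i:i + seg_size]) for i in range(0, len(data), seg_size)]

def range_sum(data, seg_sums, seg_size, lo, hi):
    """Sum of data[lo:hi], reusing pre-computed results for any whole
    segments inside the range and scanning only the partial segments."""
    total, i = 0, lo
    while i < hi:
        if i % seg_size == 0 and i + seg_size <= hi:
            total += seg_sums[i // seg_size]   # whole segment: reuse result
            i += seg_size
        else:
            total += data[i]                   # partial segment: scan rows
            i += 1
    return total
```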
20130013586 | METHOD AND SYSTEM FOR DATA MINING FOR AUTOMATIC QUERY OPTIMIZATION - A database monitor tracks performance statistics and information about the execution of different SQL statements. A query optimizer benefits from these statistics when generating an access plan. In particular, the query optimizer, upon receiving an SQL statement, searches the records of the database monitor for similar SQL statements that have previously been executed. As part of determining the best access plan for the current SQL statement, the query optimizer considers the information retrieved from the database monitor. In this way, the access plan that is generated can automatically be tuned based on empirical performance evidence. | 01-10-2013 |