Patent application number | Description | Published |
20090013017 | Methods, Systems, and Computer Program Products for Optimizing Virtual Machine Memory Consumption - A method, system, and computer program product for optimizing virtual machine (VM) memory consumption are provided. The method includes monitoring VM accesses to a plurality of objects in a heap, and identifying a dead object among the objects in the heap. The method also includes copying the dead object to a data storage device as a serialized object, and replacing the dead object in the heap with a loader object. The loader object is smaller than the dead object and includes a reference to the serialized object. | 01-08-2009 |
20090031202 | Methods, Systems, and Computer Program Products for Class Verification - A method, system, and computer program product for class verification are provided. The method includes initiating loading of a class, and searching for the class in verification caches. A record from the verification caches, including a checksum, is returned upon locating the class. The method further includes comparing the checksum in the record to a checksum of the class being loaded, and completing the loading of the class when the checksums match. The method additionally includes performing bytecode verification of the class upon one of: a checksum comparison mismatch, and a failure to locate the class in the verification caches. The method also includes calculating a new checksum of the class upon a successful bytecode verification, and storing the new checksum in the verification caches. | 01-29-2009 |
20090043557 | Method, System, and Apparatus for Emulating Functionality of a Network Appliance in a Logically Partitioned Environment - A network appliance is emulated in a logically partitioned environment. Activity of a logical partition (LPAR) acting as a network appliance is monitored. When a change in activity occurs in the LPAR acting as the network appliance, a set of business logic partitions served by the LPAR acting as the network appliance is determined, and resource utilization of each business logic partition served by the LPAR acting as the network appliance is determined. A determination is also made whether each business logic partition served by the LPAR acting as the network appliance needs more or less resources. Availability of resources is determined, and resources are allocated or deallocated to or from the business logic partitions served by the LPAR acting as the network appliance based on the need for resources and the availability of resources for each business logic partition. | 02-12-2009 |
20090187890 | Method and System for Associating Profiler Data With a Reference Clock - A computer implemented method, apparatus, and program product analyze performance data particular to an algorithm using a profiler, and automatically associate the performance data with a reference clock time. The performance data may be automatically associated with a tag, also associated with the reference clock time. Using the tag, the performance data may be associated with a portion of the algorithm. For instance, the tag may be associated with a corresponding tag associated with the algorithm. User input may be received that designates both the tag and an additional tag associated with the program code. Aspects may identify tags in the performance data that correspond to both the tag and the additional tag of the program code. The portion of the performance data bounded by the identified tags may be retrieved and displayed to a user. In this manner, the performance data may be automatically associated with a portion of the algorithm. | 07-23-2009 |
20090204956 | MULTIPLE-MODE SOFTWARE LICENSE ENFORCEMENT - A computer implemented method for multiple-mode software license enforcement on a client includes encoding in the software at least one predetermined event that occurs prior to validation of the software program, and encoding it with different functional states. The resulting modification of the software may be of reduced functionality, increased functionality, or both. The predetermined events may be the elapsing of a predetermined length of time, the entry of a valid registration key, or an act of validating. Each of these events may take place multiple times. | 08-13-2009 |
20090216784 | System and Method of Storing Probabilistic Data - A method of storing probabilistic data in accordance with an exemplary embodiment of the present invention includes capturing a first instance of a probabilistic data sample, storing the first instance of the probabilistic data sample as a probabilistic data record, collecting a second instance of the probabilistic data sample, refining the probabilistic data record with the second instance of the probabilistic data sample to establish a refined probabilistic data record, and saving the refined probabilistic data record in a probabilistic data record database. | 08-27-2009 |
20090254918 | Mechanism for Performance Optimization of Hypertext Preprocessor (PHP) Page Processing Via Processor Pinning - A method, system, and computer program product for optimizing "Hypertext Preprocessor" (PHP) processes by identifying the PHP pages which are active on a server and forwarding requests for specific pages to a processor which has recently processed that page. A request processing optimization (RPO) utility assigns an initial request received at the server for a PHP page based on a number of factors, which may include the relative usage level of a processor within a pool of available processors on the server. The RPO utility assigns a request to additional processors based on: (1) a threshold frequency of page requests; and (2) a resource-intensiveness factor of a page request, measured by the average response time of the page request. The assignment of PHP pages to a particular processor or processors enhances cache performance, since the requisite code for a specific PHP page is loaded into the processor's cache. | 10-08-2009 |
20090265419 | Executing Applications at Servers With Low Energy Costs - Embodiments of the invention provide methods, systems, and articles of manufacture for managing and executing applications in a clustered server system. In one embodiment, an application may be installed at an application server having the associated lowest energy cost of maintenance, thereby lowering the cost of operating the system. In another embodiment, requests for services from the system may be routed to application servers having the lowest energy cost, thereby lowering the cost of operating the system. | 10-22-2009 |
20090265704 | Application Management for Reducing Energy Costs - Embodiments of the invention provide methods, systems, and articles of manufacture for managing and executing applications in a clustered server system. In one embodiment, an application may be installed at an application server having the associated lowest energy cost of maintenance, thereby lowering the cost of operating the system. In another embodiment, requests for services from the system may be routed to application servers having the lowest energy cost, thereby lowering the cost of operating the system. | 10-22-2009 |
20090282414 | Prioritized Resource Access Management - Middleware may dynamically restrict or otherwise allocate computer resources in response to changing demand and based on prioritized user access levels. Users associated with a relatively low priority may have their resource access delayed in response to high demand, e.g., processor usage. Users having a higher priority may experience uninterrupted access during the same period and until demand subsides. | 11-12-2009 |
20100037244 | Method for Providing Inline Service-Oriented Architecture Application Fragments - A method for providing inline service-oriented architecture application fragments is disclosed. A remote procedure call is initially received from a client application executing on a first data processing system by an application server executing on a second data processing system. The remote procedure call is a call to execute a service in a service-oriented architecture hosted by the application server. The remote procedure call includes a metadata tag indicating a preference for having computer-executable code corresponding to the service transmitted from the second data processing system to the first data processing system for execution on the first data processing system. A determination is made whether or not the service supports transmitting computer-executable code. If the service supports transmitting computer-executable code, a service unit of work is transmitted to the first data processing system. If the service does not support transmitting executable code, the service is executed by the second data processing system to generate a result. | 02-11-2010 |
20110099542 | Controlling Compiler Optimizations - In an embodiment, a conditional branch is detected that selects between execution of a first alternative block and a second alternative block. A first count and a second count are saved, where the first count is a number of times the first alternative block was executed, and the second count is a number of times the second alternative block was executed. If the first count is greater than a threshold and the second count equals zero, the first alternative block is compiled into first alternative block object code and the second alternative block is not compiled. If the first count is not greater than the threshold, the first alternative block is compiled into the first alternative block object code and the second alternative block is compiled into second alternative block object code. | 04-28-2011 |
20110307871 | Distributed Debugging - In an embodiment, a first debug agent at a first computer receives a packet. The first debug agent adds a debug command and an identifier of the first debug agent to the packet and sends the packet to a receiving computer. A second debug agent at the receiving computer removes the debug command and the identifier of the first debug agent from the packet and sends the packet to a second program that executes at the receiving computer. The second debug agent further executes the debug command, which causes the second program that executes on the receiving computer to halt execution at a breakpoint or address watch memory location. The second debug agent sends the state of the second program to the first debug agent, which presents, at the first computer, the state and a listing of the second program. | 12-15-2011 |
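The dead-object replacement scheme of application 20090013017 can be illustrated with a minimal Python sketch. This is not the patented JVM mechanism: a dict stands in for the managed heap, `pickle` stands in for the serializer, and the names `Loader`, `swap_out`, and the `.ser` suffix are hypothetical.

```python
import os
import pickle
import tempfile

class Loader:
    """Small placeholder left in the heap; holds only a reference
    to the serialized object on the data storage device."""
    __slots__ = ("path",)

    def __init__(self, path):
        self.path = path

    def load(self):
        # Deserialize the object back into the heap on demand.
        with open(self.path, "rb") as f:
            return pickle.load(f)

def swap_out(heap, key, storage_dir):
    """Copy a dead object to storage as a serialized object and
    replace it in the heap with a (smaller) Loader."""
    obj = heap[key]
    path = os.path.join(storage_dir, f"{key}.ser")
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    heap[key] = Loader(path)
    return heap[key]

# Usage: a large, rarely accessed object is swapped out, then restored.
heap = {"report": list(range(10_000))}
with tempfile.TemporaryDirectory() as d:
    loader = swap_out(heap, "report", d)
    assert isinstance(heap["report"], Loader)   # heap now holds the stub
    restored = loader.load()                    # full object recovered
```

The point of the scheme is the size asymmetry: the `Loader` stub occupies a few machine words in the heap, while the serialized payload lives on the data storage device until it is needed again.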
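The checksum-gated verification of application 20090031202 amounts to a cache lookup before the expensive bytecode-verification step. A minimal Python sketch, assuming `zlib.crc32` as the checksum and with `bytecode_verify` as a hypothetical stand-in for real verification:

```python
import zlib

verification_cache = {}  # class name -> cached checksum record

def bytecode_verify(class_bytes):
    # Stand-in for full bytecode verification; assumed to succeed here.
    return True

def load_class(name, class_bytes):
    """Complete loading without re-verification when a cached checksum
    matches the checksum of the class being loaded."""
    checksum = zlib.crc32(class_bytes)
    record = verification_cache.get(name)
    if record is not None and record == checksum:
        return "loaded (verification skipped)"
    # Cache miss or checksum mismatch: perform bytecode verification,
    # then calculate and store the new checksum.
    if not bytecode_verify(class_bytes):
        raise ValueError(f"bytecode verification failed for {name}")
    verification_cache[name] = checksum
    return "loaded (verified)"

# First load verifies and populates the cache; an identical reload skips.
first = load_class("com.example.Foo", b"\xca\xfe\xba\xbe")
second = load_class("com.example.Foo", b"\xca\xfe\xba\xbe")
```

A changed class body yields a mismatched checksum, which forces re-verification and replaces the cached record, so stale entries cannot mask modified bytecode.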
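The one-sided refinement of application 20090216784 can be sketched as a tiny record store. The abstract does not fix the refinement rule, so the running average used here is an assumption, as are the names `capture` and `database`:

```python
database = {}  # probabilistic data record database: id -> record

def capture(sample_id, value):
    """Store the first instance of a probabilistic data sample, or refine
    the existing record by averaging in a later instance (one plausible
    refinement rule; the abstract does not specify it)."""
    record = database.get(sample_id)
    if record is None:
        database[sample_id] = {"value": value, "n": 1}
    else:
        # Incremental mean: refine without re-reading earlier instances.
        n = record["n"] + 1
        record["value"] += (value - record["value"]) / n
        record["n"] = n

capture("temp", 20.0)   # first instance stored as the record
capture("temp", 22.0)   # second instance refines the record to 21.0
```

Keeping the instance count `n` in the record is what lets each new sample refine the stored value in place, matching the capture/refine/save cycle the abstract describes.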
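The processor-pinning idea of application 20090254918 reduces to routing repeat requests for a page back to the processor that last handled it, so the page's code is likely still warm in that processor's cache. A minimal sketch; the class and method names are hypothetical, and the additional-processor expansion driven by the request-frequency threshold is omitted:

```python
class RequestOptimizer:
    """Pin each PHP page to a processor; route repeat requests there."""

    def __init__(self, num_cpus):
        self.usage = [0] * num_cpus   # relative usage level per processor
        self.page_cpu = {}            # page -> pinned processor index

    def assign(self, page):
        cpu = self.page_cpu.get(page)
        if cpu is None:
            # Initial request: pick the least-used processor in the pool.
            cpu = self.usage.index(min(self.usage))
            self.page_cpu[page] = cpu
        self.usage[cpu] += 1
        return cpu

# Usage: two pages spread across two processors, repeats stay pinned.
rpo = RequestOptimizer(2)
a = rpo.assign("index.php")   # lands on the least-used processor
b = rpo.assign("about.php")   # lands on the other processor
c = rpo.assign("index.php")   # routed back to its pinned processor
```

Pinning trades perfect load balance for cache locality; the full scheme in the abstract rebalances by adding processors for pages that are requested often or are expensive to render.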
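The compilation decision in application 20110099542 is a simple rule over the two saved execution counts. A sketch under the assumption that the threshold value and the behavior for a hot branch whose other side was also taken (compile both) are implementation choices the abstract leaves open:

```python
THRESHOLD = 1000  # assumed value; the abstract leaves it unspecified

def plan_compilation(first_count, second_count, threshold=THRESHOLD):
    """Decide which alternative blocks of a conditional branch to compile
    into object code, based on saved execution counts."""
    if first_count > threshold and second_count == 0:
        # Hot, one-sided branch: compile only the observed alternative.
        return {"first": True, "second": False}
    # Otherwise compile both alternative blocks.
    return {"first": True, "second": True}
```

The payoff is that a branch observed to go one way thousands of times never pays the code-size and compile-time cost of its unexecuted alternative, while a branch seen too rarely to trust, or seen to go both ways, keeps both paths compiled.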