Patent application number | Description | Published |
20090254893 | COMPILER OPTIMIZED FUNCTION VARIANTS FOR USE WHEN RETURN CODES ARE IGNORED - A mechanism and functionality are provided for generating and using compiler optimized function variants. These variants may be used, for example, in situations where return values of functions called by code are not thereafter used by the code calling the functions. In particular, for a function called by computer code, at least two variants for the function may be generated. A function call, for calling the function, within original computer code may be analyzed to determine which variant of the at least two variants to use for the function call. The function call may be modified in the original computer code, to generate modified computer code, based on results of the analysis identifying which variant of the at least two variants to use for the function call. | 10-08-2009 |
20090282217 | Horizontal Scaling of Stream Processing - A computer implemented method, data processing system, and computer program product for dynamically scheduling algorithms in a pipeline which operate on a stream of data. The illustrative embodiments determine a computational cost of each algorithm in a plurality of algorithms in a pipeline. The plurality of algorithms in the pipeline processes an incoming data stream in a first sequential algorithm order. The illustrative embodiments reorder the plurality of algorithms in the pipeline to form a second sequential algorithm order based on the computational cost of each algorithm. The plurality of algorithms may then be executed in the second sequential algorithm order. When the illustrative embodiments assign a spare processing unit to an algorithm at an end of the pipeline, the computational cost of each algorithm in the plurality of algorithms in the pipeline is redetermined. | 11-12-2009 |
20100031269 | Lock Contention Reduction - Illustrative embodiments provide a computer implemented method, a data processing system and a computer program product for lock contention reduction. In one illustrative embodiment, the computer implemented method provides a lock to an active thread, increments a lock counter, receives a request to de-schedule the active thread, and determines whether the lock is held by the active thread. The computer implemented method, responsive to a determination that the lock is held by the active thread, adds a first pre-determined amount to a time slice of the active thread. | 02-04-2010 |
20100217949 | Dynamic Logical Partition Management For NUMA Machines And Clusters - A partitioned NUMA machine is managed to dynamically transform its partition layout state based on NUMA considerations. The NUMA machine includes two or more NUMA nodes that are operatively interconnected by one or more internodal communication links. Each node includes one or more CPUs and associated memory circuitry. Two or more logical partitions each comprise a CPU and memory circuit allocation on at least one NUMA node. Each partition respectively runs at least one associated data processing application. The partitions are dynamically managed at runtime to transform the distributed data processing machine from a first partition layout state to a second partition layout state that is optimized for the data processing applications according to whether a given partition will most efficiently execute within a single NUMA node or by spanning across a node boundary. The optimization is based on access latency and bandwidth in the NUMA machine. | 08-26-2010
20100229181 | SMART SCHEDULING OF AUTOMATIC PARTITION MIGRATION BY THE USE OF TIMERS - Partition migrations are scheduled between virtual partitions of a virtually partitioned data processing system. The virtually partitioned data processing system is a tickless system in which a periodic timer interrupt is not guaranteed to be sent to the processor at a defined time interval. A request is received for a partition migration. Gaps between scheduled timer interrupts are identified. The partition migration is then scheduled to occur within the largest gap. | 09-09-2010
20110302372 | SMT/ECO MODE BASED ON CACHE MISS RATE - A computer implemented method for managing an execution mode for a parallel processor is provided. A monitor identifies a first efficiency rate for a first contested resource of the parallel processor operating in a first operating mode. Responsive to identifying the first efficiency rate for the first contested resource, the monitor identifies whether the first efficiency rate for the contested resource of the parallel processor operating in the first operating mode exceeds a threshold. Responsive to identifying that the efficiency rate for the contested resource exceeds the threshold, an operation of the parallel processor is changed to a second operating mode. | 12-08-2011 |
20120137062 | LEVERAGING COALESCED MEMORY - Embodiments of the invention relate to efficiently processing read transactions in a shared file system having multiple virtual machines. Each virtual machine in the file system has access to disk storage and local disk cache. At the same time, each virtual machine in the file system has access to remote disk cache of a remote virtual machine. For each read transaction, the local and/or remote disk cache is employed for data blocks to support the transaction. Disk storage is employed to support the transaction in the event that the data blocks are not available in the local and/or remote disk cache. | 05-31-2012
20120216030 | SMT/ECO MODE BASED ON CACHE MISS RATE - A computer implemented method for managing an execution mode for a parallel processor is provided. A monitor identifies a first efficiency rate for a first contested resource of the parallel processor operating in a first operating mode. Responsive to identifying the first efficiency rate for the first contested resource, the monitor identifies whether the first efficiency rate for the contested resource of the parallel processor operating in the first operating mode exceeds a threshold. Responsive to identifying that the efficiency rate for the contested resource exceeds the threshold, an operation of the parallel processor is changed to a second operating mode. | 08-23-2012 |
20130007121 | PREDICTIVE COLLABORATION MANAGEMENT - A method and apparatus for managing collaborations. Requests are received by a computer for collaboration on a topic. A set of experts having expertise in the topic and activity relating to the topic prior to the collaboration is identified by the computer, in order to predict a likelihood of participation by each expert in the collaboration. The set of experts is identified by searching a number of collections of information. | 01-03-2013
20130007322 | Hardware Enabled Lock Mediation - A tangible storage medium and data processing system build a runtime environment of a system. A profile manager receives a service request containing a profile identifier. The profile identifier specifies a required version of at least one software component. The profile manager identifies a complete installation of the software component, and at least one delta file. The profile manager dynamically constructs a classpath for the required version by preferentially utilizing files from the at least one delta file followed by files from the complete installation. The runtime environment is then built utilizing the classpath. | 01-03-2013 |
20130007323 | Hardware Enabled Lock Mediation - A computer implemented method for controlling access to a contested resource. When a lock acquisition request is received from a virtual machine, the partition management firmware determines whether the lock acquisition request is received within a preemption period of a time slice allocated to the virtual machine. If the lock acquisition request is received within the preemption period, the partition management firmware ends the time slice early, and performs a context switch. | 01-03-2013
20130097354 | PROTECTING MEMORY OF A VIRTUAL GUEST - The method for protecting memory of a virtual guest includes initializing a virtual guest on a host computing system. The host computing system includes a virtual machine manager that manages operation of the virtual guest. The virtual guest includes a distinct operating environment executing in a virtual operation platform provided by the virtual machine manager. The method includes receiving an allocation of run-time memory for the virtual guest, the allocation of run-time memory comprising a portion of run-time memory of the host computing system. The method includes setting, by the virtual guest, at least a portion of the allocation of run-time memory to be inaccessible by the virtual machine manager. | 04-18-2013 |
20130097392 | PROTECTING MEMORY OF A VIRTUAL GUEST - An apparatus and system for protecting memory of a virtual guest includes initializing a virtual guest on a host computing system. The host computing system includes a virtual machine manager that manages operation of the virtual guest. The virtual guest includes a distinct operating environment executing in a virtual operation platform provided by the virtual machine manager. The method includes receiving an allocation of run-time memory for the virtual guest, the allocation of run-time memory comprising a portion of run-time memory of the host computing system. The method includes setting, by the virtual guest, at least a portion of the allocation of run-time memory to be inaccessible by the virtual machine manager. | 04-18-2013 |
20140006745 | COMPRESSED MEMORY PAGE SELECTION | 01-02-2014 |
20140007091 | Maintaining hardware resource bandwidth quality-of-service via hardware counter | 01-02-2014 |
20140007096 | Maintaining hardware resource bandwidth quality-of-service via hardware counter | 01-02-2014 |
20140279985 | Extending Platform Trust During Program Updates - An approach is provided in which a computer system generates a current hash value of a computer program in response to receiving a request to execute the computer program. Next, the computer system determines that the current hash value fails to match a reference hash value that was previously generated subsequent to installing the computer program on the computer system. Since the two hash values do not match each other, the computer system matches the current hash value to an updated hash value that was previously generated in response to modifying the computer program on the computer system. In turn, the computer system executes the computer program when the current hash value matches the updated hash value. | 09-18-2014 |
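
Several of the abstracts above describe mechanisms concrete enough to sketch. The Python sketches that follow are keyed to application numbers; each is an illustration of the stated idea under stated assumptions, not the patented implementation, and every name, constant, or policy not in the abstract is invented for the example. First, the variant selection of 20090254893: the analysis decides, per call site, whether the return code is used, and routes the call to the cheaper variant when it is not. The function names here are hypothetical.

```python
# A minimal sketch of the variant-selection idea in 20090254893, assuming
# hypothetical names (write_record_full, write_record_slim): the "compiler"
# picks a cheaper variant wherever the call site discards the return code.

def write_record_full(buf, data):
    """Variant that computes and returns a status code."""
    buf.append(data)
    return 0 if data is not None else -1   # status the caller may test

def write_record_slim(buf, data):
    """Variant for call sites that ignore the status; skips the bookkeeping."""
    buf.append(data)                        # no status computation at all

VARIANTS = {"write_record": (write_record_full, write_record_slim)}

def select_variant(func_name, return_value_used):
    """Stand-in for the compiler analysis: pick the full or slim variant."""
    full, slim = VARIANTS[func_name]
    return full if return_value_used else slim

# Two call sites: one tests the status, one discards it.
buf = []
status = select_variant("write_record", return_value_used=True)(buf, "a")
select_variant("write_record", return_value_used=False)(buf, "b")
print(buf, status)   # ['a', 'b'] 0
```

In an actual compiler this analysis would run over the intermediate representation and rewrite call sites ahead of time; the runtime dispatch above only stands in for that rewriting.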
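The pipeline reordering of 20090282217 can be sketched as a measure-then-sort step. The cheapest-first policy below is one plausible reading; the abstract only says the order is derived from each algorithm's computational cost, and the sketch assumes the stages are order-independent filters.

```python
# A sketch of the cost-based reordering in 20090282217: time each stage on a
# sample of the stream, then re-sort the pipeline by measured cost.
import time

def reorder_pipeline(stages, sample):
    """Measure each stage on a sample and re-sort by measured cost."""
    costs = {}
    for stage in stages:
        start = time.perf_counter()
        stage(list(sample))                 # run on a copy of the sample
        costs[stage] = time.perf_counter() - start
    return sorted(stages, key=costs.get)    # assumed policy: cheapest first

def run(stages, stream):
    for stage in stages:
        stream = stage(stream)
    return stream

def cheap(xs):
    return [x for x in xs if x % 2 == 0]

def costly(xs):
    return [x for x in xs if sum(i * i for i in range(300)) >= 0 and x < 50]

pipeline = reorder_pipeline([costly, cheap], range(1000))
print([f.__name__ for f in pipeline])       # likely ['cheap', 'costly']
print(run(pipeline, list(range(10))))       # [0, 2, 4, 6, 8]
```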
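The lock-contention mechanism of 20100031269 amounts to a scheduler hook: if a thread about to be de-scheduled still holds a lock, its time slice is extended by a pre-determined amount. The Thread class and the 2 ms bonus are assumptions.

```python
# A sketch of the time-slice extension in 20100031269: rather than de-schedule
# a lock holder (and leave others spinning on the lock), grant it extra time.
from dataclasses import dataclass

BONUS_MS = 2   # hypothetical "first pre-determined amount"

@dataclass
class Thread:
    name: str
    time_slice_ms: int
    locks_held: int = 0

def acquire_lock(thread):
    thread.locks_held += 1          # "increments a lock counter"

def release_lock(thread):
    thread.locks_held -= 1

def on_deschedule_request(thread):
    """Scheduler hook: extend the slice if the thread still holds a lock."""
    if thread.locks_held > 0:
        thread.time_slice_ms += BONUS_MS
        return False                # keep the thread running for now
    return True                     # safe to de-schedule

t = Thread("worker", time_slice_ms=10)
acquire_lock(t)
print(on_deschedule_request(t), t.time_slice_ms)   # False 12
release_lock(t)
print(on_deschedule_request(t))                    # True
```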
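The scheduling step of 20100229181 reduces to finding the widest gap between known timer interrupts on a tickless system, then placing the migration inside it. A minimal sketch, with times in arbitrary units:

```python
# A sketch of the gap-finding step in 20100229181: on a tickless system the
# scheduled timer interrupts are known in advance, so the migration can be
# placed inside the widest gap between consecutive interrupts.

def largest_gap(timer_events):
    """Return (start, end) of the widest gap between scheduled interrupts."""
    events = sorted(timer_events)
    pairs = zip(events, events[1:])
    return max(pairs, key=lambda p: p[1] - p[0])

interrupts = [0, 5, 7, 30, 32, 50]
start, end = largest_gap(interrupts)
print(f"schedule migration in ({start}, {end})")   # (7, 30)
```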
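The mode switch of 20110302372 (and its continuation 20120216030) can be read as a threshold rule on a contested-resource efficiency rate, here taken to be the cache miss rate per the title. The threshold value is an assumption.

```python
# A sketch of the mode switch in 20110302372 / 20120216030, treating the cache
# miss rate as the monitored "efficiency rate for a contested resource".

MISS_RATE_THRESHOLD = 0.15   # illustrative value; not from the abstract

def next_mode(current_mode, cache_misses, cache_accesses):
    miss_rate = cache_misses / cache_accesses
    if current_mode == "SMT" and miss_rate > MISS_RATE_THRESHOLD:
        return "ECO"    # contention too high: change to the second mode
    return current_mode

print(next_mode("SMT", cache_misses=40, cache_accesses=200))   # ECO
print(next_mode("SMT", cache_misses=10, cache_accesses=200))   # SMT
```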
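The read path of 20120137062 checks the local disk cache, then a remote virtual machine's cache, and only then disk storage. Promoting remotely found blocks into the local cache, as below, is an added assumption the abstract does not state.

```python
# A sketch of the read path in 20120137062. The dictionaries stand in for
# real caches and disk; only the lookup order is taken from the abstract.

def read_block(block_id, local_cache, remote_cache, disk):
    if block_id in local_cache:
        return local_cache[block_id], "local cache"
    if block_id in remote_cache:
        local_cache[block_id] = remote_cache[block_id]   # assumed promotion
        return local_cache[block_id], "remote cache"
    local_cache[block_id] = disk[block_id]               # slow path
    return local_cache[block_id], "disk"

local, remote = {1: b"a"}, {2: b"b"}
disk = {1: b"a", 2: b"b", 3: b"c"}
for blk in (1, 2, 3):
    print(blk, read_block(blk, local, remote, disk)[1])
# 1 local cache / 2 remote cache / 3 disk
```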
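The classpath construction of 20130007322 is stated directly: files from the delta for the required version take precedence, and the complete installation fills in the rest. A sketch with illustrative paths:

```python
# A sketch of the classpath construction in 20130007322: delta files shadow
# same-named files in the complete installation.

def build_classpath(complete_install, delta):
    """Delta entries first, then any base files the delta does not replace."""
    ordered = dict(delta)                  # delta files lead the classpath
    for name, path in complete_install.items():
        ordered.setdefault(name, path)     # base fills in what delta lacks
    return list(ordered.values())

base = {"core.jar": "/install/v1/core.jar", "util.jar": "/install/v1/util.jar"}
delta_v2 = {"core.jar": "/deltas/v2/core.jar"}
print(build_classpath(base, delta_v2))
# ['/deltas/v2/core.jar', '/install/v1/util.jar']
```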
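The lock mediation of 20130007323 is a single decision in the partition management firmware: a lock request arriving inside the preemption period at the tail of a time slice ends the slice early instead of granting the lock. The window sizes below are illustrative.

```python
# A sketch of the decision in 20130007323: do not let a guest take a lock it
# would hold across the imminent context switch; end the slice early instead.

TIME_SLICE_MS = 10      # illustrative slice length
PREEMPT_WINDOW_MS = 2   # illustrative preemption period at the tail

def on_lock_request(elapsed_ms):
    """Return the firmware's action for a lock request at elapsed_ms."""
    if TIME_SLICE_MS - elapsed_ms <= PREEMPT_WINDOW_MS:
        return "end slice early, context switch, retry lock next slice"
    return "grant lock"

print(on_lock_request(3))    # grant lock
print(on_lock_request(9))    # end slice early, context switch, ...
```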
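The memory protection of 20130097354 and 20130097392 is a hardware/firmware mechanism, so any Python rendering is only a model of the contract: the guest marks part of its run-time memory inaccessible, and the virtual machine manager's access path must honor the marking. The hypercall-style interface here is entirely hypothetical.

```python
# An abstract model of 20130097354 / 20130097392: the guest hides pages from
# the VMM, and the VMM's read path refuses guest-protected pages.

class GuestMemory:
    def __init__(self, pages):
        self.pages = {p: b"\x00" for p in range(pages)}
        self.vmm_hidden = set()

    def guest_protect(self, page_ids):
        """Guest-initiated: mark pages inaccessible to the VMM."""
        self.vmm_hidden.update(page_ids)

    def vmm_read(self, page_id):
        if page_id in self.vmm_hidden:
            raise PermissionError(f"page {page_id} is guest-protected")
        return self.pages[page_id]

mem = GuestMemory(pages=4)
mem.guest_protect({1, 2})
print(mem.vmm_read(0))       # accessible
try:
    mem.vmm_read(1)
except PermissionError as e:
    print(e)                 # page 1 is guest-protected
```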
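Finally for this group, the check order of 20140279985: the current hash of the program is compared first against the install-time reference hash and then against the hash recorded when the program was legitimately updated. SHA-256 below is a stand-in; the abstract does not name a hash function.

```python
# A sketch of the check order in 20140279985: execute only if the current
# hash matches the install-time reference or the recorded update hash.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_execute(program: bytes, reference_hash: str, updated_hash: str) -> bool:
    current = sha256(program)
    if current == reference_hash:      # untouched since install
        return True
    return current == updated_hash     # modified, but by a trusted update

original = b"v1 binary"
patched = b"v2 binary"
ref, upd = sha256(original), sha256(patched)
print(may_execute(patched, ref, upd))      # True: matches the update hash
print(may_execute(b"tampered", ref, upd))  # False: matches neither
```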
Patent application number | Description | Published |
20080320487 | SCHEDULING TASKS ACROSS MULTIPLE PROCESSOR UNITS OF DIFFERING CAPACITY - A mechanism is provided for scheduling tasks across multiple processor units of differing capacity. In a multiple processor unit system with processor units of disparate speeds, it is advantageous to have the most processing-intensive tasks run on the processor units with the highest capacity. All tasks are initially scheduled on the lowest capacity processor units. Because processor units with higher capacity are more likely to have idle time, these higher capacity processor units may pull one or more tasks onto themselves from the same or lower capacity processor units. A processor unit will attempt to pull tasks that utilize a larger percentage of the timeslice. When a higher capacity processor unit is overloaded or near capacity, the higher capacity processor unit may push tasks to processor units with the same or lower capacity. A processor unit will attempt to push tasks that utilize a smaller percentage of the timeslice. | 12-25-2008 |
20090024793 | METHOD AND APPARATUS FOR MANAGING DATA IN A HYBRID DRIVE SYSTEM - The illustrative embodiments described herein provide an apparatus and method for managing data in a hybrid drive system. In one embodiment, a process determines whether the detachable memory contains clean data in response to identifying that a cache portion of a detachable memory is unavailable. The clean data does not require a disk to be in a spin state to be removed from the detachable memory. The process removes the clean data from the detachable memory in response to determining that the detachable memory contains the clean data. The process stores the data on the detachable memory. | 01-22-2009 |
20090164765 | Determining Thermal Characteristics Of Instruction Sets - Methods, apparatus, and products for determining thermal characteristics of instruction sets comprising one or more computer program instructions executed by a computer processor are disclosed that include tracking, in a performance counter, a number of classes of instructions run during execution of a plurality of instruction sets; identifying, for each instruction set, from the performance counter, a number of each class of instructions run during execution of the instruction set; and ranking the instruction sets in dependence upon the number of each class of instructions run during execution of each instruction set and a profile of thermal characteristics of classes of instructions. | 06-25-2009 |
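
Two entries in this second group also sketch cleanly. The pull side of 20080320487: an idle higher-capacity processor unit pulls, from a slower peer, the task using the largest share of its timeslice (the push side is symmetric, preferring small-share tasks). Utilization numbers are illustrative.

```python
# A sketch of the pull policy in 20080320487: move the most timeslice-hungry
# task from a slower processor unit's queue onto an idle faster one.

def pull_task(fast_queue, slow_queue):
    """Move the task with the largest timeslice share to the fast unit."""
    if not slow_queue:
        return None
    task = max(slow_queue, key=lambda t: t["slice_used"])
    slow_queue.remove(task)
    fast_queue.append(task)
    return task["name"]

slow = [{"name": "logger", "slice_used": 0.2},
        {"name": "encoder", "slice_used": 0.9}]
fast = []
print(pull_task(fast, slow))   # encoder: the most processing-intensive task
```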
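And the ranking of 20090164765: weight each instruction set's per-class instruction counts by a thermal profile and sort. The class names and weights are invented for the example.

```python
# A sketch of the ranking step in 20090164765: score each instruction set by
# its per-class counts weighted by a thermal profile, hottest first.

THERMAL_PROFILE = {"fp": 3.0, "int": 1.0, "load_store": 1.5}  # assumed weights

def thermal_score(class_counts):
    return sum(THERMAL_PROFILE[c] * n for c, n in class_counts.items())

instruction_sets = {
    "kernel_a": {"fp": 500, "int": 100, "load_store": 50},
    "kernel_b": {"fp": 10, "int": 900, "load_store": 300},
}
ranked = sorted(instruction_sets,
                key=lambda k: thermal_score(instruction_sets[k]),
                reverse=True)
print(ranked)   # ['kernel_a', 'kernel_b']: kernel_a runs hotter
```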