Patent application number | Description | Published |
--- | --- | --- |
20080225724 | Method and Apparatus for Improved Data Transmission Through a Data Connection - A computer implemented method, apparatus, and computer usable code for receiving data from a sender across a network connection for a data transfer. An expected size for a congestion window for the sender is identified. An amount of the data received from the sender is tracked. An acknowledgment is sent in response to the amount of data received from the sender meeting the expected size of the congestion window for the sender. | 09-18-2008 |
20080256324 | IMPLEMENTING A FAST FILE SYNCHRONIZATION IN A DATA PROCESSING SYSTEM - A system and method for implementing a fast file synchronization in a data processing system. A memory management unit divides a file stored in system memory into a collection of data block groups. In response to a master (e.g., processing unit, peripheral, etc.) modifying a first data block group among the collection of data block groups, the memory management unit writes a first block group number associated with the first data block group to system memory. In response to a master modifying a second data block group, the memory management unit writes the first data block group to a hard disk drive and writes a second data block group number associated with the second data block group to system memory. In response to a request to update modified data block groups of the file stored in the system memory to the hard disk drive, the memory management unit writes the second data block group to the hard disk drive. | 10-16-2008 |
20090217276 | METHOD AND APPARATUS FOR MOVING THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby the sleep time associated with the lock, as experienced by the first thread and the second thread, is reduced below the sleep time experienced before the cooperation was detected. | 08-27-2009 |
20110113214 | INFORMATION HANDLING SYSTEM MEMORY MANAGEMENT - An information handling system (IHS) loads an application that may include startup code and steady state operation code. The IHS allocates one region of system memory to the startup code and another region of system memory to the steady state operation code. A programmer inserts a memory release call command at a location that marks the end of execution of the startup code. After executing the startup code, the operating system receives the memory release call command. In response to the memory release call command, the operating system releases, or de-allocates, the region of memory that the IHS previously assigned to the startup code. This makes the released memory available to code other than the startup code, such as other code pages and library pages. | 05-12-2011 |
20110153975 | METHOD FOR PRIORITIZING VIRTUAL REAL MEMORY PAGING BASED ON DISK CAPABILITIES - A method manages memory paging operations. Responsive to a request to page out a memory page from a shared memory pool, the method identifies whether a physical space within one of a number of paging space devices has been allocated for the memory page. If physical space within the paging space device has not been allocated for the memory page, a page priority indicator for the memory page is identified. The memory page is then allocated to one of a number of memory pools within one of the number of paging space devices, according to the page priority indicator of the memory page. The memory page is then written to the allocated memory pool. | 06-23-2011 |
20110161539 | OPPORTUNISTIC USE OF LOCK MECHANISM TO REDUCE WAITING TIME OF THREADS TO ACCESS A SHARED RESOURCE - Embodiments of the invention provide a method, apparatus and computer program product for enabling a thread to acquire a lock associated with a shared resource, when a locking mechanism is used therewith, wherein each embodiment reduces waiting time and enhances efficiency in using the shared resource. One embodiment is associated with a plurality of processors, which includes two or more processors that each provides a specified thread to access a shared resource. The shared resource can only be accessed by one thread at a given time, a locking mechanism enables a first one of the specified threads to access the shared resource while each of the other specified threads is retained in a waiting queue, and a second one of the specified threads occupies a position of highest priority in the queue. The method includes the step of identifying a time period between a time when the first specified thread releases access to the shared resource, and a later time when the second specified thread becomes enabled to access the shared resource. Responsive to an additional thread that is not one of the specified threads being provided by a processor to access the shared resource during the identified time period, it is determined whether a first prespecified criterion pertaining to the specified threads retained in the queue has been met. Responsive to the first criterion being met, the method determines whether a second prespecified criterion has been met, wherein the second criterion is that the number of specified threads in the queue has not decreased since a specified prior time. Responsive to the second criterion being met, the method then decides whether to enable the additional thread to access the shared resource before the second specified thread accesses the resource. | 06-30-2011 |
20110246800 | OPTIMIZING POWER MANAGEMENT IN MULTICORE VIRTUAL MACHINE PLATFORMS BY DYNAMICALLY VARIABLE DELAY BEFORE SWITCHING PROCESSOR CORES INTO A LOW POWER STATE - Distributing a thread for running on a physical processor and enabling the physical processor to be switched into a low power snooze state when said running thread is IDLE. However, this switching into said low power state may be delayed by a delay time from an IDLE dispatch from said running thread; the delay is determined by tracking the rate of said IDLE dispatches per processor clock interval and dynamically varying said delay time, wherein the delay time is decreased when said rate of IDLE dispatches increases and the delay time is increased when said rate of IDLE dispatches decreases. | 10-06-2011 |
20120005448 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage by providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access of the pinned and allocated pages by the user processes without use of the handles; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been recently accessed within a pre-determined period of time; requesting via a callback communication to each user process to determine which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and, responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating one or more pages corresponding to the page indicators. | 01-05-2012 |
20120072676 | SELECTIVE MEMORY COMPRESSION FOR MULTI-THREADED APPLICATIONS - A method, system, and computer usable program product for selective memory compression for multi-threaded applications are provided in the illustrative embodiments. An identification of a memory region that is shared by a plurality of threads in an application is received at a first entity in a data processing system. A request for a second entity in the data processing system to keep the memory region uncompressed when compressing at least one of a plurality of memory regions that comprise the memory region is provided from the first entity to the second entity. | 03-22-2012 |
20120204186 | PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization. | 08-09-2012 |
20120210331 | PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization. | 08-16-2012 |
20120278809 | LOCK BASED MOVING OF THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby the sleep time associated with the lock, as experienced by the first thread and the second thread, is reduced below the sleep time experienced before the cooperation was detected. | 11-01-2012 |
20130179616 | Partitioned Shared Processor Interrupt-intensive Task Segregator - Interrupt-intensive and interrupt-driven processes are managed among a plurality of virtual processors, wherein each virtual processor is associated with a physical processor, wherein each physical processor may be associated with a plurality of virtual processors, and wherein each virtual processor is tasked to execute one or more of the processes, by determining which of a plurality of the processes executing among a plurality of virtual processors are being or have been driven by at least a minimum count of interrupts over a period of operational time; selecting a subset of the plurality of virtual processors to form a sequestration pool; migrating the interrupt-intensive processes on to the sequestration pool of virtual processors; and commanding by a computer a bias in delivery or routing of the interrupts to the sequestration pool of virtual processors. | 07-11-2013 |
20130227549 | MANAGING UTILIZATION OF PHYSICAL PROCESSORS IN A SHARED PROCESSOR POOL - Systems, methods and computer program products may provide managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors. | 08-29-2013 |
20130254775 | EFFICIENT LOCK HAND-OFF IN A SYMMETRIC MULTIPROCESSING SYSTEM - Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the first lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor. | 09-26-2013 |
20130290666 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage by providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access of the pinned and allocated pages by the user processes without use of the handles; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been recently accessed within a pre-determined period of time; requesting via a callback communication to each user process to determine which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and, responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating one or more pages corresponding to the page indicators. | 10-31-2013 |
20140101662 | EFFICIENT LOCK HAND-OFF IN A SYMMETRIC MULTIPROCESSOR SYSTEM - Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the first lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor. | 04-10-2014 |
20140149672 | SELECTIVE RELEASE-BEHIND OF PAGES BASED ON REPAGING HISTORY IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that entry. If so, the release-behind component does not discard the candidate previous page from the cache. Otherwise, the release-behind component discards the candidate previous page. | 05-29-2014 |
20140149675 | SELECTIVE RELEASE-BEHIND OF PAGES BASED ON REPAGING HISTORY IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that entry. If so, the release-behind component does not discard the candidate previous page from the cache. Otherwise, the release-behind component discards the candidate previous page. | 05-29-2014 |
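
A minimal sketch of the acknowledgment coalescing described in 20080225724, assuming a receiver that simply counts bytes; the `CoalescingReceiver` name, the callback interface, and the 1460-byte segment size are illustrative choices, not details from the application.

```python
# Hypothetical receiver-side ACK coalescing: hold the acknowledgment until the
# bytes received reach the sender's expected congestion window.
class CoalescingReceiver:
    def __init__(self, expected_cwnd_bytes):
        self.expected_cwnd = expected_cwnd_bytes  # estimate of the sender's cwnd
        self.unacked_bytes = 0                    # data received but not yet ACKed

    def on_data(self, payload_len, send_ack):
        """Track received data; ACK once a full congestion window has arrived."""
        self.unacked_bytes += payload_len
        if self.unacked_bytes >= self.expected_cwnd:
            send_ack()                            # one ACK covers the whole window
            self.unacked_bytes = 0

# With a four-segment window, only the fourth segment triggers an acknowledgment.
recv = CoalescingReceiver(expected_cwnd_bytes=4 * 1460)
for _ in range(4):
    recv.on_data(1460, send_ack=lambda: print("ACK sent"))
```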
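
The write-behind scheme of 20080256324 can be restated as follows; `BlockGroupSyncer` and its flush callback are hypothetical, and the sketch tracks only the single most recently modified group, as in the abstract's two-group example.

```python
class BlockGroupSyncer:
    """Write-behind at block-group granularity: dirtying a new group flushes
    the previously dirtied group, so a final sync only writes the last one."""

    def __init__(self, flush_to_disk):
        self.flush = flush_to_disk   # callback: writes one block group to disk
        self.pending_group = None    # number of the most recently modified group

    def on_modify(self, group_number):
        if self.pending_group is not None and self.pending_group != group_number:
            self.flush(self.pending_group)   # earlier dirty group goes to disk now
        self.pending_group = group_number    # remember the newly dirtied group

    def sync(self):
        """Sync request: only the last modified group is still unwritten."""
        if self.pending_group is not None:
            self.flush(self.pending_group)
            self.pending_group = None

syncer = BlockGroupSyncer(flush_to_disk=lambda g: print(f"flush group {g}"))
syncer.on_modify(1)   # records group 1
syncer.on_modify(2)   # flushes group 1, records group 2
syncer.sync()         # flushes group 2
```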
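
One way to read the lock-driven thread placement of 20090217276 (and its continuation 20120278809) is as a scheduler hook; everything here, from `LockAffinityScheduler` down to the `vp_of` lookup, is an assumed interface rather than the patented mechanism itself.

```python
# Assumed scheduler hook: when a second thread waits on a lock another thread
# already waited on, co-locate both threads on one virtual processor.
class LockAffinityScheduler:
    def __init__(self):
        self.lock_waiters = {}   # lock id -> first thread seen waiting on it
        self.assignment = {}     # thread id -> virtual processor id

    def on_lock_wait(self, lock_id, thread_id, vp_of):
        first = self.lock_waiters.get(lock_id)
        if first is not None and first != thread_id:
            common_vp = vp_of(first)                # virtual processor of the first thread
            self.assignment[first] = common_vp      # keep the first thread there
            self.assignment[thread_id] = common_vp  # migrate the second thread to it
        else:
            self.lock_waiters[lock_id] = thread_id

sched = LockAffinityScheduler()
sched.on_lock_wait("res-lock", "t1", vp_of=lambda t: "vp2")
sched.on_lock_wait("res-lock", "t2", vp_of=lambda t: "vp2")
print(sched.assignment)   # {'t1': 'vp2', 't2': 'vp2'}
```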
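
A user-level analogue of the memory release call in 20110113214, using an anonymous `mmap` arena as a stand-in for the startup-code region; the arena size and the use of `close()` as the release call are assumptions for illustration.

```python
import mmap

# The startup phase gets its own arena; one release call at the end of startup
# returns the whole region, mirroring the abstract's memory release call command.
STARTUP_ARENA_BYTES = 1 << 20                 # assumed size of the startup region

arena = mmap.mmap(-1, STARTUP_ARENA_BYTES)    # region allocated to startup code
# ... startup-phase work would use `arena` here ...
arena.close()                                 # the release call: memory is freed
```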
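
The pool selection of 20110153975 reduces to a small dispatch step; the pool names and the `priority_of` mapping below are invented for illustration.

```python
# Assumed shape of the pool-selection step: an unallocated page is placed in a
# pool chosen by its priority indicator, then "written" (appended) there.
class PagingSpaceDevice:
    def __init__(self, pool_names):
        self.pools = {name: [] for name in pool_names}

    def page_out(self, page_id, allocated, priority_of):
        if not allocated.get(page_id):        # no physical space allocated yet
            pool = priority_of(page_id)       # page priority indicator -> pool
            self.pools[pool].append(page_id)  # allocate in that pool, then write
            allocated[page_id] = True

device = PagingSpaceDevice(["high", "medium", "low"])
device.page_out("page-7", allocated={}, priority_of=lambda p: "high")
print(device.pools["high"])   # ['page-7']
```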
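
The two-criteria admission test of 20110161539 might look like the following; criterion 1 is deliberately simplified to "the head waiter is still waking and the queue is non-empty", since the abstract leaves it unspecified, while criterion 2 is the abstract's own rule that the queue has not shrunk since a prior sample.

```python
# Assumed admission test for the lock hand-off window.
class OpportunisticLock:
    def __init__(self):
        self.queue_len_then = 0   # queue length at the prior sample

    def admit_newcomer(self, queue_len_now, head_still_waking):
        criterion1 = head_still_waking and queue_len_now > 0
        criterion2 = queue_len_now >= self.queue_len_then   # queue not decreasing
        self.queue_len_then = queue_len_now
        return criterion1 and criterion2   # newcomer may take the lock first

lock = OpportunisticLock()
print(lock.admit_newcomer(queue_len_now=3, head_still_waking=True))   # True
```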
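
A sketch of the adaptive delay in 20110246800, assuming multiplicative adjustment and microsecond units; the abstract fixes only the direction of the adjustment (more IDLE dispatches, shorter delay), not its magnitude or bounds.

```python
# Assumed multiplicative controller for the snooze-entry delay.
class SnoozeDelayGovernor:
    def __init__(self, delay_us=100.0, lo=10.0, hi=1000.0):
        self.delay_us = delay_us
        self.lo, self.hi = lo, hi
        self.prev_rate = 0.0

    def on_interval(self, idle_dispatches, clock_intervals):
        rate = idle_dispatches / clock_intervals    # IDLE dispatches per interval
        if rate > self.prev_rate:                   # idling more often:
            self.delay_us = max(self.lo, self.delay_us / 2)   # snooze sooner
        elif rate < self.prev_rate:                 # idling less often:
            self.delay_us = min(self.hi, self.delay_us * 2)   # wait longer
        self.prev_rate = rate
        return self.delay_us

gov = SnoozeDelayGovernor()
print(gov.on_interval(idle_dispatches=50, clock_intervals=10))   # rate rose: 50.0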
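
The handle indirection and scan/callback cycle of 20120005448 (republished as 20130290666) in miniature; `PinnedPagePool`, the per-page dictionaries, and the `ask_owner` callback are stand-ins for the inode machinery, not its actual interfaces.

```python
# Pages live behind opaque handles; a scan reads-and-clears reference bits,
# and page owners are asked via callback before anything is reclaimed.
class PinnedPagePool:
    def __init__(self):
        self.pages = {}        # handle -> {"referenced": bool}
        self.next_handle = 0

    def pin(self):
        handle = self.next_handle
        self.next_handle += 1
        self.pages[handle] = {"referenced": True}
        return handle          # callers hold the handle, never the page itself

    def scan(self):
        """Return handles not referenced since the last scan; clear all bits."""
        cold = [h for h, p in self.pages.items() if not p["referenced"]]
        for p in self.pages.values():
            p["referenced"] = False
        return cold

    def reclaim(self, cold, ask_owner):
        for h in cold:
            if ask_owner(h):   # callback: may this page be unpinned?
                del self.pages[h]   # deallocate (or compact) the page

pool = PinnedPagePool()
h = pool.pin()
pool.scan()                                               # clears the reference bit
pool.reclaim(pool.scan(), ask_owner=lambda handle: True)  # second scan finds it cold
```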
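
The two-entity protocol of 20120072676 is essentially a registration call followed by a skip during compression; the `MemoryCompressor` API below is an assumed shape for it.

```python
# The application (first entity) registers a shared region as exempt, and the
# compressor (second entity) skips it when walking the candidate regions.
class MemoryCompressor:
    def __init__(self):
        self.exempt = set()    # region ids that must remain uncompressed

    def keep_uncompressed(self, region_id):
        self.exempt.add(region_id)           # request from the first entity

    def compress_all(self, regions, compress):
        for region_id in regions:
            if region_id not in self.exempt:
                compress(region_id)          # exempt shared regions are skipped

compressor = MemoryCompressor()
compressor.keep_uncompressed("shared-heap")
compressor.compress_all(["shared-heap", "cold-arena"], compress=print)  # cold-arena
```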
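
A sketch of the reserved-capacity computation in 20120204186 and 20120210331, assuming the target is the maximum of the latest short-term, long-term, and current samples plus a 10% safety margin; the abstracts do not fix the combining rule.

```python
# Assumed combining rule for the next short-term-interval capacity target.
class ResourceManager:
    def __init__(self, margin=0.10):
        self.sti, self.lti = [], []   # short/long term utilization histories
        self.margin = margin

    def record(self, sti_util, lti_util):
        self.sti.append(sti_util)
        self.lti.append(lti_util)

    def target_for_next_sti(self, current_util):
        basis = max(self.sti[-1], self.lti[-1], current_util)
        return basis * (1 + self.margin)   # hypervisor sizes virtual CPUs to this

rm = ResourceManager()
rm.record(sti_util=0.6, lti_util=0.5)
print(rm.target_for_next_sti(current_util=0.7))   # ~0.77
```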
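
The segregation pass of 20130179616, reduced to a threshold test, pool selection, and placement; the interrupt threshold and the round-robin placement over the pool are assumptions.

```python
class InterruptSegregator:
    """Migrate interrupt-heavy processes onto a sequestration pool of virtual
    processors, toward which interrupt delivery is then biased."""

    MIN_INTERRUPTS = 1000   # assumed minimum count per observation window

    def rebalance(self, interrupt_counts, all_vps, pool_size):
        hot = [p for p, n in interrupt_counts.items() if n >= self.MIN_INTERRUPTS]
        pool = all_vps[:pool_size]   # sequestration pool of virtual processors
        placement = {p: pool[i % pool_size] for i, p in enumerate(hot)}
        return placement, pool       # interrupts are then routed toward `pool`

seg = InterruptSegregator()
print(seg.rebalance({"net": 5000, "batch": 3},
                    all_vps=["vp0", "vp1", "vp2", "vp3"], pool_size=2))
```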
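
The instruction message of 20130227549 as a pure function of utilization; the thresholds and message format are invented, since the abstract says only that the message is "at least partially determined by" the current utilization.

```python
# Assumed mapping from physical-pool utilization to a guest-OS instruction to
# fold or unfold virtual processors.
def instruction_for_guest(pool_utilization, enabled_vps):
    if pool_utilization > 0.9 and enabled_vps > 1:
        return {"action": "disable_vp", "count": 1}   # shed load on the pool
    if pool_utilization < 0.5:
        return {"action": "enable_vp", "count": 1}    # headroom is available
    return {"action": "hold"}

print(instruction_for_guest(pool_utilization=0.95, enabled_vps=4))
```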
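
A rough analogue of the global/local spin split in 20130254775 and 20140101662; Python has no hardware-thread priorities, so a non-yielding spin loop and a `threading.Event` stand in for the low-priority global spin and the per-CPU local lock.

```python
import threading
import time

glock = threading.Lock()      # stands in for the global (first) lock
promote = threading.Event()   # stands in for the next waiter's local (second) lock

def hot_waiter():
    while not glock.acquire(blocking=False):
        pass                  # spin on the global lock without yielding cycles
    promote.set()             # hand the "hot" role to the local spinner
    glock.release()

def cold_waiter():
    while not promote.wait(timeout=0.001):
        time.sleep(0)         # local spin that yields cycles (to the hypervisor)
    # ...would now spin on the global lock at low priority...

t1 = threading.Thread(target=hot_waiter)
t2 = threading.Thread(target=cold_waiter)
t1.start(); t2.start(); t1.join(); t2.join()
```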
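
The history-buffer check of 20140149672 and 20140149675, modeled on a small cache; note that if every candidate page has repage history, this sketch simply lets the cache grow, a case the abstracts do not address.

```python
from collections import OrderedDict

class ReleaseBehindCache:
    """Release a prior page to make room, unless the history buffer says that
    page previously took a repage fault (it was released once and came back)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # page id -> data, oldest first
        self.history = {}            # page id -> True if a repage fault occurred

    def access(self, page_id):
        if page_id in self.cache:
            return                            # plain hit, nothing to release
        # Faulting on a page already in the history buffer is a repage fault.
        self.history[page_id] = page_id in self.history
        if len(self.cache) >= self.capacity:
            for victim in list(self.cache):   # candidate previous pages
                if not self.history.get(victim):
                    self.cache.pop(victim)    # no repage history: safe to release
                    break
        self.cache[page_id] = object()

cache = ReleaseBehindCache(capacity=2)
for p in ["a", "b", "c", "a"]:
    cache.access(p)   # "a" repages; its history entry now protects it
```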