Patent application number | Description | Published |
20080256324 | IMPLEMENTING A FAST FILE SYNCHRONIZATION IN A DATA PROCESSING SYSTEM - A system and method for implementing a fast file synchronization in a data processing system. A memory management unit divides a file stored in system memory into a collection of data block groups. In response to a master (e.g., processing unit, peripheral, etc.) modifying a first data block group among the collection of data block groups, the memory management unit writes a first data block group number associated with the first data block group to system memory. In response to a master modifying a second data block group, the memory management unit writes the first data block group to a hard disk drive and writes a second data block group number associated with the second data block group to system memory. In response to a request to update modified data block groups of the file stored in the system memory to the hard disk drive, the memory management unit writes the second data block group to the hard disk drive. | 10-16-2008 |
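As a rough illustration of the bookkeeping this abstract describes (record the number of the most recently modified block group, flush the previously recorded group when a different group is dirtied, and flush the remaining group on an explicit sync request), the following C sketch may help. All names, such as sync_state and bg_flush, are invented for the example and are not taken from the application.

```c
/* Minimal sketch of the dirty-block-group bookkeeping described above.
 * All names (sync_state, bg_flush, etc.) are illustrative, not from the patent. */
#include <stdio.h>

#define NO_GROUP -1

struct sync_state {
    int last_dirty_group;               /* block group number recorded in system memory */
};

/* Hypothetical helper that writes one block group out to the hard disk drive. */
static void bg_flush(int group) { printf("flushing block group %d to disk\n", group); }

/* Called when a master modifies a block group of the file. */
static void on_group_modified(struct sync_state *s, int group)
{
    if (s->last_dirty_group != NO_GROUP && s->last_dirty_group != group)
        bg_flush(s->last_dirty_group);  /* write the previously modified group to disk */
    s->last_dirty_group = group;        /* record the newly modified group number */
}

/* Called on a request to synchronize the file with the disk. */
static void on_sync_request(struct sync_state *s)
{
    if (s->last_dirty_group != NO_GROUP) {
        bg_flush(s->last_dirty_group);  /* only the remaining dirty group is written */
        s->last_dirty_group = NO_GROUP;
    }
}

int main(void)
{
    struct sync_state s = { NO_GROUP };
    on_group_modified(&s, 3);           /* group 3 recorded, nothing flushed yet */
    on_group_modified(&s, 7);           /* group 3 flushed, group 7 recorded */
    on_sync_request(&s);                /* group 7 flushed on the explicit sync */
    return 0;
}
```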
20090323527 | Reducing Retransmission of Out of Order Packets - Methods and arrangements of network communications are discussed. Embodiments include transformations, code, state machines or other logic to determine an average rate of duplicate packets per connection for packets received by a node over an interface. The embodiment may involve selecting a connection from the connections established over the interface, determining that a rate of duplicate packets for the selected connection exceeds the average rate of duplicate packets by a threshold percentage, and sending a message to a transmitter of the duplicate packets over the connection to increase a timeout interval to retransmit packets. Another embodiment may provide an apparatus for increasing a timeout interval to retransmit packets. Still another embodiment may provide a computer program product for increasing a timeout interval to retransmit packets. | 12-31-2009 |
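The core of this abstract is a simple per-interface comparison: compute the average duplicate-packet rate over the connections, then flag any connection whose rate exceeds that average by a threshold percentage and ask its transmitter to lengthen its retransmission timeout. A minimal C sketch of that check follows; the structures and the request_longer_rto helper are assumptions, not part of the application.

```c
/* Illustrative sketch (names are assumptions) of the per-connection duplicate-rate check. */
#include <stdio.h>
#include <stddef.h>

struct conn {
    int id;
    double dup_rate;                    /* duplicate packets per second on this connection */
};

/* Hypothetical stand-in for signalling the peer to lengthen its retransmission timeout. */
static void request_longer_rto(int conn_id)
{
    printf("asking transmitter on connection %d to increase its retransmit timeout\n", conn_id);
}

static void check_duplicates(struct conn *conns, size_t n, double threshold_pct)
{
    double avg = 0.0;
    for (size_t i = 0; i < n; i++)      /* average duplicate rate over the interface */
        avg += conns[i].dup_rate;
    avg /= (double)n;

    for (size_t i = 0; i < n; i++)      /* flag connections exceeding the average by the threshold */
        if (conns[i].dup_rate > avg * (1.0 + threshold_pct / 100.0))
            request_longer_rto(conns[i].id);
}

int main(void)
{
    struct conn conns[] = { {1, 0.2}, {2, 0.3}, {3, 4.5} };
    check_duplicates(conns, 3, 50.0);   /* connection 3 is well above the 50% threshold */
    return 0;
}
```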
20100031019 | SECURE APPLICATION ROUTING - Disclosed is a computer implemented method and apparatus to secure a routing path. A local node receives a request for secure route identification from an upstream node. Responsive to receiving a request for secure route identification, the local node transmits a local node security level and an authentication key to the upstream node. The local node determines, from a second-level downstream node, whether at least one downstream node is authentic and has a sufficient security level. The local node may then establish a socket to the upstream node. | 02-04-2010 |
20110041143 | AUTOMATIC CLOSURE OF A FILE OR A DEVICE IN A DATA PROCESSING SYSTEM - A mechanism is provided for automatically closing a file or a device. A service routine monitor monitors a request received from either an application that opened the file or a device driver that readied the device. The service routine monitor determines whether the file or the device has been accessed within a predetermined time interval. Responsive to the file or the device failing to be accessed within the predetermined time interval, the service routine monitor sends a call to the application that opened the file or to the higher level device driver that requested that the device driver ready the device. Responsive to a response from the application or the higher level device driver indicating that the use of the file or the device is no longer needed, the service routine monitor closes the file or quiesces the device. | 02-17-2011 |
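The mechanism reads as an idle-timeout monitor with a confirmation callback: if a file or device has not been accessed within the interval, ask the opener whether it still needs the resource, and close or quiesce it only on a negative answer. The C sketch below captures that flow under assumed names (monitored_file, still_needed); the real service routine monitor and callback interface are not specified in the abstract.

```c
/* Rough sketch of the idle-monitor logic; the callback protocol and names are assumptions. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct monitored_file {
    int fd;
    time_t last_access;                       /* updated on every read/write request seen */
    bool (*still_needed)(int fd);             /* callback into the application that opened it */
};

/* Hypothetical application callback: answers whether the file is still in use. */
static bool app_still_needed(int fd) { (void)fd; return false; }

static void monitor_tick(struct monitored_file *f, time_t now, double idle_limit_s)
{
    if (difftime(now, f->last_access) < idle_limit_s)
        return;                               /* accessed recently enough, nothing to do */
    if (!f->still_needed(f->fd)) {            /* ask the opener whether it is done with the file */
        printf("closing idle fd %d\n", f->fd);
        /* close(f->fd) would go here in a real service routine */
    }
}

int main(void)
{
    struct monitored_file f = { 42, time(NULL) - 600, app_still_needed };
    monitor_tick(&f, time(NULL), 300.0);      /* idle for 10 minutes against a 5-minute limit */
    return 0;
}
```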
20130232502 | METHODOLOGY FOR SECURE APPLICATION PARTITIONING ENABLEMENT - A computer implemented method, data processing system, and computer program product for configuring a partition with needed system resources to enable an application to run and process in a secure environment. Upon receiving a command to create a short-lived secure partition for a secure application, a short-lived secure partition is created in the data processing system. This short-lived secure partition is inaccessible to superusers or other applications. System resources comprising physical resources and virtual allocations of the physical resources are allocated to the short-lived secure partition. Hardware and software components needed to run the secure application are loaded into the short-lived secure partition. | 09-05-2013 |
20110113214 | INFORMATION HANDLING SYSTEM MEMORY MANAGEMENT - An information handling system (IHS) loads an application that may include startup code and steady state operation code. The IHS allocates one region of system memory to the startup code and another region of system memory to the steady state operation code. A programmer inserts a memory release call command at a location that marks the end of execution of the startup code. After executing the startup code, the operating system receives the memory release call command. In response to the memory release call command, the operating system releases or de-allocates the region of memory that the IHS previously assigned to the startup code. This makes the released memory available for use by code other than the startup code, such as other code pages and library pages. | 05-12-2011 |
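One way to picture the described release of the startup region is with ordinary POSIX mapping calls: reserve a region for startup-only code or data, run the startup phase out of it, then return the region to the operating system once the release call marks startup as finished. The sketch below uses mmap/munmap purely as an analogue; the patent's actual release command and region layout are not given in the abstract.

```c
/* A loose analogue of the "memory release call" using POSIX mmap/munmap; the patent's
 * actual call and region layout are not specified here, so this is only illustrative. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define STARTUP_REGION_SIZE (1 << 20)         /* 1 MiB region reserved for startup-only code/data */

static void run_startup(void *region) { (void)region; /* one-time initialization work */ }

int main(void)
{
    void *startup = mmap(NULL, STARTUP_REGION_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (startup == MAP_FAILED)
        return 1;

    run_startup(startup);                     /* startup code executes out of its own region */

    /* The "memory release call" marks the end of startup; here munmap returns the
     * region to the OS so other code pages and libraries can reuse it. */
    munmap(startup, STARTUP_REGION_SIZE);
    puts("startup region released");
    return 0;
}
```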
20110153975 | METHOD FOR PRIORITIZING VIRTUAL REAL MEMORY PAGING BASED ON DISK CAPABILITIES - A method manages memory paging operations. Responsive to a request to page out a memory page from a shared memory pool, the method identifies whether physical space within one of a number of paging space devices has been allocated for the memory page. If physical space within the paging space device has not been allocated for the memory page, a page priority indicator for the memory page is identified. The memory page is then allocated to one of a number of memory pools within one of the number of paging space devices. The memory page is allocated to one of the memory pools according to the page priority indicator of the memory page. The memory page is then written to the allocated memory pool. | 06-23-2011 |
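The paging decision described above amounts to indexing a pool inside the paging space device by the page's priority indicator before writing the page out. The following C sketch shows that selection step with an invented paging_device layout and priority classes; none of these names come from the application.

```c
/* Sketch of choosing a paging-space pool by page priority; pool layout and names are invented. */
#include <stdio.h>

enum page_priority { PRIO_HIGH, PRIO_MEDIUM, PRIO_LOW, PRIO_COUNT };

struct paging_device {
    const char *name;
    int free_slots[PRIO_COUNT];          /* one pool per priority class within the device */
};

static int page_out(struct paging_device *dev, int page_id, enum page_priority prio)
{
    if (dev->free_slots[prio] == 0)
        return -1;                        /* the pool for this priority is full */
    dev->free_slots[prio]--;
    printf("page %d written to %s pool %d\n", page_id, dev->name, (int)prio);
    return 0;
}

int main(void)
{
    struct paging_device dev = { "hdisk1", {4, 8, 16} };
    page_out(&dev, 1001, PRIO_HIGH);      /* high-priority page goes to its own pool */
    page_out(&dev, 1002, PRIO_LOW);
    return 0;
}
```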
20120005448 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage by providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access of the pinned and allocated pages by the user processes without use of the handles; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been recently accessed within a pre-determined period of time; requesting via a callback communication to each user process to determine which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating one or more pages corresponding to the page indicators. | 01-05-2012 |
20120072676 | SELECTIVE MEMORY COMPRESSION FOR MULTI-THREADED APPLICATIONS - A method, system, and computer usable program product for selective memory compression for multi-threaded applications are provided in the illustrative embodiments. An identification of a memory region that is shared by a plurality of threads in an application is received at a first entity in a data processing system. A request for a second entity in the data processing system to keep the memory region uncompressed when compressing at least one of a plurality of memory regions that comprise the memory region is provided from the first entity to the second entity. | 03-22-2012 |
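In outline, one entity that knows a region is shared by many threads asks the compressing entity to exempt that region, and the compression pass then skips it. The C sketch below models that request with an invented region structure and a request_keep_uncompressed helper; the actual interface between the two entities is not described in the abstract.

```c
/* Sketch of registering a shared region as compression-exempt; the request interface is assumed. */
#include <stdio.h>
#include <stddef.h>

struct region { void *base; size_t len; int keep_uncompressed; };

/* Hypothetical request from the first entity (the component that knows the region is
 * shared by many threads) to the second entity that performs compression. */
static void request_keep_uncompressed(struct region *r)
{
    r->keep_uncompressed = 1;
    printf("region %p (%zu bytes) marked uncompressed\n", r->base, r->len);
}

static void compress_pass(struct region *regions, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!regions[i].keep_uncompressed)
            printf("compressing region %p\n", regions[i].base);   /* shared region is skipped */
}

int main(void)
{
    static char shared_buf[4096], private_buf[4096];
    struct region regions[] = { { shared_buf, sizeof shared_buf, 0 },
                                { private_buf, sizeof private_buf, 0 } };
    request_keep_uncompressed(&regions[0]);   /* heavily shared region stays uncompressed */
    compress_pass(regions, 2);
    return 0;
}
```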
20120260257 | SCHEDULING THREADS IN MULTIPROCESSOR COMPUTER - A computer program product for scheduling threads in a multiprocessor computer comprises computer program instructions configured to select a thread in a ready queue to be dispatched to a processor and determine whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, the computer program instructions are configured to select a processor, set a current processor priority register of the selected processor to least favored, and dispatch the thread from the ready queue to the selected processor. | 10-11-2012 |
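The dispatch rule here is small: if the thread's control block has the interrupt mask flag set, write the least-favored value into the chosen processor's current processor priority register (CPPR) before dispatching, so external interrupts are steered away from it. A C sketch of that decision follows; the CPPR value, field names, and helper are illustrative assumptions.

```c
/* Sketch of the dispatch decision; TCB fields, CPPR values, and helpers are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define CPPR_LEAST_FAVORED 0xFF           /* value associated with the highest interrupt priority */

struct thread_control_block {
    int tid;
    bool interrupt_mask_flag;             /* set when the thread should not be interrupted */
};

struct processor {
    int id;
    unsigned char cppr;                   /* current processor priority register */
};

static void dispatch(struct thread_control_block *tcb, struct processor *cpu)
{
    if (tcb->interrupt_mask_flag)
        cpu->cppr = CPPR_LEAST_FAVORED;   /* shield the selected processor from external interrupts */
    printf("thread %d dispatched to processor %d (cppr=0x%02X)\n",
           tcb->tid, cpu->id, (unsigned)cpu->cppr);
}

int main(void)
{
    struct thread_control_block t = { 7, true };
    struct processor cpu = { 2, 0x00 };
    dispatch(&t, &cpu);
    return 0;
}
```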
20130179616 | Partitioned Shared Processor Interrupt-intensive Task Segregator - Interrupt-intensive and interrupt-driven processes are managed among a plurality of virtual processors, wherein each virtual processor is associated with a physical processor, wherein each physical processor may be associated with a plurality of virtual processors, and wherein each virtual processor is tasked to execute one or more of the processes, by determining which of a plurality of the processes executing among a plurality of virtual processors are being or have been driven by at least a minimum count of interrupts over a period of operational time; selecting a subset of the plurality of virtual processors to form a sequestration pool; migrating the interrupt-intensive processes on to the sequestration pool of virtual processors; and commanding by a computer a bias in delivery or routing of the interrupts to the sequestration pool of virtual processors. | 07-11-2013 |
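Stripped down, the scheme counts interrupts per process over a window, and any process above a minimum count is migrated onto a small sequestration pool of virtual processors, to which interrupt delivery is then biased. The sketch below shows only the counting-and-migration part with invented structures; how the bias in interrupt routing is commanded is not covered.

```c
/* Sketch of sorting processes into a sequestration pool by interrupt count; all names assumed. */
#include <stdio.h>
#include <stddef.h>

struct proc {
    int pid;
    unsigned long interrupts;             /* interrupts driving this process over the sample window */
    int vcpu;                             /* virtual processor the process currently runs on */
};

static void segregate(struct proc *procs, size_t n, unsigned long min_count,
                      const int *pool, size_t pool_len)
{
    size_t next = 0;
    for (size_t i = 0; i < n; i++) {
        if (procs[i].interrupts < min_count)
            continue;                     /* not interrupt-intensive, leave it where it is */
        procs[i].vcpu = pool[next % pool_len];   /* migrate onto a sequestered virtual processor */
        next++;
        printf("pid %d moved to sequestered vcpu %d\n", procs[i].pid, procs[i].vcpu);
    }
}

int main(void)
{
    struct proc procs[] = { {100, 12, 0}, {101, 90000, 1}, {102, 85000, 2} };
    int pool[] = { 6, 7 };                /* virtual processors reserved for interrupt-heavy work */
    segregate(procs, 3, 50000, pool, 2);
    return 0;
}
```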
20130227549 | MANAGING UTILIZATION OF PHYSICAL PROCESSORS IN A SHARED PROCESSOR POOL - Systems, methods and computer program products may provide for managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors. | 08-29-2013 |
20130290666 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage by providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access of the pinned and allocated pages by the user processes without use of the handles; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been recently accessed within a pre-determined period of time; requesting via a callback communication to each user process to determine which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating one or more pages corresponding to the page indicators. | 10-31-2013 |
20140149672 | SELECTIVE RELEASE-BEHIND OF PAGES BASED ON REPAGING HISTORY IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that entry. If so, then the release-behind component does not discard the candidate previous page from the cache. Otherwise, the release-behind component discards the candidate previous page. | 05-29-2014 |
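The release-behind decision can be read as a lookup in a small repage-history table: if the candidate page previously caused a repage fault, keep it in the file cache; otherwise release it to make room for the current access. The C sketch below assumes a fixed-size history buffer and invented field names.

```c
/* Sketch of the repage-history check before discarding a cached page; structures are invented. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define HISTORY_SLOTS 8

struct history_entry {
    long page_id;
    bool repaged;                         /* a repage fault was seen for this page */
};

struct history_buffer {
    struct history_entry slots[HISTORY_SLOTS];
};

/* Returns true if the candidate page previously caused a repage fault. */
static bool had_repage_fault(const struct history_buffer *h, long page_id)
{
    for (int i = 0; i < HISTORY_SLOTS; i++)
        if (h->slots[i].page_id == page_id)
            return h->slots[i].repaged;
    return false;
}

static void maybe_release(const struct history_buffer *h, long candidate_page)
{
    if (had_repage_fault(h, candidate_page))
        printf("page %ld kept in cache (repage history)\n", candidate_page);
    else
        printf("page %ld released behind\n", candidate_page);
}

int main(void)
{
    struct history_buffer h;
    memset(&h, 0, sizeof h);
    h.slots[0] = (struct history_entry){ 501, true };
    maybe_release(&h, 501);               /* kept: it was repaged before */
    maybe_release(&h, 502);               /* released: no repage history */
    return 0;
}
```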
20090106762 | Scheduling Threads In A Multiprocessor Computer - Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register. | 04-23-2009 |
20120096240 | Application Performance with Support for Re-Initiating Unconfirmed Software-Initiated Threads in Hardware - A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine. | 04-19-2012 |
20120180052 | Application Performance with Support for Re-Initiating Unconfirmed Software-Initiated Threads in Hardware - A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine. | 07-12-2012 |
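Both of these related applications describe the same fallback path: try to start the prefetch stream on the requesting core, and if no prefetch engine is free, set the dispatch-fault bit in the SPR so the hypervisor and OS can re-initiate the stream on another core that has a free engine. The C sketch below models that path; the SPR bit position, core bookkeeping, and helper names are assumptions for illustration.

```c
/* Sketch of the prefetch-stream fallback path; SPR bit, core layout, and names are assumptions. */
#include <stdio.h>

struct core {
    int id;
    int free_prefetch_engines;            /* prefetch engines currently available on this core */
    unsigned int spr;                     /* special purpose register with the dispatch-fault bit */
};

#define SPR_PREFETCH_DISPATCH_FAULT (1u << 0)

static void start_prefetch_stream(struct core *first, struct core *second)
{
    if (first->free_prefetch_engines > 0) {
        first->free_prefetch_engines--;   /* stream starts on the requesting core */
        printf("prefetch stream started on core %d\n", first->id);
        return;
    }
    first->spr |= SPR_PREFETCH_DISPATCH_FAULT;    /* fault bit set; hypervisor/OS takes over */
    if (second->free_prefetch_engines > 0) {
        second->free_prefetch_engines--;
        printf("prefetch stream migrated to core %d\n", second->id);  /* OS re-initiates it there */
    } else {
        printf("no prefetch engine available on either core\n");
    }
}

int main(void)
{
    struct core c0 = { 0, 0, 0 };         /* first core has no free prefetch engines */
    struct core c1 = { 1, 2, 0 };
    start_prefetch_stream(&c0, &c1);
    return 0;
}
```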
20140149675 | SELECTIVE RELEASE-BEHIND OF PAGES BASED ON REPAGING HISTORY IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that entry. If so, then the release-behind component does not discard the candidate previous page from the cache. Otherwise, the release-behind component discards the candidate previous page. | 05-29-2014 |