Patent application number | Description | Published |
--- | --- | --- |
20080244209 | METHODS AND DEVICES FOR DETERMINING QUALITY OF SERVICES OF STORAGE SYSTEMS - Methods and systems for allowing access to computer storage systems. Multiple requests from multiple applications can be received and processed efficiently to allow traffic from multiple customers to access the storage system concurrently. | 10-02-2008 |
20100180255 | PROGRAMMABLE FRAMEWORK FOR AUTOMATIC TUNING OF SOFTWARE APPLICATIONS - A target application is automatically tuned. A list of solutions for identified performance bottlenecks in a target application is retrieved from a storage device. A plurality of modules is executed to compute specific parameters for solutions contained in the list of solutions. A list of modification commands associated with specific parameters computed by the plurality of modules is generated. The list of modification commands associated with the specific parameters is appended to a command sequence list. The list of modification commands is implemented in the target application. Specific source code regions corresponding to the identified performance bottlenecks in the target application are automatically tuned using the implemented list of modification commands. Then, the tuned target application is stored in the storage device. | 07-15-2010 |
20110093638 | HARDWARE MULTI-THREADING CO-SCHEDULING FOR PARALLEL PROCESSING SYSTEMS - A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core. | 04-21-2011 |
20110247005 | Methods and Apparatus for Resource Capacity Evaluation in a System of Virtual Containers - Methods and apparatus are provided for evaluating potential resource capacity in a system where there is elasticity and competition between a plurality of containers. A dynamic potential capacity is determined for at least one container in a plurality of containers competing for a total capacity of a larger container. A current utilization by each of the plurality of competing containers is obtained, and an equilibrium capacity is determined for each of the competing containers. The equilibrium capacity indicates a capacity that the corresponding container is entitled to. The dynamic potential capacity is determined based on the total capacity, a comparison of one or more of the current utilizations to one or more of the corresponding equilibrium capacities and a relative resource weight of each of the plurality of competing containers. The dynamic potential capacity is optionally recalculated when the set of plurality of containers is changed or after the assignment of each work element. | 10-06-2011 |
20120060171 | Scheduling a Parallel Job in a System of Virtual Containers - Methods and apparatus are provided for scheduling parallel jobs in a system of virtual containers. At least one parallel job is assigned to a plurality of containers competing for a total capacity of a larger container, wherein the at least one parallel job comprises a plurality of tasks. The assignment method comprises determining a current utilization and a potential free capacity for each of the plurality of competing containers; and assigning the tasks to one of the plurality of containers based on the potential free capacities and at least one predefined scheduling policy. The predefined scheduling policy may comprise, for example, one or more of load balancing, server consolidation, maximizing the current utilizations, minimizing a response time of the parallel job and satisfying quality of service requirements. The load balancing can be achieved, for example, by assigning a task to a container having a highest potential free capacity. | 03-08-2012 |
20120089794 | METHODS AND DEVICES FOR DETERMINING QUALITY OF SERVICES OF STORAGE SYSTEMS - Methods and systems for allowing access to computer storage systems. Multiple requests from multiple applications can be received and processed efficiently to allow traffic from multiple customers to access the storage system concurrently. | 04-12-2012 |
20120254305 | FACILITATING MEETING INVITATION EXTENSION - Enabling meeting extensions using an electronic meeting scheduling system may include enabling a second user, invited to a meeting by a first user via the electronic meeting scheduling system, to invite one or more third users to the meeting; and applying one or more meeting attributes set by the second user to said one or more third users. | 10-04-2012 |
20130024872 | Scheduling a Parallel Job in a System of Virtual Containers - Methods and apparatus are provided for scheduling parallel jobs in a system of virtual containers. At least one parallel job is assigned to a plurality of containers competing for a total capacity of a larger container, wherein the at least one parallel job comprises a plurality of tasks. The assignment method comprises determining a current utilization and a potential free capacity for each of the plurality of competing containers; and assigning the tasks to one of the plurality of containers based on the potential free capacities and at least one predefined scheduling policy. The predefined scheduling policy may comprise, for example, one or more of load balancing, server consolidation, maximizing the current utilizations, minimizing a response time of the parallel job and satisfying quality of service requirements. The load balancing can be achieved, for example, by assigning a task to a container having a highest potential free capacity. | 01-24-2013 |
20130103409 | PROVIDING eFOLIOS - Providing an electronic itemized list of purchases made by a user, in one aspect, may include receiving data associated with the user and one or more purchases made by the user, storing the data in an itemized purchase database, enabling access to the data, and providing an itemized list of purchases based on the data in accordance with one or more query criteria. | 04-25-2013 |
20130103594 | PROVIDING eFOLIOS - Providing an electronic itemized list of purchases made by a user, in one aspect, may include receiving data associated with the user and one or more purchases made by the user, storing the data in an itemized purchase database, enabling access to the data, and providing an itemized list of purchases based on the data in accordance with one or more query criteria. | 04-25-2013 |
20130235992 | PREFERENTIAL EXECUTION OF METHOD CALLS IN HYBRID SYSTEMS - An affinity-based preferential call technique, in one aspect, may improve performance of distributed applications in a hybrid system having heterogeneous platforms. A segment of code in a program being executed on a processor may be intercepted or trapped at runtime. A platform is selected in the hybrid system for executing said segment of code, the platform being determined to run the segment of code with the best efficiency among a plurality of platforms in the hybrid system. The segment of code is then dynamically executed on the selected platform. | 09-12-2013 |
20130239128 | PREFERENTIAL EXECUTION OF METHOD CALLS IN HYBRID SYSTEMS - An affinity-based preferential call technique, in one aspect, may improve performance of distributed applications in a hybrid system having heterogeneous platforms. A segment of code in a program being executed on a processor may be intercepted or trapped at runtime. A platform is selected in the hybrid system for executing said segment of code, the platform being determined to run the segment of code with the best efficiency among a plurality of platforms in the hybrid system. The segment of code is then dynamically executed on the selected platform. | 09-12-2013 |
20130263097 | IDENTIFICATION OF LOCALIZABLE FUNCTION CALLS - Detecting localizable native methods may include statically analyzing a native binary file of a native method. For each function call invoked in the native binary, it is checked whether the resources accessed through the function call are locally available. If all resources accessed through the native method are locally available, the method is annotated as localizable. | 10-03-2013 |
20130263101 | IDENTIFICATION OF LOCALIZABLE FUNCTION CALLS - Detecting localizable native methods may include statically analyzing a native binary file of a native method. For each function call invoked in the native binary, it is checked whether the resources accessed through the function call are locally available. If all resources accessed through the native method are locally available, the method is annotated as localizable. | 10-03-2013 |
20130275988 | HARDWARE MULTI-THREADING CO-SCHEDULING FOR PARALLEL PROCESSING SYSTEMS - A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core. | 10-17-2013 |
20140068572 | JAVA NATIVE INTERFACE ARRAY HANDLING IN A DISTRIBUTED JAVA VIRTUAL MACHINE - A method for executing native code in a distributed Java Virtual Machine (JVM) is disclosed herein. The method may include receiving, in a first thread executing in a remote execution container, a first native code-generated call, such as a Java Native Interface (JNI) call, to a second thread, the first call including a first array write request. The first call may be stored in an instruction cache and bundled with a second native code-generated call and sent to the second thread. The calls are unbundled and executed in the second thread. An opaque handle to an array returned by the second call is bundled with corresponding array data and returned to the first thread. The array data of the bundle is stored in a data cache and retrieved in response to requests for the array data addressed to the second thread. A corresponding computer program product is also disclosed. | 03-06-2014 |
20140068579 | JAVA NATIVE INTERFACE ARRAY HANDLING IN A DISTRIBUTED JAVA VIRTUAL MACHINE - A method for executing native code in a distributed Java Virtual Machine (JVM) is disclosed herein. The method may include receiving, in a first thread executing in a remote execution container, a first native code-generated call, such as a Java Native Interface (JNI) call, to a second thread, the first call including a first array write request. The first call may be stored in an instruction cache and bundled with a second native code-generated call and sent to the second thread. The calls are unbundled and executed in the second thread. An opaque handle to an array returned by the second call is bundled with corresponding array data and returned to the first thread. The array data of the bundle is stored in a data cache and retrieved in response to requests for the array data addressed to the second thread. A corresponding computer program product is also disclosed. | 03-06-2014 |
20140189171 | OPTIMIZATION OF NATIVE BUFFER ACCESSES IN JAVA APPLICATIONS ON HYBRID SYSTEMS - Managing buffers in a hybrid system, in one aspect, may comprise selecting a first buffer management method from a plurality of buffer management methods; capturing statistics associated with access to the buffer in the hybrid system running under the first buffer management method; analyzing the captured statistics; identifying a second buffer management method based on the analyzed captured statistics; determining whether the second buffer management method is more optimal than the first buffer management method; in response to determining that the second buffer management method is more optimal than the first buffer management method, invoking the second buffer management method; and repeating the capturing, the analyzing, the identifying and the determining. | 07-03-2014 |
20140189262 | OPTIMIZATION OF NATIVE BUFFER ACCESSES IN JAVA APPLICATIONS ON HYBRID SYSTEMS - Managing buffers in a hybrid system, in one aspect, may comprise selecting a first buffer management method from a plurality of buffer management methods; capturing statistics associated with access to the buffer in the hybrid system running under the first buffer management method; analyzing the captured statistics; identifying a second buffer management method based on the analyzed captured statistics; determining whether the second buffer management method is more optimal than the first buffer management method; in response to determining that the second buffer management method is more optimal than the first buffer management method, invoking the second buffer management method; and repeating the capturing, the analyzing, the identifying and the determining. | 07-03-2014 |
20140201302 | METHOD, APPARATUS AND COMPUTER PROGRAMS PROVIDING CLUSTER-WIDE PAGE MANAGEMENT - An exemplary method in accordance with embodiments of this invention includes, at a virtual machine that forms a part of a cluster of virtual machines, computing a key for an instance of a memory page that is to be swapped out to a shared memory cache that is accessible by all virtual machines of the cluster of virtual machines; determining if the computed key is already present in a global hash map that is accessible by all virtual machines of the cluster of virtual machines; and only if it is determined that the computed key is not already present in the global hash map, storing the computed key in the global hash map and the instance of the memory page in the shared memory cache. | 07-17-2014 |
20140316701 | CONTROL SYSTEM FOR INDICATING IF PEOPLE CAN REACH LOCATIONS THAT SATISFY A PREDETERMINED SET OF CONDITIONS AND REQUIREMENTS - Managing routes to meet one or more predetermined conditions, in one aspect, may comprise receiving user information associated with a user via a user's device. Based on the user information, at least a target location to where the user is traveling may be determined. Path information associated with one or more intermediary locations leading to the target location may be received. The path information may be received automatically from one or more sensors installed at the respective intermediary locations for detecting the path information. A route strategy that meets one or more conditions may be estimated by analyzing the user information and the path information. The user information may be obtained automatically from one or more of social network profile data associated with the user, electronic calendar data associated with the user, or historical data associated with the user stored in a user profile database. | 10-23-2014 |
20140316702 | CONTROL SYSTEM FOR INDICATING IF PEOPLE CAN REACH LOCATIONS THAT SATISFY A PREDETERMINED SET OF CONDITIONS AND REQUIREMENTS - Managing routes to meet one or more predetermined conditions, in one aspect, may comprise receiving user information associated with a user via a user's device. Based on the user information, at least a target location to where the user is traveling may be determined. Path information associated with one or more intermediary locations leading to the target location may be received. The path information may be received automatically from one or more sensors installed at the respective intermediary locations for detecting the path information. A route strategy that meets one or more conditions may be estimated by analyzing the user information and the path information. The user information may be obtained automatically from one or more of social network profile data associated with the user, electronic calendar data associated with the user, or historical data associated with the user stored in a user profile database. | 10-23-2014 |
20150026687 | MONITORING SYSTEM NOISES IN PARALLEL COMPUTER SYSTEMS - Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition, creating a filtered set of system noise data. The filtered set of system noise data is then stored. | 01-22-2015 |
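The load-balancing policy described in the "Scheduling a Parallel Job in a System of Virtual Containers" abstracts (20120060171, 20130024872) — assign each task to the container with the highest potential free capacity — can be illustrated with a minimal sketch. All names here (`Container`, `potential_free_capacity`, `assign_tasks`) are illustrative, not taken from the filings, and the headroom calculation ignores the elasticity against the parent container that the applications describe.

```python
# Hypothetical sketch of greedy load balancing across virtual containers:
# each task is placed on the container with the highest potential free capacity.
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    capacity: float      # equilibrium capacity the container is entitled to
    utilization: float   # current utilization

    def potential_free_capacity(self) -> float:
        # Headroom under the container's own capacity; a real system would
        # also account for elasticity within the larger container's capacity.
        return max(self.capacity - self.utilization, 0.0)

def assign_tasks(tasks, containers):
    """Place each (task, demand) pair on the container with the most headroom."""
    placement = {}
    for task, demand in tasks:
        target = max(containers, key=Container.potential_free_capacity)
        placement[task] = target.name
        target.utilization += demand   # placing the task consumes headroom
    return placement

containers = [Container("c1", 4.0, 1.0), Container("c2", 4.0, 2.0)]
placement = assign_tasks([("t1", 1.0), ("t2", 1.0), ("t3", 1.0)], containers)
```

Because placing a task reduces the target's headroom before the next choice is made, successive tasks spread across containers instead of piling onto the initially least-loaded one.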
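The cluster-wide page management abstract (20140201302) describes a content-keyed deduplication scheme: a page being swapped out is hashed, and its data enters the shared cache only if no virtual machine in the cluster has already stored an identical page under the same key. A minimal single-process sketch, assuming a SHA-256 content hash as the key and plain dictionaries standing in for the cluster-shared structures (all names are illustrative):

```python
# Hypothetical sketch of content-keyed page deduplication on swap-out.
import hashlib

global_hash_map = {}   # key -> reference count (shared across the cluster)
shared_cache = {}      # key -> page bytes (shared memory cache)

def swap_out(page: bytes) -> str:
    """Swap a page out, storing its data only if the key is not yet present."""
    key = hashlib.sha256(page).hexdigest()
    if key not in global_hash_map:
        # First instance of this page content anywhere in the cluster.
        global_hash_map[key] = 0
        shared_cache[key] = page
    global_hash_map[key] += 1   # another VM now references this page
    return key                  # the VM keeps the key to swap the page back in

def swap_in(key: str) -> bytes:
    return shared_cache[key]

k1 = swap_out(b"A" * 4096)   # first VM swaps out a page
k2 = swap_out(b"A" * 4096)   # identical page from another VM: deduplicated
```

Identical pages from different virtual machines map to the same key, so the shared cache holds a single copy however many VMs swap that content out.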