Entries |
Document | Title | Date |
20080216089 | CHECKPOINT/RESUME/RESTART SAFE METHODS IN A DATA PROCESSING SYSTEM TO ESTABLISH, TO RESTORE AND TO RELEASE SHARED MEMORY REGIONS - A method is provided in which checkpointing operations are carried out in data processing systems running multiple processes which employ shared memory in a manner which preserves data coherence and integrity but which places no timing restrictions or constraints which require coordination of checkpointing operations. Data structures within local process memory and within shared memory provide the checkpoint operation with application level information concerning shared memory resources specific to at least two processes being checkpointed. Methods are provided for establishing, restoring and releasing shared memory regions that are accessed by multiple cooperating processes. | 09-04-2008 |
20080229325 | Method and apparatus to use unmapped cache for interprocess communication - A processing system features random access memory (RAM) and a processor. The processor features cache memory and multiple processing cores. The processor also features cache unmapping logic that can receive an unmap request calling for creation of a memory segment to be used as a shared memory segment to reside in the cache memory of the processor. The shared memory segment may facilitate interprocess communication (IPC). After receiving the unmap request, the cache unmapping logic may cause the processing system to omit the shared memory segment when writing data from the cache memory to the RAM. Other embodiments are described and claimed. | 09-18-2008 |
20080256552 | SYSTEM AND METHOD FOR A CICS APPLICATION USING A SAME PROGRAM ON A LOCAL SYSTEM AND A REMOTE SYSTEM - A system and method implemented in a Customer Information Control System (CICS) Application configured to process information residing on remote systems and display such information on a local system, using a same program residing on both the remote system(s) and the local system. The method includes, for example, sending programming functions of a local system with a request for information to a remote system. The method further includes processing the programming functions of the local system with the request for information on the remote system to obtain updated information from the remote system. The updated information is sent to the local system for display. | 10-16-2008 |
20080282255 | HIGHLY-AVAILABLE APPLICATION OPERATION METHOD AND SYSTEM, AND METHOD AND SYSTEM OF CHANGING APPLICATION VERSION ON LINE - By releasing the part of the execution environment that contains a leaked resource, a failure is avoided, while keeping the remaining part of the execution environment in memory prevents the performance degradation that results from a cold cache. This invention provides a highly available application operation method for replacing a first application (App | 11-13-2008 |
20080282256 | APPARATUS FOR INTER PARTITION COMMUNICATION WITHIN A LOGICAL PARTITIONED DATA PROCESSING SYSTEM - A method and structure for inter partition communication within a logical partitioned data processing system are provided. Each partition is configured with an inter partition communication area (IPCA) allocated from the partition's own system memory. The IPCAs of the partitions combined form a non-contiguous block of memory which is treated as a virtual shared resource (VSR). Access to the VSR is controlled by the hypervisor to maintain data security and coherency of the non-shared resources of a partition. Messages are written to and read from the VSR under a specific partition's IPCA for inter partition communication. No physical shared or non-shared resources are involved during inter partition communication, hence there is no extra overhead on those resources, achieving optimized performance during inter partition communication. | 11-13-2008 |
20080301703 | APPARATUS AND METHODS TO ACCESS INFORMATION ASSOCIATED WITH A PROCESS CONTROL SYSTEM - Example apparatus and methods to access information associated with a process control system are disclosed. A disclosed example method involves receiving a first user-defined parameter name to reference a first datum value in a first data source. The first one of a plurality of data source interfaces is enabled to access the first datum value in the first data source. The example method also involves enabling referencing the first datum value in the first data source based on the first user-defined parameter name. In addition, data source interface software is then generated to access the first datum value in the first data source in response to receiving a first data access request including the first user-defined parameter name. | 12-04-2008 |
20080307429 | APPARATUS, SYSTEM, AND METHOD FOR AUTONOMOUSLY MAINTAINING A SINGLE SYSTEM IMAGE IN A PARALLEL SYSTEMS COMPLEX - An apparatus, system, and method for autonomously maintaining a single system image in a parallel systems complex. A computer program product causes the relevant systems in a parallel systems complex to receive requests with a global scope from a user. The request is sent to each IMS system in the sysplex, and each IMS system applies the resource information and logs the resource information for recovery. The request is written to a shared medium which IMS sysplex members can access. When an IMS member is brought online, the IMS member restores status information first from local recovery logs. The IMS member then checks the information against the global medium to determine if requests were issued while the IMS was offline. If so, the IMS inherits the information in the global medium before processing work. An IMS added into the sysplex applies the information from the global medium before processing work. | 12-11-2008 |
20080313645 | Automatic Mutual Exclusion - An automatic mutual exclusion computer programming system is disclosed which allows a programmer to produce concurrent programming code that is synchronized by default without the need to write any synchronization code. The programmer creates asynchronous methods which are not permitted to make changes to shared memory that they cannot reverse, and which can execute concurrently with other asynchronous methods. Changes to shared memory are committed if no other thread has accessed shared memory while the asynchronous method executed. Changes are reversed and the asynchronous method is re-executed if another thread has made changes to shared memory. The resulting program executes in a serialized order. A blocking system method is disclosed which causes the asynchronous method to re-execute until the blocking method's predicate results in an appropriate value. A yield system call is disclosed which divides asynchronous methods into atomic fragments. When a yield method call is made, shared memory changes are committed if possible or reversed and the atomic fragment is re-executed. | 12-18-2008 |
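The commit-or-re-execute model described in entry 20080313645 resembles optimistic concurrency control. The sketch below is an illustrative toy (the `AtomicCell` class and its method names are invented for this example, not the patented system): an update runs against a snapshot, and commits only if no other thread changed the cell in the meantime; otherwise its changes are discarded and the update re-executes.

```python
import threading

class AtomicCell:
    """Toy optimistic-concurrency cell: a commit succeeds only if no other
    thread has modified the cell since the snapshot was taken; on conflict
    the update function is simply re-executed (a sketch of the abstract's
    commit-or-re-execute idea, not the patented mechanism)."""

    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()

    def update(self, fn):
        while True:
            # Take a consistent snapshot of value and version.
            with self._lock:
                seen_version, seen_value = self._version, self._value
            new_value = fn(seen_value)      # run the "asynchronous method"
            with self._lock:
                if self._version == seen_version:
                    # No conflicting writer appeared: commit the change.
                    self._value = new_value
                    self._version += 1
                    return new_value
                # Conflict: discard the result and re-execute fn.

cell = AtomicCell(0)
threads = [
    threading.Thread(target=lambda: [cell.update(lambda v: v + 1) for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell._value)  # 4000
```

Because every increment either commits against an unchanged snapshot or retries, the four threads' 4000 increments are all preserved, which is the serialized-order guarantee the abstract describes.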
20080313646 | STORAGE-DEVICE DISCOVERY PROTOCOL - A system and method may include a first storage device supporting a first set of functions and a second storage device supporting a second set of functions different from the first set of functions. The first and second storage devices may be configured to provide a common interface to enable discovery of the first and second sets by invoking a procedure call common to the first and second storage devices. An application, which may initially be unaware of the first and second sets, may be configured to invoke the procedure call and thereby discover the first and second sets, which may be provided as XML documents. The application may be further configured to communicate with the first storage device using the first set of functions, and communicate with the second storage device using the second set of functions using the information discovered by invoking the procedure call. | 12-18-2008 |
20090019453 | Automatically Arranging Objects in a Graphical Program Block Diagram - Various embodiments of a system and method for automatically arranging or positioning objects in a block diagram of a graphical program are described. A graphical programming development environment or other software application may be operable to automatically analyze a block diagram of a graphical program, e.g., in order to determine objects present in the block diagram, as well as their initial positions within the block diagram. The graphical programming development environment may then automatically re-position various ones of the objects in the block diagram. In various embodiments, the objects may be re-positioned so as to better organize the block diagram or enable a user to more easily view or understand the block diagram. | 01-15-2009 |
20090025008 | IPMI SYSTEMS AND ELECTRONIC APPARATUS USING THE SAME - Intelligent Platform Management Interface (IPMI) systems are disclosed, in which a baseboard management controller (BMC) is coupled to a first memory device and a server system, such that the BMC accesses the first memory device to provide a first set of functions and accesses a second memory device from the server system to provide a second set of functions. | 01-22-2009 |
20090025009 | Co-execution of objects from divergent runtime environments - Systems and methods are described that permit objects from runtime environments that are incompatible with one another to be co-executed on a computing machine. Depending on which object can service the request, a generic proxy may send the request to the proxy of the particular runtime environment associated with that object. The proxy may call the appropriate methods of the object therein to service the request. Each runtime environment may be isolated from other runtime environments by a container such that catastrophic errors in one runtime environment do not disrupt the execution of objects in another runtime environment. A first object in a first runtime environment may execute methods in a second object in a second runtime environment by invoking a proxy of the second runtime environment to call the methods of the second object. | 01-22-2009 |
20090037929 | Secure Inter-Process Communications Using Mandatory Access Control Security Policies - The present invention provides secure inter-process communications, and applications thereof. In an embodiment, a shared memory and a message queue are used to provide a secure communication channel between a first computer process and a second computer process. The shared memory provides a path for high-bandwidth data transfer in a forward direction. The message queue provides a path for controlling the data transfer in the forward direction, while limiting data transfer in the reverse direction. A third computer process creates the message queue that is used by the first computer process and the second computer process to control the passage of data. Access to the shared memory and the message queue are enforced using a mandatory access control security policy. | 02-05-2009 |
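The channel shape in entry 20090037929 (bulk data over shared memory in the forward direction, control over a message queue) can be sketched in-process. Here a `bytearray` stands in for the shared-memory segment and a `queue.Queue` for the message queue; the names and layout are illustrative assumptions, and no access-control enforcement is modeled.

```python
import queue
import threading

SHM_SIZE = 64
shm = bytearray(SHM_SIZE)      # stand-in for the shared-memory segment
ctrl = queue.Queue()           # stand-in for the control message queue

def sender(payload):
    shm[:len(payload)] = payload        # high-bandwidth forward path: bulk copy
    ctrl.put(("ready", len(payload)))   # control path: announce the length

def receiver():
    kind, length = ctrl.get()           # block until the sender signals
    assert kind == "ready"
    return bytes(shm[:length])          # read exactly the announced bytes

t = threading.Thread(target=sender, args=(b"hello ipc",))
t.start()
result = receiver()
t.join()
print(result)  # b'hello ipc'
```

The key property being illustrated is that the wide data path carries no control information: the receiver touches the shared region only after the narrow control channel tells it how much to read.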
20090070776 | SYSTEM AND METHOD TO IMPROVE MEMORY USAGE IN VIRTUAL MACHINES RUNNING AS HYPERVISOR GUESTS - A system and method to improve memory usage in virtual machines running as hypervisor guests. In accordance with an embodiment, the invention provides a system for changing the memory usage of a virtual machine on request from a hypervisor, comprising: a hypervisor; a guest operating system executing inside the hypervisor; a communication channel between the hypervisor and the guest operating system; a balloon driver in the guest operating system; a virtual machine for executing a software application; a communication channel between the balloon driver and the virtual machine; a memory space or heap for use by the virtual machine in storing software objects and pointers as part of the software application; and a compacting garbage collector for use by the virtual machine. | 03-12-2009 |
20090083756 | APPARATUS AND METHOD FOR COMMUNICATION INTERFACE BETWEEN APPLICATION PROGRAMS ON VIRTUAL MACHINES USING SHARED MEMORY - Provided are an apparatus and a method for a communication interface between application programs on virtual machines using a shared memory. The apparatus includes: a request dividing unit for checking a type of socket request information transmitted from a first socket application program on a first virtual machine through a socket interface and dividing the socket request information based on the checked information; a Transmission Control Protocol (TCP) socket connecting unit for setting up a TCP socket connection with a second socket application program on a second virtual machine based on the divided socket request information for control request; and a shared memory connecting unit for setting up a shared memory connection through the established TCP socket connection and transmitting/receiving data with the second socket application program through the established shared memory connection based on the divided socket request information for data transmission/reception. | 03-26-2009 |
20090083757 | COMPUTER SYSTEM AND PROGRAM PLUG-IN MANAGEMENT METHOD THEREOF - A computer system with a window-based OS (Operating System) and a program plug-in management method are provided. Even though a non-plug-in application installed in the computer system is originally incapable of plugging into a target program, through the method, a certain non-plug-in application can still be plugged into the desired target program. A plug-in management table is constructed in advance to store frame information of an assigned frame within the window of the target program and plug-in information of the non-plug-in application and the target program. The non-plug-in application will be plugged into the assigned frame within the window of the target program according to the plug-in management table. Once activated, the non-plug-in application and the target program will be executed in parallel so that the non-plug-in application will remain in the window of the target program without shrinking into a bottom toolbar or disappearing from the window. | 03-26-2009 |
20090113444 | Application Management - The subject matter of this specification can be embodied in, among other things, a method that includes executing one or more computer applications and ranking the applications according to one or more criteria that change in response to a user's interaction with the applications. State information for certain of the one or more applications is saved and one or more applications are terminated in response to a memory condition. Subsequently, one of the terminated applications is revived using the saved state information. | 04-30-2009 |
20090119676 | Virtual heterogeneous channel for message passing - A technique includes using a virtual channel between a first process and a second process to communicate messages between the processes. Each message contains protocol data and user data. All of the protocol data is communicated over a first channel associated with the virtual channel, and the user data is selectively communicated over at least one other channel associated with the virtual channel. | 05-07-2009 |
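The split described in entry 20090119676 — all protocol data on one channel, user data selectively on another — can be sketched as follows. The cutoff rule (inline small payloads on the protocol channel, bulk payloads on a second channel) is an assumed policy for illustration, and the two `queue.Queue` objects stand in for the virtual channel's sub-channels.

```python
import queue

SMALL_CUTOFF = 16          # assumed policy: payloads this size or smaller ride inline
proto_ch = queue.Queue()   # always carries protocol data (headers)
data_ch = queue.Queue()    # selectively carries bulk user data

def send(msg_id, payload):
    header = {"id": msg_id, "len": len(payload),
              "inline": len(payload) <= SMALL_CUTOFF}
    if header["inline"]:
        header["payload"] = payload   # small data piggybacks on the protocol channel
    else:
        data_ch.put(payload)          # bulk data is routed over the second channel
    proto_ch.put(header)              # protocol data always goes here

def recv():
    header = proto_ch.get()
    payload = header["payload"] if header["inline"] else data_ch.get()
    return header["id"], payload

send(1, b"hi")
send(2, b"x" * 1024)
m1 = recv()
m2 = recv()
print(m1)                  # (1, b'hi')
print(m2[0], len(m2[1]))   # 2 1024
```

Keeping every header on one ordered channel preserves message ordering while letting the bulk path be chosen per message, which is the point of the heterogeneous channel.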
20090210883 | Network On Chip Low Latency, High Bandwidth Application Messaging Interconnect - Data processing on a network on chip ('NOC') that includes integrated processor ('IP') blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, with each IP block also adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox. | 08-20-2009 |
20090265716 | SYSTEM AND METHOD FOR FEATURE ADDITION TO AN APPLICATION | 10-22-2009 |
20090288098 | Separate Plug-In Processes In Browsers and Applications Thereof - Embodiments of the present invention relate to browser plug-ins. In one embodiment, a system browses web content using a plug-in. The system includes at least one renderer process that detects plug-in content in the web content. Separate from the at least one renderer process, the system also includes a plug-in process that includes the plug-in and communicates with the at least one renderer process to interpret the plug-in content using an inter-process communication channel. | 11-19-2009 |
20090300646 | ADAPTING BETWEEN COUPLED AND DECOUPLED PROVIDER INTERFACES - Adapters are provided to convert a decoupled provider interface to a coupled provider interface and/or to convert a coupled provider interface to a decoupled provider interface. A decoupled provider may indirectly expose a data model by providing one or more of a sequence of unchanging views of data via snapshots and snapshot update events. A coupled provider may directly expose a dynamic data model or view and model update events. A decoupled consumer of data may consume data that is provided in snapshots and snapshot update events while a coupled consumer may consume data in the form of a dynamic data model and model update events. | 12-03-2009 |
20090307710 | EFFICIENT MECHANISM FOR TERMINATING APPLICATIONS - An efficient mechanism for terminating applications of a data processing system is described herein. In one embodiment, in response to a request for exiting from an operating environment of a data processing system, an operating system examines an operating state associated with an application running within the operating environment, where the operating state is stored at a predetermined memory location shared between the operating system and the application. The operating system immediately terminates the application if the operating state associated with the application indicates that the application is safe for a sudden termination. Otherwise, the operating system defers terminating the application if the operating state associated with the application indicates that the application is unsafe for the sudden termination. Other methods and apparatuses are also described. | 12-10-2009 |
20090320042 | SYSTEM AND METHOD FOR ACHIEVING HIGH PERFORMANCE DATA FLOW AMONG USER SPACE PROCESSES IN STORAGE SYSTEM - Fault isolation capabilities made available by user space can be provided for an embedded network storage system without sacrificing efficiency. By giving user space processes direct access to specific devices (e.g., network interface cards and storage adapters), processes in a user space can initiate Input/Output requests without issuing system calls (and entering kernel mode). Multiple user space processes can initiate requests serviced by a user space device driver by sharing a read-only address space that maps the entire physical memory one-to-one. In addition, a user space process can initiate communication with another user space process by use of transmit and receive queues similar to the transmit and receive queues used by hardware devices. Finally, a mechanism ensures that virtual addresses that work in one address space reference the same physical page in another address space. | 12-24-2009 |
20090328059 | Synchronizing Communication Over Shared Memory - Two threads may communicate via shared memory using two different modes. In a polling mode, a receiving thread may poll an indicator set by the sending thread to determine if a message is present. In a blocking mode, the receiving thread may wait until a synchronization object is set by the sending thread, which may cause the receiving thread to return to the polling mode. The polling mode may have low latency but may use processor activity of the receiving thread to repetitively check the indicator. The blocking mode may have a higher latency but may allow the receiving thread to enter a sleep mode or perform other activities. | 12-31-2009 |
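The two receive modes in entry 20090328059 can be sketched with a shared indicator and a synchronization object. Here a plain dict stands in for the shared-memory indicator and a `threading.Event` for the synchronization object; the spin count and names are illustrative assumptions.

```python
import threading

mailbox = {"msg": None}       # stand-in for the shared-memory indicator
wakeup = threading.Event()    # stand-in for the synchronization object

def send(msg):
    mailbox["msg"] = msg      # the sender sets the indicator...
    wakeup.set()              # ...then signals any blocked receiver

def receive(spin_iterations=1000):
    # Polling mode: low latency, but burns receiver CPU on each check.
    for _ in range(spin_iterations):
        if mailbox["msg"] is not None:
            return mailbox["msg"]
    # Blocking mode: higher latency; the receiver sleeps until signalled,
    # then reads the indicator as it would in polling mode.
    wakeup.wait()
    return mailbox["msg"]

t = threading.Timer(0.05, send, args=("ping",))
t.start()
msg = receive()
t.join()
print(msg)  # ping
```

The receiver spins briefly for the low-latency case and falls back to sleeping on the event, which is the latency/CPU trade-off the abstract describes.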
20100031271 | DETECTION OF DUPLICATE MEMORY PAGES ACROSS GUEST OPERATING SYSTEMS ON A SHARED HOST - A hypervisor receives a memory page checksum from a guest operating system, which corresponds to a page of memory utilized by the guest. Next, the hypervisor proceeds through a series of steps to detect that the memory page checksum matches a checksum value included in a checksum entry item, which includes an identifier of a different guest. In turn, the hypervisor shares the page of memory between the guest and the different guest in response to detecting that the memory page checksum matches the checksum value included in the checksum entry item. | 02-04-2010 |
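The matching step in entry 20100031271 can be sketched as a checksum-keyed table maintained by the hypervisor. The `Hypervisor` class below is an invented illustration of the structure only; a real deduplicator must also re-verify page contents byte-for-byte on a checksum hit before sharing, which this toy omits.

```python
import hashlib

PAGE = 4096

def checksum(page):
    return hashlib.sha256(page).hexdigest()

class Hypervisor:
    """Illustrative checksum table: each entry records which guest first
    reported a page with that checksum; a later report of the same
    checksum from a different guest triggers sharing."""

    def __init__(self):
        self.seen = {}      # checksum -> (guest_id, page)
        self.shared = []    # (first_guest, second_guest) sharing pairs

    def report(self, guest_id, page):
        c = checksum(page)
        if c in self.seen and self.seen[c][0] != guest_id:
            # Identical content already resident for another guest: share it.
            self.shared.append((self.seen[c][0], guest_id))
        else:
            self.seen[c] = (guest_id, page)

hv = Hypervisor()
hv.report("guestA", b"\x00" * PAGE)
hv.report("guestB", b"\x00" * PAGE)   # duplicate zero page -> shared
hv.report("guestB", b"\x01" * PAGE)   # unique page -> kept private
print(hv.shared)  # [('guestA', 'guestB')]
```

The zero-filled page is the classic win for this technique, since freshly booted guests tend to hold many identical zero pages.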
20100037235 | METHOD AND SYSTEM FOR VIRTUALIZATION OF SOFTWARE APPLICATIONS - A method of virtualizing an application to execute on a plurality of operating systems without installation. The method includes creating an input configuration file for each operating system. The templates each include a collection of configurations that were made by the application during installation on a computing device executing the operating system. The templates are combined into a single application template having a layer including the collection of configurations for each operating system. The collection of configurations includes files and registry entries. The collection also identifies and configures environmental variables, systems, and the like. Files in the collection of configurations and references to those files may be replaced with references to files stored on installation media. The application template is used to build an executable of the virtualized application. The application template may be incorporated into a manifest listing other application templates and made available to users from a website. | 02-11-2010 |
20100037236 | INFORMATION PROCESSING METHOD, APPARATUS, AND SYSTEM FOR CONTROLLING COMPUTER RESOURCES, CONTROL METHOD THEREFOR, STORAGE MEDIUM, AND PROGRAM - An operation request from a process or OS for computer resource(s) managed by the OS, such as a file, network, storage device, display screen, or external device, is trapped before access to the computer resource. It is determined whether an access right for the computer resource designated by the trapped operation request is present. If the access right is present, the operation request is transferred to the operating system, and a result from the OS is returned to the request source process. If no access right is present, the operation request is denied, or the request is granted by charging in accordance with the contents of the computer resource. | 02-11-2010 |
20100043012 | ELECTRONIC DEVICE SYSTEM AND SHARING METHOD THEREOF - An electronic system comprises a memory, a parser, and a device driver. A plurality of applications and a document are stored in a user space of the memory, the document storing configuration parameters. The parser module parses the document to retrieve the parameters in response to invocation from at least one application. The device driver creates a data structure for the parameters in the kernel space of the memory, thereby facilitating a plurality of programs to execute different functions of the system by commonly utilizing the parameters through the device driver. | 02-18-2010 |
20100064294 | Maintaining Vitality of Data In Safety-Critical Systems - A mechanism for maintaining configuration or other vital data outside of source code is disclosed. In accordance with the illustrative embodiment of the present invention, a data manager software component serves as an interface between an external configuration data store and one or more applications, processes, and threads. In contrast with techniques of the prior art, the illustrative embodiment does not suffer from the risk of undetected corruption of vital data, and therefore is especially advantageous in safety-critical systems. | 03-11-2010 |
20100122264 | Language level support for shared virtual memory - Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU. | 05-13-2010 |
20100169895 | Method and System for Inter-Thread Communication Using Processor Messaging - In shared-memory computer systems, threads may communicate with one another using shared memory. A receiving thread may poll a message target location repeatedly to detect the delivery of a message. Such polling may cause excessive cache coherency traffic and/or congestion on various system buses and/or other interconnects. A method for inter-processor communication may reduce such bus traffic by reducing the number of reads performed and/or the number of cache coherency messages necessary to pass messages. The method may include a thread reading the value of a message target location once, and determining that this value has been modified by detecting inter-processor messages, such as cache coherence messages, indicative of such modification. In systems that support transactional memory, a thread may use transactional memory primitives to detect the cache coherence messages. This may be done by starting a transaction, reading the target memory location, and spinning until the transaction is aborted. | 07-01-2010 |
20100192159 | SEPARATION KERNEL WITH MEMORY ALLOCATION, REMOTE PROCEDURE CALL AND EXCEPTION HANDLING MECHANISMS - A computer-implemented system ( | 07-29-2010 |
20100199289 | Method for Guaranteeing a Single Copy of A Shared Assembly Per Process In a Unix Environment - A computer implemented method, computer program product, and a data processing system access a version of a shared assembly in a componentized environment, wherein multiple versions of the shared assembly exist concurrently in a single process, and wherein each version of the shared assembly comprises an assembly stub and an assembly implementation. A call to an assembly stub of the shared assembly is received. The call is then forwarded from the assembly stub to an identified assembly implementation using a proxy pointer. A function table structure is then returned from the identified assembly implementation, wherein the function table structure contains implementation symbols from the identified assembly implementation. | 08-05-2010 |
20100242051 | ADMINISTRATION MODULE, PRODUCER AND CONSUMER PROCESSOR, ARRANGEMENT THEREOF AND METHOD FOR INTER-PROCESSOR COMMUNICATION VIA A SHARED MEMORY - Administration module, producer and consumer processor, arrangement thereof and method for inter-processor communication via a shared memory, wherein the module includes: a device for storing and administering the states of triple-buffers, each buffer having a read-, a write- and an idle-sub-buffer; and a device for communicating with at least one producer and at least one consumer processor; and wherein the administration device is configured to determine a targeted read- or write-sub-buffer from the triple-buffers in response to a producer or consumer processor access. | 09-23-2010 |
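The read/write/idle scheme administered in entry 20100242051 can be sketched as a triple buffer: the producer always writes into the write sub-buffer and swaps it with idle on publish, while the consumer swaps read with idle on acquire, so neither side ever blocks the other. The `TripleBuffer` class and its method names are invented for illustration; a real inter-processor version would need atomic swaps of the buffer indices.

```python
class TripleBuffer:
    """Illustrative triple buffer: producer and consumer each touch a
    disjoint sub-buffer, exchanging ownership only through swaps with
    the idle sub-buffer (sketch only; swaps are not atomic here)."""

    def __init__(self):
        self.bufs = {"write": None, "idle": None, "read": None}
        self.fresh = False   # does the idle sub-buffer hold unread data?

    def publish(self, value):
        self.bufs["write"] = value
        # Swap write <-> idle so the newest value becomes available.
        self.bufs["write"], self.bufs["idle"] = self.bufs["idle"], self.bufs["write"]
        self.fresh = True

    def acquire(self):
        if self.fresh:
            # Swap read <-> idle to pick up the newest published value.
            self.bufs["read"], self.bufs["idle"] = self.bufs["idle"], self.bufs["read"]
            self.fresh = False
        return self.bufs["read"]

tb = TripleBuffer()
tb.publish(1)
tb.publish(2)          # the producer is never blocked; 1 is simply overwritten
latest = tb.acquire()
print(latest)  # 2  (the consumer always sees the newest published value)
```

The trade-off versus a queue is that intermediate values may be dropped, which suits latest-value data such as sensor readings.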
20100306782 | METHOD AND SYSTEM FOR DATA REPORTING AND ANALYSIS - Described are methods and systems related to data report and analysis. A first business intelligence (BI) block is imported to a host analytics user interface (UI). The first BI block includes synchronizable dimensions to synchronize values of the first BI block with other BI blocks, and propagatable dimensions to propagate values of the first BI block to other BI blocks. A host data context of the host analytics UI is updated by propagating the propagatable dimensions of the first BI block. A second BI block is imported to the host analytics UI. The second BI block includes at least one synchronizable dimension in common with at least one propagatable dimension of the first BI block. The synchronizable dimensions of the second BI block are synchronized to the updated host data context. The first BI block and the synchronized second BI block are rendered on the host analytics UI. | 12-02-2010 |
20100306783 | SHARED MEMORY REUSABLE IPC LIBRARY - An apparatus and a method for a shared reusable inter-process communication (IPC) library. The shared reusable IPC library includes a client IPC library and a server IPC library. The client IPC library communicates with a client application. The server IPC library communicates with the server application. The client IPC library includes instructions for creating, destroying, sending, or receiving IPC messages to and from the client application. The server IPC library includes an initialization function for the server application. | 12-02-2010 |
20110010723 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus has a communication unit to perform an inter-process communication via a kernel space among a plurality of processes existing in a user space, and a log recording unit to record a log of the inter-process communication within the kernel space. | 01-13-2011 |
20110061059 | INFORMATION PROCESSING PROGRAM AND INFORMATION PROCESSING APPARATUS - In an information processing apparatus, save data shared among first through third applications is stored in a memory for saved data by bringing it into correspondence with the first through third applications as first through third data. A computer integrates the first through third data stored in the memory for saved data into a main memory when the first application is activated, for example, updates the integration data in accordance with execution of the first application, and overwrites the first data and second data stored in the memory for saved data with the updated integration data at the same time, in response to an automatic saving instruction or a saving instruction by a user. | 03-10-2011 |
20110099556 | UPDATING SYSTEM FOR A MICROCONTROLLER AND ASSOCIATED METHODS - A system to update portions of a microcontroller may include a microcontroller and read-write memory carried by the microcontroller. The system may also include non-volatile memory carried by the microcontroller and an application carried by the non-volatile memory. The system may further include a second application carried by the non-volatile memory that substantially mirrors the application. | 04-28-2011 |
20110145835 | Lockless Queues - A method for passing data from a first processing thread to a second processing thread, wherein the first processing thread produces data to be processed by the second processing thread. The data from the first processing thread may be inserted into objects that in turn are inserted into a queue of objects to be processed by the second thread. The queue may be a circular array, wherein the array includes a pointer to a head and a pointer to a tail, wherein only the first processing thread modifies the tail pointer and only the second processing thread modifies the head pointer. | 06-16-2011 |
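The ownership split in entry 20110145835 — only the producer writes the tail, only the consumer writes the head — is the classic single-producer/single-consumer ring buffer. The sketch below illustrates that structure (class and method names are invented; one array slot is sacrificed to distinguish full from empty, a common convention the abstract does not specify).

```python
class SPSCQueue:
    """Single-producer/single-consumer circular array: head is written
    only by the consumer and tail only by the producer, so no lock is
    needed between exactly one producer and one consumer thread."""

    def __init__(self, capacity):
        self.size = capacity + 1          # one slot sacrificed: full != empty
        self.slots = [None] * self.size
        self.head = 0                     # written only by the consumer
        self.tail = 0                     # written only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % self.size
        if nxt == self.head:
            return False                  # queue full: caller may retry
        self.slots[self.tail] = item
        self.tail = nxt                   # publish only after the slot is filled
        return True

    def pop(self):
        if self.head == self.tail:
            return None                   # queue empty
        item = self.slots[self.head]
        self.head = (self.head + 1) % self.size
        return item

q = SPSCQueue(2)
ok = [q.push("a"), q.push("b"), q.push("c")]
print(ok)                        # [True, True, False]  (third push finds it full)
print(q.pop(), q.pop(), q.pop()) # a b None
```

Updating the tail only after the slot is filled is what makes the item visible to the consumer exactly once it is complete; on real hardware this ordering would additionally require a memory barrier.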
20110173635 | System, Processor, Apparatus and Method for Inter-Processor Communication - A multi-processor system comprises a sending processor adapted to send a data message, a receiving processor adapted to receive the data message, and a memory unit associated with the receiving processor. The multi-processor system has a size-index table associated with the sending processor, and the sending processor is adapted to map a size of a payload portion of the data message to an index of the size-index table, and to send the data message containing the size, the index and the payload portion to the receiving processor. The multi-processor system also has mapping circuitry associated with the receiving processor. The mapping circuitry is adapted to map the index contained in the data message received from the sending processor to a pointer, wherein the pointer is associated with a buffer of the memory unit. The receiving processor is adapted to write the payload portion of the received data message to the buffer as indicated by the pointer. A receiving processor adapted to be comprised in a multi-processor system, an electronic apparatus comprising a multi-processor system and/or a receiving processor are also described, as well as a method of receiving a data message at a processor. | 07-14-2011 |
20110225596 | METHODS AND SYSTEMS FOR AUTHORIZING AN EFFECTOR COMMAND IN AN INTEGRATED MODULAR ENVIRONMENT - Methods and systems are provided for authorizing a command of an integrated modular environment in which a plurality of partitions control actions of a plurality of effectors is provided. A first identifier, a second identifier, and a third identifier are determined. The first identifier identifies a first partition of the plurality of partitions from which the command originated. The second identifier identifies a first effector of the plurality of effectors for which the command is intended. The third identifier identifies a second partition of the plurality of partitions that is responsible for controlling the first effector. The first identifier and the third identifier are compared to determine whether the first partition is the same as the second partition for authorization of the command. | 09-15-2011 |
20110296432 | PROGRAMMING MODEL FOR COLLABORATIVE DISTRIBUTED SYSTEMS - Described are methods of providing data sharing between applications. The applications run on different computers, communicate via a network, and share a same distributed object. Each application maintains on its computer an invariant copy of the distributed object and a variant copy of the distributed object. Each application performs update operations to the distributed object, where such an update operation issued by a given one of the applications is performed by: executing the update operation on the variant copy maintained by the given application (i) without the given application waiting for the other applications to perform the operation (each invariant copy is guaranteed to converge to a same state) and (ii) at each of the applications, including the given application, executing the update operation on the corresponding invariant copies. | 12-01-2011 |
20110296433 | FUNCTION SECURING UNIT FOR COMMUNICATION SYSTEMS - Disclosed herein is a communication system having at least one first and a second communication unit, wherein the first communication unit has a counter memory unit which stores a counter value (MSG_CNT), wherein the first communication unit is designed such that at least the occurrence of a first defined communication event prompts the counter value in the counter memory unit to be changed in at least one defined first manner, wherein at least the occurrence of a defined reference event is followed by the counter value in the counter memory unit being changed in at least one defined second manner, wherein at least in the course of a second defined communication event the first communication unit transmits the current counter value in the counter memory unit directly or indirectly to the second communication unit. | 12-01-2011 |
20120030687 | EFFICIENT DATA TRANSFER ON LOCAL NETWORK CONNECTIONS USING A PSEUDO SOCKET LAYER - A method, system and computer program product for transferring data between two applications over a local network connection. The invention establishes a socket connection between the applications and transfers data through the socket connection using a pseudo socket layer interface when the two endpoints of the socket connection are on the same host. Socket application program interface comprises socket buffers for sending and receiving data. A connecting application identifies and establishes a connection with a listening socket, and places data directly in the socket receive buffer of the receiving socket. If the other end of the socket connection is on a remote host, then data is transferred using underlying network facilities. | 02-02-2012 |
20120066691 | PRIVATE APPLICATION CLIPBOARD - In one embodiment, a non-transitory processor-readable medium stores code representing instructions that when executed cause a processor operating in an operating system environment that includes a clipboard function that stores information at a first memory location, to receive, from an application, a first request to store content. The code further represents instructions to store, at a second memory location, a content portion indicated by the first request, and receive, from a trusted application, a second request to retrieve the content portion. The code further represents instructions to send, to the trusted application, the content portion. | 03-15-2012 |
20120072921 | Laptop Computer for Processing Original High Resolution Images and Image-data-processing device thereof - An image-data-processing device for processing first image data includes an image processing chip with a first memory and a second memory. The image processing chip further includes a data managing unit and an encoding module, wherein the memory space of the second memory is greater than the memory space of the first memory. The data managing unit receives the first image data from an image sensor and transmits the first image data to the encoding module, wherein the encoding module generates second image data based on the first image data received. The data managing unit then selectively stores the second image data in the first memory or the second memory. | 03-22-2012 |
20120089987 | METHODS, APPARATUS, AND SYSTEMS TO ACCESS RUNTIME VALUES OF OBJECT INSTANCES - In one embodiment, a plurality of executable instructions is stored at a first software module. The plurality of executable instructions are collectively configured to provide an identifier of a first object instance to a second software module stored at a memory and executed at the processor. The identifier of the first object instance is received at the second software module in response to execution of the plurality of executable instructions, and a textual object element identifier is selected from a plurality of textual object element identifiers. Each textual object element identifier from the plurality of textual object element identifiers is uniquely associated with an object element. An identifier of a second object instance is accessed, and the object element uniquely associated with the textual object element identifier is reflectively accessed at the second object instance. The first object instance is derived from the second object instance. | 04-12-2012 |
20120192203 | Detection of Duplicate Memory Pages Across Guest Operating Systems on a Shared Host - A hypervisor receives a memory page checksum from a guest operating system, which corresponds to a page of memory utilized by the guest. Next, the hypervisor proceeds through a series of steps to detect that the memory page checksum matches a checksum value included in a checksum entry item, which includes an identifier of a different guest. In turn, the hypervisor shares the page of memory between the guest and the different guest in response to detecting that the memory page checksum matches the checksum value included in the checksum entry item. | 07-26-2012 |
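The checksum-matching flow above can be sketched as a toy "hypervisor" that keeps a table of checksum entries; when a second guest reports a checksum already present under a different guest's identifier, the page is shared instead of duplicated. CRC32 stands in for whatever checksum the patent contemplates, and all names are illustrative.

```python
import zlib

class ToyHypervisor:
    def __init__(self):
        self._table = {}   # checksum -> (guest_id, page bytes)
        self._shared = []  # (owner_guest, matching_guest) pairs

    def report_page(self, guest_id, page):
        """Returns True if the page was deduplicated against another guest."""
        csum = zlib.crc32(page)
        entry = self._table.get(csum)
        if entry is not None and entry[0] != guest_id and entry[1] == page:
            # Checksum match from a different guest; compare contents to
            # rule out a collision, then share rather than store a copy.
            self._shared.append((entry[0], guest_id))
            return True
        self._table[csum] = (guest_id, page)
        return False
```

The byte-for-byte comparison before sharing matters in practice: a checksum match alone is only a candidate, since distinct pages can collide.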
20120227056 | METHOD AND SYSTEM FOR ENABLING ACCESS TO FUNCTIONALITY PROVIDED BY RESOURCES OUTSIDE OF AN OPERATING SYSTEM ENVIRONMENT - A method for enabling access to functionality provided by resources outside of an operating system environment is provided. The method includes: receiving a call for functionality provided by resources outside of the operating system environment; and copying function parameters from within the received call to an area of memory accessible to the resources outside of the operating system environment that provide the called functionality. | 09-06-2012 |
20120266183 | Efficient Network and Memory Architecture for Multi-core Data Processing System - The invention provides hardware logic based techniques for a set of processing tasks of a software program to efficiently communicate with each other while running in parallel on an array of processing cores of a multi-core data processing system dynamically shared among a group of software programs. These inter-task communication techniques comprise, by one or more tasks of the set, writing their inter-task communication information to a memory segment of other tasks of the set at the system memories, as well as reading inter-task communication information from their own segments at the system memories. The invention facilitates efficient inter-task communication on a multi-core fabric, without any of the communicating tasks needing to know whether and at which core in the fabric any other task is executing at any given time. The invention thus enables flexibly and efficiently running any task of any program at any core of the fabric. | 10-18-2012 |
20120278814 | Shared Drivers in Multi-Core Processor - A method for sharing a resource between multiple processors within a single integrated circuit that share a memory is described. A command structure is built in shared memory by a client on a first processor for a service offered by a second processor, wherein the first processor and second processor have access to the shared memory. Attention from the second processor is requested. The command in shared memory is decoded by a host on the second processor in response to the request for attention. The service is performed on the second processor according to the command. The client on the first processor is notified when the service is complete. | 11-01-2012 |
20120324476 | PASTING DATA - A method of pasting data from a source application to a destination application, where the source and destination applications are not the same; the method comprising the steps of: identifying a data type for the data and an appropriate input handler for the data type; converting the data using the appropriate input handler to a standard format based on the data type; in an output module determining the context of the data in the standard format to identify an appropriate output handler; obtaining a suggested paste operation from a suggestion engine based on the type and context of the data; and instructing a paste operation on the basis of the suggested paste operation. | 12-20-2012 |
20120331480 | PROGRAMMING INTERFACE FOR DATA COMMUNICATIONS - In embodiments of a programming interface for data communications, a request queue and a completion queue can be allocated from a user-mode virtual memory buffer that corresponds to an application. The request queue and the completion queue can be pinned to physical memory and then mapped to kernel-mode system addresses so that the request queue and the completion queue can be accessed by a kernel-mode execution thread. A request can be received from an application for the kernel to handle data in the request queue, and a system call issued to the kernel for the kernel-mode execution thread to handle the request. The kernel-mode execution thread can then handle additional requests from the application without additional system calls being issued. | 12-27-2012 |
20130014125 | MANAGING APPLICATION INTERACTIONS USING DISTRIBUTED MODALITY COMPONENT - A method for managing multimodal interactions can include the step of registering a multitude of modality components with a modality component server, wherein each modality component handles an interface modality for an application. The modality component can be connected to a device. A user interaction can be conveyed from the device to the modality component for processing. Results from the user interaction can be placed on a shared memory area of the modality component server. | 01-10-2013 |
20130036427 | MESSAGE QUEUING WITH FLEXIBLE CONSISTENCY OPTIONS - Embodiments of the invention relate to message queuing. In one embodiment, a request from an application for retrieving a message from a queue is received. The queue is stored across multiple nodes of a distributed storage system. A preference with respect to message order and message duplication associated with the queue is identified. A message sequence index associated with the queue is sampled based on the preference that has been identified. The message is selected in response to the sampling. The message that has been selected is made unavailable to other applications for a given interval of time, while maintaining the message in the queue. The message is sent to the application. | 02-07-2013 |
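The "unavailable for a given interval of time, while maintaining the message in the queue" behaviour described above resembles a visibility timeout, and can be sketched as follows. The class and method names are assumptions, not terminology from the patent.

```python
import time

class VisibilityQueue:
    """Messages stay in the queue after retrieval, but are hidden from
    other consumers until their visibility interval elapses."""

    def __init__(self):
        self._messages = []       # list of (msg_id, payload), queue order
        self._hidden_until = {}   # msg_id -> timestamp when visible again

    def put(self, msg_id, payload):
        self._messages.append((msg_id, payload))

    def get(self, timeout=30.0, now=None):
        now = time.time() if now is None else now
        for msg_id, payload in self._messages:
            if self._hidden_until.get(msg_id, 0.0) <= now:
                # Hide the message rather than deleting it.
                self._hidden_until[msg_id] = now + timeout
                return msg_id, payload
        return None

    def delete(self, msg_id):
        """Called once the consumer has fully processed the message."""
        self._messages = [m for m in self._messages if m[0] != msg_id]
```

A flexible-consistency store could relax the in-order scan in `get` (sampling a message sequence index instead, as the abstract suggests) to trade strict ordering for throughput.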
20130047167 | EFFICIENT MECHANISM FOR TERMINATING APPLICATIONS - An efficient mechanism for terminating applications of a data processing system is described herein. In one embodiment, in response to a request for exiting from an operating environment of a data processing system, an operating system examines an operating state associated with an application running within the operating environment, where the operating state is stored at a predetermined memory location shared between the operating system and the application. The operating system immediately terminates the application if the operating state associated with the application indicates that the application is safe for a sudden termination. Otherwise, the operating system defers terminating the application if the operating state associated with the application indicates that the application is unsafe for the sudden termination. Other methods and apparatuses are also described. | 02-21-2013 |
20130061240 | TWO WAY COMMUNICATION SUPPORT FOR HETEROGENOUS PROCESSORS OF A COMPUTER PLATFORM - A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), for example. The GPU may be coupled to a GPU compiler and a GPU linker/loader and the CPU may be coupled to a CPU compiler and a CPU linker/loader. The user may create a shared object in an object oriented language and the shared object may include virtual functions. The shared object may be fine grain partitioned between the heterogeneous processors. The GPU compiler may allocate the shared object to the CPU and may create a first and a second enabling path to allow the GPU to invoke virtual functions of the shared object. Thus, the shared object that may include virtual functions may be shared seamlessly between the CPU and the GPU. | 03-07-2013 |
20130061241 | MANAGING SHARED DATA OBJECTS TO PROVIDE VISIBILITY TO SHARED MEMORY - Managing shared data objects to share data between computer processes, including a method for executing a plurality of independent processes on an application server, the processes including a first process and a second process. A shared memory utilized by the plurality of independent processes is provided. A single copy of the data and metadata are stored in the shared memory. The metadata includes an address of the data. The first process initiates the storing of the data in the shared memory. An address of the metadata is transferred from the first process to the second process to notify the second process about the data. The second process determines the address of the shared memory by reading the metadata. The data in the shared memory is accessed by the second process. | 03-07-2013 |
20130117761 | Intranode Data Communications In A Parallel Computer - Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process. | 05-09-2013 |
20130125135 | INTRANODE DATA COMMUNICATIONS IN A PARALLEL COMPUTER - Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process. | 05-16-2013 |
20130167155 | FILE SYSTEM INDEPENDENT CONTENT AWARE CACHE - A server supporting the implementation of virtual machines includes a local memory used for caching, such as a solid state device drive. During I/O intensive processes, such as a boot storm, a “content aware” cache filter component of the server first accesses a cache structure in a content cache device to determine whether data blocks have been stored in the cache structure prior to requesting the data blocks from a networked disk array via a standard I/O stack of the hypervisor. | 06-27-2013 |
20130179898 | SYSTEMS AND METHODS FOR REMOTE STORAGE MANAGEMENT - A system comprises a first storage resource, a second storage resource, a hosted application, a proxy engine, and a proxy interface. The first storage resource stores first data and uses a first program interface for communicating the first data. The second storage resource stores second data and uses a second program interface for communicating the second data. The hosted application uses application data, the first data and/or the second data including the application data. The proxy engine directs application data requests by the hosted application to the first storage resource or to the second storage resource. The proxy interface uses the first program interface to communicate with the first storage device and the second program interface to communicate with the second storage device to respond to the application data requests. | 07-11-2013 |
20130191848 | Distributed Function Execution for Hybrid Systems - A system for distributed function execution, the system includes a host in operable communication with an accelerator. The system is configured to perform a method including processing an application by the host and distributing at least a portion of the application to the accelerator for execution. The method also includes instructing the accelerator to create a buffer on the accelerator, instructing the accelerator to execute the portion of the application, wherein the accelerator writes data to the buffer and instructing the accelerator to transmit the data in the buffer to the host before the application requests the data in the buffer. The accelerator aggregates the data in the buffer before transmitting the data to the host based upon one or more runtime conditions in the host. | 07-25-2013 |
20130191849 | DISTRIBUTED FUNCTION EXECUTION FOR HYBRID SYSTEMS - A method includes processing an application by a host including one or more processors and distributing at least a portion of the application to an accelerator for execution. The method includes instructing the accelerator to create a buffer on the accelerator and instructing the accelerator to execute the portion of the application, wherein the accelerator writes data to the buffer. The method also includes instructing the accelerator to transmit the data in the buffer to the host before the application requests the data in the buffer. The accelerator aggregates the data in the buffer before transmitting the data to the host based upon one or more runtime conditions in the host. | 07-25-2013 |
20130239122 | Efficient Network and Memory Architecture for Multi-core Data Processing System - The invention provides hardware logic based techniques for a set of processing tasks of a software program to efficiently communicate with each other while running in parallel on an array of processing cores of a multi-core data processing system dynamically shared among a group of software programs. These inter-task communication techniques comprise, by one or more tasks of the set, writing their inter-task communication information to a memory segment of other tasks of the set at the system memories, as well as reading inter-task communication information from their own segments at the system memories. The invention facilitates efficient inter-task communication on a multi-core fabric, without any of the communicating tasks needing to know whether and at which core in the fabric any other task is executing at any given time. The invention thus enables flexibly and efficiently running any task of any program at any core of the fabric. | 09-12-2013 |
20130247070 | METHOD AND SYSTEM FOR VIRTUALIZATION OF SOFTWARE APPLICATIONS - A method of virtualizing an application to execute on a plurality of operating systems without installation. The method includes creating an input configuration file for each operating system. The templates each include a collection of configurations that were made by the application during installation on a computing device executing the operating system. The templates are combined into a single application template having a layer including the collection of configurations for each operating system. The collection of configurations includes files and registry entries. The collection also identifies and configures environmental variables, systems, and the like. Files in the collection of configurations and references to those files may be replaced with references to files stored on installation media. The application template is used to build an executable of the virtualized application. The application template may be incorporated into a manifest listing other application templates and made available to users from a website. | 09-19-2013 |
20130275997 | Method and System For Exception-Less System Calls In An Operating System - A method and system is disclosed which can enhance the performance of computer systems by altering the operation of the operating system of those computer systems. The invention provides a system and method for making exception-less system calls, decoupling the invocation and execution of system calls, thus avoiding or reducing the direct and indirect overheads associated with making a conventional exception-based system call. The invention can be employed with single core processor systems and with multi-core processor systems, both affording improved temporal execution locality and the latter also providing improved spatial execution locality. The system and method can be employed in a wide range of operating systems. | 10-17-2013 |
20130290980 | Graphical Programming System enabling Data Sharing from a Producer to a Consumer via a Memory Buffer - A graphical program execution environment that facilitates communication between a producer program and a consumer program is disclosed. The producer program may store data in a memory block allocated by the producer program. A graphical program may communicate with the producer program to obtain a reference to the memory block. The graphical program may asynchronously pass the reference to the consumer program, e.g., may pass the reference without blocking or waiting while the consumer program accesses the data in the memory block. After the consumer program is finished accessing the data, the consumer program may asynchronously notify the graphical program execution environment to release the memory block. The graphical program execution environment may then notify the producer program that the block of memory is no longer in use so that the producer program can de-allocate or re-use the memory block. | 10-31-2013 |
20130312009 | MULTI-PROCESS INTERACTIVE SYSTEMS AND METHODS - A multi-process interactive system is described. The system includes numerous processes running on a processing device. The processes include separable program execution contexts of application programs, such that each application program comprises at least one process. The system translates events of each process into data capsules. A data capsule includes an application-independent representation of event data of an event and state information of the process originating the content of the data capsule. The system transfers the data messages into pools or repositories. Each process operates as a recognizing process, where the recognizing process recognizes in the pools data capsules comprising content that corresponds to an interactive function of the recognizing process and/or an identification of the recognizing process. The recognizing process retrieves recognized data capsules from the pools and executes processing appropriate to contents of the recognized data capsules. | 11-21-2013 |
20130332940 | Application Management - The subject matter of this specification can be embodied in, among other things, a method that includes executing one or more computer applications and ranking the applications according to one or more criteria that change in response to a user's interaction with the applications. State information for certain of the one or more applications is saved and one or more applications are terminated in response to a memory condition. Subsequently, one of the terminated applications is revived using the saved state information. | 12-12-2013 |
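The rank/save/terminate/revive cycle described above can be illustrated with a toy manager that ranks applications by recency of use, saves state before terminating the lowest-ranked app under memory pressure, and revives a terminated app from that saved state. Everything here (names, the recency criterion, the state dictionary) is an illustrative assumption.

```python
class AppManager:
    def __init__(self):
        self._running = {}   # app name -> live state dict
        self._saved = {}     # app name -> state saved at termination
        self._order = []     # ranking: most recently used last

    def launch(self, name):
        # Revive from saved state if the app was previously terminated.
        state = self._saved.pop(name, {"launch_count": 0})
        state["launch_count"] += 1
        self._running[name] = state
        self.touch(name)

    def touch(self, name):
        """Record a user interaction, raising the app's rank."""
        if name in self._order:
            self._order.remove(name)
        self._order.append(name)

    def on_memory_pressure(self):
        """Save state for and terminate the least recently used app."""
        victim = self._order.pop(0)
        self._saved[victim] = self._running.pop(victim)
        return victim
```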
20130339980 | COMPOSITE APPLICATION ENABLING WORKFLOW BETWEEN UNMODIFIED CONSTITUENT APPLICATIONS - Embodiments described herein enable information sharing between multiple software applications in a way that supports seamless workflow when a user interacts with these applications, even when these applications were not originally designed to coexist within the same workflows. The embodiments enable each application to initiate processes, create notifications, and automate actions based on information from all the connected applications. The application programming interface (API) of each application communicates with a dedicated delegate, and the delegates of the different applications interact with each other by reading and writing into a shared hardware and software environment. The delegates, along with the applications and the shared environment, form a composite application. | 12-19-2013 |
20130339981 | NODE - To facilitate changing a system configuration and allow high redundancy in a computer system connecting a plurality of nodes. A node includes a CPU and constitutes a computer system. The node executes one or more processes having predetermined functions. The node includes a shared memory that stores system information, including process information related to each process executed by each node, in a state accessible from each process of its own node. In the node, the system information including the process information related to each process of its own node is multicast to the other nodes. A shared memory control process of the node receives the system information multicast from the other nodes and stores the system information in the shared memory. | 12-19-2013 |
20140047456 | Synchronizing Communication Over Shared Memory - Two threads may communicate via shared memory using two different modes. In a polling mode, a receiving thread may poll an indicator set by the sending thread to determine if a message is present. In a blocking mode, the receiving thread may wait until a synchronization object is set by the sending thread, which may cause the receiving thread to return to the polling mode. The polling mode may have low latency but may use processor activity of the receiving thread to repetitively check the indicator. The blocking mode may have a higher latency but may allow the receiving thread to enter a sleep mode or perform other activities. | 02-13-2014 |
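The two modes above can be sketched as a receiver that spins on a flag for a bounded number of iterations (polling mode), then falls back to blocking on a synchronization object until the sender signals. `threading.Event` stands in for the patent's synchronization object, and the structure is an illustrative sketch rather than the patented mechanism.

```python
import threading

class Channel:
    def __init__(self, spin_limit=1000):
        self._msg = None
        self._flag = False                # indicator polled by the receiver
        self._event = threading.Event()   # sync object for blocking mode
        self._spin_limit = spin_limit

    def send(self, msg):
        self._msg = msg
        self._flag = True     # satisfy a polling receiver
        self._event.set()     # wake a blocked receiver

    def recv(self):
        # Polling mode: low latency, burns CPU on the receiving thread.
        for _ in range(self._spin_limit):
            if self._flag:
                break
        else:
            # Blocking mode: higher latency, lets the thread sleep.
            self._event.wait()
        self._flag = False
        self._event.clear()
        return self._msg
```

Tuning `spin_limit` trades latency against wasted cycles: a large limit favours fast senders, a small one hands off to the scheduler sooner.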
20140053165 | CONFIGURATION TECHNIQUE FOR AN ELECTRONIC CONTROL UNIT WITH INTERCOMMUNICATING APPLICATIONS - A technique is specified for configuring an electronic control unit having intercommunicating applications which have been arranged in various partitions and to which differing safety integrity levels have been assigned. According to one method aspect, the communications behaviour of the applications assigned to the differing partitions amongst themselves is analysed, in order to identify data-writing and data-reading applications that are not located in the same partition. Subsequently, a shared memory area for the intercommunicating applications is configured, and a communications data structure for the applications is generated. The communications data structure is at least partially arranged in the shared memory area. | 02-20-2014 |
20140089940 | METHOD AND SYSTEM FOR COMMUNICATION BETWEEN APPLICATION AND WEB-PAGE EMBEDDED CODE - One embodiment of the present invention provides a system that facilitates communication between an embedded code in a web page and a stand-alone application. During operation, the system first embeds a code within a web page that is displayed in a browser. Next, the embedded code receives information indicating a communication method provided by a stand-alone application, via a first communication channel. The embedded code subsequently sends contextual information associated with a user browser session by calling the communication method, via a second communication channel, thereby allowing the stand-alone application to inherit the contextual information from the web browser. | 03-27-2014 |
20140165078 | APPARATUS AND CIRCUIT FOR PROCESSING DATA - A circuit for processing data is provided. The circuit includes an Application Processor (AP), a Communication Processor (CP), and a storage unit including at least a first region which the AP and the CP access and from/to which data related to at least one of the AP and the CP is read/written, and a second region which the CP accesses and from/to which data related to the CP is read/written. | 06-12-2014 |
20140223447 | Method and System For Exception-Less System Calls In An Operating System - A method and system is disclosed which can enhance the performance of computer systems by altering the operation of the operating system of those computer systems. The invention provides a system and method for making exception-less system calls, decoupling the invocation and execution of system calls, thus avoiding or reducing the direct and indirect overheads associated with making a conventional exception-based system call. The invention can be employed with single core processor systems and with multi-core processor systems, both affording improved temporal execution locality and the latter also providing improved spatial execution locality. The system and method can be employed in a wide range of operating systems. | 08-07-2014 |
20140237483 | Graphical Programming System for Data Sharing between Programs via a Memory Buffer - A graphical program execution environment that facilitates communication between a producer program and a consumer program is disclosed. The producer program may store data in a memory block allocated by the producer program. A graphical program may communicate with the producer program to obtain a reference to the memory block. The graphical program may asynchronously pass the reference to the consumer program, e.g., may pass the reference without blocking or waiting while the consumer program accesses the data in the memory block. After the consumer program is finished accessing the data, the consumer program may asynchronously notify the graphical program execution environment to release the memory block. The graphical program execution environment may then notify the producer program that the block of memory is no longer in use so that the producer program can de-allocate or re-use the memory block. | 08-21-2014 |
20140282608 | MOBILE APPLICATIONS ARCHITECTURE - A system and method for sharing data and resources among a plurality of applications on a mobile device is disclosed. Embodiments provide a mobile applications architecture that is able to link applications and share the linked applications simultaneously on an Android (or other operating system) mobile device such as a smart phone or tablet computer. The mobile applications architecture creates a framework that provides an easy interface for third-party applications to quickly integrate and leverage already constructed components and sharing of data among multiple third-party applications, thereby reducing the complexity of newly developed capabilities for mobile applications architecture on not just a single device, but multiple devices. | 09-18-2014 |
20140289739 | ALLOCATING AND SHARING A DATA OBJECT AMONG PROGRAM INSTANCES - A memory has a shared data object containing shared data for a plurality of program instances. An allocation routine allocates a respective memory region corresponding to the shared data object to each of the plurality of program instances, where each of the memory regions contains a header part and a data part, where the data part corresponds to the shared data and the header part contains information relating to the data part, and the header part is private to the corresponding program instance. The allocation routine maps the shared data to the memory regions using a mapping technique that avoids copying the shared data to each of the data parts as part of allocating the corresponding memory region. | 09-25-2014 |
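The region layout in this abstract (a private header part per program instance plus a data part that aliases the shared data rather than copying it) can be illustrated with `memoryview`, Python's zero-copy buffer view; the `Region` class is a hypothetical stand-in for the allocation routine's output:

```python
class Region:
    """Per-instance memory region: a private header plus a zero-copy
    view of the shared data object."""
    def __init__(self, instance_id, shared):
        # Header part: private to this program instance.
        self.header = {"owner": instance_id, "refs": 0}
        # Data part: a view of the shared data, mapped without copying.
        self.data = memoryview(shared)

shared = bytes(b"shared payload")
regions = [Region(i, shared) for i in range(3)]

# Every data part aliases the same underlying object; no copy was made.
assert all(r.data.obj is shared for r in regions)

# Headers are independent: mutating one does not affect the others.
regions[0].header["refs"] = 1
```

In a real allocator the mapping would be done with shared pages (e.g. `mmap`) rather than an in-process view, but the invariant is the same: one copy of the data, many private headers.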
20140325526 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM - An information processing system includes an operation acceptance unit that accepts operations, a discrimination unit that distinguishes between an operation to be recorded and an operation not to be recorded among the operations accepted by the operation acceptance unit, and a recording unit that records the operations that the discrimination unit identifies as an operation to be recorded among the operations accepted by the operation acceptance unit in a memory. | 10-30-2014 |
20140337859 | PROCESSING LOAD WITH NORMAL OR FAST OPERATION MODE - A data processing apparatus includes a processing unit having first and second modes of operation for processing data, including receiving data packets from a sender and sending acknowledgements to the sender, the second mode of operation requires more power than the first mode, and the processing unit switches between the first and second modes of operation based on a processing load; a metric module for determining a metric indicative of the processing load; an acknowledgement module for sending one acknowledgement in respect of n received data packets; and an acknowledgement configuration module for setting n to be a value m greater than a first predetermined value if the metric lies in a predetermined range that includes a value that the metric assumes when the processing unit switches between the first mode of operation and the second mode of operation, and to the first predetermined value otherwise. | 11-13-2014 |
20150058867 | METHOD, AN ELECTRONIC DEVICE, AND A STORAGE MEDIUM FOR AUTO-CLEANING UP APPLICATIONS IN A BACKGROUND - A method for auto-cleaning up applications in a background of an electronic device system is provided, comprising the steps of: acquiring an occupancy rate of a CPU or a memory, and determining periodically whether the occupancy rate exceeds a preset threshold; querying all applications running in the background of the electronic device system, and identifying all non-system applications if the occupancy rate exceeds the preset threshold; and calling a common interface to close the non-system applications. An electronic device and a storage medium for auto-cleaning up applications in a background are also provided. | 02-26-2015 |
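One pass of the auto-clean loop described above (check occupancy against a threshold, query background applications, close the non-system ones via a common interface) can be sketched as follows; the function and field names are illustrative, not from the patent:

```python
def clean_background(occupancy, threshold, running_apps, close):
    """If CPU/memory occupancy exceeds the preset threshold, close every
    non-system application running in the background and return their names."""
    if occupancy <= threshold:
        return []                                   # nothing to clean
    targets = [a for a in running_apps if not a["system"]]
    for app in targets:
        close(app["name"])                          # call the common close interface
    return [a["name"] for a in targets]

closed = []
apps = [{"name": "launcher", "system": True},
        {"name": "game", "system": False},
        {"name": "browser", "system": False}]
# 90% occupancy exceeds the 80% threshold, so non-system apps are closed.
cleaned = clean_background(0.9, 0.8, apps, closed.append)
```

The periodic determination in the abstract would wrap this in a timer loop; only the per-check logic is shown.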
20150067701 | CREATING A CUSTOM SERIES OF COMMANDS - For creating a custom series of commands, a method is disclosed that includes maintaining a record of executed commands, determining a time to select a subset of executed commands, selecting a subset of the executed commands for execution, and creating a shortcut to execute the selected commands. | 03-05-2015 |
20150074683 | File-System Requests Supported in User Space for Enhanced Efficiency - Systems and methods are disclosed for interacting with a file system. The file system is operable to reside in user space of a computing system. A module, also within user space, may provide a messaging service supporting requests from an application to the file system. By bypassing a System-Call Interface (SCI) of the computing system's kernel space, the module may support requests from the application to the file system with enhanced efficiency and/or customizable features not provided by the SCI. In some examples, the module may include a library in an independent layer within user space and below the application, allowing the library to provide an application-independent messaging service for different applications. Furthermore, in some examples, the module may include a segment of memory, within user space, shared between the application and the file system for passing data involved in requests and/or responses to and/or from the file system. | 03-12-2015 |
20150150025 | MANAGING CONTAINERIZED APPLICATIONS ON A MOBILE DEVICE WHILE BYPASSING OPERATING SYSTEM IMPLEMENTED INTER PROCESS COMMUNICATION - A method of on-device access using a container application to manage a sub application provisioned on a computer device by a set of stored instructions executed by a computer processor to implement the steps of: receive a communication for the sub application by a first service programming interface (SPI) of the container application, the communication sent by an on-device process over a first communication pathway of a device infrastructure of the computer device utilizing the interprocess communication (IPC) framework of the device infrastructure, the first communication pathway provided external to the first SPI; retransmit the communication by the first SPI to a second SPI of the sub application over a second communication pathway that bypasses the IPC framework, the second communication pathway internal to the first SPI; receive a response to the communication by the first SPI from the second SPI over the second communication pathway; and direct the response to the on-device process over the first communication pathway. | 05-28-2015 |
20160041855 | METHOD AND APPARATUS FOR TRANSMITTING DATA ELEMENTS BETWEEN THREADS OF A PARALLEL COMPUTER SYSTEM - Transmitting data elements from source threads to sink threads, which are executed on a plurality of processor cores of a parallel computer system, by using at least one global logical queue, the at least one global logical queue including an associated physical queue for each of the plurality of processor cores and a data element management table that stores, for each source thread executed on a processor core, a count that specifies a total number of data elements that are enqueued by the respective source thread and that are located in one of the physical queues of the at least one global logical queue, and a processor core index that specifies a specific processor core associated with a physical queue that contains the data elements enqueued by the respective source thread. | 02-11-2016 |
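The queue structure in this abstract (one global logical queue backed by a physical queue per processor core, plus a management table holding, per source thread, an element count and the core index of the physical queue holding its elements) can be sketched with per-core deques; all names are illustrative:

```python
from collections import deque

class GlobalLogicalQueue:
    """One logical queue backed by a physical queue per core, with a
    data element management table tracking counts per source thread."""
    def __init__(self, num_cores):
        self.physical = [deque() for _ in range(num_cores)]
        self.table = {}   # source_thread -> [count, core_index]

    def enqueue(self, source_thread, core_index, element):
        self.physical[core_index].append((source_thread, element))
        entry = self.table.setdefault(source_thread, [0, core_index])
        entry[0] += 1          # total elements this source has in flight
        entry[1] = core_index  # which core's physical queue holds them

    def dequeue(self, core_index):
        source_thread, element = self.physical[core_index].popleft()
        self.table[source_thread][0] -= 1
        return element

q = GlobalLogicalQueue(num_cores=2)
q.enqueue("src-A", 0, "x")
q.enqueue("src-A", 0, "y")
q.enqueue("src-B", 1, "z")
```

A real implementation would make the table updates atomic per core; the sketch shows only the bookkeeping relationship between the logical queue, the physical queues, and the table.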
20160070607 | SHARING A PARTITIONED DATA SET ACROSS PARALLEL APPLICATIONS - Provided are techniques for sharing a partitioned data set across parallel applications. Under control of a producing application, a partitioned data set is generated; a descriptor that describes the partitioned data set is generated; and the descriptor is registered in a registry. Under control of a consuming application, the registry is accessed to obtain the descriptor of the partitioned data set; and the descriptor is used to determine how to process the partitioned data set. | 03-10-2016 |
20160070608 | SHARING A PARTITIONED DATA SET ACROSS PARALLEL APPLICATIONS - Provided are techniques for sharing a partitioned data set across parallel applications. Under control of a producing application, a partitioned data set is generated; a descriptor that describes the partitioned data set is generated; and the descriptor is registered in a registry. Under control of a consuming application, the registry is accessed to obtain the descriptor of the partitioned data set; and the descriptor is used to determine how to process the partitioned data set. | 03-10-2016 |
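The descriptor-registry pattern shared by the two abstracts above (producer partitions the data and registers a descriptor; consumer looks up the descriptor to learn how to process the partitions) can be sketched as follows; the dict-based registry and the descriptor fields are assumptions for illustration:

```python
registry = {}  # stands in for the shared registry

def produce(name, num_partitions, records):
    """Producing application: partition the data set and register a
    descriptor that tells consumers how the set is laid out."""
    partitions = [records[i::num_partitions] for i in range(num_partitions)]
    descriptor = {"partitions": num_partitions, "locations": partitions}
    registry[name] = descriptor
    return descriptor

def consume(name):
    """Consuming application: obtain the descriptor from the registry and
    use it to process every partition of the data set."""
    descriptor = registry[name]
    return [rec for part in descriptor["locations"] for rec in part]

produce("sales", 2, list(range(6)))
result = consume("sales")
```

In practice the descriptor would carry partition locations (files, shared-memory segments) rather than the data itself; here the partitions are inlined to keep the sketch self-contained.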
20160170815 | DELEGATING A STATUS VISUALIZATION TASK TO A SOURCE APPLICATION BY A TARGET APPLICATION | 06-16-2016 |
20160188390 | HIGH-PERFORMANCE VIRTUAL MACHINE NETWORKING - A virtual machine (VM) runs on system hardware, which includes a physical network interface device that enables transfer of packets between the VM and a destination over a network. A virtual machine monitor (VMM) exports a hardware interface to the VM and runs on a kernel, which forms a system software layer between the VMM and the system hardware. Pending packets (both transmit and receive) issued by the VM are stored in a memory region that is shared by, that is, addressable by, the VM, the VMM, and the kernel. Rather than always transferring each packet as it is issued, packets are clustered in the shared memory region until a trigger event occurs, whereupon the cluster of packets is passed as a group to the physical network interface device. Optional mechanisms are included to prevent packets from waiting too long in the shared memory space before being transferred to the network. An interrupt offloading mechanism is also disclosed for use in multiprocessor systems such that it is in most cases unnecessary to interrupt the VM in order to request a VMM action, and the need for VMM-to-kernel context transitions is reduced. | 06-30-2016 |
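The clustering behavior this abstract describes (accumulate pending packets in the shared region, then pass the whole cluster to the device when a trigger event fires, with a fallback so packets never wait indefinitely) can be sketched as a batching queue; `PacketClusterer` and its parameters are illustrative, not from the patent:

```python
class PacketClusterer:
    """Clusters pending packets and flushes them to the device as a group
    when a trigger event (here, a batch-size threshold) occurs."""
    def __init__(self, nic_send, batch_size=4):
        self.pending = []          # stands in for the shared memory region
        self.nic_send = nic_send   # hands a packet cluster to the device
        self.batch_size = batch_size

    def transmit(self, packet):
        self.pending.append(packet)
        if len(self.pending) >= self.batch_size:   # trigger event
            self.flush()

    def flush(self):
        # Pass the cluster as a group; also used as the timer-driven
        # fallback that keeps packets from waiting too long.
        if self.pending:
            self.nic_send(list(self.pending))
            self.pending.clear()

sent = []
clusterer = PacketClusterer(sent.append, batch_size=3)
for p in ("p1", "p2", "p3", "p4"):
    clusterer.transmit(p)
clusterer.flush()   # fallback flush for the stragglers
```

The real system triggers on events such as a full ring or a timeout in the VMM/kernel path; only the batching logic is modeled here.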
20160378576 | FIRMWARE-RELATED EVENT NOTIFICATION - This disclosure is directed to firmware-related event notification. A device may comprise an operating system (OS) configured to operate on a platform. During initialization of the device a firmware module in the platform may load at least one globally unique identifier (GUID) into a firmware configuration table. When the platform notifies the OS, the firmware module may load at least one GUID into a platform notification table and may set a platform notification bit in a platform notification table status field. Upon detecting the notification, an OS management module may establish a source of the notification by querying the platform notification table. The platform notification bit may cause the OS management module to compare GUIDs in the platform notification table and the firmware configuration table. Services may be called based on any matching GUIDs. If no GUIDs match, the services may be called based on firmware variables in the device. | 12-29-2016 |