Patent application number | Description | Published |
20120033880 | IMAGE PROCESSING SYSTEMS EMPLOYING IMAGE COMPRESSION AND ACCELERATED IMAGE DECOMPRESSION - A system for processing an image includes a non-transitory memory component storing a set of executable instructions, and a scalable tile processing device. The executable instructions cause the system to receive image data, partition the image data into tiles, transmit a tile to the scalable tile processing device, receive an encoded bit stream corresponding to the transmitted tile from the tile processing device, output compressed image data including the encoded bit stream, receive the compressed image data, decode the compressed image data to generate a plurality of decoded code blocks, and output decompressed image data including the plurality of decoded code blocks. The scalable tile processing device receives the tile including tile image data, wavelet transforms, quantizes, segments, and encodes the tile image data to generate a plurality of encoded code blocks, and transmits an encoded bit stream including the plurality of encoded code blocks to the system. | 02-09-2012 |
20120033881 | IMAGE PROCESSING SYSTEMS EMPLOYING IMAGE COMPRESSION AND ACCELERATED DECOMPRESSION - A system for processing an image includes a non-transitory memory component storing a set of executable instructions, and a scalable tile processing device. The executable instructions cause the system to receive image data, partition the image data into tiles, transmit a tile to the scalable tile processing device, receive an encoded bit stream corresponding to the transmitted tile from the tile processing device, output compressed image data including the encoded bit stream, receive the compressed image data, decode the compressed image data to generate a plurality of decoded code blocks, and output decompressed image data including the plurality of decoded code blocks. The scalable tile processing device receives the tile including tile image data, wavelet transforms, quantizes, segments, and encodes the tile image data to generate a plurality of encoded code blocks, and transmits an encoded bit stream including the plurality of encoded code blocks to the system. | 02-09-2012 |
20120033886 | IMAGE PROCESSING SYSTEMS EMPLOYING IMAGE COMPRESSION - A system for processing an image includes an image data input port, a compressed image data output port or a compressed image data storage node, a non-transitory memory component storing a set of executable instructions, and a scalable tile processing device. The executable instructions cause the system to receive image data, partition the image data into tiles, transmit a tile to the scalable tile processing device, receive an encoded bit stream corresponding to the transmitted tile from the tile processing device, and output compressed image data including the encoded bit stream. The scalable tile processing device receives the tile including tile image data, wavelet transforms, quantizes, segments, and encodes the tile image data to generate a plurality of encoded code blocks, and transmits an encoded bit stream including the plurality of encoded code blocks to the system. | 02-09-2012 |
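The three abstracts above share one encoding pipeline: partition the image into tiles, wavelet-transform each tile, quantize the coefficients, segment them into code blocks, and encode. A minimal sketch of that pipeline, using a one-level 1-D Haar transform and a trivial flattening step in place of a real entropy coder (all function names, the tile size, and the quantization step are illustrative assumptions, not details from the patents):

```python
# Sketch of the tile pipeline described above: partition -> wavelet
# transform -> quantize -> segment/encode. All names are illustrative.

def partition(image, tile_size):
    """Split a 2-D image (list of rows) into square tiles."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + tile_size] for row in image[y:y + tile_size]]
        for y in range(0, h, tile_size)
        for x in range(0, w, tile_size)
    ]

def haar_1d(values):
    """One level of a 1-D Haar transform: pairwise averages, then differences."""
    pairs = list(zip(values[0::2], values[1::2]))
    return [(a + b) / 2 for a, b in pairs] + [(a - b) / 2 for a, b in pairs]

def quantize(coeffs, step):
    """Uniform quantization of wavelet coefficients."""
    return [round(c / step) for c in coeffs]

def encode_tile(tile, step=2):
    """Transform and quantize each row, then flatten the result into one
    code block per tile (a stand-in for a real entropy coder)."""
    rows = [quantize(haar_1d(row), step) for row in tile]
    return [c for row in rows for c in row]

image = [[1, 3, 5, 7], [2, 4, 6, 8], [1, 1, 1, 1], [9, 9, 9, 9]]
tiles = partition(image, 2)                 # four 2x2 tiles
blocks = [encode_tile(t) for t in tiles]    # one code block per tile
```

Production systems of this kind (e.g., JPEG 2000) use multi-level 2-D wavelet transforms and arithmetic coding of the code blocks; the single Haar level here only illustrates the shape of the pipeline.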
Patent application number | Description | Published |
20090123053 | METHODS AND APPARATUS FOR MODEL-BASED DETECTION OF STRUCTURE IN VIEW DATA - In one aspect, a method and apparatus for determining a value for at least one parameter of a configuration of a model associated with structure of which view data has been obtained including detecting at least one feature in the view data, and determining the value for the at least one parameter of the configuration of the model based at least in part on the at least one feature. In another aspect, a method and apparatus for detecting at least one blood vessel from object view data obtained from a scan of the at least one blood vessel including generating a model of the at least one blood vessel, the model having a plurality of parameters describing a model configuration, determining a hypothesis for the model configuration based, at least in part, on at least one feature detected in the object view data, and updating the model configuration according to a comparison with the object view data to arrive at a final model configuration, so that the final model configuration represents the at least one blood vessel. | 05-14-2009 |
20120114214 | METHODS AND APPARATUS FOR IDENTIFYING SUBJECT MATTER IN VIEW DATA - In one aspect, a method and apparatus for detecting subject matter of interest in view data obtained by scanning an object including generating a filter adapted to respond to the subject matter of interest, splatting the filter onto a portion of the view data to provide a filter splat, and performing at least one operation on the portion of the view data using the filter splat to facilitate determining whether the subject matter of interest is present in the portion of the view data. | 05-10-2012 |
20150139525 | METHODS AND APPARATUS FOR MODEL-BASED DETECTION OF STRUCTURE IN VIEW DATA - In one aspect, a method for determining a value for at least one parameter of a configuration of a model, the model associated with structure of which view data has been obtained from at least one x-ray scanning device capable of producing x-ray radiation, the view data being obtained, at least in part, by scanning at least a portion of the structure, wherein the view data is attenuation data of the x-ray radiation attenuated by the structure as a function of view angle about the structure is provided. The method comprises acts of operating on the view data to detect at least one feature in the view data, and determining the value for the at least one parameter of the configuration of the model based, at least in part, on the at least one feature. | 05-21-2015 |
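The pattern running through these abstracts is hypothesize-then-refine: detect a feature in the view data, seed a model configuration from it, then update the configuration by comparing the model's predicted data against the observed view data. A toy 1-D sketch under invented names (a box-profile "vessel" model fit by grid search; none of these functions or parameters come from the patents themselves):

```python
# Toy sketch of model-based detection in 1-D "view data". A vessel is
# modeled as a box profile with two parameters (center, width); a
# detected feature seeds the search, and the configuration is refined
# by comparing predicted data to the observed data. Names are invented.

def predict(center, width, n=32):
    """Predicted attenuation profile for a box 'vessel' model."""
    return [1.0 if abs(i - center) <= width else 0.0 for i in range(n)]

def error(model, observed):
    """Sum-of-squares disagreement between prediction and data."""
    return sum((m - o) ** 2 for m, o in zip(model, observed))

def fit(observed):
    n = len(observed)
    # Feature detection: the strongest sample seeds the search region.
    center0 = max(range(n), key=observed.__getitem__)
    # Refinement: pick the configuration whose prediction best matches.
    return min(
        ((c, w) for c in range(center0, center0 + 7) for w in range(1, 8)),
        key=lambda cw: error(predict(cw[0], cw[1], n), observed),
    )

observed = predict(12, 3)        # synthetic view data: a vessel at 12
center, width = fit(observed)
```

The real methods operate on x-ray attenuation data as a function of view angle and use far richer models; the point of the sketch is only the seed-then-compare-then-update loop.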
Patent application number | Description | Published |
20090271300 | Ad-hoc updates to source transactions - Systems, methods, and other embodiments associated with handling a change to a transaction at an application level are described. One exemplary method includes receiving, in a sub-ledger accounting (SLA) logic, from a sub-ledger logic, data that characterizes a transaction(s) receivable from a sub-ledger. The data includes a field of interest identifier and a downstream column impact identifier that identifies a column that is affected by a change to the field of interest. The method includes storing the data and processing transactions from the sub-ledger in light of the stored data. The method includes selectively storing a difference between a value associated with a previously processed version of a transaction and a value provided in a changed transaction. The value may be stored in a new transaction to reconcile the difference between the stored value and the changed transaction value. | 10-29-2009 |
20100063910 | PROVIDING A UNIFIED VIEW OF CONTRACT REVENUE AND INVOICE DETAILS - Systems and methods are provided that provide a unified view of invoice and revenue information for a contract. One embodiment includes receiving a request to display information about a contract, and displaying, in response to the request, a financial summary interface including invoice and revenue information for the contract in the same financial summary interface. The invoice and revenue information for the contract may include contract value, invoiced amount, accrued revenue, and backlog amount. | 03-11-2010 |
20110016084 | DATA INTEGRATION BETWEEN PROJECT SYSTEMS - A project systems integrator integrates a financial planning system and an operational planning system for a project. The integrator loads a work breakdown structure (“WBS”) from the financial planning system and another WBS from the operational planning system. The integrator then records links between corresponding nodes of the financial WBS and operational WBS. When data is entered, updated, or otherwise changed, the data is propagated between the nodes in accordance with the links. | 01-20-2011 |
20110016387 | DOCUMENT COLLABORATION SYSTEM WITH ALTERNATIVE VIEWS - A system provides document collaboration for a plurality of users. The system divides a central document into a plurality of sections. The system then assigns edit rights for a user for one or more sections, and read-only rights for the user for one or more sections. The system then generates a customized document for the user that includes the edit rights sections and the read-only rights sections. | 01-20-2011 |
20110022437 | ENABLING COLLABORATION ON A PROJECT PLAN - Systems, methods, and software applications for enabling collaboration on a project plan are described in the present disclosure. A computer readable medium is configured to store instructions that are executable by a processing device. According to one embodiment, among many, the computer readable medium includes logic adapted to enable a member of a project team to submit a proposal for modifying a current project plan to a project manager. The computer readable medium also includes logic adapted to enable the project manager to accept or reject the proposal for modifying the current project plan. Various team members make changes to a single shared copy of the project plan. The changes can be to a respective team member's section of the plan. | 01-27-2011 |
20110099432 | MAPPING USAGE PATTERNS USING CODE FEATURES - A usage pattern detector includes a determining module configured to determine that a monitored code feature of a software application has been executed on a first computer. The usage pattern detector also includes a recording module configured to record an indication that the monitored code feature has been used and an indication providing module configured to provide the indication that the monitored code feature has been used to a second computer. | 04-28-2011 |
20130151421 | REAL-TIME PROJECT PROGRESS ENTRY: APPLYING PROJECT TEAM MEMBER-ENTERED PROGRESS IMMEDIATELY TO THE PROJECT PLAN - Embodiments of this invention relate generally to updating a project plan in accordance with an input. A user may have a limited set of privileges to update the project plan compared to a manager. The manager may provide a threshold value relating to a type of change that may be made to a master version project plan. Next, the user may access the master project plan, and provide an input relating to a proposed change. From the change, a change value may be derived, and the change value may be compared to the threshold value to determine whether the change value violates the threshold value. If the change value violates the threshold value, a change exception may be generated, and the manager may be notified that the proposed change requires review. If the change value does not violate the threshold value, then the master project plan may be immediately updated. | 06-13-2013 |
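The threshold mechanism in the last abstract is easy to make concrete: a manager sets a threshold on how large a change a team member may apply directly; changes within it update the master plan immediately, while larger ones raise a change exception for review. A minimal sketch with invented names, modeling the plan as a task-to-hours mapping:

```python
# Sketch of threshold-gated progress entry: within-threshold changes
# update the master plan immediately; larger ones raise a change
# exception for manager review. Field names are assumptions.

def submit_change(plan, task, proposed_hours, threshold):
    """Return (updated_plan, change_exception)."""
    delta = abs(proposed_hours - plan[task])
    if delta > threshold:
        exception = {"task": task, "delta": delta, "status": "needs review"}
        return plan, exception          # plan left unchanged
    updated = dict(plan)
    updated[task] = proposed_hours      # applied immediately
    return updated, None

plan = {"design": 40, "build": 80}
plan, exc = submit_change(plan, "design", 44, threshold=5)    # small change
plan, exc2 = submit_change(plan, "build", 120, threshold=5)   # too large
```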
Patent application number | Description | Published |
20100169778 | System and method for browsing, selecting and/or controlling rendering of media with a mobile device - A system and a method enable browsing, selecting and/or controlling rendering of media with a mobile device. Content from multiple content sources is aggregated, and a default content source and/or a default rendering device used for subsequent media selection are established. Individual media playback shortcuts are associated with specific default rendering devices. A user may navigate within a collection of content without a need to repeat full content browsing and selection. A user interface of a multimedia player and/or playback controller on the mobile device displays active metadata tags. By selecting an active metadata tag, the user may access a list of associated metadata tag values. By selecting a tag value, the user may select a set of content associated with the tag value for rendering. | 07-01-2010 |
20110183651 | System and method for requesting, retrieving and/or associating contact images on a mobile device - A system and a method request, retrieve and/or associate contact images on a mobile device. An image retrieval application executed by a first mobile device may enable a first user to request images corresponding to one or more contact entries stored by a contact database of the first mobile device and/or associated with second users. The image retrieval application may query one or more image databases accessible to the first mobile device to obtain a first image and/or to associate the first image with a corresponding contact entry in the contact database of the first mobile device. If the image databases do not provide the first image requested by the image retrieval application, the image retrieval application may create and/or may send a request message to the second user, such as, for example, by sending the request message to a second mobile device associated with the second user. | 07-28-2011 |
20120232681 | System and method for using a list of audio media to create a list of audiovisual media - A system and a method use a list of audio media to create a list of audiovisual media. A user of a computing device may create, may access, may edit and/or may use a list of audio media objects, such as, for example, an audio playlist. The user may request generation of a list of audiovisual media objects which correspond to the audio media objects in the list of audio media objects. The user may request generation of the list of audiovisual media objects using a user interface on the computing device. The list of audio media objects may be provided to a list conversion engine which may discover, create, and/or obtain audiovisual media objects which correspond to the audio media objects in the list of audio media objects. | 09-13-2012 |
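The list conversion engine in the last abstract can be sketched as a lookup from audio tracks to corresponding audiovisual objects; the catalog, key fields, and URI scheme below are illustrative assumptions, not details from the patent:

```python
# Sketch of a "list conversion engine": map each audio object in a
# playlist to a corresponding audiovisual object. The catalog, keys,
# and URI scheme are illustrative assumptions.

VIDEO_CATALOG = {
    ("Artist A", "Song 1"): "video://artist-a/song-1",
    ("Artist B", "Song 2"): "video://artist-b/song-2",
}

def convert_playlist(audio_playlist):
    """Build a parallel list of audiovisual objects, skipping tracks
    for which no audiovisual match can be discovered."""
    videos = []
    for track in audio_playlist:
        match = VIDEO_CATALOG.get((track["artist"], track["title"]))
        if match is not None:
            videos.append({"source": match, "audio": track})
    return videos

playlist = [
    {"artist": "Artist A", "title": "Song 1"},
    {"artist": "Artist C", "title": "Song 3"},   # no video available
]
video_list = convert_playlist(playlist)
```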
Patent application number | Description | Published |
20090024830 | Executing Multiple Instructions Multiple Data ('MIMD') Programs on a Single Instruction Multiple Data ('SIMD') Machine - Executing Multiple Instructions Multiple Data (‘MIMD’) programs on a Single Instruction Multiple Data (‘SIMD’) machine, the SIMD machine including a plurality of compute nodes, each compute node capable of executing only a single thread of execution, the compute nodes initially configured exclusively for SIMD operations, the SIMD machine further comprising a data communications network, the network comprising synchronous data communications links among the compute nodes, including establishing a SIMD partition comprising a plurality of the compute nodes; booting the SIMD partition in MIMD mode; executing by launcher programs a plurality of MIMD programs on compute nodes in the SIMD partition; and re-executing a launcher program by an operating system on a compute node in the SIMD partition upon termination of the MIMD program executed by the launcher program. | 01-22-2009 |
20100017655 | Error Recovery During Execution Of An Application On A Parallel Computer - Methods, apparatus, and products are disclosed for error recovery during execution of an application on a parallel computer that includes a plurality of compute nodes. Such error recovery includes: storing, by the application during execution on the nodes, application restore data in a restore buffer at predetermined points during execution of the application, the restore data specifying an execution state of the application at one or more points during application execution; encountering, by at least one of the nodes executing the application, a recoverable error during application execution; determining, by the application, the nodes affected by the recoverable error; restarting, by each of the affected nodes, execution of the application; retrieving, by the restarted application executing on each of the affected nodes, the restore data from the restore buffer; and continuing, by each affected node, execution of the application with the execution state specified by the retrieved restore data. | 01-21-2010 |
20110219208 | MULTI-PETASCALE HIGHLY EFFICIENT PARALLEL SUPERCOMPUTER - A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency. | 09-08-2011 |
20120331065 | Messaging In A Parallel Computer Using Remote Direct Memory Access ('RDMA') - Messaging in a parallel computer using remote direct memory access (‘RDMA’), including: receiving a send work request; responsive to the send work request: translating a local virtual address on the first node from which data is to be transferred to a physical address on the first node from which data is to be transferred from; creating a local RDMA object that includes a counter set to the size of a messaging acknowledgment field; sending, from a messaging unit in the first node to a messaging unit in a second node, a message that includes a RDMA read operation request, the physical address of the local RDMA object, and the physical address on the first node from which data is to be transferred from; and receiving, by the first node responsive to the second node's execution of the RDMA read operation request, acknowledgment data in the local RDMA object. | 12-27-2012 |
20120331153 | Establishing A Data Communications Connection Between A Lightweight Kernel In A Compute Node Of A Parallel Computer And An Input-Output ('I/O') Node Of The Parallel Computer - Establishing a data communications connection between a lightweight kernel in a compute node of a parallel computer and an input-output (‘I/O’) node of the parallel computer, including: configuring the compute node with the network address and port value for data communications with the I/O node; establishing a queue pair on the compute node, the queue pair identified by a queue pair number (‘QPN’); receiving, in the I/O node on the parallel computer from the lightweight kernel, a connection request message; establishing by the I/O node on the I/O node a queue pair identified by a QPN for communications with the compute node; and establishing by the I/O node the requested connection by sending to the lightweight kernel a connection reply message. | 12-27-2012 |
20120331243 | Remote Direct Memory Access ('RDMA') In A Parallel Computer - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes a DMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node. | 12-27-2012 |
20130080564 | MESSAGING IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Messaging in a parallel computer using remote direct memory access (‘RDMA’), including: receiving a send work request; responsive to the send work request: translating a local virtual address on the first node from which data is to be transferred to a physical address on the first node from which data is to be transferred from; creating a local RDMA object that includes a counter set to the size of a messaging acknowledgment field; sending, from a messaging unit in the first node to a messaging unit in a second node, a message that includes a RDMA read operation request, the physical address of the local RDMA object, and the physical address on the first node from which data is to be transferred from; and receiving, by the first node responsive to the second node's execution of the RDMA read operation request, acknowledgment data in the local RDMA object. | 03-28-2013 |
20130091236 | REMOTE DIRECT MEMORY ACCESS ('RDMA') IN A PARALLEL COMPUTER - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes a DMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node. | 04-11-2013 |
20130103926 | ESTABLISHING A DATA COMMUNICATIONS CONNECTION BETWEEN A LIGHTWEIGHT KERNEL IN A COMPUTE NODE OF A PARALLEL COMPUTER AND AN INPUT-OUTPUT ('I/O') NODE OF THE PARALLEL COMPUTER - Establishing a data communications connection between a lightweight kernel in a compute node of a parallel computer and an input-output (‘I/O’) node of the parallel computer, including: configuring the compute node with the network address and port value for data communications with the I/O node; establishing a queue pair on the compute node, the queue pair identified by a queue pair number (‘QPN’); receiving, in the I/O node on the parallel computer from the lightweight kernel, a connection request message; establishing by the I/O node on the I/O node a queue pair identified by a QPN for communications with the compute node; and establishing by the I/O node the requested connection by sending to the lightweight kernel a connection reply message. | 04-25-2013 |
20130185375 | CONFIGURING COMPUTE NODES IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node. | 07-18-2013 |
20130185381 | Configuring Compute Nodes In A Parallel Computer Using Remote Direct Memory Access ('RDMA') - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node. | 07-18-2013 |
20130263138 | Collectively Loading An Application In A Parallel Computer - Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job. | 10-03-2013 |
20130339805 | Aggregating Job Exit Statuses Of A Plurality Of Compute Nodes Executing A Parallel Application - Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer. | 12-19-2013 |
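The exit-status aggregation in the final abstract reduces one status per compute node into a single summary that a control system can act on. A minimal sketch (the field names and the worst-code-wins reduction rule are my assumptions, not the patent's):

```python
# Sketch of job exit-status aggregation: a job leader collects one
# exit status per compute node in the subset and reduces them to a
# single aggregated status. Field names and the worst-code-wins rule
# are assumptions.

def aggregate_exit_statuses(statuses):
    """statuses maps node id -> exit code; nonzero marks a failure."""
    failed = sorted(n for n, code in statuses.items() if code != 0)
    return {
        "nodes": len(statuses),
        "failed": failed,
        "exit_code": max(statuses.values(), default=0),  # worst code wins
    }

statuses = {0: 0, 1: 0, 2: 1, 3: 0}    # node 2 reported a failure
summary = aggregate_exit_statuses(statuses)
```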