Asaro
Anthony Asaro, Toronto CA
Patent application number | Description | Published |
---|---|---|
20080250212 | METHOD AND APPARATUS FOR ACCESSING MEMORY USING PROGRAMMABLE MEMORY ACCESSING INTERLEAVING RATIO INFORMATION - A method and apparatus stores data representing a non-1:1 memory access interleaving ratio for accessing a plurality of memories. The method and apparatus interleaves memory accesses to at least either a first memory that is accessible via a first (and associated memory) bus having first characteristics or a second memory accessible via a second bus having different characteristics, based on the data representing the non-1:1 interleaving memory access ratio. | 10-09-2008 |
20100162256 | OPTIMIZATION OF APPLICATION POWER CONSUMPTION AND PERFORMANCE IN AN INTEGRATED SYSTEM ON A CHIP - A method for determining an operating point of a shared resource. The method includes receiving indications of access demand to a shared resource from each of a plurality of functional units and determining a maximum access demand from among the plurality of functional units based on their respective indications. The method further includes determining a required operating point of the shared resource based on the maximum access demand, wherein the shared resource is shared by each of the plurality of functional units, comparing the required operating point to a present operating point of the shared resource, and changing to the required operating point from the present operating point if the required and present operating points are different. | 06-24-2010 |
20110057939 | Reading a Local Memory of a Processing Unit - Disclosed herein are systems, apparatuses, and methods for enabling efficient reads to a local memory of a processing unit. In an embodiment, a processing unit includes an interface and a buffer. The interface is configured to (i) send a request for a portion of data in a region of a local memory of an other processing unit and (ii) receive, responsive to the request, all the data from the region. The buffer is configured to store the data from the region of the local memory of the other processing unit. | 03-10-2011 |
20110219190 | CACHE WITH RELOAD CAPABILITY AFTER POWER RESTORATION - A method and apparatus for repopulating a cache are disclosed. At least a portion of the contents of the cache are stored in a location separate from the cache. Power is removed from the cache and is restored some time later. After power has been restored to the cache, it is repopulated with the portion of the contents of the cache that were stored separately from the cache. | 09-08-2011 |
20110264934 | METHOD AND APPARATUS FOR MEMORY POWER MANAGEMENT - A method for power management is disclosed. The method may include monitoring requests for access to a memory of a memory subsystem by one or more processor cores; and monitoring requests for access to the memory conveyed by an input/output (I/O) unit. The method may further include determining if at least a first amount of time has elapsed since any one of the processor cores has asserted a memory access request and determining if at least a second amount of time has elapsed since the I/O unit has conveyed a memory access request. A first signal may be asserted if the first and second amounts of time have elapsed. A memory subsystem may be transitioned from operating in a full power state to a first low power state responsive to assertion of the first signal. | 10-27-2011 |
20130138840 | Efficient Memory and Resource Management - The present system enables passing a pointer, associated with accessing data in a memory, to an input/output (I/O) device via an input/output memory management unit (IOMMU). The I/O device accesses the data in the memory via the IOMMU without copying the data into a local I/O device memory. The I/O device can perform an operation on the data in the memory based on the pointer, such that the I/O device accesses the memory without expensive copies. | 05-30-2013 |
20130145055 | Peripheral Memory Management - The present system enables an input/output (I/O) device to request memory for performing a direct memory access (DMA) of system memory. Further, the system uses an input/output memory management unit (IOMMU) to determine whether or not the system memory is available. The IOMMU notifies an operating system associated with the system memory if the system memory is not available, such that the operating system allocates non-system memory for use by the I/O device to perform the DMA. | 06-06-2013 |
20130174144 | HARDWARE BASED VIRTUALIZATION SYSTEM - A method for changing between virtual machines on a graphics processing unit (GPU) includes requesting to switch from a first virtual machine (VM) with a first global context to a second VM with a second global context; stopping taking of new commands in the first VM; saving the first global context; and switching out of the first VM. | 07-04-2013 |
20130262775 | Cache Management for Memory Operations - Embodiments of the present invention provide for the execution of threads and/or work-items on multiple processors of a heterogeneous computing system in a manner that allows them to share data correctly and efficiently. Disclosed method, system, and article of manufacture embodiments include, responsive to an instruction from a sequence of instructions of a work-item, determining an ordering of visibility to other work-items of one or more other data items in relation to a particular data item, and performing at least one cache operation upon at least one of the particular data item or the other data items present in any one or more cache memories in accordance with the determined ordering. The semantics of the instruction includes a memory operation upon the particular data item. | 10-03-2013 |
20130262776 | Managing Coherent Memory Between an Accelerated Processing Device and a Central Processing Unit - Existing multiprocessor computing systems often have insufficient memory coherency and, consequently, are unable to efficiently utilize separate memory systems. Specifically, a CPU cannot effectively write to a block of memory and then have a GPU access that memory unless there is explicit synchronization. In addition, because the GPU is forced to statically split memory locations between itself and the CPU, existing multiprocessor computing systems are unable to efficiently utilize the separate memory systems. Embodiments described herein overcome these deficiencies by receiving a notification within the GPU that the CPU has finished processing data that is stored in coherent memory, and invalidating data in the CPU caches that the GPU has finished processing from the coherent memory. Embodiments described herein also include dynamically partitioning a GPU memory into coherent memory and local memory through use of a probe filter. | 10-03-2013 |
20130262784 | Memory Heaps in a Memory Model for a Unified Computing System - A method and system for allocating memory to a memory operation executed by a processor in a computer arrangement having a first processor configured for unified operation with a second processor. The method includes receiving a memory operation from a processor and mapping the memory operation to one of a plurality of memory heaps. The mapping produces a mapping result. The method also includes providing the mapping result to the processor. | 10-03-2013 |
20130262814 | Mapping Memory Instructions into a Shared Memory Address Space - Embodiments of the present invention provide a method of a first processor using a memory resource associated with a second processor. The method includes receiving a memory instruction from a first processor process, wherein the memory instruction refers to a shared memory address (SMA) that maps to a second processor memory. The method also includes mapping the SMA to the second processor memory, wherein the mapping produces a mapping result and providing the mapping result to the first processor. | 10-03-2013 |
20130263141 | Visibility Ordering in a Memory Model for a Unified Computing System - Provided is a method of permitting the reordering of a visibility order of operations in a computer arrangement configured to permit threads of a first processor and a second processor to access a shared memory. The method includes receiving in a program order, a first and a second operation in a first thread and permitting the reordering of the visibility order for the operations in the shared memory based on the class of each operation. The visibility order determines the visibility in the shared memory, by a second thread, of stored results from the execution of the first and second operations. | 10-03-2013 |
20140040560 | All Invalidate Approach for Memory Management Units - An input/output memory management unit (IOMMU) having an “invalidate all” command available to clear the contents of cache memory is presented. The cache memory provides fast access to address translation data that has been previously obtained by a process. A typical cache memory includes device tables, page tables and interrupt remapping entries. Cache memory data can become stale or be compromised from security breaches or malfunctioning devices. In these circumstances, a rapid approach to clearing cache memory content is provided. | 02-06-2014 |
20140380028 | Virtualized Device Reset - In a hardware-based virtualization system, a hypervisor switches out of a first function into a second function. The first function is one of a physical function and a virtual function and the second function is one of a physical function and a virtual function. During the switching a malfunction of the first function is detected. The first function is reset without resetting the second function. The switching, detecting, and resetting operations are performed by a hypervisor of the hardware-based virtualization system. Embodiments further include a communication mechanism for the hypervisor to notify a driver of the function that was reset to enable the driver to restore the function without delay. | 12-25-2014 |
20150120978 | INPUT/OUTPUT MEMORY MAP UNIT AND NORTHBRIDGE - The present invention provides for page table access and dirty bit management in hardware via a new atomic test and OR and Mask. The present invention also provides for a gasket that enables ACE to CCI translations. This gasket further provides request translation between ACE and CCI, deadlock avoidance for victim and probe collision, ARM barrier handling, and power management interactions. The present invention also provides a solution for ARM victim/probe collision handling which deadlocks the unified northbridge. These solutions include a dedicated writeback virtual channel, probes for IO requests using a 4-hop protocol, and a WrBack Reorder Ability in MCT where victims update older requests with data as they pass the requests. | 04-30-2015 |
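The programmable non-1:1 interleave ratio of application 20080250212 above can be illustrated with a short sketch. This is a toy model only: the function names, the slot-based round-robin pattern, and the "faster bus first" ordering are assumptions for illustration, not details taken from the patent.

```python
# Toy sketch of a programmable non-1:1 memory-interleave selector.
# Hypothetical names; the real apparatus stores the ratio in hardware.

def make_interleaver(ratio_a, ratio_b):
    """Return a selector mapping a request index to bus 'A' or 'B'
    according to a programmable ratio_a:ratio_b interleave pattern."""
    period = ratio_a + ratio_b

    def select_bus(request_index):
        # The first ratio_a slots of each period go to bus A,
        # the remaining ratio_b slots to bus B.
        return "A" if request_index % period < ratio_a else "B"

    return select_bus

# 3:1 interleave: three accesses on bus A for every one on bus B.
select = make_interleaver(3, 1)
pattern = [select(i) for i in range(8)]
print(pattern)  # ['A', 'A', 'A', 'B', 'A', 'A', 'A', 'B']
```

A non-1:1 ratio like this lets the memory controller bias traffic toward the bus with higher bandwidth instead of splitting accesses evenly.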
Anthony Asaro, Scarborough CA
Patent application number | Description | Published |
---|---|---|
20090077274 | Multi-Priority Communication in a Differential Serial Communication Link - A circuit includes a high priority circuit and a non-high priority circuit. The high priority circuit is operative to communicate high priority information to a single path of a differential serial communication link. The non-high priority circuit communicates non-high priority information to the single path. The high priority information is communicated prior to the non-high priority information. In one example, the circuit includes a flow control distributor operatively coupled to the high priority circuit and the non-high priority circuit. The flow control distributor distributes a total number of flow control credits into high priority credits and non-high priority credits. The flow control distributor controls communication of the high priority information based on the high priority credits. The flow control distributor controls communication of the non-high priority information based on the non-high priority credits. | 03-19-2009 |
20090307406 | Memory Device for Providing Data in a Graphics System and Method and Apparatus Thereof - A central processor unit (CPU) is connected to a system/graphics controller generally comprising a monolithic semiconductor device. The system/graphics controller is connected to an input output (IO) controller via a high-speed PCI bus. The IO controller interfaces to the system graphics controller via the high-speed PCI bus. The IO controller includes a lower speed PCI port controlled by an arbiter within the IO controller. Generally, the low speed PCI arbiter of the IO controller will interface to standard 33 MHz PCI cards. In addition, the IO controller interfaces to an external storage device, such as a hard drive, via either a standard or a proprietary bus protocol. A unified system/graphics memory is accessed by the system/graphics controller. The unified memory contains both system data and graphics data. In a specific embodiment, two channels, CH0 and CH1, access the unified memory. Each channel is capable of accessing a portion of memory containing graphics data or a portion of memory containing system data. | 12-10-2009 |
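The credit-split flow control of application 20090077274 above can be sketched in a few lines. The class name, the fractional split, and the one-credit-per-send policy are illustrative assumptions; the patent describes the mechanism in hardware terms.

```python
# Illustrative sketch of a flow control distributor that splits a total
# credit pool into high-priority and non-high-priority credits.

class FlowControlDistributor:
    def __init__(self, total_credits, high_fraction=0.5):
        # Divide the total pool between the two traffic classes.
        self.high_credits = int(total_credits * high_fraction)
        self.low_credits = total_credits - self.high_credits

    def try_send(self, high_priority):
        """Consume one credit from the matching pool; return True on success."""
        if high_priority:
            if self.high_credits > 0:
                self.high_credits -= 1
                return True
            return False
        if self.low_credits > 0:
            self.low_credits -= 1
            return True
        return False

fc = FlowControlDistributor(total_credits=4, high_fraction=0.5)
print(fc.try_send(high_priority=True))   # True (high pool: 2 -> 1)
print(fc.try_send(high_priority=False))  # True (low pool: 2 -> 1)
```

Gating each class on its own credit pool is what keeps non-high-priority traffic from starving high-priority traffic on the shared serial link.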
Anthony Asaro, Ontario CA
Patent application number | Description | Published |
---|---|---|
20130262736 | MEMORY TYPES FOR CACHING POLICIES - The present system enables receiving a request from an I/O device to translate a virtual address to a physical address to access the page in system memory. One or more memory attributes of the page defining a cacheability characteristic of the page is identified. A response including the physical address and the cacheability characteristic of the page is sent to the I/O device. | 10-03-2013 |
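The translation response of application 20130262736 above, which returns a cacheability attribute alongside the physical address, can be modeled minimally. The page-table layout, page size, and attribute names here are invented for illustration.

```python
# Minimal sketch of an address translation that carries a page's
# cacheability attribute alongside the physical address.

PAGE_SHIFT = 12  # assume 4 KiB pages

# Hypothetical page table: virtual page number -> (physical page number, attr)
page_table = {
    0x10: (0x200, "write-back"),
    0x11: (0x201, "uncacheable"),
}

def translate(virtual_addr):
    """Return (physical_addr, cacheability) for an I/O translation request."""
    vpn = virtual_addr >> PAGE_SHIFT
    offset = virtual_addr & ((1 << PAGE_SHIFT) - 1)
    ppn, attr = page_table[vpn]
    return (ppn << PAGE_SHIFT) | offset, attr

pa, attr = translate(0x100A4)
print(hex(pa), attr)  # 0x2000a4 write-back
```

Returning the attribute in the response lets the requesting I/O device honor the page's caching policy without a second lookup.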
Anthony Asaro, Toronto CA
Patent application number | Description | Published |
---|---|---|
20140040565 | Shared Memory Space in a Unified Memory Model - Methods and systems are provided for mapping a memory instruction to a shared memory address space in a computer arrangement having a CPU and an APD. A method includes receiving a memory instruction that refers to an address in the shared memory address space, mapping the memory instruction based on the address to a memory resource associated with either the CPU or the APD, and performing the memory instruction based on the mapping. | 02-06-2014 |
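The shared-address-space routing of application 20140040565 above can be sketched as a simple range check. The address ranges and return values are assumptions made for illustration; the patent does not specify a partitioning scheme.

```python
# Rough sketch of routing a memory instruction through a shared address
# space to either the CPU or the APD memory resource.

# Hypothetical partition of the shared address space.
CPU_RANGE = range(0x0000, 0x8000)
APD_RANGE = range(0x8000, 0x10000)

def map_instruction(address):
    """Map a shared-space address to the backing memory resource
    and the offset within that resource."""
    if address in CPU_RANGE:
        return ("CPU", address - CPU_RANGE.start)
    if address in APD_RANGE:
        return ("APD", address - APD_RANGE.start)
    raise ValueError("address outside the shared address space")

print(map_instruction(0x0123))  # ('CPU', 291) -- offset 0x123 into CPU memory
print(map_instruction(0x8123))  # ('APD', 291) -- offset 0x123 into APD memory
```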
Anthony Asaro, Markham CA
Patent application number | Description | Published |
---|---|---|
20150363310 | MEMORY HEAPS IN A MEMORY MODEL FOR A UNIFIED COMPUTING SYSTEM - A method and system for allocating memory to a memory operation executed by a processor in a computer arrangement having a first processor configured for unified operation with a second processor. The method includes receiving a memory operation from a processor and mapping the memory operation to one of a plurality of memory heaps. The mapping produces a mapping result. The method also includes providing the mapping result to the processor. | 12-17-2015 |
Antonio Asaro, Scarborough CA
Patent application number | Description | Published |
---|---|---|
20150154735 | MEMORY DEVICE FOR PROVIDING DATA IN A GRAPHICS SYSTEM AND METHOD AND APPARATUS THEREOF - A central processor unit (CPU) is connected to a system/graphics controller generally comprising a monolithic semiconductor device. The system/graphics controller is connected to an input output (IO) controller via a high-speed PCI bus. The IO controller interfaces to the system graphics controller via the high-speed PCI bus. The IO controller includes a lower speed PCI port controlled by an arbiter within the IO controller. Generally, the low speed PCI arbiter of the IO controller will interface to standard 33 MHz PCI cards. In addition, the IO controller interfaces to an external storage device, such as a hard drive, via either a standard or a proprietary bus protocol. A unified system/graphics memory is accessed by the system/graphics controller. The unified memory contains both system data and graphics data. In a specific embodiment, two channels, CH0 and CH1, access the unified memory. Each channel is capable of accessing a portion of memory containing graphics data or a portion of memory containing system data. | 06-04-2015 |
Antonio Asaro, Toronto CA
Patent application number | Description | Published |
---|---|---|
20100281231 | HIERARCHICAL MEMORY ARBITRATION TECHNIQUE FOR DISPARATE SOURCES - A hierarchical memory request stream arbitration technique merges coherent memory request streams from multiple memory request sources and arbitrates the merged coherent memory request stream with requests from a non-coherent memory request stream. In at least one embodiment of the invention, a method of generating a merged memory request stream from a plurality of memory request streams includes merging coherent memory requests into a first serial memory request stream. The method includes selecting, by a memory controller circuit, a memory request for placement in the merged memory request stream from at least the first serial memory request stream and a merged non-coherent request stream. The merged non-coherent memory request stream is at least partially based on an indicator of a previous memory request selected for placement in the merged memory request stream. | 11-04-2010 |
20120331226 | HIERARCHICAL MEMORY ARBITRATION TECHNIQUE FOR DISPARATE SOURCES - A hierarchical memory request stream arbitration technique merges coherent memory request streams from multiple memory request sources and arbitrates the merged coherent memory request stream with requests from a non-coherent memory request stream. In at least one embodiment of the invention, a method of generating a merged memory request stream from a plurality of memory request streams includes merging coherent memory requests into a first serial memory request stream. The method includes selecting, by a memory controller circuit, a memory request for placement in the merged memory request stream from at least the first serial memory request stream and a merged non-coherent request stream. The merged non-coherent memory request stream is based on an indicator of a previous memory request selected for placement in the merged memory request stream. | 12-27-2012 |
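The two-level arbitration of applications 20100281231 and 20120331226 above can be sketched as follows. All policies here (round-robin merging at the first level, strict alternation at the second) are illustrative assumptions; the patents describe the selection as being guided by an indicator of the previously selected request.

```python
# Toy two-level arbiter: coherent sources are merged first, then the
# merged coherent stream is arbitrated against non-coherent requests.

def merge_round_robin(streams):
    """First level: merge coherent request streams round-robin."""
    merged = []
    pending = [list(s) for s in streams]
    while any(pending):
        for s in pending:
            if s:
                merged.append(s.pop(0))
    return merged

def arbitrate(coherent, non_coherent):
    """Second level: alternate between the merged coherent stream and
    the non-coherent stream while both have pending requests."""
    out = []
    c, n = list(coherent), list(non_coherent)
    take_coherent = True
    while c or n:
        if (take_coherent and c) or not n:
            out.append(c.pop(0))
        else:
            out.append(n.pop(0))
        take_coherent = not take_coherent
    return out

merged = merge_round_robin([["c0a", "c0b"], ["c1a"]])
final = arbitrate(merged, ["io0", "io1"])
print(final)  # ['c0a', 'io0', 'c1a', 'io1', 'c0b']
```

The hierarchy matters: serializing the coherent sources first means the second-level arbiter sees only two streams, which keeps the final selection logic simple.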
Carlo Asaro, Waterbury, CT US
Patent application number | Description | Published |
---|---|---|
20130218372 | Weapons Stores Processor Panel For Aircraft - An aircraft weapons control system including a weapons stores processor panel for receiving input signals from a weapons input; a weapons interface for receiving fire signals from the weapons stores processor panel to control firing of aircraft weapons; and a flight management system in communication with the weapons stores processor panel and the weapons interface, the flight management system providing control signals to the weapons interface; wherein the weapons stores processor panel implements safety interlocks to prevent or enable firing of the aircraft weapons. | 08-22-2013 |
Donna Asaro, New York, NY US
Patent application number | Description | Published |
---|---|---|
20090006163 | Method and System for Allocating Member Compensation - Compensation to be paid to one or more members of an organization is determined. The organization has members organized in a hierarchical structure comprising first-level members, second-level members, and third-level members. A pool of money to be paid to the members in the organization as compensation is identified. A first portion of the pool is assigned to a first-level member. Then the first-level member determines a second portion of the pool from the first portion to be distributed to members below the first-level member and/or an amount from the first portion to be paid to one or more members below the first-level member. The second portion is assigned to the second-level member. Then the second-level member determines a third portion of the pool to be distributed to members below the second-level member and/or an amount from the second portion to be paid to members below the second-level member. | 01-01-2009 |
Marianna F. Asaro, Belmont, CA US
Patent application number | Description | Published |
---|---|---|
20090143632 | SORBENTS AND PROCESSES FOR SEPARATION OF OLEFINS FROM PARAFFINS - In one embodiment, the present invention relates generally to a method for separating olefins from paraffins. In one embodiment, the method includes providing a mixture comprising olefins and paraffins, providing a gas separation agent to associatively, reversibly and selectively bind the olefin and dissociating the olefin from the gas separation agent. | 06-04-2009 |
20100326272 | METHOD AND APPARATUS FOR GAS REMOVAL - Aspects of the invention include a method and apparatus for reversibly sorbing a target gas. In one embodiment, an apparatus for reversibly sorbing a target gas is disclosed. The apparatus includes an inlet, a multi-channel monolith coupled to the inlet, the multi-channel monolith including a plurality of channels, each one of the plurality of channels includes one or more walls, wherein at least one of the one or more walls of at least one of the plurality of channels is porous and wherein one or more of the plurality of channels contain a sorbent and an outlet coupled to the multi-channel monolith. | 12-30-2010 |
Michael Asaro, Flagstaff, AZ US
Patent application number | Description | Published |
---|---|---|
20110117338 | OPEN PORE CERAMIC MATRIX COATED WITH METAL OR METAL ALLOYS AND METHODS OF MAKING SAME - Open pore foams are coated with metal or metal alloys by electrolytic or electroless plating. The characteristics of the plating bath are adjusted to decrease the surface tension such that the plate bath composition can pass into the pores of the foam, preferably at least two and most preferably more than five pores in depth from the surface of the foam matrix. This can be accomplished by adding a surfactant, solvent or other constituent to reduce the surface tension of the plate bath. In addition, heat and pressure can be used to drive the plate bath composition into the passageways of connected open pores in the foam matrix. The net result is to plate the inside surfaces of the pores in the foam matrix, while maintaining the passageways through the foam. Pretreatment of the pore surfaces can be used to promote adhesion of the metal. Particularly advantageous results are achieved when the foam matrix is a ceramic foam. | 05-19-2011 |
Salvatore A. Asaro, Sterling Heights, MI US
Patent application number | Description | Published |
---|---|---|
20150296905 | Rigid Neckwear Assemblies - Decorative neckwear assemblies are disclosed that include a rigid portion that comprises a substantially planar front face, a rear face, upper and lower faces, side faces, and a plurality of apertures. The rear face includes a contact surface that is substantially planar and orientated at a contact angle with respect to the front face. The assembly may include a retaining strap configured to engage the plurality of apertures of the rigid portion and attach the rigid portion to a user. | 10-22-2015 |
Simon Asaro, Concord CA
Patent application number | Description | Published |
---|---|---|
20100087562 | Polyurethane Foam Batt Insulation - Polyurethane foam materials are produced and used in batt form, and therefore are substitutes for insulation batts previously made of fibreglass insulation. The polyurethane batts are preferably made of a flexible and compressible foam material, such that the batts can be compressed and placed within a shipping container, and so that the compressed batt will form a friction fit in an opening when in use. An alternative insulation material and format are provided. | 04-08-2010 |
Tony Asaro, Toronto CA
Patent application number | Description | Published |
---|---|---|
20120159039 | Generalized Control Registers - Methods, systems, and computer readable media generalize control registers in the context of memory address translations for I/O devices. A method includes maintaining a table including a plurality of concurrently available control register base pointers each associated with a corresponding input/output (I/O) device, associating each control register base pointer with a first translation from a guest virtual address (GVA) to a guest physical address (GPA) and a second translation from the GPA to a system physical address (SPA), and operating the first and second translations concurrently for the plurality of I/O devices. | 06-21-2012 |
20120246381 | Input Output Memory Management Unit (IOMMU) Two-Layer Addressing - Embodiments of the present invention provide methods, systems, and computer readable media for input output memory management unit (IOMMU) two-layer addressing in the context of memory address translations for I/O devices. According to an embodiment, a method includes translating a guest virtual address (GVA) to a corresponding guest physical address (GPA) using a guest address translation table according to a process address space identifier associated with an address translation transaction associated with an I/O device, and translating the GPA to a corresponding system physical address (SPA) using a system address translation table according to a device identifier associated with the address translation transaction. | 09-27-2012 |
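The two-layer walk of application 20120246381 above, where the guest table is selected by a process address space identifier (PASID) and the system table by a device identifier, can be sketched at page granularity. The table contents and numeric values below are invented for illustration.

```python
# Sketch of a two-layer IOMMU translation:
#   layer 1: GVA -> GPA through a guest table selected by PASID
#   layer 2: GPA -> SPA through a system table selected by device ID

guest_tables = {          # PASID -> {GVA page: GPA page}
    1: {0xA0: 0x40},
}
system_tables = {         # device ID -> {GPA page: SPA page}
    0x05: {0x40: 0x900},
}

def iommu_translate(gva_page, pasid, device_id):
    """Walk both layers and return the system physical page."""
    gpa_page = guest_tables[pasid][gva_page]       # layer 1: GVA -> GPA
    spa_page = system_tables[device_id][gpa_page]  # layer 2: GPA -> SPA
    return spa_page

print(hex(iommu_translate(0xA0, pasid=1, device_id=0x05)))  # 0x900
```

Keying the first layer on the PASID and the second on the device ID is what lets one I/O device serve multiple guest processes while the hypervisor retains control of the GPA-to-SPA mapping.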
Vito Frank Asaro, San Diego, CA US
Patent application number | Description | Published |
---|---|---|
20130274693 | Zero-G Liquid Dispenser - Improved liquid dispenser devices configured to deliver liquid compositions to a user's eye are described, as well as methods for making and using such devices. | 10-17-2013 |