Patent application number | Description | Published |
20080307168 | Latency Reduction for Cache Coherent Bus-Based Cache - In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response. | 12-11-2008 |
20080307286 | Combined Single Error Correction/Device Kill Detection Code - In one embodiment, an apparatus comprises a check/correct circuit coupled to a control circuit. The check/correct circuit is coupled to receive a block of data and corresponding check bits. The block of data is received as N transmissions, each transmission comprising M data bits and L check bits. The check/correct circuit is configured to detect one or more errors in each of a plurality of non-overlapping windows of K bits in the M data bits, responsive to the M data bits and the L check bits. The control circuit is configured to record which of the plurality of windows have had errors detected and, if a given window of the plurality of windows has had errors detected in each of the N transmissions of the block, the control circuit is configured to signal a device failure. Each of K, L, M, and N are integers greater than one. | 12-11-2008 |
20100161905 | Latency Reduction for Cache Coherent Bus-Based Cache - In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response. | 06-24-2010 |
20100208540 | INTEGRATED CIRCUIT WITH MULTIPORTED MEMORY SUPERCELL AND DATA PATH SWITCHING CIRCUITRY - An integrated circuit. The integrated circuit includes a plurality of memory requesters and a memory supercell. The memory supercell includes a plurality of memory banks each of which forms a respective range of separately addressable storage locations, wherein the memory supercell is organized into a plurality of bank groups. Each of the plurality of bank groups includes a subset of the plurality of memory banks and a corresponding dedicated access port. The integrated circuit further includes a switch coupled between the plurality of memory requesters and the memory supercell. The switch is configured, responsive to a memory request by a given one of the plurality of memory requesters, to connect a data path between the given memory requester and the dedicated access port of a particular one of the bank groups addressed by the memory request. | 08-19-2010 |
20110197030 | Latency Reduction for Cache Coherent Bus-Based Cache - In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response. | 08-11-2011 |
20110296110 | Critical Word Forwarding with Adaptive Prediction - In an embodiment, a system includes a memory controller, processors and corresponding caches. The system may include sources of uncertainty that prevent the precise scheduling of data forwarding for a load operation that misses in the processor caches. The memory controller may provide an early response that indicates that data should be provided in a subsequent clock cycle. An interface unit between the memory controller and the caches/processors may predict a delay from a currently-received early response to the corresponding data, and may speculatively prepare to forward the data assuming that it will be available as predicted. The interface unit may monitor the delays between the early response and the forwarding of the data, or at least the portion of the delay that may vary. Based on the measured delays, the interface unit may modify the subsequently predicted delays. | 12-01-2011 |
20120017135 | Combined Single Error Correction/Device Kill Detection Code - In one embodiment, an apparatus includes a check/correct circuit coupled to a control circuit. The check/correct circuit is coupled to receive a block of data and corresponding check bits. The block of data is received as N transmissions, each transmission including M data bits and L check bits. The check/correct circuit is configured to detect one or more errors in each of a plurality of non-overlapping windows of K bits in the M data bits, responsive to the M data bits and the L check bits. The control circuit is configured to record which of the plurality of windows have had errors detected and, if a given window of the plurality of windows has had errors detected in each of the N transmissions of the block, the control circuit is configured to signal a device failure. Each of K, L, M, and N are integers greater than one. | 01-19-2012 |
20120047332 | Combining Write Buffer with Dynamically Adjustable Flush Metrics - In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness. | 02-23-2012 |
20120137078 | Multiple Critical Word Bypassing in a Memory Controller - In one embodiment, a memory controller may be configured to transmit two or more critical words (or beats) corresponding to two or more different read requests prior to returning the remaining beats of the read requests. Such an embodiment may reduce latency to the sources of the memory requests, which may be stalled awaiting the critical words. The remaining words may fill a cache block or other buffer, but may not be required by the sources as quickly as the critical words in order to support higher performance. In some embodiments, once a remaining beat of a block is transmitted, all of the remaining beats may be transmitted contiguously. In other embodiments, additional critical words may be forwarded between remaining beats of a block. | 05-31-2012 |
20130103906 | Combining Write Buffer with Dynamically Adjustable Flush Metrics - In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness. | 04-25-2013 |
20130159633 | QOS MANAGEMENT IN THE L2 CACHE - Methods and apparatuses for assigning a QoS level to memory requests based on the number of currently outstanding memory requests. One or more processors of a processor complex issue memory requests to a L2 cache. The L2 cache controller assigns a QoS level to the memory request based on whether the number of outstanding memory requests is above or below a programmable threshold. If the number is above the threshold, then new requests typically do not impair processor performance since the processor is already waiting for a large number of previous memory requests, and so the new memory request is assigned a low priority level. If the number of outstanding memory requests is below the threshold, then the new memory request is assigned a high priority level. | 06-20-2013 |
20130254485 | COORDINATED PREFETCHING IN HIERARCHICALLY CACHED PROCESSORS - Processors and methods for coordinating prefetch units at multiple cache levels. A single, unified training mechanism is utilized for training on streams generated by a processor core. Prefetch requests are sent from the core to lower level caches, and a packet is sent with each prefetch request. The packet identifies the stream ID of the prefetch request and includes relevant training information for the particular stream ID. The lower level caches generate prefetch requests based on the received training information. | 09-26-2013 |
20140119146 | Clock Gated Storage Array - A storage array and a method of operating the same are disclosed. A storage array includes a number of clocked storage circuits arranged in rows and columns. The storage array is subdivided into a number of grids each including a subset of clocked storage circuits and also includes a number of clock gating circuits, each of which is coupled to provide a clock signal to the clocked storage circuits of a corresponding subset. During an access of the storage array (i.e. a read or a write), one of the clock gating circuits is configured to provide the clock signal to the clocked storage circuits of its correspondingly coupled subset. The remaining clock gating circuits are configured to inhibit the clock signal from being provided to the clocked storage circuits of their respectively coupled subsets. | 05-01-2014 |
20140149632 | PREFETCHING ACROSS PAGE BOUNDARIES IN HIERARCHICALLY CACHED PROCESSORS - Processors and methods for preventing lower level prefetch units from stalling at page boundaries. An upper level prefetch unit closest to the processor core issues a preemptive request for a translation of the next page in a given prefetch stream. The upper level prefetch unit sends the translation to the lower level prefetch units prior to the lower level prefetch units reaching the end of the current page for the given prefetch stream. When the lower level prefetch units reach the boundary of the current page, instead of stopping, these prefetch units can continue to prefetch by jumping to the next physical page number provided in the translation. | 05-29-2014 |
20140181403 | CACHE POLICIES FOR UNCACHEABLE MEMORY REQUESTS - Systems, processors, and methods for keeping uncacheable data coherent. A processor includes a multi-level cache hierarchy, and uncacheable load memory operations can be cached at any level of the cache hierarchy. If an uncacheable load misses in the L2 cache, then allocation of the uncacheable load will be restricted to a subset of the ways of the L2 cache. If an uncacheable store memory operation hits in the L1 cache, then the hit cache line can be updated with the data from the memory operation. If the uncacheable store misses in the L1 cache, then the uncacheable store is sent to a core interface unit. | 06-26-2014 |
20140181571 | MANAGING FAST TO SLOW LINKS IN A BUS FABRIC - Systems and methods for managing fast to slow links in a bus fabric. A pair of link interface units connect agents with a clock mismatch. Each link interface unit includes an asynchronous FIFO for storing transactions that are sent over the clock domain crossing. When the command for a new transaction is ready to be sent while data for the previous transaction is still being sent, the link interface unit prevents the last data beat of the previous transaction from being sent. Instead, after a delay of one or more clock cycles, the last data beat overlaps with the command of the new transaction. | 06-26-2014 |
20140195737 | Flush Engine - Techniques are disclosed related to flushing one or more data caches. In one embodiment, an apparatus includes a processing element, a first cache associated with the processing element, and a circuit configured to copy modified data from the first cache to a second cache in response to determining an activity level of the processing element. In this embodiment, the apparatus is configured to alter a power state of the first cache after the circuit copies the modified data. The first cache may be at a lower level in a memory hierarchy relative to the second cache. In one embodiment, the circuit is also configured to copy data from the second cache to a third cache or a memory after a particular time interval. In some embodiments, the circuit is configured to copy data while one or more pipeline elements of the apparatus are in a low-power state. | 07-10-2014 |
20140237276 | Method and Apparatus for Determining Tunable Parameters to Use in Power and Performance Management - Various method and apparatus embodiments for selecting tunable operating parameters in an integrated circuit (IC) are disclosed. In one embodiment, an IC includes a number of various functional blocks each having a local management circuit. The IC also includes a global management unit coupled to each of the functional blocks having a local management circuit. The management unit is configured to determine the operational state of the IC based on the respective operating states of each of the functional blocks. Responsive to determining the operational state of the IC, the management unit may provide indications of the same to the local management circuit of each of the functional blocks. The local management circuit for each of the functional blocks may select one or more tunable parameters based on the operational state determined by the management unit. | 08-21-2014 |
20150019824 | CACHE PRE-FETCH MERGE IN PENDING REQUEST BUFFER - An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a pending request buffer and a control circuit. The pending request buffer may include a plurality of buffer entries. The control circuit may be coupled to the pending request buffer and may be configured to receive a request for a first cache line from a pre-fetch engine, and store the received request in an entry of the pending request buffer. The control circuit may be further configured to receive a request for a second cache line from a processor, and store the request received from the processor in the entry of the pending request buffer in response to a determination that the second cache line is the same as the first cache line. | 01-15-2015 |
20150026404 | Least Recently Used Mechanism for Cache Line Eviction from a Cache Memory - A mechanism for evicting a cache line from a cache memory includes first selecting for eviction a least recently used cache line of a group of invalid cache lines. If all cache lines are valid, selecting for eviction a least recently used cache line of a group of cache lines in which no cache line of the group of cache lines is also stored within a higher level cache memory such as the L1 cache, for example. Lastly, if all cache lines are valid and there are no non-inclusive cache lines, selecting for eviction the least recently used cache line stored in the cache memory. | 01-22-2015 |
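The tiered eviction policy described in application 20150026404 above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class, field, and function names (`CacheLine`, `in_higher_level`, `select_victim`, and the integer `lru_age` encoding) are assumptions made for the example.

```python
# Sketch of the three-tier LRU victim selection from application 20150026404:
# 1) prefer the least recently used invalid line,
# 2) else the LRU line not also held in a higher-level cache (e.g. the L1),
# 3) else the LRU line overall.
from dataclasses import dataclass

@dataclass
class CacheLine:
    tag: int
    valid: bool             # line holds valid data
    in_higher_level: bool   # a copy also resides in a higher-level cache
    lru_age: int            # larger value = less recently used

def select_victim(ways: list[CacheLine]) -> CacheLine:
    """Pick the eviction victim for one cache set, per the tiered policy."""
    invalid = [w for w in ways if not w.valid]
    if invalid:
        return max(invalid, key=lambda w: w.lru_age)
    non_inclusive = [w for w in ways if not w.in_higher_level]
    if non_inclusive:
        return max(non_inclusive, key=lambda w: w.lru_age)
    return max(ways, key=lambda w: w.lru_age)

# All ways valid, two lines not duplicated in the higher-level cache:
ways = [
    CacheLine(tag=0x1A, valid=True, in_higher_level=True,  lru_age=3),
    CacheLine(tag=0x2B, valid=True, in_higher_level=False, lru_age=1),
    CacheLine(tag=0x3C, valid=True, in_higher_level=False, lru_age=2),
    CacheLine(tag=0x4D, valid=True, in_higher_level=True,  lru_age=4),
]
print(hex(select_victim(ways).tag))  # → 0x3c, LRU among the non-inclusive lines
```

Preferring non-inclusive victims avoids evicting an L2 line whose copy the L1 is actively using, which would otherwise force a back-invalidation of the higher-level cache.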
Patent application number | Description | Published |
20130281817 | DIRECT VISUALIZATION SYSTEM FOR GLAUCOMA TREATMENT - A direct visualization (DV) system and methods for measuring one or more anatomical features of the eye, including a depth of the iridocorneal angle of the eye. The DV system can include a wire extending distally from a handle with the wire having one or more indicators for measuring anatomical features of the eye. The DV system can be deployed into the eye and used with minimal trauma to ocular tissues. Furthermore, the DV system can be used independently or alongside other ocular instruments, such as instruments having indicators corresponding to the DV system for correctly implanting ocular implants without the use of a gonio lens. | 10-24-2013 |
20130281908 | Delivery System for Ocular Implant - A delivery system is disclosed which can be used to deliver an ocular implant into a target location within the eye via an ab interno procedure. In some embodiments, the implant can provide fluid communication between the anterior chamber and the suprachoroidal or supraciliary space while in an implanted state. The delivery system can include a proximal handle component and a distal delivery component. In addition, the proximal handle component can include an actuator to control the release of the implant from the delivery component into the target location in the eye. | 10-24-2013 |
20140012279 | OCULAR IMPLANT APPLIER AND METHODS OF USE - Described herein is a delivery device and methods for delivering an ocular implant into an eye. The delivery device includes a proximal handle portion; a distal delivery portion coupled to a distal end of the handle portion and configured to releasably hold an ocular implant and includes a sheath positioned axially over a guidewire; and a metering system configured to provide visual guidance regarding depth of advancement of an implant positioned on the guidewire into an anatomic region of the eye. Also disclosed is a device and method for loading an implant onto the delivery device. | 01-09-2014 |
20140142378 | OCULAR IMPLANT DELIVERY SYSTEMS AND METHODS - Described herein are delivery devices and methods of using the devices for delivering an ocular implant into a suprachoroidal space without use of a goniolens. The delivery device includes a handle including a channel extending from a proximal end of the handle to a distal end of the handle, an applier coupled to the handle, the applier including a blunt distal tip and an elongate, flexible wire insertable through a fluid channel of an ocular implant, and a fiber optic image bundle reversibly inserted through the channel such that the fiber optic image bundle extends to a region proximal to the blunt distal tip of the applier. | 05-22-2014 |
20140155805 | Delivery System for Ocular Implant - A delivery system is disclosed which can be used to deliver an ocular implant into a target location within the eye via an ab interno procedure. In some embodiments, the implant can provide fluid communication between the anterior chamber and the suprachoroidal or supraciliary space while in an implanted state. The delivery system can include a proximal handle component and a distal delivery component. In addition, the proximal handle component can include an actuator to control the release of the implant from the delivery component into the target location in the eye. | 06-05-2014 |
20140323995 | Targeted Drug Delivery Devices and Methods - This disclosure relates generally to methods and devices for use in treating eye conditions. In some embodiments, a site-specific therapeutic agent is mixed with a releasing agent with a dual syringe apparatus in order to achieve homogeneity. Once mixed, the site-specific therapeutic agent and releasing agent can be either dispensed directly within an area of the eye or within an implant. The implant can be at least partially filled with the site-specific therapeutic agent and releasing agent either prior to or after implantation into the eye. Some ratios of site-specific therapeutic agents to releasing agents are disclosed which provide various releasing profiles of the site-specific therapeutic agent within the eye. | 10-30-2014 |
20150022780 | Gonio Lens System With Stabilization Mechanism - This disclosure relates generally to methods and devices for use in viewing and positioning an eye with a gonio lens system, such as during ocular exams and ocular surgeries. Some embodiments of the gonio lens system can include a gonio lens for viewing one or more tissues and structures of the eye. In addition, the gonio lens system can include one or more positioning features for controlling the movement and positioning of the eye. | 01-22-2015 |
Patent application number | Description | Published |
20090044814 | Implantable devices, systems, and methods for maintaining desired orientations in targeted tissue regions - Devices, systems, and methods are provided for maintaining tissue regions in a desired orientation in or along an airway, e.g., for reducing or preventing snoring and/or sleep-disordered breathing events, such as sleep apnea. | 02-19-2009 |
20100134759 | DIGITAL IMAGING SYSTEM FOR EYE PROCEDURES - Described herein is a hand-held gonioscopic imaging system that can be used to continuously display, capture and record images of the iridocorneal angle within the eye during implantation procedures. The system can be used, for example, during device implantation procedures for the treatment of glaucoma such that landmark identification continues during implantation. Intuitive real-time images viewed through the imaging systems described herein appear to the user to move in the same horizontal orientation as the instrument is actually being moved. The systems described herein also provide independent illumination sources for the camera and the surgical microscope that also have independent illumination controls. | 06-03-2010 |
20100137981 | OCULAR IMPLANT WITH SHAPE CHANGE CAPABILITIES - Disclosed are devices, methods and systems for treatment of eye disease such as glaucoma. Implants are described herein that enhance aqueous flow through the normal outflow system of the eye with minimal to no complications. The implant can be reversibly deformed to a first shape, such as a generally linear shape conducive to insertion. Upon insertion, the implant can deform to a second shape, such as a generally non-linear shape conducive to retention within the eye. The shape also improves fluid flow from the anterior chamber and prevents or reduces clogging. | 06-03-2010 |
20100280317 | OCULAR IMPLANT DELIVERY SYSTEMS AND METHODS - Described herein are delivery devices and methods of using the devices for delivering an ocular implant into a suprachoroidal space without use of a goniolens. The delivery device includes a handle including a channel extending from a proximal end of the handle to a distal end of the handle, an applier coupled to the handle, the applier including a blunt distal tip and an elongate, flexible wire insertable through a fluid channel of an ocular implant, and a fiber optic image bundle reversibly inserted through the channel such that the fiber optic image bundle extends to a region proximal to the blunt distal tip of the applier. | 11-04-2010 |
20110112546 | OCULAR IMPLANT APPLIER AND METHODS OF USE - Described herein is a delivery device and methods for delivering an ocular implant into an eye. The delivery device includes a proximal handle portion; a distal delivery portion coupled to a distal end of the handle portion and configured to releasably hold an ocular implant and includes a sheath positioned axially over a guidewire; and a metering system configured to provide visual guidance regarding depth of advancement of an implant positioned on the guidewire into an anatomic region of the eye. Also disclosed is a device and method for loading an implant onto the delivery device. | 05-12-2011 |
20130110125 | Ocular Implant Delivery Systems And Methods | 05-02-2013 |
20140207241 | NON-PLANAR ORTHOPEDIC IMPLANTS AND METHODS - Deformable joint implants with a generally hyperbolic paraboloid shape are disclosed, including configurations for delivery into the small joints of the body in the wrists, hands, ankle and feet, such as the first carpo-metacarpal joint, which comprises a double-saddle structure. The center of the implant may be supported, or supported and configured as a central opening. Implants with supported center regions may have a uniform or non-uniform thickness. A region of non-uniform, reduced thickness may be circular, oval or ring-like in shape, with a central support that may have an increased thickness relative to the perimeter region of the implant. | 07-24-2014 |