Patent application number | Description | Published |
20110235698 | SYSTEMS AND METHODS FOR INVARIANT PULSE LATENCY CODING - Image processing systems and methods extract information from an input signal representative of an element of an image and encode the information in a pulsed output signal. A plurality of channels communicates the pulsed output signal, each of the plurality of channels being characterized by a latency. The information may be encoded as a pattern of relative pulse latencies observable in pulses communicated through the plurality of channels, and the pattern of relative pulse latencies is substantially insensitive to image contrast and/or image luminance. A filter can be employed to provide a generator signal based on the input signal, and pulse latencies can be determined using a logarithmic function of the generator signal. The filter may be temporally and/or spatially balanced and characterized by an integral along spatial and/or temporal dimensions of the filter that is substantially zero for all values of a temporal and/or a spatial variable. | 09-29-2011 |
20110235914 | INVARIANT PULSE LATENCY CODING SYSTEMS AND METHODS - Systems and methods for processing image signals are described. One method comprises obtaining a generator signal based on an image signal and determining relative latencies associated with two or more pulses in a pulsed signal using a function of the generator signal that can comprise a logarithmic function. The function of the generator signal can be the absolute value of its argument. Information can be encoded in the pattern of relative latencies. Latencies can be determined using a scaling parameter that is calculated from a history of the image signal. The pulsed signal is typically received from a plurality of channels and the scaling parameter corresponds to at least one of the channels. The scaling parameter may be adaptively calculated such that the latency of the next pulse falls within one or more of a desired interval and an optimal interval. | 09-29-2011 |
20120308076 | APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION - Object recognition apparatus and methods useful for extracting information from an input signal. In one embodiment, the input signal is representative of an element of an image, and the extracted information is encoded into patterns of pulses. The patterns of pulses are directed via transmission channels to a plurality of detector nodes configured to generate an output pulse upon detecting an object of interest. Upon detecting a particular object, a given detector node elevates its sensitivity to that particular object when processing subsequent inputs. In one implementation, one or more of the detector nodes are also configured to prevent adjacent detector nodes from generating detection signals in response to the same object representation. The object recognition apparatus modulates properties of the transmission channels by promoting contributions from channels carrying information used in object recognition. | 12-06-2012 |
20130073484 | ELEMENTARY NETWORK DESCRIPTION FOR EFFICIENT MEMORY MANAGEMENT IN NEUROMORPHIC SYSTEMS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. Methods for managing memory in a processing system are described whereby memory can be allocated among a plurality of elements and rules configured for each element such that the parallel execution of the spiking networks is optimal. | 03-21-2013 |
20130073492 | ELEMENTARY NETWORK DESCRIPTION FOR EFFICIENT IMPLEMENTATION OF EVENT-TRIGGERED PLASTICITY RULES IN NEUROMORPHIC SYSTEMS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. The software and hardware engines are optimized to take into account short-term and long-term synaptic plasticity in the form of LTD, LTP, and STDP. | 03-21-2013 |
20130073495 | ELEMENTARY NETWORK DESCRIPTION FOR NEUROMORPHIC SYSTEMS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. A neuronal network and methods for operating neuronal networks comprise a plurality of units, where each unit has a memory, and a plurality of doublets, each doublet being connected to a pair of the plurality of units. Execution of unit update rules for the plurality of units is order-independent and execution of doublet event rules for the plurality of doublets is order-independent. | 03-21-2013 |
20130073496 | Tag-based apparatus and methods for neural networks - Apparatus and methods for high-level neuromorphic network description (HLND) using tags. The framework may be used to define node types, define node-to-node connection types, instantiate node instances for different node types, and/or generate instances of connection types between these nodes. The HLND format may be used to define node types, define node-to-node connection types, instantiate node instances for different node types, dynamically identify and/or select network subsets using tags, and/or generate instances of one or more connections between these nodes using such subsets. To facilitate the HLND operation and disambiguation, individual elements of the network (e.g., nodes, extensions, connections, I/O ports) may be assigned at least one unique tag. The tags may be used to identify and/or refer to respective network elements. The HLND kernel may comprise an interface to Elementary Network Description. | 03-21-2013 |
20130073498 | ELEMENTARY NETWORK DESCRIPTION FOR EFFICIENT LINK BETWEEN NEURONAL MODELS AND NEUROMORPHIC SYSTEMS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. The format is specifically tuned for neural systems and specialized neuromorphic hardware, thereby serving as a bridge between developers of brain models and neuromorphic hardware manufacturers. | 03-21-2013 |
20130073500 | High level neuromorphic network description apparatus and methods - Apparatus and methods for a high-level neuromorphic network description (HLND) framework that may be configured to enable users to define neuromorphic network architectures using a unified and unambiguous representation that is both human-readable and machine-interpretable. The framework may be used to define node types, node-to-node connection types, instantiate node instances for different node types, and to generate instances of connection types between these nodes. To facilitate framework usage, the HLND format may provide the flexibility required by computational neuroscientists and, at the same time, provide a user-friendly interface for users with limited experience in modeling neurons. The HLND kernel may comprise an interface to Elementary Network Description (END) that is optimized for efficient representation of neuronal systems in a hardware-independent manner and enables seamless translation of HLND model description into hardware instructions for execution by various processing modules. | 03-21-2013 |
20130218821 | Round-trip engineering apparatus and methods for neural networks - Apparatus and methods for a high-level neuromorphic network description (HLND) framework that may be configured to enable users to define neuromorphic network architectures using a unified and unambiguous representation that is both human-readable and machine-interpretable. The framework may be used to define node types, node-to-node connection types, instantiate node instances for different node types, and to generate instances of connection types between these nodes. To facilitate framework usage, the HLND format may provide the flexibility required by computational neuroscientists and, at the same time, provide a user-friendly interface for users with limited experience in modeling neurons. The HLND kernel may comprise an interface to Elementary Network Description (END) that is optimized for efficient representation of neuronal systems in a hardware-independent manner and enables seamless translation of HLND model description into hardware instructions for execution by various processing modules. | 08-22-2013 |
20130251278 | INVARIANT PULSE LATENCY CODING SYSTEMS AND METHODS - Systems and methods for processing image signals are described. One method comprises obtaining a generator signal based on an image signal and determining relative latencies associated with two or more pulses in a pulsed signal using a function of the generator signal that can comprise a logarithmic function. The function of the generator signal can be the absolute value of its argument. Information can be encoded in the pattern of relative latencies. Latencies can be determined using a scaling parameter that is calculated from a history of the image signal. The pulsed signal is typically received from a plurality of channels and the scaling parameter corresponds to at least one of the channels. The scaling parameter may be adaptively calculated such that the latency of the next pulse falls within one or more of a desired interval and an optimal interval. | 09-26-2013 |
20130297539 | SPIKING NEURAL NETWORK OBJECT RECOGNITION APPARATUS AND METHODS - Apparatus and methods for feedback in a spiking neural network. In one approach, spiking neurons receive a sensory stimulus and a context signal that correspond to the same context. When the stimulus provides sufficient excitation, neurons generate a response. Context connections are adjusted according to inverse spike-timing dependent plasticity. When the context signal precedes the post-synaptic spike, context synaptic connections are depressed. Conversely, whenever the context signal follows the post-synaptic spike, the connections are potentiated. The inverse STDP connection adjustment ensures precise control of feedback-induced firing, eliminates runaway positive feedback loops, and enables self-stabilizing network operation. In another aspect of the invention, the connection adjustment methodology facilitates robust context switching when processing visual information. When a context (such as an object) becomes intermittently absent, prior context connection potentiation enables firing for a period of time. If the object remains absent, the connection becomes depressed, thereby preventing further firing. | 11-07-2013 |
20130297541 | SPIKING NEURAL NETWORK FEEDBACK APPARATUS AND METHODS - Apparatus and methods for feedback in a spiking neural network. In one approach, spiking neurons receive a sensory stimulus and a context signal that correspond to the same context. When the stimulus provides sufficient excitation, neurons generate a response. Context connections are adjusted according to inverse spike-timing dependent plasticity. When the context signal precedes the post-synaptic spike, context synaptic connections are depressed. Conversely, whenever the context signal follows the post-synaptic spike, the connections are potentiated. The inverse STDP connection adjustment ensures precise control of feedback-induced firing, eliminates runaway positive feedback loops, and enables self-stabilizing network operation. In another aspect of the invention, the connection adjustment methodology facilitates robust context switching when processing visual information. When a context (such as an object) becomes intermittently absent, prior context connection potentiation enables firing for a period of time. If the object remains absent, the connection becomes depressed, thereby preventing further firing. | 11-07-2013 |
20130297542 | SENSORY INPUT PROCESSING APPARATUS IN A SPIKING NEURAL NETWORK - Apparatus and methods for feedback in a spiking neural network. In one approach, spiking neurons receive a sensory stimulus and a context signal that correspond to the same context. When the stimulus provides sufficient excitation, neurons generate a response. Context connections are adjusted according to inverse spike-timing dependent plasticity. When the context signal precedes the post-synaptic spike, context synaptic connections are depressed. Conversely, whenever the context signal follows the post-synaptic spike, the connections are potentiated. The inverse STDP connection adjustment ensures precise control of feedback-induced firing, eliminates runaway positive feedback loops, and enables self-stabilizing network operation. In another aspect of the invention, the connection adjustment methodology facilitates robust context switching when processing visual information. When a context (such as an object) becomes intermittently absent, prior context connection potentiation enables firing for a period of time. If the object remains absent, the connection becomes depressed, thereby preventing further firing. | 11-07-2013 |
20130325766 | SPIKING NEURON NETWORK APPARATUS AND METHODS - Apparatus and methods for heterosynaptic plasticity in a spiking neural network having multiple neurons configured to process sensory input. In one exemplary approach, a heterosynaptic plasticity mechanism is configured to select alternate plasticity rules when performing neuronal updates. The selection mechanism is adapted based on recent post-synaptic activity of neighboring neurons. When neighbor activity is low, a regular STDP update rule is effectuated. When neighbor activity is high, an alternate STDP update rule, configured to reduce probability of post-synaptic spike generation by the neuron associated with the update, is used. The heterosynaptic mechanism impedes that neuron from responding to (or learning) features within the sensory input that have been detected by neighboring neurons, thereby forcing the neuron to learn a different feature or feature set. The heterosynaptic methodology advantageously introduces competition among neighboring neurons, in order to increase receptive field diversity and improve feature detection capabilities of the network. | 12-05-2013 |
20130325777 | SPIKING NEURON NETWORK APPARATUS AND METHODS - Apparatus and methods for heterosynaptic plasticity in a spiking neural network having multiple neurons configured to process sensory input. In one exemplary approach, a heterosynaptic plasticity mechanism is configured to select alternate plasticity rules when performing neuronal updates. The selection mechanism is adapted based on recent post-synaptic activity of neighboring neurons. When neighbor activity is low, a regular STDP update rule is effectuated. When neighbor activity is high, an alternate STDP update rule, configured to reduce probability of post-synaptic spike generation by the neuron associated with the update, is used. The heterosynaptic mechanism impedes that neuron from responding to (or learning) features within the sensory input that have been detected by neighboring neurons, thereby forcing the neuron to learn a different feature or feature set. The heterosynaptic methodology advantageously introduces competition among neighboring neurons, in order to increase receptive field diversity and improve feature detection capabilities of the network. | 12-05-2013 |
20140064609 | SENSORY INPUT PROCESSING APPARATUS AND METHODS - Sensory input processing apparatus and methods useful for adaptive encoding and decoding of features. In one embodiment, the apparatus receives an input frame having a representation of the object feature, generates a sequence of sub-frames that are displaced from one another (and correspond to different areas within the frame), and encodes the sub-frame sequence into groups of pulses. The patterns of pulses are directed via transmission channels to detection apparatus configured to generate an output pulse upon detecting a predetermined pattern within received groups of pulses that is associated with the feature. Upon detecting a particular pattern, the detection apparatus provides feedback to the displacement module in order to optimize sub-frame displacement for detecting the feature of interest. In another embodiment, the detection apparatus elevates its sensitivity (and/or channel characteristics) to that particular pulse pattern when processing subsequent pulse group inputs, thereby increasing the likelihood of feature detection. | 03-06-2014 |
20140089232 | NEURAL NETWORK LEARNING AND COLLABORATION APPARATUS AND METHODS - Apparatus and methods for learning and training in neural network-based devices. In one implementation, the devices each comprise multiple spiking neurons, configured to process sensory input. In one approach, alternate heterosynaptic plasticity mechanisms are used to enhance learning and field diversity within the devices. The selection of alternate plasticity rules is based on recent post-synaptic activity of neighboring neurons. Apparatus and methods for simplifying training of the devices are also disclosed, including a computer-based application. A data representation of the neural network may be imaged and transferred to another computational environment, effectively copying the brain. Techniques and architectures for achieving this training, and for storing and distributing these data representations, are also disclosed. | 03-27-2014 |
20140122399 | APPARATUS AND METHODS FOR ACTIVITY-BASED PLASTICITY IN A SPIKING NEURON NETWORK - Apparatus and methods for plasticity in a spiking neuron network. The network may comprise feature-specific units capable of responding to different objects (red and green color). The plasticity mechanism may be configured based on the difference between two similarity measures related to activity of different unit types obtained during network training. One similarity measure may be based on activity of units of the same type (red). Another similarity measure may be based on activity of units of one type (red) and another type (green). Similarity measures may comprise a cross-correlogram and/or mutual information determined over an activity window. Several similarity estimates, corresponding to different unit-to-unit pairs, may be combined. The combination may comprise a weighted average. During network operation, the activity-based plasticity mechanism may be used to potentiate connections between units of the same type (red-red). The plasticity mechanism may be used to depress connections between units of different types (red-green). | 05-01-2014 |
20140250036 | APPARATUS AND METHODS FOR EVENT-TRIGGERED UPDATES IN PARALLEL NETWORKS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. The software and hardware engines are optimized to take into account short-term and long-term synaptic plasticity in the form of LTD, LTP, and STDP. | 09-04-2014 |
20140250037 | METHODS FOR MEMORY MANAGEMENT IN PARALLEL NETWORKS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. Methods for managing memory in a processing system are described whereby memory can be allocated among a plurality of elements and rules configured for each element such that the parallel execution of the spiking networks is optimal. | 09-04-2014 |
20140317035 | APPARATUS AND METHODS FOR EVENT-BASED COMMUNICATION IN A SPIKING NEURON NETWORKS - Apparatus and methods for event-based communication in a spiking neuron network. The network may comprise units communicating by spikes via synapses. The spikes may communicate payload data. The data may comprise one or more bits. The payload may be stored in a buffer of a pre-synaptic unit and be configured to be accessed by the post-synaptic unit. Spikes of different payload may cause different actions by the recipient unit. Sensory input spikes may cause a post-synaptic response and trigger a connection efficacy update. Teaching input spikes trigger the efficacy update without causing a post-synaptic response. | 10-23-2014 |
20150074026 | APPARATUS AND METHODS FOR EVENT-BASED PLASTICITY IN SPIKING NEURON NETWORKS - Event-based communication in a spiking neuron network may be provided. The network may comprise units communicating by spikes via synapses. Responsive to spike generation, a unit may be configured to update the states of outgoing synapses. The spikes may communicate payload data. The data may comprise one or more bits. The payload may be stored in a buffer of a pre-synaptic unit and be configured to be accessed by the post-synaptic unit. Spikes of different payload may cause different actions by the recipient unit. Sensory input spikes may cause a post-synaptic response and trigger a connection efficacy update. Teaching input may be used to modulate plasticity. | 03-12-2015 |
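Several of the abstracts above (e.g., 20110235698, 20110235914, 20130251278) describe encoding image information as a pattern of relative pulse latencies computed via a logarithmic function of a generator signal, with the pattern substantially insensitive to image contrast. A minimal sketch of why the logarithm yields this invariance, assuming the simple illustrative form latency = -log(|g|) (the abstracts state only the general approach, not this exact formula):

```python
import numpy as np

def pulse_latencies(generator_signal):
    """Latency of each channel's pulse as a logarithmic function of the
    generator signal magnitude (an assumed illustrative form)."""
    g = np.abs(np.asarray(generator_signal, dtype=float))
    return -np.log(g)

# Two versions of the same stimulus differing only in contrast (x2 gain).
g_low  = np.array([0.2, 0.5, 0.8])
g_high = 2.0 * g_low

lat_low  = pulse_latencies(g_low)
lat_high = pulse_latencies(g_high)

# A multiplicative contrast change shifts every latency by the same
# constant (log 2), so latencies *relative* to any reference channel
# are unchanged -- the contrast-invariant pattern the abstracts describe.
rel_low  = lat_low  - lat_low[0]
rel_high = lat_high - lat_high[0]
print(np.allclose(rel_low, rel_high))  # True
```

The scaling parameter mentioned in 20110235914 can be read as serving a similar role: dividing the generator signal by a history-derived scale before the logarithm keeps absolute latencies inside a desired interval without disturbing the relative pattern.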
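The event-based communication abstracts (20140317035, 20150074026) describe spikes that carry a bit payload stored in a buffer of the pre-synaptic unit and read by the post-synaptic unit. A minimal sketch of that buffer arrangement; the class and method names, buffer depth, and payload format are all hypothetical:

```python
from collections import deque

class Unit:
    """Toy unit exchanging payload-carrying spikes. The payload lives in
    the pre-synaptic unit's buffer; recipients read it from there rather
    than receiving it in the spike itself."""
    def __init__(self, name):
        self.name = name
        self.payload_buffer = deque(maxlen=8)  # pre-synaptic payload store
        self.received = []

    def spike(self, payload_bits, targets):
        # Store the payload locally, then notify post-synaptic targets.
        self.payload_buffer.append(payload_bits)
        for target in targets:
            target.on_spike(self)

    def on_spike(self, pre):
        # Post-synaptic unit accesses the pre-synaptic unit's buffer.
        self.received.append(pre.payload_buffer[-1])

pre, post = Unit("pre"), Unit("post")
pre.spike(payload_bits=0b101, targets=[post])
print(post.received)  # [5]
```

Keeping the payload on the sender's side matches the abstracts' description and lets the recipient decide what action the payload triggers (e.g., a post-synaptic response for sensory spikes versus an efficacy update alone for teaching spikes).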