Entries |
Document | Title | Date |
20080281767 | Method for Training Neural Networks - The present invention provides a method | 11-13-2008 |
20080288427 | FORMING A SIGNATURE OF PARAMETERS EXTRACTED FROM INFORMATION - A method of storing information relating to the transmission of messages by an entity over a given time period comprises the step of | 11-20-2008 |
20080294580 | Neuromorphic Device for Proofreading Connection Adjustments in Hardware Artificial Neural Networks - A hardware-implemented method for proofreading updates of connections in a hardware artificial neural network (hANN) includes computing a draft weight change independently at a connection between neuroids and at a corresponding dedicated special purpose nousoid, determining whether the draft weight changes agree, and executing a weight change at the connection equal to the draft weight change upon determining that the draft weight changes agree. | 11-27-2008 |
20080301075 | METHOD OF TRAINING A NEURAL NETWORK AND A NEURAL NETWORK TRAINED ACCORDING TO THE METHOD - A neural network comprises trained interconnected neurons. The neural network is configured to constrain the relationship between one or more inputs and one or more outputs of the neural network so that the relationships between them are consistent with expectations of the relationships; and/or the neural network is trained by creating a set of data comprising input data and associated outputs that represent archetypal results and providing real exemplary input data and associated output data, together with the created data, to the neural network. The real exemplary output data and the created associated output data are compared to the actual output of the neural network, which is adjusted to create a best fit to the real exemplary data and the created data. | 12-04-2008 |
20090043722 | ADAPTIVE NEURAL NETWORK UTILIZING NANOTECHNOLOGY-BASED COMPONENTS - Methods and systems for modifying at least one synapse of a physical/electromechanical neural network. A physical/electromechanical neural network implemented as an adaptive neural network can be provided, which includes one or more neurons and one or more synapses thereof, wherein the neurons and synapses are formed from a plurality of nanoparticles disposed within a dielectric solution in association with one or more pre-synaptic electrodes and one or more post-synaptic electrodes and an applied electric field. At least one pulse can be generated from one or more of the neurons to one or more of the pre-synaptic electrodes of a succeeding neuron and one or more post-synaptic electrodes of one or more of the neurons of the physical/electromechanical neural network, thereby strengthening at least one nanoparticle of a plurality of nanoparticles disposed within the dielectric solution and at least one synapse thereof. | 02-12-2009 |
20090089230 | COMPUTER GAME WITH INTUITIVE LEARNING CAPABILITY - A computer game and a method of providing learning capability thereto are provided. The computer game has an objective of matching a skill level of the computer game with a skill level of a game player. A move performed by the game player is identified, one of a plurality of game moves is selected based on a game move probability distribution comprising a plurality of probability values corresponding to the plurality of game moves, an outcome of the selected game move relative to the identified player move is determined, the game move probability distribution is updated based on the outcome, and one or more of the game move selection, the outcome determination, and the game move probability distribution update is modified based on the objective. | 04-02-2009 |
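The move-selection and update loop described above is a classical learning automaton. The sketch below assumes a linear reward-inaction scheme; the move set, learning rate, and win test are illustrative stand-ins, not details from the application.

```python
# Hypothetical learning-automaton sketch; the reward-inaction rule and all
# constants are assumptions, not the application's actual scheme.
import random

moves = ["rock", "paper", "scissors"]
probs = [1 / 3] * 3                  # game move probability distribution
LEARN_RATE = 0.1

def select_move():
    """Select a game move according to the probability distribution."""
    return random.choices(range(len(moves)), weights=probs)[0]

def update(selected, won):
    """Reward-inaction: on a win, shift probability mass toward the move."""
    if not won:
        return
    for i in range(len(probs)):
        if i == selected:
            probs[i] += LEARN_RATE * (1.0 - probs[i])
        else:
            probs[i] -= LEARN_RATE * probs[i]

player_move = 0                              # the player plays "rock"
m = select_move()
update(m, won=(m == (player_move + 1) % 3))  # paper beats rock
```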
20090132452 | Artificial Neuron - Artificial neurons and processing elements for artificial neurons are disclosed. One processing element generates a continuous value signal based on a first plurality of inputs and generates a responsiveness based on a second plurality of inputs. An output value determining portion generates an output signal that is equal to a predetermined value when the responsiveness signal corresponds to a non-responsive state and equal to the continuous value signal when the responsiveness signal corresponds to a responsive state. Another processing element produces an output signal having a magnitude equal to zero except during a fixed time after an event, when the output signal has a magnitude based on an event time. | 05-21-2009 |
20090138421 | MULTIPLE-USER PROCESSING DEVICE WITH INTUITIVE LEARNING CAPABILITY - A processing device having one or more objectives is provided. The processing device comprises a probabilistic learning module having a learning automaton configured for learning a plurality of processor actions in response to a plurality of actions performed by a plurality of users, and an intuition module configured for modifying a functionality of said probabilistic learning module based on said one or more objectives. | 05-28-2009 |
20090164399 | Method for Autonomic Workload Distribution on a Multicore Processor - A multiprocessor system which includes automatic workload distribution. As threads execute in the multiprocessor system, an operating system or hypervisor continuously learns the execution characteristics of the threads and saves the information in thread-specific control blocks. The execution characteristics are used to generate thread performance data. As the thread executes, the operating system continuously uses the performance data to steer the thread to a core that will execute the workload most efficiently. | 06-25-2009 |
20090204559 | Bounding error rate based on a worst likely assignment - Given a set of training examples—with known inputs and outputs—and a set of working examples—with known inputs but unknown outputs—train a classifier on the training examples. For each possible assignment of outputs to the working examples, determine whether assigning the outputs to the working examples results in a training and working set that are likely to have resulted from the same distribution. If so, then add the assignment to a likely set of assignments. For each assignment in the likely set, compute the error of the trained classifier on the assignment. Use the maximum of these errors as a probably approximately correct error bound for the classifier. | 08-13-2009 |
20090276385 | Artificial-Neural-Networks Training Artificial-Neural-Networks - A method of training an artificial-neural-network includes applying a training algorithm to a first artificial-neural-network using a first training set to generate a sequence of weight values associated with a connection in the first artificial-neural-network. The method also includes training a second artificial-neural-network to generate a weight value, where the training utilizes a second training set. The second training set includes the generated sequence of weight values associated with the connection in the first artificial-neural-network. A system includes a first artificial-neural-network including a plurality of connections, where each connection is associated with a weight value. The system also includes a second artificial-neural-network including a plurality of outputs, where each output generates the weight value associated with one connection of the plurality of connections in the first artificial-neural-network during a training of the first artificial-neural-network. | 11-05-2009 |
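As a rough illustration of one network's weight trajectory becoming another network's training set, consider the sketch below. Both architectures (a logistic unit as the first network, a small regressor as the second) and all hyperparameters are invented for the example.

```python
# Hedged sketch: record the weight trajectory of one connection while training
# a first network, then fit a second network to that sequence.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)); y = (X.sum(axis=1) > 0).astype(float)

w = rng.normal(size=2); b = 0.0
trajectory = []                      # sequence of weight values for w[0]
for _ in range(50):                  # train the first network (logistic unit)
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = X.T @ (p - y) / len(y)
    w -= 0.5 * grad; b -= 0.5 * np.mean(p - y)
    trajectory.append(w[0])

# Second network: its training set includes the recorded weight sequence,
# here as (training step -> weight value) pairs.
t = np.linspace(0, 1, len(trajectory))[:, None]
target = np.array(trajectory)[:, None]
W1 = rng.normal(size=(1, 8)); W2 = rng.normal(size=(8, 1))
for _ in range(2000):
    h = np.tanh(t @ W1); out = h @ W2
    err = out - target
    W2 -= 0.01 * h.T @ err
    W1 -= 0.01 * t.T @ ((err @ W2.T) * (1 - h ** 2))
```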
20090299929 | Methods of improved learning in simultaneous recurrent neural networks - Methods, computer-readable media, and systems are provided for machine learning in a simultaneous recurrent neural network. One embodiment of the invention provides a method including initializing one or more weights in the network, initializing parameters of an extended Kalman filter, setting a Jacobian matrix to an empty matrix, augmenting the Jacobian matrix for each of a plurality of training patterns, adjusting the one or more weights using the extended Kalman filter formulas, and calculating a network output for one or more testing patterns. | 12-03-2009 |
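The recited steps map onto the textbook extended-Kalman-filter weight update. Below is a minimal single-output sketch under that assumption; the full method augments the Jacobian across many training patterns, which is collapsed here to one row per pattern.

```python
# Minimal EKF weight-training sketch (standard global-EKF formulas); a
# simplification of, not a transcription of, the claimed method.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3)); d = np.sin(X.sum(axis=1))   # training patterns
n_w = 3
w = rng.normal(size=n_w) * 0.1       # initialize the weights
P = np.eye(n_w) * 100.0              # EKF error covariance
R, Q = 1.0, 1e-4                     # measurement / process noise

for x, target in zip(X, d):
    y = np.tanh(x @ w)               # network output for this pattern
    H = ((1 - y ** 2) * x)[None, :]  # Jacobian row d(y)/d(w)
    S = H @ P @ H.T + R
    K = P @ H.T / S                  # Kalman gain
    w = w + (K * (target - y)).ravel()
    P = P - K @ H @ P + Q * np.eye(n_w)
```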
20090299930 | Method of Measuring the Thickness Profile of a Film Tube - A method of measuring the thickness profile of a film produced in a blown film line having a rotatable pull-off rig, in which a flattened film tube is scanned by performing individual measurements at measurement positions distributed over the width of the film tube and, in each individual measurement, the total thickness of two segments of the film tube is measured that are superposed at the measurement position, and the thickness profile is calculated from the measured values obtained for a number of individual measurements that is larger than the number of measurement positions, the improvement including the steps of training a neural network with measured values for the total thicknesses, which measured values have been obtained in simulated or real measurement processes with known thickness profiles, and supplying the measured results obtained by scanning the film tube to the neural network for calculating the thickness profile. | 12-03-2009 |
20100057654 | SELF-LEARNING SYSTEM AND METHOD FOR PROVIDING A LOTTERY TICKET AT A POINT OF SALE DEVICE - A system for managing a purchase agreement, including: a memory element for at least one specially-programmed general purpose computer, for storing an artificial intelligence program (AIP), and a purchase agreement between a customer and at least one business entity, the purchase agreement including at least one requirement regarding at least one retail transaction between the customer and the at least one business entity; a processor in the computer for: compiling a purchasing history for the customer with respect to the at least one business entity and the purchase agreement, the memory element for storing the purchasing history and modifying, using the processor, the purchasing history, and the AIP, the at least one requirement to increase revenue or profitability of the at least one business entity; and an interface element in the computer for transmitting the modified at least one requirement for presentation to the customer. | 03-04-2010 |
20100076915 | Field-Programmable Gate Array Based Accelerator System - Accelerator systems and methods are disclosed that utilize FPGA technology to achieve better parallelism and processing speed. A Field Programmable Gate Array (FPGA) is configured to have a hardware logic performing computations associated with a neural network training algorithm, especially a Web relevance ranking algorithm such as LambdaRank. The training data is first processed and organized by a host computing device, and then streamed to the FPGA for direct access by the FPGA to perform high-bandwidth computation with increased training speed. Thus, large data sets such as that related to Web relevance ranking can be processed. The FPGA may include a processing element performing computations of a hidden layer of the neural network training algorithm. Parallel computing may be realized using a single instruction multiple data streams (SIMD) architecture with multiple arithmetic logic units in the FPGA. | 03-25-2010 |
20100088263 | Method for Computer-Aided Learning of a Neural Network and Neural Network - There is described a method for computer-aided learning of a neural network with a plurality of neurons, in which the neurons of the neural network are divided into at least two layers, comprising a first layer and a second layer crosslinked with the first layer. In the first layer, input information is represented by one or more characteristic values from one or several characteristics, wherein every characteristic value comprises one or more neurons of the first layer. A plurality of categories is stored in the second layer, wherein every category comprises one or more neurons of the second layer. For one or several pieces of input information, at least one category in the second layer is respectively assigned to the characteristic values of the input information in the first layer. Input information is entered into the first layer, and subsequently at least one state variable of the neural network is determined and compared to the at least one category assigned to this input information in a preceding step. The crosslinking between the first and second layer is changed depending on the comparison result from the preceding step. | 04-08-2010 |
20100094790 | MACHINE LEARNING OF DIMENSIONS USING SPECTRAL INTENSITY RESPONSE OF A REFLECTOMETER - A method and a system for determining critical dimensions using an artificial neural network, where the artificial neural network is trained based on a spectral intensity response of a reflectometer are provided. Additional apparatus, systems, and methods are disclosed. | 04-15-2010 |
20100114807 | REINFORCEMENT LEARNING SYSTEM - A reinforcement learning system | 05-06-2010 |
20100138372 | COGNITIVE PATTERN MATCHING SYSTEM WITH BUILT-IN CONFIDENCE MEASURE - Artificial neural systems are very powerful tools for pattern matching, classification, feature extraction and signal analysis. Systems to date lack an essential feature of their biological counterparts: a measure of confidence that the network response has actually been trained and is not an artifact. In the proposed artificial neural system, one output is a produced (trained) measure of confidence in the remaining outputs, i.e., a measure of certainty that the inputs match the training data. | 06-03-2010 |
20100169255 | COATING COLOR DATABASE CREATING METHOD, SEARCH METHOD USING THE DATABASE, THEIR SYSTEM, PROGRAM, AND RECORDING MEDIUM - The subject invention provides a method of creating a database for searching for a paint color having a desired texture, a search method using the database, and systems, programs, and recording media for carrying out the method and the search. The method for creating a database includes a step | 07-01-2010 |
20100169256 | Separate Learning System and Method Using Two-Layered Neural Network Having Target Values for Hidden Nodes - Disclosed herein is a separate learning system and method using a two-layered neural network having target values for hidden nodes. The separate learning system of the present invention includes an input layer for receiving training data from a user and including at least one input node. A hidden layer includes at least one hidden node. A first connection weight unit connects the input layer to the hidden layer and changes a weight between the input node and the hidden node. An output layer outputs training data that has been completely learned. A second connection weight unit connects the hidden layer to the output layer, changes a weight between the output node and the hidden node, and calculates a target value for the hidden node based on a current error for the output node. If the learning speed decreases or a cost function increases due to local minima or plateaus while the first connection weight unit is fixed and learning is performed using only the second connection weight unit, a control unit stops that learning, fixes the second connection weight unit, turns the learning direction to the first connection weight unit, and causes learning to be performed repeatedly between the input node and the hidden node until it converges to the target value for the hidden node. | 07-01-2010 |
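A heavily hedged sketch of that alternating schedule follows. The rule used to derive the hidden-node targets (back-projecting the output error through the fixed second-layer weights) is an assumption; the abstract does not spell out the calculation.

```python
# Alternating "separate learning" sketch; the hidden-target derivation below
# is an assumed rule, not taken from the patent.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 4)); Y = (X[:, :1] * X[:, 1:2] > 0).astype(float)
W1 = rng.normal(size=(4, 6)) * 0.5; W2 = rng.normal(size=(6, 1)) * 0.5
sig = lambda z: 1 / (1 + np.exp(-z))

for phase in range(10):
    # Phase A: learn only the second connection weight unit (hidden -> output).
    for _ in range(200):
        H = sig(X @ W1); out = sig(H @ W2); err = out - Y
        W2 -= 0.1 * H.T @ (err * out * (1 - out))
    # Phase B: fix W2, derive hidden-node targets from the output error,
    # then repeatedly train input -> hidden toward those targets.
    H = sig(X @ W1); out = sig(H @ W2); err = out - Y
    H_target = np.clip(H - (err * out * (1 - out)) @ W2.T, 0.01, 0.99)
    for _ in range(200):
        H = sig(X @ W1)
        W1 -= 0.1 * X.T @ ((H - H_target) * H * (1 - H))
```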
20100198766 | Nano-Electric Synapse and Method for Training Said Synapse - The invention relates to an electric synapse that comprises a main conductor with a predetermined potential V | 08-05-2010 |
20100217734 | Method and system for calculating value of website visitor - Calculating a value of a website visitor includes initializing a calculation model for calculating the value of the website visitor, the calculation model being a neural network model with visitor information as an input and the visitor's value as an output; training the calculation model by using a data sample and determining the calculation model; and obtaining the visitor information, and calculating the value of the visitor by using the determined calculation model. | 08-26-2010 |
20100228694 | Data Processing Using Restricted Boltzmann Machines - Data processing using restricted Boltzmann machines is described, for example, to pre-process continuous data and provide binary outputs. In embodiments, restricted Boltzmann machines based on either Gaussian distributions or Beta distributions are described which are able to learn and model both the mean and variance of data. In some embodiments, a stack of restricted Boltzmann machines are connected in series with outputs of one restricted Boltzmann machine providing input to the next in the stack and so on. Embodiments describe how training for each machine in the stack may be carried out efficiently and the combined system used for one of a variety of applications such as data compression, object recognition, image processing, information retrieval, data analysis and the like. | 09-09-2010 |
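For context on the preceding entry, a bare-bones contrastive-divergence (CD-1) sketch of a Gaussian-Bernoulli RBM (continuous inputs, binary hidden outputs) is given below. It fixes the visible variance at one, whereas the described embodiments also learn the variance, so those extra update terms are omitted.

```python
# CD-1 sketch of a Gaussian-Bernoulli RBM with fixed unit visible variance;
# variance learning per the abstract would add further update terms.
import numpy as np

rng = np.random.default_rng(3)
V = rng.normal(size=(100, 5))                 # continuous training data
n_h = 8
W = rng.normal(size=(5, n_h)) * 0.1
b_v = np.zeros(5); b_h = np.zeros(n_h)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(100):
    # positive phase: binary hidden activations given the continuous data
    ph0 = sig(V @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: Gaussian reconstruction, then hidden probabilities again
    v1 = h0 @ W.T + b_v + rng.normal(size=V.shape)
    ph1 = sig(v1 @ W + b_h)
    lr = 0.001
    W += lr * (V.T @ ph0 - v1.T @ ph1) / len(V)
    b_v += lr * (V - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
# h0 illustrates the binary outputs produced from continuous inputs
```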
20100280982 | WATERSHED MEMORY SYSTEMS AND METHODS - An emotional memory control system and method for generating behavior. A sensory encoder provides a condensed encoding of a current circumstance received from an external environment. A memory associated with a regulator recognizes the encoding and activates one or more emotional springs according to a predefined set of instructions. The activated emotional springs can then transmit signals to at least one moment on a fractal moment sheet incorporated with a timeline for each channel in order to form one or more watersheds. An activation magnitude can be calculated for each moment and transmitted to a reaction relay. A synaptic link can then form between the moment and a motor encoder, thereby linking a specific moment with a specific action state. | 11-04-2010 |
20100299296 | ELECTRONIC LEARNING SYNAPSE WITH SPIKE-TIMING DEPENDENT PLASTICITY USING UNIPOLAR MEMORY-SWITCHING ELEMENTS - According to embodiments of the invention, a system, method and computer program product produce spike-timing-dependent plasticity in an artificial synapse. In an embodiment, a method includes: receiving a pre-synaptic spike in an electronic component; receiving a post-synaptic spike in the electronic component; in response to the pre-synaptic spike, generating a pre-synaptic pulse that occurs a predetermined period of time after the received pre-synaptic spike; in response to the post-synaptic spike, generating a post-synaptic pulse that starts at a baseline value and reaches a first voltage value a first period of time after the post-synaptic spike, followed by a second voltage value a second period of time after the post-synaptic spike, followed by a return to the baseline voltage a third period of time after the post-synaptic spike; applying the generated pre-synaptic pulse to a pre-synaptic node of a synaptic device that includes a uni-polar, two-terminal bi-stable device in series with a rectifying element; and applying the generated post-synaptic pulse to a post-synaptic node of the synaptic device, wherein the synaptic device changes from a first conductive state to a second conductive state based on the value of the input voltage applied to its pre- and post-synaptic nodes, and wherein the resultant state of the conductance of the synaptic device after the pre- and post-synaptic pulses are applied thereto depends on the relative timing of the received pre-synaptic spike with respect to the post-synaptic spike. | 11-25-2010 |
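The behaviour the pulse scheme realizes corresponds to the standard pair-based STDP curve: potentiation when the pre-synaptic spike precedes the post-synaptic spike, depression otherwise. The sketch below uses invented amplitudes and a single time constant, not the device's parameters.

```python
# Illustrative pair-based STDP curve; A_PLUS, A_MINUS, and TAU are assumed.
import math

A_PLUS, A_MINUS = 0.05, 0.025     # potentiation / depression amplitudes
TAU = 20.0                        # time constant in ms

def conductance_change(t_pre, t_post):
    """Positive change if pre precedes post (potentiation), else negative."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

print(conductance_change(10.0, 15.0))   # pre before post -> strengthen
print(conductance_change(15.0, 10.0))   # post before pre -> weaken
```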
20110004579 | Neuromorphic Circuit - Embodiments of the present invention are directed to neuromorphic circuits containing two or more internal neuron computational units. Each internal neuron computational unit includes a synchronization-signal input for receiving a synchronizing signal, at least one input for receiving input signals, and at least one output for transmitting an output signal. A memristive synapse connects an output signal line carrying output signals from a first set of one or more internal neurons to an input signal line that carries signals to a second set of one or more internal neurons. | 01-06-2011 |
20110029471 | DYNAMICALLY CONFIGURABLE, MULTI-PORTED CO-PROCESSOR FOR CONVOLUTIONAL NEURAL NETWORKS - A coprocessor and method for processing convolutional neural networks includes a configurable input switch coupled to an input. A plurality of convolver elements are enabled in accordance with the input switch. An output switch is configured to receive outputs from the set of convolver elements to provide data to output branches. A controller is configured to provide control signals to the input switch and the output switch such that the set of convolver elements are rendered active and a number of output branches are selected for a given cycle in accordance with the control signals. | 02-03-2011 |
20110055131 | METHOD OF UNIVERSAL COMPUTING DEVICE - A method for using artificial neural networks as a universal computing device to model the relationship between training inputs and corresponding outputs and to solve problems that are by nature estimation, classification, or ranking tasks. Raw data related to the problems is obtained, and a subset of that data is processed and distilled for application to this universal computing device. The training data includes inputs and their corresponding results, whose values may be continuous, categorical, or binary. The goal of this universal computing device is to solve problems through the universal approximation property of artificial neural networks. In this invention, a practical solution is created to resolve the issues of local minima and generalization, which have been the obstacles to the use of artificial neural networks for decades. This universal computing device uses an efficient and effective search algorithm, Retreat and Turn, to escape local minima and approach the best solutions. Generalization is achieved by monitoring its non-saturated hidden neurons, as related to its effective free parameters, and by an In-line Cross Validation process. Ranking output is produced by using the categorical results from an MLP neural network as the first-order ordering, with a baseline probability retained from the best logistic regression model as a secondary ordering. | 03-03-2011 |
20110066580 | CODEBOOK GENERATING METHOD - A codebook generating method comprises a dividing and transforming step dividing an original image into original blocks and transforming the original blocks into original vectors; a dividing step grouping the original vectors to obtain centroids; a first layer neuron training step selecting a portion of the centroids as first-level neurons; a grouping step assigning each of the original vectors to a closest first-level neuron so as to obtain groups; a second layer neuron assigning step assigning a number of second-level neurons in each of the groups, and selecting a portion of the original vectors in each of the groups as the second-level neurons; and a second layer neuron training step defining the original vectors in each of the groups as samples, training the second-level neurons in each of the groups to obtain final neurons, and storing vectors corresponding to the final neurons in a codebook. | 03-17-2011 |
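A condensed sketch of the layered codebook construction above follows, assuming k-means-style competitive training stands in for each neuron-training step; the block size, neuron counts, and synthetic image are invented for the example.

```python
# Two-layer codebook sketch; k-means is assumed as the competitive trainer.
import numpy as np

rng = np.random.default_rng(4)
image = rng.random((64, 64))
# dividing and transforming: 4x4 blocks -> 16-dimensional original vectors
blocks = image.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)

def kmeans(data, k, iters=10):
    c = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        lab = np.argmin(((data[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (lab == j).any():
                c[j] = data[lab == j].mean(axis=0)
    return c

centroids = kmeans(blocks, 16)               # grouping step: centroids
first_level = kmeans(centroids, 4)           # first-layer neurons
group_of = np.argmin(((blocks[:, None] - first_level[None]) ** 2).sum(-1), 1)

codebook = []                                # second-layer neurons per group
for g in range(4):
    samples = blocks[group_of == g]          # the group's original vectors
    if len(samples) >= 4:
        codebook.extend(kmeans(samples, 4))  # train second-level neurons
codebook = np.array(codebook)                # vectors stored in the codebook
```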
20110087628 | Methods for Updating and Training for a Self-Organising Card - The updating method comprises selecting the best winning neuron and second best winning neuron, modifying the prototype vectors of the best winning neuron and the neurons located around the best winning neuron in the direction of the vector of the learning point (x(k)), determining the neighbouring neurons (N(u*)) of the best winning neuron (u*) and, if the second best winning neuron (u**) is part of the neighbouring neurons (N(u*)), increasing the valuation of the connection between the first and second best winning neurons. | 04-14-2011 |
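Read literally, the update above finds the best and second-best winners, pulls the winner's neighbourhood toward the learning point x(k), and strengthens the connection between the two winners when they are neighbours. A toy sketch with a ring neighbourhood (an assumption) follows.

```python
# Best/second-best winner update; the ring neighbourhood and step size are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
prototypes = rng.random((10, 2))            # one prototype vector per neuron
connection = np.zeros((10, 10))             # valuation between neuron pairs
ALPHA = 0.2

def update(x):
    d = ((prototypes - x) ** 2).sum(axis=1)
    u_star, u_star2 = np.argsort(d)[:2]     # best and second-best winners
    neighbours = {(u_star - 1) % 10, (u_star + 1) % 10}
    # move the winner and its neighbours toward the learning point x(k)
    for u in {u_star} | neighbours:
        prototypes[u] += ALPHA * (x - prototypes[u])
    if u_star2 in neighbours:               # strengthen their connection
        connection[u_star, u_star2] += 1.0
        connection[u_star2, u_star] += 1.0

update(np.array([0.5, 0.5]))
```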
20110106741 | SYSTEM FOR ADDRESS-EVENT-REPRESENTATION NETWORK SIMULATION - A system, method, and design structure for address-event-representation network simulation are provided. The system includes a hardware structure with a plurality of interconnected processing modules configured to simulate a plurality of interconnected nodes. To simulate each node, the hardware structure includes a source table configured to receive an input message and identify a weight associated with a source of the input message. The hardware structure also includes state management logic configured to update a node state as a function of the identified weight, and generate an output signal responsive to the updated node state. The hardware structure further includes a target table configured to generate an output message in response to the output signal, identify a target to receive the output message, and transmit the output message. The hardware structure may further include learning logic configured to combine information about input messages and generated output signals, and to update weights. | 05-05-2011 |
20110131166 | FUZZY USERS' ATTRIBUTES PREDICTION BASED ON USERS' BEHAVIORS - A method, apparatus, system, article of manufacture, and computer readable storage medium provide the ability to predict and utilize a user's attributes. A sample user behavior and a sample user attribute are collected. A model is trained based on the sample user behavior and sample user attribute. Using the model, a probability of a predicted user attribute based on the sample user behavior is predicted. Using the model and the probability, the predicted user attribute is fuzzily determined based on a real user behavior. The predicted user attribute is used to improve a user's experience. | 06-02-2011 |
20110196819 | METHOD FOR APPROXIMATION OF OPTIMAL CONTROL FOR NONLINEAR DISCRETE TIME SYSTEMS - A method for approximation of optimal control for a nonlinear discrete time system in which the state variables are first obtained from a system model. Control sequences are then iteratively generated for the network to optimize control variables for the network and in which the value for each control variable is independent of the other control variables. Following optimization of the control variables, the control variables are then mapped onto a recurrent neural network utilizing conventional training methods. | 08-11-2011 |
20110202489 | LEARNING AND AUDITORY SCENE ANALYSIS IN GRADIENT FREQUENCY NONLINEAR OSCILLATOR NETWORKS - A method for learning connections between nonlinear oscillators in a neural network comprising the steps of providing a plurality of nonlinear oscillators, with each respective oscillator producing an oscillation distinct from the others in response to an input, and detecting an input at an at least first oscillator of the plurality of nonlinear oscillators. Detecting an input at an at least second oscillator of the plurality of nonlinear oscillators, comparing the oscillation of the at least first oscillator to the oscillation of the at least second oscillator at a point in time, and determining whether there is coherency between the oscillation of the at least first oscillator and the oscillation of the at least second oscillator. Changing at least one of the amplitude and phase of a connection between the at least first oscillator and the at least second oscillator as a function of coherency between the oscillation of the at least first oscillator and the oscillation of the at least second oscillator. | 08-18-2011 |
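One way to picture coherence-gated connection learning is with complex phasors standing in for the oscillators, as in the sketch below; the oscillator model, coherence threshold, and update sizes are all assumptions for illustration.

```python
# Coherence-gated Hebbian update between two phasor "oscillators"; the 0.9
# threshold and 0.05 amplitude step are invented.
import numpy as np

t = np.linspace(0, 1, 1000)
z1 = np.exp(2j * np.pi * 10 * t)            # oscillator 1 at 10 Hz
z2 = np.exp(2j * np.pi * 10 * t + 0.3j)     # oscillator 2, phase-shifted
conn = 0.1 * np.exp(0j)                     # complex connection (amplitude, phase)

coherence = np.abs(np.mean(z1 * np.conj(z2)))   # 1.0 when fully coherent
if coherence > 0.9:
    # adapt both amplitude and phase of the connection toward the
    # observed phase relation between the two oscillations
    rel_phase = np.angle(np.mean(z1 * np.conj(z2)))
    conn = (abs(conn) + 0.05) * np.exp(1j * rel_phase)
```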
20110270789 | COMPETITIVE BCM LEARNING RULE FOR IDENTIFYING FEATURES - Disclosed are systems, apparatuses, and methods for implementing a competitive BCM learning rule used in a neural network. Such a method includes identifying a maximally responding neuron with respect to a feature of an input signal. The maximally responding neuron is the neuron in a group that has a response with respect to the feature of the input signal that is greater than a response of each other neuron in the group. Such a method also includes applying a learning rule to weaken the response of each other neuron with respect to the feature of the input signal. The learning rule may also strengthen the response of the maximally responding neuron with respect to the feature of the input signal. | 11-03-2011 |
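A minimal sketch of the competitive BCM idea: the classic BCM update, with its sliding threshold, for the maximally responding neuron, plus a weakening term for every other neuron. The exact weakening rule here is assumed, not quoted from the disclosure.

```python
# Competitive BCM sketch; the simple anti-Hebbian weakening of losers is an
# assumed stand-in for the patented rule.
import numpy as np

rng = np.random.default_rng(7)
W = rng.random((3, 5))               # 3 neurons, 5 input dimensions
theta = np.ones(3)                   # sliding modification thresholds
LR, TAU = 0.05, 0.9

def step(x):
    global theta
    y = W @ x                                        # responses to the feature
    winner = int(np.argmax(y))                       # maximally responding neuron
    for i in range(len(W)):
        if i == winner:                              # classic BCM for the winner
            W[i] += LR * y[i] * (y[i] - theta[i]) * x
        else:                                        # competition: weaken the rest
            W[i] -= LR * y[i] * x
    theta = TAU * theta + (1 - TAU) * y ** 2         # threshold tracks E[y^2]

step(rng.random(5))
```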
20110282819 | DATA ANALYSIS METHOD AND SYSTEM - The present invention relates to the analysis of data to identify relationships between the input data and one or more conditions. One method of analysing such data is by the use of neural networks which are non-linear statistical data modelling tools, the structure of which may be changed based on information that is passed through the network during a training phase. A known problem that affects neural networks is the issue of overtraining which arises in overcomplex or overspecified systems when the capacity of the network significantly exceeds the needed parameters. The present invention provides a method of analysing data using a neural network with a constrained architecture that mitigates the problems associated with the prior art. | 11-17-2011 |
20110307432 | RELEVANCE FOR NAME SEGMENT SEARCHES - Improved search result relevance is provided for name segment searches performed by a general web search engine. Entity-related information is mined from web documents and search engine query logs, and metadata is indexed in a search system index. The metadata may include information identifying entity homepages, entity web pages at high quality top sites, other entity-related web pages, entity equivalent data, and/or entity misspellings data. The indexed metadata is employed to provide improved search results relevance for search queries that include an entity's name by improving the ranking of search results corresponding with entity-relevant web pages. | 12-15-2011 |
20120005141 | NEURAL NETWORK SYSTEM - A neural network system that can minimize circuit resources for constituting a self-learning mechanism and be reconfigured into network configurations suitable for various purposes includes a neural network engine that operates in a first and a second operation mode and performs an operation representing a characteristic determined by setting network configuration information and weight information with respect to the network configuration, and a von Neumann-type microprocessor that is connected to the neural network engine and performs a cooperative operation in accordance with the first or the second operation mode together with the neural network engine. The von Neumann-type microprocessor recalculates the weight information or remakes the configuration information as a cooperative operation according to the first operation mode, and sets or updates the configuration information or the weight information set in the neural network engine, as a cooperative operation according to the second operation mode. | 01-05-2012 |
20120011087 | METHODS AND SYSTEMS FOR REPLACEABLE SYNAPTIC WEIGHT STORAGE IN NEURO-PROCESSORS - Certain embodiments of the present disclosure support techniques for storing synaptic weights separately from a neuro-processor chip into a replaceable storage. The replaceable synaptic memory gives a unique functionality to the neuro-processor and improves its flexibility for supporting a large variety of applications. In addition, the replaceable synaptic storage can provide more choices for the type of memory used, and might decrease the area and implementation cost of the overall neuro-processor chip. | 01-12-2012 |
20120011088 | COMMUNICATION AND SYNAPSE TRAINING METHOD AND HARDWARE FOR BIOLOGICALLY INSPIRED NETWORKS - Certain embodiments of the present disclosure support techniques for training of synapses in biologically inspired networks. Only one device based on a memristor can be used as a synaptic connection between a pair of neurons. The training of synaptic weights can be achieved with a low current consumption. A proposed synapse training circuit may be shared by a plurality of incoming/outgoing connections, while only one digitally implemented pulse-width modulation (PWM) generator can be utilized per neuron circuit for generating synapse-training pulses. Only up to three phases of a slow clock can be used for both the neuron-to-neuron communications and synapse training. Some special control signals can be also generated for setting up synapse training events. By means of these signals, the synapse training circuit can be in a high-impedance state outside the training events, thus the synaptic resistance (i.e., the synaptic weight) is not affected outside the training process. | 01-12-2012 |
20120011089 | METHODS AND SYSTEMS FOR NEURAL PROCESSOR TRAINING BY ENCOURAGEMENT OF CORRECT OUTPUT - Certain embodiments of the present disclosure support implementation of a neural processor with synaptic weights, wherein training of the synapse weights is based on encouraging a specific output neuron to generate a spike. The implemented neural processor can be applied for classification of images and other patterns. | 01-12-2012 |
20120023052 | SYSTEMS, METHODS, AND APPARATUS FOR OTOACOUSTIC PROTECTION OF AUTONOMIC SYSTEMS - Systems, methods and apparatus are provided through which in some embodiments an autonomic unit transmits an otoacoustic signal to counteract a potentially harmful incoming signal. | 01-26-2012 |
20120072383 | METHOD FOR THE SELECTION OF ATTRIBUTES FOR STATISTICAL LEARNING FOR OBJECT DETECTION AND RECOGNITION - The invention relates to an attribute selection method for performing statistical learning of descriptors intended to enable automatic recognition and/or detection of an object from a set of images, the method characterized by the following steps: | 03-22-2012 |
20120109863 | CANONICAL SPIKING NEURON NETWORK FOR SPATIOTEMPORAL ASSOCIATIVE MEMORY - Embodiments of the invention relate to canonical spiking neurons for spatiotemporal associative memory. An aspect of the invention provides a spatiotemporal associative memory including a plurality of electronic neurons having a layered neural net relationship with directional synaptic connectivity. The plurality of electronic neurons are configured to detect the presence of a spatiotemporal pattern in a real-time data stream and extract the spatiotemporal pattern. The plurality of electronic neurons are further configured to, based on learning rules, store the spatiotemporal pattern in the plurality of electronic neurons and, upon being presented with a version of the spatiotemporal pattern, retrieve the stored spatiotemporal pattern. | 05-03-2012 |
20120109864 | NEUROMORPHIC AND SYNAPTRONIC SPIKING NEURAL NETWORK WITH SYNAPTIC WEIGHTS LEARNED USING SIMULATION - Embodiments of the invention provide neuromorphic-synaptronic systems, including neuromorphic-synaptronic circuits implementing spiking neural network with synaptic weights learned using simulation. One embodiment includes simulating a spiking neural network to generate synaptic weights learned via the simulation while maintaining one-to-one correspondence between the simulation and a digital circuit chip. The learned synaptic weights are loaded into the digital circuit chip implementing a spiking neural network, the digital circuit chip comprising a neuromorphic-synaptronic spiking neural network including plural synapse devices interconnecting multiple digital neurons. | 05-03-2012 |
20120109865 | USING AFFINITY MEASURES WITH SUPERVISED CLASSIFIERS - A non-binary affinity measure between any two data points for a supervised classifier may be determined. For example, affinity measures may be determined for tree, kernel-based, nearest neighbor-based and neural network supervised classifiers. By providing non-binary affinity measures using supervised classifiers, more information may be provided for clustering, analyzing and, particularly, for visualizing the results of data mining. | 05-03-2012 |
20120173471 | SYNAPTIC WEIGHT NORMALIZED SPIKING NEURONAL NETWORKS - Neuronal networks of electronic neurons interconnected via electronic synapses with synaptic weight normalization. The synaptic weights are based on learning rules for the neuronal network, such that a synaptic weight for a synapse determines the effect of a spiking source neuron on a target neuron connected via the synapse. Each synaptic weight is maintained within a predetermined range by performing synaptic weight normalization for neural network stability. | 07-05-2012 |
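The normalization itself is easy to sketch: after each learning step, rescale a neuron's incoming synaptic weights to a constant total and clip them into the permitted range. The bounds and target sum below are illustrative.

```python
# Synaptic weight normalization sketch; W_MIN, W_MAX, and TARGET_SUM are
# illustrative, not values from the disclosure.
import numpy as np

W_MIN, W_MAX, TARGET_SUM = 0.0, 1.0, 4.0

def normalize(weights):
    """Rescale to a constant total drive, then clip to the allowed range."""
    s = weights.sum()
    w = weights * (TARGET_SUM / s) if s > 0 else weights
    return np.clip(w, W_MIN, W_MAX)

incoming = np.array([0.5, 1.7, -0.2, 0.8, 0.6])
print(normalize(incoming))   # bounded, stable total input onto the neuron
```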
20120215728 | PROCESSOR NODE, ARTIFICIAL NEURAL NETWORK AND METHOD OF OPERATION OF AN ARTIFICIAL NEURAL NETWORK - There is provided a temporal processor node for use as an input node in the input layer of a class network in an artificial neural network, the class network being operable to generate an output signal based on a network input vector component received by the input layer, the temporal processor node being operable to receive observation data representing the observed state of a monitored entity as a component of the network input vector. The temporal processor node comprises a memory module operable to store a most recently observed state of the monitored entity in the memory module as a current state, a modification module having a timer, the timer being operable to output a value representing time elapsed since observation of the current state, the modification module being operable to modify the current state with a modification factor dependent on the value output by the timer, wherein when triggered, the temporal processor node is operable to output the modified current state as a representation of the current state. | 08-23-2012 |
20120246102 | ADAPTIVE ANALYTICAL BEHAVIORAL AND HEALTH ASSISTANT SYSTEM AND RELATED METHOD OF USE - The present disclosure relates to systems and methods for providing an Adaptive Analytical Behavioral and Health Assistant. These systems and methods may include collecting one or more of patient behavior information, clinical information, or personal information; learning one or more patterns that cause an event based on the collected information and one or more pattern recognition algorithms; identifying one or more interventions to prevent the event from occurring or to facilitate the event based on the learned patterns; preparing a plan based on the collected information and the identified interventions; and/or presenting the plan to a user or executing the plan. | 09-27-2012 |
20120254086 | DEEP CONVEX NETWORK WITH JOINT USE OF NONLINEAR RANDOM PROJECTION, RESTRICTED BOLTZMANN MACHINE AND BATCH-BASED PARALLELIZABLE OPTIMIZATION - A method is disclosed herein that includes an act of causing a processor to access a deep-structured, layered or hierarchical model, called deep convex network, retained in a computer-readable medium, wherein the deep-structured model comprises a plurality of layers with weights assigned thereto. This layered model can produce the output serving as the scores to combine with transition probabilities between states in a hidden Markov model and language model scores to form a full speech recognizer. The method makes joint use of nonlinear random projections and RBM weights, and it stacks a lower module's output with the raw data to establish its immediately higher module. Batch-based, convex optimization is performed to learn a portion of the deep convex network's weights, rendering it appropriate for parallel computation to accomplish the training. The method can further include the act of jointly substantially optimizing the weights, the transition probabilities, and the language model scores of the deep-structured model using the optimization criterion based on a sequence rather than a set of unrelated frames. | 10-04-2012 |
20120259804 | RECONFIGURABLE AND CUSTOMIZABLE GENERAL-PURPOSE CIRCUITS FOR NEURAL NETWORKS - A reconfigurable neural network circuit is provided. The reconfigurable neural network circuit comprises an electronic synapse array including multiple synapses interconnecting a plurality of digital electronic neurons. Each neuron comprises an integrator that integrates input spikes and generates a signal when the integrated inputs exceed a threshold. The circuit further comprises a control module for reconfiguring the synapse array. The control module comprises a global finite state machine that controls timing for operation of the circuit, and a priority encoder that allows spiking neurons to sequentially access the synapse array. | 10-11-2012 |
20120303565 | LEARNING PROCESSES FOR SINGLE HIDDEN LAYER NEURAL NETWORKS WITH LINEAR OUTPUT UNITS - Learning processes for a single hidden layer neural network, including linear input units, nonlinear hidden units, and linear output units, calculate the lower-layer network parameter gradients by taking into consideration a solution for the upper-layer network parameters. The upper-layer network parameters are calculated by a closed form formula given the lower-layer network parameters. An accelerated gradient algorithm can be used to update the lower-layer network parameters. A weighted gradient also can be used. With the combination of these techniques, accelerated training with faster convergence, to a point with a lower error rate, can be obtained. | 11-29-2012 |
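The closed-form upper layer is the key trick in the preceding entry: with linear output units, the output weights solve a least-squares problem exactly, and the hidden-layer gradient is computed with that solution substituted in. The sketch below shows a plain (non-accelerated, unweighted) version under those assumptions.

```python
# Single-hidden-layer training sketch: closed-form linear output weights via
# the pseudo-inverse, gradient descent on the lower layer. The accelerated
# and weighted-gradient variants in the abstract are not reproduced here.
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 3)); T = np.sin(X) @ np.ones((3, 1))
W = rng.normal(size=(3, 10)) * 0.5           # lower-layer parameters

for _ in range(300):
    H = np.tanh(X @ W)                       # nonlinear hidden units
    U = np.linalg.pinv(H) @ T                # closed-form upper-layer solution
    E = H @ U - T                            # residual with the optimal U
    G = X.T @ ((E @ U.T) * (1 - H ** 2))     # lower-layer gradient, U plugged in
    W -= 0.01 * G
```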
20120303566 | METHOD AND APPARATUS FOR UNSUPERVISED TRAINING OF INPUT SYNAPSES OF PRIMARY VISUAL CORTEX SIMPLE CELLS AND OTHER NEURAL CIRCUITS - Certain aspects of the present disclosure present a technique for unsupervised training of input synapses of primary visual cortex (V1) simple cells and other neural circuits. The proposed unsupervised training method utilizes simple neuron models for both Retinal Ganglion Cell (RGC) and V1 layers. The model simply adds the weighted inputs of each cell, wherein the inputs can have positive or negative values. The resulting weighted sums of inputs represent activations that can also be positive or negative. In an aspect of the present disclosure, the weights of each V1 cell can be adjusted depending on a sign of corresponding RGC output and a sign of activation of that V1 cell in the direction of increasing the absolute value of the activation. The RGC-to-V1 weights can be positive and negative for modeling ON and OFF RGCs, respectively. | 11-29-2012 |
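The described sign rule can be written in two lines, as sketched below: each weight moves in the direction sign(RGC output) times sign(V1 activation), which necessarily increases the activation's absolute value. Dimensions and the learning rate are arbitrary.

```python
# Sign-based unsupervised update for RGC-to-V1 weights; sizes are invented.
import numpy as np

rng = np.random.default_rng(9)
rgc = rng.normal(size=16)            # RGC outputs, positive (ON) or negative (OFF)
W = rng.normal(size=(16, 4)) * 0.1   # RGC-to-V1 weights
LR = 0.01

a = rgc @ W                          # V1 activations: weighted sums of inputs
# each weight moves so that |activation| grows
W += LR * np.outer(np.sign(rgc), np.sign(a))
```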
20120317061 | TIME ENCODING USING INTEGRATE AND FIRE SAMPLER - Systems and methods of time encoding using an integrate and fire (IF) sampler are disclosed. In an example, a method includes receiving input signals for separate classes. The method also includes generating a pulse train based on the input signals. The method also includes binning the pulse train to generate a feature vector. | 12-13-2012 |
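An integrate-and-fire sampler of this kind reduces to: integrate the input, emit a pulse on each threshold crossing, reset by subtraction, then histogram the pulse times into bins. The threshold and bin count in the sketch are invented.

```python
# IF time-encoding and binning sketch; THRESHOLD and the 10 bins are assumed.
import numpy as np

rng = np.random.default_rng(10)
signal = np.abs(np.sin(np.linspace(0, 6, 600))) + 0.1 * rng.random(600)

THRESHOLD = 5.0
acc, spike_times = 0.0, []
for i, s in enumerate(signal):       # integrate, fire on threshold crossing
    acc += s
    if acc >= THRESHOLD:
        spike_times.append(i)
        acc -= THRESHOLD             # reset by subtraction

# bin the pulse train into a fixed-length feature vector of spike counts
feature, _ = np.histogram(spike_times, bins=10, range=(0, len(signal)))
print(feature)
```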
20120317062 | RECONFIGURABLE AND CUSTOMIZABLE GENERAL-PURPOSE CIRCUITS FOR NEURAL NETWORKS - A reconfigurable neural network circuit is provided. The reconfigurable neural network circuit comprises an electronic synapse array including multiple synapses interconnecting a plurality of digital electronic neurons. Each neuron comprises an integrator that integrates input spikes and generates a signal when the integrated inputs exceed a threshold. The circuit further comprises a control module for reconfiguring the synapse array. The control module comprises a global finite state machine that controls timing for operation of the circuit, and a priority encoder that allows spiking neurons to sequentially access the synapse array. | 12-13-2012 |
20120330871 | USING VALUES OF PRPD ENVELOPE TO CLASSIFY SINGLE AND MULTIPLE PARTIAL DISCHARGE (PD) DEFECTS IN HV EQUIPMENT - A method, system and computer program product for classifying the types of partial discharge experienced by high-voltage motors, reducing the labor and expertise required for such classification. The method, system and computer program product utilize feature extraction techniques to preprocess partial discharge measurement data to suit neural network input requirements. | 12-27-2012 |
20120330872 | CANONICAL SPIKING NEURON NETWORK FOR SPATIOTEMPORAL ASSOCIATIVE MEMORY - Embodiments of the invention relate to canonical spiking neurons for spatiotemporal associative memory. An aspect of the invention provides a spatiotemporal associative memory including a plurality of electronic neurons having a layered neural net relationship with directional synaptic connectivity. The plurality of electronic neurons are configured to detect the presence of a spatiotemporal pattern in a real-time data stream and extract the spatiotemporal pattern. The plurality of electronic neurons are further configured to, based on learning rules, store the spatiotemporal pattern in the plurality of electronic neurons and, upon being presented with a version of the spatiotemporal pattern, retrieve the stored spatiotemporal pattern. | 12-27-2012 |
20130013543 | METHOD FOR THE COMPUTER-AIDED CONTROL OF A TECHNICAL SYSTEM - A method for the computer-aided control of a technical system is provided. A recurrent neural network is used for modeling the dynamic behaviour of the technical system; its input layer contains states of the technical system and actions carried out on the technical system, which are supplied to a recurrent hidden layer. The output layer of the recurrent neural network is represented by an evaluation signal which reproduces the dynamics of the technical system. The hidden states generated using the recurrent neural network are used to control the technical system on the basis of a learning and/or optimization method. | 01-10-2013 |
20130018832 | DATA STRUCTURE AND A METHOD FOR USING THE DATA STRUCTURE - A method is proposed of generating a data structure that comprises a plurality of modules containing neurons. Each module performs a function defined by the neurons. The modules are structured hierarchically in layers, in a bottom-up manner. Competitive clustering is used to generate the neurons. In the bottom layer, the neurons are associated with data clusters in training data, and in higher layers the neurons are associated with clusters in the output of the next lower layer. Hebbian Association is used to generate "connectivity" data, by which is meant data for pairs of the neurons (in the same layer or in different layers) indicative of the correlation between the output of the pair of neurons. | 01-17-2013 |
20130018833 | NEURAL NETWORK SYSTEM AND METHOD FOR CONTROLLING OUTPUT BASED ON USER FEEDBACK - For various information sources, information output based on user feedback about information from the sources is controlled. A neural network module selects object(s) to receive information from the information sources based on inputs and weight values during that epoch. A server, associated with the neural network module, provides the object(s) to recipients. The object(s) may comprise electronic mail messages, chat participants viewers, or slots within a link directory page. The recipients provide feedback about the information during an epoch. At the conclusion of an epoch, the neural network takes the feedback provided by the recipients and generates a rating value for the object(s). Based on the rating value and the selections made, the neural network re-determines the weight values within the network. The neural network then selects the object(s) to receive information during a subsequent epoch using the re-determined weight values and the inputs for that subsequent epoch. | 01-17-2013 |
20130024409 | METHOD AND APPARATUS OF ROBUST NEURAL TEMPORAL CODING, LEARNING AND CELL RECRUITMENTS FOR MEMORY USING OSCILLATION - Certain aspects of the present disclosure support a technique for robust neural temporal coding, learning and cell recruitment for memory using oscillations. Methods are proposed for distinguishing temporal patterns, in contrast to other "temporal pattern" methods that match merely the coincidence of inputs or the order of inputs. Moreover, the present disclosure proposes practical methods that are biologically inspired and consistent but reduced in complexity and capable of coding, decoding, recognizing, and learning temporal spike signal patterns. In this disclosure, extensions are proposed to a scalable temporal neural model for robustness, confidence or integrity coding, and recruitment of cells for efficient temporal pattern memory. | 01-24-2013 |
20130041859 | NEURAL NETWORK FREQUENCY CONTROL - Systems and methods for controlling frequency output of an electronic oscillator to compensate for effects of a parameter experienced by the oscillator incorporate artificial neural network processing functionality for generating correction signals. A neural network processing module includes one or more neurons which receive one or more inputs corresponding to a parameter of an electronic oscillator, such as temperature. Weights are calculated and applied to inputs to the neurons of the neural network as part of a training process, wherein the weights help shape the output of the neural network processing module. The neural network may include a linear summation module configured to provide an output signal that is at least partially based on outputs of the one or more neurons. | 02-14-2013 |
20130073493 | UNSUPERVISED, SUPERVISED, AND REINFORCED LEARNING VIA SPIKING COMPUTATION - The present invention relates to unsupervised, supervised and reinforced learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module. | 03-21-2013 |
20130073494 | EVENT-DRIVEN UNIVERSAL NEURAL NETWORK CIRCUIT - The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network. | 03-21-2013 |
20130073495 | ELEMENTARY NETWORK DESCRIPTION FOR NEUROMORPHIC SYSTEMS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. A neuronal network and methods for operating neuronal networks comprise a plurality of units, where each unit has a memory, and a plurality of doublets, each doublet being connected to a pair of the plurality of units. Execution of unit update rules for the plurality of units is order-independent, and execution of doublet event rules for the plurality of doublets is order-independent. | 03-21-2013 |
20130073496 | Tag-based apparatus and methods for neural networks - Apparatus and methods for high-level neuromorphic network description (HLND) using tags. The framework may be used to define node types, define node-to-node connection types, instantiate node instances for different node types, and/or generate instances of connection types between these nodes. The HLND format may be used to define node types, define node-to-node connection types, instantiate node instances for different node types, dynamically identify and/or select network subsets using tags, and/or generate instances of one or more connections between these nodes using such subsets. To facilitate HLND operation and disambiguation, individual elements of the network (e.g., nodes, extensions, connections, I/O ports) may be assigned at least one unique tag. The tags may be used to identify and/or refer to respective network elements. The HLND kernel may comprise an interface to the Elementary Network Description. | 03-21-2013 |
20130117209 | METHOD AND APPARATUS FOR USING MEMORY IN PROBABILISTIC MANNER TO STORE SYNAPTIC WEIGHTS OF NEURAL NETWORK - Certain aspects of the present disclosure support a technique for utilizing a memory in probabilistic manner to store information about weights of synapses of a neural network. | 05-09-2013 |
20130117210 | METHODS AND APPARATUS FOR UNSUPERVISED NEURAL REPLAY, LEARNING REFINEMENT, ASSOCIATION AND MEMORY TRANSFER: NEURAL COMPONENT REPLAY - Certain aspects of the present disclosure support techniques for unsupervised neural replay, learning refinement, association and memory transfer. | 05-09-2013 |
20130117211 | METHODS AND APPARATUS FOR UNSUPERVISED NEURAL REPLAY, LEARNING REFINEMENT, ASSOCIATION AND MEMORY TRANSFER: NEURAL COMPONENT MEMORY TRANSFER - Certain aspects of the present disclosure support techniques for unsupervised neural replay, learning refinement, association and memory transfer. | 05-09-2013 |
20130117212 | METHODS AND APPARATUS FOR UNSUPERVISED NEURAL REPLAY, LEARNING REFINEMENT, ASSOCIATION AND MEMORY TRANSFER: NEURAL ASSOCIATIVE LEARNING, PATTERN COMPLETION, SEPARATION, GENERALIZATION AND HIERARCHICAL REPLAY - Certain aspects of the present disclosure support techniques for unsupervised neural replay, learning refinement, association and memory transfer. | 05-09-2013 |
20130117213 | METHODS AND APPARATUS FOR UNSUPERVISED NEURAL REPLAY, LEARNING REFINEMENT, ASSOCIATION AND MEMORY TRANSFER: STRUCTURAL PLASTICITY AND STRUCTURAL CONSTRAINT MODELING - Certain aspects of the present disclosure support techniques for unsupervised neural replay, learning refinement, association and memory transfer. | 05-09-2013 |
20130138589 | EXPLOITING SPARSENESS IN TRAINING DEEP NEURAL NETWORKS - Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. It is noted that the continued DNN training tends to converge much faster than the initial training. | 05-30-2013 |
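The schedule above reads as: dense training sweeps first, then freeze a sparsity mask chosen so that at most a prescribed number of interconnections survive, and continue back-propagation only on those. A schematic of the masking step, with made-up sizes:

```python
# Sparseness masking sketch for DNN training; layer size and the connection
# budget are invented, and the gradient here is a random stand-in.
import numpy as np

rng = np.random.default_rng(11)
W = rng.normal(size=(256, 256))              # one hidden layer's weights
MAX_CONNECTIONS = 16384                      # prescribed maximum to keep

# minimum weight threshold keeping (modulo ties) MAX_CONNECTIONS weights
flat = np.sort(np.abs(W).ravel())[::-1]
threshold = flat[MAX_CONNECTIONS - 1]
mask = np.abs(W) >= threshold                # interconnections still trained

def masked_update(W, grad, lr=0.01):
    """Back-propagation step that only updates the surviving connections."""
    return W - lr * grad * mask

W = masked_update(W, rng.normal(size=W.shape))
```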
20130151448 | APPARATUS AND METHODS FOR IMPLEMENTING LEARNING FOR ANALOG AND SPIKING SIGNALS IN ARTIFICIAL NEURAL NETWORKS - Apparatus and methods for universal node design implementing a universal learning rule in a mixed signal spiking neural network. In one implementation, at one instance, the node apparatus, operable according to the parameterized universal learning model, receives a mixture of analog and spiking inputs, and generates a spiking output based on the model parameter for that node that is selected by the parameterized model for that specific mix of inputs. At another instance, the same node receives a different mix of inputs, that also may comprise only analog or only spiking inputs and generates an analog output based on a different value of the node parameter that is selected by the model for the second mix of inputs. In another implementation, the node apparatus may change its output from analog to spiking responsive to a training input for the same inputs. | 06-13-2013 |
20130151449 | APPARATUS AND METHODS FOR IMPLEMENTING LEARNING FOR ANALOG AND SPIKING SIGNALS IN ARTIFICIAL NEURAL NETWORKS - Apparatus and methods for universal node design implementing a universal learning rule in a mixed signal spiking neural network. In one implementation, at one instance, the node apparatus, operable according to the parameterized universal learning model, receives a mixture of analog and spiking inputs, and generates a spiking output based on the model parameter for that node that is selected by the parameterized model for that specific mix of inputs. At another instance, the same node receives a different mix of inputs, that also may comprise only analog or only spiking inputs and generates an analog output based on a different value of the node parameter that is selected by the model for the second mix of inputs. In another implementation, the node apparatus may change its output from analog to spiking responsive to a training input for the same inputs. | 06-13-2013 |
20130151450 | NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION - Apparatus and methods for universal node design implementing a universal learning rule in a mixed-signal spiking neural network. In one implementation, at one instance, the node apparatus, operable according to the parameterized universal learning model, receives a mixture of analog and spiking inputs and generates a spiking output based on the model parameter for that node that is selected by the parameterized model for that specific mix of inputs. At another instance, the same node receives a different mix of inputs, which may comprise only analog or only spiking inputs, and generates an analog output based on a different value of the node parameter that is selected by the model for the second mix of inputs. In another implementation, the node apparatus may change its output from analog to spiking responsive to a training input for the same inputs. | 06-13-2013
20130159231 | MULTI-MODAL NEURAL NETWORK FOR UNIVERSAL, ONLINE LEARNING - In one embodiment, the present invention provides a neural network comprising multiple modalities. Each modality comprises multiple neurons. The neural network further comprises an interconnection lattice for cross-associating signaling between the neurons in different modalities. The interconnection lattice includes a plurality of perception neuron populations along a number of bottom-up signaling pathways, and a plurality of action neuron populations along a number of top-down signaling pathways. Each perception neuron along a bottom-up signaling pathway has a corresponding action neuron along a reciprocal top-down signaling pathway. An input neuron population configured to receive sensory input drives perception neurons along a number of bottom-up signaling pathways. A first set of perception neurons along bottom-up signaling pathways drive a first set of action neurons along top-down signaling pathways. Action neurons along a number of top-down signaling pathways drive an output neuron population configured to generate motor output. | 06-20-2013 |
20130204818 | MODELING METHOD OF NEURO-FUZZY SYSTEM - A modeling method for a neuro-fuzzy system, comprising a rule-defining process and a network-building process, is disclosed. The rule-defining process divides a plurality of training data into a plurality of groups to define a plurality of fuzzy rules accordingly, and the network-building process constructs a fuzzy neural network based on the fuzzy rules obtained by the rule-defining process. The provided modeling method is capable of building a neuro-fuzzy system that closely approximates the original function that generated the training data. | 08-08-2013
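The two-stage scheme above (group the training data, then build the network from per-group rules) can be sketched in a few lines. The following Python example is a hypothetical illustration: the 1-D k-means grouping, Gaussian membership functions, and constant rule consequents are assumptions chosen for brevity, not the patent's construction.

```python
import numpy as np

# Rule-defining sketch: group 1-D training samples with k-means, then give
# each group a Gaussian membership function and a constant consequent;
# inference blends rule outputs by membership.
def kmeans_1d(x, k, iters=20):
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(1, 0.3, 50)])
y = np.sin(x)                                    # the "original function"
centers, labels = kmeans_1d(x, k=2)
sigmas = np.array([x[labels == j].std() + 1e-3 for j in range(2)])
consequents = np.array([y[labels == j].mean() for j in range(2)])

def infer(x0):
    # Network-building stage in miniature: fire each rule by membership,
    # then take the normalized weighted average of rule consequents.
    mu = np.exp(-0.5 * ((x0 - centers) / sigmas) ** 2)
    return float(mu @ consequents / mu.sum())

approx = infer(0.9)   # approximately sin(0.9) for inputs near a group center
```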
20130204819 | METHODS AND APPARATUS FOR SPIKING NEURAL COMPUTATION - Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation ẋ = Ax + Bu to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs. | 08-08-2013
20130204820 | METHODS AND APPARATUS FOR SPIKING NEURAL COMPUTATION - Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation ẋ = Ax + Bu to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs. | 08-08-2013
20130212052 | TENSOR DEEP STACKED NEURAL NETWORK - A tensor deep stacked neural (T-DSN) network for obtaining predictions for discriminative modeling problems. The T-DSN network and method use bilinear modeling with a tensor representation to map a hidden layer to the prediction layer. The T-DSN network is constructed by stacking blocks of a single hidden layer tensor neural network (SHLTNN) on top of each other. The single hidden layer for each block is then separated or divided into a plurality of two or more sections. In some embodiments, the hidden layer is separated into a first hidden layer section and a second hidden layer section. These multiple sections of the hidden layer are combined using a product operator to obtain an implicit hidden layer having a single section. In some embodiments, the product operator is a Khatri-Rao product. A prediction is made using the implicit hidden layer and weights, and the output prediction layer is consequently obtained. | 08-15-2013
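The key operation in this entry, combining two hidden-layer sections with a Khatri-Rao (column-wise Kronecker) product to form an implicit hidden layer, can be shown directly. A minimal NumPy sketch follows; the layer sizes and sigmoid nonlinearity are illustrative assumptions.

```python
import numpy as np

def khatri_rao(H1, H2):
    """Column-wise Kronecker (Khatri-Rao) product of two hidden sections.

    H1: (d1, n) and H2: (d2, n) activations for n samples; the result is
    the (d1*d2, n) implicit hidden layer used by the prediction layer.
    """
    d1, n = H1.shape
    d2, n2 = H2.shape
    assert n == n2
    return (H1[:, None, :] * H2[None, :, :]).reshape(d1 * d2, n)

# One block: two sigmoid hidden sections, then a linear prediction layer
# over the implicit (product) hidden layer. Sizes are hypothetical.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))       # 20 input dims, 5 samples
W1 = rng.normal(size=(8, 20))      # weights for hidden section 1
W2 = rng.normal(size=(6, 20))      # weights for hidden section 2
U = rng.normal(size=(3, 8 * 6))    # prediction-layer weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
H = khatri_rao(sigmoid(W1 @ X), sigmoid(W2 @ X))
Y_pred = U @ H                     # (3, 5) block predictions
```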
20130218821 | Round-trip engineering apparatus and methods for neural networks - Apparatus and methods for a high-level neuromorphic network description (HLND) framework that may be configured to enable users to define neuromorphic network architectures using a unified and unambiguous representation that is both human-readable and machine-interpretable. The framework may be used to define node types and node-to-node connection types, instantiate node instances for different node types, and generate instances of connection types between these nodes. To facilitate framework usage, the HLND format may provide the flexibility required by computational neuroscientists and, at the same time, a user-friendly interface for users with limited experience in modeling neurons. The HLND kernel may comprise an interface to the Elementary Network Description (END) that is optimized for efficient representation of neuronal systems in a hardware-independent manner and enables seamless translation of the HLND model description into hardware instructions for execution by various processing modules. | 08-22-2013
20130226851 | METHOD AND APPARATUS FOR MODELING NEURAL RESOURCE BASED SYNAPTIC PLASTICITY - Certain aspects of the present disclosure support a method of designing a resource model in hardware (or software) for learning in spiking neural networks. The present disclosure comprises accounting for resources in a different domain (e.g., negative log lack-of-resources instead of availability of resources), modulating weight changes for multiple spike events upon a single trigger, and strategically advancing or retarding the resource replenishment or decay (respectively) to overcome the limitation of single-event-based triggering. | 08-29-2013
20130268472 | ARTIFICIAL INTELLIGENCE AND METHODS FOR RELATING HERBAL INGREDIENTS WITH ILLNESSES IN TRADITIONAL CHINESE MEDICINE - Described herein are systems and methods for identifying herbal ingredients effective in treating illnesses in Traditional Chinese Medicine (TCM) using an artificial neural network. | 10-10-2013 |
20130282634 | DEEP CONVEX NETWORK WITH JOINT USE OF NONLINEAR RANDOM PROJECTION, RESTRICTED BOLTZMANN MACHINE AND BATCH-BASED PARALLELIZABLE OPTIMIZATION - A method is disclosed herein that includes an act of causing a processor to access a deep-structured, layered or hierarchical model, called a deep convex network, retained in a computer-readable medium, wherein the deep-structured model comprises a plurality of layers with weights assigned thereto. This layered model can produce the output serving as the scores to combine with transition probabilities between states in a hidden Markov model and language model scores to form a full speech recognizer. Batch-based, convex optimization is performed to learn a portion of the deep convex network's weights, rendering it appropriate for parallel computation to accomplish the training. The method can further include the act of jointly substantially optimizing the weights, the transition probabilities, and the language model scores of the deep-structured model using the optimization criterion based on a sequence rather than a set of unrelated frames. | 10-24-2013 |
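For intuition about the batch-based convex step mentioned above: in deep-stacking architectures of this kind, with the lower-layer weights held fixed, fitting the upper-layer weights to targets is a ridge-regularized least-squares problem with a closed-form batch solution. The sketch below assumes that formulation; the sizes and the regularization constant are illustrative, not values from the filing.

```python
import numpy as np

# For one block, with lower-layer weights W fixed, learning the upper-layer
# weights U given targets T is convex and solvable in closed form.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 100))      # 30 features, 100 training frames
T = rng.normal(size=(10, 100))      # 10 target dims (e.g., state scores)
W = rng.normal(size=(50, 30))       # fixed (e.g., randomly projected) lower weights

H = 1.0 / (1.0 + np.exp(-(W @ X)))  # hidden activations
lam = 1e-3                           # ridge term (hypothetical value)
U = T @ H.T @ np.linalg.inv(H @ H.T + lam * np.eye(H.shape[0]))
scores = U @ H                       # block output, usable for stacking
```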
20130282635 | Method For The Computer-Assisted Modeling Of A Technical System - A method for computer-assisted modeling of a technical system is disclosed. At multiple different operating points, the technical system is described by a first state vector with first state variable(s) and by a second state vector with second state variable(s). A neural network comprising a special form of a feed-forward network is used for the computer-assisted modeling of said system. The feed-forward network includes at least one bridging connector that connects a neural layer with an output layer, thereby bridging at least one hidden layer, which allows the training of networks with multiple hidden layers in a simple manner with known learning methods, e.g., the gradient descent method. The method may be used for modeling a gas turbine system, in which a neural network trained using the method may be used to estimate or predict nitrogen oxide or carbon monoxide emissions or parameters relating to combustion chamber vibrations. | 10-24-2013 |
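A bridging connector of the kind described, routing a neural layer's activations directly to the output layer past a hidden layer, is easy to sketch. The following forward pass is a hypothetical illustration; the shapes, nonlinearities, and the emissions interpretation in the comment are assumptions.

```python
import numpy as np

def forward(x, W_h1, W_h2, W_out, W_bridge):
    """Feed-forward pass with one bridging connector.

    The bridge connects the first neural layer directly to the output
    layer, bypassing the second hidden layer, so error gradients reach
    early layers along a short path during training.
    """
    h1 = np.tanh(W_h1 @ x)
    h2 = np.tanh(W_h2 @ h1)
    return W_out @ h2 + W_bridge @ h1   # hidden path + bridged path

rng = np.random.default_rng(3)
x = rng.normal(size=12)                  # state vector at one operating point
y = forward(x,
            rng.normal(size=(16, 12)),
            rng.normal(size=(16, 16)),
            rng.normal(size=(2, 16)),
            rng.normal(size=(2, 16)))    # e.g., predicted NOx/CO emissions
```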
20130311413 | Electronic charge sharing CMOS-memristor neural circuit - A CMOS-memristor circuit is constructed to behave as a trainable artificial synapse for neuromorphic hardware systems. The invention relies on the memristance of a memristor at the input side of the device to act as a reconfigurable weight that is adjusted to realize a desired function. The invention relies on charge sharing at the output to enable the summation of signals from multiple synapses at the input node of a neuron circuit, implemented using a CMOS amplifier circuit. The combination of several memristive synapses and a neuron circuit constitutes a neuromorphic circuit capable of learning and implementing a multitude of possible functionalities. | 11-21-2013
20130311414 | LEARNING METHOD OF NEURAL NETWORK CIRCUIT - A neuron circuit in a neural network circuit element includes a waveform generating circuit for generating a predetermined pulse voltage, and a first input signal has the waveform of the predetermined pulse voltage. During a period of predetermined duration within the pulse voltage generated in the neural network circuit element containing the variable resistance element, the first input signal, applied from another neural network circuit element, is permitted to be input to the control electrode of the variable resistance element. The resistance value of the variable resistance element is thereby changed by the electric potential difference generated between the first electrode and the control electrode, which depends on the input timing of the first input signal with respect to the period during which the first input signal is permitted to be input to the control electrode. | 11-21-2013
20130318019 | METHODS AND APPARATUSES FOR MODELING SHALE CHARACTERISTICS IN WELLBORE SERVICING FLUIDS USING AN ARTIFICIAL NEURAL NETWORK - An apparatus and method for determining a formation/fluid interaction of a target formation and a target drilling fluid is described herein. The method may include training an artificial neural network using a training data set. The training data set may include a formation characteristic of a source formation and a fluid characteristic of a source drilling fluid and experimental data on source formation/fluid interaction. Once the artificial neural network is trained, a formation characteristic of the target formation and fluid characteristic of target drilling fluid may be input. The formation characteristic of the target formation may correspond to the formation characteristic of the source formation. The fluid characteristic of the target drilling fluid may correspond to the fluid characteristic of the source drilling fluid. A formation/fluid interaction of the target formation and the target drilling fluid may be determined using a value output by the artificial neural network. | 11-28-2013 |
20130325775 | DYNAMICALLY RECONFIGURABLE STOCHASTIC LEARNING APPARATUS AND METHODS - Generalized learning rules may be implemented. A framework may be used to enable an adaptive signal processing system to flexibly combine different learning rules (supervised, unsupervised, reinforcement learning) with different methods (online or batch learning). The generalized learning framework may employ an average performance function as the learning measure, thereby enabling a modular architecture where learning tasks are separated from control tasks, so that changes in one of the modules do not necessitate changes within the other. Separating learning-task implementations from control-task implementations may allow dynamic reconfiguration of the learning block in response to a task change or learning-method change in real time. The generalized learning apparatus may be capable of implementing several learning rules concurrently based on the desired control application and without requiring users to explicitly identify the required learning-rule composition for that application. | 12-05-2013
20130325776 | APPARATUS AND METHODS FOR REINFORCEMENT LEARNING IN ARTIFICIAL NEURAL NETWORKS - Neural network apparatus and methods for implementing reinforcement learning. In one implementation, the neural network is a spiking neural network, and the apparatus and methods may be used for example to enable an adaptive signal processing system to effect focused exploration by associative adaptation, including providing a negative reward signal to the network, which may increase excitability of the neurons in combination with decrease in excitability of active neurons. In certain implementations, the increase is gradual and of smaller magnitude, compared to the excitability decrease. In some implementations, the increase/decrease of the neuron excitability is effectuated by increasing/decreasing an efficacy of the respective synaptic connections delivering presynaptic inputs into the neuron. The focused exploration may be achieved for instance by non-associative potentiation configured based at least on the input spike rate. The non-associative potentiation may further comprise depression of connections that provide input in excess of a desired limit. | 12-05-2013 |
20130339280 | LEARNING SPIKE TIMING PRECISION - Certain aspects of the present disclosure provide methods and apparatus for learning or determining delays between neuron models so that the uncertainty in input spike timing is accounted for in the margin of time between a delayed pre-synaptic input spike and a post-synaptic spike. In this manner, a neural network can correctly match patterns (even in the presence of significant jitter) and correctly distinguish between different noisy patterns. One example method generally includes determining an uncertainty associated with a first pre-synaptic spike time of a first neuron model for a pattern to be learned; and determining a delay based on the uncertainty, such that the delay added to a second pre-synaptic spike time of the first neuron model results in a causal margin of time between the delayed second pre-synaptic spike time and a post-synaptic spike time of a second neuron model. | 12-19-2013 |
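The margin idea in this entry can be expressed as a one-line rule: choose the delay so the delayed pre-synaptic spike still leads the post-synaptic spike by a buffer covering the input's timing uncertainty. A sketch follows; treating the margin as k standard deviations of jitter is an assumption, not the disclosed method.

```python
def learned_delay(t_post, t_pre, jitter_std, k=3.0):
    """Pick a dendritic delay so the delayed pre-synaptic spike still
    precedes the post-synaptic spike by a causal margin that covers the
    input's timing uncertainty (k is a hypothetical safety factor)."""
    margin = k * jitter_std
    return max(0.0, (t_post - t_pre) - margin)

# Example: post spike at 25 ms, pre spike at 10 ms, 1.5 ms jitter:
d = learned_delay(25.0, 10.0, 1.5)   # 10.5 ms delay, leaving a 4.5 ms margin
```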
20140012789 | PROBLEM SOLVING BY PLASTIC NEURONAL NETWORKS - More realistic neural networks are disclosed that are able to learn to solve complex problems through a decision-making network, modeled as a virtual entity foraging in a digital environment. Specifically, the neural networks overcome many of the limitations of prior neural networks by using rewarded STDP bounded with rules to solve a complex problem. | 01-09-2014
20140025613 | APPARATUS AND METHODS FOR REINFORCEMENT LEARNING IN LARGE POPULATIONS OF ARTIFICIAL SPIKING NEURONS - Neural network apparatus and methods for implementing reinforcement learning. In one implementation, the neural network is a spiking neural network, and the apparatus and methods may be used for example to enable an adaptive signal processing system to effect network adaptation by optimized credit assignment. In certain implementations, the credit assignment may be based on a comparison between network output and individual unit contribution. The unit contribution may be determined for example using eligibility traces that may comprise pre-synaptic and/or post-synaptic activity. In certain implementations, the unit credit may be determined using correlation between rate of change of network output and eligibility trace of the unit. | 01-23-2014 |
20140032461 | SYNAPSE MAINTENANCE IN THE DEVELOPMENTAL NETWORKS - The developmental neural network is trained using a synaptic maintenance process. Synaptogenic trimming is first performed on the neuron inputs using a synaptogenic factor for each neuron based on the standard deviation of a measured match between the input and the synaptic weight value. A top-k competition among all neurons then selects a subset of said neurons as winning neurons. Neuronal learning is applied only to these winning neurons, updating their synaptic weights and their synaptogenic factors. | 01-30-2014
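The trim/compete/update loop described above might look like the following in NumPy. This is a loose sketch under stated assumptions: the trimming cutoff, the Hebbian-style weight pull, and the running-deviation update for the synaptogenic factor are all illustrative stand-ins.

```python
import numpy as np

def developmental_update(X, W, factors, k=4, lr=0.1):
    """One hypothetical step of synaptic maintenance.

    X: (n_inputs,) input vector; W: (n_neurons, n_inputs) weights;
    factors: (n_neurons, n_inputs) per-synapse synaptogenic factors.
    Synapses whose factor exceeds a cutoff are trimmed (masked) before
    the match is computed; only the top-k winners update their weights.
    """
    active = factors < 1.0                         # synaptogenic trimming
    match = (W * active) @ X                       # masked input match
    winners = np.argsort(match)[-k:]               # top-k competition
    for i in winners:                              # neuronal learning
        W[i] += lr * (X - W[i])                    # pull weights toward input
        dev = np.abs(X - W[i])
        factors[i] = 0.9 * factors[i] + 0.1 * dev  # track per-synapse deviation
    return W, factors

rng = np.random.default_rng(4)
W = rng.random((32, 16)); F = np.zeros((32, 16))
W, F = developmental_update(rng.random(16), W, F)
```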
20140032462 | SYSTEMS AND METHODS FOR AUTOCONFIGURATION OF PATTERN-RECOGNITION CONTROLLED MYOELECTRIC PROSTHESES - Embodiments of the invention provide for a prosthesis guided training system that includes a plurality of sensors for detecting electromyographic activity. A computing device, which can include a processor and memory, can extract data from the electromyographic activity. A real-time pattern recognition control algorithm and an autoconfiguring pattern recognition training algorithm can be stored in the memory. The computing device can determine movement of a prosthesis based on the execution of the real-time pattern recognition control algorithm. The computing device can also alter operational parameters of the real-time pattern recognition control algorithm based on execution of the autoconfiguring pattern recognition training algorithm. | 01-30-2014 |
20140032463 | ACCURATE AND FAST NEURAL NETWORK TRAINING FOR LIBRARY-BASED CRITICAL DIMENSION (CD) METROLOGY - Approaches for accurate neural network training for library-based critical dimension (CD) metrology are described. Approaches for fast neural network training for library-based CD metrology are also described. | 01-30-2014 |
20140052679 | APPARATUS AND METHODS FOR IMPLEMENTING EVENT-BASED UPDATES IN SPIKING NEURON NETWORKS - Event-based updates in artificial neuron networks may be implemented. An internal event may be defined in order to update incoming connections of a neuron. The internal event may be triggered by an external signal and/or internally by the neuron. A reinforcement signal may be used to trigger an internal event of a neuron in order to perform synaptic updates without necessitating post-synaptic response. An external event may be defined in order to deliver response of the neuron to desired targets. The external and internal events may be combined into a composite event configured to effectuate connection update and spike delivery to post-synaptic target. The scope of the internal event may comprise the respective neuron and does not extend to other neurons of the network. Conversely, the scope of the external event may extend to other neurons of the network via, for example, post-synaptic spike delivery. | 02-20-2014 |
20140067739 | REDUCTION OR ELIMINATION OF TRAINING FOR ADAPTIVE FILTERS AND NEURAL NETWORKS THROUGH LOOK-UP TABLE - A system and method of reducing or eliminating training for adaptive filters and neural networks is disclosed. The adaptive filter or neural network is pre-trained using simulation or empirically received data, and a look-up table is created. Coefficient instantiations from the receiver for all permutations of the key parameters of training data are stored along with the key parameters within the look-up table. After creating the look-up table, the key parameters of the signal to be decoded are estimated. The coefficients of the filter or neural network for the estimated key parameters are obtained by accessing the look-up table. The demodulated signal is produced by setting the filter or neural network coefficients to the coefficient values obtained from the look-up table. For slowly varying key parameters, the coefficients from the look-up table are occasionally replaced instead of implementing the adaptive filter or neural network. | 03-06-2014
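The offline/online split in this entry, pre-training across a grid of key parameters and then looking coefficients up at runtime, can be sketched as follows. The choice of SNR and Doppler as key parameters, the grid spacings, and the stand-in training function are all hypothetical.

```python
import numpy as np
from itertools import product

# Offline: pre-train over all permutations of the quantized key parameters
# and store the converged coefficients keyed by those parameters.
def train_coefficients(snr_db, doppler_hz):
    # Stand-in for simulation/empirical adaptation; returns filter taps.
    rng = np.random.default_rng(hash((snr_db, doppler_hz)) % (2**32))
    return rng.normal(size=8)

SNRS = range(0, 31, 5)
DOPPLERS = range(0, 201, 50)
lut = {(s, d): train_coefficients(s, d) for s, d in product(SNRS, DOPPLERS)}

# Online: estimate the key parameters of the received signal, snap to the
# nearest grid point, and load coefficients instead of re-adapting.
def lookup(snr_est, doppler_est):
    s = min(SNRS, key=lambda v: abs(v - snr_est))
    d = min(DOPPLERS, key=lambda v: abs(v - doppler_est))
    return lut[(s, d)]

taps = lookup(12.3, 88.0)   # coefficients for the estimated conditions
```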
20140081893 | STRUCTURAL PLASTICITY IN SPIKING NEURAL NETWORKS WITH SYMMETRIC DUAL OF AN ELECTRONIC NEURON - A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens; each noruen is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron. | 03-20-2014
20140081894 | PLUGGABLE MODULES IN A CASCADING LEARNING SYSTEM - A cascading learning system for semantic search is described, including the generation, training and testing of a domain-specific module for a domain-specific search. One or more input elements and output elements are specified for the domain-specific module with reference to a domain that relates these elements together through data sets that include related metadata. The related metadata may include semantic terms that are incorporated into a contextual network applicable to the domain. | 03-20-2014
20140081895 | SPIKING NEURON NETWORK ADAPTIVE CONTROL APPARATUS AND METHODS - An adaptive controller apparatus for a plant may be implemented. The controller may comprise an encoder block and a control block. The encoder may utilize a basis function kernel expansion technique to encode an arbitrary combination of inputs into spike output. The controller may comprise a spiking neuron network operable according to a reinforcement learning process. The network may receive the encoder output via a plurality of plastic connections. The process may be configured to adaptively modify connection weights in order to maximize process performance associated with a target outcome. The relevant features of the input may be identified and used for enabling the controlled plant to achieve the target outcome. | 03-20-2014
20140108315 | STRUCTURAL TO FUNCTIONAL SYNAPTIC CONVERSION - Computer-implemented methods, software, and systems for determining functional synapses from given structural touches between cells in a neuronal circuit are described. One computer-implemented method for determining functional synapses from predetermined synapses of connections between two cells in a neuronal circuit, includes determining, from the predetermined synapses, the functional synapses by leaving a portion of the connections unused, e.g. for activation by plasticity mechanisms. | 04-17-2014 |
20140114893 | LOW-POWER EVENT-DRIVEN NEURAL COMPUTING ARCHITECTURE IN NEURAL NETWORKS - A neural network includes an electronic synapse array of multiple digital synapses interconnecting a plurality of digital electronic neurons. Each synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. Each neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. A decoder receives spike events sequentially and transmits the spike events to selected axons in the synapse array. An encoder transmits spike events corresponding to spiking neurons. A controller coordinates events from the synapse array to the neurons, and signals when neurons may compute their spike events within each time step, ensuring one-to-one correspondence with an equivalent software model. The synapse array includes an interconnecting crossbar that sequentially receives spike events from axons, wherein one axon at a time drives the crossbar, and the crossbar transmits synaptic events in parallel to multiple neurons. | 04-24-2014 |
20140129498 | METHOD FOR NON-SUPERVISED LEARNING IN AN ARTIFICIAL NEURAL NETWORK BASED ON MEMRISTIVE NANODEVICES, AND ARTIFICIAL NEURAL NETWORK IMPLEMENTING SAID METHOD - An unsupervised learning method is provided implemented in an artificial neural network based on memristive devices. It consists notably in producing an increase in the conductance of a synapse when there is temporal overlap between a pre-synaptic pulse and a post-synaptic pulse and in decreasing its conductance on receipt of a post-synaptic pulse when there is no temporal overlap with a pre-synaptic pulse. | 05-08-2014 |
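The learning rule above reduces to a two-branch conductance update on each post-synaptic pulse. A minimal sketch, assuming illustrative step sizes and conductance bounds that are not taken from the filing:

```python
def update_conductance(g, overlap, dg_plus=0.01, dg_minus=0.005,
                       g_min=0.0, g_max=1.0):
    """Conductance update on arrival of a post-synaptic pulse.

    If the post-synaptic pulse temporally overlaps a pre-synaptic pulse,
    conductance increases; with no overlap it decreases.
    """
    g += dg_plus if overlap else -dg_minus
    return min(g_max, max(g_min, g))

g = 0.5
g = update_conductance(g, overlap=True)    # potentiation -> 0.51
g = update_conductance(g, overlap=False)   # depression   -> 0.505
```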
20140143193 | METHOD AND APPARATUS FOR DESIGNING EMERGENT MULTI-LAYER SPIKING NETWORKS - Certain aspects of the present disclosure support a technique for designing an emergent multi-layer spiking neural network. Parameters of the neural network can be first determined based upon desired one or more functional features of the neural network. Then, the one or more functional features can be developed towards the desired functional features as the determined parameters are further adapted, tuned and updated. The parameters can comprise at least one of time constants of neuron circuits of the neural network, time constants of synapse connections of the neural network, timing parameters of the neural network, or timing aspects of learning in the neural network. The one or more functional features can comprise at least one of feature detection in a layer of the multi-layer spiking neural network or saliency detection in another layer of the multi-layer spiking neural network. | 05-22-2014 |
20140143194 | PIECEWISE LINEAR NEURON MODELING - Methods and apparatus for piecewise linear neuron modeling and implementing artificial neurons in an artificial nervous system based on linearized neuron models. One example method for operating an artificial neuron generally includes determining that a first state of the artificial neuron is within a first region; determining a second state of the artificial neuron based at least in part on a first set of linear equations, wherein the first set of linear equations is based at least in part on a first set of parameters corresponding to the first region; determining that the second state of the artificial neuron is within a second region; and determining a third state of the artificial neuron based at least in part on a second set of linear equations, wherein the second set of linear equations is based at least in part on a second set of parameters corresponding to the second region. | 05-22-2014 |
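The region-switching scheme in this entry can be captured with a small dispatch table in which each region supplies its own linear update. In the sketch below, the two regions, their boundaries, and the (A, b) coefficients are hypothetical values chosen only to show the mechanism.

```python
import numpy as np

# Two-region piecewise linear neuron dynamics: each region supplies its own
# (A, b) for the linear state update v' = A v + b.
REGIONS = [
    {"test": lambda v: v[0] < -50.0,        # sub-threshold region
     "A": np.array([[0.9, 0.1], [0.0, 0.95]]), "b": np.array([-6.0, 0.0])},
    {"test": lambda v: v[0] >= -50.0,       # spiking/recovery region
     "A": np.array([[1.1, -0.2], [0.05, 0.9]]), "b": np.array([0.0, 0.1])},
]

def step(state):
    for r in REGIONS:
        if r["test"](state):
            return r["A"] @ state + r["b"]   # linear update for this region
    raise ValueError("state outside all regions")

state = np.array([-65.0, 0.0])               # (membrane voltage, recovery)
for _ in range(3):
    state = step(state)
```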
20140149327 | METHOD AND APPARATUS FOR MONITORING NETWORK TRAFFIC - A system is provided that collects data from monitored network traffic. The system inputs, in parallel, the data through inputs of a neural network. The system compares an output of the neural network, generated in response to the inputted data, to at least one predetermined output. If the output of the neural network corresponds to the at least one predetermined output, the system provides a notification relating to the data. | 05-29-2014
20140156576 | MEMRISTIVE NEURAL PROCESSOR UTILIZING ANTI-HEBBIAN AND HEBBIAN TECHNOLOGY - An AHaH (Anti-Hebbian and Hebbian) apparatus for use in electronic circuits. Such an AHaH apparatus can include one or more meta-stable switches, and one or more differential pairs of output electrodes, wherein each electrode among each differential pair of output electrodes can include one or more input lines coupled thereto via one or more of the meta-stable switches. | 06-05-2014
20140156577 | METHODS AND SYSTEMS FOR ARTIFICIAL COGNITION - Methods, systems and apparatus that provide for perceptual, cognitive, and motor behaviors in an integrated system implemented using neural architectures. Components of the system communicate using artificial neurons that implement neural networks. The connections between these networks form representations—referred to as semantic pointers—which model the various firing patterns of biological neural network connections. Semantic pointers can be thought of as elements of a neural vector space, and can implement a form of abstraction level filtering or compression, in which high-dimensional structures can be abstracted one or more times thereby reducing the number of dimensions needed to represent a particular structure. | 06-05-2014 |
20140172762 | Unknown - A neuromorphic system comprises a set of at least one input neuron, a set of at least one output neuron, and a synaptic network formed from a set of at least one variable-resistance memristive component, said synaptic network connecting at least one input neuron to at least one output neuron. The resistance of the at least one memristive component is adjusted by delivering to the synaptic network write pulses generated by the at least one input neuron and return pulses generated by the at least one output neuron. The characteristics of the write and return pulses are deduced from the intrinsic characteristics of the at least one memristive component, so that the combination of a write pulse and a return pulse in the at least one memristive component results in a modification of its resistance according to a learning rule chosen beforehand. | 06-19-2014
20140180987 | TIME-DIVISION MULTIPLEXED NEUROSYNAPTIC MODULE WITH IMPLICIT MEMORY ADDRESSING FOR IMPLEMENTING A NEURAL NETWORK - Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step. | 06-26-2014 |
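One way to picture the time-division multiplexing described above: a single update circuit serially integrates one axon's events at a time, then updates all neuron attributes for the step in parallel, with neuron state addressed implicitly by array index. The following sketch rests on those assumptions; the leaky-integrate rule and sizes are illustrative.

```python
import numpy as np

N, A = 64, 16
potentials = np.zeros(N)   # neuron attributes, implicitly addressed by index
weights = np.random.default_rng(5).random((A, N)) < 0.1   # axon->neuron synapses

def time_step(axon_events, threshold=3.0, leak=0.9):
    global potentials
    for axon in axon_events:              # integrate incoming firing events
        potentials += weights[axon]       # serially, one axon at a time
    potentials *= leak                    # then update all neuron attributes
    fired = potentials >= threshold       # in parallel for this time step
    potentials[fired] = 0.0
    return np.nonzero(fired)[0]           # outgoing spike events

spikes = time_step(axon_events=[2, 7, 7, 11])
```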
20140188771 | NEUROMORPHIC AND SYNAPTRONIC SPIKING NEURAL NETWORK CROSSBAR CIRCUITS WITH SYNAPTIC WEIGHTS LEARNED USING A ONE-TO-ONE CORRESPONDENCE WITH A SIMULATION - Embodiments of the invention provide neuromorphic-synaptronic systems, including neuromorphic-synaptronic circuit chips implementing spiking neural network with synaptic weights learned using simulation. One embodiment includes simulating a spiking neural network to generate synaptic weights learned via the simulation while maintaining one-to-one correspondence between the simulation and a digital circuit chip. The learned synaptic weights are loaded into the digital circuit chip implementing a spiking neural network, the digital circuit chip comprising a neuromorphic-synaptronic spiking neural network including plural synapse devices interconnecting multiple digital neurons. | 07-03-2014 |
20140222739 | APPARATUS AND METHODS FOR GATING ANALOG AND SPIKING SIGNALS IN ARTIFICIAL NEURAL NETWORKS - Apparatus and methods for universal node design implementing a universal learning rule in a mixed-signal spiking neural network. In one implementation, at one instance, the node apparatus, operable according to the parameterized universal learning model, receives a mixture of analog and spiking inputs and generates a spiking output based on the model parameter for that node that is selected by the parameterized model for that specific mix of inputs. At another instance, the same node receives a different mix of inputs, which may comprise only analog or only spiking inputs, and generates an analog output based on a different value of the node parameter that is selected by the model for the second mix of inputs. In another implementation, the node apparatus may change its output from analog to spiking responsive to a training input for the same inputs. | 08-07-2014
20140250038 | STRUCTURAL PLASTICITY IN SPIKING NEURAL NETWORKS WITH SYMMETRIC DUAL OF AN ELECTRONIC NEURON - A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens; each noruen is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron. | 09-04-2014
20140250039 | UNSUPERVISED, SUPERVISED AND REINFORCED LEARNING VIA SPIKING COMPUTATION - The present invention relates to unsupervised, supervised and reinforced learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module. | 09-04-2014 |
20140279779 | SYSTEM AND METHOD FOR DETECTING PLATFORM ANOMALIES THROUGH NEURAL NETWORKS - A system and method for detecting behavior of a computing platform that includes obtaining platform data; for each data motif identifier in a set of data motif identifiers, performing data motif detection on data in an associated timescale, wherein a first data motif identifier operates on data in a first timescale and a second data motif identifier operates on data in a second timescale, wherein the first timescale and second timescale are different; in a neural network model, synthesizing platform data anomaly detection with at least a set of feature inputs from data motif detection of the set of motif identifiers; and signaling if a platform data anomaly is detected through the neural network model. | 09-18-2014
20140289179 | ANALOG MULTIPLIER USING A MEMRISTIVE DEVICE AND METHOD FOR IMPLEMENTING HEBBIAN LEARNING RULES USING MEMRISTOR ARRAYS - A device, comprising: an array of cells, wherein the cells are arranged in columns and rows; wherein each cell comprises a memristive device; an interfacing circuit that is coupled to each cell of the array of cells; wherein the interfacing circuit is arranged to: receive or generate first variables and second variables; generate memristive device input signals that, once provided to memristive devices of the array, will cause a change in a state variable of each of the memristive devices of the cells of the array, wherein the change in the state variable of each of the memristive devices of the cells of the array reflects a product of one of the first variables and one of the second variables; provide the memristive device input signals to memristive devices of the array; and receive output signals that are a function of at least products of the first variables and the second variables. | 09-25-2014
20140297574 | PROBABILISTIC LANGUAGE MODEL IN CONTEXTUAL NETWORK - A method and apparatus for detection of relationships between objects in a meta-model semantic network are described. Semantic objects and semantic relations of a meta-model of business objects are generated from a meta-model semantic network. The semantic relations are based on connections between the semantic objects. A probability model of terminology usage in the semantic objects and the semantic relations is generated. A neural network is formed based on usage of the semantic objects, the semantic relations, and the probability model. The neural network is integrated with the semantic objects, the semantic relations, and the probability model to generate a contextual network. The generated probability model is integrated with semantic objects and neural networks to form parallel networks. | 10-02-2014
20140310220 | ELECTRONIC SYNAPSES FOR REINFORCEMENT LEARNING - Embodiments of the invention provide electronic synapse devices for reinforcement learning. An electronic synapse is configured for interconnecting a pre-synaptic electronic neuron and a post-synaptic electronic neuron. The electronic synapse comprises memory elements configured for storing a state of the electronic synapse and storing meta information for updating the state of the electronic synapse. The electronic synapse further comprises an update module configured for updating the state of the electronic synapse based on the meta information in response to an update signal for reinforcement learning. The update module is configured for updating the state of the electronic synapse based on the meta information, in response to a delayed update signal for reinforcement learning based on a learning rule. | 10-16-2014 |
20140310221 | INTERPRETABLE SPARSE HIGH-ORDER BOLTZMANN MACHINES - A method for performing structured learning for high-dimensional discrete graphical models includes estimating a high-order interaction neighborhood structure of each visible unit or a Markov blanket of each unit; once a high-order interaction neighborhood structure of each visible unit is identified, adding corresponding energy functions with respect to the high-order interaction of that unit into an energy function of High-order BM (HBM); and applying Maximum-Likelihood Estimation updates to learn the weights associated with the identified high-order energy functions. The system can effectively identify meaningful high-order interactions between input features for system output prediction, especially for early cancer diagnosis, biomarker discovery, sentiment analysis, automatic essay grading, Natural Language Processing, text summarization, document visualization, and many other data exploration problems in Big Data. | 10-16-2014 |
20140330761 | NEUROMORPHIC CHIP AND METHOD AND APPARATUS FOR DETECTING SPIKE EVENT - Disclosed are a method and an apparatus for detecting a spike event generated in a neuromorphic chip, or for transmitting the corresponding spike event information. The apparatus for detecting a spike event generated in a neuromorphic chip may detect spike event information for a plurality of neurons included in the neuromorphic chip on a neuron-group basis. | 11-06-2014
20140337261 | TRIM EFFECT COMPENSATION USING AN ARTIFICIAL NEURAL NETWORK - Systems and methods for controlling frequency output of an electronic oscillator to compensate for effects of one or more parameters experienced by the oscillator incorporate artificial neural network processing functionality for generating correction signals. A neural network processing module includes one or more neurons which receive one or more inputs corresponding to parameters of an electronic oscillator, such as temperature and control voltage (or correction voltage). One or more sets of weights are calculated and applied to inputs to the neurons of the neural network as part of a training process, wherein the weights help shape the output of the neural network processing module. The neural network may include a linear summation module configured to provide an output signal that is at least partially based on outputs of the one or more neurons. | 11-13-2014 |
20140344201 | PROVIDING TRANSPOSABLE ACCESS TO A SYNAPSE ARRAY USING COLUMN AGGREGATION - Embodiments of the invention relate to providing transposable access to a synapse array using column aggregation. One embodiment comprises a neural network including a plurality of electronic axons, a plurality of electronic neurons, and a crossbar for interconnecting the axons with the neurons. The crossbar comprises a plurality of electronic synapses. Each synapse interconnects an axon with a neuron. The neural network further comprises a column aggregation module for transposable access to one or more synapses of the crossbar using column aggregation. | 11-20-2014 |
20140344202 | NEURAL MODEL FOR REINFORCEMENT LEARNING - A neural model for reinforcement learning and action selection includes a plurality of channels, a population of input neurons in each of the channels, a population of output neurons in each of the channels, each population of input neurons in each of the channels coupled to each population of output neurons in each of the channels, and a population of reward neurons in each of the channels. Each channel of a population of reward neurons receives input from an environmental input and is coupled only to output neurons in the channel that the reward neuron is part of. If the environmental input for a channel is positive, the corresponding channel of output neurons is rewarded and has its responses reinforced; otherwise, the corresponding channel of output neurons is punished and has its responses attenuated. | 11-20-2014
20140344203 | NEURAL NETWORK COMPUTING APPARATUS AND SYSTEM, AND METHOD THEREFOR - A neural network computing apparatus and system, as well as a method therefor, are provided that operate via a synchronization circuit in which all components are synchronized with one system clock, and that include a dispersion-type memory structure for storing artificial neural network data and a calculating structure for processing all neurons through time-sharing in a pipeline circuit. The neural network computing apparatus includes a control unit for controlling the neural network computing apparatus; a plurality of memory units for outputting both a connection weight value and a neuron state value; and one calculating unit that uses the connecting line attribute value and neuron state value inputted from the plurality of memory units to calculate a new neuron state value and provide feedback to each of the plurality of memory units. | 11-20-2014
20140351190 | EFFICIENT HARDWARE IMPLEMENTATION OF SPIKING NETWORKS - Certain aspects of the present disclosure support simultaneously operating multiple super neuron processing units in an artificial nervous system, wherein a plurality of artificial neurons is assigned to each super neuron processing unit. The super neuron processing units can be interfaced with a memory for storing and loading synaptic weights and plasticity parameters of the artificial nervous system, wherein the organization of the memory allows contiguous memory access. | 11-27-2014
20140358834 | SYNAPSE CIRCUIT AND NEUROMORPHIC SYSTEM INCLUDING THE SAME - A synapse circuit to perform spike timing dependent plasticity (STDP) operation is provided. The synapse circuit includes a memristor having a resistance value, a transistor connected to the memristor, and the transistor configured to receive at least two input signals. The resistance value of the memristor is changed based on a time difference between the at least two input signals received by the transistor. | 12-04-2014 |
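Time-difference-dependent weight change of the sort this synapse circuit implements is conventionally modeled with an exponential STDP window. A standard textbook-style sketch follows; the amplitudes and time constant are illustrative, not the circuit's actual characteristics.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Weight change as a function of spike-time difference
    dt = t_post - t_pre (ms): potentiate when pre precedes post,
    depress otherwise."""
    return a_plus * np.exp(-dt / tau) if dt >= 0 else -a_minus * np.exp(dt / tau)

dw_causal = stdp_dw(5.0)       # pre 5 ms before post -> potentiation
dw_anticausal = stdp_dw(-5.0)  # pre after post       -> depression
```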
20140365416 | SYNAPSE ARRAY, PULSE SHAPER CIRCUIT AND NEUROMORPHIC SYSTEM - A synapse array based on a static random access memory (SRAM), a pulse shaper circuit, and a neuromorphic system are provided. The synapse array includes a plurality of synapse circuits. At least one synapse circuit among the plurality of synapse circuits includes at least one bias transistor and at least two cut-off transistors, and the at least one synapse circuit is configured to charge a membrane node of a neuron circuit connected with the at least one synapse circuit using a sub-threshold leakage current that passes through the at least one bias transistor. | 12-11-2014
20140365417 | APPARATUS AND METHODS FOR RATE-MODULATED PLASTICITY IN A SPIKING NEURON NETWORK - Apparatus and methods for activity based plasticity in a spiking neuron network adapted to process sensory input. In one approach, the plasticity mechanism of a connection may comprise a causal potentiation portion and an anti-causal portion. The anti-causal portion, corresponding to the input into a neuron occurring after the neuron response, may be configured based on the prior activity of the neuron. When the neuron is in low activity state, the connection, when active, may be potentiated by a base amount. When the neuron activity increases due to another input, the efficacy of the connection, if active, may be reduced proportionally to the neuron activity. Such functionality may enable the network to maintain strong, albeit inactive, connections available for use for extended intervals. | 12-11-2014 |
20140372354 | NEURITE SYSTEMS - Neurite systems, methods, and media are disclosed. An input section may be configured to receive an input voltage and amplify the input voltage by a weight into a weighted voltage rate-of-change. A firing center comprising an existing instantaneous voltage and a trigger firing voltage may be configured to, in a period of time, receive the weighted voltage rate-of-change, determine a new instantaneous voltage based on the existing instantaneous voltage, the weighted voltage rate-of-change, and the period of time, transmit a pulse trigger to an output section when the new instantaneous voltage rises to or above the trigger firing voltage, and reset the new instantaneous voltage to zero or some other predefined value. The output section may be configured to receive the pulse trigger and to transmit an output voltage pulse having a finite duration to one or more branches. | 12-18-2014
20140379625 | SPIKE TAGGING FOR DEBUGGING, QUERYING, AND CAUSAL ANALYSIS - Embodiments of the invention relate to spike tagging for a neural network. One embodiment comprises a neural network including multiple electronic neurons and a plurality of weighted synaptic connections interconnecting the neurons. An originating neuron of the neural network generates a spike event and a message tag that includes information relating to said originating neuron. A neuron of the neural network receives a spike event and a message tag from an interconnected neuron. In response to one or more received spike events, a receiving neuron spikes and sends a message tag selected from received message tags to an interconnected neuron. | 12-25-2014 |
20150019467 | FRAMEWORK FOR THE EVOLUTION OF ELECTRONIC NEURAL ASSEMBLIES TOWARD DIRECTED GOALS - Methods and systems for the evolution of electronic neural assemblies toward directed goals. A compact computing architecture includes electronics that allows users of such an architecture to create autonomous agents, in real or virtual worlds, and to add intelligence to machines. An intelligent machine is composed of four basic modules: one or more sensors, one or more motors, a Reward Input Output System (RIOS), and a cortex. A number of genetically evolved detectors can project both to the cortex and to the RIOS. At first, the neurons within the cortex evolve to predict the structure of the sensory data, followed by the structure of the proprioceptive activations of its own motor system. Finally, once the cortex has learned its sensory and motor programs, it evolves to predict the reward signals, which come in multiple channels but are dominated by the detection of the acquisition of free energy. | 01-15-2015
20150019468 | THERMODYNAMIC COMPUTING - Methods and systems for thermodynamic computing based on the attractor dynamics of volatile dissipative electronics attempting to maximize circuit power consumption. Using a general model of memristive devices based on collections of metastable switches, adaptive synaptic weights can be formed from a differential pair of memristors and modified according to anti-Hebbian and Hebbian plasticity. The arrays of synaptic weights can be employed to build a neural node circuit with attractor states that are shown to be logic functions forming a computationally complete set. By configuring the attractor states of the computational building block in different ways, high-level machine learning functions can be demonstrated for real-world applications. | 01-15-2015
20150026110 | SPIKING MODEL TO LEARN ARBITRARY MULTIPLE TRANSFORMATIONS FOR A SELF-REALIZING NETWORK - A neural network, wherein a portion of the neural network comprises: a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and a second array having a second number of neurons, wherein the second number is smaller than the first number, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array. | 01-22-2015 |
20150046382 | COMPUTED SYNAPSES FOR NEUROMORPHIC SYSTEMS - Methods and apparatus are provided for determining synapses in an artificial nervous system based on connectivity patterns. One example method generally includes determining, for an artificial neuron, an event has occurred; based on the event, determining one or more synapses with other artificial neurons based on a connectivity pattern associated with the artificial neuron; and applying a spike from the artificial neuron to the other artificial neurons based on the determined synapses. In this manner, the connectivity patterns (or parameters for determining such patterns) for particular neuron types, rather than the connectivity itself, may be stored. Using the stored information, synapses may be computed on the fly, thereby reducing memory consumption and increasing memory bandwidth. This also saves time during artificial nervous system updates. | 02-12-2015 |
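Computing synapses on the fly, as described above, amounts to regenerating a neuron's targets and weights deterministically from stored pattern parameters instead of a stored connectivity table. A sketch under that assumption; the seed, fan-out, and weight range are hypothetical.

```python
import numpy as np

def synapses_for(neuron_id, n_targets, fan_out=8, seed=1234):
    """Recompute a neuron's outgoing synapses on the fly.

    Only the pattern parameters (seed, fan-out) are stored; target
    indices and weights are regenerated deterministically whenever the
    neuron spikes, trading computation for memory bandwidth.
    """
    rng = np.random.default_rng(seed + neuron_id)
    targets = rng.choice(n_targets, size=fan_out, replace=False)
    weights = rng.uniform(0.0, 1.0, size=fan_out)
    return targets, weights

# On each spike event, the same synapses come back without any table:
t1, w1 = synapses_for(42, n_targets=1000)
t2, w2 = synapses_for(42, n_targets=1000)
assert (t1 == t2).all() and (w1 == w2).all()
```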
20150046383 | BEHAVIORAL HOMEOSTASIS IN ARTIFICIAL NERVOUS SYSTEMS USING DYNAMICAL SPIKING NEURON MODELS - Methods and apparatus are provided for implementing behavioral homeostasis in artificial neurons that use a dynamical spiking neuron model. The homeostatic mechanism may be driven by neuron state, rather than by neuron spiking rate, and this mechanism may drive changes to the neuron temporal dynamics, rather than to contributions of input or weights. As a result, certain aspects of the present disclosure are a more natural fit with spiking neural networks and have many functional and computational advantages. One example method for implementing homeostasis of an artificial nervous system generally includes determining one or more state variables of a neuron model used by an artificial neuron, based at least in part on dynamics of the neuron model; determining one or more conditions based at least in part on the state variables; and adjusting the dynamics based at least in part on the conditions. | 02-12-2015 |
20150052093 | METHODS AND APPARATUS FOR MODULATING THE TRAINING OF A NEURAL DEVICE - Methods and apparatus are provided for training a neural device having an artificial nervous system by modulating at least one training parameter during the training. One example method for training a neural device having an artificial nervous system generally includes observing the neural device in a training environment and modulating at least one training parameter based at least in part on the observing. For example, the training apparatus described herein may modify the neural device's internal learning mechanisms (e.g., spike rate, learning rate, neuromodulators, sensor sensitivity, etc.) and/or the training environment's stimuli (e.g., move a flame closer to the device, make the scene darker, etc.). In this manner, the speed with which the neural device is trained (i.e., the training rate) may be significantly increased compared to conventional neural device training systems. | 02-19-2015 |
20150052094 | POST GHOST PLASTICITY - Methods and apparatus are provided for inferring and accounting for missing post-synaptic events (e.g., a post-synaptic spike that is not associated with any pre-synaptic spikes) at an artificial neuron and adjusting spike-timing dependent plasticity (STDP) accordingly. One example method generally includes receiving, at an artificial neuron, a plurality of pre-synaptic spikes associated with a synapse, tracking a plurality of post-synaptic spikes output by the artificial neuron, and determining that at least one of the post-synaptic spikes is associated with none of the plurality of pre-synaptic spikes. According to certain aspects, inferring missing post-synaptic events may be accomplished by using a flag, counter, or other variable that is updated on post-synaptic firings. If this post-ghost variable changes between pre-synaptic-triggered adjustments, then the artificial nervous system can determine there was a missing post-synaptic pairing. | 02-19-2015
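The counter-based detection described above can be sketched directly: track post-synaptic firings, and on each pre-synaptic update compare against the count seen at the previous one. In the example below, discounting the STDP adjustment by the number of unpaired posts is an illustrative policy, not the disclosed rule.

```python
class GhostTrackingSynapse:
    """Detects post-synaptic spikes that paired with no pre-synaptic spike
    by comparing a post-spike counter across pre-triggered plasticity
    updates (a sketch of the counter idea)."""

    def __init__(self):
        self.post_count = 0
        self.post_count_at_last_pre = 0

    def on_post_spike(self):
        self.post_count += 1

    def on_pre_spike(self, dw):
        # Posts seen since the last pre-triggered update, minus the one
        # (at most) that this pre-synaptic spike legitimately pairs with:
        unpaired = max(0, self.post_count - self.post_count_at_last_pre - 1)
        self.post_count_at_last_pre = self.post_count
        return dw * (0.5 ** unpaired)   # discount for ghost posts (assumed policy)

syn = GhostTrackingSynapse()
syn.on_post_spike(); syn.on_post_spike(); syn.on_post_spike()
dw = syn.on_pre_spike(0.2)   # two unpaired posts -> 0.2 * 0.25 = 0.05
```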
20150066826 | METHODS AND APPARATUS FOR IMPLEMENTING A BREAKPOINT DETERMINATION UNIT IN AN ARTIFICIAL NERVOUS SYSTEM - Methods and apparatus are provided for using a breakpoint determination unit to examine an artificial nervous system. One example method generally includes operating at least a portion of the artificial nervous system; using the breakpoint determination unit to detect that a condition exists based at least in part on monitoring one or more components in the artificial nervous system; and at least one of suspending, examining, modifying, or flagging the operation of the at least the portion of the artificial nervous system, based at least in part on the detection. | 03-05-2015 |
20150074027 | Deep Structured Semantic Model Produced Using Click-Through Data - A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures. | 03-12-2015
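The click-through training objective summarized above is commonly formulated as a softmax over smoothed cosine similarities in the learned semantic space, maximizing the conditional likelihood of the clicked document. A minimal NumPy sketch of that objective; the smoothing factor and vector sizes are illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ranking_loss(q_vec, clicked_vec, negative_vecs, gamma=10.0):
    """Click-through training signal in the semantic space.

    Softmax over smoothed cosine similarities: minimizing the negative
    log-likelihood of the clicked document raises its similarity to the
    query and lowers that of non-clicked documents. gamma is a smoothing
    factor (value here is illustrative).
    """
    sims = np.array([cosine(q_vec, clicked_vec)] +
                    [cosine(q_vec, d) for d in negative_vecs])
    logits = gamma * sims
    log_p_clicked = logits[0] - np.log(np.exp(logits).sum())
    return -log_p_clicked

rng = np.random.default_rng(6)
loss = ranking_loss(rng.normal(size=128), rng.normal(size=128),
                    [rng.normal(size=128) for _ in range(4)])
```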
20150074028 | PROCESSING DEVICE AND COMPUTATION DEVICE - According to one embodiment, a processing device is configured to process input data formed of a plurality of input digital values. The processing device has a plurality of computation layers connected in series. Each of the computation layers has a plurality of computation devices. Each of the plurality of computation devices in the computation layer of a first stage is configured to generate a digital value from the input digital values and weight coefficients defined in advance. The weight coefficients are applied to each of the input digital values. Each of the plurality of computation devices of the computation layer of a second or subsequent stage is configured to generate a new digital value from the digital values generated by the computation devices of the computation layer of the previous stage and weight coefficients defined in advance. The weight coefficients are applied to each of the digital values. | 03-12-2015 |
20150074029 | ktRAM DESIGN - A ktRAM architecture comprising a memory wherein each input synapse or “bit” of the memory interacts with a common “dendritic” electrode, and wherein each input can be individually driven. Each input constitutes a memory cell driving the common electrode. One or more AHaH nodes can be provided wherein read-out of data is accomplished via a common summing electrode through memristive components and wherein multiple input cells are simultaneously active. | 03-12-2015 |
20150081606 | Reduction of Computation Complexity of Neural Network Sensitivity Analysis - As part of neural network sensitivity analysis, base outputs of hidden layer nodes of a neural network model for non-perturbed variables can be reused when perturbing the variables. Such an arrangement greatly reduces complexity of the calculations required to generate outputs of the model. Related apparatus, systems, techniques and articles are also described. | 03-19-2015 |
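The reuse trick above can be illustrated for a one-hidden-layer network: caching the base pre-activations means a single-variable perturbation costs one vector update rather than a full pass through the input-to-hidden weights. A minimal sketch, assuming tanh hidden units and toy weights:

```python
import numpy as np

# Sketch of the reuse idea for a one-hidden-layer network: the base
# pre-activations of the hidden nodes are cached once, and perturbing a single
# input variable j only requires adding that variable's delta contribution,
# instead of recomputing the full input-to-hidden product.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=16)  # hidden and output weights
x = rng.normal(size=8)

z_base = W1 @ x                        # cached base pre-activations (computed once)

def perturbed_output(j, delta):
    z = z_base + W1[:, j] * delta      # O(hidden) update instead of O(hidden*inputs)
    return W2 @ np.tanh(z)

# Sensitivity of the output to a small perturbation of input variable 3.
print(perturbed_output(3, 1e-3) - W2 @ np.tanh(z_base))
```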
20150081607 | IMPLEMENTING STRUCTURAL PLASTICITY IN AN ARTIFICIAL NERVOUS SYSTEM - Methods and apparatus are provided for implementing structural plasticity in an artificial nervous system. One example method for altering a structure of an artificial nervous system generally includes determining a synapse in the artificial nervous system for reassignment, determining a first artificial neuron and a second artificial neuron for connecting via the synapse, and reassigning the synapse to connect the first artificial neuron with the second artificial neuron. Another example method for operating an artificial nervous system generally includes determining a synapse in the artificial nervous system for assignment; determining a first artificial neuron and a second artificial neuron for connecting via the synapse, wherein at least one of the synapse or the first and second artificial neurons are determined randomly or pseudo-randomly; and assigning the synapse to connect the first artificial neuron with the second artificial neuron. | 03-19-2015 |
20150088796 | METHODS AND APPARATUS FOR IMPLEMENTATION OF GROUP TAGS FOR NEURAL MODELS - Certain aspects of the present disclosure support assigning neurons and/or synapses to group tags where group tags have an associated set of parameters. By using group tags, neurons or synapses in a population can be assigned a group tag. Then, by changing a parameter associated with the group tag, all synapses or neurons in the group may have that parameter changed. | 03-26-2015 |
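A plain dictionary suffices to illustrate the group-tag indirection described above: neurons store only a tag, and editing the tag's parameter set changes the whole group at once. The structure below is a hypothetical sketch, not the patent's data layout:

```python
# Sketch of group tags: parameters are looked up by tag at runtime, so editing
# a tag's parameter set changes every neuron or synapse in that group.
group_params = {
    "excitatory": {"learning_rate": 0.01, "threshold": 1.0},
    "inhibitory": {"learning_rate": 0.005, "threshold": 0.8},
}

# Each neuron stores only its tag, not its own copy of the parameters.
neurons = [{"id": i, "tag": "excitatory" if i % 2 == 0 else "inhibitory"}
           for i in range(6)]

def param(neuron, name):
    return group_params[neuron["tag"]][name]

group_params["excitatory"]["threshold"] = 1.2   # one change updates the whole group
print([param(n, "threshold") for n in neurons])
```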
20150095273 | AUTOMATED METHOD FOR MODIFYING NEURAL DYNAMICS - A method for improving neural dynamics includes obtaining prototypical neuron dynamics. The method also includes modifying parameters of a neuron model so that the neuron model matches the prototypical neuron dynamics. The neuron dynamics comprise membrane voltages and/or spike timing. | 04-02-2015 |
20150095274 | METHOD AND APPARATUS FOR PRODUCING PROGRAMMABLE PROBABILITY DISTRIBUTION FUNCTION OF PSEUDO-RANDOM NUMBERS - Certain aspects of the present disclosure provide methods and apparatus for producing a programmable probability distribution function of pseudo-random numbers that can be utilized for filtering (dropping and passing) neuron spikes. The present disclosure provides a simpler, smaller, and lower-power circuit than that typically used. It can be programmed to produce any of a variety of non-uniformly distributed sequences of numbers. These sequences can approximate true probabilistic distributions but maintain sufficient pseudo-randomness to still be considered random in a probabilistic sense. This circuit can be an integral part of a filter block within an ASIC chip emulating an artificial nervous system. | 04-02-2015 |
20150100529 | COMPILING NETWORK DESCRIPTIONS TO MULTIPLE PLATFORMS - A method of generating executable code for a target platform in a neural network includes receiving a spiking neural network description. The method also includes receiving platform-specific instructions for one or more target platforms. Further, the method includes generating executable code for the target platform(s) based on the platform-specific instructions and the network description. | 04-09-2015 |
20150100530 | METHODS AND APPARATUS FOR REINFORCEMENT LEARNING - We describe a method of reinforcement learning for a subject system having multiple states and actions to move from one state to the next. Training data is generated by operating on the system with a succession of actions and used to train a second neural network. Target values for training the second neural network are derived from a first neural network which is generated by copying weights of the second neural network at intervals. | 04-09-2015 |
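The abstract above describes the familiar target-network scheme: the second ("online") network is trained against targets computed by a frozen first network whose weights are refreshed by copying at intervals. The sketch below substitutes linear function approximators for the neural networks and uses synthetic transitions; apart from the periodic-copy mechanism, everything here is an illustrative assumption:

```python
import numpy as np

# Sketch of the two-network scheme: targets come from a frozen "first" network
# whose weights are refreshed by copying the trained "second" (online)
# network's weights at fixed intervals. Linear approximators stand in for the
# neural networks; transitions are synthetic.
rng = np.random.default_rng(0)
n_states, n_actions, gamma, lr = 4, 2, 0.9, 0.1
w_online = rng.normal(scale=0.1, size=(n_states, n_actions))  # trained network
w_target = w_online.copy()                                    # frozen copy

for step in range(1000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    r, s_next = rng.normal(), rng.integers(n_states)
    x, x_next = np.eye(n_states)[s], np.eye(n_states)[s_next]
    target = r + gamma * (x_next @ w_target).max()   # target from the frozen copy
    td_error = target - x @ w_online[:, a]
    w_online[:, a] += lr * td_error * x              # train the online network
    if step % 100 == 0:
        w_target = w_online.copy()                   # copy weights at intervals
```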
20150100531 | METHOD AND APPARATUS TO CONTROL AND MONITOR NEURAL MODEL EXECUTION REMOTELY - Aspects of the present disclosure provide methods and apparatus for remotely controlling and monitoring neural model execution (e.g., execution of the neural models described above), such as via the Internet. According to certain aspects, a client at a remote location (e.g., a webclient) may establish a connection with a server on which the neural model is running (or which is at least capable of controlling and monitoring the execution). | 04-09-2015 |
20150106314 | METHOD AND APPARATUS FOR CONSTRUCTING A DYNAMIC ADAPTIVE NEURAL NETWORK ARRAY (DANNA) - A circuit element of a multi-dimensional dynamic adaptive neural network array (DANNA) may comprise a neuron/synapse select input functional to select the circuit element to function as one of a neuron and a synapse. In one embodiment of a DANNA array of such circuit elements, (wherein a circuit element or component thereof may be analog or digital), a destination neuron may be connected to a first neuron by a first synapse in one dimension, a second destination neuron may be connected to the first neuron by a second synapse in a second dimension and, optionally, a third destination neuron may be connected to the first neuron by a third synapse. The DANNA may thus form multiple levels of neuron and synapse circuit elements. In one embodiment, multiples of eight inputs may be selectively received by the circuit element selectively functioning as one of a neuron and a synapse. The dynamic adaptive neural network array (DANNA) may comprise a special purpose processor for performing one of a control, anomaly detection and classification application and may comprise a first structure connected to a neuroscience-inspired dynamic artificial neural network (NIDA), comprise substructures thereof or be combined with other neural networks. | 04-16-2015 |
20150106315 | METHOD AND APPARATUS FOR PROVIDING RANDOM SELECTION AND LONG-TERM POTENTIATION AND DEPRESSION IN AN ARTIFICIAL NETWORK - A digital circuit element of a two dimensional dynamic adaptive neural network array (DANNA) may comprise a neuron/synapse select input functional to select the digital circuit element to function as one of a neuron and a synapse. In one embodiment of a DANNA array of such digital circuit elements, a destination neuron may be connected to a first neuron by a first synapse in one dimension, a second destination neuron may be connected to the first neuron by a second synapse in a second dimension and, optionally, a third destination neuron may be connected to the first neuron by a third synapse thus forming multiple levels of neuron and synapse digital circuit elements. In one embodiment, multiples of eight inputs may be selectively received by the digital circuit element selectively functioning as one of a neuron and a synapse. The dynamic adaptive neural network array (DANNA) may implement long-term potentiation or depression to facilitate learning through the use of an affective system and random selection of input events. | 04-16-2015 |
20150112908 | DYNAMICALLY ASSIGNING AND EXAMINING SYNAPTIC DELAY - A method for dynamically modifying synaptic delays in a neural network includes initializing a delay parameter and operating the neural network. The method further includes dynamically updating the delay parameter based on a program statement that includes the delay parameter. | 04-23-2015 |
20150120628 | DOPPLER EFFECT PROCESSING IN A NEURAL NETWORK MODEL - A method of frequency discrimination associated with the Doppler effect is presented. The method includes mapping a first signal to a first plurality of frequency bins and a second signal to a second plurality of frequency bins. The first signal and the second signal correspond to different times. The method also includes firing a first plurality of neurons based on contents of the first plurality of frequency bins and firing a second plurality of neurons based on contents of the second plurality of frequency bins. | 04-30-2015 |
20150120629 | NEURON LEARNING TYPE INTEGRATED CIRCUIT DEVICE - According to one embodiment, a neuron learning type integrated circuit device includes neuron cell units. Each of the neuron cell units includes synapse circuit units, and a soma circuit unit connected to the synapse circuit units. Each of the synapse circuit units includes a first transistor including a first terminal, a second terminal, and a first control terminal, a second transistor including a third terminal, a fourth terminal, and a second control terminal, a first condenser, one end of the first condenser being connected between the second and third terminals, and a control line connected to the first and second control terminals. The soma circuit unit includes a Zener diode including an input terminal and an output terminal, the input terminal being connected to the fourth terminal, and a second condenser, one end of the second condenser being connected between the fourth terminal and the input terminal. | 04-30-2015 |
20150120630 | NONLINEAR PARAMETER VARYING (NPV) MODEL IDENTIFICATION METHOD - The invention discloses an identification method for nonlinear parameter varying (NPV) models and belongs to the field of industrial identification. The method carries out identification tests and model identification for an identified object with nonlinear parameter-varying characteristics. First, each multi-input single-output NPV model is identified through local nonlinear model tests, local nonlinear model identification, and operating-point variable transition tests; after all the multi-input single-output NPV models for all the controlled variables have been identified, the complete multi-input multi-output NPV models are built. The NPV models of an identified object can be obtained by the identification method of the present invention with limited input/output data and without detailed mechanistic knowledge of the identified object. The obtained NPV models can be used in model-based control algorithm design and process simulation, as well as in product-quality prediction reasoning models and soft sensors. | 04-30-2015 |
20150120631 | Method and System for Converting Pulsed-Processing Neural Network with Instantaneous Integration Synapses into Dynamic Integration Synapses - The invention solves the technical problem of producing synaptic contributions as if dynamic synapses were used, while using the simplest neural circuits possible, such as instantaneous integration synapses. In this way, each neuron is capable of making the correct decision, thereby allowing correct recognition on the part of the neural network. For this purpose, each input pulse is replaced with a train of “r” pulses and the “weight” value is attenuated by a value in the vicinity of “r”. The “r” pulses are spaced apart by a characteristic time; the spacing may or may not be equidistant. Consequently, if a front of simultaneous pulses arrives at the neuron, originating from multiple neurons in the preceding layer, the trains of pulses are interleaved with one another and they all contribute to the decision of the neuron as to whether or not it should be activated. | 04-30-2015 |
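The pulse-train conversion above is easy to state in code: each input pulse becomes r pulses with a characteristic spacing, and the synaptic weight is divided by roughly r so the total contribution is preserved. A minimal sketch, with assumed values for r and the spacing:

```python
# Sketch of the conversion: each incoming pulse is replaced by a train of r
# pulses with a characteristic spacing, and the synaptic weight is attenuated
# by a value in the vicinity of r. Values of r and spacing are illustrative.
def to_pulse_train(pulse_times, weight, r=4, spacing=1e-3):
    """Expand each input pulse into r pulses and attenuate the weight."""
    train = [t + k * spacing for t in pulse_times for k in range(r)]
    return sorted(train), weight / r

pulses, w = to_pulse_train([0.0, 0.010], weight=0.8, r=4)
print(pulses)  # interleaved trains from simultaneous fronts sum to ~original drive
print(w)       # 0.2
```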
20150134580 | Method And System For Training A Neural Network - A method and system for training a neural network is disclosed herein. A processor is configured to train a neural network to learn to generate a plurality of sub-concept outputs from a first plurality of inputs of the plurality of digital input signals. The processor is also configured to use the plurality of sub-concept outputs as a plurality of target outputs for a plurality of top-level inputs of the plurality of digital input signals. | 05-14-2015 |
20150134581 | METHOD FOR TRAINING AN ARTIFICIAL NEURAL NETWORK - Method of training an artificial neural network, comprising at least one layer with input neurons and one output layer with output neurons which are adapted differently from the input neurons. | 05-14-2015 |
20150134582 | IMPLEMENTING SYNAPTIC LEARNING USING REPLAY IN SPIKING NEURAL NETWORKS - Aspects of the present disclosure relate to methods and apparatus for training an artificial nervous system. According to certain aspects, the timing of spikes of an artificial neuron during a training iteration is recorded, the spikes of the artificial neuron are replayed according to the recorded timing during a subsequent training iteration, and parameters associated with the artificial neuron are updated based, at least in part, on the subsequent training iteration. | 05-14-2015 |
20150134583 | LEARNING APPARATUS, LEARNING PROGRAM, AND LEARNING METHOD - A learning apparatus performs a learning process for a feed-forward multilayer neural network with supervised learning. The network includes an input layer, an output layer, and at least one hidden layer having at least one probing neuron that does not transfer an output to an uppermost layer side of the network. The learning apparatus includes a learning unit and a layer quantity adjusting unit. The learning unit performs a learning process by calculation of a cost derived by a cost function defined in the multilayer neural network using a training data set for supervised learning. The layer quantity adjusting unit removes at least one uppermost layer from the network based on the cost derived by the output from the probing neuron, and sets, as the output layer, the probing neuron in the uppermost layer of the remaining layers. | 05-14-2015 |
20150294219 | PARALLELIZING THE TRAINING OF CONVOLUTIONAL NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a convolutional neural network (CNN). The system includes a plurality of workers, wherein each worker is configured to maintain a respective replica of each of the convolutional layers of the CNN and a respective disjoint partition of each of the fully-connected layers of the CNN, wherein each replica of a convolutional layer includes all of the nodes in the convolutional layer, and wherein each disjoint partition of a fully-connected layer includes a portion of the nodes of the fully-connected layer. | 10-15-2015 |
20150302295 | GLOBALLY ASYNCHRONOUS AND LOCALLY SYNCHRONOUS (GALS) NEUROMORPHIC NETWORK - Embodiments of the invention relate to a globally asynchronous and locally synchronous neuromorphic network. One embodiment comprises generating a synchronization signal that is distributed to a plurality of neural core circuits. In response to the synchronization signal, in at least one core circuit, incoming spike events maintained by said at least one core circuit are processed to generate an outgoing spike event. Spike events are asynchronously communicated between the core circuits via a routing fabric comprising multiple asynchronous routers. | 10-22-2015 |
20150302296 | PLASTIC ACTION-SELECTION NETWORKS FOR NEUROMORPHIC HARDWARE - A neural model for reinforcement-learning and for action-selection includes a plurality of channels, a population of input neurons in each of the channels, a population of output neurons in each of the channels, each population of input neurons in each of the channels coupled to each population of output neurons in each of the channels, and a population of reward neurons in each of the channels. Each channel of the population of reward neurons receives an environmental input and is coupled only to output neurons in the channel that the reward neuron is part of. If the environmental input for a channel is positive, the output neurons in the corresponding channel are rewarded and have their responses reinforced; otherwise, the output neurons in the corresponding channel are punished and have their responses attenuated. | 10-22-2015 |
20150310329 | SYSTEMS AND METHODS FOR COMBINING STOCHASTIC AVERAGE GRADIENT AND HESSIAN-FREE OPTIMIZATION FOR SEQUENCE TRAINING OF DEEP NEURAL NETWORKS - A method for training a deep neural network (DNN), comprises receiving and formatting speech data for the training, performing Hessian-free sequence training (HFST) on a first subset of a plurality of subsets of the speech data, and iteratively performing the HFST on successive subsets of the plurality of subsets of the speech data, wherein iteratively performing the HFST comprises reusing information from at least one previous iteration. | 10-29-2015 |
20150317557 | TEMPORAL SPIKE ENCODING FOR TEMPORAL LEARNING - Certain aspects of the present disclosure support methods and apparatus for temporal spike encoding for temporal learning in an artificial nervous system. The temporal spike encoding for temporal learning can comprise obtaining sensor data being input into the artificial nervous system, processing the sensor data to generate feature vectors, converting element values of the feature vectors into delays, and causing at least one artificial neuron of the artificial nervous system to spike at times based on the delays. | 11-05-2015 |
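The conversion of feature values into delays might look like the following sketch, which assumes a simple linear value-to-delay mapping (larger values spike earlier); the mapping and the t_max window are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

# Sketch of the encoding step: larger feature values map to shorter delays,
# so salient features spike earlier. The linear mapping and t_max window are
# assumptions for illustration.
def values_to_spike_times(feature_vec, t_max=0.1):
    v = np.asarray(feature_vec, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # normalize to [0, 1]
    return (1.0 - v) * t_max                         # strong features fire first

print(values_to_spike_times([0.2, 0.9, 0.5]))        # spike times in seconds
```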
20150324686 | DISTRIBUTED MODEL LEARNING - A method of learning a model includes receiving model updates from one or more users. The method also includes computing an updated model based on a previous model and the model updates. The method further includes transmitting data related to a subset of the updated model to the user(s) based on the updated model. | 11-12-2015 |
20150324690 | Deep Learning Training System - Training large neural network models by providing training input to model training machines organized as multiple replicas that asynchronously update a shared model via a global parameter server is described herein. In at least one embodiment, a system including a model module storing a portion of a model and a deep learning training module that communicates with the model module are configured for asynchronously sending updates to shared parameters associated with the model. The techniques herein describe receiving and processing a batch of data items to calculate updates. Replicas of training machines communicate asynchronously with a global parameter server to provide updates to a shared model and return updated weight values. The model may be modified to reflect the updated weight values. The techniques described herein include computation and communication optimizations that improve system efficiency and scaling of large neural networks. | 11-12-2015 |
20150324691 | NEURAL NETWORK CONNECTIONS USING NONVOLATILE MEMORY DEVICES - A system includes a plurality of nonvolatile memory cells and a map that assigns connections between nodes of a neural network to the memory cells. Memory devices containing nonvolatile memory cells and applicable circuitry for reading and writing may operate with the map. Information stored in the memory cells can represent weights of the connections. One or more neural processors can be present and configured to implement the neural network. | 11-12-2015 |
20150347899 | CORTICAL PROCESSING WITH THERMODYNAMIC RAM - A thermodynamic RAM apparatus includes a physical substrate of addressable adaptive synapses that are temporarily partitioned to emulate adaptive neurons of arbitrary sizes, wherein the physical substrate mates electronically with a digital computing platform for high-throughput and low-power neuromorphic adaptive learning applications. The addressable adaptive synapses of the physical substrate can be configured as part of a memristor-based physical neural processing unit. | 12-03-2015 |
20150363689 | Organizing Neural Networks - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for organizing trained and untrained neural networks. In one aspect, a neural network device includes a collection of node assemblies interconnected by between-assembly links, each node assembly itself comprising a network of nodes interconnected by a plurality of within-assembly links, wherein each of the between-assembly links and the within-assembly links have an associated weight, each weight embodying a strength of connection between the nodes joined by the associated link, the nodes within each assembly being more likely to be connected to other nodes within that assembly than to be connected to nodes within others of the node assemblies. | 12-17-2015 |
20150371131 | EVENT-DRIVEN UNIVERSAL NEURAL NETWORK CIRCUIT - The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network. | 12-24-2015 |
20150379398 | SCALABLE NEURAL HARDWARE FOR THE NOISY-OR MODEL OF BAYESIAN NETWORKS - Embodiments of the invention relate to a scalable neural hardware for the noisy-OR model of Bayesian networks. One embodiment comprises a neural core circuit including a pseudo-random number generator for generating random numbers. The neural core circuit further comprises a plurality of incoming electronic axons, a plurality of neural modules, and a plurality of electronic synapses interconnecting the axons to the neural modules. Each synapse interconnects an axon with a neural module. Each neural module receives incoming spikes from interconnected axons. Each neural module represents a noisy-OR gate. Each neural module spikes probabilistically based on at least one random number generated by the pseudo-random number generator unit. | 12-31-2015 |
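The probabilistic spiking of a noisy-OR module can be sketched directly from its definition: with per-axon cause probabilities p_i for the currently active axons, the module fires with probability 1 - prod(1 - p_i), decided against a pseudo-random draw. A minimal sketch, with numpy's generator standing in for the core's pseudo-random number generator:

```python
import numpy as np

# Sketch of a noisy-OR neural module: given the per-axon "cause" probabilities
# of the currently spiking axons, the module fires with probability
# 1 - prod(1 - p_i), decided against a pseudo-random number.
rng = np.random.default_rng(42)   # stands in for the core's pseudo-random generator

def noisy_or_spike(active_probs):
    p_fire = 1.0 - np.prod([1.0 - p for p in active_probs])
    return rng.random() < p_fire

# Incoming spikes on three axons with cause probabilities 0.3, 0.5, 0.2:
print(noisy_or_spike([0.3, 0.5, 0.2]))   # True with probability 0.72
```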
20160004960 | UNIT HAVING AN ARTIFICIAL NEURON AND A MEMRISTOR - An artificial neuron unit comprising one artificial neuron having at least one output port and at least one input port, and one memristor having two terminals; said unit being characterized in that it also comprises at least one current conveyor having two input ports X and Y, and one output port Z; and in which said memristor is connected by one of its terminals to the input port X of said current conveyor, said current conveyor is connected by its output port Z to an input port of said artificial neuron and said artificial neuron is connected by one of its output ports to the input port Y of said current conveyor or to another of said terminals of said memristor. | 01-07-2016 |
20160004963 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus including inter-class node insertion means for inserting an input vector into a network as an inter-class insertion node. The apparatus further includes winner node learning time calculation means for incrementing, when an edge is connected between a first winner node and a second winner node, the learning time of the first winner node by a predetermined value. The apparatus includes load balancing means for: detecting, for each predetermined cycle according to the total number of input vectors, a node whose learning time value is relatively large and unbalanced; inserting a new node into a position near the detected node and its adjacent node; reducing the learning time of the detected node and of its adjacent node; deleting the edge between the detected node and its adjacent node; connecting an edge between the newly inserted node and the detected node; and connecting an edge between the newly inserted node and the adjacent node of the detected node. | 01-07-2016 |
20160004964 | NEUROMORPHIC SYSTEM AND METHOD FOR OPERATING THE SAME - A neuromorphic system includes: an unsupervised learning hardware device configured to perform learning in an unsupervised manner, the unsupervised learning hardware device performing grouping on input signals; and a supervised learning hardware device configured to perform learning in a supervised manner with labeled values, the supervised learning hardware device performing clustering on input signals. | 01-07-2016 |
20160012330 | NEURAL NETWORK AND METHOD OF NEURAL NETWORK TRAINING | 01-14-2016 |
20160019454 | J. Patrick's Ladder: A Machine Learning Enhancement Tool - The invention is an add-on implementation of a stabilized association memory matrix system to an existing convolutional neural network framework. This invention emulates the intra-action and the inter-action of the cognitive processes of the (logical) left brain and (intuitive) right brain. The invention is a numerically stable, software-based implementation that (1) reduces the long training times, (2) reduces the execution time, and (3) produces intralayer and interlayer connections. The implementation of this joint processing architecture is designed to take an existing hierarchy of step-based processes, add next to it a parallel hierarchy of associative memory processes, and then connect the two by another set of associative memory processes. Alternatively, the step-based process may be replaced with additional associative memory processes to enhance the emulation of several bidirectional intralayer and interlayer cognitive process communications. In addition, the invention can be used as a neural network layer compression tool that takes in a multilayer perceptron, also known as a multilayer neural network, and outputs a single-layer perceptron. The final construction can be visualized as two vertical rails connected with a set of horizontal rungs, which motivates the name of this invention: J. Patrick's Ladder: A Machine Learning Enhancement Tool. | 01-21-2016 |
20160019457 | METHOD AND A SYSTEM FOR CREATING DYNAMIC NEURAL FUNCTION LIBRARIES - A method for creating a dynamic neural function library that relates to Artificial Intelligence systems and devices is provided. Within a dynamic neural network (artificial intelligent device), a plurality of control values are autonomously generated during a learning process and stored in synaptic registers of the artificial intelligent device; they represent a training model of a task or a function learned by the artificial intelligent device. Control values include, but are not limited to, values that indicate the neurotransmitter level present in the synapse, the neurotransmitter type, the connectome, the neuromodulator sensitivity, and other synaptic, dendritic delay, and axonal delay parameters. These values collectively form a training model. Training models are stored in the dynamic neural function library of the artificial intelligent device. The artificial intelligent device copies the function library to an electronic data processing device memory that is reusable to train another artificial intelligent device. | 01-21-2016 |
20160026912 | WEIGHT-SHIFTING MECHANISM FOR CONVOLUTIONAL NEURAL NETWORKS - A processor includes a processor core and a calculation circuit. The processor core includes logic to determine a set of weights for use in a convolutional neural network (CNN) calculation and scale up the weights using a scale value. The calculation circuit includes logic to receive the scale value, the set of weights, and a set of input values, wherein each input value and associated weight are of a same fixed size. The calculation circuit also includes logic to determine results from convolutional neural network (CNN) calculations based upon the set of weights applied to the set of input values, scale down the results using the scale value, truncate the scaled-down results to the fixed size, and communicatively couple the truncated results to an output for a layer of the CNN. | 01-28-2016 |
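The scale-up/compute/scale-down/truncate flow described above is essentially fixed-point weight conditioning. The sketch below mimics it in floating point with an assumed power-of-two scale value and fractional width; it is an illustration of the idea, not the processor's actual datapath:

```python
import numpy as np

# Sketch of the weight-shifting idea: small weights are scaled up before the
# fixed-size multiply-accumulate so they survive quantization, and the results
# are scaled back down and truncated afterwards. The power-of-two scale and
# fractional width are assumptions for illustration.
def scaled_conv_dot(weights, inputs, scale=2**16, frac_bits=8):
    w_scaled = np.round(weights * scale)            # scale up, then quantize
    acc = w_scaled @ inputs                         # fixed-point-style MAC
    result = acc / scale                            # scale the results back down
    return np.floor(result * 2**frac_bits) / 2**frac_bits  # truncate to fixed size

w = np.array([0.0012, -0.0034, 0.0021])
x = np.array([1.5, -2.0, 0.5])
print(scaled_conv_dot(w, x), w @ x)   # truncated result vs. full precision
```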
20160026913 | NEURAL NETWORK TRAINING METHOD AND APPARATUS, AND DATA PROCESSING APPARATUS - A neural network training method based on training data, includes receiving training data including sequential data, and selecting a reference hidden node from hidden nodes in a neural network. The method further includes training the neural network based on remaining hidden nodes obtained by excluding the reference hidden node from the hidden nodes, and based on the training data, the remaining hidden nodes being connected with hidden nodes in a different time interval, and a connection between the reference hidden node and the hidden nodes in the different time interval being ignored. | 01-28-2016 |
20160026914 | DISCRIMINATIVE PRETRAINING OF DEEP NEURAL NETWORKS - Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively. | 01-28-2016 |
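The layer-wise growth procedure above can be sketched end to end with a tiny numpy MLP: train a one-hidden-layer network with backpropagation, discard its output layer, stack a fresh randomly initialized hidden layer plus a new output layer, and discriminatively retrain the whole stack, repeating until the desired depth. Sizes, learning rate, and epoch counts below are arbitrary assumptions:

```python
import numpy as np

# Compact sketch of layer-wise discriminative pretraining: grow the network
# one hidden layer at a time, replacing the output layer before each growth
# step and retraining the whole stack with error back-propagation.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))                       # toy inputs
y = (X.sum(axis=1) > 0).astype(int)                 # toy binary labels
Y = np.eye(2)[y]                                    # one-hot targets

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(hidden_Ws, W_out, epochs=200, lr=0.5):
    """Discriminative BP through all current layers (biases omitted)."""
    for _ in range(epochs):
        acts = [X]
        for W in hidden_Ws:                         # forward through hidden layers
            acts.append(sigmoid(acts[-1] @ W))
        P = softmax(acts[-1] @ W_out)
        delta = (P - Y) / len(X)                    # cross-entropy output gradient
        grad_out = acts[-1].T @ delta
        back = delta @ W_out.T
        for i in reversed(range(len(hidden_Ws))):
            d = back * acts[i + 1] * (1 - acts[i + 1])
            back = d @ hidden_Ws[i].T
            hidden_Ws[i] -= lr * acts[i].T @ d
        W_out -= lr * grad_out
    return hidden_Ws, W_out

hidden_Ws, width = [], 16
for depth in range(3):                              # grow to three hidden layers
    fan_in = 10 if not hidden_Ws else width
    hidden_Ws.append(rng.normal(scale=0.3, size=(fan_in, width)))
    W_out = rng.normal(scale=0.3, size=(width, 2))  # fresh output layer each time
    hidden_Ws, W_out = train(hidden_Ws, W_out)      # old output layer is discarded
```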
20160034808 | HARDWARE ARCHITECTURE FOR SIMULATING A NEURAL NETWORK OF NEURONS - Embodiments of the invention relate to a neural network system for simulating neurons of a neural model. One embodiment comprises a memory device that maintains neuronal states for multiple neurons, a lookup table that maintains state transition information for multiple neuronal states, and a controller unit that manages the memory device. The controller unit updates a neuronal state for each neuron based on incoming spike events targeting said neuron and state transition information corresponding to said neuronal state. | 02-04-2016 |
20160034812 | LONG SHORT-TERM MEMORY USING A SPIKING NEURAL NETWORK - A method for configuring long short-term memory (LSTM) in a spiking neural network includes decoding input spikes into analog values within the LSTM. The method further includes implementing the LSTM based on an encoded representation of the analog values. The implementing can include encoding the analog values using base expansive coding, rate coding, latency coding or synaptic weight coding. | 02-04-2016 |
20160042271 | ARTIFICIAL NEURONS AND SPIKING NEURONS WITH ASYNCHRONOUS PULSE MODULATION - A method for configuring an artificial neuron includes receiving a set of input spike trains comprising asynchronous pulse modulation coding representations. The method also includes generating output spikes representing a similarity between the set of input spike trains and a spatial-temporal filter. | 02-11-2016 |
20160063373 | HAPTIC-BASED ARTIFICIAL NEURAL NETWORK TRAINING - In a method for training an artificial neural network based algorithm designed to monitor a first device, a processor receives first data. A processor determines a first service action recommendation for a first device using the received first data and an artificial neural network (ANN) algorithm. A processor causes a second device to provide haptic feedback using the received first data. A processor receives a second service action recommendation for the first device based on the haptic feedback. A processor adjusts at least one parameter of the ANN algorithm such that the ANN algorithm determines a third service action recommendation for the first device using the received first data, wherein the third service action recommendation is equivalent to the second service action recommendation. | 03-03-2016 |
20160071005 | EVENT-DRIVEN TEMPORAL CONVOLUTION FOR ASYNCHRONOUS PULSE-MODULATED SAMPLED SIGNALS - A method of processing asynchronous event-driven input samples of a continuous time signal, includes calculating a convolutional output directly from the event-driven input samples. The convolutional output is based on an asynchronous pulse modulated (APM) encoding pulse. The method further includes interpolating output between events. | 03-10-2016 |
20160071007 | Methods and Systems for Radial Basis Function Neural Network With Hammerstein Structure Based Non-Linear Interference Management in Multi-Technology Communications Devices - The various embodiments include methods and apparatuses for canceling nonlinear interference during concurrent communication of multi-technology wireless communication devices. Nonlinear interference may be estimated using a radial basis function neural network with Hammerstein structure by executing a radial basis function on aggressor signals at a hidden layer of the network to obtain hidden layer outputs, augmenting the aggressor signal(s) by weight factors, and executing a linear combination of the augmented outputs at an intermediate layer to produce combined hidden layer outputs. At an output layer, a linear filter function may be executed on the hidden layer outputs to produce an estimated nonlinear interference used to cancel the nonlinear interference of a victim signal. | 03-10-2016 |
20160071008 | Methods and Systems for Multi-Model Radial Basis Function Neural Network Based Non-Linear Interference Management in Multi-Technology Communication Devices - The various embodiments include methods and apparatuses for canceling nonlinear interference during concurrent communication of multi-technology wireless communication devices. Nonlinear interference may be estimated using a multi-model radial basis function neural network with Hammerstein structure by executing a radial basis function on aggressor signals at a hidden layer of the network to obtain hidden layer outputs, augmenting the aggressor signal(s) by weight factors, infusing the hidden layer outputs by infusion factors, and executing a linear combination of the augmented outputs at an intermediate layer to produce combined hidden layer outputs. At an output layer, a linear filter function may be executed on the hidden layer outputs to produce an estimated nonlinear interference used to cancel the nonlinear interference of a victim signal. | 03-10-2016 |
20160071009 | Methods and Systems for Banked Radial Basis Function Neural Network Based Non-Linear Interference Management for Multi-Technology Communication Devices - The various embodiments include methods and apparatuses for canceling nonlinear interference during concurrent communication of multi-technology wireless communication devices. Nonlinear interference may be estimated using a radial basis function neural network with Hammerstein structure by executing a radial basis function on aggressor signals at a hidden layer of the network to obtain hidden layer outputs, augmenting the aggressor signal(s) by weight factors, and executing a linear combination of the augmented outputs at an intermediate layer to produce combined hidden layer outputs. At an output layer, a linear filter function may be executed on the hidden layer outputs to produce an estimated nonlinear interference used to cancel the nonlinear interference of a victim signal. | 03-10-2016 |
20160071010 | Data Category Identification Method and Apparatus Based on Deep Neural Network - A deep neural network to which data category information is added is established locally, to-be-identified data is input to an input layer of the deep neural network generated based on the foregoing data category information, and information of the category to which the to-be-identified data belongs is acquired, where the category information is output by an output layer of the deep neural network. Because the deep neural network is established based on data category information, category information of to-be-identified data can be obtained conveniently and rapidly using the deep neural network, thereby implementing a category identification function of the deep neural network and facilitating discovery of underlying patterns in the to-be-identified data according to its category information. | 03-10-2016 |
20160071019 | NETWORK-PROBABILITY RECOMMENDATION SYSTEM - A method/apparatus/system for generating a recommendation based on user interactions with nodes and associated tasks within a prerequisite graph. The recommendation is generated by identifying the user's current position within the prerequisite graph and identifying potential next nodes to which the user could move. Based on the user's past interactions with nodes and/or tasks within the prerequisite graph, the user's likelihood of successfully completing the potential next node is calculated, and a recommendation is made based on this calculated likelihood of the user successfully completing the potential next node. | 03-10-2016 |
20160086075 | CONVERTING DIGITAL NUMERIC DATA TO SPIKE EVENT DATA - One embodiment of the invention provides a system comprising at least one data-to-spike converter unit for converting input numeric data received by the system to spike event data. Each data-to-spike converter unit is configured to support one or more spike codes. | 03-24-2016 |
20160086076 | CONVERTING SPIKE EVENT DATA TO DIGITAL NUMERIC DATA - One embodiment of the invention provides a system comprising at least one spike-to-data converter unit for converting spike event data generated by neurons to output numeric data. Each spike-to-data converter unit is configured to support one or more spike codes. | 03-24-2016 |
20160086080 | METHOD AND/OR SYSTEM FOR MAGNETIC LOCALIZATION - A method of real time magnetic localization comprising: providing an artificial neural network field model that is calibrated and optimized for a predetermined magnet; receiving signals from one or more magnetic sensors; and solving for the location of the magnet using the model based on the signals. | 03-24-2016 |
20160092765 | Tool for Investigating the Performance of a Distributed Processing System - A performance investigation tool (PIT) is described herein for investigating the performance of a distributed processing system (DPS). The PIT operates by first receiving input information that describes a graph processing task to be executed using a plurality of computing units. The PIT then determines, based on the input information, at least one time-based performance measure that describes the performance of a DPS that is capable of performing the graph processing task. More specifically, the PIT can operate in a manual mode to explore the behavior of a specified DPS, or in an automatic mode to find an optimal DPS from within a search space of candidate DPSs. A configuration system may then be used to construct a selected DPS, using the plurality of computing units. In one case, the graph processing task involves training a deep neural network model having a plurality of layers. | 03-31-2016 |
20160092767 | Apparatus and method for learning a model corresponding to time-series input data - A dynamic time-evolution Boltzmann machine capable of learning is provided. Aspects include acquiring time-series input data and supplying a plurality of input values of input data of the time-series input data at one time point to a plurality of nodes of the model. Aspects also include computing, based on an input data sequence before the one time point in the time-series input data and a weight parameter between each of a plurality of input values of input data of the input data sequence and a corresponding one of the plurality of nodes of the model, a conditional probability of the input value at the one time point given that the input data sequence has occurred. Aspects further include adjusting the weight parameter so as to increase a conditional probability of occurrence of the input data at the one time point given that the input data sequence has occurred. | 03-31-2016 |
20160098632 | TRAINING NEURAL NETWORKS ON PARTITIONED TRAINING DATA - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes obtaining partitioned training data for the neural network, wherein the partitioned training data comprises a plurality of training items each of which is assigned to a respective one of a plurality of partitions, wherein each partition is associated with a respective difficulty level; and training the neural network on each of the partitions in a sequence from a partition associated with an easiest difficulty level to a partition associated with a hardest difficulty level, wherein, for each of the partitions, training the neural network comprises: training the neural network on a sequence of training items that includes training items selected from the training items in the partition interspersed with training items selected from the training items in all of the partitions. | 04-07-2016 |
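The data-scheduling part of this method is independent of the model being trained, so it can be shown on its own: iterate over partitions from easiest to hardest and, within each, intersperse items drawn from that partition with items drawn from all partitions. The mix ratio below is an illustrative assumption:

```python
import random

# Sketch of the training-data schedule: partitions are ordered from easiest to
# hardest, and the sequence for each partition mixes its own items with items
# drawn from all partitions. The 3:1 mix ratio is an assumption.
random.seed(0)
partitions = {                       # difficulty level -> training items
    "easy":   [f"easy_{i}" for i in range(4)],
    "medium": [f"med_{i}" for i in range(4)],
    "hard":   [f"hard_{i}" for i in range(4)],
}
all_items = [x for items in partitions.values() for x in items]

def schedule(order=("easy", "medium", "hard"), steps=8, mix_every=4):
    seq = []
    for level in order:              # easiest first, hardest last
        for step in range(steps):
            if step % mix_every == mix_every - 1:
                seq.append(random.choice(all_items))        # interspersed item
            else:
                seq.append(random.choice(partitions[level]))
    return seq

print(schedule())
```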
20160098633 | DEEP LEARNING MODEL FOR STRUCTURED OUTPUTS WITH HIGH-ORDER INTERACTION - Methods and systems for training a neural network include pre-training a bi-linear, tensor-based network, separately pre-training an auto-encoder, and training the bi-linear, tensor-based network and auto-encoder jointly. Pre-training the bi-linear, tensor-based network includes calculating high-order interactions between an input and a transformation to determine a preliminary network output and minimizing a loss function to pre-train network parameters. Pre-training the auto-encoder includes calculating high-order interactions of a corrupted real network output, determining an auto-encoder output using high-order interactions of the corrupted real network output, and minimizing a loss function to pre-train auto-encoder parameters. | 04-07-2016 |
20160110640 | TIME-DIVISION MULTIPLEXED NEUROSYNAPTIC MODULE WITH IMPLICIT MEMORY ADDRESSING FOR IMPLEMENTING A NEURAL NETWORK - Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step. | 04-21-2016 |
20160110641 | DETERMINING A LEVEL OF RISK FOR MAKING A CHANGE USING A NEURO FUZZY EXPERT SYSTEM - An approach for determining a level of risk for making a change is provided. Valid-trained-neuro-fuzzy-expert-system-logic is generated. A plurality of input values is received. The input values are analyzed using the valid-trained-neuro-fuzzy-expert-system-logic. A level of risk of making the change is determined based on the analyzing of the input values. | 04-21-2016 |
20160110642 | DEEP NEURAL NETWORK LEARNING METHOD AND APPARATUS, AND CATEGORY-INDEPENDENT SUB-NETWORK LEARNING APPARATUS - Provided is a DNN learning method that can reduce DNN learning time using data belonging to a plurality of categories. The method includes the steps of training a language-independent sub-network | 04-21-2016 |
20160110644 | Time Correlation Learning Neuron Circuit Based on a Resistive Memristor and an Implementation Method Thereof - The present invention discloses a time correlation learning neuron circuit based on a resistive memristor and an implementation method thereof. The present invention utilizes switching characteristics of the resistive memristor. When two terminals of the resistive memristor are selected synchronously by two excitation signals, the voltage drop between these two terminals will change the resistance value of the memristor, thereby achieving the on-off switching of a synapse connection and achieving the correlation of the two excitation signals. The device also has a memory characteristic, so the previous excitation signal can be repeated; that is, the purpose of learning is achieved. Since the resistive memristor has a simple structure and a high degree of integration, it can achieve large-scale physical synapse connections in order to achieve more complex learning and even logic functions. The present invention has a good application prospect in neuron cell computation. | 04-21-2016 |
20160117586 | AUGMENTING NEURAL NETWORKS WITH EXTERNAL MEMORY - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory. One of the methods includes providing an output derived from a first portion of a neural network output as a system output; determining one or more sets of writing weights for each of a plurality of locations in an external memory; writing data defined by a third portion of the neural network output to the external memory in accordance with the sets of writing weights; determining one or more sets of reading weights for each of the plurality of locations in the external memory from a fourth portion of the neural network output; reading data from the external memory in accordance with the sets of reading weights; and combining the data read from the external memory with a next system input to generate the next neural network input. | 04-28-2016 |
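The memory interaction described above amounts to attention-weighted writes and reads over a matrix of locations. The sketch below assumes softmax weightings and uses random vectors in place of the neural network outputs that would define them:

```python
import numpy as np

# Sketch of the external-memory read/write step: softmax attention weights
# over memory locations govern a weighted write and a weighted read. Random
# vectors stand in for the portions of the neural network output that would
# define the weights and the write data.
rng = np.random.default_rng(0)
memory = np.zeros((8, 4))                    # 8 locations, 4-dim cells

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

write_weights = softmax(rng.normal(size=8))  # from one portion of the net output
write_vec = rng.normal(size=4)               # data defined by another portion
memory += np.outer(write_weights, write_vec) # weighted write across locations

read_weights = softmax(rng.normal(size=8))   # from a further portion of the output
read_vec = read_weights @ memory             # weighted read
print(read_vec)                              # combined with the next system input
```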
20160148091 | THERMODYNAMIC RAM TECHNOLOGY STACK - A thermodynamic RAM technology stack, two or more memristors or pairs of memristors comprising AHaH (Anti-Hebbian and Hebbian) computing components, and one or more AHaH nodes composed of such memristor pairs that form at least a portion of the thermodynamic RAM technology stack. The levels of the thermodynamic RAM technology stack include the memristor, a Knowm synapse, an AHaH node, a kT-RAM, a kT-RAM instruction set, a sparse spike encoding, a kT-RAM emulator, and a SENSE Server. | 05-26-2016 |
20160162780 | EVENT-DRIVEN UNIVERSAL NEURAL NETWORK CIRCUIT - The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network. | 06-09-2016 |
20160162781 | METHOD OF TRAINING A NEURAL NETWORK - A method of training a neural network having at least an input layer, an output layer and a hidden layer, and a weight matrix encoding connection weights between two of the layers, the method comprising the steps of (a) providing an input to the input layer, the input having an associated expected output, (b) receiving a generated output at the output layer, (c) generating an error vector from the difference between the generated output and expected output, (d) generating a change matrix, the change matrix being the product of a random weight matrix and the error vector, and (e) modifying the weight matrix in accordance with the change matrix. | 06-09-2016 |
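Step (d) is the distinctive part: the error is carried back to the hidden layer through a fixed random matrix rather than the transpose of the forward weights (the idea known in the literature as feedback alignment). A minimal single-hidden-layer sketch, with toy data and an assumed tanh hidden layer:

```python
import numpy as np

# Sketch of the described update: the error vector is propagated to the hidden
# layer through a fixed random weight matrix B instead of the transpose of the
# forward weights (cf. feedback alignment). Toy data and tanh hiddens assumed.
rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 8, 16, 4, 0.05
W1 = rng.normal(scale=0.3, size=(n_in, n_hid))
W2 = rng.normal(scale=0.3, size=(n_hid, n_out))
B = rng.normal(scale=0.3, size=(n_out, n_hid))   # fixed random feedback matrix

for _ in range(100):
    x = rng.normal(size=n_in)
    target = np.tanh(x[:n_out])                  # associated expected output
    h = np.tanh(x @ W1)
    y = h @ W2                                   # generated output
    err = y - target                             # error vector
    change_hidden = (err @ B) * (1 - h**2)       # change matrix via random B
    W2 -= lr * np.outer(h, err)
    W1 -= lr * np.outer(x, change_hidden)        # modify weights per change matrix
```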
20160171370 | COMPUTER-IMPLEMENTED SYSTEMS UTILIZING SENSOR NETWORKS FOR SENSING TEMPERATURE AND MOTION ENVIRONMENTAL PARAMETERS; AND METHODS OF USE THEREOF | 06-16-2016 |
20160180976 | METHOD OF SYNTHESIZING AXIAL POWER DISTRIBUTIONS OF NUCLEAR REACTOR CORE USING NEURAL NETWORK CIRCUIT AND IN-CORE PROTECTION SYSTEM (ICOPS) USING THE SAME | 06-23-2016 |
20160203858 | NEUROMORPHIC MEMORY CIRCUIT | 07-14-2016 |
20160379111 | MEMORY BANDWIDTH MANAGEMENT FOR DEEP LEARNING APPLICATIONS - In a data center, neural network evaluations can be included for services involving image or speech recognition by using a field programmable gate array (FPGA) or other parallel processor. The memory bandwidth limitations of providing weighted data sets from an external memory to the FPGA (or other parallel processor) can be managed by queuing up input data from a plurality of cores executing the services at the FPGA (or other parallel processor) in batches of at least two feature vectors. The at least two feature vectors can be at least two observation vectors from a same data stream or from different data streams. The FPGA (or other parallel processor) can then act on the batch of data for each loading of the weighted datasets. | 12-29-2016 |
20160379112 | TRAINING AND OPERATION OF COMPUTATIONAL MODELS - A processing unit can acquire datasets from respective data sources, each having a respective unique data domain. The processing unit can determine values of a plurality of features based on the plurality of datasets. The processing unit can modify input-specific parameters or history parameters of a computational model based on the values of the features. In some examples, the processing unit can determine an estimated value of a target feature based at least in part on the modified computational model and values of one or more reference features. In some examples, the computational model can include neural networks for several input sets. An output layer of at least one of the neural networks can be connected to the respective hidden layer(s) of one or more other(s) of the neural networks. In some examples, the neural networks can be operated to provide transformed feature value(s) for respective times. | 12-29-2016 |
20160379115 | DEEP NEURAL NETWORK PROCESSING ON HARDWARE ACCELERATORS WITH STACKED MEMORY - A method is provided for processing on an acceleration component a deep neural network. The method includes configuring the acceleration component to perform forward propagation and backpropagation stages of the deep neural network. The acceleration component includes an acceleration component die and a memory stack disposed in an integrated circuit package. The memory stack has a memory bandwidth greater than about 50 GB/sec and a power efficiency of greater than about 20 MB/sec/mW. | 12-29-2016 |
20170236054 | Hyper Aware Logic to Create an Agent of Consciousness and Intent for Devices and Machines | 08-17-2017 |
20170236056 | AUTOMATED PREDICTIVE MODELING AND FRAMEWORK | 08-17-2017 |
20170236057 | System and Method for Face Detection and Landmark Localization | 08-17-2017 |
20170236059 | APPARATUS AND METHOD FOR GENERATING WEIGHT ESTIMATION MODEL, AND APPARATUS AND METHOD FOR ESTIMATING WEIGHT | 08-17-2017 |
20180025268 | Configurable machine learning assemblies for autonomous operation in personal devices | 01-25-2018 |
20180025270 | GENERATING SETS OF TRAINING PROGRAMS FOR MACHINE LEARNING MODELS | 01-25-2018 |
20180025271 | LEARNING APPARATUS, IDENTIFYING APPARATUS, LEARNING AND IDENTIFYING SYSTEM, AND RECORDING MEDIUM | 01-25-2018 |
20180025272 | NEURAL NETWORK APPLICATIONS IN RESOURCE CONSTRAINED ENVIRONMENTS | 01-25-2018 |
20190147328 | COMPETITIVE MACHINE LEARNING ACCURACY ON NEUROMORPHIC ARRAYS WITH NON-IDEAL NON-VOLATILE MEMORY DEVICES | 05-16-2019 |
20190147332 | MEMORY BANDWIDTH REDUCTION TECHNIQUES FOR LOW POWER CONVOLUTIONAL NEURAL NETWORK INFERENCE APPLICATIONS | 05-16-2019 |
20190147333 | SYSTEM AND METHOD FOR SEMI-SUPERVISED CONDITIONAL GENERATIVE MODELING USING ADVERSARIAL NETWORKS | 05-16-2019 |
20190147334 | MATCHING NETWORK FOR MEDICAL IMAGE ANALYSIS | 05-16-2019 |
20190147337 | NEURAL NETWORK SYSTEM FOR SINGLE PROCESSING COMMON OPERATION GROUP OF NEURAL NETWORK MODELS, APPLICATION PROCESSOR INCLUDING THE SAME, AND OPERATION METHOD OF NEURAL NETWORK SYSTEM | 05-16-2019 |
20190147339 | LEARNING NEURAL NETWORK STRUCTURE | 05-16-2019 |
20190147340 | Machine Learning via Double Layer Optimization | 05-16-2019 |
20190147342 | DEEP NEURAL NETWORK PROCESSOR WITH INTERLEAVED BACKPROPAGATION | 05-16-2019 |
20190147343 | UNSUPERVISED ANOMALY DETECTION USING GENERATIVE ADVERSARIAL NETWORKS | 05-16-2019 |
20190147344 | SYSTEM AND METHOD OF DEPLOYING AN ARTIFICIAL NEURAL NETWORK ON A TARGET DEVICE | 05-16-2019 |
20190147350 | METHOD AND DEVICE FOR PRESENTING PREDICTION MODEL, AND METHOD AND DEVICE FOR ADJUSTING PREDICTION MODEL | 05-16-2019 |
20220138550 | BLOCKCHAIN FOR ARTIFICIAL INTELLIGENCE TRAINING - An example operation may include one or more of dividing a neural network that corresponds to an artificial intelligence (AI) model into a plurality of sub-models, assigning the plurality of sub-models to a plurality of blockchain peers, respectively, training the sub-models, via the plurality of blockchain peers, to generate training results within an iteration, and committing the training results to a blockchain which is accessible by the plurality of blockchain peers. | 05-05-2022 |
20220138556 | DATA LOG PARSING SYSTEM AND METHOD - A method of processing data logs, a system for processing data logs, a method of training a system for processing data logs, and a processor are described. The method of processing data logs may include receiving a data log from a data source, where the data log is received in a format native to a machine that generated the data log. The method may also include providing the data log to a neural network trained to process natural language-based inputs, parsing the data log with the neural network, and receiving an output from the neural network, where the output is generated in response to the neural network parsing the data log. The method may also include storing the output from the neural network in a data log repository. | 05-05-2022 |
20220138558 | DEEP SIMULATION NETWORKS - Systems utilize a set of stored simulation nodes including an initial simulation node and a subsequent simulation node constructed according to a neural network computational fabric for simulating a physical process. These systems are configured to implement/utilize the set of simulation nodes by, at the initial simulation node, receiving initial state input, calculating an initial state evolution output, and generating an initial message vector output. At the subsequent simulation node, systems implement/utilize the set of simulation nodes by receiving a subsequent state input and a subsequent message vector input based on the initial message vector output to facilitate coordination between the initial and subsequent simulation nodes for calculating respective state evolution outputs for simulating the physical process or component. The systems are also configured to calculate a subsequent state evolution output based on the subsequent state input and the subsequent message vector input. | 05-05-2022 |
20220138563 | METHOD AND DEVICE WITH DEEP LEARNING OPERATIONS - A method and a device with deep learning operations are provided. An electronic device includes a processor configured to simultaneously perform a plurality of tasks using a systolic array, wherein the processor includes the systolic array, which has a plurality of processing elements (PEs), and a first on-chip network that propagates data between two or more of the plurality of PEs, and wherein each of the plurality of tasks includes one or more deep learning operations. | 05-05-2022 |
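The PE-array idea can be made concrete with a cycle-by-cycle simulation of an output-stationary systolic matrix multiply; this is purely illustrative of how operands propagate between neighbouring PEs, not the patented device:

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an output-stationary PE array."""
    M, K = A.shape
    _, N = B.shape
    acc = np.zeros((M, N))            # each PE accumulates one output element
    a_reg = np.zeros((M, N))          # operands latched in each PE per cycle
    b_reg = np.zeros((M, N))
    for t in range(M + N + K - 2):    # enough cycles to drain the array
        new_a, new_b = np.zeros((M, N)), np.zeros((M, N))
        for i in range(M):
            for j in range(N):
                # operands arrive from the left/top neighbour (skewed feed)
                a_in = (A[i, t - i] if 0 <= t - i < K else 0.0) if j == 0 else a_reg[i, j - 1]
                b_in = (B[t - j, j] if 0 <= t - j < K else 0.0) if i == 0 else b_reg[i - 1, j]
                acc[i, j] += a_in * b_in          # multiply-accumulate
                new_a[i, j], new_b[i, j] = a_in, b_in
        a_reg, b_reg = new_a, new_b
    return acc

rng = np.random.default_rng(5)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Running several such multiplies over disjoint regions of a larger PE grid is one way the "plurality of tasks" could share the array simultaneously.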
20220138564 | Batch Processing in a Machine Learning Computer - A method of processing batches of data in a computer comprising a plurality of pipelined stages each providing one or more layers of a machine learning model. The method comprises: processing a first batch of data in the pipeline processing stages, each layer of the model using an activation function and weights for that layer to generate an output activation, wherein an output layer generates an output of the model. The method further comprises, for each layer: computing an estimated gradient of a loss function; generating updated weights by processing the estimated gradient with respect to the weights for the first batch using a learning rate for the model; and storing the updated weights for processing on the next batch of data. Updated weights are generated using a modulation factor based on the number of processing stages between that layer and the output layer. | 05-05-2022 |
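The per-layer update rule suggested by this abstract can be sketched as below. The exponential form of the modulation factor is an assumption made for illustration; the abstract only states that the factor depends on the number of stages between a layer and the output layer:

```python
import numpy as np

def modulated_update(weights, grad, lr, stages_to_output, factor=0.9):
    """Scale the update by factor**stages_to_output before applying it."""
    return weights - lr * (factor ** stages_to_output) * grad

rng = np.random.default_rng(2)
layers = [rng.normal(size=(4, 4)) for _ in range(4)]   # one layer per stage
grads = [rng.normal(size=(4, 4)) for _ in range(4)]    # estimated gradients

num_stages = len(layers)
for idx, (w, g) in enumerate(zip(layers, grads)):
    stages_to_output = num_stages - 1 - idx            # distance to output layer
    layers[idx] = modulated_update(w, g, lr=0.01,
                                   stages_to_output=stages_to_output)
```

Damping updates for layers far from the output is a natural way to compensate for the staler gradient estimates that pipelining introduces.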
20220138565 | ADVERSARIAL INFORMATION BOTTLENECK STRATEGY FOR IMPROVED MACHINE LEARNING - Certain aspects of the present disclosure provide techniques for performing machine learning, including: processing a training data instance with a task model to generate an encoding and a task model output; processing a discriminator input based on the encoding using a discriminator model to generate an estimated mutual information between the encoding and the one or more input variables of the training data instance; updating parameters of the discriminator model using a first iterative optimization algorithm to maximize a discriminator objective function based on the estimated mutual information; and updating parameters of the task model using a second iterative optimization algorithm to minimize a task objective function based on a sum of the estimated mutual information between the task model output and the one or more input variables of the training data instance and a conditional entropy between the target variable and an encoding generated by the task model. | 05-05-2022 |
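A compact sketch of this alternating scheme follows. The Donsker-Varadhan (MINE-style) bound is one concrete choice of mutual-information estimator, assumed here for illustration, and the conditional-entropy term is approximated by an ordinary cross-entropy task loss; architectures and dimensions are arbitrary:

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(256, 8)                    # input variables
y = (x[:, :1] > 0).float()                 # target variable (toy)

encoder = nn.Sequential(nn.Linear(8, 4), nn.Tanh())   # task model: encoding
head = nn.Linear(4, 1)                                 # task model: output
disc = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))

opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)            # 1st optimizer
opt_task = torch.optim.Adam(list(encoder.parameters())
                            + list(head.parameters()), lr=1e-3)    # 2nd optimizer
bce = nn.BCEWithLogitsLoss()

def mi_estimate(z, x):
    """Donsker-Varadhan bound: E[T(x,z)] - log E[exp(T(x,z'))]."""
    joint = disc(torch.cat([x, z], dim=1)).mean()
    z_shuffled = z[torch.randperm(z.size(0))]
    marginal = disc(torch.cat([x, z_shuffled], dim=1)).exp().mean().log()
    return joint - marginal

for step in range(200):
    # 1) discriminator ascends the MI objective (encoding detached)
    opt_disc.zero_grad()
    (-mi_estimate(encoder(x).detach(), x)).backward()
    opt_disc.step()
    # 2) task model descends task loss + estimated MI (the bottleneck)
    opt_task.zero_grad()
    loss = bce(head(encoder(x)), y) + 0.1 * mi_estimate(encoder(x), x)
    loss.backward()
    opt_task.step()
```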
20220138566 | LEARNING SYSTEM, LEARNING METHOD AND PROGRAM - A learning system comprising at least one processor configured to: obtain training data to be learned by a learning model; and repeatedly execute a learning process of the learning model based on the training data, wherein the at least one processor quantizes the parameters of a subset of the layers of the learning model and executes the learning process, and then quantizes the parameters of the other layers of the learning model and executes the learning process again. | 05-05-2022 |
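A toy rendition of this staged scheme: quantize one subset of layers, run the learning process, then quantize the remaining layers and run it again. Uniform symmetric 8-bit quantization and the re-quantize-after-step pattern are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
layers = [rng.normal(size=(4, 4)) for _ in range(4)]

def quantize(w, bits=8):
    """Uniform symmetric quantization to the given bit width."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def learning_step(layers, quantized):
    """Toy learning pass; already-quantized layers are re-quantized after
    the step (a crude stand-in for quantization-aware training)."""
    for i, w in enumerate(layers):
        layers[i] = w - 0.01 * w               # stand-in gradient step
        if i in quantized:
            layers[i] = quantize(layers[i])

quantized = {0, 1}                   # stage 1: quantize part of the layers
for i in quantized:
    layers[i] = quantize(layers[i])
learning_step(layers, quantized)     # ...and execute the learning process

quantized |= {2, 3}                  # stage 2: quantize the other layers
for i in (2, 3):
    layers[i] = quantize(layers[i])
learning_step(layers, quantized)     # ...and execute the learning process again
```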
20220138575 | COMPUTER IMPLEMENTED METHOD AND TEST UNIT FOR APPROXIMATING TEST RESULTS AND A METHOD FOR PROVIDING A TRAINED, ARTIFICIAL NEURAL NETWORK - A computer-implemented method is provided for approximating test results of a virtual test of a device for the at least partially autonomous guidance of a motor vehicle. The invention further relates to a computer-implemented method for providing a trained artificial neural network, a test unit, a computer program, and a computer-readable data carrier. | 05-05-2022 |
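The surrogate-model idea behind this abstract can be sketched as follows: fit a cheap model on the results of expensive virtual tests, then approximate outcomes for new scenarios. The quadratic "virtual test" and the ridge regression below stand in for the simulator and the trained neural network, respectively:

```python
import numpy as np

rng = np.random.default_rng(4)

def virtual_test(scenario):
    """Placeholder for an expensive driving-scenario simulation."""
    return scenario @ np.array([0.5, -1.2, 2.0]) + 0.1 * (scenario ** 2).sum(axis=1)

X = rng.uniform(-1, 1, size=(200, 3))       # scenario parameters
y = virtual_test(X)                          # ground-truth test results

# Ridge-regression surrogate on polynomial features (illustrative choice).
phi = np.hstack([X, X ** 2])
w = np.linalg.solve(phi.T @ phi + 1e-3 * np.eye(6), phi.T @ y)

new_scenarios = rng.uniform(-1, 1, size=(5, 3))
approx = np.hstack([new_scenarios, new_scenarios ** 2]) @ w
print(np.abs(approx - virtual_test(new_scenarios)).max())   # small error
```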