Patent application number | Description | Published |
20090006468 | EFFICIENT UPDATES FOR DISTRIBUTED FILE SYSTEMS - A directory services implementation is provided to associate distributed file system (DFS) links with individual directory objects, and metadata to attributes thereof, to allow leveraging directory services features for DFS for a given namespace. For example, updating directory objects with modified metadata related to DFS links requires only that the directory object related to the link be updated rather than an entire directory object related to the corresponding namespace. Moreover, directory services functionalities such as querying can be utilized with DFS to provide efficient location of updated records. In this regard, efficient replication of DFS throughout a network is facilitated. | 01-01-2009 |
20110188406 | Message Transport System Using Publication and Subscription Mechanisms - A message transport system may use a publication subscription mechanism to connect nodes and transport messages through the nodes. Each node may establish connections to other nodes, and subscription requests and publication notifications may be passed across the nodes to establish paths for messages. When a message is published, the message may be passed over those connections for which a subscription is active. A path identifier may be added to the message as it is passed between nodes, and the path identifier may be used by a subscribing node for identification of the information being received. When a subscriber notification is removed, the path may be deconstructed across multiple nodes. The nodes may be arranged such that each node is agnostic to any connections past the nodes to which it is connected, and may allow any node to subscribe to any information published within the network. | 08-04-2011 |
20140161129 | Message Transport System Using Publication and Subscription Mechanisms - A message transport system may use a publication subscription mechanism to connect nodes and transport messages through the nodes. Each node may establish connections to other nodes, and subscription requests and publication notifications may be passed across the nodes to establish paths for messages. When a message is published, the message may be passed over those connections for which a subscription is active. A path identifier may be added to the message as it is passed between nodes, and the path identifier may be used by a subscribing node for identification of the information being received. When a subscriber notification is removed, the path may be deconstructed across multiple nodes. The nodes may be arranged such that each node is agnostic to any connections past the nodes to which it is connected, and may allow any node to subscribe to any information published within the network. | 06-12-2014 |
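The two message-transport abstracts above describe nodes that propagate subscription requests to build paths, accumulate a path identifier hop by hop, and stay agnostic to anything beyond their direct neighbors. A minimal Python sketch of that pattern follows; all class and method names here are illustrative assumptions, not taken from the applications.

```python
class Node:
    """One node in a publication/subscription message transport network."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []      # directly connected Nodes only
        self.subscriptions = {}  # topic -> set of interested neighbors, or "local"
        self.delivered = []      # (topic, payload, path) tuples delivered locally

    def connect(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def subscribe(self, topic, _origin=None):
        # Record who asked, then forward the request to every other neighbor;
        # each node knows only its direct connections, never the full network.
        interested = self.subscriptions.setdefault(topic, set())
        key = _origin if _origin is not None else "local"
        if key in interested:
            return               # already established; stop propagating
        interested.add(key)
        for n in self.neighbors:
            if n is not _origin:
                n.subscribe(topic, _origin=self)

    def publish(self, topic, payload, path=(), _origin=None):
        # Extend the path identifier at each hop, then forward the message
        # over every connection for which a subscription is active.
        path = path + (self.name,)
        for target in self.subscriptions.get(topic, ()):
            if target == "local":
                self.delivered.append((topic, payload, path))
            elif target is not _origin:
                target.publish(topic, payload, path, _origin=self)
```

For a chain A-B-C where C subscribes to a topic, a message published at A reaches C carrying the path identifier `("A", "B", "C")`, which the subscriber can use to identify the information it is receiving.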
Patent application number | Description | Published |
20140268813 | LIGHTING DEVICE WITH VIRTUAL LIGHT SOURCE - A lighting device includes a light source and an enclosure enclosing the light source wherein a portion of the enclosure has a focus-forming curvature such that when the light from the light source is reflected off the enclosure element the reflected light intersects at the focus of the curvature and creates a virtual light source at the focus. A reflective coating or a reflective material may be applied to the enclosure, or a ball lens may be used around the light source, to increase the intensity of the reflected light and of the virtual light source. A diffuser may be used to change the size and shape of the virtual light source. | 09-18-2014 |
20150049476 | SOLID-STATE LIGHTING TROFFER WITH READILY RETROFITTABLE STRUCTURE - A light-emitting diode (LED) troffer adopts LED light sources mounted along two lengthwise sides of an LED module that uses a reflecting diffuser and a diffused light exit window to sufficiently average white light emissions from a plurality of LEDs or to properly mix light emissions from white LEDs at a correlated color temperature (CCT) of 6,200±300 K with emissions from LEDs having saturated colors for uniform and tunable CCT light outputs having a consistent intensity or color hue within viewing angles. The troffer's retrofittable design enables a single person to readily hang and secure the LED module single-ended on top of the troffer for installation, retrofit, and inspection. The troffer uses such an integrated LED module with a power density less than 0.0127 W/cm². | 02-19-2015 |
20150127299 | Methods And Systems Of Proactive Monitoring Of LED Lights - Various embodiments of methods and systems of proactive monitoring of LED lights are described herein. An LED light monitoring system may include a specification database of LED lamps and drivers, a usage database that records the usage of LED lamps and drivers, a data acquisition subsystem that obtains the identity and the usage data of LED lamps or drivers in use, and a data processing subsystem that calculates, for each LED lamp, its current lumen output statistically and, for each LED driver, its remaining lifetime, and provides replacement recommendations for LED lamps and drivers. | 05-07-2015 |
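The monitoring abstract above computes, per lamp, a statistical estimate of current lumen output and, per driver, a remaining lifetime, and turns those into replacement recommendations. The sketch below shows one plausible form of those calculations, using an exponential lumen-depreciation model of the kind used in TM-21-style projections; the decay constant, threshold, and function names are assumptions for illustration, not details from the application.

```python
import math

def current_lumen_output(initial_lumens, hours_used, alpha=3e-6):
    """Projected lumen output after `hours_used` operating hours.

    Uses the exponential decay model L(t) = L0 * exp(-alpha * t), where
    alpha would be fitted from measured lumen-maintenance data.
    """
    return initial_lumens * math.exp(-alpha * hours_used)

def remaining_driver_hours(rated_life_hours, hours_used):
    """Remaining driver lifetime, floored at zero once the rating is spent."""
    return max(0.0, rated_life_hours - hours_used)

def needs_replacement(initial_lumens, hours_used, alpha=3e-6, threshold=0.70):
    """Recommend replacement once output falls below a fraction of initial
    lumens (0.70 corresponds to the common L70 end-of-life criterion)."""
    return current_lumen_output(initial_lumens, hours_used, alpha) \
        < threshold * initial_lumens
```

With these assumed parameters, a 1,000-lumen lamp crosses the L70 threshold after roughly 119,000 burn hours; a real system would fit `alpha` per lamp model from its specification database.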
Patent application number | Description | Published |
20150108900 | Add-On Smart Controller For LED Lighting Device - An add-on smart controller for an LED lighting device includes a power input port, a power output port, a housing, a control unit in the housing, and at least one control signal receiver in the control unit. A power input of the control unit is connected to the power input port. A power output of the control unit is connected to the power output port. The control signal receiver is configured to receive external control signals. The control unit is configured to activate the power output port to supply output voltage responsive to the control unit receiving an ON signal. The control unit is configured to deactivate the power output port responsive to the control unit receiving an OFF signal. | 04-23-2015 |
20150146418 | Retractable End-Cap For LED Tube - Embodiments of an end-cap for an LED tube are described. In one aspect, an end-cap for an LED tube may include an end-cap housing, at least one elastic component, and a connecting assembly. The end-cap housing may include at least one power pin thereon configured to connect to an external power source. The elastic component may reside inside the end-cap housing. The end-cap housing may connect to a first end of the connecting assembly via an extendable connection, and a second end of the connecting assembly opposite the first end connects to a body of the LED tube through at least one power connector. The power connector may connect to the at least one power pin when the elastic component is pressed. The power connector may remain separate from the at least one power pin when the elastic component is not pressed. | 05-28-2015 |
20150176771 | Retractable End-Cap For LED Tube - Embodiments of an end-cap with a retractable and rotatable pin for an LED tube are described. In one aspect, an end-cap for an LED tube may include an end-cap housing, an end-cap base assembly, a power-pin assembly, and at least one elastic component. The power-pin assembly may include at least one power pin thereon configured to connect to an external power source. The power-pin assembly may protrude out of a center opening of the end-cap housing. The end-cap base assembly may have at least one power connector, one end of which is connected to the body of the LED tube to receive electric power. The at least one elastic component may reside inside the end-cap housing and may be placed between the power-pin assembly and the end-cap base assembly. The power connector may connect to the at least one power pin when the at least one elastic component is pressed. | 06-25-2015 |
Patent application number | Description | Published |
20110231787 | GUI FOR PROGRAMMING STEP AND REPEAT OPERATIONS IN A MACHINE VISION INSPECTION SYSTEM - A method is provided for programming step and repeat operations of a machine vision inspection system. The machine vision inspection system includes an imaging portion, a stage for holding one or more workpieces in a field of view (FOV) of the imaging portion, a control portion, and a graphical user interface (GUI). According to the method, a user operates the machine vision inspection system to define a set of inspection operations to be performed on a first configuration of workpiece features. The user also operates the GUI to display a step and repeat dialog box, in which he defines a first plurality of parameters defining a set of default step and repeat locations for performing the defined set of inspection operations. The user further operates the GUI to define a set of inspection step and repeat locations, which is a subset of the defined set of default step and repeat locations, where the inspection operations are to be actually performed. | 09-22-2011 |
20130120567 | SYSTEM AND METHOD UTILIZING AN EDITING INITIALIZATION BLOCK IN A PART PROGRAM EDITING ENVIRONMENT IN A MACHINE VISION SYSTEM - A method is provided for defining and utilizing an editing initialization block for a part program. The part program comprises a plurality of steps for taking measurements of a part and is displayed in an editing interface. An option is provided in the editing interface for selecting which steps are in an editing initialization block. After the part program has been saved, at a later time when the part program is recalled for editing, the editing initialization block may be run before additional steps are added to the part program. At least some of the data that would have been obtained by one or more of the initial part program steps that are not in the editing initialization block may be based on estimated data that is related to (e.g., modified based on) data determined from running the editing initialization block. | 05-16-2013 |
20130123945 | MACHINE VISION SYSTEM PROGRAM EDITING ENVIRONMENT INCLUDING REAL TIME CONTEXT GENERATION FEATURES - A machine vision system program editing environment includes near real time context generation. Rather than requiring execution of all preceding instructions of a part program in order to generate a realistic context for subsequent edits, surrogate data operations using previously saved data replace execution of certain sets of instructions. The surrogate data may be saved during the actual execution of operations that are recorded in a part program. An edit mode of execution substitutes that data as a surrogate for executing the operations that would otherwise generate that data. Significant time savings may be achieved for context generation, such that editing occurs within an operating context which may be repeatedly refreshed for accuracy in near real time. This supports convenient program modification by relatively unskilled users, using the native user interface of the machine vision system, rather than difficult to use text-based or graphical object-based editing environments. | 05-16-2013 |
20150103156 | SYSTEM AND METHOD FOR CONTROLLING A TRACKING AUTOFOCUS (TAF) SENSOR IN A MACHINE VISION INSPECTION SYSTEM - A method is provided for controlling a Tracking AutoFocus (TAF) portion of a machine vision inspection system including an imaging portion, a movable workpiece stage, a control portion, and graphical user interface (GUI). The TAF portion automatically adjusts a focus position of the imaging portion to focus at a Z height corresponding to a current surface height of the workpiece. The method includes providing the TAF portion, and providing TAF enable and disable operations, wherein: the TAF disable operations comprise a first set of TAF automatic interrupt operations that are automatically triggered by user-initiated operations that include changing the Z height, and the TAF disable operations may further comprise automatic interrupt operations that are automatically triggered based on at least one respective TAF Z height surface tracking characteristic exceeding a previously set TAF disable limit for that respective TAF Z height surface tracking characteristic. | 04-16-2015 |
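The step-and-repeat abstract above (20110231787) describes two pieces of data the GUI collects: parameters defining a default grid of step-and-repeat locations, and a user-selected subset of those locations where the inspection operations actually run. A small sketch of that two-stage selection, with assumed parameter names and a simple row-major grid layout not specified in the application:

```python
def default_step_repeat_locations(origin, nx, ny, pitch_x, pitch_y):
    """All default grid locations: nx columns by ny rows, starting at origin.

    Returns (x, y) stage coordinates in row-major order.
    """
    x0, y0 = origin
    return [(x0 + i * pitch_x, y0 + j * pitch_y)
            for j in range(ny) for i in range(nx)]

def inspection_locations(defaults, selected_indices):
    """The subset of default locations where inspection is actually performed."""
    return [defaults[i] for i in sorted(selected_indices)]
```

For example, a 3x2 grid with 10 mm and 5 mm pitches yields six default locations, from which the user might pick only the first and fifth for inspection.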
Patent application number | Description | Published |
20120303565 | LEARNING PROCESSES FOR SINGLE HIDDEN LAYER NEURAL NETWORKS WITH LINEAR OUTPUT UNITS - Learning processes for a single hidden layer neural network, including linear input units, nonlinear hidden units, and linear output units, calculate the lower-layer network parameter gradients by taking into consideration a solution for the upper-layer network parameters. The upper-layer network parameters are calculated by a closed form formula given the lower-layer network parameters. An accelerated gradient algorithm can be used to update the lower-layer network parameters. A weighted gradient also can be used. With the combination of these techniques, accelerated training with faster convergence, to a point with a lower error rate, can be obtained. | 11-29-2012 |
20130138436 | DISCRIMINATIVE PRETRAINING OF DEEP NEURAL NETWORKS - Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively. | 05-30-2013 |
20130138589 | EXPLOITING SPARSENESS IN TRAINING DEEP NEURAL NETWORKS - Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. It is noted that the continued DNN training tends to converge much faster than the initial training. | 05-30-2013 |
20130212052 | TENSOR DEEP STACKED NEURAL NETWORK - A tensor deep stacked neural (T-DSN) network for obtaining predictions for discriminative modeling problems. The T-DSN network and method use bilinear modeling with a tensor representation to map a hidden layer to the prediction layer. The T-DSN network is constructed by stacking blocks of a single hidden layer tensor neural network (SHLTNN) on top of each other. The single hidden layer for each block is then separated or divided into two or more sections. In some embodiments, the hidden layer is separated into a first hidden layer section and a second hidden layer section. These multiple sections of the hidden layer are combined using a product operator to obtain an implicit hidden layer having a single section. In some embodiments, the product operator is a Khatri-Rao product. A prediction is made using the implicit hidden layer and weights, and the output prediction layer is consequently obtained. | 08-15-2013 |
20140067735 | COMPUTER-IMPLEMENTED DEEP TENSOR NEURAL NETWORK - A deep tensor neural network (DTNN) is described herein, wherein the DTNN is suitable for employment in a computer-implemented recognition/classification system. Hidden layers in the DTNN comprise at least one projection layer, which includes a first subspace of hidden units and a second subspace of hidden units. The first subspace of hidden units receives a first nonlinear projection of input data to the projection layer and generates a first set of output data based at least in part thereon, and the second subspace of hidden units receives a second nonlinear projection of the input data to the projection layer and generates a second set of output data based at least in part thereon. A tensor layer, which can be converted into a conventional layer of a DNN, generates a third set of output data based upon the first set of output data and the second set of output data. | 03-06-2014 |
20140142929 | DEEP NEURAL NETWORKS TRAINING FOR SPEECH AND PATTERN RECOGNITION - The use of a pipelined algorithm that performs parallelized computations to train deep neural networks (DNNs) for performing data analysis may reduce training time. The DNNs may be one of context-independent DNNs or context-dependent DNNs. The training may include partitioning training data into sample batches of a specific batch size. The partitioning may be performed based on rates of data transfers between processors that execute the pipelined algorithm, considerations of accuracy and convergence, and the execution speed of each processor. Other techniques for training may include grouping layers of the DNNs for processing on a single processor, distributing a layer of the DNNs to multiple processors for processing, or modifying an execution order of steps in the pipelined algorithm. | 05-22-2014 |
20140257803 | CONSERVATIVELY ADAPTING A DEEP NEURAL NETWORK IN A RECOGNITION SYSTEM - Various technologies described herein pertain to conservatively adapting a deep neural network (DNN) in a recognition system for a particular user or context. A DNN is employed to output a probability distribution over models of context-dependent units responsive to receipt of captured user input. The DNN is adapted for a particular user based upon the captured user input, wherein the adaption is undertaken conservatively such that a deviation between outputs of the adapted DNN and the unadapted DNN is constrained. | 09-11-2014 |
20140257804 | EXPLOITING HETEROGENEOUS DATA IN DEEP NEURAL NETWORK-BASED SPEECH RECOGNITION SYSTEMS - Technologies pertaining to training a deep neural network (DNN) for use in a recognition system are described herein. The DNN is trained using heterogeneous data, the heterogeneous data including narrowband signals and wideband signals. The DNN, subsequent to being trained, receives an input signal that can be either a wideband signal or narrowband signal. The DNN estimates the class posterior probability of the input signal regardless of whether the input signal is the wideband signal or the narrowband signal. | 09-11-2014 |
20140257805 | MULTILINGUAL DEEP NEURAL NETWORK - Described herein are various technologies pertaining to a multilingual deep neural network (MDNN). The MDNN includes a plurality of hidden layers, wherein values for weight parameters of the plurality of hidden layers are learned during a training phase based upon training data in terms of acoustic raw features for multiple languages. The MDNN further includes softmax layers that are trained for each target language separately, making use of the hidden layer values trained jointly with multiple source languages. The MDNN is adaptable, such that a new softmax layer may be added on top of the existing hidden layers, where the new softmax layer corresponds to a new target language. | 09-11-2014 |
20150255061 | LOW-FOOTPRINT ADAPTATION AND PERSONALIZATION FOR A DEEP NEURAL NETWORK - The adaptation and personalization of a deep neural network (DNN) model for automatic speech recognition is provided. An utterance which includes speech features for one or more speakers may be received in ASR tasks such as voice search or short message dictation. A decomposition approach may then be applied to an original matrix in the DNN model. In response to applying the decomposition approach, the original matrix may be converted into multiple new matrices which are smaller than the original matrix. A square matrix may then be added to the new matrices. Speaker-specific parameters may then be stored in the square matrix. The DNN model may then be adapted by updating the square matrix. This process may be applied to all of a number of original matrices in the DNN model. The adapted DNN model may include fewer parameters than the original DNN model. | 09-10-2015 |
20150269933 | MIXED SPEECH RECOGNITION - The claimed subject matter includes a system and method for recognizing mixed speech from a source. The method includes training a first neural network to recognize the speech signal spoken by the speaker with a higher level of a speech characteristic from a mixed speech sample. The method also includes training a second neural network to recognize the speech signal spoken by the speaker with a lower level of the speech characteristic from the mixed speech sample. Additionally, the method includes decoding the mixed speech sample with the first neural network and the second neural network by optimizing the joint likelihood of observing the two speech signals considering the probability that a specific frame is a switching point of the speech characteristic. | 09-24-2015 |
20160026914 | DISCRIMINATIVE PRETRAINING OF DEEP NEURAL NETWORKS - Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively. | 01-28-2016 |
20160140956 | PREDICTION-BASED SEQUENCE RECOGNITION - A sequence recognition system comprises a prediction component configured to receive a set of observed features from a signal to be recognized and to output a prediction output indicative of a predicted recognition based on the set of observed features. The sequence recognition system also comprises a classification component configured to receive the prediction output and to output a label indicative of recognition of the signal based on the prediction output. | 05-19-2016 |
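Among the training techniques in this group, the sparseness approach of 20130138589 is easy to illustrate: after initial dense training, only interconnections whose weight magnitudes exceed a minimum threshold (chosen so at most a prescribed number survive) are considered in further back-propagation updates. A pure-Python sketch on a flat weight list follows; the function names, learning rate, and flat representation are illustrative assumptions, not details from the application.

```python
def sparsity_mask(weights, max_connections):
    """0/1 mask keeping only the `max_connections` largest-magnitude weights.

    The minimum weight threshold is derived from the prescribed maximum
    number of interconnections, as the abstract describes.
    """
    if max_connections >= len(weights):
        return [1] * len(weights)
    # Threshold = magnitude of the max_connections-th largest weight.
    threshold = sorted((abs(w) for w in weights), reverse=True)[max_connections - 1]
    return [1 if abs(w) >= threshold else 0 for w in weights]

def masked_update(weights, gradients, mask, lr=0.1):
    """Back-propagation step that adjusts only the kept interconnections;
    pruned weights are frozen at zero for the rest of training."""
    return [(w - lr * g) if keep else 0.0
            for w, g, keep in zip(weights, gradients, mask)]
```

In a real DNN the mask would be per weight matrix and the gradients would come from error back-propagation; the point of the sketch is only the threshold-then-freeze structure that makes the continued training converge on far fewer parameters.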
Patent application number | Description | Published |
20080290066 | Method of Fabricating Polymer Modulators With Etch Stop Clads - A process that comprises dry etching a trench into a side clad polymer layer using an underlying passive polymer layer as an etch stop, and then back filling the trench with an electro-optic polymer. | 11-27-2008 |
20080298736 | Broadband Electro-Optic Polymer Modulators With Integrated Resistors - In one aspect, an electro-optic device comprises: a) a high speed electrode; b) a ground electrode; c) polymer layers embedding an electro-optic polymer waveguide; and d) at least one integrated resistor in electrical contact with the high speed electrode and the ground electrode, wherein the high speed electrode and the ground electrode are positioned to control light in the electro-optic polymer waveguide. | 12-04-2008 |
20080298737 | Integrated Resistor Fabrication Method and Optical Devices Therefrom - In one aspect, a process comprises: a) fabricating polymer layers embedding an electro-optic polymer waveguide; b) fabricating a high speed electrode and a ground electrode, wherein the high speed electrode and ground electrode are positioned to control the electro-optic polymer waveguide; and c) fabricating a resistor at a predetermined location along the high speed electrode. | 12-04-2008 |
20100074584 | ELECTRO-OPTIC DEVICE AND METHOD FOR MAKING LOW RESISTIVITY HYBRID POLYMER CLADS FOR AN ELECTRO-OPTIC DEVICE - A low resistivity hybrid optical cladding may be formed from a sol-gel doped with an inorganic salt such as lithium perchlorate. An electro-optic device may be formed by poling an organic chromophore-loaded modulation layer through at least one layer of the low resistivity hybrid optical cladding. | 03-25-2010 |
20100111465 | INTRINSICALLY LOW RESISTIVITY HYBRID SOL-GEL POLYMER CLADS AND ELECTRO-OPTIC DEVICES MADE THEREFROM - A low resistivity hybrid organic-inorganic material may include a proportion of charge traps including a trap element indirectly covalently bonded to a donor or acceptor element. The trap element may include tin. The donor or acceptor element may include indium and/or antimony. Bonding includes cross-linking via oxygen bonds and via organic cross-linkers. The material may be formed as a hybrid sol-gel. The material may have optical transmission and refractive index characteristics. The material may be formed as optical cladding proximal to a non-linear optical layer, and may form a portion of a second order nonlinear optical device. The second order nonlinear optical device may include an electro-optic device including an organic chromophore-loaded modulation layer. | 05-06-2010 |
20100121016 | LOW REFRACTIVE INDEX HYBRID OPTICAL CLADDING AND ELECTRO-OPTIC DEVICES MADE THEREFROM - A low index of refraction hybrid optical cladding may be formed from a fluorinated sol-gel. An electro-optic device may include a poled organic chromophore-loaded modulation layer and at least one adjacent fluorinated hybrid sol-gel clad. | 05-13-2010 |
20120157584 | STABILIZED ELECTRO-OPTIC MATERIALS AND ELECTRO-OPTIC DEVICES MADE THEREFROM - According to an embodiment, an electro-optic polymer comprises a host polymer and a guest nonlinear optical chromophore having the structure D-π-A, wherein: D is a donor, π is a π-bridge, and A is an acceptor; a bulky substituent group is covalently attached to at least one of D, π, or A; and the bulky substituent group has at least one non-covalent interaction with part of the host polymer that impedes chromophore depoling. | 06-21-2012 |
20120163749 | INTEGRATED CIRCUIT WITH OPTICAL DATA COMMUNICATION - An integrated circuit is configured for optical communication via an optical polymer stack located on top of the integrated circuit. The optical polymer stack may include one or more electro-optic polymer devices including an electro-optic polymer. The electro-optic polymer may include a host polymer and a second order nonlinear chromophore, the host polymer and the chromophore both including aryl groups configured to interact with one another to provide enhanced thermal and/or temporal stability. | 06-28-2012 |
20130004137 | FLUORINATED SOL-GEL LOW REFRACTIVE INDEX HYBRID OPTICAL CLADDING AND ELECTRO-OPTIC DEVICES MADE THEREFROM - A low index of refraction hybrid optical cladding may be formed from a fluorinated sol-gel. An electro-optic device may include a poled organic chromophore-loaded modulation layer (electro-optic polymer) and at least one adjacent fluorinated hybrid sol-gel cladding layer. | 01-03-2013 |
Patent application number | Description | Published |
20130234067 | CHROMOPHORIC POLYMER DOTS - The present invention provides, among other aspects, stabilized chromophoric nanoparticles. In certain embodiments, the chromophoric nanoparticles provided herein are rationally functionalized with a pre-determined number of functional groups. In certain embodiments, the stable chromophoric nanoparticles provided herein are modified with a low density of functional groups. In yet other embodiments, the chromophoric nanoparticles provided herein are conjugated to one or more molecules. Also provided herein are methods for making rationally functionalized chromophoric nanoparticles. | 09-12-2013 |
20140302516 | POLYMER DOT COMPOSITIONS AND RELATED METHODS - Lyophilized polymer dot compositions are provided. Also disclosed are methods of making and using the lyophilized compositions and kits supplying the compositions. | 10-09-2014 |
20140350183 | CHROMOPHORIC POLYMER DOTS WITH NARROW-BAND EMISSION - Polymers, monomers, chromophoric polymer dots and related methods are provided. Highly fluorescent chromophoric polymer dots with narrow-band emissions are provided. Methods for synthesizing the chromophoric polymers, preparation methods for forming the chromophoric polymer dots, and biological applications using the unique properties of narrow-band emissions are also provided. | 11-27-2014 |
20150268229 | METAL-CONTAINING SEMICONDUCTING POLYMER DOTS AND METHODS OF MAKING AND USING THE SAME - The present disclosure provides metal-containing (MC) semiconducting (SC) Pdots (MC-SC-Pdots) with beneficial functionalities in both cellular imaging and manipulation, among other applications. The Pdots comprise at least one nanoparticle comprising at least one metal, and a semiconducting polymer associated with the nanoparticle. | 09-24-2015 |
20160131659 | FLUORINATED POLYMER DOTS - This disclosure provides semiconducting polymer dots (Pdots) for use in a wide variety of applications. In particular, this disclosure provides Pdots that are halogenated, including fluorinated Pdots. This disclosure also provides methods for synthesizing Pdots and methods for using Pdots, such as for biological imaging. | 05-12-2016 |
Patent application number | Description | Published |
20140195632 | IMMUTABLE SHARABLE ZERO-COPY DATA AND STREAMING - The environment and use of an immutable buffer. A computing entity acquires data or generates data and populates the data into the buffer, after which the buffer is classified as immutable. The classification protects the data populated within the immutable buffer from changing during the lifetime of the immutable buffer, and also protects the immutable buffer from having its physical address changed during the lifetime of the immutable buffer. As different computing entities consume data from the immutable buffer, they do so through views provided by a view providing entity. The immutable buffer architecture may also be used for streaming data in which each component of the streaming data uses an immutable buffer. Accordingly, different computing entities may view the immutable data differently without having to actually copy the data. | 07-10-2014 |
20140195739 | ZERO-COPY CACHING - Caching of an immutable buffer that has its data and address prevented from changing during the lifetime of the immutable buffer. A first computing entity maintains a cache of the immutable buffer and has a strong reference to the immutable buffer. So long as any entity has a strong reference to the immutable buffer, the immutable buffer is guaranteed to continue to exist for the duration of the strong reference. A second computing entity communicates with the first computing entity to obtain a strong reference to the immutable buffer and thereafter read data from the immutable buffer. Upon reading the data from the cache, the second computing entity demotes the strong reference to a weak reference to the immutable buffer. A weak reference to the immutable buffer does not guarantee that the immutable buffer will continue to exist for the duration of the weak reference. | 07-10-2014 |
20140195746 | DMA CHANNELS - Communicating between an application and a hardware device. A method includes an application writing data to host physical memory using an application view of the memory. The method further includes mapping the data in the physical memory to a hardware driver view, usable by a hardware driver, without needing to copy the data to a different physical storage location. The method further includes mapping the data to a hardware accessible view accessible by a hardware device without needing to copy the data to a different physical storage location. | 07-10-2014 |
20140195834 | HIGH THROUGHPUT LOW LATENCY USER MODE DRIVERS IMPLEMENTED IN MANAGED CODE - Implementing a safe driver that can support high throughput and low latency devices. The method includes receiving a hardware message from a hardware device. The method further includes delivering the hardware message to one or more driver processes executing in user mode using a zero-copy to allow the one or more driver processes to support high throughput and low latency hardware devices. | 07-10-2014 |
20140195862 | SOFTWARE SYSTEMS BY MINIMIZING ERROR RECOVERY LOGIC - Handling errors in program execution. The method includes identifying a set including a plurality of explicitly identified failure conditions. The method further includes determining that one or more of the explicitly identified failure conditions has occurred. As a result, the method further includes halting a predetermined first execution scope of computing, and notifying another scope of computing of the failure condition. An alternative embodiment may be practiced in a computing environment, and includes a method of handling errors. The method includes identifying a set including a plurality of explicitly identified failure conditions. The method further includes determining that an error condition has occurred that is not in the set including a plurality of explicitly identified failure conditions. As a result, the method further includes halting a predetermined first execution scope of computing, and notifying another scope of computing of the failure condition. | 07-10-2014 |
20140196004 | SOFTWARE INTERFACE FOR A HARDWARE DEVICE - Automatically generating code used with device drivers for interfacing with hardware. The method includes receiving a machine readable description of a hardware device, including at least one of hardware registers or shared memory structures of the hardware device. The method further includes determining an operating system with which the hardware device is to be used. The method further includes processing the machine readable description on a code generation tool to automatically generate code for a hardware driver for the hardware device specific to the determined operating system. | 07-10-2014 |
20140196059 | CAPABILITY BASED DEVICE DRIVER FRAMEWORK - Enforcing limitations on hardware drivers. The method includes, from a system kernel, assigning I/O resources to the system's root bus. From the root bus, the method further includes assigning a subset of the I/O resources to a device bus. Assigning a subset of the I/O resources to a device bus includes limiting the device bus so that it can only assign I/O resources that were assigned to it by the root bus. From the device bus, the method includes assigning I/O resources to a device through a device interface. | 07-10-2014 |
20150089471 | INPUT FILTERS AND FILTER-DRIVEN INPUT PROCESSING - Input filters correlate to target components. For a given target component, the input filter defines input validation information. The input filter might also define conversions or transformations to be applied to valid input prior to being provided to the target component. At build time, code is accessed that contains the input validation, conversion and transformation and that identifies the associated target component. The information is then used to construct an input filter. At run time, when an input processing component receives an input, the input processing component identifies the target component, accesses the associated input filter, and uses the information contained in the input filter to determine whether the input is valid, and whether and how to convert and transform the value. | 03-26-2015 |
20150100947 | BUILD-TIME RESOLVING AND TYPE CHECKING REFERENCES - Build-time resolution and type-enforcing of corresponding references in different code that references the same value. In response to detecting a directive within the code itself that a first reference in first code is to be correlated with a second reference in second code, and in response to detection that the types of the references are the same, a code generation tool generates correlation code that is interpretable to a compiler as allowing a value of a type of the first reference of a compiled-form of the first code to be passed as the same value of the same type of the second reference of a compiled-form of the second code. The first code, the second code, and the generated correlation code may then be compiled. If compilation is successful, this means that the first and second references are already properly resolved as referring to the same value and type-enforced. | 04-09-2015 |
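The strong/weak reference protocol described in 20140195739 above can be sketched in a few lines. This is a minimal Python illustration under stated assumptions: the `ImmutableBuffer` and `BufferCache` names are hypothetical, and Python's `weakref` machinery stands in for whatever reference-counting mechanism the patent contemplates.

```python
import weakref

class ImmutableBuffer:
    """Holds bytes that never change after construction (hypothetical name)."""
    def __init__(self, data: bytes):
        self._data = bytes(data)  # defensive copy; contents are fixed afterwards

    @property
    def data(self) -> bytes:
        return self._data

class BufferCache:
    """Caches immutable buffers by key, keeping only weak references so that
    a buffer survives exactly as long as someone holds a strong reference."""
    def __init__(self):
        self._entries = weakref.WeakValueDictionary()

    def put(self, key, buf: ImmutableBuffer):
        self._entries[key] = buf

    def get(self, key):
        # Returns a (promoted) strong reference if the buffer still exists.
        return self._entries.get(key)

cache = BufferCache()
buf = ImmutableBuffer(b"payload")      # producer holds a strong reference
cache.put("msg-1", buf)

reader = cache.get("msg-1")            # consumer promotes to a strong reference
assert reader.data == b"payload"       # read without copying the bytes
reader = None                          # demote: drop the strong reference

buf = None                             # last strong reference gone, so...
# (on CPython, refcounting clears the weak entry immediately)
assert cache.get("msg-1") is None      # ...the cached buffer no longer exists
```

Note the design point from the abstract: the cache alone never keeps a buffer alive; only outstanding strong references do.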
20090017427 | Intelligent Math Problem Generation - A problem generator that takes a math problem as input, analyzes it, and intelligently spawns similar example problem types. The output is a set of math problems based on the conditions set during analysis and customization. For example, if the original problem deals with linear equations, this will be detected during analysis and used to spawn other linear equations as problems. Moreover, if the answer to the original problem is in integer format, so will be the answers to the spawned problems. A customizable UI is designed to allow further customization of problem conditions to generate an accurate set of problems based on the initial input. Problem generator templates can be created, shared and modified for distribution and/or future use. Additionally, problem generation APIs can be extended for external code to automate and consume generated math problems. | 01-15-2009 |
20090018979 | MATH PROBLEM CHECKER - A problem checker architecture that monitors user progress during a problem-solving process and assists the user through the process (e.g., when requested) using common human methods of solving the problem. Assistance can be in the form of detecting errors during the process, and providing context-sensitive help information when the user gets stuck or makes a mistake. The problem checker can walk the user through the process of solving a math problem one step at a time allowing the user to learn to solve math problems according to a number of different methods. Rather than simply calculating and displaying the answer, the problem checker allows the user to attempt to solve math problems, providing direction only when asked and correction only when required. The problem checker can recognize multiple solution methods for many common math problems and guide the user to the solution via any of the methods. | 01-15-2009 |
20090019099 | MATH CALCULATION IN WORD PROCESSORS - Architecture for a word processing application that facilitates operating on mathematical symbols, expressions, and/or equations input to a word processing document, and returning results back to the document. User input to the document in the form of math symbols, expressions or equations is transformed into a format for processing by a math engine. The engine returns one or more operations to the user that can be performed on the input, including calculating mathematical solutions, graphing equations and viewing steps to solving math problems. A user interface allows the user to choose from the possible operations and to interactively manipulate input and graphs in the word application. The results can be inserted directly into the document and also be graded automatically. | 01-15-2009 |
20090024366 | COMPUTERIZED PROGRESSIVE PARSING OF MATHEMATICAL EXPRESSIONS - Systems and methods for progressively parsing user input of a mathematical expression are provided. One disclosed method includes looping through characters in an input string, and on each loop, extracting a next token from the input string and determining a current grammar context based on the token or tokens extracted thus far. If it is determined that the current grammar context matches a predetermined condition, then the method may include modifying the tokens extracted from the input string in a predetermined manner associated with the predetermined condition. A parsing result may be obtained based on the modified tokens. The parsing result may be converted to a modified input string. | 01-22-2009 |
20090027393 | IDENTIFYING ASYMPTOTES IN APPROXIMATED CURVES AND SURFACES - Systems and methods for identifying asymptotes in approximated geometric forms are provided. One disclosed method includes identifying a set of data points that represent an approximated geometric form. The data points may be organized into segments. The method may further include determining a visible range of the geometric form to display. The method may further include looping through successive segments of the approximated geometric form, and on each loop, for a current segment, making a decision whether to draw the current segment based upon a prediction of whether the current segment traverses an asymptote within the visible range. The method may further include displaying, on a graphical user interface of a computing device, a graph of the segments of the geometric form in the visible range, the graph not including those segments that were decided not to be drawn. | 01-29-2009 |
20090052777 | USING HANDWRITING RECOGNITION IN COMPUTER ALGEBRA - Systems and methods for use in handwriting recognition in computer algebra are provided. One disclosed method includes receiving handwriting input from a user via a handwriting input device, the handwriting input representing a mathematical expression. The method further includes, at a recognizer, processing the handwriting input to recognize a plurality of candidates and ranking the plurality of candidates to form initial candidate data. The method may further include, at an application program, scanning the plurality of candidates for segments that match application-level criteria, and adjusting a rank of one or more of the plurality of candidates based on the matching, to form a processed candidate list. The method may further include displaying the processed candidate list via a graphical user interface. | 02-26-2009 |
20090328058 | PROTECTED MODE SCHEDULING OF OPERATIONS - The present invention extends to methods, systems, and computer program products for protected mode scheduling of operations. Protected mode (e.g., user mode) scheduling can facilitate the development of programming frameworks that better reflect the requirements of the workloads through the use of workload-specific execution abstractions. In addition, the ability to define scheduling policies tuned to the characteristics of the hardware resources available and the workload requirements has the potential of better system scaling characteristics. Further, protected mode scheduling decentralizes the scheduling responsibility by moving significant portions of scheduling functionality from supervisor mode (e.g., kernel mode) to an application. | 12-31-2009 |
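The token-by-token loop described in 20090024366 above (extract a token, check the current grammar context, and patch the token stream when a predetermined condition matches) can be sketched briefly. In this minimal Python illustration the chosen condition, inserting an omitted multiplication sign between adjacent values, is an assumption for demonstration, not the patent's specific rule set.

```python
import re

# One token per match: a number, a run of letters, or a single operator/paren.
TOKEN = re.compile(r"\d+\.?\d*|[A-Za-z]+|[+\-*/()^]")

def _implicit_mul(prev: str, cur: str) -> bool:
    """Grammar-context check: does a value directly precede another value?"""
    prev_is_value = prev[0].isdigit() or prev.isalpha() or prev == ")"
    cur_starts_value = cur[0].isalpha() or cur == "("
    return prev_is_value and cur_starts_value

def progressive_parse(expr: str):
    """Loop over the input, extracting the next token on each iteration and
    modifying the tokens extracted so far when the context condition matches."""
    tokens = []
    for tok in TOKEN.findall(expr):
        if tokens and _implicit_mul(tokens[-1], tok):
            tokens.append("*")   # modify the stream: insert the implied '*'
        tokens.append(tok)
    return tokens

print(progressive_parse("2x + 3(y+1)"))
# ['2', '*', 'x', '+', '3', '*', '(', 'y', '+', '1', ')']
```

A real grammar would need more context (e.g., to avoid rewriting function application like `sin(`); the point here is only the progressive extract-check-modify loop.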
20130262203 | LOCATION-BASED TASK AND GAME FUNCTIONALITY - Techniques are described for providing functionality and information to users, including providing promotional information and opportunities to users of mobile devices in manners that are based at least in part on activities and locations of the users (e.g., based on games played by the users on their mobile devices and/or based on user satisfaction of system-directed tasks associated with offers or other activities). At least some of the promotional information and opportunities may be made available by various companies or entities that provide products and/or services (e.g., retailers, merchants, wholesalers, distributors, etc.) and/or by various companies or entities that provide advertising for available products and/or services. Various types of activities may be defined and used to provide promotional information and opportunities to users of mobile devices in particular embodiments and situations. | 10-03-2013 |
20150088624 | LOCATION-BASED TASK AND GAME FUNCTIONALITY - Techniques are described for providing functionality and information to users, including providing promotional information and opportunities to users of mobile devices in manners that are based at least in part on activities and locations of the users (e.g., based on games played by the users on their mobile devices and/or based on user satisfaction of system-directed tasks associated with offers or other activities). At least some of the promotional information and opportunities may be made available by various companies or entities that provide products and/or services (e.g., retailers, merchants, wholesalers, distributors, etc.) and/or by various companies or entities that provide advertising for available products and/or services. Various types of activities may be defined and used to provide promotional information and opportunities to users of mobile devices in particular embodiments and situations. | 03-26-2015 |
20110151863 | Automated Communications Device Field Testing, Performance Management, And Resource Allocation - Field testing, performance monitoring, and resource management are performed via a communications device, automatically and autonomously, without user intervention. Abnormal conditions are automatically detected while the communications device is performing a service and adjustments are automatically made to network resources in order to improve service performance. Upon initiation of a request for service (e.g., IM, MMS, SMS, etc.), the communications device automatically begins to monitor the performance of the service session (e.g., send time, receive time, etc.). During or after the service session, the communications device stores the performance data associated with the performance of the service. The performance data is analyzed in accordance with a subscriber's user profile information, to detect any problems with the service. If problems are detected, necessary adjustments and/or reallocation of resources are made automatically and autonomously, without user intervention. | 06-23-2011 |
20090001499 | THICK ACTIVE LAYER FOR MEMS DEVICE USING WAFER DISSOLVE PROCESS - Methods for producing MEMS (microelectromechanical systems) devices with a thick active layer and devices produced by the method. An example method includes heavily doping a first surface of a first silicon wafer with P-type impurities, and heavily doping a first surface of a second silicon wafer with N-type impurities. The heavily doped first surfaces are then bonded together, and a second side of the first wafer opposing the first side of the first wafer is thinned to a desired thickness, which may be greater than about 30 micrometers. The second side is then patterned and etched, and the etched surface is then heavily doped with P-type impurities. A cover is then bonded to the second side of the first wafer, and the second wafer is thinned. | 01-01-2009 |
20090176370 | SINGLE SOI WAFER ACCELEROMETER FABRICATION PROCESS - Methods for producing a MEMS device from a single silicon-on-insulator (SOI) wafer. An SOI wafer includes a silicon (Si) handle layer, a Si mechanism layer and an insulator layer located between the Si handle and Si mechanism layers. An example method includes etching active components from the Si mechanism layer. Then, the exposed surfaces of the Si mechanism layer are doped with boron. Next, portions of the insulator layer proximate to the etched active components of the Si mechanism layer are removed and the Si handle layer is etched proximate to the etched active components. | 07-09-2009 |
20090283917 | SYSTEMS AND METHODS FOR VERTICAL STACKED SEMICONDUCTOR DEVICES - Systems and methods fabricate a vertically stacked multi-chip semiconductor device assembly. An exemplary assembly is fabricated by forming a first semiconductor device in a first semiconductor device layer with a first connector located at a first surface of the first semiconductor device layer; forming a second semiconductor device in a second semiconductor device layer with a second connector located at an interior surface of the second semiconductor device layer; forming a via in the first semiconductor device layer extending from the first surface to an opposing second surface of the first semiconductor device layer corresponding to the location of the second connector; and joining the second surface of the first semiconductor device layer and the interior surface of the second semiconductor device layer, wherein the via at the second surface of the first semiconductor device layer is coupled to the second connector of the second semiconductor device. | 11-19-2009 |
20100233882 | SINGLE SILICON-ON-INSULATOR (SOI) WAFER ACCELEROMETER FABRICATION - Methods for creating at least one micro-electromechanical (MEMS) structure in a silicon-on-insulator (SOI) wafer. The SOI wafer with an extra layer of oxide is etched according to a predefined pattern. A layer of oxide is deposited over exposed surfaces. An etchant selectively removes the oxide to expose the SOI wafer substrate. A portion of the SOI substrate under at least one MEMS structure is removed, thereby releasing the MEMS structure to be used in the formation of an accelerometer. | 09-16-2010 |
20110300658 | METHODS OF CREATING A MICRO ELECTRO-MECHANICAL SYSTEMS ACCELEROMETER USING A SINGLE DOUBLE SILICON-ON-INSULATOR WAFER - Methods for creating a microelectromechanical systems (MEMS) device using a single double silicon-on-insulator (SOI) wafer. The double SOI wafer includes at least a base layer of silicon, a first layer of silicon, and a second layer of silicon, the layers of silicon being separated by an oxide layer. A stationary electrode with rigid support beams is formed into the second layer of silicon. A proof mass and at least one spring are formed into the first layer of silicon. The proof mass is separated from the stationary electrode by a first gap and the proof mass is separated from the base silicon layer by a second gap. | 12-08-2011 |
20100153559 | Method and Apparatus for Suspending Network Based Services - Disclosed is a method for de-registering user equipment from a network in response to notification that the user equipment will cease using network services. An Interruption Service Manager (ISM) notifies the network to suspend services as the IMS network conditions or the user equipment conditions change. The ISM may also notify the user equipment to re-register onto the IMS core network when the connection is re-established. | 06-17-2010 |
20130110942 | Intelligent Message Routing And Delivery In A Telecommunications Network | 05-02-2013 |
20140108978 | System and Method For Arranging Application Icons Of A User Interface On An Event-Triggered Basis - A device, tangible computer readable storage medium and method for detecting a weight adjustment event, selecting one displayable object from a plurality of displayable objects based on the weight adjustment event, adjusting a current weight associated with the one displayable object to a new weight and determining a location on a display screen for the one displayable object based on the new weight. | 04-17-2014 |
20150032833 | Intelligent Message Routing and Delivery in a Telecommunications Network - Messages directed to a mobile device are selectively routed to message servers based upon the capabilities of a network to which the mobile device is connected. According to an illustrative method disclosed herein, a network connectivity server receives a network identifier from the mobile device, the network connectivity server receives a request for routing instructions for the message, and the network connectivity server determines if the network identified by the network identifier is capable of delivering the message. The network connectivity server then instructs the message server to route the message according to a standard delivery method for a message type of the message or to a message conversion server computer based upon the determination. The message conversion server computer converts the message into a new message type that the network is capable of delivering to the mobile device. | 01-29-2015 |
20150070479 | Obstacle Avoidance Using Mobile Devices - Methods, systems, and products estimate distances to aid a visually impaired user of a mobile device. As the user carries the mobile device, a camera in the mobile device captures images of a walking cane. The images of the walking cane are analyzed to infer a distance between a tip of the walking cane and the mobile device. The distance may then be used by navigational tools to aid the visually impaired user. | 03-12-2015 |
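The distance inference in 20150070479 above can be illustrated with the standard pinhole-camera relation: an object of known physical size that spans a measured number of pixels reveals its range. This is a minimal sketch under an assumed model; the patent does not specify this particular formula, and the numbers are illustrative only.

```python
def estimate_distance(real_height_m: float,
                      pixel_height: float,
                      focal_length_px: float) -> float:
    """Pinhole-camera estimate: an object `real_height_m` metres tall that
    spans `pixel_height` pixels, imaged with a focal length of
    `focal_length_px` pixels, is approximately this many metres away."""
    return real_height_m * focal_length_px / pixel_height

# Illustrative numbers: a 1.2 m cane spanning 400 px with an 800 px
# focal length works out to roughly 2.4 m of range.
print(round(estimate_distance(1.2, 400, 800), 2))  # 2.4
```

In practice the cane's apparent height in pixels would come from an image-analysis step (edge or marker detection), and the focal length from camera calibration.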
20130276086 | PEER APPLICATIONS TRUST CENTER - Concepts and technologies are disclosed herein for a peer applications trust center. A trust client can execute on a client computer and a trust service can execute on a server computer to provide the peer applications trust center. The trust client or trust server can register applications. During registration, the trust server or the trust client can generate a public key or other identifier for identifying the registered application. If another application requests access to the registered application, the trust server or the trust client can determine if the request specifies a registered application by name. If the requestor is granted access to the application, the requestor can be issued a token. Tokens can be revoked, updated, replaced, or renewed for various purposes. | 10-17-2013 |
20150195271 | Peer Applications Trust Center - Concepts and technologies are disclosed herein for a peer applications trust center. A trust client can execute on a client computer and a trust service can execute on a server computer to provide the peer applications trust center. The trust client or trust server can register applications. During registration, the trust server or the trust client can generate a public key or other identifier for identifying the registered application. If another application requests access to the registered application, the trust server or the trust client can determine if the request specifies a registered application by name. If the requestor is granted access to the application, the requestor can be issued a token. Tokens can be revoked, updated, replaced, or renewed for various purposes. | 07-09-2015 |
20080282017 | Serial Peripheral Interface Switch - An SPI switch allows selection of a BIOS memory transparent to a Southbridge chipset component. The SPI switch provides address translation to a selected BIOS memory area under the control of a security module processor. The SPI switch also provides command filtering to prevent commands that represent a security risk such as bulk erase commands. Because the SPI switch allows transparent redirection between BIOS programs, booting in different operating modes may be supported without any changes to the basic computer architecture or major chipset components. | 11-13-2008 |
20100037325 | Enhanced Packaging for PC Security - A pay-per-use computer, or other electronic device that uses local security, may use a security module or other circuit for monitoring and enforcement of a usage policy. To help prevent physical attacks on the security module, or the circuit board near the security module, a second circuit may be mounted over the security module to help prevent access to the security module. Both circuits may be mounted on an interposer and the interposer mounted to the circuit board, creating a stack including the first circuit, the interposer, the security module, and a main PC board. When the PC board includes dense signal traces under the security module, a three-dimensional envelope is created around the security module. When the first circuit is a high value circuit, such as a Northbridge, the risk/reward of attacking the security module is increased substantially and may deter all but the most determined hackers. | 02-11-2010 |
20110202541 | RAPID UPDATE OF INDEX METADATA - Systems and methods for performing an updating process to an in-memory index are provided. Upon receiving notice of document modifications covered by an inverted index associated with a search engine, in the form of an update file, a representation of the modification is published onto various index serving machines. Each index serving machine receiving the update file determines if the modifications are applicable to the index serving machine. If an index serving machine determines that it contains mapping information corresponding to the modified documents, the index serving machine utilizes the update file and associated mapping information to update an in-memory index. In embodiments, the in-memory index is used to provide results to user queries in tandem with the inverted index. In some embodiments, an extra in-memory index is maintained that is revised with constantly incoming metadata updates and the existing in-memory index is periodically swapped with the revised in-memory index. | 08-18-2011 |
20110258198 | USING BEHAVIOR DATA TO QUICKLY IMPROVE SEARCH RANKING - Systems and methods for applying user behavior data to improve search query result ranking are provided. Upon receiving an update file indicating that recent, significant user behavior data is available for a document associated with an inverted index, the update file is published periodically and frequently to an index server. After filtering out the relevant update information from the update file, the index server extracts identifiers of the documents having the associated user behavior data. The update file and the identifiers of the documents are utilized to update an in-memory index containing representations of metadata indicative of the user behavior. The in-memory index is continuously updated and utilized to serve search query results in response to user search queries. Search query results from the in-memory index are ranked using the user behavior data prior to serving. Thus, results associated with recent, significant user-behavior metadata receive prominent placement on the search results page. | 10-20-2011 |
20110295844 | ENHANCING FRESHNESS OF SEARCH RESULTS - Methods, systems, and computer-storage media for improving the freshness, or the apparent freshness, of search results are described. In an embodiment, the first portion of search results presented on a search results page are based on responsiveness to the search query and a second portion of results describe only recently published documents that are responsive to the search query. In an embodiment, a more recent version of the document, which is not directly used to determine responsiveness, is used to build the caption for a search result. Another way to make search results appear fresh is to include a publication time within the search result caption. In one embodiment, the publication time is generated by calculating a point in time between when a document is first added to a search index and the previous time the search engine visited the site where the document was found. | 12-01-2011 |
20120016864 | HIERARCHICAL MERGING FOR OPTIMIZED INDEX - Methods, systems, and media are provided for an optimized search engine index. The optimized index is formed by merging small lower level indexes of fresh documents together into a hierarchical cluster of multiple higher level indexes. The optimized index of fresh documents is formed via a single threaded process, while a fresh index serving platform concurrently serves fresh queries. The hierarchy of higher level indexes is formed by merging lower and/or higher level indexes with similar expiration times together. Therefore, as some indexes expire, the remaining un-expired indexes can be re-used and merged with new incoming indexes. The single threaded process provides fast serving of fresh documents, while also providing time to integrate the fresh indexes into a long term primary search engine index, prior to expiring. | 01-19-2012 |
20120023093 | EXTRACTION OF RICH SEARCH INFORMATION FROM INDEX SERVERS VIA AN ALTERNATIVE ASYNCHRONOUS DATA PATH - A search engine system is described herein that provides an alternative data path for collecting results provided by index servers. The alternative data path collects the results in a direct and asynchronous manner; this is in contrast to a synchronous path used to deliver search results to end users via one or more aggregator modules. An analysis system can use the alternative data path to collect a large amount of richly descriptive information regarding the performance of the search engine system, circumventing bottlenecks and other constraints that would otherwise be imposed by the synchronous data path. The analysis system can analyze the information collected from the index servers to improve the performance of the search engine system. | 01-26-2012 |
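Two recurring ideas in the index patents above (the swap of a live in-memory index for a revised copy in 20110202541, and the merge of small fresh indexes into larger ones in 20120016864) can be sketched together. This is a minimal Python illustration; the `InMemoryIndex` class and its set-based posting lists are hypothetical simplifications, not the patented data structures.

```python
from collections import defaultdict

class InMemoryIndex:
    """Minimal inverted index: term -> set of document ids (illustrative)."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add_document(self, doc_id, text):
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def query(self, term):
        return sorted(self.postings.get(term.lower(), ()))

def merge(indexes):
    """Merge several small fresh indexes into one larger index, in the spirit
    of the hierarchical-merge scheme described above."""
    merged = InMemoryIndex()
    for idx in indexes:
        for term, docs in idx.postings.items():
            merged.postings[term] |= docs
    return merged

live = InMemoryIndex()                 # currently serving queries
live.add_document(1, "fresh search results")

shadow = InMemoryIndex()               # revised copy receiving new updates
shadow.add_document(1, "fresh search results")
shadow.add_document(2, "fresher results arrive")

live = shadow                          # periodic swap: revised index goes live
print(live.query("results"))           # [1, 2]
```

A production system would additionally track per-index expiration times so that merges group indexes that expire together, letting unexpired indexes be reused, but the swap-and-merge skeleton is as above.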