Dotsenko
Alexey Andreevich Dotsenko, Moscow RU
Patent application number | Description | Published |
---|---|---|
20150339387 | METHOD OF AND SYSTEM FOR FURNISHING A USER OF A CLIENT DEVICE WITH A NETWORK RESOURCE - Furnishing a user of a client device, having a user interface with a display displaying a search bar, with a network resource, comprising: receiving, by a server from the client device, a portion of a search term having been entered in the search bar; sending, by the server to the client device, identification of a network resource associated with the portion of the search term; receiving, by the server from the client device, a request to furnish the client device with the network resource associated with the portion of the search term; and sending, by the server to the client device, the network resource associated with the portion of the search term; all prior to the user having requested a search in respect of the portion of the search term. | 11-26-2015 |
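The four-step exchange in the abstract can be sketched as two server-side lookups: one mapping a partial search term to a likely resource, and one delivering that resource when the client asks for it, all before any search is submitted. The names, data, and dictionary-based "server" below are purely illustrative assumptions, not the patented implementation.

```python
# Hypothetical server-side tables: partial term -> likely resource id,
# and resource id -> resource content.
SUGGESTIONS = {
    "wea": "https://example.com/weather",
    "new": "https://example.com/news",
}

RESOURCES = {
    "https://example.com/weather": "<html>weather page</html>",
    "https://example.com/news": "<html>news page</html>",
}

def identify_resource(partial_term: str):
    """Steps 1-2: receive a portion of a search term, send back a resource id."""
    return SUGGESTIONS.get(partial_term)

def furnish_resource(resource_id: str):
    """Steps 3-4: client requests the identified resource; server sends it."""
    return RESOURCES.get(resource_id)

# The prefetch completes before the user ever requests the search itself.
rid = identify_resource("wea")
page = furnish_resource(rid)
```

The point of the claimed ordering is latency hiding: by the time the user actually presses "search", the resource is already on the client.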
Ivan Petrovich Dotsenko, Moscow RU
Patent application number | Description | Published |
---|---|---|
20090041816 | Coated particles - A coated particle is disclosed comprising a core, a first innermost layer coating the core, a second middle layer coating the first innermost layer, and a third outermost layer coating the second middle layer, wherein the core comprises at least a water-in-oil emulsion or a fat and/or oil, wherein the first innermost layer comprises at least one emulsifier, wherein the second middle layer comprises either one or more polyanions or one or more polycations, and wherein the third outermost layer comprises one or more polyelectrolytes only of opposing charge to that of the polyanions or polycations of the second middle layer. The inventors have observed that the particle has improved stability, particularly when the particles are small, as the forces driving small particles to aggregate are greater, and that it simultaneously provides a delivery vehicle for included actives and/or flavours. A method for the manufacture of the particle is also disclosed, as is a product selected from the group consisting of a food product, a home care product, a personal care product and a pharmaceutical product, wherein each product comprises a plurality of the coated particles. | 02-12-2009 |
Svetlana I. Dotsenko, Beverly, MA US
Patent application number | Description | Published |
---|---|---|
20130013328 | SYSTEMS, METHODS, AND DEVICES FOR AN ARCHITECTURE TO SUPPORT SECURE MASSIVELY SCALABLE APPLICATIONS HOSTED IN THE CLOUD AND SUPPORTED AND USER INTERFACES - The present invention describes an architecture for hosting and managing disparate, connected applications in a cloud environment. In addition to all of the traditional advantages of the cloud environment (e.g., the economies of renting vs. buying, and scalability), this invention allows for management, security, data exchange, authentication, predictive performance and resource integrity; it enables business opportunities and models that heretofore could not have been realized. A specific example is providing, on a global scale, an intelligent platform for managing a citizen's health and health care; this patent covers the enabling technologies, the enabling business models, and user interfaces. | 01-10-2013 |
20130067582 | SYSTEMS, METHODS AND DEVICES FOR PROVIDING DEVICE AUTHENTICATION, MITIGATION AND RISK ANALYSIS IN THE INTERNET AND CLOUD - The present invention is a method to provide mechanisms and judgment to determine the ongoing veracity of "purported" devices (impersonation is sometimes called spoofing) using such parameters as unique device ID, access history, paths taken, and other environmental data (Device Authentication). | 03-14-2013 |
Yuri Dotsenko, Redmond, WA US
Patent application number | Description | Published |
---|---|---|
20100076941 | MATRIX-BASED SCANS ON PARALLEL PROCESSORS - A system and method for performing a scan of an input sequence in a parallel processor having a shared register file. A two dimensional matrix is generated, having a number of rows representing a number of threads and a number of columns based on the input sequence block size and the number of rows. One or more padding columns may be added to the matrix to avoid or reduce memory bank conflicts. A first traversal of the rows performs a reduction or a scan of each of the rows in parallel, storing the reduction values. The reduction values are used during a second traversal to propagate the reduction values. In a segmented scan, propagation is selectively performed based on flags representing segment boundaries. | 03-25-2010 |
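The two-traversal structure in the abstract (per-row scan, then propagation of per-row reductions) can be illustrated with a sequential CPU sketch. This is an assumed simplification: each matrix row stands in for one thread, and the bank-conflict padding columns are omitted because they only matter for GPU shared memory.

```python
def matrix_scan(data, rows):
    """Exclusive prefix sum via the matrix layout described in the abstract."""
    cols = -(-len(data) // rows)                       # ceil division
    padded = data + [0] * (rows * cols - len(data))    # pad to a full matrix
    matrix = [padded[r * cols:(r + 1) * cols] for r in range(rows)]

    # First traversal: scan each row independently (one "thread" per row),
    # storing each row's reduction value.
    reductions = []
    for row in matrix:
        total = 0
        for c in range(cols):
            row[c], total = total, total + row[c]
        reductions.append(total)

    # Scan the reduction values to get each row's starting offset.
    offsets, offset = [], 0
    for red in reductions:
        offsets.append(offset)
        offset += red

    # Second traversal: propagate each row's offset into its elements.
    for r in range(rows):
        matrix[r] = [v + offsets[r] for v in matrix[r]]

    flat = [v for row in matrix for v in row]
    return flat[:len(data)]
```

On a real parallel processor both row loops run concurrently across threads; only the small scan over the reduction values is a serialization point.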
20100106758 | COMPUTING DISCRETE FOURIER TRANSFORMS - A system described herein includes a selector component that receives input data that is desirably transformed by way of a Discrete Fourier Transform, wherein the selector component selects one of a plurality of algorithms for computing the Discrete Fourier Transform from a library based at least in part upon a size of the input data. An evaluator component executes the selected one of the plurality of algorithms to compute the Discrete Fourier Transform, wherein the evaluator component leverages shared memory of a processor to compute the Discrete Fourier Transform. | 04-29-2010 |
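The selector/evaluator split can be sketched with a tiny two-entry "library": a radix-2 FFT for power-of-two sizes and a direct O(n²) DFT as the fallback. This is a hedged illustration of size-based algorithm selection only; the actual library contents and the GPU shared-memory usage are not modeled.

```python
import cmath

def dft_direct(x):
    """Direct O(n^2) DFT; works for any input size."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft_radix2(x):
    """Recursive radix-2 FFT; requires a power-of-two input size."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

def select_dft(size):
    """Selector component: pick an algorithm based on the input size."""
    is_pow2 = size > 0 and size & (size - 1) == 0
    return fft_radix2 if is_pow2 else dft_direct
```

Usage mirrors the abstract: the selector inspects the size, the evaluator then runs whichever callable came back.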
Yuri Dotsenko, Kirkland, WA US
Patent application number | Description | Published |
---|---|---|
20130215117 | RASTERIZATION OF COMPUTE SHADERS - Described are compiler algorithms that partition a compute shader program into maximal-size regions, called thread-loops. The algorithms may remove the original barrier-based synchronization, yet the thus-transformed shader program remains semantically equivalent to the original shader program (i.e., the transformed shader program is correct). Moreover, the transformed shader program is amenable to optimization via existing compiler technology, and can be executed efficiently by CPU thread(s). A Dispatch call can be load-balanced on a CPU by assigning single or multiple CPU threads to execute thread blocks. In addition, the number of concurrently executing thread blocks does not overload the CPU. | 08-22-2013 |
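The core thread-loop idea can be shown with a toy example (hypothetical code, not the patented compiler): a shader with one barrier is split at the barrier into two regions, and each region becomes a sequential loop over thread indices. The barrier can be removed because the second loop cannot start until the first has finished for every thread.

```python
THREADS = 4  # assumed thread-block size for illustration

def run_thread_block(data):
    """Execute a barrier-split compute shader as two thread-loops on a CPU."""
    shared = [0] * THREADS

    # Thread-loop for the region before the (removed) barrier.
    for tid in range(THREADS):
        shared[tid] = data[tid] * 2

    # Original barrier went here; the loop boundary already guarantees that
    # every thread's write to `shared` has completed.

    # Thread-loop for the region after the barrier: each "thread" reads a
    # neighbour's write, which is only safe because of the split.
    out = [0] * THREADS
    for tid in range(THREADS):
        out[tid] = shared[(tid + 1) % THREADS]
    return out
```

This is exactly the semantic-equivalence claim in miniature: lockstep-with-barrier and loop-then-loop produce the same result.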
20130219377 | SCALAR OPTIMIZATIONS FOR SHADERS - Described herein are optimizations of thread-loop intermediate representation (IR) code. One embodiment involves an algorithm that, based on data-flow analysis, computes sets of temporary variables that are loaded at the beginning of a thread loop and stored upon exit from a thread loop. Another embodiment involves reducing the trip count of a thread loop for a commonly-found case where a piece of a compute shader is executed by a single thread (or a compiler-analyzable range of threads). In yet another embodiment, compute shader thread indices are cached to avoid excessive divisions, further improving execution speed. | 08-22-2013 |
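The last optimization mentioned, caching thread indices to avoid repeated divisions, can be sketched directly: deriving a 2D index from a flat thread id costs a division and a modulo, so memoizing the result pays off when the index is used many times. The group width and function names are illustrative assumptions.

```python
import functools

WIDTH = 8  # hypothetical thread-group width

@functools.lru_cache(maxsize=None)
def thread_index_2d(flat_tid):
    """Return (x, y) for a flat thread id; the cache skips repeat div/mod."""
    return (flat_tid % WIDTH, flat_tid // WIDTH)
```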
20130219378 | VECTORIZATION OF SHADERS - Intermediate representation (IR) code is received as compiled from a shader in the form of shader language source code. The input IR code is first analyzed during an analysis pass, during which operations, scopes, parts of scopes, and if-statement scopes are annotated for predication, mask usage, and branch protection and predication. This analysis outputs vectorization information that is then used by various sets of vectorization transformation rules to vectorize the input IR code, thus producing vectorized output IR code. | 08-22-2013 |
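The predication that the analysis pass annotates for can be illustrated with a toy transformation (an assumed sketch, not the actual vectorization rules): a per-lane if-statement becomes "compute both branches, then blend with a mask".

```python
def scalar_kernel(x):
    # Original per-thread shader logic containing a branch.
    return x * 2 if x > 0 else -x

def vectorized_kernel(xs):
    """Branch-free, predicated form of scalar_kernel over a vector of lanes."""
    mask = [x > 0 for x in xs]           # the predicate, one bit per lane
    then_vals = [x * 2 for x in xs]      # both sides are evaluated...
    else_vals = [-x for x in xs]
    # ...and the mask selects each lane's live result.
    return [t if m else e for m, t, e in zip(mask, then_vals, else_vals)]
```

Evaluating both sides is what makes the branch-protection analysis in the abstract necessary: an operation that would fault on the dead side must be masked or protected.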
20140354658 | Shader Function Linking Graph - Methods, systems, and computer-storage media are provided for shader assembly and computation. Shader functions can be determined without specialization to a particular shader model or finalized resource bindings. Embodiments of the present invention facilitate final shader assembly and resource binding through linking before the shader is presented to a GPU driver. In this way, embodiments of the present invention alleviate combinatorial shader explosion and provide protection of intellectual property by not requiring distribution or generation of source code. | 12-04-2014 |
20150269767 | CONFIGURING RESOURCES USED BY A GRAPHICS PROCESSING UNIT - A resource used by a shader executed by a graphics processing unit is referenced using a "descriptor". Descriptors are grouped together in a region of memory called a descriptor heap. Applications allocate and store descriptors in descriptor heaps. Applications also create one or more descriptor tables specifying a subrange of a descriptor heap. To bind resources to a shader, descriptors are first loaded into a descriptor heap. When the resources are to be used by a set of executing shaders, descriptor tables are defined on the GPU identifying ranges within the descriptor heap. Shaders, when executing, refer to the currently defined descriptor tables to access the resources made available to them. If the shader is to be executed again with different resources, and if those resources are already in memory and specified in the descriptor heap, then the descriptor tables are changed to specify different ranges of the descriptor heap. | 09-24-2015 |
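The indirection described above can be modeled with plain data structures (hypothetical names, a rough analogy rather than the D3D API): the heap holds descriptors, a table is just a (start, count) range into the heap, and rebinding means retargeting the table rather than copying descriptors.

```python
# Descriptor heap: descriptors for resources already resident in memory.
heap = ["texA", "texB", "bufC", "texD"]

def make_table(start, count):
    """A descriptor table is only a subrange of the heap."""
    return {"start": start, "count": count}

def shader_fetch(table, slot):
    """A shader accesses resource `slot` through the currently bound table."""
    assert 0 <= slot < table["count"]
    return heap[table["start"] + slot]

table = make_table(0, 2)
first = shader_fetch(table, 0)    # resolves through the heap to "texA"

# Re-executing the shader with different resources: since the descriptors
# are already in the heap, only the table's range changes.
table = make_table(2, 2)
second = shader_fetch(table, 0)
```

The cheap rebinding in the last two lines is the payoff the abstract describes: no descriptors move, only the range the table points at.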