Patent application number | Description | Published |
20110314255 | MESSAGE BROADCAST WITH ROUTER BYPASSING - A processor and method for broadcasting data among a plurality of processing cores is disclosed. The processor includes a plurality of processing cores connected by point-to-point connections. A first of the processing cores includes a router that includes at least an allocation unit and an output port. The allocation unit is configured to determine that respective input buffers on at least two others of the processing cores are available to receive given data. The output port is usable by the router to send the given data across one of the point-to-point connections. The router is configured to send the given data contingent on determining that the respective input buffers are available. Furthermore, the processor is configured to deliver the data to the at least two other processing cores in response to the first processing core sending the data once across the point-to-point connection. | 12-22-2011 |
20120311228 | METHOD AND APPARATUS FOR PERFORMING MEMORY WEAR-LEVELING USING PASSIVE VARIABLE RESISTIVE MEMORY WRITE COUNTERS - Method and apparatus for performing wear-leveling using passive variable resistive memory (PVRM) based write counters are provided. In one example, a method for performing wear-leveling using PVRM based write counters is disclosed. The method includes associating a logical address of a memory array with a physical address of the memory array via at least one mapping table. Additionally, the method includes, in response to writing to the physical address of the memory array, incrementally updating at least one PVRM based write counter associated with the physical address of the memory array. The at least one PVRM based write counter may be incrementally updated by varying an amount of resistance stored in the at least one PVRM based write counter. | 12-06-2012 |
20130007373 | REGION BASED CACHE REPLACEMENT POLICY UTILIZING USAGE INFORMATION - A method, apparatus, and system for replacing at least one cache region selected from a plurality of cache regions, wherein each of the regions is composed of a plurality of blocks, is disclosed. The method includes applying a first algorithm to the plurality of cache regions to limit the number of potential candidate regions to a preset value, wherein the first algorithm assesses the ability of a region to be replaced based on properties of the plurality of blocks associated with that region; and designating at least one of the limited potential candidate regions as a victim based on region level information associated with each of the limited potential candidate regions. | 01-03-2013 |
20130054849 | UNIFORM MULTI-CHIP IDENTIFICATION AND ROUTING SYSTEM - Various methods, computer-readable mediums, articles of manufacture and systems are disclosed. In one aspect, a method is provided that includes generating a packet with a first semiconductor chip. The packet is destined to transit a first substrate and be received by a node of a second semiconductor chip. The packet includes a packet header and packet body. The packet header includes an identification of a first exit point from the first substrate and an identification of the node. The packet is sent to the first substrate and eventually to the node of the second semiconductor chip. | 02-28-2013 |
20130073811 | REGION PRIVATIZATION IN DIRECTORY-BASED CACHE COHERENCE - A system and method for region privatization in a directory-based cache coherence system is disclosed. The system and method includes receiving a request from a requesting node for at least one block in a region, allocating a new entry for the region based on the request for the block, requesting from the memory controller that the data for the region be sent to the requesting node, receiving a subsequent request for a block within the region, determining that any blocks of the region that are cached are also cached at the requesting node, and privatizing the region at the requesting node. | 03-21-2013 |
20130097385 | DUAL-GRANULARITY STATE TRACKING FOR DIRECTORY-BASED CACHE COHERENCE - A system and method of providing directory cache coherence are disclosed. The system and method may include tracking the coherence state of at least one cache block contained within a region using a global directory, providing at least one region level sharing information about the at least one cache block in the global directory, and providing at least one block level sharing information about the at least one cache block in the global directory. The tracking of the provided at least one region level sharing information and the provided at least one block level sharing information may organize the coherence state of the at least one cache block and the region. | 04-18-2013 |
20130159812 | MEMORY ARCHITECTURE FOR READ-MODIFY-WRITE OPERATIONS - According to one embodiment, a memory architecture implemented method is provided, where the memory architecture includes a logic chip and one or more memory chips on a single die, and where the method comprises: reading values of data from the one or more memory chips to the logic chip, where the one or more memory chips and the logic chip are on a single die; modifying, via the logic chip on the single die, the values of data; and writing, from the logic chip to the one or more memory chips, the modified values of data. | 06-20-2013 |
20130346058 | SIMULATING VECTOR EXECUTION - A system and method for simulating new instructions without compiler support for the new instructions. A simulator detects a given region in code generated by a compiler. The given region may be a candidate for vectorization or may be a region already vectorized. In response to the detection, the simulator suspends execution of a time-based simulation. The simulator then serially executes the region for at least two iterations using a functional-based simulation and using instructions with operands which correspond to P or fewer lanes of single-instruction-multiple-data (SIMD) execution. The value P is a maximum number of lanes of SIMD execution supported by the compiler. The simulator stores checkpoint state during the serial execution. In response to determining no inter-iteration memory dependencies exist, the simulator returns to the time-based simulation and resumes execution using N-wide vector instructions. | 12-26-2013 |
20140040698 | STACKED MEMORY DEVICE WITH METADATA MANAGEMENT - A processing system comprises one or more processor devices and other system components coupled to a stacked memory device having a set of stacked memory layers and a set of one or more logic layers. The set of logic layers implements a metadata manager that offloads metadata management from the other system components. The set of logic layers also includes a memory interface coupled to memory cell circuitry implemented in the set of stacked memory layers and coupleable to the devices external to the stacked memory device. The memory interface operates to perform memory accesses for the external devices and for the metadata manager. By virtue of the metadata manager's tight integration with the stacked memory layers, the metadata manager may perform certain memory-intensive metadata management operations more efficiently than could be performed by the external devices. | 02-06-2014 |
20140101405 | REDUCING COLD TLB MISSES IN A HETEROGENEOUS COMPUTING SYSTEM - Methods and apparatuses are provided for avoiding cold translation lookaside buffer (TLB) misses in a computer system. A typical system is configured as a heterogeneous computing system having at least one central processing unit (CPU) and one or more graphic processing units (GPUs) that share a common memory address space. Each processing unit (CPU and GPU) has an independent TLB. When offloading a task from a particular CPU to a particular GPU, translation information is sent along with the task assignment. The translation information allows the GPU to load the address translation data into the TLB associated with the one or more GPUs prior to executing the task. Preloading the TLB of the GPUs reduces or avoids cold TLB misses that could otherwise occur without the benefits offered by the present disclosure. | 04-10-2014 |
20140136870 | TRACKING MEMORY BANK UTILITY AND COST FOR INTELLIGENT SHUTDOWN DECISIONS - A device receives an indication that a memory bank is to be powered down, and determines, based on receiving the indication, shutdown scores corresponding to powered up memory banks. Each shutdown score is based on a shutdown metric associated with powering down a powered up memory bank. The device may power down a selected memory bank based on the shutdown scores. | 05-15-2014 |
20140136873 | TRACKING MEMORY BANK UTILITY AND COST FOR INTELLIGENT POWER UP DECISIONS - A device receives an indication that a memory bank is to be powered up, and determines, based on receiving the indication, power scores corresponding to powered down memory banks. Each power score corresponds to a power metric associated with powering up a powered down memory bank. The device powers up a selected memory bank based on the plurality of power scores. | 05-15-2014 |
20140143497 | STACK CACHE MANAGEMENT AND COHERENCE TECHNIQUES - A processor system presented here has a plurality of execution cores and a plurality of stack caches, wherein each of the stack caches is associated with a different one of the execution cores. A method of managing stack data for the processor system is presented here. The method maintains a stack cache manager for the plurality of execution cores. The stack cache manager includes entries for stack data accessed by the plurality of execution cores. The method processes, for a requesting execution core of the plurality of execution cores, a virtual address for requested stack data. The method continues by accessing the stack cache manager to search for an entry of the stack cache manager that includes the virtual address for requested stack data, and using information in the entry to retrieve the requested stack data. | 05-22-2014 |
20140149710 | CREATING SIMD EFFICIENT CODE BY TRANSFERRING REGISTER STATE THROUGH COMMON MEMORY - Methods, media, and computing systems are provided. The method includes, the media are configured for, and the computing system includes a processor with control logic for allocating memory for storing a plurality of local register states for work items to be executed in single instruction multiple data hardware and for repacking wavefronts that include work items associated with a program instruction responsive to a conditional statement. The repacking is configured to create repacked wavefronts that include at least one of a wavefront containing work items that all pass the conditional statement and a wavefront containing work items that all fail the conditional statement. | 05-29-2014 |
20140156941 | Tracking Non-Native Content in Caches - The described embodiments include a cache with a plurality of banks that includes a cache controller. In these embodiments, the cache controller determines a value representing non-native cache blocks stored in at least one bank in the cache, wherein a cache block is non-native to a bank when a home for the cache block is in a predetermined location relative to the bank. Then, based on the value representing non-native cache blocks stored in the at least one bank, the cache controller determines at least one bank in the cache to be transitioned from a first power mode to a second power mode. Next, the cache controller transitions the determined at least one bank in the cache from the first power mode to the second power mode. | 06-05-2014 |
20140173210 | MULTI-CORE PROCESSING DEVICE WITH INVALIDATION CACHE TAGS AND METHODS - A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags. | 06-19-2014 |
20140181412 | MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES - A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources for memory requests. In response to receiving a request to allocate data of a first type, a cache controller allocates the data in the cache responsive to determining a limit of an amount of data of the first type permitted in the cache is not reached. The controller maintains an amount and location information of the data of the first type stored in the cache. Additionally, the cache may be partitioned with each partition designated for storing data of a given type. Allocation of data of the first type is dependent at least upon the availability of a first partition and a limit of an amount of data of the first type in a second partition. | 06-26-2014 |
20140181417 | CACHE COHERENCY USING DIE-STACKED MEMORY DEVICE WITH LOGIC DIE - A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies. The one or more logic dies implement hardware logic providing a memory interface and the coherency manager. The memory interface operates to perform memory accesses in response to memory access requests from the coherency manager and the one or more external devices. The coherency manager comprises logic to perform coherency operations for shared data stored at the stacked memory dies. Due to the integration of the logic dies and the memory dies, the coherency manager can access shared data stored in the memory dies and perform related coherency operations with higher bandwidth and lower latency and power consumption compared to the external devices. | 06-26-2014 |
20140181427 | Compound Memory Operations in a Logic Layer of a Stacked Memory - Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement functionality to perform various data movement and address calculation operations. This functionality would allow compound memory operations—a single request communicated to the memory that characterizes the accesses and movement of many data items. This eliminates the performance and power overheads associated with communicating address and control information on a fine-grain, per-data-item basis from a host processor (or other device) to the memory. This approach also provides better visibility of macro-level memory access patterns to the memory system and may enable additional optimizations in scheduling memory accesses. | 06-26-2014 |
20140181428 | QUALITY OF SERVICE SUPPORT USING STACKED MEMORY DEVICE WITH LOGIC DIE - A die-stacked memory device implements an integrated QoS manager to provide centralized QoS functionality in furtherance of one or more specified QoS objectives for the sharing of the memory resources by other components of the processing system. The die-stacked memory device includes a set of one or more stacked memory dies and one or more logic dies. The logic dies implement hardware logic for a memory controller and the QoS manager. The memory controller is coupleable to one or more devices external to the set of one or more stacked memory dies and operates to service memory access requests from the one or more external devices. The QoS manager comprises logic to perform operations in furtherance of one or more QoS objectives, which may be specified by a user, by an operating system, hypervisor, job management software, or other application being executed, or specified via hardcoded logic or firmware. | 06-26-2014 |
20140181453 | Processor with Host and Slave Operating Modes Stacked with Memory - A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode. | 06-26-2014 |
20140181458 | DIE-STACKED MEMORY DEVICE PROVIDING DATA TRANSLATION - A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device. | 06-26-2014 |
20140181460 | PROCESSING DEVICE WITH ADDRESS TRANSLATION PROBING AND METHODS - A data processing device is provided that employs multiple translation look-aside buffers (TLBs) associated with respective processors that are configured to store selected address translations of a page table of a memory shared by the processors. The processing device is configured such that when an address translation is requested by a processor and is not found in the TLB associated with that processor, another TLB is probed for the requested address translation. The probe of the other TLB may occur in advance of a walk of the page table for the requested address, or alternatively a walk can be initiated concurrently with the probe. Where the probe successfully finds the requested address translation, the page table walk can be avoided or discontinued. | 06-26-2014 |
20140181467 | HIGH LEVEL SOFTWARE EXECUTION MASK OVERRIDE - Methods, media, and computer systems are provided. The method includes, the media include control logic for, and the computer system includes a processor with control logic for, overriding an execution mask of SIMD hardware to enable at least one of a plurality of lanes of the SIMD hardware. Overriding the execution mask is responsive to a data parallel computation and a diverged control flow of a workgroup. | 06-26-2014 |
20140181822 | Fragmented Channels - A system, a method, and a computer-readable medium for task scheduling using fragmented channels are provided. A plurality of fragmented channels are stored in memory accessible to a plurality of compute units. Each fragmented channel is associated with a particular compute unit. Each fragmented channel also stores a plurality of data items from tasks scheduled for processing on the associated compute unit and links to another fragmented channel in the plurality of fragmented channels. | 06-26-2014 |
20140223445 | Selecting a Resource from a Set of Resources for Performing an Operation - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation. | 08-07-2014 |
20140250312 | Conditional Notification Mechanism - The described embodiments comprise a first hardware context. The first hardware context receives, from a second hardware context, an indication of a memory location and a condition to be met by the memory location. The first hardware context then sends a signal to the second hardware context when the memory location meets the condition. | 09-04-2014 |
20140250442 | Conditional Notification Mechanism - The described embodiments include a computing device. In these embodiments, an entity in the computing device receives an identification of a memory location and a condition to be met by a value in the memory location. Upon a predetermined event occurring, the entity causes an operation to be performed when the value in the memory location meets the condition. | 09-04-2014 |
20140281234 | SERVING MEMORY REQUESTS IN CACHE COHERENT HETEROGENEOUS SYSTEMS - Apparatus, computer readable medium, and method of servicing memory requests are presented. A read request for a memory block from a requester processor having a processor type may be serviced by providing exclusive access to the requested memory block to the requester processor when the requested memory block was modified the last time it was accessed by a previous requester processor having a same processor type as the processor type of the requester processor. Exclusive access to the requested memory block may be provided to the requester processor based on whether the requested memory block was modified by a previous processor having a same type as the requester processor at least once in the last several times the memory block was in a cache of the previous processor. Exclusive access to the requested memory block may be provided to the requester processor based on a region of the memory block. | 09-18-2014 |
20140304474 | Conditional Notification Mechanism - The described embodiments comprise a computing device with a first processor core and a second processor core. In some embodiments, during operations, the first processor core receives, from the second processor core, an indication of a memory location and a flag. The first processor core then stores the flag in a first cache line in a cache in the first processor core and stores the indication of the memory location separately in a second cache line in the cache. Upon encountering a predetermined result when evaluating a condition for the indicated memory location, the first processor core updates the flag in the first cache line. Based on the update of the flag, the first processor core causes the second processor core to perform an operation. | 10-09-2014 |
20140337587 | METHOD FOR MEMORY CONSISTENCY AMONG HETEROGENEOUS COMPUTER COMPONENTS - A method, computer program product, and system is described that determines the correctness of using memory operations in a computing device with heterogeneous computer components. Embodiments include an optimizer based on the characteristics of a Sequential Consistency for Heterogeneous-Race-Free (SC for HRF) model that analyzes a program and determines the correctness of the ordering of events in the program. HRF models include combinations of the properties: scope order, scope inclusion, and scope transitivity. The optimizer can determine when a program is heterogeneous-race-free in accordance with an SC for HRF memory consistency model. For example, the optimizer can analyze a portion of program code, respect the properties of the SC for HRF model, and determine whether a value produced by a store memory event will be a candidate for a value observed by a load memory event. In addition, the optimizer can determine whether reordering of events is possible. | 11-13-2014 |
20150046652 | WRITE COMBINING CACHE MICROARCHITECTURE FOR SYNCHRONIZATION EVENTS - A method, computer program product, and system is described that enforces a release consistency with special accesses sequentially consistent (RCsc) memory model and executes release synchronization instructions such as a StRel event without tracking an outstanding store event through a memory hierarchy, while efficiently using bandwidth resources. What is also described is the decoupling of a store event from an ordering of the store event with respect to a RCsc memory model. The description also includes a set of hierarchical read/write combining buffers that coalesce stores from different parts of the system. In addition, a pool component maintains partial order of received store events and release synchronization events to avoid content addressable memory (CAM) structures, full cache flushes, as well as direct write-throughs to memory. The approach improves the performance of both global and local synchronization events since a store event may not need to reach main memory to complete. | 02-12-2015 |
20150058567 | HIERARCHICAL WRITE-COMBINING CACHE COHERENCE - A method, computer program product, and system is described that enforces a release consistency with special accesses sequentially consistent (RCsc) memory model and executes release synchronization instructions such as a StRel event without tracking an outstanding store event through a memory hierarchy, while efficiently using bandwidth resources. What is also described is the decoupling of a store event from an ordering of the store event with respect to a RCsc memory model. The description also includes a set of hierarchical read-only cache and write-only combining buffers that coalesce stores from different parts of the system. In addition, a pool component maintains partial order of received store events and release synchronization events to avoid content addressable memory (CAM) structures, full cache flushes, as well as direct write-throughs to memory. The approach improves the performance of both global and local synchronization events and reduces overhead in maintaining write-only combining buffers. | 02-26-2015 |
20150100758 | DATA PROCESSOR AND METHOD OF LANE REALIGNMENT - A data processor includes a register file divided into at least a first portion and a second portion for storing data. A single instruction, multiple data (SIMD) unit is also divided into at least a first lane and a second lane. The first and second lanes of the SIMD unit correspond respectively to the first and second portions of the register file. Furthermore, each lane of the SIMD unit is capable of data processing. The data processor also includes a realignment element in communication with the register file and the SIMD unit. The realignment element is configured to selectively realign conveyance of data between the first portion of the register file and the first lane of the SIMD unit to the second lane of the SIMD unit. | 04-09-2015 |
20150293845 | MULTI-LEVEL MEMORY HIERARCHY - Described is a system and method for a multi-level memory hierarchy. Each level is based on different attributes including, for example, power, capacity, bandwidth, reliability, and volatility. In some embodiments, the different levels of the memory hierarchy may use an on-chip stacked dynamic random access memory (providing fast, high-bandwidth, low-energy access to data) and an off-chip non-volatile random access memory (providing low-power, high-capacity storage), in order to provide higher-capacity, lower power, and higher-bandwidth performance. The multi-level memory may present a unified interface to a processor so that specific memory hardware and software implementation details are hidden. The multi-level memory enables the illusion of a single-level memory that satisfies multiple conflicting constraints. A comparator receives a memory address from the processor, processes the address and reads from or writes to the appropriate memory level. In some embodiments, the memory architecture is visible to the software stack to optimize memory utilization. | 10-15-2015 |
20150363903 | Wavefront Resource Virtualization - A processor comprising hardware logic configured to execute a first wavefront in a hardware resource and to stop execution of the first wavefront before the first wavefront completes. The processor schedules a second wavefront for execution in the hardware resource. | 12-17-2015 |
20160041909 | MOVING DATA BETWEEN CACHES IN A HETEROGENEOUS PROCESSOR SYSTEM - Apparatus, computer readable medium, integrated circuit, and method of moving a plurality of data items to a first cache or a second cache are presented. The method includes receiving an indication that the first cache requested the plurality of data items. The method includes storing information indicating that the first cache requested the plurality of data items. The information may include an address for each of the plurality of data items. The method includes determining based at least on the stored information to move the plurality of data items to the second cache. The method includes moving the plurality of data items to the second cache. The method may include determining a time interval between receiving the indication that the first cache requested the plurality of data items and moving the plurality of data items to the second cache. A scratch pad memory is disclosed. | 02-11-2016 |
20160062803 | SELECTING A RESOURCE FROM A SET OF RESOURCES FOR PERFORMING AN OPERATION - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the resource is not available for performing the operation and until another resource is selected for performing the operation, the selection mechanism identifies a next resource in the table and selects the next resource for performing the operation when the next resource is available for performing the operation. | 03-03-2016 |
20160139624 | PROCESSOR AND METHODS FOR REMOTE SCOPED SYNCHRONIZATION - Described herein is an apparatus and method for remote scoped synchronization, which is a new semantic that allows a work-item to order memory accesses with a scope instance outside of its scope hierarchy. More precisely, remote synchronization expands visibility at a particular scope to all scope-instances encompassed by that scope. Remote scoped synchronization operation allows smaller scopes to be used more frequently and defers added cost to only when larger scoped synchronization is required. This enables programmers to optimize the scope that memory operations are performed at for important communication patterns like work stealing. Executing memory operations at the optimum scope reduces both execution time and energy. In particular, remote synchronization allows a work-item to communicate with a scope that it otherwise would not be able to access. Specifically, work-items can pull valid data from and push updates to scopes that do not (hierarchically) contain them. | 05-19-2016 |
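Two of the entries above, 20140223445 and 20160062803, describe the same selection pattern: look up a starting entry in a table of resources, then walk forward through the table until an available resource is found. A minimal sketch of that walk, where the function and parameter names (`select_resource`, `is_available`) are hypothetical and do not appear in the filings:

```python
def select_resource(table, start_index, is_available):
    """Walk `table` circularly starting at `start_index` and return the
    first resource for which `is_available` reports True, or None if no
    resource in the table is currently available."""
    n = len(table)
    for step in range(n):
        candidate = table[(start_index + step) % n]
        if is_available(candidate):
            return candidate
    return None
```

The abstracts leave open how the table and starting entry are chosen (20140223445 selects the table from a set of tables); here both are simply parameters.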
Patent application number | Description | Published |
20140122471 | ORGANIZING NETWORK-STORED CONTENT ITEMS INTO SHARED GROUPS - Systems, methods, and computer-readable storage media for adding users to groups of content items organized into events based on a common attribute. An example system configured to practice the method can receive, from a client device, content items uploaded to a synced online content management system, wherein the content items are associated with an account of a first user. The system can cluster at least some of the content items as an event, wherein the event is associated with a common attribute, and identify a second user satisfying a minimum similarity threshold for the event based on the common attribute. The system can provide a suggestion to share the event with the second user. Upon receiving a confirmation of the suggestion, the system can make content items clustered in the event available to the second user. | 05-01-2014 |
20140122592 | IDENTIFYING CONTENT ITEMS FOR INCLUSION IN A SHARED COLLECTION - Systems, methods, and computer-readable storage media for managing pooled collections of content items, such as photos, in a content management system. An example system can first receive, from a first user device, images uploaded to a first account at a synchronized online content management system, and cluster at least some of the images as a collection. The system can receive, from the first user, a request to share the collection with a second user having a second account at the content management system, and generate, in response to the request, a pooled collection at the content management system from the collection. The system can transmit an invitation to the second user to join the pooled collection, and, upon acceptance, link the pooled collection to the second user account so that the first user and the second user have access to images in the persistent pooled collection and have permission to contribute content to the pooled collection. | 05-01-2014 |
20140122994 | EVENT-BASED CONTENT ITEM VIEW - Systems, methods, and computer-readable storage media for an event-based photo view in a browser are disclosed. The system can receive a request to display a set of content items associated with a user account. The system can generate a web page based on a size of the set of files, the web page providing a continuous presentation of the set of files at the web page on a device, wherein a visible portion of the web page includes a presentation of files, and wherein the files are mapped to an area in the web page that is associated with a current position within the web page. The web page can include an events-based navigation feature. The system can transmit the web page to the device for display. | 05-01-2014 |
20140122995 | CONTINUOUS CONTENT ITEM VIEW ENHANCED THROUGH SMART LOADING - Systems, methods, and computer-readable storage media for a continuous photo view on a browser-type application are disclosed. The system can receive a request to display a set of images associated with a user account. The system can generate a web page based on a size of the content items, the web page having a respective placeholder for each of the content items in an area of the web page that is relative to a visible portion of the web page, wherein the web page can provide a continuous presentation of the content items on a device, and wherein the web page can be configured to dynamically load and unload content items based on a current position of the web page. | 05-01-2014 |
20140181157 | INTELLIGENT CONTENT ITEM IMPORTING - Systems, methods, and computer-readable storage media for importing a new content item, such as a photo, document, video, email, or application, into a content item repository. A content item repository can contain a set of existing content item groups, and each content item group can include at least one content item. The system can calculate a profile for a new content item to be imported. Upon determining, based on the calculated profile, that the new content item exceeds a similarity threshold for an existing content item group, the system can insert the new content item into the content item group. Upon determining, based on the calculated profile, that the new content item does not exceed the similarity threshold for any existing content item group, the system can create a new content item group and insert the new content item therein. | 06-26-2014 |
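The importing logic the abstract above describes (profile each new item, insert it into a group it sufficiently resembles, otherwise start a new group) can be sketched as follows. This is a hypothetical illustration: the profile representation, the similarity function, and the threshold value of 0.8 are assumptions, not details taken from the application.

```python
from dataclasses import dataclass, field

SIMILARITY_THRESHOLD = 0.8  # assumed value; the application does not specify one


@dataclass
class Group:
    """A content item group with a representative attribute profile."""
    items: list = field(default_factory=list)
    profile: dict = field(default_factory=dict)


def similarity(profile_a: dict, profile_b: dict) -> float:
    """Toy similarity: fraction of attribute keys with matching values."""
    keys = set(profile_a) | set(profile_b)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if profile_a.get(k) == profile_b.get(k))
    return matches / len(keys)


def import_item(groups: list, item_profile: dict) -> Group:
    """Insert the item into the best-matching group, or create a new one."""
    best = max(groups, key=lambda g: similarity(g.profile, item_profile),
               default=None)
    if best is not None and similarity(best.profile, item_profile) >= SIMILARITY_THRESHOLD:
        best.items.append(item_profile)
        return best
    new_group = Group(items=[item_profile], profile=dict(item_profile))
    groups.append(new_group)
    return new_group
```

In this sketch, two photos taken at the same place and date land in one group, while a dissimilar photo seeds a fresh group.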
20140181935 | SYSTEM AND METHOD FOR IMPORTING AND MERGING CONTENT ITEMS FROM DIFFERENT SOURCES - Systems, methods, and computer-readable storage media for importing and merging photos from different sources are disclosed. The system receives credentials from a user, who has an account with a content management system. The credentials are associated with content item storage entities such as photo repositories. The system accesses the photo repositories, using the credentials where authorization is required for data access. The system identifies source photo data in each of the photo repositories, and duplicates the source photo data in the content management system account to create consolidated photo data. | 06-26-2014 |
20140188869 | MIGRATING CONTENT ITEMS - Disclosed are systems, methods, and non-transitory computer-readable storage media for migrating content items from a source user account to a target user account. A user can specify content items in the source user account to be migrated to an existing or new target user account. A new content entry including an account identifier of the target account and a pointer to the content item can be created for each migrated content item. Further, a determination can be made as to whether a sharing link to each content item exists, and if so, the content pointer of the old content entry is modified to forward or redirect to the new content entry. An active flag associated with the old content entry can be set to false or 0 to indicate that the old content entry is no longer active. | 07-03-2014 |
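The migration scheme the abstract above describes (create a new content entry pointing at the same content, redirect the old entry when a sharing link exists, then mark the old entry inactive) can be sketched as a small data model. This is a hypothetical rendering of the described bookkeeping; the field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ContentEntry:
    """An account's record for a content item; the pointer names the stored blob."""
    account_id: str
    pointer: str
    active: bool = True
    redirect_to: Optional["ContentEntry"] = None  # set when sharing links must forward


def migrate(entry: ContentEntry, target_account: str,
            has_sharing_link: bool) -> ContentEntry:
    """Create a new entry in the target account; redirect the old one if linked."""
    new_entry = ContentEntry(account_id=target_account, pointer=entry.pointer)
    if has_sharing_link:
        entry.redirect_to = new_entry  # old sharing links now resolve here
    entry.active = False               # old entry is no longer active
    return new_entry
```

Note that the underlying content is never copied: only the pointer is duplicated into the new entry, so existing sharing links keep working through the redirect.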
20140214856 | PROVIDING A CONTENT PREVIEW - A content preview of a content item stored in an online storage system can be viewed on a client device without the content item itself being downloaded to the client device and without the use of software associated with the content item being installed on the client device. Furthermore, data storage and processing requirements can be minimized by creating and storing only one content preview for each unique content item. The content item can be identified by using the content item as a hash key in a hashing algorithm. The resulting unique identifier can be used to search a preview index that lists all created content previews and their location. A content preview is only created if one does not exist. The unique identifier can be used to locate the content preview and return it in response to a preview request by a client device. | 07-31-2014 |
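The deduplication step in the abstract above (hash the content item to a unique identifier, consult a preview index, and render a preview only on a miss) can be sketched as follows. SHA-256 and the in-memory dictionary index are assumptions standing in for whatever hashing algorithm and index store the application actually uses.

```python
import hashlib

preview_index: dict[str, str] = {}  # content hash -> preview location (stand-in)


def content_id(data: bytes) -> str:
    """Derive a unique identifier from the content item itself (SHA-256 assumed)."""
    return hashlib.sha256(data).hexdigest()


def render_preview(data: bytes) -> str:
    """Stand-in for the expensive preview-generation step."""
    return f"preview-of-{content_id(data)[:8]}"


def get_preview(data: bytes) -> str:
    """Return the preview for a content item, creating it only if none exists."""
    key = content_id(data)
    if key not in preview_index:
        preview_index[key] = render_preview(data)  # create once per unique item
    return preview_index[key]
```

Because the identifier is derived from the content itself, identical items uploaded by different users map to the same index entry, which is what keeps storage and processing to one preview per unique item.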
20140351340 | IDENTIFYING CONTENT ITEMS FOR INCLUSION IN A SHARED COLLECTION - Systems, methods, and computer-readable storage media for managing pooled collections of content items, such as photos, in a content management system. An example system can first receive, from a first user device, images uploaded to a first account at a synchronized online content management system, and cluster at least some of the images as a collection. The system can receive, from the first user, a request to share the collection with a second user having a second account at the content management system, and generate, in response to the request, a pooled collection at the content management system from the collection. The system can transmit an invitation to the second user to join the pooled collection, and, upon acceptance, link the pooled collection to the second user account so that the first user and the second user have access to images in the persistent pooled collection and have permission to contribute content to the pooled collection. | 11-27-2014 |
20150186412 | MIGRATING CONTENT ITEMS - Disclosed are systems, methods, and non-transitory computer-readable storage media for migrating content items from a source user account to a target user account. A user can specify content items in the source user account to be migrated to an existing or new target user account. A new content entry including an account identifier of the target account and a pointer to the content item can be created for each migrated content item. Further, a determination can be made as to whether a sharing link to each content item exists, and if so, the content pointer of the old content entry is modified to forward or redirect to the new content entry. An active flag associated with the old content entry can be set to false or 0 to indicate that the old content entry is no longer active. | 07-02-2015 |
20150186432 | MIGRATING CONTENT ITEMS - Disclosed are systems, methods, and non-transitory computer-readable storage media for migrating content items from a source user account to a target user account. A user can specify content items in the source user account to be migrated to an existing or new target user account. A new content entry including an account identifier of the target account and a pointer to the content item can be created for each migrated content item. Further, a determination can be made as to whether a sharing link to each content item exists, and if so, the content pointer of the old content entry is modified to forward or redirect to the new content entry. An active flag associated with the old content entry can be set to false or 0 to indicate that the old content entry is no longer active. | 07-02-2015 |
20150317307 | PROVIDING A CONTENT PREVIEW - A content preview of a content item stored in an online storage system can be viewed on a client device without the content item itself being downloaded to the client device and without the use of software associated with the content item being installed on the client device. Furthermore, data storage and processing requirements can be minimized by creating and storing only one content preview for each unique content item. The content item can be identified by using the content item as a hash key in a hashing algorithm. The resulting unique identifier can be used to search a preview index that lists all created content previews and their location. A content preview is only created if one does not exist. The unique identifier can be used to locate the content preview and return it in response to a preview request by a client device. | 11-05-2015 |
Patent application number | Description | Published |
20100248706 | AUTONOMOUS, NON-INTERACTIVE, CONTEXT-BASED SERVICES FOR CELLULAR PHONE - Embodiments include, but are not limited to, cellular phones and methods practiced thereon for autonomously servicing a call or a message on behalf of the user, without interacting with the user. In various embodiments, data about a user of the cellular phone, internal conditions of the cellular phone, or the external environment of the cellular phone are collected locally and from a wireless communication network. In various embodiments, multiple agents are provided to the cellular phone, wherein each agent is configured to determine, on receipt of a call or message, a current service context based at least in part on some of the stored data, and to autonomously service the received call or message based on the results of the determination. Other embodiments may be described and claimed. | 09-30-2010 |
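The agent mechanism the abstract above describes (each agent decides whether its service context applies to the collected data, and the matching agent services the call without user interaction) can be sketched as a simple dispatch loop. The class and method names here are hypothetical illustrations, not terms from the application.

```python
class Agent:
    """One autonomous service agent bound to a named service context."""

    def __init__(self, context_name, action):
        self.context_name = context_name
        self.action = action

    def determine_context(self, data: dict) -> bool:
        """Decide whether this agent's service context applies to the data."""
        return data.get("context") == self.context_name

    def service(self, call: str) -> str:
        return self.action(call)


def handle_incoming(agents, data: dict, call: str):
    """Autonomously service the call with the first agent whose context matches."""
    for agent in agents:
        if agent.determine_context(data):
            return agent.service(call)
    return None  # no agent matched; fall back to normal ringing (not modeled)
```

For example, a "meeting" agent might silence and auto-log a call, while a "driving" agent sends an automatic reply, all without prompting the user.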
20120265738 | SEMANTIC COMPRESSION - Technology for semantic compression is disclosed. In various embodiments, the technology receives data that represents one or more physical attributes sensed by one or more sensors; employs at least one pattern or statistical feature to identify a first region and a second region in the received data; computes a first utility and a first relevant feature for the first region, and a second utility and a second relevant feature for the second region; and identifies based on at least the first utility and the second utility a first compression method to apply to the first region and a second compression method to apply to the second region wherein the first and the second compression methods have different compression rates, different feature preservation characteristics, or both. | 10-18-2012 |
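The per-region selection the abstract above describes (compute a utility for each identified region, then assign compression methods with different rates and feature-preservation characteristics) can be sketched as a simple rule. The 0.5 utility cutoff and the lossless/lossy labels are assumptions for illustration; the application does not fix specific methods.

```python
def choose_compression(regions):
    """Assign a compression method per region based on its computed utility.

    Each region is a dict with a 'name' and a 'utility' in [0, 1].
    Returns (region name, method) pairs.
    """
    plan = []
    for region in regions:
        if region["utility"] >= 0.5:          # assumed cutoff
            plan.append((region["name"], "lossless"))  # preserve relevant features
        else:
            plan.append((region["name"], "lossy"))     # compress low-utility data harder
    return plan
```

A region of sensor data containing a detected anomaly would get the feature-preserving method, while quiescent stretches are compressed aggressively.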
20130303135 | AUTONOMOUS, NON-INTERACTIVE, CONTEXT-BASED SERVICES FOR CELLULAR PHONE - Embodiments include, but are not limited to, cellular phones and methods practiced thereon for autonomously servicing a call or a message on behalf of the user, without interacting with the user. In various embodiments, data about a user of the cellular phone, internal conditions of the cellular phone, or the external environment of the cellular phone are collected locally and from a wireless communication network. In various embodiments, multiple agents are provided to the cellular phone, wherein each agent is configured to determine, on receipt of a call or message, a current service context based at least in part on some of the stored data, and to autonomously service the received call or message based on the results of the determination. Other embodiments may be described and claimed. | 11-14-2013 |
20140279547 | AUTHENTICATION OF FINANCIAL TRANSACTIONS VIA WIRELESS COMMUNICATION LINK - Examples include autonomously authenticating a financial transaction, on behalf of the user, without interacting with the user, via wireless communication link. In various embodiments, the user's cellular phone may be configured to process a message that provides at least partial service context and autonomously authenticate the financial transaction. | 09-18-2014 |
20160044512 | AUTONOMOUS, NON-INTERACTIVE, CONTEXT-BASED SERVICES FOR CELLULAR PHONE - Examples include autonomously authenticating a financial transaction, on behalf of the user, without interacting with the user, via wireless communication link. In various embodiments, the user's cellular phone may be configured to process a message that provides at least partial service context and autonomously authenticate the financial transaction. | 02-11-2016 |