Mamidi
Anil Mamidi, Ashburn, VA US
Patent application number | Description | Published |
---|---|---|
20130166367 | SYSTEMS AND METHODS FOR ADMINISTERING MERCHANT REWARDS TO PURCHASERS WHO INCREASE SPENDING AT PARTICIPATING MERCHANTS - The disclosure relates to, among other things, systems, methods, and computer-readable media for rewarding a purchaser for increasing their purchase volume with at least one participating merchant. Embodiments may comprise offering an incentive for the purchaser to increase their purchase volume with the participating merchant. Embodiments may further comprise obtaining a first set of electronic transactions between the purchaser and the participating merchant. Embodiments may further comprise determining a merchant baseline for the purchaser based on the first set of electronic transactions. Embodiments may further comprise obtaining a second set of electronic transactions between the purchaser and the participating merchant. Embodiments may further comprise comparing the second set of electronic transactions to the merchant baseline. Embodiments may further comprise providing a reward to the purchaser based on the comparison. | 06-27-2013 |
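The baseline-and-comparison flow described in 20130166367 can be sketched roughly as follows. This is a minimal illustration, not the claimed system: the function name, the use of an average as the "merchant baseline", and the 5% reward rate are all assumptions for the example.

```python
from statistics import mean

def award_merchant_reward(baseline_txns, current_txns, reward_rate=0.05):
    """Illustrative sketch: reward a purchaser whose spending with a
    participating merchant rises above their historical baseline."""
    baseline = mean(baseline_txns)   # merchant baseline from the first set
    current = mean(current_txns)     # recent average from the second set
    if current > baseline:
        # Reward proportional to the increase (hypothetical policy).
        return round((current - baseline) * reward_rate, 2)
    return 0.0
```

Any increase-detection rule (totals, medians, per-period comparisons) could stand in for the averages used here; the abstract only specifies that the second set of transactions is compared against a baseline derived from the first.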
Murthy V. Mamidi, San Jose, CA US
Patent application number | Description | Published |
---|---|---|
20110106862 | METHOD FOR QUICKLY IDENTIFYING DATA RESIDING ON A VOLUME IN A MULTIVOLUME FILE SYSTEM - A method for quickly identifying data residing on a volume in a multivolume file system. The method includes generating a file location map, the file location map containing a list of the locations of files that occupy space on each of a plurality of volumes of the file system. The file system comprises at least a first volume and a second volume. The file location map is updated in accordance with changes in a file change log for the file system. Data residing on the first volume of the file system is identified by scanning the file location map. | 05-05-2011 |
20110106863 | USING A PER FILE ACTIVITY RATIO TO OPTIMALLY RELOCATE DATA BETWEEN VOLUMES - A method for identifying data for relocation in a multivolume file system. The method includes generating a file location map, the file location map containing a list of the locations of files that occupy space on each of a plurality of volumes of the file system, wherein the file system comprises at least a first volume and a second volume. The method further includes updating the file location map in accordance with changes in a file change log for the file system, and identifying data residing on the first volume of the file system by scanning the file location map. Using the identified data, a ratio of per-file activity during a first time period relative to overall file system activity over a second time period is calculated to derive a file activity ratio for each of the files of the identified data. Files are then selected for relocation based on the file activity ratio. | 05-05-2011 |
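The file location map shared by 20110106862 and 20110106863 can be sketched as an in-memory index kept in sync with a file change log, so that "data residing on a volume" is found by scanning the map rather than the volume itself. All class and method names below are illustrative, not from the filings:

```python
from collections import defaultdict

class FileLocationMap:
    """Illustrative per-volume file location map, updated from a
    file change log (names are assumptions for this sketch)."""
    def __init__(self):
        self.by_volume = defaultdict(set)  # volume -> set of file paths
        self.location = {}                 # file path -> current volume

    def apply_change(self, path, volume):
        # A change-log record creates or relocates a file on `volume`.
        old = self.location.get(path)
        if old is not None:
            self.by_volume[old].discard(path)
        self.by_volume[volume].add(path)
        self.location[path] = volume

    def files_on(self, volume):
        # Identify data on a volume by scanning the map, not the disk.
        return sorted(self.by_volume[volume])

def activity_ratio(file_activity, fs_activity):
    """Per-file activity in one period relative to overall file
    system activity in another, as in 20110106863."""
    return file_activity / fs_activity if fs_activity else 0.0
```

In the second application, files whose `activity_ratio` falls above or below a policy threshold would be the relocation candidates between the first and second volumes.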
Rajesh Mamidi, Bangalore IN
Patent application number | Description | Published |
---|---|---|
20150240833 | CENTRIFUGAL COMPRESSOR IMPELLER COOLING - A centrifugal compressor including: a casing; at least one impeller supported for rotation in the casing and provided with a hub, a shroud and an impeller eye; an impeller-eye sealing arrangement for sealing the impeller in the region of said impeller eye. The centrifugal compressor further includes at least one cooling-medium port located at the impeller-eye sealing arrangement, arranged for delivering a cooling medium around the impeller eye. | 08-27-2015 |
Santosh Mamidi, Santa Clara, CA US
Patent application number | Description | Published |
---|---|---|
20100262860 | LOAD BALANCING AND HIGH AVAILABILITY OF COMPUTE RESOURCES - Compute resources of multiple resource cards are assigned to compute resource pools. Each compute resource pool is typically associated with a specific service (e.g., VoIP, video service, deep packet inspection, etc.). Compute resource groups are created in each compute resource pool and are allocated one or more compute resources of that compute resource pool. Those compute resources in a given resource pool that are not allocated to a compute resource group are set as backup compute resources. Upon a failure of a compute resource in a compute resource pool that includes backup compute resources, a backup compute resource is selected and takes over the function of the failed compute resource. Upon a failure of a compute resource in a compute resource group of a compute resource pool that does not include a backup compute resource, the traffic is load balanced across the remaining compute resources of that compute resource group. | 10-14-2010 |
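The failover logic in 20100262860 can be sketched in a few lines: a backup resource takes over when one is available, and otherwise traffic is rebalanced across the survivors. The class name, modulo-based balancing, and string resource IDs are assumptions for this sketch, not details from the application:

```python
class ComputePool:
    """Illustrative per-service pool: an active group plus backups."""
    def __init__(self, resources, group_size):
        self.group = list(resources[:group_size])    # active compute group
        self.backups = list(resources[group_size:])  # unallocated spares

    def handle_failure(self, failed):
        self.group.remove(failed)
        if self.backups:
            # A backup takes over the failed resource's function.
            self.group.append(self.backups.pop(0))
        # With no backups left, traffic simply spreads over the survivors.

    def route(self, flow_id):
        # Toy load balancing: pick a group member by flow ID.
        return self.group[flow_id % len(self.group)]
```

A real implementation would use consistent hashing or connection-aware balancing so that surviving flows are not reshuffled on every membership change; the sketch only shows the pool/group/backup state machine.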
Sreenivas Mamidi, Singapore SG
Patent application number | Description | Published |
---|---|---|
20100103786 | POWER CALIBRATION IN OPTICAL DISC DRIVES - A method is disclosed comprising performing a first set of power calibration procedures on an optical record carrier at a first recording speed in a first set of calibration areas, and performing a further set of power calibration procedures on the optical record carrier at a recording speed different from the first recording speed, wherein the further set of power calibration procedures partly reuses information from the first set of calibration areas. The technique is useful in scenarios where recording must be done at more than one speed on the same optical record carrier. The technique reduces the overall power calibration time and increases the number of power calibrations that can be done on the optical record carrier. The technique is useful for data, audio and video recorders. | 04-29-2010 |
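The saving claimed in 20100103786 comes from seeding the second-speed calibration with the first-speed result instead of starting from scratch. A toy search makes the effect visible; the linear power search, step size, and target values are purely illustrative and have nothing to do with actual drive firmware:

```python
def find_power(target, start=0.0, step=0.5):
    """Toy stand-in for a power calibration procedure: step the laser
    power up until the (hypothetical) target quality is reached, and
    count the calibration trials consumed."""
    power, trials = start, 0
    while power < target:
        power += step
        trials += 1
    return power, trials

# First set of calibrations starts from zero power.
p1, t1 = find_power(6.0)
# The further set, at a different speed, reuses the first result as its
# starting point, so it consumes fewer calibration areas.
p2, t2 = find_power(9.0, start=p1)
```

Fewer trials in the second pass is exactly the stated benefit: less overall calibration time, and more calibrations possible in the disc's limited calibration areas.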
Suman Mamidi, Austin, TX US
Patent application number | Description | Published |
---|---|---|
20130117535 | Selective Writing of Branch Target Buffer - A method includes executing a branch instruction and determining if a branch is taken. The method further includes evaluating a number of instructions associated with the branch instruction. Upon determining that the branch is taken, the method includes selectively writing an entry into a branch target buffer that corresponds to the taken branch responsive to determining that the number of instructions is less than a threshold. | 05-09-2013 |
20130185515 | Utilizing Negative Feedback from Unexpected Miss Addresses in a Hardware Prefetcher - Systems and methods for populating a cache using a hardware prefetcher are disclosed. A method for prefetching cache entries includes determining an initial stride value based on at least a first and second demand miss address in the cache, verifying the initial stride value based on a third demand miss address in the cache, prefetching a predetermined number of cache entries based on the verified initial stride value, determining an expected next miss address in the cache based on the verified initial stride value and addresses of the prefetched cache entries; and confirming the verified initial stride value based on comparing the expected next miss address to a next demand miss address in the cache. If the verified initial stride value is confirmed, additional cache entries are prefetched. If the verified initial stride value is not confirmed, further prefetching is stalled and an alternate stride value is determined. | 07-18-2013 |
20130185516 | Use of Loop and Addressing Mode Instruction Set Semantics to Direct Hardware Prefetching - Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines. | 07-18-2013 |
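The train/verify/confirm cycle in 20130185515 can be sketched as follows. The application describes a hardware prefetcher; this software model, its function name, the prefetch degree of 2, and the 64-byte stride in the example are all assumptions for illustration:

```python
def train_and_prefetch(misses, degree=2):
    """Illustrative model of stride training with negative feedback
    from unexpected miss addresses."""
    a, b, c = misses[:3]
    stride = b - a                  # initial stride from the first two misses
    if c - b != stride:             # verification against the third miss fails
        return [], None             # stall further prefetching
    # Prefetch `degree` lines beyond the last confirmed miss.
    lines = [c + stride * (i + 1) for i in range(degree)]
    # The stride is confirmed only if the next demand miss lands here;
    # otherwise a real prefetcher would stall and derive an alternate stride.
    expected_next_miss = lines[-1] + stride
    return lines, expected_next_miss
```

With misses at 0, 64, 128 the sketch infers a 64-byte stride, prefetches two lines, and predicts the next demand miss past the prefetched region; a third miss that breaks the stride yields no prefetches at all, which is the "negative feedback" the title refers to.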