Patent application number | Description | Published |
20090145393 | Valve Stem Seal With Gas Relief Features - A valve stem seal can include an elastomeric component having a first portion for sealed engagement with a valve stem, and a pressure relief lip engaging a valve guide. The pressure relief lip can have a sealing configuration and a venting configuration. The venting configuration can allow excess exhaust gases to vent from a combustion chamber. After the venting of excess exhaust gases, the pressure relief lip can close to the sealing configuration to prevent oil from entering the combustion chamber. The elastomeric body or the valve guide can include a pressure relief channel. The elastomeric body can also include a bumper engaging the valve guide. The pressure relief channel can be disposed in the bumper. | 06-11-2009 |
20090146382 | Valve Stem Seal With Gas Relief Features - A valve stem seal can include an elastomeric component having a first portion for sealed engagement with a valve stem, a second portion for engaging a valve guide, and a pressure relief lip extending from the second portion. The first portion can be configured to extend away from or within the second portion, and the second portion can have a channel formed therein. The pressure relief lip can have a sealing configuration and a venting configuration. The venting configuration can allow excess exhaust gases to vent from a combustion chamber. After the venting of excess exhaust gases, the pressure relief lip can close to the sealing configuration to prevent oil from entering the combustion chamber. | 06-11-2009 |
20120068419 | Zero Torque Membrane Seal - A low friction seal for sealing between a shaft and a bore includes an inner case adapted to be mounted on the shaft. An outer case is adapted to be mounted within the bore. A seal element is mounted to the inner case and includes a base portion attached to the inner case. A membrane extends from the base portion and an axially extending leg extends from the membrane. A seal lip extends from the axially extending leg and sealingly engages a radially extending portion of the outer case. As the shaft rotates, centrifugal forces tend to cause the membrane to flex, and the torque loads applied by the dynamic seal are reduced to the point that the seal lip can lift off and apply zero torque load. | 03-22-2012 |
20120267861 | Valve Stem Seal With Gas Relief Features - A valve stem seal can include an elastomeric component having a first portion for sealed engagement with a valve stem, and a pressure relief lip engaging a valve guide. The pressure relief lip can have a sealing configuration and a venting configuration. The venting configuration can allow excess exhaust gases to vent from a combustion chamber. After the venting of excess exhaust gases, the pressure relief lip can close to the sealing configuration to prevent oil from entering the combustion chamber. The elastomeric body or the valve guide can include a pressure relief channel. The elastomeric body can also include a bumper engaging the valve guide. The pressure relief channel can be disposed in the bumper. | 10-25-2012 |
20130175763 | Lubricated Shaft Seal - A seal for sealingly engaging a shaft or other rotatable element includes a sealing portion having a lubricant side and a non-lubricant side and extending generally inwardly toward the shaft when the seal is installed thereon. The sealing portion has an active lip portion including a shaft engagement surface engageable with the shaft and a lubricant vent extending through at least a portion of the active lip portion. The lubricant vent provides fluid communication between opposite sides of the active lip portion, thus maintaining adequate lubrication between the shaft engagement surface and the shaft, avoiding lubricant coking, or other degradation, and extending seal life. | 07-11-2013 |
20140130766 | Valve Stem Seal With Gas Relief Features - A valve stem seal can include an elastomeric component having a first portion for sealed engagement with a valve stem, and a pressure relief lip engaging a valve guide. The pressure relief lip can have a sealing configuration and a venting configuration. The venting configuration can allow excess exhaust gases to vent from a combustion chamber. After the venting of excess exhaust gases, the pressure relief lip can close to the sealing configuration to prevent oil from entering the combustion chamber. The elastomeric body or the valve guide can include a pressure relief channel. The elastomeric body can also include a bumper engaging the valve guide. The pressure relief channel can be disposed in the bumper. | 05-15-2014 |
20140251253 | Pressure Support For Engine Valve Stem Seals - A valve stem seal assembly for an internal combustion engine includes an annular rigid case disposed around a valve guide and a valve stem. An annular elastomeric body is press fit within the annular rigid case and includes a radially inwardly extending seal lip in sealing contact with the valve stem. The annular elastomeric body includes a first axial end facing the valve guide and a second axial end facing away from the valve guide. The annular rigid case includes a radially inwardly extending end wall that opposes the second axial end of the annular elastomeric body and includes a lip support extending axially from an inner portion of the radially inwardly extending end wall and opposing a radially inner surface of the radially inwardly extending seal lip. | 09-11-2014 |
Patent application number | Description | Published |
20100269828 | SYSTEM, METHOD AND APPARATUS FOR REMOVAL OF VOLATILE ANESTHETICS FOR MALIGNANT HYPERTHERMIA - Systems, methods, and apparatus for removing volatile anesthetics from an anesthesia or ventilation system to minimize the effects of malignant hyperthermia in susceptible patients. According to one aspect of the present invention, a system for removing volatile anesthetics is provided. A first filter component is placed in fluid communication with an inspiratory limb of an anesthesia or ventilation system such that volatile anesthetics will pass through the first filter component during operation of the anesthesia or ventilation system. A second filter component is operably coupled to the expiration port of the anesthesia or ventilation system such that gases passing through the expiratory limb of the anesthesia or ventilation system pass through the second filter component. The first filter component and second filter component are adapted to effectively remove volatile anesthetics passing through the respective filters. | 10-28-2010 |
20120325213 | System, Method and Apparatus for Removal of Volatile Anesthetics for Malignant Hyperthermia - Systems, methods, and apparatus for removing volatile anesthetics from an anesthesia or ventilation system to minimize the effects of malignant hyperthermia in susceptible patients. According to one aspect of the present invention, a system for removing volatile anesthetics is provided. A first filter component is placed in fluid communication with an inspiratory limb of an anesthesia or ventilation system such that volatile anesthetics will pass through the first filter component during operation of the anesthesia or ventilation system. A second filter component is operably coupled to the expiration port of the anesthesia or ventilation system such that gases passing through the expiratory limb of the anesthesia or ventilation system pass through the second filter component. The first filter component and second filter component are adapted to effectively remove volatile anesthetics passing through the respective filters. | 12-27-2012 |
20130231520 | IMAG1 EYES MAGNETICS - The invention provides solutions to the problem of detached eye retinas and to methods that can promote the healing of the repaired retina. Instead of injecting gas bubbles or silicone oil into the eyeball, the invention proposes to inject magnetic particles inside the eye and use external magnets to urge the magnetic particles against the repaired area of the retina, forcing the retina against the eyeball walls and thus promoting healing. Optional eye movement sensors can adjust the distribution of the magnetic particles to optimize the force holding the retina in place, maximizing the healing benefits of the proposed system. The magnetic particles can be bio-inert and/or degradable. Other helpful devices are proposed as well to reduce the stresses on the patient's neck muscles in case he or she still needs to hold the head in a certain position for a long period of time. | 09-05-2013 |
Patent application number | Description | Published |
20130012730 | HIGH-PURITY EPOXY COMPOUND AND METHOD OF PRODUCING THEREOF - An epoxy compound of high-purity N,N,N′,N′-tetraglycidyl-3,4′-diaminodiphenyl ether is produced by: an addition reaction step of reacting 3,4′-diaminodiphenyl ether with epichlorohydrin in a polar protic solvent at 65 to 100° C. for | 01-10-2013 |
20150141583 | BENZOXAZINE RESIN COMPOSITION, PREPREG, AND FIBER-REINFORCED COMPOSITE MATERIAL - The embodiments herein relate to a benzoxazine resin composition, a prepreg, and a carbon fiber-reinforced composite material. More specifically, the embodiments herein relate to a benzoxazine resin composition that provides a carbon fiber-reinforced composite material suitable for use as a manufacturing material due to its superior mechanical strength in extreme use environments, such as high temperature and high moisture, as well as a prepreg and a carbon fiber-reinforced composite material. An embodiment comprises a benzoxazine resin composition having a multifunctional benzoxazine resin; a multifunctional epoxy resin that is a liquid at 40° C. and has three or more glycidyl groups; a sulfonate ester; and optionally at least one thermoplastic resin. The resin may form an interpenetrating network structure after curing. | 05-21-2015 |
Patent application number | Description | Published |
20090070533 | Content network global replacement policy - This invention is related to content delivery systems and methods. In one aspect of the invention, a content provider controls a replacement process operating at an edge server. The edge server services content providers and has a data store for storing content associated with respective ones of the content providers. A content provider sets a replacement policy at the edge server that controls the movement of content associated with the content provider, into and out of the data store. In another aspect of the invention, a content delivery system includes a content server storing content files, an edge server having cache memory for storing content files, and a replacement policy module for managing content stored within the cache memory. The replacement policy module can store portions of the content files at the content server within the cache memory, as a function of a replacement policy set by a content owner. | 03-12-2009 |
20100036954 | Global load balancing on a content delivery network - The invention relates to systems and methods of global load balancing in a content delivery network having a plurality of edge servers which may be distributed across multiple geographic locations. According to one aspect of the invention, a global load balancing system includes a first load balancing server for receiving a packet requesting content to be delivered to a client, selecting one of the plurality of edge servers to deliver the requested content to the client, and forwarding the packet across a network connection to a second load balancing server, which forwards the packet to the selected edge server. The selected edge server, in response to receiving the packet, sends across a network connection the requested content with an address for direct delivery to the client, thereby allowing the requested content to be delivered to the client while bypassing a return path through the first load balancing server. | 02-11-2010 |
20100275125 | CONTENT NETWORK GLOBAL REPLACEMENT POLICY - This invention is related to content delivery systems and methods. In one aspect of the invention, a content provider controls a replacement process operating at an edge server. The edge server services content providers and has a data store for storing content associated with respective ones of the content providers. A content provider sets a replacement policy at the edge server that controls the movement of content associated with the content provider, into and out of the data store. In another aspect of the invention, a content delivery system includes a content server storing content files, an edge server having cache memory for storing content files, and a replacement policy module for managing content stored within the cache memory. The replacement policy module can store portions of the content files at the content server within the cache memory, as a function of a replacement policy set by a content owner. | 10-28-2010 |
20110087844 | CONTENT NETWORK GLOBAL REPLACEMENT POLICY - This invention is related to content delivery systems and methods. In one aspect of the invention, a content provider controls a replacement process operating at an edge server. The edge server services content providers and has a data store for storing content associated with respective ones of the content providers. A content provider sets a replacement policy at the edge server that controls the movement of content associated with the content provider, into and out of the data store. In another aspect of the invention, a content delivery system includes a content server storing content files, an edge server having cache memory for storing content files, and a replacement policy module for managing content stored within the cache memory. The replacement policy module can store portions of the content files at the content server within the cache memory, as a function of a replacement policy set by a content owner. | 04-14-2011 |
20120198045 | GLOBAL LOAD BALANCING ON A CONTENT DELIVERY NETWORK - The invention relates to systems and methods of global load balancing in a content delivery network having a plurality of edge servers which may be distributed across multiple geographic locations. According to one aspect of the invention, a global load balancing system includes a first load balancing server for receiving a packet requesting content to be delivered to a client, selecting one of the plurality of edge servers to deliver the requested content to the client, and forwarding the packet across a network connection to a second load balancing server, which forwards the packet to the selected edge server. The selected edge server, in response to receiving the packet, sends across a network connection the requested content with an address for direct delivery to the client, thereby allowing the requested content to be delivered to the client while bypassing a return path through the first load balancing server. | 08-02-2012 |
20120215915 | Global Load Balancing on a Content Delivery Network - The invention relates to systems and methods of global load balancing in a content delivery network having a plurality of edge servers which may be distributed across multiple geographic locations. According to one aspect of the invention, a global load balancing system includes a first load balancing server for receiving a packet requesting content to be delivered to a client, selecting one of the plurality of edge servers to deliver the requested content to the client, and forwarding the packet across a network connection to a second load balancing server, which forwards the packet to the selected edge server. The selected edge server, in response to receiving the packet, sends across a network connection the requested content with an address for direct delivery to the client, thereby allowing the requested content to be delivered to the client while bypassing a return path through the first load balancing server. | 08-23-2012 |
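The replacement-policy filings above describe an edge server whose data store holds content for multiple providers, with each content provider setting its own policy for how its content moves into and out of the store. The following minimal Python sketch illustrates the idea; the `EdgeCache` class, its methods, and the LRU/FIFO policy options are all invented for this example and do not come from the applications themselves:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy edge-server data store with a per-provider replacement policy.

    Each content provider sets its own capacity (number of cached objects)
    and eviction order; only LRU and FIFO are sketched here.
    """

    def __init__(self):
        self._stores = {}    # provider -> OrderedDict of key -> content
        self._policies = {}  # provider -> {"capacity": int, "order": "lru"|"fifo"}

    def set_policy(self, provider, capacity, order="lru"):
        # The content provider controls how its content moves in and
        # out of the edge server's data store.
        self._policies[provider] = {"capacity": capacity, "order": order}
        self._stores.setdefault(provider, OrderedDict())

    def put(self, provider, key, content):
        store = self._stores[provider]
        policy = self._policies[provider]
        if key in store:
            del store[key]
        elif len(store) >= policy["capacity"]:
            store.popitem(last=False)  # evict the oldest entry
        store[key] = content

    def get(self, provider, key):
        store = self._stores.get(provider, {})
        if key not in store:
            return None  # miss: would be fetched from the content server
        if self._policies[provider]["order"] == "lru":
            store.move_to_end(key)  # refresh recency under LRU
        return store[key]
```

With a capacity of two and LRU ordering, reading object "a" before inserting "c" makes "b" the eviction victim, which is exactly the provider-controlled behavior the abstracts describe.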
Patent application number | Description | Published |
20120072608 | Scalability and Redundancy Enhancements for Content Streaming - Some embodiments provide methods and systems for improving the scalability and redundancy of a distributed content streaming system. Such scalability and redundancy is provided with zero configuration changes to the addressing used by content providers to publish content and zero configuration changes to existing servers of the system. The system includes ingest servers and edge servers. Content providers supply content streams to the ingest servers using a virtual or load balanced address that distributes the content streams across the ingest servers. Accordingly, ingest servers can be added or removed without changing content provider configurations. The ingest servers are configured to notify the edge servers of which content streams are available for streaming at which ingest server. When an ingest server is added to the system, its functionality may be assimilated without modifying the configurations of the other servers. Some embodiments also provide multiple caching layers. | 03-22-2012 |
20120120800 | Request Modification for Transparent Capacity Management in a Carrier Network - Some embodiments provide a capacity management agent that modifies content requests to adjust bandwidth consumption when streaming requested content from a content provider to a requesting user. The modifications include modifying a URL or header information of the request. The agent performs a process that receives a request for content of a content provider. The process identifies a parameter of the carrier network and modifies the request when the parameter satisfies a threshold. The process passes the request to the content provider and the content provider provides content that consumes a first set of resources in response to an unmodified request and a second set of resources in response to a modified request. When the parameter identifies congestion, the first set of resources is greater than the second set of resources. When the parameter identifies underutilization, the first set of resources is less than the second set of resources. | 05-17-2012 |
20120120818 | Bandwidth Modification for Transparent Capacity Management in a Carrier Network - Some embodiments provide a capacity management agent that modifies bandwidth that is allocated between an end user and a carrier network by caching requested content that is streamed at a first rate and then providing the cached content to the end user through the carrier network at a second rate. The agent performs a process that includes receiving data intended for a service region of the carrier network from an external data network. The process identifies resource availability at the service region. Next, the process passes the data to the service region at the first rate when the resource availability at the service region is not less than a threshold amount and caches the data for passing to the service region at the second rate that consumes fewer carrier network resources than the first rate when the resource availability at the service region is less than the threshold amount. | 05-17-2012 |
20120124184 | Discrete Mapping for Targeted Caching - Some embodiments provide systems and methods for implementing discrete mapping for targeted caching in a carrier network. In some embodiments, discrete mapping is implemented using a method that caches content from a content provider to a caching server. The method modifies a DNS entry at a particular DNS server to resolve a request that identifies either a hostname or a domain for the content provider to an address of the caching server so that the requested content is passed from the cached content of the caching server and not the source content provider. In some embodiments, the particular DNS server is a recursive DNS server, a local DNS server of the carrier network, or a DNS server that is not authoritative for the hostname or domain of the content provider. | 05-17-2012 |
20120239725 | Network Connection Hand-off Using State Transformations - Some embodiments provide a director agent, a server agent, and a specialized hand-off protocol for improving scalability and resource usage within a server farm. A first network connection is established between a client and the director agent in order to receive a content request from the client from which to select a server from a set of servers that is responsible for hosting the requested content. A second network connection is established between the server agent that is associated with the selected server and a protocol stack of the selected server. The first network connection is handed off to the server agent using the specialized hand-off protocol. The server agent performs network connection state parameter transformations between the two connections to create a network connection through which content can be passed from the selected server to the client without passing through the director. | 09-20-2012 |
20130046807 | Systems and Methods for Invoking Commands Across a Federation - Some embodiments provide different frameworks for seamlessly issuing and executing commands across servers of different federation participants. Each framework facilitates issuance and execution of a command that originates from a first federation participant and that is intended for execution at servers of a second federation participant. In some embodiments, a framework implements a method for enabling command interoperability between distributed platforms that each operate a set of servers on behalf of content providers. The method involves receiving a command targeting a particular configuration that a first distributed platform deploys to a server that is operated by a second distributed platform. The method identifies the server of the second distributed platform that is deployed with the particular configuration. The method communicably couples to a command invocation system of the second distributed platform and issues the command to the command invocation system for issuance of the command to the identified server. | 02-21-2013 |
20130265873 | Request Modification for Transparent Capacity Management in a Carrier Network - Some embodiments provide a capacity management agent that modifies content requests to adjust bandwidth consumption when streaming requested content from a content provider to a requesting user. The modifications include modifying a URL or header information of the request. The agent performs a process that receives a request for content of a content provider. The process identifies a parameter of the carrier network and modifies the request when the parameter satisfies a threshold. The process passes the request to the content provider and the content provider provides content that consumes a first set of resources in response to an unmodified request and a second set of resources in response to a modified request. When the parameter identifies congestion, the first set of resources is greater than the second set of resources. When the parameter identifies underutilization, the first set of resources is less than the second set of resources. | 10-10-2013 |
20130268616 | Discrete Mapping for Targeted Caching - Some embodiments provide systems and methods for implementing discrete mapping for targeted caching in a carrier network. In some embodiments, discrete mapping is implemented using a method that caches content from a content provider to a caching server. The method modifies a DNS entry at a particular DNS server to resolve a request that identifies either a hostname or a domain for the content provider to an address of the caching server so that the requested content is passed from the cached content of the caching server and not the source content provider. In some embodiments, the particular DNS server is a recursive DNS server, a local DNS server of the carrier network, or a DNS server that is not authoritative for the hostname or domain of the content provider. | 10-10-2013 |
20140043970 | Bandwidth Modification for Transparent Capacity Management in a Carrier Network - Some embodiments provide a capacity management agent that modifies bandwidth that is allocated between an end user and a carrier network by caching requested content that is streamed at a first rate and then providing the cached content to the end user through the carrier network at a second rate. The agent performs a process that includes receiving data intended for a service region of the carrier network from an external data network. The process identifies resource availability at the service region. Next, the process passes the data to the service region at the first rate when the resource availability at the service region is not less than a threshold amount and caches the data for passing to the service region at the second rate that consumes fewer carrier network resources than the first rate when the resource availability at the service region is less than the threshold amount. | 02-13-2014 |
20140047085 | Configuration Management Repository for a Distributed Platform - Some embodiments provide a repository that manages configurations for a distributed platform and that automatedly configures servers of the distributed platform with different hierarchical sets of configurations while ensuring integrity and consistency across the servers and in the repository. In some embodiments, the repository includes a data store that stores configurations for a first set of servers that are operated by a first service provider and a second set of servers that are operated by a second service provider. The data store also identifies different sets of configurations to deploy to different sets of servers from the first and second sets of servers. The repository also includes a function processor to automatedly deploy the different sets of configurations to the different sets of servers and to perform functions for updating the configurations in a manner that ensures integrity and consistency. | 02-13-2014 |
20140095592 | Network Connection Hand-Off and Hand-Back - Some embodiments provide a director agent, a server agent, and a specialized hand-off protocol for improving scalability and resource usage within a server farm. A first network connection is established between a client and the director agent in order to receive a content request from the client from which to select a server from a set of servers that is responsible for hosting the requested content. A second network connection is established between the server agent that is associated with the selected server and a protocol stack of the selected server. The first network connection is handed off to the server agent using the specialized hand-off protocol. The server agent performs network connection state parameter transformations between the two connections to create a network connection through which content can be passed from the selected server to the client without passing through the director. | 04-03-2014 |
20140143415 | Optimized Content Distribution Based on Metrics Derived from the End User - Some embodiments provide systems and methods for determining a server of a distributed hosting system to optimally distribute content to an end user. The method includes identifying an IP address of the end user. Based on the IP address, a set of servers send packets to the end user to derive performance metrics. The performance metrics are used to determine a server from the set of servers that optimally distributes content to the end user. The method modifies a configuration for resolving end user requests such that the optimal server is identified to the end user when the end user requests content from the hosting system. Some embodiments determine the optimal server by providing downloadable content that is embedded with a monitoring tool. The monitoring tool causes the end user to derive performance metrics for the hosting system when downloading a particular object from a set of servers. | 05-22-2014 |
20140195600 | Network Connection Hand-off Using State Transformations - Some embodiments provide a director agent, a server agent, and a specialized hand-off protocol for improving scalability and resource usage within a server farm. A first network connection is established between a client and the director agent in order to receive a content request from the client from which to select a server from a set of servers that is responsible for hosting the requested content. A second network connection is established between the server agent that is associated with the selected server and a protocol stack of the selected server. The first network connection is handed off to the server agent using the specialized hand-off protocol. The server agent performs network connection state parameter transformations between the two connections to create a network connection through which content can be passed from the selected server to the client without passing through the director. | 07-10-2014 |
20140280803 | Optimized Content Distribution Based on Metrics Derived from the End User - Some embodiments provide systems and methods for determining a server of a distributed hosting system to optimally distribute content to an end user. The method includes identifying an IP address of the end user. Based on the IP address, a set of servers send packets to the end user to derive performance metrics. The performance metrics are used to determine a server from the set of servers that optimally distributes content to the end user. The method modifies a configuration for resolving end user requests such that the optimal server is identified to the end user when the end user requests content from the hosting system. Some embodiments determine the optimal server by providing downloadable content that is embedded with a monitoring tool. The monitoring tool causes the end user to derive performance metrics for the hosting system when downloading a particular object from a set of servers. | 09-18-2014 |
20140355431 | Request Modification for Transparent Capacity Management in a Carrier Network - Some embodiments provide a capacity management agent that modifies content requests to adjust bandwidth consumption when streaming requested content from a content provider to a requesting user. The modifications include modifying a URL or header information of the request. The agent performs a process that receives a request for content of a content provider. | 12-04-2014 |
20140380454 | WHITE-LIST FIREWALL BASED ON THE DOCUMENT OBJECT MODEL - Some embodiments provide firewalls and methods for guarding against attacks by leveraging the Document Object Model (DOM). The firewall renders the DOM tree to produce a white-list rendering of the data which presents the non-executable elements of the data and, potentially, outputs of the executable elements of the data without the executable elements that could be used to carry a security threat. Some embodiments provide control over which nodes of the DOM tree are included in producing the white-list rendering. Specifically, a configuration file is specified to white-list various nodes from the DOM tree and the white-list rendering is produced by including the DOM tree nodes that are specified in the white-list of the configuration file while excluding those nodes that are not in the white-list. Some embodiments provide a hybrid firewall that executes a set of black-list rules over white-listed nodes of the DOM tree. | 12-25-2014 |
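The white-list firewall application above (20140380454) lends itself to a short illustration: keep only elements whose tag names appear in a configured white-list, and drop everything else, including executable `<script>` content. This sketch uses Python's standard `html.parser` rather than a full DOM implementation, and the class and function names are invented for the example:

```python
from html.parser import HTMLParser

class WhiteListRenderer(HTMLParser):
    """Minimal sketch of a DOM white-list firewall: only nodes whose tags
    appear in the configured white-list survive in the rendering; anything
    else, including <script> elements that could carry a security threat,
    is excluded along with its contents."""

    def __init__(self, whitelist):
        super().__init__(convert_charrefs=True)
        self.whitelist = whitelist
        self.out = []
        self._skip_depth = 0  # > 0 while inside a non-white-listed element

    def handle_starttag(self, tag, attrs):
        if self._skip_depth or tag not in self.whitelist:
            self._skip_depth += 1
        else:
            self.out.append(f"<{tag}>")  # attributes stripped in this sketch

    def handle_endtag(self, tag):
        if self._skip_depth:
            self._skip_depth -= 1
        elif tag in self.whitelist:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip_depth:
            self.out.append(data)

def render_whitelisted(html, whitelist):
    """Produce the white-list rendering of an HTML document."""
    renderer = WhiteListRenderer(whitelist)
    renderer.feed(html)
    return "".join(renderer.out)
```

A real implementation would walk the parsed DOM tree per the configuration file the abstract describes; the streaming parser above is just the simplest way to show the include/exclude behavior.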
Patent application number | Description | Published |
20140003427 | NETWORK SYSTEM, AND MANAGEMENT APPARATUS AND SWITCH THEREOF | 01-02-2014 |
20140115277 | METHOD AND APPARATUS FOR OFFLOADING STORAGE WORKLOAD - Exemplary embodiments provide a technique to offload storage workload. In one aspect, a computer comprises: a memory; and a controller operable to manage a relationship among port information of an initiator port, information of a logical volume storing data from the initiator port, and port information of a target port to be used for storing data from the initiator port to the logical volume, and to cause another computer to process a storage function of a storage system including the logical volume and the target port by creating a virtual machine for executing the storage function and by configuring the relationship on said another computer, said another computer sending the data to the logical volume of the storage system after executing the storage function. In specific embodiments, by executing the storage function on said another computer, the workload of executing the storage function on the storage system is eliminated. | 04-24-2014 |
20140181804 | METHOD AND APPARATUS FOR OFFLOADING STORAGE WORKLOAD - An aspect of the invention is directed to a storage management computer for managing offloading of storage workload between a storage controller of a storage system and one or more host computers. The storage management computer comprises: a memory; and a controller operable to request a virtual machine management computer to register the storage controller as a host computer, and to send, to the virtual machine management computer, storage processes information of storage processes in the storage system which can be offloaded as virtual machines in order for the virtual machine management computer to register the storage processes as virtual machines. | 06-26-2014 |
20140317366 | METHOD AND APPARATUS FOR REMOTE STORAGE PERFORMANCE DATA COPY - A storage system comprises: a storage device; and a controller operable to manage a primary volume in the storage system of a remote copy pair with a secondary volume of another storage system by using a storage area of the storage device, and send a first type copy data to said another storage system according to a remote copy procedure of the remote copy pair, so that said another storage system can update the secondary volume based on the first type copy data. The controller is operable to create a second type copy data by using performance data of the primary volume, and to send the second type copy data to said another storage system according to the remote copy procedure, so that said another storage system can use the performance data of the primary volume for performance data of the secondary volume based on the second type copy data. | 10-23-2014 |
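The last abstract's idea of sending a second type of copy data (performance statistics) over the same remote-copy procedure can be sketched as a demultiplexer at the secondary site. The record layout and names below are hypothetical, chosen only to illustrate dispatching on the copy-data type; the patent does not specify this representation.

```python
from dataclasses import dataclass

COPY_DATA = 1  # first type: updates to the secondary volume's contents
PERF_DATA = 2  # second type: performance statistics of the primary volume

@dataclass
class CopyRecord:
    kind: int      # COPY_DATA or PERF_DATA
    volume: str    # volume identifier
    payload: dict  # block updates, or performance statistics

class SecondarySite:
    """Receives the remote-copy stream and demultiplexes by record type."""

    def __init__(self):
        self.volumes = {}      # volume -> {block: data} (toy model)
        self.performance = {}  # volume -> latest performance statistics

    def receive(self, rec: CopyRecord):
        if rec.kind == COPY_DATA:
            # First-type data updates the secondary volume's contents.
            self.volumes.setdefault(rec.volume, {}).update(rec.payload)
        elif rec.kind == PERF_DATA:
            # Second-type data: the primary's performance statistics are
            # adopted as the secondary volume's performance data.
            self.performance[rec.volume] = rec.payload
```

Both record types travel over the one remote-copy channel, so the secondary's performance data stays in step with the volume contents it mirrors.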
Patent application number | Description | Published |
20120036073 | INTELLIGENT ESTIMATES IN AUTHORIZATION - Techniques for providing an intelligent estimated amount for authorization include receiving a request to calculate an estimated amount for a transaction where the final amount is not known at the time of authorization. A payment processing network calculates the estimated amount based on several factors associated with the transaction and provides the estimated amount to an issuer for authorization. | 02-09-2012 |
20120041881 | SECURING EXTERNAL SYSTEMS WITH ACCOUNT TOKEN SUBSTITUTION - Systems, apparatuses, and methods for providing an account token to an external entity during the lifecycle of a payment transaction. In some embodiments, an external entity may be a merchant computer requesting authorization of a payment message. In other embodiments, the external entity may be a support computer providing support functions to a payment processing network or a merchant. | 02-16-2012 |
20130054465 | LEAST COST ROUTING AND MATCHING - Systems and methods are disclosed for routing a transaction based on an assessment of costs associated with multiple payment processing networks. A transaction broker server determines how to route a received authorization request message for a transaction based on a first cost associated with processing the transaction via a first payment processing network and a second cost associated with processing the transaction via a second payment processing network. If the first cost is less than or equal to the second cost, the authorization request message is routed to the first payment processing network. If the first cost exceeds the second cost, at least one rule is applied to determine whether the first cost is to be reduced. If the at least one rule is satisfied, the first cost is reduced and the authorization request message is routed to the first payment processing network. | 02-28-2013 |
20130197991 | SYSTEMS AND METHODS TO PROCESS PAYMENTS BASED ON PAYMENT DEALS - In one aspect, an account identifier is associated with a deal purchased by a user from a deal site. The deal has a face value applicable to a transaction with a predetermined merchant, if the transaction satisfies one or more predetermined criteria. The user pays the deal site an amount smaller than the face value. When the account identifier is used at a transaction terminal to initiate a payment transaction, a transaction handler determines whether the payment transaction satisfies the one or more predetermined criteria; and if so, the transaction handler provides the transaction terminal with an authorization response identifying the remaining balance, which is determined by deducting the face value from the payment transaction. The account identifier may be a one-time account number generated specifically for the deal purchased by the user, or an account number of the user used to purchase the deal from the deal site. | 08-01-2013 |
20130198075 | PROCESSING MONITOR SYSTEM AND METHOD - Systems and methods for monitoring transaction data and providing an indication regarding a performance parameter to a payment processing entity. Transaction data associated with a plurality of transactions conducted during a time interval is received. A server computer determines that the received transaction data meets a threshold. It is further determined whether a previous indication that the threshold has been met was provided to a payment processing entity, the previous indication being associated with a plurality of previous transactions conducted during a previous time interval. If the previous indication was not provided, an indication that the threshold has been met is provided to the payment processing entity, the indication including information regarding a performance parameter of the payment processing entity. | 08-01-2013 |
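The least-cost routing decision in 20130054465 above reduces to a small comparison-plus-rules procedure. A minimal sketch, with costs in integer cents and rule predicates standing in for the abstract's unspecified cost-reduction rules (all names are illustrative, not the patented implementation):

```python
def route_authorization(cost_a, cost_b, rules, txn):
    """Route an authorization request between two payment processing
    networks, per the 20130054465 abstract.

    cost_a, cost_b: per-transaction costs (integer cents) for networks
    A and B. rules: list of (predicate, reduce) pairs, consulted only
    when network A is initially the more expensive choice."""
    if cost_a <= cost_b:
        # Network A is already no more expensive: route there directly.
        return "network_a", cost_a
    for predicate, reduce in rules:
        if predicate(txn):
            # A rule is satisfied: reduce A's cost and route to A anyway.
            return "network_a", reduce(cost_a)
    # No rule applies: fall back to the cheaper network B.
    return "network_b", cost_b
```

For instance, a rule might grant a 5-cent discount on large-ticket transactions, making network A the chosen route even when its list cost exceeds network B's.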
Patent application number | Description | Published |
20090099178 | INDAZOLE COMPOUNDS AND METHODS OF USE THEREOF - This invention is directed to Indazole Compounds or pharmaceutically acceptable salts, solvates and hydrates thereof. The Indazole Compounds have utility in the treatment or prevention of a wide range of diseases and disorders that are responsive to the inhibition, modulation or regulation of kinases, such as inflammatory diseases, abnormal angiogenesis and diseases related thereto, cancer, atherosclerosis, a cardiovascular disease, a renal disease, an autoimmune condition, macular degeneration, disease-related wasting, an asbestos-related condition, pulmonary hypertension, diabetes, obesity, pain and others. Thus, methods of treating or preventing such diseases and disorders are also disclosed, as are pharmaceutical compositions comprising one or more of the Indazole Compounds. This invention is based, in part, upon the discovery of a novel class of 5-triazolyl substituted indazole molecules that have potent activity with respect to the modulation of protein kinases. Thus, the invention encompasses orally active molecules as well as parenterally active molecules which can be used at lower doses or serum concentrations for treating diseases or disorders associated with protein kinase signal transduction. | 04-16-2009 |