Patent application number | Description | Published |
20130346227 | Performance-Based Pricing for Cloud Computing - Described are performance-based pricing models for pricing execution of a client job in a cloud service. Client-provided performance-related parameters are used to determine a price. The price may be a minimum bid price that is evaluated against a bid received from a client bidder to accept or reject the bid. Alternatively, the price may be returned as a quote. For batch application-type jobs, performance parameters include a work volume parameter and a deadline or the like. For an interactive-type application job, example performance-related parameters may include an average load parameter, a peak load parameter, an acceptance rate parameter, a minimum capacity parameter, a maximum capacity parameter, and/or a time window parameter over which load is specified. | 12-26-2013 |
20130346572 | PROCESS MIGRATION IN DATA CENTER NETWORKS - There is provided a method and system for process migration in a data center network. The method includes selecting processes to be migrated from a number of overloaded servers within a data center network based on an overload status of each overloaded server. Additionally, the method includes selecting, for each selected process, one of a number of underloaded servers to which to migrate the selected process based on an underload status of each underloaded server, and based on a parameter of a network component by which the selected process is to be migrated. The method also includes migrating each selected process to the selected underloaded server such that a migration finishes within a specified budget. | 12-26-2013 |
20140006861 | PROBLEM INFERENCE FROM SUPPORT TICKETS | 01-02-2014 |
20140006862 | MIDDLEBOX RELIABILITY | 01-02-2014 |
20140136684 | CHARACTERIZING SERVICE LEVELS ON AN ELECTRONIC NETWORK - The described implementations relate to processing of electronic data. One implementation is manifest as a system that can include an event analysis component and one or more processing devices configured to execute the event analysis component. The event analysis component can be configured to obtain events from event logs, the events reflecting failures by one or more network devices in one or more data centers, and to characterize a service level of an application or a network device based on the events. For example, the event analysis component can be configured to characterize the availability of an application based on one or more network stamps of the application. | 05-15-2014 |
20140136690 | Evaluating Electronic Network Devices In View of Cost and Service Level Considerations - The described implementations relate to processing of electronic data. One implementation is manifest as one or more computer-readable storage devices comprising instructions which, when executed by one or more processing devices, cause the one or more processing devices to perform acts. The acts can include determining service levels provided by multiple network configurations, determining costs associated with the multiple network configurations, and evaluating the multiple network configurations based on both the costs and the service levels. The multiple network configurations can include redundantly-deployed devices. Furthermore, some implementations may determine cost/service level metrics that can be used to compare devices based on expected costs to provide a particular service level. | 05-15-2014 |
20140379895 | NETWORK EVENT PROCESSING AND PRIORITIZATION - The described implementations relate to processing of electronic data. One implementation is manifest as a system that can include an event analysis component and one or more processing devices configured to execute the event analysis component. The event analysis component can be configured to obtain multiple events that are generated by network devices in a networking environment. The event analysis component can also be configured to identify impactful events from the multiple events. The impactful events can have associated device-level or link-level impacts. The event analysis component can also be configured to determine one or more failure metrics for an individual impactful event. The one or more failure metrics can include at least a first redundancy-related failure metric associated with redundant failovers in the networking environment. | 12-25-2014 |
20150113118 | HIERARCHICAL NETWORK ANALYSIS SERVICE - A hierarchical network analytics system operated by a computing device or system is described. In some example techniques, the analytics system may determine results of a plurality of first level analyses each based at least in part on results of a respective plurality of data queries that return respective subsets of a plurality of types of network data. The analytics system may determine a result of a second level analysis based at least in part on results of the plurality of first level analyses. | 04-23-2015 |
20150186193 | GENERATION OF CLIENT-SIDE APPLICATION PROGRAMMING INTERFACES - Techniques for generating a client-side Application Programming Interface (API) are described herein. The techniques may include analyzing source code that is related to an API of a service provider and/or content that describes the source code and/or the API of the service provider. The analysis may identify characteristics of the API of the service provider, such as routines, characteristics of the routines, data constructs, characteristics of the data constructs, and so on. The techniques may also include generating a representation to represent the characteristics of the API of the service provider and generating a client-side API based on the representation. The client-side API may include a library of client-side routines and/or data constructs that provide access to routines and/or data constructs that are made available via the API of the service provider. | 07-02-2015 |
20150271008 | IDENTIFYING TROUBLESHOOTING OPTIONS FOR RESOLVING NETWORK FAILURES - Described herein are various technologies pertaining to providing assistance to an operator in a data center with respect to failures in the data center. An alarm is received, and a failing device is identified based upon content of the alarm. Failure conditions of the alarm are mapped to a failure symptom that may be exhibited by the failing device, and troubleshooting options previously employed to mitigate the failure symptom are retrieved from historical data. Labels are respectively assigned to the troubleshooting options, where a label is indicative of a probability that a troubleshooting option to which the label has been assigned will mitigate the failure symptom. | 09-24-2015 |
20160036837 | DETECTING ATTACKS ON DATA CENTERS - The claimed subject matter includes a system and method for detecting attacks on a data center. The method includes sampling a packet stream by coordinating at multiple levels of data center architecture, based on specified parameters. The method also includes processing the sampled packet stream to identify one or more data center attacks. Further, the method includes generating attack notifications for the identified data center attacks. | 02-04-2016 |
20160036838 | DATA CENTER ARCHITECTURE THAT SUPPORTS ATTACK DETECTION AND MITIGATION - Described herein are various technologies pertaining to identification of inbound and outbound network and application attacks with respect to a data center. Commodity servers are used to monitor ingress and egress traffic flows, and anomalies are detected in the traffic flows. Responsive to detecting an anomaly, a mitigation strategy is executed to mitigate damage caused by a cyber-attack. | 02-04-2016 |
20160070784 | IDENTIFYING MATHEMATICAL OPERATORS IN NATURAL LANGUAGE TEXT FOR KNOWLEDGE-BASED MATCHING - Disclosed herein is a system and method for taking a snapshot or input from a source and identifying appropriate documents in a knowledge base that are applicable to the input. The system identifies documents that are applicable to the query by identifying comparative features/statements found in the natural language text documents and evaluating those comparative features with the conditions of the input. When the conditions of the comparative features evaluate as true against the input conditions, the document is considered a match. The system processes the documents through a value type filter to understand the mathematical equivalent of the comparative feature and uses this mathematical equivalent in the evaluation of the document and input. | 03-10-2016 |
20160094413 | Network Resource Governance in Multi-Tenant Datacenters - Bandwidth requirement specifications in a multi-tenant datacenter are implemented using resource-bundle level queues and tenant level queues. Data is transmitted via the resource-bundle level queues and the tenant level queues according to the bandwidth requirement specifications, such that minimum bandwidth requirements are maintained for data being transmitted and for data being received. | 03-31-2016 |
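Several entries in the table above describe concrete selection algorithms. As one illustration, the process-migration method of 20130346572 (select processes from overloaded servers, then pick an underloaded target for each) can be sketched as a greedy heuristic. The class names, the headroom parameter, and the smallest-process-first order below are assumptions made for illustration; they are not the claimed method, which also weights network-component parameters and a migration budget.

```python
# Hypothetical sketch of threshold-based process migration between servers.
# The greedy selection order and the `headroom` parameter are illustrative
# assumptions, not the approach claimed in application 20130346572.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: float                                 # maximum sustainable load
    processes: dict = field(default_factory=dict)   # process id -> load

    @property
    def load(self):
        return sum(self.processes.values())

def plan_migrations(servers, headroom=0.1):
    """Move processes off servers above capacity onto the least-loaded
    servers that can absorb them while keeping `headroom` spare capacity."""
    plan = []
    overloaded = [s for s in servers if s.load > s.capacity]
    underloaded = [s for s in servers if s.load < s.capacity * (1 - headroom)]
    # Drain the most overloaded servers first.
    for src in sorted(overloaded, key=lambda s: s.load - s.capacity, reverse=True):
        # Shed the smallest processes until the server fits again.
        for pid, pload in sorted(src.processes.items(), key=lambda kv: kv[1]):
            if src.load <= src.capacity:
                break
            fitting = [t for t in underloaded
                       if t.load + pload <= t.capacity * (1 - headroom)]
            if not fitting:
                continue
            dst = min(fitting, key=lambda t: t.load)   # least-loaded target
            dst.processes[pid] = src.processes.pop(pid)
            plan.append((pid, src.name, dst.name))
    return plan
```

A real implementation would additionally score candidate targets by the network path the migration traverses, per the abstract.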
Patent application number | Description | Published |
20110239010 | MANAGING POWER PROVISIONING IN DISTRIBUTED COMPUTING - One or more computers manage power consumption in a plurality of computers by repeatedly evaluating power consumption of pluralities of computers such that any given plurality of computers is evaluated by aggregating indicia of power consumption of the individual computers in the given plurality. The evaluation identifies or predicts pluralities of computers that are over-consuming power and identifies pluralities of computers that are under-consuming power. A first plurality of computers identified as over-consuming power is sent messages to instruct some of its comprising computers or virtual machines (VMs) to lower their computational workload. A second plurality of computers identified as under-consuming power is sent messages instructing some of its computers to increase their computational workload. | 09-29-2011 |
20110276951 | MANAGING RUNTIME EXECUTION OF APPLICATIONS ON CLOUD COMPUTING SYSTEMS - Instances of a same application execute on different respective hosts in a cloud computing environment. Instances of a monitor application are distributed to concurrently execute with each application instance on a host in the cloud environment, which provides user access to the application instances. The monitor application may be generated from a specification, which may define properties of the application/cloud to monitor and rules based on the properties. Each rule may have one or more conditions. Each monitor instance running on a host, monitors execution of the corresponding application instance on that host by obtaining from the host information regarding values of properties on the host per the application instance. Each monitor instance may evaluate the local host information or aggregate information collected from hosts running other instances of the monitor application, to repeatedly determine whether a rule condition has been violated. On violation, a user-specified handler is triggered. | 11-10-2011 |
20110282982 | DYNAMIC APPLICATION PLACEMENT BASED ON COST AND AVAILABILITY OF ENERGY IN DATACENTERS - An optimization framework for hosting sites that dynamically places application instances across multiple hosting sites based on the energy cost and availability of energy at these sites, application SLAs (service level agreements), and cost of network bandwidth between sites, just to name a few. The framework leverages a global network of hosting sites, possibly co-located with renewable and non-renewable energy sources, to dynamically determine the best datacenter (site) suited to place application instances to handle incoming workload at a given point in time. Application instances can be moved between datacenters subject to energy availability and dynamic power pricing, for example, which can vary hourly in day-ahead markets and in a time span of minutes in realtime markets. | 11-17-2011 |
20110320520 | DYNAMIC PARTITIONING OF APPLICATIONS BETWEEN CLIENTS AND SERVERS - Optimization mechanism that dynamically splits the computation in an application (e.g., cloud), that is, which parts run on a client (e.g., mobile) and which parts run on servers in a datacenter. This optimization can be based on application characteristics, network connectivity (e.g., latency, bandwidth, etc.) between the client and the datacenter, power or energy available at the client, size of the application objects, load in the datacenter, security and privacy concerns (e.g., cannot share all data on the client with the datacenter), and other criteria, as desired. | 12-29-2011 |
20120109705 | DATA CENTER SYSTEM THAT ACCOMMODATES EPISODIC COMPUTATION - A data center system is described which includes multiple data centers powered by multiple power sources, including any combination of renewable power sources and on-grid utility power sources. The data center system also includes a management system for managing execution of computational tasks by moving data components associated with the computational tasks within the data center system, in lieu of, or in addition to, moving power itself. The movement of data components can involve performing pre-computation or delayed computation on data components within any data center, as well as moving data components between data centers. The management system also includes a price determination module for determining prices for performing the computational tasks based on different pricing models. The data center system also includes a “stripped down” architecture to complement its use in the above-summarized data-centric environment. | 05-03-2012 |
20120130554 | DYNAMICALLY PLACING COMPUTING JOBS - This document describes techniques for dynamically placing computing jobs. These techniques enable reduced financial and/or energy costs to perform computing jobs at data centers. | 05-24-2012 |
20120158447 | PRICING BATCH COMPUTING JOBS AT DATA CENTERS - This document describes techniques for pricing batch computing jobs based at least in part on temporally- or spatially-dependent costs. By so doing, prices offered to perform a batch computing job better reflect the costs to perform that batch computing job. | 06-21-2012 |
20120330711 | RESOURCE MANAGEMENT FOR CLOUD COMPUTING PLATFORMS - A system for managing allocation of resources based on service level agreements between application owners and cloud operators. Under some service level agreements, the cloud operator may have responsibility for managing allocation of resources to the software application and may manage the allocation such that the software application executes within an agreed performance level. Operating a cloud computing platform according to such a service level agreement may alleviate for the application owners the complexities of managing allocation of resources and may provide greater flexibility to cloud operators in managing their cloud computing platforms. | 12-27-2012 |
20120331113 | RESOURCE MANAGEMENT FOR CLOUD COMPUTING PLATFORMS - A system for managing allocation of resources based on service level agreements between application owners and cloud operators. Under some service level agreements, the cloud operator may have responsibility for managing allocation of resources to the software application and may manage the allocation such that the software application executes within an agreed performance level. Operating a cloud computing platform according to such a service level agreement may alleviate for the application owners the complexities of managing allocation of resources and may provide greater flexibility to cloud operators in managing their cloud computing platforms. | 12-27-2012 |
20130007753 | ELASTIC SCALING FOR CLOUD-HOSTED BATCH APPLICATIONS - An elastic scaling cloud-hosted batch application system and method that performs automated elastic scaling of the number of compute instances used to process batch applications in a cloud computing environment. The system and method use automated elastic scaling to minimize job completion time and monetary cost of resources. Embodiments of the system and method use a workload-driven approach to estimate a work volume to be performed. This is based on task arrivals and job execution times. Given the work volume estimate, an adaptive controller dynamically adapts the number of compute instances to minimize the cost and completion time. Embodiments of the system and method also mitigate startup delays by computing a work volume in the near future and gradually starting up additional compute instances before they are needed. Embodiments of the system and method also ensure fairness among batch applications and concurrently executing jobs. | 01-03-2013 |
20130151683 | LOAD BALANCING IN CLUSTER STORAGE SYSTEMS - Methods and systems for load balancing in a cluster storage system are disclosed herein. The method includes identifying a source node within the cluster storage system from which to move a number of data objects, wherein the source node includes a node with a total load exceeding a threshold value. The method also includes selecting the data objects to move from the source node, wherein the data objects are chosen such that the total load of the source node no longer exceeds the threshold value. The method further includes determining a target node within the cluster storage system based on a proximity to the source node and the total load of the target node and moving the data objects from the source node to the target node. | 06-13-2013 |
20130179371 | SCHEDULING COMPUTING JOBS BASED ON VALUE - A plurality of requests for execution of computing jobs on one or more devices that include a plurality of computing resources may be obtained, the one or more devices configured to flexibly allocate the plurality of computing resources, each of the computing jobs including job completion values representing a worth to a respective user that is associated with execution completion times of each respective computing job. The computing resources may be scheduled based on the job completion values associated with each respective computing job. | 07-11-2013 |
20130232382 | METHOD AND SYSTEM FOR DETERMINING THE IMPACT OF FAILURES IN DATA CENTER NETWORKS - There is provided a method and system for determining an impact of failures in a data center network. The method includes identifying failures for the data center network based on data about the data center network and grouping the failures into failure event groups, wherein each failure event group includes related failures for a network element. The method also includes estimating the impact of the failures for each of the failure event groups by correlating the failures with traffic for the data center network. | 09-05-2013 |
20130246208 | ALLOCATION OF COMPUTATIONAL RESOURCES WITH POLICY SELECTION - A method for adaptively allocating resources to a plurality of jobs. The method comprises selecting a first policy from a plurality of policies for a first job in the plurality of jobs by using a policy selection mechanism, allocating at least one resource to the first job in accordance with the first policy, and in response to completion of the first job, updating the policy selection mechanism to obtain an updated policy selection mechanism by using at least one processor. Updating the policy selection mechanism comprises evaluating the performance of the first policy with respect to the first job by calculating a value of a metric of utility for the first policy based on conditions associated with execution of the first job and updating the policy selection mechanism based on the calculated value and a delay of execution of the first job. | 09-19-2013 |
20140365402 | DATA CENTER SYSTEM THAT ACCOMMODATES EPISODIC COMPUTATION - A data center system is described which includes multiple data centers powered by multiple power sources, including any combination of renewable power sources and on-grid utility power sources. The data center system also includes a management system for managing execution of computational tasks by moving data components associated with the computational tasks within the data center system, in lieu of, or in addition to, moving power itself. The movement of data components can involve performing pre-computation or delayed computation on data components within any data center, as well as moving data components between data centers. The management system also includes a price determination module for determining prices for performing the computational tasks based on different pricing models. The data center system also includes a “stripped down” architecture to complement its use in the above-summarized data-centric environment. | 12-11-2014 |
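The load-balancing entry above (20130151683) describes a three-step loop: find a source node whose total load exceeds a threshold, choose objects to move until it no longer does, and pick a target by proximity and load. A minimal sketch of that loop follows; the data layout, the largest-object-first order, and the `(distance, load)` ranking are illustrative assumptions rather than the claimed system.

```python
# Illustrative sketch of cluster-storage load balancing as summarized in
# 20130151683. The scoring and tie-breaking choices here are assumptions.

def rebalance(nodes, threshold, distance):
    """nodes: {node_name: {object_id: load}};
    distance: callable (a, b) -> proximity cost (e.g., hop count).
    Returns a list of (object_id, source, target) moves."""
    total = lambda n: sum(nodes[n].values())
    moves = []
    for src in [n for n in nodes if total(n) > threshold]:
        # Move the largest objects first so few moves are needed.
        for obj in sorted(nodes[src], key=nodes[src].get, reverse=True):
            if total(src) <= threshold:
                break
            load = nodes[src][obj]
            candidates = [n for n in nodes
                          if n != src and total(n) + load <= threshold]
            if not candidates:
                continue
            # Rank targets by proximity first, then by current load.
            dst = min(candidates, key=lambda n: (distance(src, n), total(n)))
            nodes[dst][obj] = nodes[src].pop(obj)
            moves.append((obj, src, dst))
    return moves
```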
Patent application number | Description | Published |
20120144177 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 06-07-2012 |
20120144178 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 06-07-2012 |
20120144179 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 06-07-2012 |
20130346734 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 12-26-2013 |
20140331035 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 11-06-2014 |
20150234666 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 08-20-2015 |
Patent application number | Description | Published |
20090254631 | DEFINING CLIPPABLE SECTIONS OF A NETWORK DOCUMENT AND SAVING CORRESPONDING CONTENT - A system and a method may be provided. The system may include a server and one or more user processing devices. The user processing devices may execute an application, such as, for example, a browser. Via the application, a user may define clippable sections of a network document without executing any scripts. The defined clippable sections may be stored on a user processing device or on a server. When viewing a network document, a user may select a portion of the network document corresponding to a defined clippable section to cause corresponding content to be saved to a list. The list may be stored on the user processing device or on the server. When the list is stored on the server, the list may be made shareable with other users. | 10-08-2009 |
20160065626 | CROSS DEVICE TASK CONTINUITY - Systems and methods for cross device and/or cross operating system task continuity between devices for frictionless task engagement and reengagement. Task continuity can provide for simple detection and selection of recently viewed and/or modified tasks. Task continuity can provide for simple engagement of new tasks in applications and/or websites, the new tasks being related to recently presented and/or modified tasks. Responsive to selection of the recently presented and/or modified task, the task can be seamlessly reengaged from the point at which it was last presented and/or modified. Responsive to selection of a new task, the task can be engaged from a starting point. Upon completion of the task on one device, the task can be closed across devices. Task continuity can be enabled on a single device or across a plurality of devices. Task continuity can be enabled on a single operating system, or across a plurality of operating systems. | 03-03-2016 |
20160077685 | Operating System Virtual Desktop Techniques - Operating system virtual desktop techniques are described. In one or more implementations, a plurality of virtual desktops are implemented by a single operating system of a computing device. Each of the virtual desktops includes a user interface that is configured to have an associated collection of windows that correspond to applications. Access to the plurality of virtual desktops is managed by the operating system, and a user may navigate by switching between the plurality of virtual desktops to interact with each desktop's associated collection of representations of applications and windows corresponding to the applications. | 03-17-2016 |
20160085388 | Desktop Environment Differentiation in Virtual Desktops - Desktop environment differentiation in virtual desktop techniques is described. In one or more implementations, a user is logged into a corresponding single user account of an operating system of a computing device. Functionality accessible via the single user account is exposed to implement a plurality of virtual desktops associated with the single user account. Each of the plurality of virtual desktops includes a user interface that is configured to have an associated collection of windows corresponding to applications and desktop environments that are differentiated, one from another. | 03-24-2016 |
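The two virtual-desktop entries above both turn on one data structure: a single operating system owning several desktops, each with its own window collection, with switching managed centrally. A hypothetical sketch of that structure, using class and method names that are assumptions rather than anything disclosed in the applications:

```python
# Hypothetical sketch of single-account virtual desktops: one OS instance,
# several desktops, each owning its window collection. Names are assumed.

class VirtualDesktops:
    def __init__(self):
        self.desktops = {0: []}            # desktop id -> list of windows
        self.active = 0

    def create(self):
        new_id = max(self.desktops) + 1
        self.desktops[new_id] = []
        return new_id

    def open_window(self, title):
        # New windows join whichever desktop is currently active.
        self.desktops[self.active].append(title)

    def switch(self, desktop_id):
        if desktop_id not in self.desktops:
            raise KeyError(f"no desktop {desktop_id}")
        self.active = desktop_id

    def visible_windows(self):
        return list(self.desktops[self.active])
```

Desktop-environment differentiation (20160085388) would extend each desktop entry with its own theming/environment settings alongside the window list.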
Patent application number | Description | Published |
20110238546 | MANAGING COMMITTED PROCESSING RATES FOR SHARED RESOURCES - Commitments against various resources can be dynamically adjusted for customers in a shared-resource environment. A customer can provision a data volume with a committed rate of Input/Output Operations Per Second (IOPS) and pay only for that commitment (plus any overage), for example, as well as the amount of storage requested. The customer can subsequently adjust the committed rate of IOPS by submitting an appropriate request, or the rate can be adjusted automatically based on any of a number of criteria. Data volumes for the customer can be migrated, split, or combined in order to provide the adjusted rate. The interaction of the customer with the data volume does not need to change, independent of adjustments in rate or changes in the data volume, other than the rate at which requests are processed. | 09-29-2011 |
20110238857 | COMMITTED PROCESSING RATES FOR SHARED RESOURCES - Customers of a shared-resource environment can provision resources in a fine-grained manner that meets specific performance requirements. A customer can provision a data volume with a committed rate of Input/Output Operations Per Second (IOPS) and pay only for that commitment (plus any overage), and the amount of storage requested. The customer will then at any time be able to complete at least the committed rate of IOPS. If the customer generates submissions at a rate that exceeds the committed rate, the resource can still process at the higher rate when the system is not under pressure. Even under pressure, the system will deliver at least the committed rate. Multiple customers can be provisioned on the same resource, and more than one customer can have a committed rate on that resource. Customers without committed or guaranteed rates can utilize the uncommitted portion, or committed portions that are not being used. | 09-29-2011 |
20150026430 | VIRTUAL DATA STORAGE SERVICE WITH SPARSE PROVISIONING - Virtual data stores may be sparsely provisioned by virtual data storage services in a manner that controls risk of implementation resource shortages. Relationships between requested data storage space size, data storage server capacity, allocated data storage space size and/or allocated data storage space utilization may be tracked on a per data store, per customer, per data storage server, and/or a per virtual data storage service basis. For each such basis, a set of constraints may be specified to control the relationships. The set of constraints may be enforced during implementation resource allocation, and by migration of data storage space portions to different implementation resources as part of a sparse provisioning load balancing. Sparse provisioning details may be made explicit to virtual data storage service customers to varying degrees including explicit, aggregate on a per customer basis, and aggregate on a per virtual data storage service basis. | 01-22-2015 |
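The two committed-IOPS entries above describe a commitment that is always honored, plus excess capacity usable when the system is not under pressure, with customer-initiated adjustments. A toy allowance model under those assumptions (all names and numbers are illustrative, not from the applications):

```python
class CommittedRateVolume:
    """Sketch of a data volume with a committed IOPS rate plus burst capacity."""

    def __init__(self, committed_iops, burst_iops):
        self.committed_iops = committed_iops
        self.burst_iops = burst_iops  # extra ops/sec usable only when uncontended

    def allowance(self, system_under_pressure):
        # The committed rate is delivered even under pressure; excess capacity
        # is available only when the shared resource is not contended.
        if system_under_pressure:
            return self.committed_iops
        return self.committed_iops + self.burst_iops

    def adjust_commitment(self, new_iops):
        # Customer-requested adjustment; the abstracts note the backing volume
        # may be migrated, split, or combined to provide the new rate.
        self.committed_iops = new_iops

vol = CommittedRateVolume(committed_iops=1000, burst_iops=500)
```

Uncommitted customers would draw from the same `burst_iops` pool, which is why the excess is only guaranteed when the resource is idle.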
20140180862 | MANAGING OPERATIONAL THROUGHPUT FOR SHARED RESOURCES - Usage of shared resources can be managed by enabling users to obtain different types of guarantees at different times for various types and/or levels of resource capacity. A user can select to have an amount or rate of capacity dedicated to that user. A user can also select reserved capacity for at least a portion of the requests, tasks, or program execution for that user, where the user has priority to that capacity but other users can utilize the excess capacity during other periods. Users can alternatively specify to use the excess capacity or other variable, non-guaranteed capacity. The capacity can be for any appropriate functional aspect of a resource, such as computational capacity, throughput, latency, bandwidth, and storage. Users can submit bids for various types and combinations of excess capacity, and winning bids can receive dedicated use of the excess capacity for at least a period of time. | 06-26-2014 |
20150106331 | DATA SET CAPTURE MANAGEMENT WITH FORECASTING - A set of virtualized computing services may include multiple types of virtualized data store differentiated by characteristics such as latency, throughput, durability and cost. A sequence of captures of a data set from one data store to another may be scheduled to achieve a variety of virtualized computing service user and provider goals such as lowering a probability of data loss, lowering costs, and computing resource load leveling. Data set captures may be scheduled according to policies specifying fixed and flexible schedules and conditions including flexible scheduling windows, target capture frequencies, probability of loss targets and/or cost targets. Capture lifetimes may also be managed with capture retention policies, which may specify fixed and flexible lifetimes and conditions including cost targets. Such data set capture policies may be specified with a Web-based administrative interface to a control plane of the virtualized computing services. | 04-16-2015 |
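The capture-management entry above schedules data set captures inside flexible windows to achieve goals like resource load leveling. One way to read that is "pick the slot in the window with the lowest forecast load"; the sketch below assumes a simple hourly forecast dict, which is an invented example, not the application's policy format:

```python
def schedule_capture(window, predicted_load):
    """Pick the capture slot inside a flexible scheduling window with the
    lowest forecast resource load (the load-leveling goal in the abstract)."""
    start, end = window
    candidates = {t: predicted_load[t] for t in range(start, end + 1)}
    return min(candidates, key=candidates.get)

# Hypothetical forecast: hour -> predicted utilization of the target data store.
load = {0: 0.9, 1: 0.4, 2: 0.7, 3: 0.2, 4: 0.8}
slot = schedule_capture((1, 3), load)  # policy allows capture in hours 1..3
```

Fixed schedules are the degenerate case where the window is a single slot; cost and probability-of-loss targets would add further terms to the objective.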
20090222448 | ELEMENTS OF AN ENTERPRISE EVENT FEED - An enterprise-based social networking application. The events pool for the social networking application may be automatically populated without requiring direct individual participation in the social networking application. Furthermore, networks may be established automatically, without an express invitation. The default network may be based on a participant's communication history and/or organization context within the enterprise. The participant may then edit or expand the network without necessarily requesting permission for the individuals being added, and without necessarily being part of that individual's network. | 09-03-2009 |
20090222750 | ENTERPRISE SOCIAL NETWORKING SOFTWARE ARCHITECTURE - An enterprise-based social networking application. Events for individuals may be collected from various enterprise-based information systems automatically using adaptors that are specially tailored for particular types of information systems. Such events may then be used to populate event feeds regarding individuals in that enterprise. A filtering model for formulating event feeds identifies events by individual, event type, and event time. The filter also identifies which individuals are in which group of a participant, and identifies which groups correspond to which event types. Incoming events may then be filtered into the event feeds depending on the group to which the individual belongs. A user interface for a participant to view and edit group membership is also provided. | 09-03-2009 |
20100115033 | DO NOT DISTURB FILTER FOR ELECTRONIC MESSAGES - Data is received defining a time period during which a notification of receipt should not be provided when an electronic message is received. Data may also be received defining certain types of messages for which notification of receipt should be provided during the time period. During the duration of the time period, no notification of receipt is provided for received electronic messages that are not within one of the specified types. After the time period has elapsed, notification of receipt is provided for electronic messages received during the time period and for which no notification of receipt was previously provided. Electronic messages may be sent during the time period and electronic messages received prior to the time period may be displayed for reading during the time period. | 05-06-2010 |
20140324977 | ENTERPRISE SOCIAL NETWORKING SOFTWARE ARCHITECTURE - An enterprise-based social networking application. Events for individuals may be collected from various enterprise-based information systems automatically using adaptors that are specially tailored for particular types of information systems. Such events may then be used to populate event feeds regarding individuals in that enterprise. A filtering model for formulating event feeds identifies events by individual, event type, and event time. The filter also identifies which individuals are in which group of a participant, and identifies which groups correspond to which event types. Incoming events may then be filtered into the event feeds depending on the group to which the individual belongs. Filtering an event from an events pool to formulate an event feed is also provided. | 10-30-2014 |
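The filtering model described in the two ENTERPRISE SOCIAL NETWORKING entries admits an event into a participant's feed when the event's individual belongs to one of the participant's groups and that group is configured for the event's type. A minimal sketch under those assumptions (the data shapes are invented for illustration):

```python
def filter_feed(events, groups, group_event_types):
    """Admit an event to the feed when its individual belongs to one of the
    participant's groups AND that group is mapped to the event's type."""
    feed = []
    for event in events:
        for group, members in groups.items():
            if (event["who"] in members
                    and event["type"] in group_event_types.get(group, set())):
                feed.append(event)
                break  # one matching group is enough
    return feed

# Hypothetical participant configuration: group membership and, per group,
# which event types that group contributes to the feed.
groups = {"team": {"alice", "bob"}, "mgmt": {"carol"}}
group_event_types = {"team": {"commit", "doc-edit"}, "mgmt": {"announcement"}}

events = [
    {"who": "alice", "type": "commit"},        # in "team", type allowed
    {"who": "carol", "type": "commit"},        # in "mgmt", type not allowed
    {"who": "carol", "type": "announcement"},  # in "mgmt", type allowed
    {"who": "dave",  "type": "commit"},        # in no group
]
feed = filter_feed(events, groups, group_event_types)
```

The adaptors mentioned in the abstracts would sit upstream of this, normalizing events from each information system into the `(who, type, time)` shape before filtering.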
20150154506 | BROWSER-BASED SELECTION OF CONTENT REQUEST MODES - Features are disclosed for generating request decision models for use by client computing devices to determine request paths or modes for content requests. The request modes may correspond to direct requests (e.g., requests made from a client device directly to a content server hosting requested content) or to indirect requests (e.g., requests made from the client device to the content server via an intermediary system). The request decision models may be trained by a machine learning algorithm using performance data (e.g., prior content load times), contextual information (e.g., state information associated with devices at times content requests are executed), and the like. | 06-04-2015 |
20150156279 | BROWSER-BASED ANALYSIS OF CONTENT REQUEST MODE PERFORMANCE - Features are disclosed for selecting preferred content request modes on a client computing device when initiating content requests. The request modes may correspond to direct requests (e.g., requests made from a client device directly to a content server hosting requested content) or to indirect requests (e.g., requests made from the client device to the content server via an intermediary system). The preferred request modes may be based on a statistical analysis of performance data (e.g., prior content load times) observed or recorded by the client computing device in connection with prior content requests. Randomly selected request modes may be used to provide additional data for performance analysis. | 06-04-2015 |
20150156280 | PERFORMANCE-BASED DETERMINATION OF REQUEST MODES - Features are disclosed for determining preferred content request modes for client computing devices when initiating content requests. The request modes may correspond to direct requests (e.g., requests made from a client device directly to a content server hosting requested content) or to indirect requests (e.g., requests made from the client device to the content server via an intermediary system). The preferred request modes may be based on a statistical analysis of performance data (e.g., prior content load times) obtained from one or more client computing devices for a given content item, group of content items (e.g., domain), and the like. | 06-04-2015 |
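The three request-mode entries above choose between direct and indirect requests from observed load times, with occasional random selections to keep gathering data (per 20150156279). That combination resembles an epsilon-greedy policy; the sketch below assumes mean load time as the statistic, which is one plausible reading, not the claimed method:

```python
import random

def choose_mode(load_times, epsilon=0.1, rng=random.random):
    """Pick 'direct' or 'indirect' by lowest mean observed load time, with an
    epsilon chance of a random pick to provide fresh performance data."""
    if rng() < epsilon:
        return random.choice(list(load_times))  # exploration sample
    means = {mode: sum(ts) / len(ts) for mode, ts in load_times.items()}
    return min(means, key=means.get)  # exploit the faster mode

# Hypothetical per-mode load-time history in milliseconds.
history = {"direct": [820, 910, 870], "indirect": [640, 700, 655]}
mode = choose_mode(history, epsilon=0.0)  # exploration disabled: deterministic
```

The trained decision models of 20150154506 would replace the simple mean comparison with a learned function of contextual state, but the direct/indirect choice at the end is the same.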
20110231385 | OBJECT ORIENTED DATA AND METADATA BASED SEARCH - An object oriented search mechanism extracts structural metadata and data based on type of document contents and data sources connected to the documents. Relationships between textual and non-textual elements within documents as well as metadata associated with the elements and data sources are utilized to generate a unified object model with the addition of semantic information derived from metadata and taxonomy, which are used to enhance search indexing, ranking of search results, and dynamic adjustment of result rendering user interface with fine tuned relevancy. Additional data from data sources connected to the documents may also be used to unlock hidden data such as data that has been filtered out in an original document. | 09-22-2011 |
20110238653 | PARSING AND INDEXING DYNAMIC REPORTS - A parsing and indexing mechanism for dynamically generated reports is provided. Upon detection of a dynamically generated report, a data source for the dynamically generated report may be identified based on metadata or other information associated with the report. Crawleable or machine readable metadata and data may be generated using the data source such that data represented in the report and/or other relevant data from the data source can be indexed and searched. | 09-29-2011 |
20130106914 | VIRTUALIZED DATA PRESENTATION IN A CAROUSEL PANEL | 05-02-2013 |
20130159453 | Pre-Provisioned Web Application Platform Site Collections - A pre-provisioned application platform may be provided. First, a plurality of parameters may be received. Then a plurality of pre-provisioned tenants may be created based upon the received plurality of parameters. A request for service may be received and then an actual tenant may be assigned to one of the plurality of pre-provisioned tenants in response to the received request. | 06-20-2013 |
20130282693 | OBJECT ORIENTED DATA AND METADATA BASED SEARCH - An object oriented search mechanism extracts structural metadata and data based on type of document contents and data sources connected to the documents. Relationships between textual and non-textual elements within documents as well as metadata associated with the elements and data sources are utilized to generate a unified object model with the addition of semantic information derived from metadata and taxonomy, which are used to enhance search indexing, ranking of search results, and dynamic adjustment of result rendering user interface with fine tuned relevancy. Additional data from data sources connected to the documents may also be used to unlock hidden data such as data that has been filtered out in an original document. | 10-24-2013 |
20140285529 | VIRTUALIZED DATA PRESENTATION IN A CAROUSEL PANEL - Embodiments are directed to displaying data items in a carousel display panel and to efficiently presenting virtualized data in a carousel display panel. In one example, a computer system accesses a list of data items that include at least a first data item and a last data item which are to be displayed in a carousel display panel. The computer system displays a selected portion of the data items in the carousel display panel and receives a user input indicating that the last data item in the list is to be displayed in the carousel display panel. The computer system then rotates the data items displayed in the carousel display panel to the last data item. The last data item is thus displayed, along with at least a portion of a second-to-last data item and the first data item in the list. | 09-25-2014 |
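The carousel entries above (20130106914/20140285529) describe rotating to the last item so that it appears alongside the second-to-last item and, wrapping around, the first item. That wraparound is just modular indexing; a minimal sketch (the function name and `span` parameter are illustrative assumptions):

```python
def visible_items(items, center, span=1):
    """Items shown in a carousel centered on index `center`, wrapping so the
    last item can appear alongside the first (modular-indexing sketch)."""
    n = len(items)
    return [items[(center + offset) % n] for offset in range(-span, span + 1)]

items = ["a", "b", "c", "d", "e"]
view = visible_items(items, center=len(items) - 1)  # rotated to the last item
```

Virtualization, as the abstract uses it, would mean materializing only the items the window currently touches rather than the full list; the indexing arithmetic is unchanged.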
20120016864 | HIERARCHICAL MERGING FOR OPTIMIZED INDEX - Methods, systems, and media are provided for an optimized search engine index. The optimized index is formed by merging small lower level indexes of fresh documents together into a hierarchical cluster of multiple higher level indexes. The optimized index of fresh documents is formed via a single threaded process, while a fresh index serving platform concurrently serves fresh queries. The hierarchy of higher level indexes is formed by merging lower and/or higher level indexes with similar expiration times together. Therefore, as some indexes expire, the remaining un-expired indexes can be re-used and merged with new incoming indexes. The single threaded process provides fast serving of fresh documents, while also providing time to integrate the fresh indexes into a long term primary search engine index, prior to expiring. | 01-19-2012 |
20120257246 | RECEIVING INDIVIDUAL DOCUMENTS TO SERVE - Methods and systems for quickly serving documents are provided. Documents may be served to users, for example, in response to search query inputs. Documents may be individually communicated to a document server prior to batching the documents. By individually communicating documents to document servers, each document experiences sub-second latency before it is available to a user. The documents may also be modified individually such that real-time serving is not interrupted. | 10-11-2012 |
20120260124 | RECOVERY OF A DOCUMENT SERVING ENVIRONMENT - Methods and systems for quickly serving documents are provided. Documents may be served to users, for example, in response to search query inputs. Documents may be individually communicated to a document server prior to batching the documents. In such a real-time serving system, serving components may fail. To ensure real-time serving despite the failure, spares are utilized to replace the failing serving components such that the spare can immediately begin receiving documents. The spare can also be synchronized with other serving components to obtain the memory of the failing serving component prior to the failure. | 10-11-2012 |
20140181122 | GENERATING AND USING A CUSTOMIZED INDEX - In various embodiments, systems and methods are provided for generating and using a customized index. In embodiments, an index structure is constructed to efficiently utilize machines containing index portions. In this regard, the index structure for a particular application is customizable such that a number of virtual index units for a particular index type and/or a number of machines associated with the virtual index units for the particular index type can be optimized for machine and/or system performance and efficiency. Utilizing the constructed index structure, documents can be distributed to various index units, virtual index units, and/or machines in real-time or near real-time. Further, the customized index structure can be used to efficiently serve search results in response to search queries. | 06-26-2014 |
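The customized-index entry above distributes documents across virtual index units whose count, rather than the machine count, fixes the distribution granularity. A common way to realize that is a two-level mapping, document → virtual unit → machine; the sketch below uses stable hashing plus round-robin placement as an illustrative assumption, not the claimed structure:

```python
import hashlib

def doc_to_unit(doc_id, num_virtual_units):
    """Stable hash of a document id onto a virtual index unit."""
    digest = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
    return digest % num_virtual_units

def unit_to_machine(unit, machines):
    """Virtual units are spread round-robin over the machine pool; resizing
    the pool remaps units without rehashing every document id."""
    return machines[unit % len(machines)]

machines = ["m0", "m1", "m2"]
unit = doc_to_unit("doc-42", num_virtual_units=12)
machine = unit_to_machine(unit, machines)
```

Tuning `num_virtual_units` per index type is the customization knob the abstract describes: more units means finer-grained rebalancing when machines are added, at the cost of more index fragments to merge at query time.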