CONCURIX CORPORATION Patent applications
Patent application number | Title | Published |
20150254172 | Security Alerting Using N-Gram Analysis of Program Execution Data - N-grams of input streams or functions executed by an application may be analyzed to identify security breaches or other anomalous behavior. A histogram of n-grams representing sequences of executed functions or input streams may be generated through baseline testing or production use. An alerting system may compare real time n-gram observations to the histogram of n-grams to identify security breaches or other changes in application behavior that may be anomalous. An alert may be generated that identifies the anomalous behavior. The alerting system may be trained using known good datasets and may identify deviations as bad behavior. The alerting system may be trained using known bad datasets and may identify matching behavior as bad behavior. | 09-10-2015 |
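A minimal sketch of the n-gram alerting idea in this abstract, in Python; the function names, n-gram size, and novelty threshold are illustrative assumptions, not taken from the application:

```python
# Minimal sketch of n-gram based anomaly alerting (names and the
# novelty threshold are illustrative, not from the patent text).
from collections import Counter

def ngrams(seq, n=3):
    """Consecutive n-grams from a sequence of executed function names."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def build_baseline(traces, n=3):
    """Histogram of n-grams observed during baseline testing or production."""
    histogram = Counter()
    for trace in traces:
        histogram.update(ngrams(trace, n))
    return histogram

def alert_on_anomalies(live_trace, baseline, n=3, min_count=1):
    """Return n-grams in the live trace that fall below the baseline count."""
    return [g for g in ngrams(live_trace, n) if baseline[g] < min_count]

baseline = build_baseline([["open", "read", "parse", "close"],
                           ["open", "read", "close"]])
print(alert_on_anomalies(["open", "exec", "close"], baseline))
# -> [('open', 'exec', 'close')]  (never seen in the known-good baseline)
```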
20150254165 | Automated Regression Testing for Software Applications - Regression testing of an application may gather performance tests for multiple functions within an application and determine when performance changes from one version of the application to another. The analysis may be further broken down by input sequences that may be processed by various functions. A detailed regression analysis may be presented as a heat map or other visualizations. A regression testing system may be launched during a build process by automatically launching a set of performance tests against an application. In many cases, the application may be executed in a system with known or consistent performance capabilities. The application may be executed and tested in a new version and at least one prior version on the same hardware and software execution environment, so that results may be normalized from one execution run to another. A regression testing system may be deployed as a paid-for service that may integrate into a source code repository. | 09-10-2015 |
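A sketch of the per-function regression comparison the abstract describes, assuming normalized runs on the same hardware; the 10% tolerance and the timing figures are illustrative:

```python
# Sketch of a per-function regression check between two builds run on the
# same execution environment; tolerance and timings are assumptions.
def find_regressions(old_times, new_times, tolerance=0.10):
    """old_times/new_times map function name -> mean seconds per call."""
    regressions = {}
    for func, old in old_times.items():
        new = new_times.get(func)
        if new is not None and new > old * (1 + tolerance):
            regressions[func] = (old, new)
    return regressions

old = {"parse": 0.020, "render": 0.050}
new = {"parse": 0.021, "render": 0.071}
print(find_regressions(old, new))  # {'render': (0.05, 0.071)}
```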
20150254163 | Origin Trace Behavior Model for Application Behavior - A behavior model for a software application may identify a set of execution sequences that begin from a set of origins. The sequences may be further defined by a set of exits. In some cases, the sequences may be decomposed into subsequences or n-grams. The execution sequences and their frequencies may define a usage or behavior model for the application. The sequences may be defined by semantic level operations of an application, which may be defined by functions, callbacks, API calls, or other blocks of code execution. The behavior model may be used for determining code coverage, comparing versions of applications, and other uses. | 09-10-2015 |
20150254162 | N-Gram Analysis of Software Behavior in Production and Testing Environments - Execution sequence information may be analyzed and quantified using n-gram analysis of functions executed by an application. The sequences of functions may be represented by n-grams, and the frequency of the various n-grams may indicate the behavior of the application in production, which may be compared to a test suite whose coverage may be quantified using a similar n-gram analysis. A coverage factor may compare the observed behavior of the application in production to the test suite for the application. The n-grams may be further quantified or prioritized by resource utilization, and several visualizations may be generated from the data. | 09-10-2015 |
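The coverage factor might be computed along these lines; weighting covered n-grams by their production frequency is an assumption:

```python
# Sketch of the coverage-factor idea: what fraction of the n-grams observed
# in production are also exercised by the test suite.
from collections import Counter

def coverage_factor(production: Counter, test_suite: Counter) -> float:
    """Production-frequency-weighted share of n-grams covered by tests."""
    total = sum(production.values())
    covered = sum(c for g, c in production.items() if g in test_suite)
    return covered / total if total else 0.0

prod = Counter({("a", "b"): 90, ("b", "c"): 10})
tests = Counter({("a", "b"): 5})
print(coverage_factor(prod, tests))  # 0.9 -- the tests miss a rare path
```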
20150254161 | Regression Evaluation Using Behavior Models of Software Applications - Different versions of an application may be compared using a behavior model of the application. A behavior model may be derived from n-gram analysis of observations of the application in production. The behavior model may include sequences of inputs received by the application or functions performed by the application, where each sequence is an n-gram observed in tracer data. Each n-gram may be coupled with a resource consumption to give a behavior model with performance data. A regression analysis may apply a behavior model derived from a first version of an application to the performance observations of a new version to create an expected performance metric for the new version. A similarly calculated metric from a previous version may be compared to the metric from a new version to determine an improvement or degradation of performance. | 09-10-2015 |
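A sketch of that regression metric: the first version's n-gram frequencies weight each version's observed per-n-gram cost, so the two totals are directly comparable (all numbers illustrative):

```python
# Sketch of a behavior-model regression metric: weight each version's
# per-n-gram cost by the frequencies observed on the first version.
def expected_cost(model_freqs, version_costs):
    """model_freqs: n-gram -> relative frequency; version_costs: n-gram -> cost."""
    return sum(freq * version_costs.get(g, 0.0) for g, freq in model_freqs.items())

freqs = {("load", "parse"): 0.7, ("parse", "save"): 0.3}
v1 = {("load", "parse"): 2.0, ("parse", "save"): 4.0}
v2 = {("load", "parse"): 2.1, ("parse", "save"): 5.0}
print(expected_cost(freqs, v1), expected_cost(freqs, v2))  # 2.6 vs 2.97
```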
20150254151 | N-Gram Analysis of Inputs to a Software Application - Input sequence information may be analyzed and quantified using n-gram analysis of inputs received by an application. The sequences of inputs may be represented by n-grams, and the frequency of the various n-grams may indicate the ‘real world’ uses of the application in production, which may be compared to a test suite whose coverage may be quantified using a similar n-gram analysis. A coverage factor may compare the observed inputs to the application in production to the test suite for the application. The n-grams may be further quantified or prioritized by resource utilization, and several visualizations may be generated from the data. | 09-10-2015 |
20150161385 | Memory Management Parameters Derived from System Modeling - Optimized memory management settings may be derived from a mathematical model of an execution environment. The settings may be optimized for each application or workload, and the settings may be implemented per application, per process, or with other granularity. The settings may be determined after an initial run of a workload, which may observe and characterize the execution. The workload may be executed a second time using the optimized settings. The settings may be stored as tags for the executable code, which may be in the form of a metadata file or as tags embedded in the source code, intermediate code, or executable code. The settings may change the performance of memory management operations in both interpreted and compiled environments. The memory management operations may include memory allocation, garbage collection, and other related functions. | 06-11-2015 |
20150052406 | Combined Performance Tracer and Snapshot Debugging System - A tracing and debugging system may collect both performance related tracer data and snapshot data. The tracer data may contain aggregated performance and operational data, while the snapshot data may contain call stack, source code, and other information that may be useful for debugging and detailed understanding of an application. The snapshot data may be stored in a separate database from the tracer data, as the snapshot data may contain data that may be private or sensitive, while the tracer data may be aggregated information that may be less sensitive. A debugging user interface may be used to access, display, and browse the stored snapshot data. | 02-19-2015 |
20150052403 | Snapshotting Executing Code with a Modifiable Snapshot Definition - A tracing and debugging system may take a snapshot of an application in response to an event, and may continue executing the program after the snapshot is captured. The snapshot may be stored and retrieved later in a debugging tool where a programmer may browse the snapshot or the snapshot may have some other analysis performed. The snapshot may contain a subset of the state of the application, such as call stacks, portions of source code, the values of local and global variables, and various metadata. The snapshot may be defined in a snapshot configuration that may include an event description and data to be collected. | 02-19-2015 |
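A sketch of a snapshot configuration and capture step, in Python; the configuration keys and collected elements are illustrative assumptions:

```python
# Sketch of a modifiable snapshot definition: an event description plus a
# list of data elements to collect (keys here are assumed, not the patent's).
import traceback

SNAPSHOT_CONFIG = {
    "event": {"function": "handle_request", "on": "exception"},
    "collect": ["call_stack", "local_variables"],
}

def take_snapshot(config, frame_locals):
    """Capture only the data elements the configuration asks for."""
    snapshot = {}
    if "call_stack" in config["collect"]:
        snapshot["call_stack"] = traceback.format_stack()
    if "local_variables" in config["collect"]:
        snapshot["local_variables"] = dict(frame_locals)
    return snapshot

def handle_request(user_id):
    try:
        raise ValueError("bad input")
    except ValueError:
        snap = take_snapshot(SNAPSHOT_CONFIG, locals())
        # execution continues after the snapshot is captured
        return snap

print(sorted(handle_request(42).keys()))  # ['call_stack', 'local_variables']
```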
20150052400 | Breakpoint Setting Through a Debugger User Interface - A debugging system may display snapshot information that may be collected in response to an event identified while an application executes. The debugging system may allow a user to browse the various data elements in the snapshot, and may allow the user to modify a snapshot configuration by including or excluding various data elements within the snapshot data. The user interface may have a mechanism for including or excluding data elements that may be presented during browsing, as well as options to change the events that may trigger a snapshot. The updated snapshot configuration may be saved for future execution when the event conditions are satisfied. | 02-19-2015 |
20150033172 | Timeline Charts with Subgraphs - A timeline chart may represent multiple data sets gathered from multiple sequences of a process by placing sub-graphs within timeline bars. The sub-graphs may represent summarized data related to each event represented by a timeline bar. The timeline chart may present an overall view of a sequence of process steps with insights to the shape or distribution of the underlying observations. The timeline chart may be an instance of an event chain diagram, where the elements within the event chains are displayed with respect to time. The timeline chart may be presented as representing the aggregated dataset of multiple runs, as well as a representation of a single observed sequence. In both cases, sub-graphs may be included in a timeline bar to represent different views of the aggregated dataset. | 01-29-2015 |
20150029193 | Event Chain Visualization of Performance Data - An event chain visualization of performance data may show the execution of monitored elements as bars on a timeline, with connections or other relationships connecting the various bars into a sequential view of an application. The visualization may include color, shading, or other highlighting to show resource utilization or performance metrics. The visualization may be generated by monitoring many events processed by an application, where each bar on a timeline may reflect multiple instances of a monitored element and, in some cases, the aggregated performance. | 01-29-2015 |
20140298304 | Transmission Point Pattern Extraction from Executable Code in Message Passing Environments - Processes in a message passing system may be launched when messages having data patterns match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue. | 10-02-2014 |
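The queue handling this abstract describes might look like the following sketch; representing a pattern as a set of expected message keys is an illustrative simplification:

```python
# Sketch of pattern-matched unblocking: a blocked process waits in an idle
# queue until a message matches the pattern its current function expects,
# then it is promoted to the front of the runnable queue.
from collections import deque

idle = {"worker": {"expects": ("order", "id")}}   # pattern at execution pointer
runnable = deque()

def deliver(message):
    """Unblock any idle process whose awaited pattern matches the message keys."""
    for name, state in list(idle.items()):
        if set(state["expects"]) <= set(message):
            del idle[name]
            runnable.appendleft(name)  # raise to the top of the runnable queue

deliver({"order": "A-17", "id": 9})
print(list(runnable), idle)  # ['worker'] {}
```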
20140282597 | Bottleneck Detector for Executing Applications - A bottleneck detector may analyze individual workloads processed by an application by logging times when the workload may be processed at different checkpoints in the application. For each checkpoint, a curve fitting algorithm may be applied, and the fitted curves may be compared between different checkpoints to identify bottlenecks or other poorly performing sections of the application. A real time implementation of a detection system may compare newly captured data points against historical curves to detect a shift in the curve, which may indicate a bottleneck. In some cases, the fitted curves from neighboring checkpoints may be compared to identify sections of the application that may be a bottleneck. An automated system may apply one set of checkpoints in an application, identify an area for further investigation, and apply a second set of checkpoints in the identified area. Such a system may recursively search for bottlenecks in an executing application. | 09-18-2014 |
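A sketch of the checkpoint timing analysis; a real detector would fit curves per checkpoint rather than compare means, which this simplification substitutes:

```python
# Sketch of checkpoint-based bottleneck detection: log a timestamp per
# workload at each checkpoint, then find the slowest inter-checkpoint
# segment (curve fitting is replaced here by a simple mean).
from statistics import mean

# timestamps[workload_id] = [t_at_cp0, t_at_cp1, t_at_cp2, ...]
timestamps = {
    "w1": [0.00, 0.01, 0.30, 0.31],
    "w2": [0.05, 0.06, 0.41, 0.42],
}

def slowest_segment(timestamps):
    n_cp = len(next(iter(timestamps.values())))
    deltas = [mean(ts[i + 1] - ts[i] for ts in timestamps.values())
              for i in range(n_cp - 1)]
    worst = max(range(len(deltas)), key=deltas.__getitem__)
    return worst, deltas[worst]

print(slowest_segment(timestamps))
# -> (1, ~0.32): the segment between checkpoints 1 and 2 dominates
```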
20140281726 | Bottleneck Detector Application Programming Interface - An application programming interface may receive workload identifiers and checkpoint identifiers from which bottleneck detection may be performed. Workloads may be tracked through various checkpoints in an application and timestamps collected at each checkpoint. From these data, bottlenecks may be identified in real time or by analyzing the data in a subsequent analysis. The workloads may be processed by multiple devices which may comprise a large application. In some cases, the workloads may be processed by different devices in sequence or in a serial fashion, while in other cases workloads may be processed in parallel by different devices. The application programming interface may be part of a bottleneck detection service which may be sold on a pay-per-use model, a subscription model, or some other payment scheme. | 09-18-2014 |
20140189650 | Setting Breakpoints Using an Interactive Graph Representing an Application - Breakpoints may be set by selecting nodes on a graph depicting code elements and relationships between code elements. The graph may be derived from tracing data, and may reflect the observed code elements and the observed interactions between code elements. In many cases, the graph may include performance indicators. The breakpoints may include conditions which depend on performance related metrics, among other things. In some embodiments, the nodes may reflect individual instances of specific code elements, while other embodiments may present nodes as the same code elements that may be utilized by different threads. The breakpoints may include parameters or conditions that may be thread-specific. | 07-03-2014 |
20140026142 | Process Scheduling to Maximize Input Throughput - A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface, then a process scheduler may cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such a system may optimize processing for input or output of network connections, storage devices, or other input/output devices. | 01-23-2014 |
20140025572 | Tracing as a Service - An instrumented execution environment may connect to an execution environment to provide detailed tracing and logging of an application as it runs. The instrumented execution environment may be configured as a standalone service that can be configured and purchased. The instrumented execution environment may be deployed with various authentication systems, administrative user interfaces, and other components. The instrumented execution environment may engage a customer's system through a distributor that may manage a workload and distribute work to the instrumented execution environment as well as other worker systems. A marketplace may provide multiple preconfigured execution environments that may be selected, further configured, and deployed to address specific data collection objectives. | 01-23-2014 |
20140019985 | Parallel Tracing for Performance and Detail - A parallel tracer may perform detailed or heavily instrumented analysis of an application in parallel with a performance or lightly instrumented version of the application. Both versions of the application may operate on the same input stream, but with the heavily instrumented version having different performance results than the lightly instrumented version. The tracing results may be used for various analyses, including optimization and debugging. | 01-16-2014 |
20140019879 | Dynamic Visualization of Message Passing Computation - A message passing compute environment may be visualized by illustrating messages passed within the environment. The messages may contain data consumed by a function or other computational element, and may be used to launch or spawn various computational elements. One visualization may be a force directed graph that has each function as a node, with messages passed as edges of the graph. In some embodiments, the edges may display the number of messages, quantity of data, or other metric by showing the edges as wider or thinner, or by changing the color of the displayed edge. The nodes may be illustrated with different colors, size, or shape to show different aspects. Some embodiments may have a mechanism for storing and playing back changes to the graph over time. | 01-16-2014 |
20140019756 | Obfuscating Trace Data - A tracer may obfuscate trace data such that the trace data may be used in an unsecure environment even though raw trace data may contain private, confidential, or other sensitive information. The tracer may obfuscate using irreversible or lossy hash functions, look up tables, or other mechanisms for certain raw trace data, rendering the obfuscated trace data acceptable for transmission, storage, and analysis. In the case of parameters passed to and from a function, trace data may be obfuscated as a group or as individual parameters. The obfuscated trace data may be transmitted to a remote server in some scenarios. | 01-16-2014 |
20140019598 | Tracing with a Workload Distributor - A load balanced system may incorporate instrumented systems within a group of managed devices and distribute workload among the devices to meet both load balancing and data collection. A workload distributor may communicate with and configure several managed devices, some of which may have instrumentation that may collect trace data for workload run on those devices. Authentication may be performed between the managed devices and the workload distributor to verify that the managed devices are able to receive the workloads and to verify the workloads prior to execution. The workload distributor may increase or decrease the amount of instrumentation in relation to the workload experienced at any given time. | 01-16-2014 |
20130283247 | Optimization Analysis Using Similar Frequencies - Tracer objectives in a distributed tracing system may be compared to identify input parameters that may have a high statistical relevancy. An iterative process may traverse multiple input objects by comparing results of multiple tracer objectives and scoring possible input objects as being possibly statistically relevant. With each iteration, statistically irrelevant input objects may be discarded from a tracer objective and other potentially relevant objects may be added. The iterative process may converge on a set of statistically relevant input objects for a given measured value without a priori knowledge of an application being traced. | 10-24-2013 |
20130283246 | Cost Analysis for Selecting Trace Objectives - A tracing system may perform cost analysis to identify burdensome or costly trace objectives. For a burdensome objective, two or more objectives may be created that can be executed independently. The cost analysis may include processing, storage, and network performance factors, which may be budgeted to collect data without undue performance or financial drains on the application under test. A larger objective may be recursively analyzed to break the larger objective into smaller objectives which may be independently deployed. | 10-24-2013 |
20130283240 | Application Tracing by Distributed Objectives - A tracing system may divide trace objectives across multiple instances of an application, then deploy the objectives to be traced. The results of the various objectives may be aggregated into a detailed tracing representation of the application. The trace objectives may define specific functions, processes, memory objects, events, input parameters, or other subsets of tracing data that may be collected. The objectives may be deployed on separate instances of an application that may be running on different devices. In some cases, the objectives may be deployed at different time intervals. The trace objectives may be lightweight, relatively non-intrusive tracing workloads that, when results are aggregated, may provide a holistic view of an application's performance. | 10-24-2013 |
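Dividing lightweight objectives across instances might look like this sketch; the objective fields and the round-robin assignment are assumptions:

```python
# Sketch of dividing trace objectives across application instances so each
# instance carries only part of the tracing burden.
from itertools import cycle

objectives = [
    {"trace": "functions", "filter": "db.*"},
    {"trace": "memory", "filter": "cache.*"},
    {"trace": "events", "filter": "http.*"},
]
instances = ["app-host-1", "app-host-2"]

assignment = {}
for objective, instance in zip(objectives, cycle(instances)):
    assignment.setdefault(instance, []).append(objective)

# Aggregating the per-instance results later yields a holistic view.
print({k: [o["trace"] for o in v] for k, v in assignment.items()})
# {'app-host-1': ['functions', 'events'], 'app-host-2': ['memory']}
```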
20130283102 | Deployment of Profile Models with a Monitoring Agent - A distributed tracing system may use independent trace objectives for which a profile model may be created. The profile model may be deployed as a monitoring agent on non-instrumented devices to evaluate the profile models. As the profile models operate with statistically significant results, the sampling frequencies may be adjusted. The profile models may be deployed as a verification mechanism for testing models created in a more highly instrumented environment, and may gather performance related results that may not have been as accurate using the instrumented environment. In some cases, the profile models may be distributed over large numbers of devices to verify models based on data collected from a single or small number of instrumented devices. | 10-24-2013 |
20130232452 | Force Directed Graph with Time Series Data - A force directed graph may display time series data using a set of playback controls to pause, play, reverse, fast forward, slow down, or otherwise control the display of the time series data. The playback controls may be used in a real time or near real time application to control which data sets are displayed and the speed with which the data sets may be displayed. In one architecture, the force directed graph may be deployed using a rendering engine that receives data and renders the data into a graph. A playback controller may send updates to the rendering engine according to user inputs from the playback controls. | 09-05-2013 |
20130232433 | Controlling Application Tracing using Dynamic Visualization - A force directed graph may serve as a part of a user control for a tracer. The tracer may collect data while monitoring an executing application, then the data may be processed and displayed on a force directed graph. A user may be able to select individual nodes, edges, or other elements, then cause the tracer to change what data may be collected. The user may be able to select individual nodes, edges, or groups of elements on the graph, then perform updates to the tracer using the selected elements. The selection mechanisms may include clicking and dragging a window to select nodes that may be related, as well as selecting from a legend or other grouping. | 09-05-2013 |
20130232174 | Highlighting of Time Series Data on Force Directed Graph - A force directed graph may display recent activities of a message passing system as highlighted features over a larger graph. The force directed graph may display a superset of nodes and edges representing processes and message routes, then display recent activities as highlighted elements within the larger superset. The highlighted elements may display messages passed or computation performed during a recent time element of a time series. In some embodiments, the effects of activities may be displayed by decaying the highlighted visual elements over time. | 09-05-2013 |
20130229416 | Transformation Function Insertion for Dynamically Displayed Tracer Data - A visualization system for a tracer may include a processing pipeline that may generate tracing data, preprocess the data, and visualize the data. The preprocessing step may include a mechanism to process user-defined expressions or other executable code. The executable code may perform various functions including mathematical, statistical, aggregation with other data, and others. The preprocessor may perform malware analysis, test the functionality, then implement the executable code. A user may be presented with an editor or other text based user interface component to enter and edit the executable code. The executable code may be saved and later recalled as a selectable transformation for use with other data streams. | 09-05-2013 |
20130219372 | Runtime Settings Derived from Relationships Identified in Tracer Data - An analysis system may perform network analysis on data gathered from an executing application. The analysis system may identify relationships between code elements and use tracer data to quantify and classify various code elements. In some cases, the analysis system may operate with only data gathered while tracing an application, while other cases may combine static analysis data with tracing data. The network analysis may identify groups of related code elements through cluster analysis, as well as identify bottlenecks from one to many and many to one relationships. The analysis system may generate visualizations showing the interconnections or relationships within the executing code, along with highlighted elements that may be limiting performance. | 08-22-2013 |
20130219057 | Relationships Derived from Trace Data - An analysis system may perform network analysis on data gathered from an executing application. The analysis system may identify relationships between code elements and use tracer data to quantify and classify various code elements. In some cases, the analysis system may operate with only data gathered while tracing an application, while other cases may combine static analysis data with tracing data. The network analysis may identify groups of related code elements through cluster analysis, as well as identify bottlenecks from one to many and many to one relationships. The analysis system may generate visualizations showing the interconnections or relationships within the executing code, along with highlighted elements that may be limiting performance. | 08-22-2013 |
20130117759 | Network Aware Process Scheduling - A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface, then a process scheduler may cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such a system may optimize processing for input or output of network connections, storage devices, or other input/output devices. | 05-09-2013 |
20130117753 | Many-core Process Scheduling to Maximize Cache Usage - A process scheduler for multi-core and many-core processors may place related executable elements that share common data on the same cores. When executed on a common core, sequential elements may store data in memory caches that are very quickly accessed, as opposed to main memory which may take many clock cycles to access the data. The sequential elements may be identified from messages passed between elements or other relationships that may link the elements. In one embodiment, a scheduling graph may be constructed that contains the executable elements and relationships between those elements. The scheduling graph may be traversed to identify related executable elements and a process scheduler may attempt to place consecutive or related executable elements on the same core so that commonly shared data may be retrieved from a memory cache rather than main memory. | 05-09-2013 |
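A sketch of the placement idea: chains of related elements from the scheduling graph are kept on one core so shared data stays in that core's cache; the graph and the single-successor walk are illustrative simplifications:

```python
# Sketch of cache-aware placement: walk the scheduling graph and keep each
# chain of message-linked elements on one core.
edges = {"A": ["B"], "B": ["C"], "C": [], "X": ["Y"], "Y": []}

def assign_cores(edges):
    placement, core = {}, 0
    for node in edges:
        if node in placement:
            continue
        chain = node
        while chain is not None and chain not in placement:
            placement[chain] = core           # same core as its predecessor
            nxt = edges.get(chain, [])
            chain = nxt[0] if nxt else None
        core += 1
    return placement

print(assign_cores(edges))  # {'A': 0, 'B': 0, 'C': 0, 'X': 1, 'Y': 1}
```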
20130085882 | Offline Optimization of Computer Software - An offline optimization for computer software may involve creating optimized parameters or components for a software product, and charging customers for the optimization service. The software product may be distributed under one licensing regime and the optimization components may be distributed under a second licensing regime. In some embodiments, a low cost or no-cost monitoring system may be provided, which may interface with a remote service that optimizes the software product for its current workload. A user may pay for the remote optimization service through a subscription, pay-per-use, pay-for-performance, or other payment models. | 04-04-2013 |
20130081005 | Memory Management Parameters Derived from System Modeling - Optimized memory management settings may be derived from a mathematical model of an execution environment. The settings may be optimized for each application or workload, and the settings may be implemented per application, per process, or with other granularity. The settings may be determined after an initial run of a workload, which may observe and characterize the execution. The workload may be executed a second time using the optimized settings. The settings may be stored as tags for the executable code, which may be in the form of a metadata file or as tags embedded in the source code, intermediate code, or executable code. The settings may change the performance of memory management operations in both interpreted and compiled environments. The memory management operations may include memory allocation, garbage collection, and other related functions. | 03-28-2013 |
20130080761 | Experiment Manager for Manycore Systems - An execution environment may have a monitoring, analysis, and feedback loop that may configure and tune the execution environment for currently executing workloads. A monitoring or instrumentation system may collect operational and performance data from hardware and software components within the system. A modeling system may create an operational model of the execution environment, then may determine different sets of parameters for the execution environment. A feedback loop may change various operational characteristics of the execution environment. The monitoring, analysis, and feedback loop may optimize the performance of a computer system for various metrics, including throughput, performance, energy conservation, or other metrics based on the applications that are currently executing. The performance model of the execution environment may be persisted and applied to new applications to optimize the performance of applications that have not been executed on the system. | 03-28-2013 |
20130080760 | Execution Environment with Feedback Loop - An execution environment may have a monitoring, analysis, and feedback loop that may configure and tune the execution environment for currently executing workloads. A monitoring or instrumentation system may collect operational and performance data from hardware and software components within the system. A modeling system may create an operational model of the execution environment, then may determine different sets of parameters for the execution environment. A feedback loop may change various operational characteristics of the execution environment. The monitoring, analysis, and feedback loop may optimize the performance of a computer system for various metrics, including throughput, performance, energy conservation, or other metrics based on the applications that are currently executing. The performance model of the execution environment may be persisted and applied to new applications to optimize the performance of applications that have not been executed on the system. | 03-28-2013 |
20130074093 | Optimized Memory Configuration Deployed Prior to Execution - A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed prior to runtime. A compiler or other pre-execution system may detect a memory allocation boundary and decorate the code. During execution, the decorated code may be used to look up memory allocation and management settings from a database or to deploy optimized settings that may be embedded in the decorations. | 03-21-2013 |
20130074092 | Optimized Memory Configuration Deployed on Executing Code - A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed at runtime. An execution environment may capture a memory allocation boundary, look up the boundary in a configuration file, and apply the settings when the settings are available. When the settings are not available, a default set of settings may be used. The execution environment may deploy the optimized settings without modifying the executing code. | 03-21-2013 |
20130074058 | Memoization from Offline Analysis - Memoization may be deployed using a configuration file or database that identifies functions to memoize, and in some cases, includes input and result values for those functions. The configuration file or database may be created by profiling target code and offline or otherwise separate analysis of the profiling results. The configuration file may be used by an execution environment to identify which functions to memoize during execution. The offline or separate analysis of the profiling results may enable more sophisticated analysis than could otherwise be performed in parallel with executing the target code, including historical analysis of multiple instances of the target code and sophisticated cost/benefit analysis. | 03-21-2013 |
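A configuration-driven memoization wrapper might look like the following sketch, in Python; the configuration format is an assumption:

```python
# Sketch of configuration-driven memoization: an offline analysis step
# produces a list of functions worth memoizing, and the runtime wraps
# only those (the config format is illustrative).
import functools

MEMOIZE_CONFIG = {"functions": ["slow_lookup"]}

def maybe_memoize(func):
    """Wrap func with a cache only if offline analysis selected it."""
    if func.__name__ not in MEMOIZE_CONFIG["functions"]:
        return func
    return functools.lru_cache(maxsize=None)(func)

@maybe_memoize
def slow_lookup(key):
    print("computing", key)   # shows when the real work happens
    return key * 2

slow_lookup(3)  # prints "computing 3"
slow_lookup(3)  # served from the cache; no print
```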
20130074057 | Selecting Functions for Memoization Analysis - A function may be selected for memoization when the function indicates that memoization may result in a performance improvement. Impure functions may be identified and ranked based on operational data, which may include length of execution. A function may be selected from a ranked list and analyzed for memoization. The memoization analysis may include side effect analysis and consistency analysis. In some cases, the optimization process may perform optimization on one function at a time so as to not overburden a running system. | 03-21-2013 |
20130074056 | Memoizing with Read Only Side Effects - A function may be memoized when a side effect is a read only side effect. Provided that the read only side effect does not mutate a memory object, the side effect may be considered as an input to a function for purity and memoization analysis. When a read only side effect may be encountered during memoization analysis, the read only side effect may be treated as an input to a function for memoization analysis. In some cases, such side effects may enable an impure function to behave as a pure function for the purposes of memoization. | 03-21-2013 |
20130074055 | Memoization Configuration File Consumed at Compile Time - Memoization may be deployed using a configuration file or database that identifies functions to memoize, and in some cases, includes input and result values for those functions. At compile time, functions defined in the configuration file may be captured and memoized. During compilation or other pre-execution analysis, the executable code may be modified or otherwise decorated to include memoization code. The memoization code may store results from a function during the first execution, then merely look up the results when the function may be called again. The memoized value may be stored in the configuration file or in another data store. In some embodiments, the modified executable code may operate in conjunction with an execution environment, where the execution environment may optionally perform the memoization. | 03-21-2013 |
20130074049 | Memoization Configuration File Consumed at Runtime - Memoization may be deployed using a configuration file or database that identifies functions to memoize, and in some cases, includes input and result values for those functions. As an application is executed, functions defined in the configuration file may be captured and memoized. During the first execution of the function, the return value may be captured and stored in the configuration file. For subsequent executions of the function, the return value may be retrieved from the configuration file rather than recomputed. In some cases, the configuration file may be distributed with the return values to client computers. The configuration file may be created by one device and deployed to other devices in some deployments. | 03-21-2013 |
20130073837 | Input Vector Analysis for Memoization Estimation - A function's purity may be estimated by comparing a new input vector to previously analyzed input vectors. When a new input vector is within a confidence boundary, the new input vector may be treated as a known vector, even when that vector has not been evaluated. The input vector may reflect the input parameters passed to a function, and the function may be analyzed to determine whether to memoize with the input vector. The function may be a function that behaves as a pure function in some circumstances and with some input vectors, but not with others. By memoizing the function when possible, the function may be executed much faster, thereby improving performance. | 03-21-2013 |
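The confidence-boundary test might be sketched as a distance check against previously analyzed input vectors; the Euclidean metric and the radius are illustrative choices:

```python
# Sketch of the input-vector test: a new vector is treated as "known"
# when it lies within a confidence radius of an already-analyzed vector.
import math

analyzed = [(1.0, 2.0), (10.0, 10.0)]   # input vectors already evaluated

def within_confidence(vector, radius=1.5):
    return any(math.dist(vector, known) <= radius for known in analyzed)

print(within_confidence((1.2, 2.1)))   # True  -> safe to reuse memoized result
print(within_confidence((5.0, 5.0)))   # False -> evaluate before memoizing
```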
20130073829 | Memory Usage Configuration Based on Observations - A computer software execution system may have a configurable memory allocation and management system. A configuration file or other definition may be created by analyzing a running application and determining an optimized set of settings for the application on the fly. The settings may include memory allocated to individual processes, memory allocation and deallocation schemes, garbage collection policies, and other settings. The optimization analysis may be performed offline from the execution system. The execution environment may capture processes during creation, then allocate memory and configure memory management settings for each individual process. | 03-21-2013 |
20130073604 | Optimized Settings in a Configuration Database with Boundaries - A set of optimizations may be defined in a configuration database. The configuration database may be defined with a set of boundaries that may define conditions under which the optimizations may be valid. When the conditions are not met, a new configuration database may be requested from an optimization server. The system may be used to distribute and manage optimizations for an application, which may be deployed in interpreted or runtime scenarios or in pre-execution or compiled scenarios. | 03-21-2013 |
20130073523 | Purity Analysis Using White List/Black List Analysis - Memoizable functions may be identified by analyzing a function's side effects. The side effects may be evaluated using a white list, black list, or other definition. The side effects may also be classified into conditions which may or may not permit memoization. Side effects that may have de minimis or trivial effects may be ignored in some cases where the accuracy of a function may not be significantly affected when the function may be memoized. | 03-21-2013 |
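A sketch of the white list/black list screen; the side-effect names are illustrative:

```python
# Sketch of white list/black list purity screening: a function is a
# memoization candidate only if every side effect it performs is white-listed.
WHITE_LIST = {"read_config", "log_debug"}      # harmless / read-only effects
BLACK_LIST = {"write_file", "send_network"}    # disqualifying effects

def memoization_verdict(side_effects):
    if any(e in BLACK_LIST for e in side_effects):
        return "reject"
    if all(e in WHITE_LIST for e in side_effects):
        return "memoize"
    return "needs further analysis"

print(memoization_verdict({"read_config"}))               # memoize
print(memoization_verdict({"read_config", "write_file"})) # reject
```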
20130067445 | Determination of Function Purity for Memoization - The purity of a function may be determined after examining the performance history of a function and analyzing the conditions under which the function behaves as pure. In some cases, a function may be classified as pure when any side effects are de minimis or are otherwise considered trivial. A control flow graph may also be traversed to identify conditions in which a side effect may occur as well as to classify the side effects as trivial or non-trivial. The function purity may be used to identify functions for memoization. In some embodiments, the purity analysis may be performed by a remote server and communicated to a client device, where the client device may memoize the function. | 03-14-2013 |
20120324454 | Control Flow Graph Driven Operating System - An operating system may be reconfigured during execution by adding new components to a control flow graph defining a system's executable flow. The operating system may use a control flow graph that defines executable elements and relationships between those elements. The operating system may traverse the control flow graph during execution to monitor execution flow and prepare executable elements for processing. By placing new components in memory then modifying the control flow graph, the operating system functionality may be updated or changed. In some embodiments, a lightweight version of an operating system may be deployed, then additional features or capabilities may be added. | 12-20-2012 |
20120317587 | Pattern Matching Process Scheduler in Message Passing Environment - Processes in a message passing system may be unblocked when messages having data patterns match data patterns of a function on a receiving process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue. | 12-13-2012 |
20120317577 | Pattern Matching Process Scheduler with Upstream Optimization - Processes in a message passing system may be launched when messages having data patterns match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue. | 12-13-2012 |
20120317557 | Pattern Extraction from Executable Code in Message Passing Environments - Processes in a message passing system may be launched when messages having data patterns match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue. | 12-13-2012 |
20120317421 | Fingerprinting Executable Code - Executable code may be fingerprinted by inserting NOP codes into the executable code in a pattern that may reflect a fingerprint. The NOP codes may be single instructions or groups of instructions that perform no operation. A dictionary of NOP codes and their corresponding portion of a fingerprint may be used to create a series of NOP codes which may be embedded into executable code. The fingerprinted executable code may be fully executable and the presence of the NOP codes may not be readily identifiable. The fingerprinting mechanism may be used to authenticate executable code in various scenarios. | 12-13-2012 |
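A sketch of the fingerprint encoding, using real multi-byte x86 NOP encodings as the dictionary; the 2-bit chunking scheme is an assumption:

```python
# Sketch of NOP-based fingerprinting: a dictionary maps 2-bit chunks of a
# fingerprint to semantically equivalent x86 NOP encodings, which can be
# interleaved into code without changing its behavior.
NOP_DICTIONARY = {
    0b00: b"\x90",              # nop
    0b01: b"\x66\x90",          # 2-byte nop (operand-size prefix)
    0b10: b"\x0f\x1f\x00",      # nop dword ptr [rax]
    0b11: b"\x0f\x1f\x40\x00",  # nop dword ptr [rax+0]
}

def encode_fingerprint(fingerprint: int, bits: int = 8) -> list:
    """Split the fingerprint into 2-bit chunks, high bits first."""
    return [NOP_DICTIONARY[(fingerprint >> shift) & 0b11]
            for shift in range(bits - 2, -2, -2)]

print([seq.hex() for seq in encode_fingerprint(0b11011000)])
# ['0f1f4000', '6690', '0f1f00', '90']
```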
20120317389 | Allocating Heaps in NUMA Systems - Processes may be assigned heap memory within locally accessible memory banks in a multiple processor NUMA architecture system. A process scheduler may deploy a process on a specific processor and may assign the process heap memory from a memory bank associated with the selected processor. The process may be a functional process that may not change state of other memory objects, other than the input or output memory objects defined in the functional process. | 12-13-2012 |
20120317371 | Usage Aware NUMA Process Scheduling - Processes may be assigned to specific processors when memory objects consumed by the processes are located in memory banks closely associated with the processors. When assigning processes to threads operating in a multiple processor NUMA architecture system, an analysis of the memory objects accessed by a process may identify processor or group of processors that may minimize the memory access time of the process. The selection may take into account the connections between memory banks and processors to identify the shortest communication path between the memory objects and the process. The processes may be pre-identified as functional processes that make little or no changes to memory objects other than information passed to or from the processes. | 12-13-2012 |
20120233601 | Recompiling with Generic to Specific Replacement - Executable code may be recompiled so that generic portions of code may be replaced with specific portions of code. The recompilation may customize executable code for a specific use or configuration, making the code lightweight and executing faster. The replacement mechanism may replace variable names with fixed values, replace conditional branches with only those branches which are known to be executed, and may eliminate executable code portions that are not executed. The replacement mechanism may comprise identifying known values defined in the executable code for variables, and replacing those variables with the constant value. Once the constants are substituted, the code may be analyzed to identify branches that may be evaluated using the constant values. Those branches may be reformed using the constant value and the rest of the conditional code that may not be accessed may be removed. | 09-13-2012 |
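A toy sketch of the replacement mechanism, using Python's ast module as a stand-in for recompilation: a variable with a known value is replaced by its constant, and the branch that can no longer execute is pruned:

```python
# Sketch of generic-to-specific replacement: substitute a known constant,
# then remove the conditional branch it rules out.
import ast

source = """
if mode == "fast":
    result = quick_path()
else:
    result = slow_path()
"""

class Specialize(ast.NodeTransformer):
    def __init__(self, constants):
        self.constants = constants

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load) and node.id in self.constants:
            return ast.copy_location(ast.Constant(self.constants[node.id]), node)
        return node

    def visit_If(self, node):
        self.generic_visit(node)
        try:  # evaluate the test only if substitution made it constant
            expr = ast.fix_missing_locations(ast.Expression(node.test))
            verdict = eval(compile(expr, "<specialize>", "eval"), {})
        except Exception:
            return node  # test still depends on runtime values
        return node.body if verdict else node.orelse

tree = Specialize({"mode": "fast"}).visit(ast.parse(source))
print(ast.unparse(ast.fix_missing_locations(tree)))  # result = quick_path()
```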
20120233592 | Meta Garbage Collection for Functional Code - An execution environment for functional code may treat application segments as individual programs for memory management. A larger program or application may be segmented into functional blocks that receive an input and return a value, but operate without changing state of other memory objects. The program segments may have memory pages allocated to the segments by the operating system as other full programs, and may deallocate memory pages when the segments finish operating. Functional programming languages and imperative programming languages may define program segments explicitly or implicitly, and the program segments may be identified at compile time or runtime. | 09-13-2012 |
20120227040 | Hybrid Operating System - A hybrid operating system may allocate two sets of resources, one to a first operating system and one to a second operating system. Each operating system may have a memory manager, process scheduler, and other components that are aware of each other and cooperate. The hybrid operating system may allow one operating system to provide one set of services and a second operating system to provide a second set of services. For example, the first operating system may have monitoring applications, user interfaces, and other services, while the second operating system may be a lightweight, high performance operating system that may not provide the same services as the first operating system. | 09-06-2012 |
20120222043 | Process Scheduling Using Scheduling Graph to Minimize Managed Elements - A process scheduler may use a scheduling graph to determine which processes, threads, or other execution elements of a program may be scheduled. Those execution elements that have not been invoked or may be waiting for input may not be considered for scheduling. A scheduler may operate by scheduling a current set of execution elements and attempting to schedule a number of generations linked to the currently executing elements. As new elements are added to the scheduled list of execution elements, the list may grow. When the scheduling graph indicates that an execution element will no longer be executed, the execution element may be removed from consideration by a scheduler. In some embodiments, a secondary scan of all available execution elements may be performed on a periodic basis. | 08-30-2012 |
20120222019 | Control Flow Graph Operating System Configuration - An operating system may be configured using a control flow graph that defines relationships between each executable module. The operating system may be configured by analyzing an application and identifying the operating system modules called from the application, then building a control flow graph for the configuration. The operating system may be deployed to a server or other computer containing only those components identified in the control flow graph. Such a lightweight deployment may be used on a large scale for datacenter servers as well as for small scale deployments on sensors and other devices with little processing power. | 08-30-2012 |