24th week of 2010 patent application highlights part 73 |
Patent application number | Title | Published |
20100153889 | ELECTRONIC TEXT READING ENVIRONMENT ENHANCEMENT METHOD AND APPARATUS - An apparatus, method and article of manufacture of the present invention provide an enhanced user interface for a computer system that maximizes a reader's ability to rapidly comprehend a text. The invention provides a dynamically presented outline of the text, such that the reader maintains a sense of location within the entire text. Additional information about the text and results of operations on the text are presented on the corresponding portions of the outline. | 2010-06-17 |
20100153890 | Method, Apparatus and Computer Program Product for Providing a Predictive Model for Drawing Using Touch Screen Devices - An apparatus for providing a predictive model for use with touch screen devices may include a processor. The processor may be configured to identify a stroke event received at a touch screen display, evaluate an environmental parameter corresponding to the touch screen display to determine a scenario based on the environmental parameter, and generate a graphic output corresponding to the identified stroke event for the scenario determined. A corresponding method and computer program product are also provided. | 2010-06-17 |
20100153891 | METHOD, DEVICE AND PROGRAM FOR BROWSING INFORMATION ON A DISPLAY - In one embodiment, a program is provided for browsing information on a hand-held device having a display. The program includes ( | 2010-06-17 |
20100153892 | METHODS TO OBTAIN A FEASIBLE INTEGER SOLUTION IN A HIERARCHICAL CIRCUIT LAYOUT OPTIMIZATION - An approach that obtains a feasible integer solution in a hierarchical circuit layout optimization is described. In one embodiment, a hierarchical circuit layout and ground rule files are received as input. Constraints in the hierarchical circuit layout are represented as an original integer linear programming problem. A relaxed linear programming problem is derived from the original integer linear programming problem by relaxing integer constraints and using relaxation variables on infeasible constraints. The relaxed linear programming problem is solved to obtain a linear programming solution. Variables are then clustered, and at least one variable from each cluster is rounded to an integer value according to the linear programming solution. Next, it is determined whether all the variables are rounded to integer values. Unrounded variables are iterated back through the deriving of the integer linear programming problem, solving of the relaxed linear programming problem, and rounding of a subset of variables. A modified hierarchical circuit layout is generated in response to a determination that all the variables are rounded to integer values. | 2010-06-17 |
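The relax-and-round loop in the abstract above can be sketched in miniature. This is an illustrative toy, not the patented method: `solve_relaxed_lp` is a stub standing in for a real LP solver, and the clustering rule, variable names, and fractional values are all invented for the example.

```python
# Toy sketch of the iterative relax-and-round scheme: solve a relaxed LP,
# cluster the variables, round at least one variable per cluster, and
# iterate until every variable is integral.

def solve_relaxed_lp(fixed):
    """Stub LP solve: honor already-fixed integer variables and give
    fractional values to the rest (values are invented)."""
    fractional = {"x1": 1.4, "x2": 2.6, "x3": 0.5, "x4": 3.1}
    return {v: fixed.get(v, val) for v, val in fractional.items()}

def cluster(variables):
    # Stand-in clustering rule: group variables two at a time.
    vs = sorted(variables)
    return [vs[i:i + 2] for i in range(0, len(vs), 2)]

def relax_and_round():
    fixed = {}                                # variables rounded so far
    unrounded = {"x1", "x2", "x3", "x4"}
    while unrounded:
        solution = solve_relaxed_lp(fixed)
        # Round at least one variable from each cluster of unrounded vars.
        for group in cluster(unrounded):
            v = group[0]
            fixed[v] = round(solution[v])
        unrounded -= set(fixed)
    return fixed
```

A real implementation would re-derive and re-solve the relaxed LP with relaxation variables on infeasible constraints at each pass; here the solve is stubbed so the control flow of the iteration is visible.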
20100153893 | CONSTRAINT MANAGEMENT AND VALIDATION FOR TEMPLATE-BASED CIRCUIT DESIGN - A technique for constraint management and validation for template-based device designs is disclosed. The technique includes generating a template-level representation of an electronic device design based on a transistor-level representation of the electronic device design. The template-level representation includes one or more hierarchies of templates. Each template represents a corresponding portion of the electronic device design. The technique further includes determining constraint declarations associated with the electronic device design and verifying whether there is a functional equivalence between the template-level representation to a register-transfer-level (RTL) representation of the electronic device design. The technique additionally includes verifying whether the constraint declarations are valid and verifying the electronic device design responsive to verifying the functional equivalence and verifying the constraint declarations. | 2010-06-17 |
20100153894 | METHOD AND SYSTEM FOR SEMICONDUCTOR DESIGN HIERARCHY ANALYSIS AND TRANSFORMATION - A method and apparatus for partitioning an input design into repeating patterns called template cores, for the application of reticle enhancement methods, design verification for manufacturability, and design corrections for optical and process effects, is accomplished by hierarchy analysis to extract cell overlap information. Hierarchy analysis is also performed to extract hierarchy statistics. Finally, template core candidates are identified. This allows the design to be made amenable to design corrections or other analyses or modifications that can leverage the hierarchy of the design, since the cell hierarchy could otherwise be very deep or cells could have significant overlap with each other. | 2010-06-17 |
20100153895 | TIMING ERROR SAMPLING GENERATOR, CRITICAL PATH MONITOR FOR HOLD AND SETUP VIOLATIONS OF AN INTEGRATED CIRCUIT AND A METHOD OF TIMING TESTING - A timing error sampling generator, a path monitor, an IC, a method of performing timing tests and a library of cells. In one embodiment, the timing error sampling generator includes: (1) a hold delay element having an input and an output and configured to provide a hold violation delayed signal at said output by providing a first predetermined delay to a clock signal received at said input, said first predetermined delay corresponding to a hold violation time for a path to be monitored and (2) a hold logic element having a first input coupled to said input of said hold delay element, a second input coupled to said output of said hold delay element and an output at which said hold logic element is configured to respond to said first and second inputs to provide a clock hold signal when logic levels at said first and second inputs are at a same level. | 2010-06-17 |
20100153896 | REAL-TIME CRITICAL PATH MARGIN VIOLATION DETECTOR, A METHOD OF MONITORING A PATH AND AN IC INCORPORATING THE DETECTOR OR METHOD - A margin violation detector for detecting margin violations of critical paths, a method of monitoring data paths and an IC. In one embodiment, the margin violation detector includes: (1) a monitor flip-flop having a monitor input couplable to a critical path input of a capture flip-flop of a critical path, (2) an exclusive OR gate having a first input couplable to an output of the capture flip-flop and a second input couplable to an output of the monitor flip-flop and (3) a violation detect flip-flop having a detection input couplable to an output of the exclusive OR gate. | 2010-06-17 |
20100153897 | SYSTEM AND METHOD FOR EMPLOYING SIGNOFF-QUALITY TIMING ANALYSIS INFORMATION CONCURRENTLY IN MULTIPLE SCENARIOS TO REDUCE LEAKAGE POWER IN AN ELECTRONIC CIRCUIT AND ELECTRONIC DESIGN AUTOMATION TOOL INCORPORATING THE SAME - A leakage power recovery system and method, and an electronic design automation (EDA) tool incorporating either or both of the system and the method. In one embodiment, the timing signoff tool includes: (1) a power recovery module configured to carry out an instance of an initial power recovery process in each of multiple scenarios concurrently, the initial power recovery process including making first conditional replacements of cells in at least one path in a circuit design with lower leakage cells and estimating a delay and a slack of the at least one path based on the first conditional replacements and (2) a speed recovery module associated with the power recovery module and configured to carry out a speed recovery process in each of the multiple scenarios concurrently, the speed recovery process including determining whether the first conditional replacements cause a timing violation with respect to the at least one path and making second conditional replacements with higher leakage cells until the timing violation is removed. | 2010-06-17 |
20100153898 | MODEL BUILD IN THE PRESENCE OF A NON-BINDING REFERENCE - One or more hardware description language (HDL) files describe a plurality of hierarchically arranged design entities defining a digital design to be simulated and a plurality of configuration entities not belonging to the digital design that logically control settings of a plurality of configuration latches in the digital design. The HDL file(s) are compiled to obtain a simulation executable model of the digital design and an associated configuration database. The compiling includes parsing a configuration statement that specifies an association between an instance of a configuration entity and a specified configuration latch, determining whether or not the specified configuration latch is described in the HDL file(s), and if not, creating an indication in the configuration database that the instance of the configuration entity had a specified association to a configuration latch to which it failed to bind. | 2010-06-17 |
20100153899 | METHODS AND APPARATUSES FOR DESIGNING LOGIC USING ARITHMETIC FLEXIBILITY - Methods and apparatuses for designing logic are described. In one embodiment, a method includes determining a directive which specifies a format for data in a data processing operation and creating a representation of logic to perform the data processing operation, wherein the creating uses the directive as a minimum format, rather than an exact or required format, for at least a portion of the representation of logic. Other methods are disclosed, and systems and machine readable media are also disclosed. | 2010-06-17 |
20100153900 | AUTOMATED CIRCUIT DESIGN PROCESS FOR GENERATION OF STABILITY CONSTRAINTS FOR GENERICALLY DEFINED ELECTRONIC SYSTEM WITH FEEDBACK - A method is described that involves accepting a description of an electronic system having feedback. The method further includes expressing a real root of the electronic system's transfer function and expressing a real part of a complex root of the electronic system's transfer function. The method further includes expressing a time parameter as a maximum of the real root and the real part of a complex root. The method further involves expressing a settling time of the electronic system with the time parameter and using the settling time to automatically generate a design for the electronic system. | 2010-06-17 |
20100153901 | Determining manufacturability of lithographic mask by reducing target edge pairs used in determining a manufacturing penalty of the lithographic mask - The manufacturability of a lithographic mask employed in fabricating instances of a semiconductor device is determined. Target edge pairs are selected from mask layout data of the lithographic mask to determine a manufacturing penalty in making the lithographic mask. The mask layout data includes polygons, where each polygon has edges, and where each target edge pair is defined by two of the edges of one or more of the polygons. The number of the target edge pairs is reduced to decrease computational volume in determining the manufacturing penalty in making the lithographic mask. The manufacturability of the lithographic mask, including the manufacturing penalty in making the lithographic mask, is determined based on the target edge pairs as reduced in number. The manufacturability of the lithographic mask is output. The manufacturability of the lithographic mask is dependent on the manufacturing penalty in making the lithographic mask. | 2010-06-17 |
20100153902 | DETERMINING MANUFACTURABILITY OF LITHOGRAPHIC MASK BY SELECTING TARGET EDGE PAIRS USED IN DETERMINING A MANUFACTURING PENALTY OF THE LITHOGRAPHIC MASK - The manufacturability of a lithographic mask employed in fabricating instances of a semiconductor device is determined. Target edges are selected from mask layout data of the lithographic mask. The mask layout data includes polygons distributed over cells, where each polygon has edges. The cells include a center cell, two vertical cells above and below the center cell, and two horizontal cells to the left and right of the center cell. Target edge pairs are selected for determining a manufacturing penalty in making the lithographic mask, in a manner that decreases the computational volume in determining the manufacturing penalty. The manufacturability of the lithographic mask, including the manufacturing penalty in making the lithographic mask, is determined based on the target edge pairs selected. The manufacturability of the lithographic mask is output. The manufacturability of the lithographic mask is dependent on the manufacturing penalty in making the lithographic mask. | 2010-06-17 |
20100153903 | DETERMINING MANUFACTURABILITY OF LITHOGRAPHIC MASK USING CONTINUOUS DERIVATIVES CHARACTERIZING THE MANUFACTURABILITY ON A CONTINUOUS SCALE - The manufacturability of a lithographic mask employed in fabricating instances of a semiconductor device is determined. Target edge pairs are selected from mask layout data of the lithographic mask, for determining a manufacturing penalty in making the lithographic mask. The mask layout data includes polygons, where each polygon has a number of edges. Each target edge pair is defined by two of the edges of one or more of the polygons. The manufacturability of the lithographic mask, including the manufacturing penalty in making the lithographic mask, is determined. Determining the manufacturing penalty is based on the target edge pairs as selected. Determining the manufacturability of the lithographic mask uses continuous derivatives characterizing the manufacturability of the lithographic mask on a continuous scale. The manufacturability of the lithographic mask is output. The manufacturability of the lithographic mask is dependent on the manufacturing penalty in making the lithographic mask. | 2010-06-17 |
20100153904 | Model-based pattern characterization to generate rules for rule-model-based hybrid optical proximity correction - A system and method are provided for analyzing layout patterns via simulation using a lithography model to characterize the patterns and generate rules to be used in rule-based optical proximity correction (OPC). The system and method analyze a series of layout patterns conforming to a set of design rules by simulation using a lithography model to obtain a partition of the pattern spaces into one portion that requires only rule-based OPC and another portion that requires model-based OPC. A corresponding hybrid OPC system and method are also introduced that utilize the generated rules to correct an integrated circuit (IC) design layout which reduces the OPC output complexity and improves turnaround time. | 2010-06-17 |
20100153905 | PATTERN LAYOUT DESIGNING METHOD, SEMICONDUCTOR DEVICE MANUFACTURING METHOD, AND COMPUTER PROGRAM PRODUCT - A graph is created in which mask patterns adjacent to one another at a distance in which desired printing resolution cannot be obtained in a lithography process among mask patterns generated based on a pattern layout design drawing are set as nodes connected to one another by edges. An odd number loop formed by an odd number of nodes is selected from closed loops. When the selected odd number loop is not isolated, based on whether a closed loop group in which a plurality of closed loops including the odd number loop are connected includes an even number loop formed by an even number of nodes, rearrangement target nodes are selected from the odd number loop included in the closed loop group according to different selection references. The layout of patterns described in the pattern layout design drawing is rearranged corresponding to the selected rearrangement target nodes. | 2010-06-17 |
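The odd-loop condition in the abstract above corresponds to a standard graph property: a conflict graph of mask patterns can be split across two exposures exactly when it is bipartite, i.e. contains no odd cycle. A minimal sketch of detecting such an odd loop via BFS 2-coloring follows; the graph shape and node names are invented for illustration and this is not the patent's rearrangement procedure.

```python
from collections import deque

def find_odd_loop(adjacency):
    """Return True if the conflict graph contains an odd cycle
    (i.e. the patterns cannot be 2-colored onto two masks)."""
    color = {}
    for start in adjacency:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency[node]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return True   # same color on both ends: odd loop
    return False
```

For example, a triangle of three mutually too-close patterns is an odd loop, while a four-pattern ring is an even loop and colors cleanly.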
20100153906 | CAPTURING INFORMATION ACCESSED, UPDATED AND CREATED BY SERVICES AND USING THE SAME FOR VALIDATION OF CONSISTENCY - Techniques for extending a service model with specification of information consumed. The service model includes specification of at least one exposed interface. A receiving operation receives specification of information consumed by a service implementation of the service model. The information consumed is information that is or needs to be utilized by the service implementation without being passed through the exposed interface. A generating operation automatically generates an extended service model using a computer processor. The extended service model includes specification of the exposed interface and specification of the information consumed by the service implementation. | 2010-06-17 |
20100153907 | Configurable Unified Modeling Language Building Blocks - Illustrative embodiments provide a computer-implemented method for configurable Unified Modeling Language building blocks. The computer-implemented method obtains a Unified Modeling Language specification and generates a set of logical units from the Unified Modeling Language specification to form a set of building blocks. The computer-implemented method further fetches desired blocks from the set of building blocks according to predefined criteria to form a set of desired blocks, and presents the set of desired building blocks to a requestor for execution of functions provided by the set of desired building blocks to complete a defined task. | 2010-06-17 |
20100153908 | IMPACT ANALYSIS OF SOFTWARE CHANGE REQUESTS - In one example, a system is provided to determine the impact of implementing a change request on a software program. The system may include an architecture model of the software program that includes components. Each of the components may have attributes that may be used by the system to determine a degree of effort to modify each respective one of the components. Components may be associated with keywords. The system may search the change request for the keywords to identify components that may be impacted by the change request. The system may determine the degree of effort to modify any impacted component based on the architecture model. The system may determine the overall impact on the software program based on the degree of effort determined for the impacted components. | 2010-06-17 |
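The keyword-matching step described in the abstract above can be sketched very simply. This is a hypothetical illustration, not the patented system: the component names, keyword sets, and effort weights are all invented for the example.

```python
# Toy impact analysis: components carry keywords and an effort weight;
# a change request is scanned for keywords to find impacted components,
# and the overall impact is the sum of their effort weights.

COMPONENTS = {
    "AuthService":  {"keywords": {"login", "password"}, "effort": 5},
    "ReportEngine": {"keywords": {"report", "export"},  "effort": 8},
    "Scheduler":    {"keywords": {"cron", "schedule"},  "effort": 3},
}

def impact_of(change_request):
    """Return (impacted component -> effort, total effort estimate)."""
    words = set(change_request.lower().split())
    impacted = {name: spec["effort"]
                for name, spec in COMPONENTS.items()
                if spec["keywords"] & words}   # any keyword hit
    return impacted, sum(impacted.values())
```

In the real system the per-component effort would be derived from attributes in the architecture model rather than a fixed number.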
20100153909 | Method and System for Building an Application - A method and system for building an application are provided. The method includes: generating a user model relating to a new application to be built, the user model including at least one role with one or more associated tasks. A task list is compiled for the tasks in the user model, including removing any duplications of tasks. A task to application component mapping is accessed, wherein the application components to which the tasks are mapped are spread over one or more existing applications. The application components mapped to by the tasks are retrieved and compiled in the new application. | 2010-06-17 |
20100153910 | SUBGRAPH EXECUTION CONTROL IN A GRAPHICAL MODELING ENVIRONMENT - Exemplary embodiments allow subgraph execution control within a graphical modeling or graphical programming environment. In an embodiment, a subgraph may be identified as a subset of blocks within a graphical model, or graphical program, or both. A subgraph initiator may explicitly execute the subgraph while maintaining data dependencies within the subgraph. Explicit signatures may be defined for the subgraph initiator and the subgraph either graphically or textually. Execution control may be branched wherein the data dependencies within the subgraph are maintained. Execution control may be joined together wherein the data dependencies within the subgraph are maintained. | 2010-06-17 |
20100153911 | Optimized storage of function variables - Optimized storage of function variables in compiled code is disclosed. It is determined that a variable of a first function is required to be available for use by a second function subsequent to return of the first function. Machine code is generated to escape the variable from a storage location in a stack memory to a storage location in a heap memory, prior to the variable being removed from the stack memory, in connection with return of the first function. | 2010-06-17 |
20100153912 | Variable type knowledge based call specialization - Variable type knowledge based call specialization is disclosed. An indication is received that a variable that is an argument of a function or operation the behavior of which depends at least in part on a data type of the argument is of a first data type. Machine code that implements a first behavior that corresponds to the first data type, but not a second behavior that corresponds to a second data type other than the first data type, is generated for the function or operation. | 2010-06-17 |
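The specialization idea in the abstract above, generating code for only the one behavior the known type requires, can be mimicked at a high level in Python. This is a hedged sketch of the concept, not the patented code generator: the function names and type cases are invented.

```python
# Once a variable is known to be of one type, emit a function body that
# implements only that type's behavior, omitting dispatch for others.

def make_specialized_double(known_type):
    """Generate a 'double' function whose body handles only known_type."""
    if known_type is int:
        def double(x):
            return x * 2          # integer-arithmetic path only
    elif known_type is str:
        def double(x):
            return x + x          # string-concatenation path only
    else:
        raise TypeError(f"no specialization for {known_type!r}")
    double.specialized_for = known_type
    return double
```

A real just-in-time compiler would emit machine code for the chosen path; the closure here just stands in for that generated code.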
20100153913 | Systems and Methods for Executing Object-Oriented Programming Code Invoking Pre-Existing Objects - Methods, computer-readable media, and systems are provided for executing programming code. In one embodiment, a server may store running objects to be used by the programming code. The server may provide a code development console through which the programming code may be input at a remote terminal. The server may receive the programming code inputted into the code development console, execute the programming code by using operations of the running objects, and transmit an execution result of the programming code to the remote terminal for display in the code development console. | 2010-06-17 |
20100153914 | SERVICE RE-FACTORING METHOD AND SYSTEM - A service re-factoring method and system. The method includes selecting by a computing system, a first service comprising a first name. The computing system receives a second name for a second service to be generated from the first service. The computing system executes a service refactoring software application, adjusts a granularity of the first service, and generates the second service. The computing system retrieves first traceability links associated with the first service and a first value associated with a first service identification technique. The first traceability links are created within the second service. The computing system generates a second value associated with a second service identification technique. The first service, the first name, and the first value are removed from the computing system. The computing system stores the second service, the second name, the second value, and the first traceability links. | 2010-06-17 |
20100153915 | UNIQUE CONTEXT-BASED CODE ENHANCEMENT - Unique context-based code enhancement of the core functionality of standard source code objects is performed at any position in the code. Desired insertion/replacement position(s) input by a user trigger the generation of a unique context for an enhancement. The unique context is based on characteristics of the code in the standard source code objects, such as the statements proximate to the insertion/replacement position(s). The unique context is associated with one or more extension source code objects that, when integrated into the existing source code at the insertion/replacement position(s), will provide the enhancement. At compile time, the unique context is used to unambiguously locate the insertion/replacement position(s). The extension source code objects can include industry or customer extensions, add-ons, plug-ins, and the like. | 2010-06-17 |
20100153916 | METHOD AND SYSTEM FOR TOPOLOGY MODELING - A computer program product is provided. The computer program product includes a computer useable medium having a computer readable program. The computer readable program when executed on a computer causes the computer to generate a topology role in a topology role tier that is included in a topology pattern. Further, the computer readable program when executed on a computer causes the computer to create a component in a component tier that is defined in the topology pattern such that the component corresponds to the topology role. In addition, the computer readable program when executed on a computer causes the computer to map the topology role to a deployment target. | 2010-06-17 |
20100153917 | SOFTWARE CONFIGURATION CONTROL WHEREIN CONTAINERS ARE ASSOCIATED WITH PHYSICAL STORAGE OF SOFTWARE APPLICATION VERSIONS IN A SOFTWARE PRODUCTION LANDSCAPE - According to some embodiments, a source version of a software product may be established in connection with a software production landscape. A first container, representing a first uniquely addressable physical location in the software production landscape, may then be associated with the source version. An executable derivative version of the software product may be built from the source version, and a second container, representing a second uniquely addressable physical location in the software production landscape, may be associated with the executable derivative version. Software configuration information may then be automatically provided to a user based at least in part on a relationship between the first and second containers at a given point in time. | 2010-06-17 |
20100153918 | COMPOUND VERSIONING AND IDENTIFICATION SCHEME FOR COMPOSITE APPLICATION DEVELOPMENT - The present invention provides a method, a system and a computer program product for defining a version identifier of a service component. The method includes determining various specification levels corresponding to the service component. Thereafter, the determined specification levels are integrated according to a predefined hierarchy to obtain the version identifier of the service component. The present invention also enables the identification of the service components. The service components are identified from one or more service components on the basis of one or more user requirements. | 2010-06-17 |
20100153919 | SYSTEMS AND METHODS FOR TRACKING SOFTWARE STANDS IN A SOFTWARE PRODUCTION LANDSCAPE - According to some embodiments, a first container, representing a first uniquely addressable physical location in a software production landscape, may be associated with a first series of stand snippets related to a software product. Similarly, a second container, representing a second uniquely addressable physical location in the software production landscape, may be associated with a second series of stand snippets related to the software product. Information about a sequence of stand snippets may then be automatically provided to a user, wherein the sequence may include stand snippets from both the first and second containers. | 2010-06-17 |
20100153920 | METHOD FOR BUILDING AND PACKAGING SOFTWARE - A method and apparatus for building a source code based on a project object model (POM) from a source control and for tracking a build environment of the source code is described. Plugins to complete the build as configured in the POM are downloaded from an external plugin repository. A local plugin repository is scanned to determine which plugins have already been downloaded. The local plugin repository is rescanned to determine whether any additional plugins and associated plugins POM files were downloaded during the build as build dependencies. Information of one or more referenced files is inserted into a database wherein the referenced files are identified as build dependencies. Information about the newly-built plugins and associated plugins POM files in the output directory are extracted and added to the database for use by subsequent builds. | 2010-06-17 |
20100153921 | SYSTEM AND METHOD FOR SOFTWARE DEBUGGING USING VARIABLE LOCATION - This disclosure provides software that identifies a variable in a computer program as a target variable. The software automatically processes a first source code statement in the computer program for the target variable. The software determines if the target variable is not found in the particular processed statement and progresses through preceding statements until the target variable is found. The software determines if the particular statement involves an indirect assignment to the target variable and can return that particular statement as the origination statement. Additionally, the software determines if the particular statement involves a direct assignment to the target variable from a second variable. If the particular statement involves a direct assignment to the target variable from a second variable, the software can change the target variable to the second variable and can progress through preceding statements until the new target variable is found in a particular one of the statements. | 2010-06-17 |
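The backward search described in the abstract above can be sketched with statements modeled as simple "lhs = rhs" strings. This is an assumption-laden toy, not the patented debugger: the regex-based statement parser and the rule for distinguishing direct from indirect assignments are invented for the example.

```python
import re

def find_origination(statements, target):
    """Scan backward through statements for where `target` gets its value.

    A direct assignment from another variable retargets the search to
    that variable; anything else (a call, a literal expression) is
    returned as the origination statement.
    """
    for index in range(len(statements) - 1, -1, -1):
        stmt = statements[index]
        match = re.match(r"\s*(\w+)\s*=\s*(.+?)\s*$", stmt)
        if not match or match.group(1) != target:
            continue                      # target not defined here; go back
        rhs = match.group(2)
        if re.fullmatch(r"\w+", rhs) and not rhs.isdigit():
            target = rhs                  # direct assignment: follow source
        else:
            return index, stmt            # indirect assignment: origination
    return None
```

For instance, tracing `c` through `c = b` and `b = a` lands on `a = input()` as the origination statement.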
20100153922 | METHOD OF DETECTING MEMORY LEAK CAUSING PORTION AND EXECUTION PROGRAM THEREOF - With regard to a plurality of data stored in a memory, relationship of data is grasped twice after a time interval therebetween. Next, increased data C | 2010-06-17 |
20100153923 | METHOD, COMPUTER PROGRAM AND COMPUTER SYSTEM FOR ASSISTING IN ANALYZING PROGRAM - A method for grouping algorithms included in a program into groups and thus for assisting in analyzing the program. The method includes the steps of: converting each of the algorithms into a directed graph; judging, as to each representative directed graph stored in a storage unit of a computer system, whether or not the directed graph obtained by the conversion is similar to the representative directed graph; and determining a group to which the directed graph obtained by the conversion belongs from among groups stored in the storage unit in accordance with the similarity judgment. A computer system for performing the above method and a computer program for causing a computer system to perform the above method are also described. | 2010-06-17 |
20100153924 | Method and System for Performing Software Verification - Described is a method, system, and computer program product that provides control of a hardware/software system, and allows deterministic execution of the software under examination. According to one approach, a virtual machine for testing software is used with a tightly synchronized stimulus for the software being tested. A verification tool external to the virtual machine is used to provide test stimulus to and to collect test information from the virtual machine. Test stimulus from the verification tool that is external to the virtual machine provides the stimulation that incrementally operates and changes the state of the virtual machine. The stimulus is created and coverage is collected from outside the virtual machine by first stopping the virtual machine, depositing stimulus, and then reading coverage directly from the virtual machine memory while the machine is stopped. | 2010-06-17 |
20100153925 | Systems and methods for enhanced profiling of computer applications - Systems, methods, and computer-readable media are disclosed for enhanced profiling. An exemplary method includes initiating an execution of a software application which includes a plurality of routines, storing information related to data inputs to the plurality of routines during the execution of the software application, storing resource consumption information for the plurality of routines during the execution of the software application, correlating the resource consumption information for the plurality of routines to a size of the data inputs, and analyzing the correlated resource consumption information to determine a subset of the plurality of routines that exhibit at least a threshold amount of resource consumption with increasing size of the data inputs. | 2010-06-17 |
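The correlation step in the abstract above can be sketched by fitting a least-squares slope of cost against input size per routine and flagging routines whose slope exceeds a threshold. The routine names, sample data, and threshold are invented for this illustration; a real profiler would collect the samples at runtime.

```python
# Per-routine samples are (input_size, resource_cost) pairs; routines
# whose cost grows faster than `threshold` per unit of input are flagged.

def slope(samples):
    """Least-squares slope of cost versus input size."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def flag_hotspots(profile, threshold):
    """Return routines whose cost grows at least `threshold` per input unit."""
    return sorted(name for name, samples in profile.items()
                  if slope(samples) >= threshold)
```

In the invented data below, only the routine whose cost grows superlinearly with input size crosses a threshold of 2 units of cost per unit of input.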
20100153926 | OPERATING SYSTEM AIDED CODE COVERAGE - A method, system, and computer program product for operating system (OS) aided code coverage are provided. The method includes reading context information associated with a software process in response to a context switching event in an OS, the OS initiating the reading of the context information and controlling scheduling of the software process. The method further includes determining coverage information for code implementing the software process as a function of the context information in response to the OS reading the context information, and storing the coverage information as coverage data. | 2010-06-17 |
20100153927 | TRANSFORMING USER SCRIPT CODE FOR DEBUGGING - User script code that is developed to be run in a host application, for example, as a macro can be transformed into debuggable code so that the host application may continue to operate during a debugging stop operation. Traceback methods can be created that call back into the host application to allow the host application to cooperatively operate and update its user-interface. The user script code can be transformed by injecting callbacks to the traceback methods at respective locations in the code where a stopping operation may be installed during debugging. Further, two or more debugging features can be combined into a single user script code transform using an iterator pattern function. | 2010-06-17 |
20100153928 | Developing and Maintaining High Performance Network Services - A network service runtime module executing on a processor is configured to accept a directed acyclic service graph representing elements of a network service application. During execution of the service graph, runtime events are stored. The service graph may be optimized by generating alternate service graphs, and simulating performance of the alternate service graphs in a simulator using the stored runtime events. A hill climber algorithm may be used in conjunction with the simulator to vary alternate service graphs and determine which alternate service graphs provide the greatest utility. Once determined, an alternate service graph with the greatest utility may be loaded into the network service runtime module for execution. | 2010-06-17 |
20100153929 | Converting javascript into a device-independent representation - A device-independent intermediate representation of a source code is generated and stored, e.g., in a memory or other storage mechanism. The stored intermediate representation of the source code is used to generate a device-specific machine code corresponding to the source code. The stored intermediate representation may be updated, e.g., periodically, for example by obtaining an updated version of the source code and compiling the updated source code to generate an updated intermediate representation. The stored intermediate representation may be based on source code received from a device that is synchronized with a compiling device that generates the device-specific machine code. In some cases, the stored intermediate representation may be used to generate for each of a plurality of devices a corresponding device-specific machine code. | 2010-06-17 |
20100153930 | CUSTOMIZABLE DYNAMIC LANGUAGE EXPRESSION INTERPRETER - Embodiments described herein are directed to allowing a user to extend the functionality of a software code interpretation system. In one embodiment, a computer system receives user-defined conversion rules from a user for converting dynamic language code to continuation-based abstract memory representations. The computer system identifies portions of software code that are to be converted from dynamic language abstract memory representations into continuation-based abstract memory representations, where the identified code portions include undefined, extensible input primitives. The computer system also generates a dynamic, extensible set of output primitives interpretable by a continuation-based code interpretation system using the received conversion rules and converts the identified code portions including the undefined, extensible input primitives from dynamic language abstract memory representations into continuation-based abstract memory representations using the generated set of output primitives. | 2010-06-17 |
20100153931 | Operand Data Structure For Block Computation - In response to receiving pre-processed code, a compiler identifies a code section that is not a candidate for acceleration and a code block that is a candidate for acceleration. The code block specifies an iterated operation having a first operand and a second operand, where each of multiple first operands and each of multiple second operands for the iterated operation has a defined addressing relationship. In response to the identifying, the compiler generates post-processed code containing lower level instruction(s) corresponding to the identified code section and creates and outputs an operand data structure separate from the post-processed code. The operand data structure specifies the defined addressing relationship for the multiple first operands and for the multiple second operands. The compiler places a block computation command in the post-processed code that invokes processing of the operand data structure to compute operand addresses. | 2010-06-17 |
20100153932 | MANAGING SET MEMBERSHIP - The present invention extends to methods, systems, and computer program products for managing set membership. A set definition is translated into one or more membership conditions. Each membership condition includes statements about the attributes of a resource that are to be true if the resource is to be included in the set. For any given resource request, resources touched by the request are compared to membership conditions applicable to the touched resources. Thus, embodiments of the invention minimize the work that is done to determine which sets a resource may or may not belong to whenever a resource is modified. Accordingly, based on available resources, embodiments of the invention can scale to accommodate larger numbers of sets and larger numbers of potential members of sets. | 2010-06-17 |
20100153933 | Path Navigation In Abstract Syntax Trees - The subject matter disclosed herein provides methods and apparatus, including computer program products, for navigating abstract syntax trees. In one aspect there is provided a method. The method may include receiving a plurality of nodes, the nodes configured as an abstract syntax tree representing program code. The method may also include identifying at least one node from the plurality of nodes by navigating the plurality of nodes using a path expression. Related systems, apparatus, methods, and/or articles are also described. | 2010-06-17 |
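A rough illustration of path-based AST navigation, here using Python's `ast` module and a `/`-separated, child-axis path of node-type names (the path syntax and the `find_by_path` helper are assumptions for illustration; the application's path-expression language may differ):

```python
import ast

def find_by_path(tree, path):
    """Navigate an AST with a '/'-separated path of node-type names,
    matching direct children at each step, e.g. 'FunctionDef/Return'."""
    current = [tree]
    for step in path.split("/"):
        current = [child
                   for node in current
                   for child in ast.iter_child_nodes(node)
                   if type(child).__name__ == step]
    return current

tree = ast.parse("def f(x):\n    return x + 1")
# Identify Return nodes that are direct children of a FunctionDef.
rets = find_by_path(tree, "FunctionDef/Return")
```

Richer path expressions (wildcards, descendant axes, predicates on node attributes) would layer naturally on the same step-by-step node filtering.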
20100153934 | Prefetch for systems with heterogeneous architectures - A compiler for a heterogeneous system that includes both one or more primary processors and one or more parallel co-processors is presented. For at least one embodiment, the primary processor(s) include a CPU and the parallel co-processor(s) include a GPU. Source code for the heterogeneous system may include code to be performed on the CPU but also code segments, referred to as “foreign macro-instructions”, that are to be performed on the GPU. An optimizing compiler for the heterogeneous system comprehends the architecture of both processors, and generates an optimized fat binary that includes machine code instructions for both the primary processor(s) and the co-processor(s). The optimizing compiler compiles the foreign macro-instructions as if they were predefined functions of the CPU, rather than as remote procedure calls. The binary is the result of compiler optimization techniques, and includes prefetch instructions to load code and/or data into the GPU memory concurrently with execution of other instructions on the CPU. Other embodiments are described and claimed. | 2010-06-17 |
20100153935 | Delayed insertion of safepoint-related code - Delayed insertion of safepoint related code is disclosed. Optimization processing is performed with respect to an intermediate representation of a source code. The optimized intermediate representation is analyzed programmatically to identify a safepoint and insert safepoint related code associated with the safepoint. In some embodiments, analyzing the optimized intermediate representation programmatically comprises determining where to place the safepoint within a program structure of the source code as reflected in the intermediate representation. | 2010-06-17 |
20100153936 | Deferred constant pool generation - Deferred constant pool generation is disclosed. Optimization processing is performed with respect to an intermediate representation of a source code. The optimized intermediate representation is used to generate a constant pool. In some embodiments, the source code comprises JavaScript, which is used to generate an LLVM or other intermediate representation (IR), which intermediate representation is optimized prior to a constant pool being generated. | 2010-06-17 |
20100153937 | SYSTEM AND METHOD FOR PARALLEL EXECUTION OF A PROGRAM - A computer system for executing a computer program on parallel processors, the system having a compiler for identifying within a computer program concurrency markers that indicate that code between them can be executed in parallel and should be executed with delayed side-effects; and an execution system that is operable to execute the code identified by the concurrency markers to generate a queue of side-effects and after execution of that code is completed, sequentially execute the queue of side-effects. | 2010-06-17 |
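The execute-then-replay idea — code between concurrency markers runs with its side effects queued, and the queue is drained sequentially afterwards — might look like this minimal single-threaded sketch (the `DeferredEffects` class is a hypothetical stand-in for the described execution system):

```python
class DeferredEffects:
    """Queue side effects emitted inside a marked region, then
    apply them sequentially after the region completes."""
    def __init__(self):
        self.queue = []

    def emit(self, fn, *args):
        # Record the side effect instead of performing it now.
        self.queue.append((fn, args))

    def flush(self):
        # Sequentially execute the queued side effects, in order.
        for fn, args in self.queue:
            fn(*args)
        self.queue.clear()

log = []
fx = DeferredEffects()
# Between the concurrency markers: computation may run in parallel,
# but writes to shared state are deferred.
for i in range(3):
    fx.emit(log.append, i)
fx.flush()  # after the marked region: side effects applied sequentially
```

Deferring the writes is what makes the marked region safe to parallelize: the computation can proceed in any interleaving while the externally visible effects keep a deterministic, sequential order.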
20100153938 | Computation Table For Block Computation - In response to receiving pre-processed code, a compiler identifies a code section that is not a candidate for acceleration and a code block specifying an iterated operation that is a candidate for acceleration. In response to identifying the code section, the compiler generates post-processed code containing one or more lower level instructions corresponding to the identified code section, and in response to identifying the code block, the compiler creates and outputs an operation data structure separate from the post-processed code that identifies the iterated operation. The compiler places a block computation command in the post-processed code that invokes processing of the operation data structure to perform the iterated operation and outputs the post-processed code. | 2010-06-17 |
20100153939 | REMAPPING DEBUGGABLE CODE - User script code that has been developed for execution in a host application can be remapped to debuggable script code, based on explicit debugging gestures, allowing for appropriate debugging coverage for the code while mitigating execution (in)efficiency issues. Capabilities of an application virtual machine used for the host application can be determined, and the user script code can be instrumented with guards for detecting explicit debugging gestures based on the virtual machine's (VM) capabilities. The instrumented user script code can be executed in a runtime environment, for example, by a just-in-time compilation service. If an explicit debugging gesture is detected, a function where the gesture was detected can be transformed into debuggable script code, in one embodiment, based on the debuggable gesture detected. | 2010-06-17 |
20100153940 | TRANSPORTABLE REFACTORING OBJECT - According to some embodiments, a refactoring object is determined in connection with modification of at least one code-based object. The refactoring object may be transported to a set of systems in a distributed system landscape, and modifications of code-based objects may be performed at each of the set of systems in the system landscape. | 2010-06-17 |
20100153941 | FLEXIBLE CONTENT UPDATE VIA DEPLOYMENT ORDER TEMPLATE - Described herein are a system and a method for multi-functional software solution updates, which use a deployment order template. The deployment order template contains a plurality of action instructions and deployable component definitions. The method calculates an individual deployment sequence for each specific update scenario according to a plurality of deployable components available for update and the deployment order template. If some deployable components require as a prerequisite the execution of some steps, these steps are executed prior to the update of the depending components. If some deployable components require the execution of some steps after they have been updated, these steps are executed after the update occurs. If some of the deployable components specified in the deployment order template are not available for update, the deployment sequence skips the steps associated with these deployable components. | 2010-06-17 |
20100153942 | METHOD AND A SYSTEM FOR DELIVERING LATEST HOTFIXES WITH A SUPPORT PACKAGE STACK - A method and a system are described that involve delivering latest hotfixes with a support package stack. In one embodiment, the method includes receiving a selection of a stack of update components, the stack to be applied on a product and receiving correction data for an update component of the stack, the correction data being available on a software provider system. The method also includes detecting the correction data for the update component. Further, the method includes applying the correction data on the update component in the stack. | 2010-06-17 |
20100153943 | System and Method for Distributing Software Updates - A system includes a control server, a data package server, a home storage device, and a set-top box. The control server is configured to provide information related to a data package. The data package server is configured to provide the data package. The home storage device is configured to receive the data package as a multicast from the data package server. Additionally, the home storage device is configured to store the data package in a storage. The set-top box is configured to receive information related to the data package from the control server and retrieve the data package from the home storage device. | 2010-06-17 |
20100153944 | SOFTWARE INSTALLATION SYSTEM AND METHOD - A software installation system and method using a first mobile electronic device and a second mobile electronic device records an installation record of a software application of the first mobile electronic device and generates an installation list of the software application. The system and method further transmits the installation list to a server, and accesses the installation list by a second mobile electronic device and sends a request to the server for installing the software application. Furthermore, the system and method transmits the installation file to the second mobile electronic device, in response to the determination that the software application is available at no cost, and installs the software application in the second mobile electronic device according to the installation file and the installation record. | 2010-06-17 |
20100153945 | SHARED RESOURCE SERVICE PROVISIONING USING A VIRTUAL MACHINE MANAGER - A virtual machine manager (VMM) enables provisioning of services to multiple clients via a single data processing system configured as multiple virtual machines. The VMM performs several management functions, including: configuring/assigning each virtual machine (VM) for/to a specific, single client; scheduling the time and order for completing client services via the assigned client VM; instantiating a client VM at a scheduled time and triggering the execution of services tasks required for completing the specific client services on the client VM; monitoring and recording historical information about the actual completion times of services on a client VM; and updating a scheduling order for sequential instantiating of the multiple client VMs and corresponding client services, based on one or more of (i) pre-established time preferences, (ii) priority considerations, and (iii) historical data related to actual completion times of client services at a client VM. | 2010-06-17 |
20100153946 | DESKTOP SOURCE TRANSFER BETWEEN DIFFERENT POOLS - A method, apparatus, and system of desktop source transfer between different pools are disclosed. In one embodiment, a machine-readable medium embodies instructions for determining that a transfer request is associated with a desktop source, accessing the desktop source from a source pool, and automatically transferring the desktop source from the source pool to a destination pool. | 2010-06-17 |
20100153947 | INFORMATION SYSTEM, METHOD OF CONTROLLING INFORMATION, AND CONTROL APPARATUS - Provided is an information system including a server apparatus having a virtualization control unit which implements a virtual machine, and a storage apparatus having a virtual logical volume management unit which provides a virtual logical volume (VLU) configured by using a real logical volume (RLU). In the system, the VLU is allocated to each of the virtual machines implemented in the same server apparatus, the RLUs configuring the VLU for each virtual machine differ depending on the virtual machine, an identifier of the virtual machine is added to an I/O request of the virtual machine, the I/O request with the identifier is transmitted to the storage apparatus, and the RLU as a target in the I/O request is identified based on the identifier by the storage apparatus. | 2010-06-17 |
20100153948 | COMBINED WEB AND LOCAL COMPUTING ENVIRONMENT - A system and method enabling two-way communication between a virtual hosted operating system running in a web page and the local operating system and applications, in order to allow a user to combine the advantages of both systems. | 2010-06-17 |
20100153949 | LIVE STREAMING MEDIA AND DATA COMMUNICATION HUB - A method for delivering multimedia services by providing a virtual machine having preconfigured components unique for a client and saving a software image of the virtual machine under a special code that serves as a tag that uniquely identifies a networking site of the client. A local partition on the client's physical machine, such as a laptop, is isolated, and the isolated local partition is virtualized to one or many virtual machines to allow the transport of media to a web server of choice that has the capability of streaming back to the interface constantly and instantly for full round trip interactions. The isolated partition of a user can become a live TV or radio station via a virtual channel. | 2010-06-17 |
20100153950 | POLICY MANAGEMENT TO INITIATE AN AUTOMATED ACTION ON A DESKTOP SOURCE - A method, apparatus, and system of policy management to initiate an automated action on a desktop source are disclosed. In one embodiment, a machine-readable medium embodying a set of instructions is disclosed. An event is detected. The event associated with a desktop source is automatically determined. A category of the event is determined. A policy is associated to the event based on the category. The policy is applied to the desktop source. Desktop sources may be reshuffled based on the policy. The internal event may be determined as a load balancing issue in which the desktop source may reside in a pool having maximum utilization. The desktop source may be transferred to another pool having less utilization based on the policy. | 2010-06-17 |
20100153951 | OPERATING SYSTEM SHUTDOWN REVERSAL AND REMOTE WEB MONITORING - A method is disclosed for reversing operating system shutdown, including: detecting, by a monitoring program, an attempt by a user to log off, shut down, or restart a computer containing an operating system capable of running a plurality of program windows; determining if any program window is still open in the operating system; automatically cancelling, by the monitoring program, the logoff, shutdown, or restart request if it is determined that a program window is still open; and attempting to close any open program window by the monitoring program. | 2010-06-17 |
20100153952 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR MANAGING BATCH OPERATIONS IN AN ENTERPRISE DATA INTEGRATION PLATFORM ENVIRONMENT - Methods, systems, and computer program products for managing batch operations are provided. A method includes defining a window of time in which a batch will run by entering a batch identifier into a batch table, the batch identifier specifying a primary key of the batch table and configured as a foreign key to a batch schedule table. The time is entered into the batch schedule table. The method further includes entering extract-transform-load (ETL) information into the batch table. The ETL information includes a workflow identifier, a parameter file identifier, and a location in which the workflow resides. The method includes retrieving the workflow from memory via the workflow identifier and location, retrieving the parameter file, and processing the batch according to the process, workflow, and parameter file. | 2010-06-17 |
20100153953 | UNIFIED OPTIMISTIC AND PESSIMISTIC CONCURRENCY CONTROL FOR A SOFTWARE TRANSACTIONAL MEMORY (STM) SYSTEM - A method and apparatus for unified concurrency control in a Software Transactional Memory (STM) is herein described. A transaction record associated with a memory address referenced by a transactional memory access operation includes optimistic and pessimistic concurrency control fields. Access barriers and other transactional operations/functions are utilized to maintain both fields of the transaction record, appropriately. Consequently, concurrent execution of optimistic and pessimistic transactions is enabled. | 2010-06-17 |
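A toy model of a transaction record carrying both concurrency-control fields — a version counter for optimistic validation and an owner field acting as the pessimistic lock — might look like this (the names, the internal metadata lock, and the barrier methods are illustrative assumptions, not the patented design):

```python
import threading

class TxRecord:
    """Per-address transaction record with both concurrency-control
    fields: a version counter (optimistic) and an owner (pessimistic)."""
    def __init__(self):
        self.version = 0               # optimistic: bumped on each commit
        self.owner = None              # pessimistic: lock-holding transaction
        self._meta = threading.Lock()  # protects the record's own fields

    def read_version(self):
        # Optimistic read barrier: remember the version, validate later.
        return self.version

    def validate(self, seen_version):
        # An optimistic read is valid only if the record is unlocked
        # and unchanged since the version was observed.
        return self.owner is None and self.version == seen_version

    def acquire(self, tx):
        # Pessimistic write barrier: take exclusive ownership.
        with self._meta:
            if self.owner in (None, tx):
                self.owner = tx
                return True
            return False

    def commit(self, tx):
        with self._meta:
            if self.owner is tx:
                self.version += 1      # invalidates optimistic readers
                self.owner = None
```

Keeping both fields in one record is what lets optimistic and pessimistic transactions run concurrently: a pessimistic writer's ownership is visible to optimistic validation, and an optimistic commit's version bump is visible to everyone.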
20100153954 | Apparatus and Methods for Adaptive Thread Scheduling on Asymmetric Multiprocessor - Techniques for adaptive thread scheduling on a plurality of cores for reducing system energy are described. In one embodiment, a thread scheduler receives leakage current information associated with the plurality of cores. The leakage current information is employed to schedule a thread on one of the plurality of cores to reduce system energy usage. On chip calibration of the sensors is also described. | 2010-06-17 |
20100153955 | SAVING PROGRAM EXECUTION STATE - Techniques are described for managing distributed execution of programs. In at least some situations, the techniques include decomposing or otherwise separating the execution of a program into multiple distinct execution jobs that may each be executed on a distinct computing node, such as in a parallel manner with each execution job using a distinct subset of input data for the program. In addition, the techniques may include temporarily terminating and later resuming execution of at least some execution jobs, such as by persistently storing an intermediate state of the partial execution of an execution job, and later retrieving and using the stored intermediate state to resume execution of the execution job from the intermediate state. Furthermore, the techniques may be used in conjunction with a distributed program execution service that executes multiple programs on behalf of multiple customers or other users of the service. | 2010-06-17 |
20100153956 | Multicore Processor And Method Of Use That Configures Core Functions Based On Executing Instructions - A multiprocessor system having plural heterogeneous processing units schedules instruction sets for execution on a selected of the processing units by matching workload processing characteristics of processing units and the instruction sets. To establish an instruction set's processing characteristics, the homogeneous instruction set is executed on each of the plural processing units with one or more performance metrics tracked at each of the processing units to determine which processing unit most efficiently executes the instruction set. Instruction set workload processing characteristics are stored for reference in scheduling subsequent execution of the instruction set. | 2010-06-17 |
20100153957 | SYSTEM AND METHOD FOR MANAGING THREAD USE IN A THREAD POOL - A method and system for managing a thread pool of a plurality of first type threads and a plurality of second type threads in a computer system using a thread manager, specifically, a method for prioritizing, cancelling, balancing the work load between first type threads and second type threads, and avoiding deadlocks in the thread pool. A queue stores a first type task and a second type task, the second type task being executable by at least one of the plurality of second type threads. The availability of at least one of the plurality of first type threads is determined, and if none are available, the availability of at least one of the plurality of second type threads is determined. An available second type thread is selected to execute the first type task. | 2010-06-17 |
20100153958 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR APPLYING CONDITIONAL RESOURCE THROTTLES TO FACILITATE WORKLOAD MANAGEMENT IN A DATABASE SYSTEM - A system, method, and computer-readable medium that facilitate workload management in a computer system are provided. A workload's system resource consumption is adjusted against a target consumption level thereby facilitating maintenance of the consumption to the target consumption within an averaging interval by dynamically controlling workload concurrency levels. System resource consumption is compensated during periods of over or under-consumption by adjusting workload consumption to a larger averaging interval. Further, mechanisms for limiting, or banding, dynamic concurrency adjustments to disallow workload starvation or unconstrained usage at any time are provided. Disclosed mechanisms provide for category of work prioritization goals and subject-area resource division management goals, allow for unclaimed resources due to a lack of demand from one workload to be used by active workloads to yield full system utilization at all times, and provide for monitoring success in light of the potential relative effects of workload under-demand, and under/over-consumption management. | 2010-06-17 |
20100153959 | CONTROLLING AND DYNAMICALLY VARYING AUTOMATIC PARALLELIZATION - A system and method for automatically controlling run-time parallelization of a software application. A buffer is allocated during execution of program code of an application. When a point in program code near a parallelized region is reached, demand information is stored in the buffer in response to reaching a predetermined first checkpoint. Subsequently, the demand information is read from the buffer in response to reaching a predetermined second checkpoint. Allocation information corresponding to the read demand information is computed and stored in the buffer for the application to later access. The allocation information is read from the buffer in response to reaching a predetermined third checkpoint, and the parallelized region of code is executed in a manner corresponding to the allocation information. | 2010-06-17 |
20100153960 | METHOD AND APPARATUS FOR RESOURCE MANAGEMENT IN GRID COMPUTING SYSTEMS - A method for resource management in grid computing systems includes defining user's demands on execution of a task as SLA (Service Level Agreements) information; monitoring states of resources in a grid to store the states as resource state information; calculating for each resource in the grid, based on the resource state information, an expected completion time of the task and an expected profit to be obtained by completing the task; creating an available resource cluster by using the expected completion time and the expected profit; and determining, if the SLA information is satisfied by the available resource cluster, a task processing policy for executing the task by using at least one resource in the available resource cluster. The available resource cluster is a set of resources having the expected completion time within a deadline of the task and the expected profit being positive. | 2010-06-17 |
20100153961 | STORAGE SYSTEM HAVING PROCESSOR AND INTERFACE ADAPTERS THAT CAN BE INCREASED OR DECREASED BASED ON REQUIRED PERFORMANCE - A storage system is comprised of an interface unit | 2010-06-17 |
20100153962 | Method and system for controlling distribution of work items to threads in a server - A system and method are presented to control distribution of work items to threads in a server. The system and method include a permit dispenser that keeps track of permits, and a plurality of thread pools, each configured with a desired concurrency and including a queue whose configurable size equals the total number of work items to be executed by pool threads in the thread pool. The number of permits specifies a total number of threads available for executing the work items in the server. Each pool thread executes a work item in the thread pool, determines whether a thread surplus or a thread deficit exists, and shrinks or grows the thread pool respectively. | 2010-06-17 |
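The permit-dispenser bookkeeping — a global budget of thread permits that pools draw on when growing and return when shrinking — can be sketched as a single-threaded toy model (the class names and the `rebalance` policy are assumptions for illustration):

```python
class PermitDispenser:
    """Tracks a global budget of thread permits shared by all pools."""
    def __init__(self, total):
        self.available = total

    def acquire(self, n=1):
        # Grant as many permits as the budget allows, possibly fewer.
        granted = min(n, self.available)
        self.available -= granted
        return granted

    def release(self, n=1):
        self.available += n

class Pool:
    def __init__(self, dispenser, desired_concurrency):
        self.dispenser = dispenser
        self.desired = desired_concurrency
        self.threads = 0

    def rebalance(self):
        """Grow toward the desired concurrency if permits allow;
        shrink (returning permits) when over the target."""
        if self.threads < self.desired:
            self.threads += self.dispenser.acquire(self.desired - self.threads)
        elif self.threads > self.desired:
            surplus = self.threads - self.desired
            self.threads -= surplus
            self.dispenser.release(surplus)

dispenser = PermitDispenser(total=4)
a = Pool(dispenser, desired_concurrency=3)
b = Pool(dispenser, desired_concurrency=3)
a.rebalance()   # a takes 3 permits
b.rebalance()   # only 1 permit left for b
```

Because the dispenser caps the sum of all pool sizes, one pool shrinking immediately frees capacity another pool can claim on its next rebalance.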
20100153963 | Workload management in a parallel database system - Embodiments of the present invention are directed to a workload-management-services component of a parallel database-management system that monitors usage of computational resources and provides a query-processing-task-management interface, and to a query-execution engine that receives query-processing requests, each associated with one of a number of services, from host computers. The query-execution engine accesses the workload-management-services component to determine, based on the current usage of computational resources within the parallel database-management system, whether to immediately launch execution of query-processing tasks corresponding to the received requests or to place the requests on wait queues for subsequent execution. | 2010-06-17 |
20100153964 | LOAD BALANCING OF ADAPTERS ON A MULTI-ADAPTER NODE - Load balancing of adapters on a multi-adapter node of a communications environment. A task executing on the node selects an adapter resource unit to be used as its primary port for communications. The selection is based on the task's identifier, and facilitates a balancing of the load among the adapter resource units. Using the task's identifier, an index is generated that is used to select a particular adapter resource unit from a list of adapter resource units assigned to the task. The generation of the index is efficient and predictable. | 2010-06-17 |
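The task-identifier-based selection can be as simple as a modulo index over the task's assigned adapter list, which yields an even and predictable spread of primary ports (the adapter names below are hypothetical placeholders):

```python
def select_adapter(task_id, adapters):
    """Deterministically map a task to one adapter resource unit,
    spreading consecutive task identifiers evenly over the list."""
    return adapters[task_id % len(adapters)]

adapters = ["adapter0", "adapter1", "adapter2"]  # assigned resource units
# Six tasks choose their primary ports independently, with no
# coordination needed, yet the load splits evenly.
assignments = [select_adapter(t, adapters) for t in range(6)]
```

Because each task computes its index from its own identifier alone, the selection needs no shared state, which is what makes it both efficient and predictable across the node.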
20100153965 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER BASED ON INTER-THREAD COMMUNICATIONS - A technique for operating a high performance computing (HPC) cluster includes monitoring communication between threads assigned to multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more of the threads are moved to a different one of the multiple processors based on the communication between the threads. | 2010-06-17 |
20100153966 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER USING LOCAL JOB TABLES - A technique for operating a high performance computing cluster includes monitoring workloads of multiple processors. The high performance computing cluster includes multiple nodes that each include two or more of the multiple processors. Workload information for the multiple processors is periodically updated in respective local job tables maintained in each of the multiple nodes. Based on the workload information in the respective local job tables, one or more threads are periodically moved to a different one of the multiple processors. | 2010-06-17 |
20100153967 | PERSISTENT LOCAL STORAGE FOR PROCESSOR RESOURCES - Local storage may be allocated for each processing resource in a process of a computer system. Each processing resource may be virtualized and may have a one-to-one or a many-to-one correspondence with physical processors. The contents of each local storage persist across various execution contexts that are executed by a corresponding processing resource. Each local storage may be accessed without synchronization (e.g., locks) by each execution context that is executed on a corresponding processing resource. The local storages provide the ability to segment data and store and access the data without synchronization. The local storages may be used to implement lock-free techniques such as a generalized reduction where a set of values is combined through an associative operator. | 2010-06-17 |
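A minimal sketch of the lock-free reduction pattern described above: each worker thread accumulates into its own registered slot, so the hot loop needs no synchronization, and the per-worker partials are combined with an associative operator at the end (the one-time lock at slot registration is a convenience of this sketch, not part of the described technique):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

_registry = []                     # one slot per worker, combined later
_registry_lock = threading.Lock()  # touched once per worker, at registration
_tls = threading.local()

def _my_slot():
    """Return this thread's private accumulator, creating it on first use."""
    if not hasattr(_tls, "slot"):
        _tls.slot = [0]
        with _registry_lock:
            _registry.append(_tls.slot)
    return _tls.slot

def accumulate(values):
    slot = _my_slot()
    for v in values:               # hot loop: no locks, no contention
        slot[0] += v

data = list(range(100))
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(accumulate, chunks))

# Associative combine of the per-worker partial sums.
total = sum(slot[0] for slot in _registry)
```

Because addition is associative, it does not matter how the work was split across workers or how many slots were created; the combined result is the same as a sequential sum.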
20100153968 | EXTERNAL RENDERING OF CLIPBOARD DATA - Systems, software, and computer implemented methods are described for rendering data into a clipboard and for automatically converting that data from an initial format to a target format. A computer program product is encoded on a tangible machine-readable medium, where the product comprises instructions for causing one or more processors to perform operations. These operations can include receiving a request to copy information from a first application to a clipboard, with the clipboard configured to provide subsequent transfer of the data to target applications. The information is automatically converted into a target format associated with a second application disparate from the first application. The computer program product can further execute operations such as storing the converted information in the target format in memory for use by the clipboard. | 2010-06-17 |
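The copy-and-convert flow described above can be sketched as follows: on a copy request, the data is converted from the source application's format into the target format before being stored for the clipboard. The converter registry and all names here are illustrative assumptions, not the patent's design.

```python
# Minimal clipboard-conversion sketch: look up a converter for the
# (source, target) format pair, apply it, and store the result so the
# clipboard can later paste it into the target application.

CONVERTERS = {
    # Hypothetical converter: strip simple HTML bold tags to get plain text.
    ("html", "plain"): lambda s: s.replace("<b>", "").replace("</b>", ""),
}

def copy_to_clipboard(data, source_fmt, target_fmt, clipboard):
    """Convert `data` to the target format and store it in the clipboard."""
    convert = CONVERTERS.get((source_fmt, target_fmt), lambda s: s)
    clipboard[target_fmt] = convert(data)
    return clipboard[target_fmt]
```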
20100153969 | COMMUNICATION INTERFACE SELECTION ON MULTI-HOMED DEVICES - Configurable selection of communication interfaces on a multi-homed computing device. Application programs executing on the computing device define preferences, policies, and/or restrictions for use of the various communication interfaces. Responsive to a request from one of the application programs to communicate with a destination computing device, a list of the communication interfaces is created based on the preferences defined by the application program. The application program iteratively attempts to establish a connection to the destination computing device using each of the communication interfaces on the list. | 2010-06-17 |
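The interface-selection flow above amounts to two steps: build an ordered candidate list from the application's preferences and restrictions, then try each interface until one connects. A hedged sketch, where `connect` is a caller-supplied callable and all names are assumptions:

```python
# Sketch of preference-ordered interface selection with iterative fallback.

def build_interface_list(interfaces, preferred_order, restricted=()):
    """Order available interfaces by preference, dropping restricted ones.
    Interfaces absent from preferred_order sort last."""
    allowed = [i for i in interfaces if i not in restricted]
    return sorted(
        allowed,
        key=lambda i: (preferred_order.index(i)
                       if i in preferred_order else len(preferred_order)),
    )

def connect_with_fallback(interfaces, connect):
    """Attempt each interface in turn; return the first that succeeds,
    or None if every attempt fails."""
    for iface in interfaces:
        if connect(iface):
            return iface
    return None
```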
20100153970 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING MULTI-DIMENSIONAL MANIPULATIONS TO CONTEXT MODELS - An apparatus for providing multi-dimensional manipulations to context models may include a processor. The processor may be configured to generate a context model including an object representation for objects stored in one or more devices, enable provision of a context value to a calling application via a value interface, and provide an extension to the value interface to enable multiple context values to be associated with each object. A corresponding method and computer program product are also provided. | 2010-06-17 |
20100153971 | Getting Performance Saturation Point Of An Event Driven System - A method and apparatus for regulating the input speed of events to an event processing system. In one embodiment, the method includes measuring a rate of events being outputted by the event processing system and computing an event transmission rate to be used to transmit received events to the event processing system based on the measured rate of events. The method further includes receiving an event with a speed controller to be processed by the event processing system and transmitting the received event by the speed controller to the event processing system according to the computed event transmission rate. | 2010-06-17 |
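The speed-controller loop described in this abstract can be sketched in a few lines: measure the system's output rate, derive an input transmission rate from it (here, simply tracking the output rate with a safety margin), and pace incoming events accordingly. The margin and class names are assumptions for illustration.

```python
# Illustrative event-rate regulator: the transmission rate to the event
# processing system is computed from the measured output rate.

class SpeedController:
    def __init__(self, margin=0.9):
        self.margin = margin   # fraction of measured throughput to submit at
        self.rate = None       # events per second; None until first measurement

    def measure(self, events_out, seconds):
        """Compute the transmission rate from the measured output rate."""
        self.rate = (events_out / seconds) * self.margin
        return self.rate

    def delay_between_events(self):
        """Spacing, in seconds, to use when forwarding events to the system."""
        if not self.rate:
            return 0.0
        return 1.0 / self.rate
```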
20100153972 | AUTOMATED LAMP STACK MIGRATION - Embodiments of the present invention provide a method, system and computer program product for automated LAMP stack data migration. In an embodiment of the invention, a method for automated LAMP stack data migration can be provided. The method can include retrieving a profile for a LAMP stack executing in a source operating platform, selecting a LAMP stack for deployment onto a target operating platform and deploying the selected LAMP stack onto the target operating platform. The method further can include translating the retrieved profile for compatibility with the selected LAMP stack, directing a reboot of the target operating platform, and applying the translated profile to the target operating platform. | 2010-06-17 |
20100153973 | Ultra-Wideband Radio Controller Driver (URCD)-PAL Interface - Various embodiments provide a two-way interface between a URC driver (URCD) and various Protocol Adaption Layer (PAL) drivers. The two-way interface can enable bandwidth to be shared and managed among multiple different PALs. The two-way interface can also be used to implement common radio functionality such as beaconing, channel selection, and address conflict resolution. In at least some embodiments, the two-way interface can be utilized for power management to place PALs in lower power states to conserve power and to support remote wake-up functionality. Further, at least some embodiments can enable vendor-specific PALs to interact with vendor-specific hardware. | 2010-06-17 |
20100153974 | OBTAIN BUFFERS FOR AN INPUT/OUTPUT DRIVER - Disclosed is a computer implemented method, computer program product, and apparatus to obtain buffers in a multiprocessor system. A software component receives a call from an I/O device driver for a buffer, the call including at least one parameter, and walks a bucket data structure to a current bucket. The software component then determines whether the current bucket is free, and obtains a buffer list contained with the current bucket. Responsive to a determination that the current bucket is free, the software component determines whether sufficient buffers are obtained based on the parameter. Upon determining there are sufficient buffers obtained, the software component provides the current bucket and a second bucket as a single buffer list to the I/O device driver. | 2010-06-17 |
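The bucket walk above can be roughed out as follows: traverse a chain of buckets, collect buffer lists from free buckets, and stop once enough buffers have been gathered for the driver's request. The structure and names are illustrative guesses, not the patent's data layout.

```python
# Sketch of obtaining a combined buffer list from a bucket chain.

def obtain_buffers(buckets, needed):
    """Return a single combined buffer list of at least `needed` buffers,
    or None if the bucket chain cannot satisfy the request."""
    collected = []
    for bucket in buckets:                    # walk the bucket data structure
        if not bucket.get("free"):
            continue                          # skip buckets already in use
        collected.extend(bucket["buffers"])   # merge this bucket's list
        if len(collected) >= needed:
            return collected                  # enough buffers obtained
    return None
```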
20100153975 | Multi-pathing with SCSI I/O referrals - The present invention is a method for providing multi-pathing via Small Computer System Interface Input/Output (SCSI I/O) referral between an initiator and a storage cluster which are communicatively coupled via a network, the storage cluster including at least a first target device and a second target device. The method includes receiving an input/output (I/O) at the first target device from the initiator via the network. The I/O includes a data request. The method further includes transmitting a SCSI I/O referral list to the initiator when data included in the data request is not stored on the first target device, but is stored on the second target device. The referral list includes first and second port identifiers for identifying first and second ports of the second target device respectively. The first and second ports of the target device are identified as access ports for accessing the data requested in the data request. | 2010-06-17 |
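The referral decision on the first target can be sketched simply: serve the I/O locally if the requested data is present, otherwise return a referral listing the second target's access ports. Field names and the extent layout are illustrative assumptions.

```python
# Sketch of a SCSI I/O referral-style decision on the first target device.

def handle_io(request_lba, local_extents, remote_ports):
    """Return local data, or a referral list pointing the initiator at
    the ports of the target that actually holds the data."""
    for start, length, data in local_extents:
        if start <= request_lba < start + length:
            return {"status": "data", "payload": data}
    # Data lives on the other target: refer the initiator to its ports.
    return {"status": "referral", "ports": list(remote_ports)}
```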
20100153976 | Generic Data List Manager - Example methods and apparatus for storing and providing application runtime data are disclosed. An example method includes receiving, at a data list manager, a set of identifiers associated, respectively, with one or more persistently stored structured data records. The example method further includes storing, by the data list manager, the set of identifiers. The example method also includes receiving, at the data list manager, a request for one or more of the structured data records and retrieving, by the data list manager, the one or more requested structured data records. The example method still further includes storing, by the data list manager, the retrieved data records in correspondence with their respective identifiers and providing, by the data list manager, the retrieved data records for display to a user. | 2010-06-17 |
20100153977 | Creating Step Descriptions for Application Program Interfaces - Among other disclosed subject matter, a computer program product is tangibly embodied in a computer-readable storage medium and includes instructions that when executed by a processor perform a method for interfacing with an application program. The method includes receiving, from an application program that has an interface, an interface description defining how to make an input into the application program using the interface. The method includes generating a screen for a user to define a step corresponding to a task to be performed in the application program by another user, the screen generated using the interface description. The method includes forwarding a step description for receipt by the application program, the step description created using a definition made under guidance of the screen, and configured consistently with the interface for the application program to create the task. | 2010-06-17 |
20100153978 | DISK CHUCKING DEVICE - A disk chucking device is disclosed. In accordance with an embodiment of the present invention, the disk chucking device coupling a disk to a rotor of a motor such that the disk can be mounted and demounted can include a boss, which is coupled with the rotor, a first elastic body, which includes an inner circumference surrounding the boss and in which the first elastic body has elasticity in a radial direction from a center of the boss, a plurality of second elastic bodies, which are radially disposed from the first elastic body and in which the plurality of second elastic bodies are elastically supported by the first elastic body, and a plurality of chuck pins, which press the disk and in which each of the plurality of chuck pins is elastically supported by each of the plurality of second elastic bodies. | 2010-06-17 |
20100153979 | Electronic apparatus - Disclosed is an electronic apparatus including a main body portion, a tray main body, a tray auxiliary portion, a first top panel, and a second top panel. The tray main body is capable of being taken into and out of the main body portion and includes a mounted portion, a drive portion, an optical system, and a first store portion. The tray auxiliary portion includes a second store portion that is integrated with the first store portion of the tray main body to form a space capable of storing the optical disc mounted on the mounted portion. | 2010-06-17 |
20100153980 | Optical-Means Driving Device - In a conventional optical-means driving device mounting a plurality of objective lenses, the lens holder is provided with introduction holes into which inner yokes are inserted, which has made it difficult to enhance stiffness and to increase the secondary resonance. | 2010-06-17 |
20100153981 | DISK DRIVE - A disk drive is disclosed. The disk drive can include a spindle motor, which can rotate a disk; an encoder, which can detect the rotational speed of the disk; an encoder holder, which secures the encoder; and a base plate, which supports the spindle motor, and in which a holder indentation is formed for inserting the encoder holder in. Certain embodiments of the invention allow easy height adjustments for the encoder holder, so that the encoder may maintain a particular distance from the disk, and also allow the encoder to be fitted onto the base plate, even in cases where the base plate has a limited mounting area. | 2010-06-17 |
20100153982 | METHODS AND APPARATUS FOR MEDIA SOURCE IDENTIFICATION AND TIME SHIFTED MEDIA CONSUMPTION MEASUREMENTS - Methods and apparatus for media source identification and time shifted media consumption measurements are disclosed. A disclosed method identifies a time shift associated with one of a plurality of media sources local to a media delivery device by generating a library of first signature information local to the media delivery device, wherein the library of first signature information includes records, each of which contains a time stamp, a signature value and a source identifier associated with a respective one of the plurality of media sources, generating second signature information based on media presented via the media delivery device, generating a collection of matching signature information based on the first and second signature information, and performing a time shift analysis on the collection of matching signature information to identify the time shift associated with the one of the plurality of media sources local to the media delivery device. | 2010-06-17 |
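The time-shift analysis outlined in this abstract can be sketched as: match presented signatures against the reference library, group the timestamp differences of matched pairs by source, and take the typical difference for the best-matching source as the shift. All data structures and names here are assumptions for illustration.

```python
# Sketch of signature matching and time-shift identification.

def identify_time_shift(library, presented):
    """library: list of (timestamp, signature, source_id) tuples;
    presented: list of (timestamp, signature) tuples.
    Returns (source_id, shift_seconds) or None if nothing matches."""
    by_sig = {sig: (ts, src) for ts, sig, src in library}
    diffs = {}
    for ts, sig in presented:
        if sig in by_sig:
            ref_ts, src = by_sig[sig]
            diffs.setdefault(src, []).append(ts - ref_ts)
    if not diffs:
        return None
    # Pick the source with the most matches; use the median shift to
    # tolerate a few spurious signature collisions.
    src = max(diffs, key=lambda s: len(diffs[s]))
    shifts = sorted(diffs[src])
    return src, shifts[len(shifts) // 2]
```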
20100153983 | AUTOMATED PRESENCE FOR SET TOP BOXES - Exemplary automated presence detection systems comprise set top box components equipped with a Bluetooth receiver, or another receiver configured to receive data from a personal identification device via a wireless and automatic radio frequency standard. In various embodiments, the Bluetooth receiver may be integrated into the set top box or may operate as an adjunct to an existing set top box. The wireless radio frequency receiver in the set top box will poll to determine the presence of previously paired personal identification devices. If any such device is present, then the set top box will track and record the presence of a viewer that is associated with the device and correlate the viewer's presence with content displayed on the television or other local content display component. Alternatively, based on detected viewer presence, some embodiments of an automated presence detection system may be configured to push and/or restrict specific content. Further, some embodiments are configured to gather statistical data concerning viewer behavior and/or exposure to displayed content. | 2010-06-17 |
20100153984 | User Feedback Based Highlights of Recorded Programs - A television program is recorded by multiple client devices. After feedback regarding playback of the television program by at least a threshold number of other users has been analyzed, a highlight version of the television program is obtained by a client device. The highlight version of the television program is one or more portions of the television program. A user request to playback the highlight version of the television program is received, and in response to this user request, the highlight version of the television program is played back. | 2010-06-17 |
20100153985 | METHOD AND APPARATUS FOR MANAGING ACCESS PLANS - A system that incorporates teachings of the present disclosure may include, for example, a television having a controller to determine an access plan associated with a mobile communication device that is capable of wirelessly receiving broadcast video content, present one or more options for adjusting the access plan where the one or more options include wireless access for the television to the broadcast video content, and receive a selection of the one or more options, wherein the access plan is adjusted based on the selection. Other embodiments are disclosed. | 2010-06-17 |
20100153986 | INTERACTIVE TELEVISION SYSTEMS WITH CONFLICT MANAGEMENT CAPABILITIES - An interactive television system is provided in which a user can use an interactive television application to establish time-based settings. The user may set television program reminders, advance-order pay-per-view programs, schedule programs for recording, and establish parental controls to prevent television viewing during certain times on certain channels. The interactive television application may be used to support video recorder functions such as personal video recorder functions implemented locally on the user's set-top box or other equipment and network-based video recorder functions implemented using servers at cable television headends and other network locations. The interactive television application may also be used to deliver video-on-demand content. When the user requests that video be delivered, conflicts may arise between the requested video delivery and the previously-established time-based settings. The interactive television application may provide the user with on-screen options that allow the user to select how to resolve these conflicts. | 2010-06-17 |
20100153987 | DATA BROADCAST METHOD - A system for providing requested data sets of broadcast data service transmitted as part of a broadcast signal, including a broadcast headend configured to receive a data request from a receiver, and configured to broadcast requested data sets to the receiver in response to the data request from the receiver, a processor configured to periodically extract all of the requested data sets of the broadcast data service from a broadcast carousel included in the broadcast signal, a memory configured to store all of the requested data sets of the broadcast data service, defining a plurality of digital-audio/video-data-sets including television clips, a first controller configured to allow selection from a list of the plurality of sets of the digital-audio/video-data-sets, and a second controller responsive to a user initiated selection signal to cause the memory to output a user selected one of the plurality of digital-audio/video-data sets selected from the list, wherein the processor converts the digital-audio/video-data of the requested data sets of the broadcast data service into real time audio/video data. | 2010-06-17 |
20100153988 | Unknown - A VOD server refers to advertisement delivery information, and inserts a stream of advertising content in a stream of video content of a main part, based on advertisement inserting position information indicating an inserting position of the advertising content to be inserted into the video content of the main part, for delivery to a client terminal. When making this delivery, at least time information, such as time management information of reproduced output or decoding, to be added to the stream of the video content of the main part and the stream of the advertising content to be delivered to the client terminal, is replaced by time information in accordance with an order of the streams to be delivered to the client terminal. | 2010-06-17 |