Patent application number | Description | Published |
20130024720 | Creation of Highly Available Pseudo-Clone Standby Servers for Rapid Failover Provisioning - Near clones for a set of targeted computing systems are provided by determining a highest common denominator set of components among the computing systems, producing a pseudo-clone configuration definition, and realizing one or more pseudo-clone computing systems as partially configured backups for the targeted computing systems. Upon a planned failover, actual failure, or quarantine action on a targeted computing system, a difference configuration is determined to complete the provisioning of the pseudo-clone system to serve as a replacement system for the failed or quarantined system. Failure predictions can be used to implement the pseudo-clone just prior to an expected first failure of any of the targeted systems. The system can also interface to an on-demand provisioning management system to effect automated workflows to realize pseudo-clones and replacement systems automatically, as needed. | 01-24-2013 |
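The "highest common denominator" and "difference configuration" computations described in this abstract are essentially set intersection and set difference. A minimal Python sketch (all system and component names here are illustrative, not from the patent):

```python
# Hypothetical sketch of the pseudo-clone idea: the common baseline is the
# intersection of the targeted systems' component sets, and the difference
# configuration is whatever the failed system has beyond that baseline.
def pseudo_clone_config(systems):
    """Return the component set shared by every targeted system."""
    common = set(systems[0])
    for components in systems[1:]:
        common &= set(components)
    return common

def difference_config(pseudo_clone, failed_system):
    """Components still needed to turn the pseudo-clone into a replacement."""
    return set(failed_system) - pseudo_clone

targets = [
    {"os", "db", "web", "monitoring"},
    {"os", "db", "cache"},
    {"os", "db", "web"},
]
baseline = pseudo_clone_config(targets)          # common baseline: {"os", "db"}
delta = difference_config(baseline, targets[1])  # what system 1 still needs
```

On failover, only `delta` must be provisioned, which is what makes the partially configured backup fast to complete.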
20130027811 | SYSTEMS AND METHODS FOR PROTECTING A SENSITIVE DEVICE FROM CORROSION - A product according to one embodiment includes a tape having an applicator portion for applying an organic coating to a magnetic head for reducing exposure of the head to oxidation promoting materials; the organic coating on the applicator portion of the tape; and a lubricant on a data portion of the tape. A product according to another embodiment includes a tape having a data portion along a portion of a length thereof, and a cleaning portion along another portion of the length of the tape, the cleaning portion being for removing an organic coating from a magnetic head. A magnetic storage system according to one embodiment includes a magnetic head having a removable organic coating thereon in an amount sufficient for reducing exposure of the head to oxidation promoting materials. | 01-31-2013 |
20130036341 | COLLECTING FAILURE INFORMATION ON ERROR CORRECTION CODE (ECC) PROTECTED DATA - Methods and means of error correction code (ECC) debugging may comprise detecting whether a bit error has occurred; determining which bit or bits were in error; and using the bit error information for debug. The method may further comprise comparing ECC syndromes against one or more ECC syndrome patterns. The method may allow for accumulating bit error information, comparing error bit failures against a pattern, trapping data, counting errors, determining pick/drop information, or stopping the machine for debug. | 02-07-2013 |
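The syndrome-matching step can be illustrated with a Hamming-style code, where a nonzero syndrome identifies a failing bit position; the syndrome table and counters below are assumptions for illustration, not the patented circuit:

```python
# Illustrative only: map known single-bit-error syndromes to bit positions
# and accumulate per-bit error counts for debug, as the abstract describes.
SYNDROME_TO_BIT = {0b001: 0, 0b010: 1, 0b100: 2, 0b011: 3, 0b101: 4, 0b110: 5, 0b111: 6}

def record_bit_error(syndrome, counters):
    """Compare a syndrome against known patterns; bump the matching bit's counter."""
    bit = SYNDROME_TO_BIT.get(syndrome)
    if bit is not None:
        counters[bit] = counters.get(bit, 0) + 1
    return bit

counters = {}
record_bit_error(0b011, counters)  # bit 3 failed
record_bit_error(0b011, counters)  # bit 3 failed again
record_bit_error(0b100, counters)  # bit 2 failed
```

Accumulated counts like these are what would feed the trap/stop-for-debug decisions the abstract mentions.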
20130037885 | SEMICONDUCTOR-ON-INSULATOR (SOI) STRUCTURES INCLUDING GRADIENT NITRIDED BURIED OXIDE (BOX) - A semiconductor-on-insulator structure includes a buried dielectric layer interposed between a base semiconductor substrate and a surface semiconductor layer. The buried dielectric layer comprises an oxide material that includes a nitrogen gradient that peaks at the interface of the buried dielectric layer with at least one of the base semiconductor substrate and surface semiconductor layer. The interface of the buried dielectric layer with the at least one of the base semiconductor substrate and surface semiconductor layer is abrupt, providing a transition in less than about 5 atomic layer thickness, and having less than about 10 angstroms RMS interfacial roughness. A second dielectric layer comprising an oxide dielectric material absent nitrogen may be located interposed between the buried dielectric layer and the surface semiconductor layer. | 02-14-2013 |
20130038998 | PRINTED CIRCUIT BOARD HAVING A NON-PLATED HOLE WITH LIMITED DRILL DEPTH - A printed circuit board having one or more holes that are controllably drilled to extend into the printed circuit board substrate to a predetermined depth intermediate first and second faces. A mechanical locating pin is received into each of the one or more holes to mechanically align a first component for electronically interfacing with the printed circuit board substrate. A second component is installed on the second face directly opposite of the one or more holes such that the second component is in electronic communication with conductive traces or interconnects formed on the second face directly opposite of the hole. | 02-14-2013 |
20130044540 | PROGRAMMING AT LEAST ONE MULTI-LEVEL PHASE CHANGE MEMORY CELL - An apparatus for programming at least one multi-level Phase Change Memory (PCM) cell having a first terminal and a second terminal. A programmable control device controls the PCM cell to have a respective cell state by applying at least one current pulse to the PCM cell, the control device controlling the at least one current pulse by applying a respective first pulse to the first terminal and a respective second pulse to the second terminal of the PCM cell. The respective cell state is defined by a respective resistance level. The control device receives a reference resistance value defining a target resistance level for the cell, and further receives an actual resistance value of said PCM cell such that applying the respective first pulse and said respective second pulse is based on said actual resistance value of the PCM cell and said received reference resistance value. | 02-21-2013 |
20130047235 | Authenticating a rich client from within an existing browser session - A user authenticates to a Web- or cloud-based application from a browser-based client. The browser-based client has an associated rich client. After a session is initiated from the browser-based client (and a credential obtained), the user can discover that the rich client is available and cause it to obtain the credential (or a new one) for use in authenticating the user to the application (using the rich client) automatically, i.e., without additional user input. An application interface provides the user with a display by which the user can configure the rich client authentication operation, such as specifying whether the rich client should be authenticated automatically if it is detected as running, whether and to what extent access to the application by the rich client is to be restricted, if and when access to the application by the rich client is to be revoked, and the like. | 02-21-2013 |
20130055219 | OVERLAY IDENTIFICATION OF DATA PROCESSING TARGET STRUCTURE - A method, system, and computer program product for identifying an overlay of a data processing target structure in a computing environment is provided. At least one of examining a mapping macro for the target structure with a set of valid ranges, comparing the set of valid ranges with the target structure to identify a string of at least one first invalid value and a last invalid value and locate invalid regions of the target structure, and examining executable code associated with the target structure, comparing at least one unchanged module against at least one additional module exhibiting an overlay characteristic to identify the string of the at least one first invalid value and the last invalid value and locate invalid regions of the target structure, is performed. | 02-28-2013 |
20130057634 | INDICATION OF PRINT MEDIA QUALITY TO PRINTER USERS - A printing system for indicating print media quality to printer users includes a printing assembly configured to route print media along a pathway for printing. In an example, the system includes, but is not limited to, a thermal printer having a thermal print head for printing onto paper. The system also includes a light meter configured to detect light reflected from the print media, such as the paper. The light meter also measures a characteristic of the detected light. An indicator is coupled to the light meter, and configured to present a quality level of the print media to a user based on the measured characteristic of the detected light. | 03-07-2013 |
20130061082 | BALANCING POWER CONSUMPTION AND HIGH AVAILABILITY IN AN INFORMATION TECHNOLOGY SYSTEM - A method is disclosed for balancing the requirements of high availability achieved by redundant active components and power saving achieved by less active components. The requirement for high availability can be expressed by the recovery time objective (RTO) which specifies the amount of time it takes to recover from a failure in the system. Based on the configured RTO, the system configures the most appropriate power mode. | 03-07-2013 |
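Configuring "the most appropriate power mode" from a configured RTO amounts to picking the deepest power-saving state whose recovery time still meets the objective. A sketch with purely illustrative mode names and recovery times:

```python
def choose_power_mode(rto_seconds):
    """Pick the deepest power-saving mode whose worst-case recovery
    time still satisfies the configured recovery time objective (RTO)."""
    # (mode, worst-case recovery time in seconds) -- illustrative values only
    modes = [("deep-sleep", 300), ("standby", 60), ("active-idle", 5), ("fully-active", 0)]
    for mode, recovery in modes:
        if recovery <= rto_seconds:
            return mode
    return "fully-active"
```

A tight RTO forces components to stay active (redundant and hot); a relaxed RTO permits deeper power saving, which is the balance the abstract describes.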
20130061227 | PRESERVING CHANGES TO A CONFIGURATION OF A RUNNING VIRTUAL MACHINE - A method is provided for preserving changes to a configuration of a running virtual machine. The method includes reading an initial configuration, starting the virtual machine under application of the initial configuration, modifying the configuration of the virtual machine during runtime, storing the modified configuration of the virtual machine during shutdown, and reading the modified configuration at re-start of the virtual machine and re-starting the virtual machine under application of the modified configuration. | 03-07-2013 |
20130061238 | OPTIMIZING THE DEPLOYMENT OF A WORKLOAD ON A DISTRIBUTED PROCESSING SYSTEM - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes. | 03-07-2013 |
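The node-selection step (attributes meeting specific resource requirements) can be sketched as a simple filter over profiled node attributes; the cluster data and attribute names are hypothetical, and the required "geometry" is reduced here to a plain node count:

```python
def select_nodes(nodes, requirements, count):
    """Return the first `count` nodes whose profiled attributes meet
    every resource requirement, or None if too few nodes qualify."""
    chosen = [name for name, attrs in nodes.items()
              if all(attrs.get(k, 0) >= v for k, v in requirements.items())]
    return chosen[:count] if len(chosen) >= count else None

cluster = {
    "n1": {"cpus": 16, "mem_gb": 64},
    "n2": {"cpus": 4,  "mem_gb": 32},
    "n3": {"cpus": 32, "mem_gb": 128},
}
picked = select_nodes(cluster, {"cpus": 8, "mem_gb": 64}, 2)
```

A real implementation would also encode the arrangement constraint (e.g., torus neighbors) rather than just a count.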
20130062037 | COLD AIR CONTAINMENT SYSTEM IN A DATA CENTRE - A method is provided for containing cold air in a corridor created on the side of a computer cabinet. The method includes providing a base mountable on the computer cabinet, and providing a panel fastened to the base and moveable with respect to the base, the panel being suitable for separating the cold air in the corridor below the panel from the hot air above the panel. | 03-14-2013 |
20130066938 | PERFORMING COLLECTIVE OPERATIONS IN A DISTRIBUTED PROCESSING SYSTEM - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system that includes a plurality of compute nodes and a plurality of tasks, each task is assigned a unique rank, and each compute node is coupled for data communications by at least two different networking topologies. At least one of the two networking topologies is a tiered tree topology having a root task and at least two child tasks and the at least two child tasks are peers of one another in the same tier. Embodiments include for each task, sending at least a portion of data corresponding to the task to all child tasks of the task through the tree topology; and sending at least a portion of the data corresponding to the task to all peers of the task at the same tier in the tree topology through the second topology. | 03-14-2013 |
20130067169 | DYNAMIC CACHE QUEUE ALLOCATION BASED ON DESTINATION AVAILABILITY - An apparatus for controlling operation of a cache includes a first command queue, a second command queue and an input controller configured to receive requests having a first command type and a second command type and to assign a first request having the first command type to the first command queue and a second command having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available. | 03-14-2013 |
20130067194 | TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES - An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits are ignored for the conversion. The first portion of bits are used to validate the address. | 03-14-2013 |
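The two-portion split of the adapter-provided address can be illustrated with a bit-field decomposition; the field widths below are assumptions for illustration, not values from the patent:

```python
def split_address(addr, validate_bits=16, total_bits=64):
    """Split an I/O address: the high-order bits are ignored for translation
    and used only to validate the address; the low-order bits index the
    address translation tables."""
    index_bits = total_bits - validate_bits
    validate_part = addr >> index_bits            # checked for validity
    index_part = addr & ((1 << index_bits) - 1)   # walks the translation tables
    return validate_part, index_part

v, i = split_address((0xABCD << 48) | 0x1234)
```

The `index_part` would then be divided further into per-level table indices for a multi-level walk.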
20130067436 | ENHANCING FUNCTIONAL TESTS COVERAGE USING TRACEABILITY AND STATIC ANALYSIS - A method that may include: building a dependencies graph representing dependencies between code elements of a computer code; associating portions of the computer code with corresponding design specifications or requirements derived from a design specifications document or a requirements document respectively which is associated with the computer code, to yield a design specifications or requirements-code tracing map; and analyzing the design specifications or requirements-code tracing map based on the dependencies graph to yield an ordered list of design specifications or requirements respectively, wherein the order is selected such that functional tests written for the computer code and addressing design specifications or requirements of a higher order, will yield a higher level of functional test coverage of the computer code in terms of design specifications or requirements. | 03-14-2013 |
20130067592 | SYSTEM AND METHOD FOR ROLE BASED ANALYSIS AND ACCESS CONTROL - A system and method for program access control includes, for a typestate, providing typestate properties and assigning a role to the typestate in a program in accordance with the typestate properties. Access to operations is limited for the typestate in the program based on the role assigned to the typestate and an access permission level. | 03-14-2013 |
20130068441 | DATA CENTER COOLING WITH AN AIR-SIDE ECONOMIZER AND LIQUID-COOLED ELECTRONICS RACK(S) - A cooling apparatus and method are provided for cooling an electronics rack. The cooling apparatus includes an air-cooled cooling station, which has a liquid-to-air heat exchanger and ducting for directing a cooling airflow across the heat exchanger. A cooling subsystem is associated with the electronics rack, and includes a liquid-cooled condenser facilitating immersion-cooling of electronic components of the electronics rack, a liquid-cooled structure providing conductive cooling to electronic components of the electronics rack, or an air-to-liquid heat exchanger associated with the rack and cooling airflow passing through the electronics rack. A coolant loop couples the cooling subsystem to the liquid-to-air heat exchanger. In operation, heat is transferred via circulating coolant from the electronics rack, and rejected in the liquid-to-air heat exchanger of the cooling station to the cooling airflow passing across the liquid-to-air heat exchanger. In one embodiment, the cooling airflow is outdoor air. | 03-21-2013 |
20130070712 | MACRO DIVERSITY IN A MOBILE DATA NETWORK WITH EDGE BREAKOUT - Macro diversity is managed at the edge in a mobile data network with edge data breakout with a component in a Mobile Internet Optimization Platform (MIOP) referred to as MIOP@NodeB. A set of NodeBs that are in simultaneous communication with user equipment are defined as an active set. One of the MIOP@NodeBs of the active set is selected as a master, with the remaining MIOP@NodeBs in the active set being designated slaves. During uplink of signaling data, the signaling data is sent from the UE to all the NodeBs in the active set, which send the signaling data to each of their corresponding MIOP@NodeBs. Each slave MIOP@NodeB sends its data to the master MIOP@NodeB, which combines the data from all of them into a best packet. | 03-21-2013 |
20130073752 | LOW LATENCY, HIGH BANDWIDTH DATA COMMUNICATIONS BETWEEN COMPUTE NODES IN A PARALLEL COMPUTER - Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core. | 03-21-2013 |
20130074071 | COPYING SEGMENTS OF A VIRTUAL RESOURCE DEFINITION - Segments of a virtual resource definition are copied from an existing virtual resource to create a new virtual resource definition or modifying an existing one to simplify virtualization management. The virtualization manager divides a virtual resource definition into a number of reusable segments. A user may then select one or more segments and place them into a new or existing virtual resource definition. The user can choose to mix and match segments to quickly create or modify a virtual resource definition such as a virtual server, virtual printer or virtual data storage. Any default information in the new virtual resource or old information in the existing resource is replaced by the information in the copied segment. Any dependencies in the existing virtual resource are resolved with user input to break the dependencies or copy dependent data. | 03-21-2013 |
20130074097 | ENDPOINT-BASED PARALLEL DATA PROCESSING WITH NON-BLOCKING COLLECTIVE INSTRUCTIONS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing by the parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation. | 03-21-2013 |
20130074102 | FLEXIBLE EVENT DATA CONTENT MANAGEMENT FOR RELEVANT EVENT AND ALERT ANALYSIS WITHIN A DISTRIBUTED PROCESSING SYSTEM - Methods, systems, and computer program products for flexible event data content management for relevant event and alert analysis within a distributed processing system are provided. Embodiments include capturing, by an interface connector, an event from a resource of the distributed processing system; inserting, by the interface connector, the event into an event database; receiving from the interface connector, by a notifier, a notification of insertion of the event into the event database; based on the received notification, tracking, by the notifier, the number of events indicated as inserted into the event database; receiving from the notifier, by a monitor, a cumulative notification indicating the number of events that have been inserted into the event database; in response to receiving the cumulative notification, retrieving, by the monitor, from the event database, events inserted into the event database; and processing, by the monitor, the retrieved events. | 03-21-2013 |
20130080409 | DEDUPLICATED DATA PROCESSING CONGESTION CONTROL - Various embodiments for deduplicated data processing congestion control in a computing environment are provided. In one such embodiment, a congestion target setpoint is calculated using one of a proportional constant, an integral constant, and a derivative constant, wherein the congestion target setpoint is a virtual dimension setpoint. A single congestion metric is determined from a sampling of a plurality of combined deduplicated data processing congestion statistics in a number of active deduplicated data processes. A congestion limit is calculated from a comparison of the single congestion metric to the congestion target setpoint, the congestion limit being a manipulated variable. The congestion limit is compared to the number of active deduplicated data processes. If the number of active deduplicated data processes is less than the congestion limit, a new deduplicated data process of the number of active deduplicated data processes is spawned. | 03-28-2013 |
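The controller structure here (setpoint, metric, manipulated variable, spawn decision) follows classical feedback control. A proportional-only sketch, with the gain and all values chosen purely for illustration:

```python
def congestion_limit(metric, setpoint, current_limit, kp=0.5):
    """Proportional-only control sketch: raise the process limit when the
    congestion metric is below the setpoint, lower it when above."""
    error = setpoint - metric
    return max(1, round(current_limit + kp * error))

def maybe_spawn(active, limit):
    """Spawn a new deduplicated data process only while under the limit."""
    return active < limit

limit = congestion_limit(metric=4.0, setpoint=8.0, current_limit=10)
```

The patent's language also allows integral and derivative terms, which would smooth the limit's response to sampled congestion statistics.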
20130080410 | ACTIVE MEMORY EXPANSION IN A DATABASE ENVIRONMENT TO QUERY NEEDED/UNEEDED RESULTS - Techniques are described for estimating and managing memory compression for request processing. Embodiments of the invention may generally include receiving a request for data, determining if the requested data contains any compressed data, and sending the requesting entity only the uncompressed data. A separate embodiment generally includes receiving a request for data, determining if the requested data contains any compressed data, gathering uncompression criteria about the requested data, and using the uncompression criteria to selectively determine what portion of the compressed data to uncompress. | 03-28-2013 |
20130080654 | OVERLOADING PROCESSING UNITS IN A DISTRIBUTED ENVIRONMENT - Techniques are disclosed for overloading, at one or more nodes, an output of data streams containing data tuples. A first plurality of tuples is received via a first data stream and a second plurality of tuples is received via a second data stream. A first value associated with the first data stream and a second value associated with the second data stream are established based on a specified metric. A third plurality of tuples is output based on the first value and the second value, wherein the third plurality of tuples is a subset of the first plurality of tuples and the second plurality of tuples. | 03-28-2013 |
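One plausible reading of "output based on the first value and the second value" is a budgeted merge that favors the stream with the higher metric value. A hypothetical sketch under that assumption (the proportional split is my interpretation, not stated in the abstract):

```python
def overload_streams(stream_a, stream_b, value_a, value_b, budget):
    """Emit a subset of both tuple streams, splitting an output budget
    between them in proportion to their per-stream metric values."""
    total = value_a + value_b
    take_a = round(budget * value_a / total)
    return stream_a[:take_a] + stream_b[:budget - take_a]

out = overload_streams([1, 2, 3, 4], [10, 20, 30, 40], value_a=3.0, value_b=1.0, budget=4)
```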
20130080745 | FINE-GRAINED INSTRUCTION ENABLEMENT AT SUB-FUNCTION GRANULARITY - Fine-grained enablement at sub-function granularity. An instruction encapsulates different sub-functions of a function, in which the sub-functions use different sets of registers of a composite register file, and therefore, different sets of functional units. At least one operand of the instruction specifies which set of registers, and therefore, which set of functional units, is to be used in performing the sub-function. The instruction can perform various functions (e.g., move, load, etc.) and a sub-function of the function specifies the type of function (e.g., move-floating point; move-vector; etc.). | 03-28-2013 |
20130080818 | CONVERSION OF TIMESTAMPS BETWEEN MULTIPLE ENTITIES WITHIN A COMPUTING SYSTEM - A method is described for converting received timestamps to a time-recording standard recognized by the receiving computing system. Embodiments of the invention generally include receiving data from an external device that includes a timestamp. If the received data is the first communication from the external device, a time base is created and used for converting subsequently received timestamps to a recognized standard. Moreover, the system updates the time base if a counter failure at the external device is detected. When the external device transmits subsequent data, the time base is added to the subsequently received timestamps to convert them to a time-recording standard recognized by the computing system. | 03-28-2013 |
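The time-base mechanism is an offset recorded on first contact and re-derived after a counter failure. A minimal sketch with illustrative epoch values:

```python
def make_time_base(local_now, device_timestamp):
    """On first contact (or after a detected counter reset), record the
    offset between local time and the device's counter."""
    return local_now - device_timestamp

def convert(device_timestamp, time_base):
    """Add the time base to shift a device timestamp into the local standard."""
    return device_timestamp + time_base

def needs_rebase(last_ts, new_ts):
    """A backwards jump in the device counter suggests a counter failure."""
    return new_ts < last_ts

base = make_time_base(local_now=1_700_000_000, device_timestamp=120)
t = convert(150, base)
```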
20130081047 | MANAGING A WORKLOAD OF A PLURALITY OF VIRTUAL SERVERS OF A COMPUTING ENVIRONMENT - An integrated hybrid system is provided. The hybrid system includes compute components of different types and architectures that are integrated and managed by a single point of control to provide federation and the presentation of the compute components as a single logical computing platform. | 03-28-2013 |
20130081058 | EXECUTING A START OPERATOR MESSAGE COMMAND - A facility is provided to enable operator message commands from multiple, distinct sources to be provided to a coupling facility of a computing environment for processing. These commands are used, for instance, to perform actions on the coupling facility, and may be received from consoles coupled to the coupling facility, as well as logical partitions or other systems coupled thereto. Responsive to performing the commands, responses are returned to the initiators of the commands. | 03-28-2013 |
20130081240 | METHOD OF MANUFACTURING COMPLIMENTARY METAL-INSULATOR-METAL (MIM) CAPACITORS - A low capacitance density, high voltage MIM capacitor, a high density MIM capacitor, and a method of manufacture are provided. The method includes depositing a plurality of plates and a plurality of dielectric layers interleaved with one another. The method further includes etching a portion of an uppermost plate of the plurality of plates while protecting other portions of the uppermost plate. The protected other portions of the uppermost plate form a top plate of a first metal-insulator-metal (MIM) capacitor and the etching exposes a top plate of a second MIM capacitor. | 04-04-2013 |
20130089089 | NETWORK SWITCHING DOMAINS WITH A VIRTUALIZED CONTROL PLANE - A distributed switching fabric system includes multiple network switches coupled to a cell-based switching fabric by cell-fabric ports. A virtual machine runs on a server connected to a network port of one or more of the network switches that are members of a given switching domain. The virtual machine manages a control plane for the given switching domain. The server receives a protocol control packet from one of the network switches and forwards the received protocol control packet to the virtual machine for processing. | 04-11-2013 |
20130089101 | CREDIT-BASED NETWORK CONGESTION MANAGEMENT - A switching network includes first, second and third switches coupled for communication, such that the first and third switches communicate data traffic via the second switch. The first switch is operable to request transmission credits from the third switch, receive the transmission credits from the third switch and perform transmission of data traffic in reference to the transmission credits. The third switch is operable to receive the request for transmission credits from the first switch, generate the transmission credits and transmit the transmission credits to the first switch via the second switch. The second switch is operable to modify the transmission credits transmitted by the third switch prior to receipt of the transmission credits at the first switch. | 04-11-2013 |
20130091094 | ACCELERATING DATA PROFILING PROCESS - A data profile request is handled by utilizing data in a distributed file system. Tabular data is extracted from a data source and stored in a distributed file system. Each table in the tabular data is split by columns, which are each stored in separate files in a set of physical nodes of the distributed file system. In response to a data profiling request, a master node determines, based on the profiling request, which groups of files are needed to be on a same physical node in order to perform the profiling analysis. The master node creates jobs using physical nodes that contain the requisite files needed for each job. | 04-11-2013 |
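The master node's planning step (grouping column files so each job runs where its files already live) can be sketched as a co-location grouping; the column and node names are hypothetical:

```python
def plan_profiling_jobs(request_columns, column_locations):
    """Group requested columns by the physical node that already holds
    their files, so each profiling job runs with local data."""
    jobs = {}
    for col in request_columns:
        node = column_locations[col]
        jobs.setdefault(node, []).append(col)
    return jobs

locations = {"name": "nodeA", "age": "nodeA", "salary": "nodeB"}
jobs = plan_profiling_jobs(["name", "age", "salary"], locations)
```

Cross-column analyses (e.g., functional dependencies) would additionally require moving files so the involved columns share a node, which is the case the abstract singles out.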
20130091506 | MONITORING PERFORMANCE ON WORKLOAD SCHEDULING SYSTEMS - The present invention relates to the field of enterprise network computing. In particular, it relates to monitoring the workload of a workload scheduler. Information defining a plurality of test jobs of low priority is received. The test jobs have respective launch times, and are launched for execution in a data processing system in accordance with said launch times and said low execution priority. The number of test jobs executed within a pre-defined analysis time range is determined. A performance decrease warning is issued if the number of executed test jobs is lower than a predetermined threshold number. A workload scheduler discards launching of jobs having a low priority when estimating that a volume of jobs submitted with higher priority is sufficient to keep said scheduling system busy. | 04-11-2013 |
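The monitoring check itself is a count-within-window comparison. A minimal sketch with illustrative completion times and threshold:

```python
def check_throughput(completed_times, window_start, window_end, threshold):
    """Count low-priority test jobs that finished inside the analysis
    window; warn if fewer than the threshold completed."""
    done = sum(window_start <= t <= window_end for t in completed_times)
    return ("ok", done) if done >= threshold else ("performance-decrease", done)

status, n = check_throughput([5, 12, 18, 31], window_start=10, window_end=30, threshold=3)
```

Because the scheduler drops low-priority jobs when busy, a shortfall of completed test jobs is itself the signal that the system is saturated.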
20130091868 | THERMOELECTRIC-ENHANCED, VAPOR-CONDENSER FACILITATING IMMERSION-COOLING OF ELECTRONIC COMPONENT(S) - Cooling methods are provided for immersion-cooling one or more electronic components. The cooling method includes: providing a housing at least partially surrounding and forming a fluid-tight compartment about the electronic component(s) and a dielectric fluid disposed within the fluid-tight compartment, with the electronic component(s) immersed within the dielectric fluid; and providing a vapor-condenser, heat sink, and thermal conductive path. The vapor-condenser includes a plurality of thermally conductive condenser fins extending within the fluid-tight compartment, and the heat sink includes a first region and a second region, with the first region of the heat sink being in thermal contact with the vapor-condenser. The thermal conduction path couples the fluid-tight compartment and the second region of the heat sink in thermal contact, and includes a thermoelectric array, which facilitates transfer of heat from the fluid-tight compartment to the second region of the heat sink through the thermal conduction path. | 04-18-2013 |
20130092938 | SILICON BASED MICROCHANNEL COOLING AND ELECTRICAL PACKAGE - A chip package includes: a substrate; a plurality of conductive connections in contact with the silicon carrier; a silicon carrier in a prefabricated shape disposed above the substrate, the silicon carrier including: a plurality of through silicon vias for providing interconnections through the silicon carrier to the chip stack; liquid microchannels for cooling; a liquid coolant flowing through the microchannels; and an interconnect to one or more chip stacks. The chip package further includes a cooling lid disposed above the chip stack providing additional cooling. | 04-18-2013 |
20130096908 | EMPLOYING NATIVE ROUTINES INSTEAD OF EMULATED ROUTINES IN AN APPLICATION BEING EMULATED - Processing within an emulated computing environment is facilitated. Code used to implement system-provided (e.g., standard or frequently used) routines referenced in an application being emulated is native code available for the computing environment, rather than emulated code. Responsive to encountering a reference to a system-provided routine in the application being emulated, the processor is directed to native code, rather than emulated code, even though the application is being emulated. | 04-18-2013 |
20130097300 | ADMINISTERING INCIDENT POOLS FOR EVENT AND ALERT ANALYSIS - Administering incident pools includes creating a pool of incidents, the pool having a predetermined initial period of time; assigning each received incident to the pool; assigning, by the incident analyzer, to each incident a predetermined minimum time for inclusion in a pool; extending for one or more of the incidents the predetermined initial period of time of the pool by a particular period of time assigned to the incident; determining whether conditions have been met to close the pool; and, if conditions have been met to close the pool, determining for each incident in the pool whether the incident has been in the pool for its predetermined minimum time for inclusion in a pool and, if the incident has not been in the pool for its predetermined minimum time, evicting the incident from the closed pool and including the incident in a next pool. | 04-18-2013 |
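The pool-closing rule (keep incidents that met their minimum residence time, carry the rest forward) can be sketched directly; the incident records and times are illustrative:

```python
def close_pool(pool, close_time):
    """Split a closing pool: incidents that have been resident for at
    least their per-incident minimum time stay in the closed pool; the
    rest are evicted and carried into the next pool."""
    kept, next_pool = [], []
    for incident in pool:
        in_pool_for = close_time - incident["arrived"]
        (kept if in_pool_for >= incident["min_time"] else next_pool).append(incident)
    return kept, next_pool

pool = [
    {"id": 1, "arrived": 0, "min_time": 5},   # resident long enough
    {"id": 2, "arrived": 8, "min_time": 5},   # arrived too recently
]
kept, carried = close_pool(pool, close_time=10)
```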
20130097323 | DYNAMIC PROCESSING UNIT RELOCATION IN A MULTI-NODAL ENVIRONMENT BASED ON INCOMING PHYSICAL DATA - A relocation mechanism in a multi-nodal computer environment dynamically routes processing units in a distributed computer system based on incoming physical data into the processing unit. The relocation mechanism makes an initial location decision to place a processing unit onto a node in the distributed computer system. The relocation mechanism monitors physical data flowing into a processing unit or node and dynamically relocates the processing unit to another type of node within the ‘cloud’ of nodes based on the type of physical data or pattern of data flowing into the processing unit. The relocation mechanism may use one or more rules with criteria for different data types observed in the data flow to optimize when to relocate the processing units. | 04-18-2013 |
20130097404 | DATA COMMUNICATIONS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Eager send data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address. | 04-18-2013 |
20130097411 | TRANSFERRING ARCHITECTED STATE BETWEEN CORES - A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads. | 04-18-2013 |
20130097578 | DYNAMICALLY SELECTING SERVICE PROVIDER, COMPUTING SYSTEM, COMPUTER, AND PROGRAM - A service consumer that can use multiple service providers is enabled to dynamically select a service provider that satisfies a service level requested for each processing method to be called at the time of execution of an application. A cloud service directory (CSD) provides an evaluation table indicative of the evaluation of resource information on each cloud service provider (CSP), and each cloud service consumer (CSC) defines service levels requested by itself and items of resource information associated with each processing method as a request table and an association table, respectively. Then, the formats of these tables and the definitions of the service levels are standardized throughout the entire computing system. This enables each CSC to use a distribution table in order to select and use an appropriate CSP for each processing method. | 04-18-2013 |
20130103932 | MULTI-ADDRESSABLE REGISTER FILES AND FORMAT CONVERSIONS ASSOCIATED THEREWITH - A multi-addressable register file is addressed by a plurality of types of instructions, including scalar, vector and vector-scalar extension instructions. It may be determined that data is to be translated from one format to another format. If so determined, a convert machine instruction is executed that obtains a single precision datum in a first representation in a first format from a first register; converts the single precision datum of the first representation in the first format to a converted single precision datum of a second representation in a second format; and places the converted single precision datum in a second register. | 04-25-2013 |
20130117469 | REGISTER ACCESS IN DISTRIBUTED VIRTUAL BRIDGE ENVIRONMENT - Systems and methods to perform a register access are described. A particular method includes receiving a data frame at a bridge element of a plurality of bridge elements in communication with a plurality of server computers. The data frame may include a register access request and may be forwarded from a controlling bridge in communication with the plurality of bridge elements. A register may be accessed and execution of the register access request may be initiated in response to receiving the data frame. | 05-09-2013 |
20130120291 | MOBILE TOUCH-GENERATING DEVICE AS SECURE LOUPE FOR TOUCHSCREEN DEVICES - A mobile touch-generating device includes logic; a touch-generating system, including one or more touch-generating elements, operatively coupled to the logic and configured to generate touch events detectable by a touchscreen, via the elements; a network connectivity device operatively coupled to the logic to establish a secure connection with a server via a telecommunication network and receive data through an established secure connection; and a visualization device connectable to the logic to display contents according to data received through the established secure connection. | 05-16-2013 |
20130132216 | POS INTERFACE (IF) EMULATOR - A point of sale interface (POS IF) emulator system includes a payment data processing-verification device that processes payment information in accordance with POS device information, emulates a POS input operation, and inputs the processed payment information to a POS device through a keyboard interface; the payment information being created from payment information extracted from shopping information and payment information acquired from user registration information, the shopping information being created by a mobile phone by acquiring item information from an item tag for an item and by performing the acquisition one or more times; wherein payment is made at the POS device corresponding to the POS device information. | 05-23-2013 |
20130132658 | Device For Executing Program Instructions and System For Caching Instructions - The system of the present invention includes an instruction fetch unit | 05-23-2013 |
20130133062 | System and Method to Capture and Manage Input Values for Automatic Form Fill - A system for automatically completing fields in online forms, such as login forms and new user registration forms, which employs a Master Cookie File containing sets of records associated with the user, his or her accounts or web sites, and registered values associated with form tags (e.g. username, password, address, email, telephone, etc.). When the user encounters another form, the MCF is automatically searched for matching values and form tags, primarily from the same account or web site, or alternatively from other accounts or sites. A flowing pop-up menu is displayed near the form fields, from which the user can select values to automatically complete the form. Automatic account information updating, value expiration management, mapping of favorite values, and sharing of values are optional, enhanced functions of the invention. | 05-23-2013 |
20130138809 | RELEVANT ALERT DELIVERY IN A DISTRIBUTED PROCESSING SYSTEM - Methods, systems and products are provided for relevant alert delivery, including assigning by an event analyzer each received event to an events pool; determining by the event analyzer in dependence upon event analysis rules and the events assigned to the events pool whether to suppress one or more of the events; identifying by the event analyzer in dependence upon event analysis rules and the events assigned to the events pool one or more alerts; sending by the event analyzer to an alert analyzer all the alerts identified by the event analyzer; assigning by the alert analyzer the identified alerts to an alerts pool; determining by the alert analyzer in dependence upon alert analysis rules and the alerts in the alerts pool whether to suppress any alerts; and transmitting the unsuppressed alerts to one or more components of the distributed processing system. | 05-30-2013 |
20130139133 | INFORMATION PROCESSING APPARATUS AND PROGRAM AND METHOD FOR ADJUSTING INITIAL ARRAY SIZE - An adjustment apparatus includes a storage device, an execution target program, an execution unit, a first API, a second API, a profiler, and a dynamic compiler. The execution unit interprets the program, and calls and executes a function of an API in response to the API description. The first and second APIs are callable by the execution unit to allocate an array of a predetermined size and to extend the array, respectively. The first and second APIs are converted into code to store an array allocation call context of the pre-extension array into a profile information storage area of the allocated array. The profiler profiles access to arrays. The dynamic compiler inline-expands an array allocation call context included in a code part to be dynamically compiled and embeds an array size determined based on context based access information, as an allocation initial size of the array, into the code part. | 05-30-2013 |
20130145170 | CROSS SYSTEM SECURE LOGON - A cross system secure logon in a target system by using a first authentication system and a second authentication system. A correct password may be valid on the first authentication system and the second authentication system. An aspect includes receiving an input password, generating a first hash key by using the first authentication system, and/or generating a second hash key by using the second authentication system, wherein each authentication system uses a system unique non-collision free hash algorithm. Further, in one aspect, comparing the first hash key with a first predefined hash key of the correct password stored in the first authentication system, and/or comparing the second hash key with a second predefined hash key of the correct password stored in the second authentication system. Furthermore, granting access to the target system based on at least one of the comparisons. | 06-06-2013 |
20130145459 | Information Processing Device, Control Method and Program - An information processing device, control method and program that suppress security risks to a minimum. When power is activated, a control component starts by reading a first program from a first memory component and, in observance of the first program, it reads the identification information of an authentication device that is mounted to a mounting component, references a table T, and performs authentication processing for the authentication device, with the condition that the count value correspondingly listed for the identification information of the authentication device be larger than a prescribed value. When authentication processing has succeeded, it starts by reading a second program from a second memory component and, in the event that the authentication device continues to be mounted to the mounting component while executing the second program, decreases the table count value corresponding to the unique identification information of the authentication device. | 06-06-2013 |
20130148662 | MAC LEARNING IN A TRILL NETWORK - A switch of a data network implements both a bridge and a virtual bridge. In response to receipt of a data frame by the switch from an external link, the switch performs a lookup in a data structure using a source media access control (SMAC) address specified by the data frame. The switch determines if the external link is configured in a link aggregation group (LAG) and if the SMAC address is newly learned. In response to a determination that the external link is configured in a LAG and the SMAC address is newly learned, the switch associates the SMAC with the virtual bridge and communicates the association to a plurality of bridges in the data network. | 06-13-2013 |
20130159819 | METHOD, APPARATUS AND DECODER FOR DECODING CYCLIC CODE - A method, apparatus and decoder for decoding cyclic code are proposed. The decoding method comprises: receiving a transmitted cyclic code; calculating the initial syndrome of the cyclic code; by using the initial syndrome and w prestored successive shift operators, calculating respectively w successive shift syndromes in a w-bit window of the cyclic code in parallel; and detecting/locating errors in the cyclic code based on the obtained syndromes. The decoding apparatus corresponds to the above method. And the corresponding decoder is also proposed in this invention. The method, apparatus and decoder according to the invention can process the cyclic code a window width at a time in parallel and thus enhance decoding efficiency. | 06-20-2013 |
20130174180 | FENCING DATA TRANSFERS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints. | 07-04-2013 |
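To illustrate one of the schemes summarized above, the cross system secure logon of application 20130145170 can be sketched in a few lines of Python. This is a minimal sketch only, not the patented implementation: the `AuthSystem` class, the `system_hash` helper, and the use of a salted SHA-256 as a stand-in for each system's "unique non-collision free hash algorithm" are all illustrative assumptions. It shows the core idea that each authentication system stores its own predefined hash of the correct password, the input password is hashed independently by each system, and access is granted based on at least one of the comparisons.

```python
import hashlib


def system_hash(password: str, system_salt: bytes) -> str:
    # Stand-in for a "system unique non-collision free hash algorithm":
    # a SHA-256 whose system-specific salt makes each system's hash distinct.
    return hashlib.sha256(system_salt + password.encode("utf-8")).hexdigest()


class AuthSystem:
    """Hypothetical authentication system holding one predefined hash."""

    def __init__(self, salt: bytes, correct_password: str):
        self.salt = salt
        # Only the hash of the correct password is stored, never the password.
        self.stored_hash = system_hash(correct_password, salt)

    def check(self, input_password: str) -> bool:
        # Hash the input with this system's algorithm and compare.
        return system_hash(input_password, self.salt) == self.stored_hash


def cross_system_logon(input_password: str,
                       first: AuthSystem,
                       second: AuthSystem) -> bool:
    # Grant access to the target system based on at least one comparison,
    # as the abstract allows ("and/or").
    return first.check(input_password) or second.check(input_password)
```

In this sketch the same correct password is registered with both systems, so a valid entry matches on either hash even though the two systems produce different hash keys for it.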