Patent application number | Description | Published |
20120096473 | MEMORY MAXIMIZATION IN A HIGH INPUT/OUTPUT VIRTUAL MACHINE ENVIRONMENT - A computer implemented method is provided, including monitoring the utilization of resources available within a compute node, wherein the resources include an input/output capacity, a processor capacity, and a memory capacity. The method further comprises allocating virtual machines to the compute node to maximize use of a first one of the resources; and then allocating an additional virtual machine to the compute node to increase the utilization of the resources other than the first one of the resources without over-allocating the first one of the resources. In a web server, the input/output capacity may be the resource to be maximized. However, unused memory capacity and/or processor capacity of the compute node may be used more effectively by identifying an additional virtual machine that is memory intensive or processor intensive to be allocated or migrated to the compute node. The additional virtual machine(s) may be identified in new workload requests or from analysis of virtual machines running on other compute nodes accessible over the network. | 04-19-2012 |
20120102190 | INTER-VIRTUAL MACHINE COMMUNICATION - A computer implemented method is provided, including monitoring network traffic among virtual machines that are allocated to a plurality of compute nodes on a network, and identifying first and second virtual machines having inter-virtual machine communication over the network in an amount that is greater than a threshold amount of the network traffic. The method further comprises migrating at least one of the first and second virtual machines so that the first and second virtual machines are allocated to the same compute node and the inter-virtual machine communication between the first and second virtual machines is no longer directed over the network. In one embodiment, each compute node is coupled to an Ethernet link of a network switch, and data is obtained from a management information database of the network switch to determine the amount of network bandwidth that is being utilized for communication between the first and second virtual machines. | 04-26-2012 |
20120266163 | Virtual Machine Migration - Virtual machine migration, including: monitoring, by a management agent, the utilization of a system resource in a computing system; determining, by the management agent, a rate of change in the utilization of the system resource over a predetermined period of time; comparing, by the management agent, the rate of change in the utilization of the system resource over a predetermined period of time to a predetermined maximum allowable rate of change in the utilization of the system resource over the predetermined period of time; and taking, by the management agent, corrective action upon determining that the rate of change in the utilization of the system resource over the predetermined period of time exceeds the predetermined maximum allowable rate of change in the utilization of the system resource over the predetermined period of time. | 10-18-2012 |
20120284398 | INTER-VIRTUAL MACHINE COMMUNICATION - A computer implemented method is provided, including monitoring network traffic among virtual machines allocated to a plurality of compute nodes on a network, and identifying first and second virtual machines having inter-virtual machine communication over the network in an amount that is greater than a threshold amount of the network traffic. The method further comprises migrating at least one of the first and second virtual machines so that the first and second virtual machines are allocated to the same compute node and the inter-virtual machine communication between the first and second virtual machines is no longer directed over the network. In one embodiment, each compute node is coupled to an Ethernet link of a network switch, and data is obtained from a management information database of the network switch to determine the amount of network bandwidth that is being utilized for communication between the first and second virtual machines. | 11-08-2012 |
20140365818 | Method relating to configurable storage device and adaptive storage device array - An array can include a controller and multiple storage devices of a first type. When a storage device of the first type is replaced by a replacement storage device of a second type, and other storage devices of the first type remain in the array, the controller instructs the replacement storage device to configure itself as a storage device of the first type. When the last storage device of the first type in the array is replaced by a replacement storage device of the second type, the controller instructs all the storage devices of the array to configure themselves as storage devices of the second type. | 12-11-2014 |
20140365820 | Configurable storage device and adaptive storage device array - An array can include a controller and multiple storage devices of a first type. When a storage device of the first type is replaced by a replacement storage device of a second type, and other storage devices of the first type remain in the array, the controller instructs the replacement storage device to configure itself as a storage device of the first type. When the last storage device of the first type in the array is replaced by a replacement storage device of the second type, the controller instructs all the storage devices of the array to configure themselves as storage devices of the second type. | 12-11-2014 |
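The placement idea in application 20120096473 above — saturate a node's dominant resource (e.g. I/O for a web server), then add VMs that are intensive in the node's under-used resources without over-allocating the dominant one — can be sketched roughly as follows. The resource names, dictionaries, and selection policy are all illustrative assumptions, not from the filing.

```python
# Sketch (not the filed method): pick a VM that fits the node's remaining
# capacity and best soaks up resources other than the saturated one.

def fits(node_free, vm_demand):
    """True if the VM's demand fits within the node's free capacity."""
    return all(node_free[r] >= vm_demand[r] for r in vm_demand)

def place_complementary_vm(node_free, candidates, saturated="io"):
    """Choose a candidate VM that uses little of the saturated resource
    but much of the node's spare memory/CPU; returns the VM or None."""
    best, best_score = None, 0
    for vm in candidates:
        if not fits(node_free, vm["demand"]):
            continue  # would over-allocate some resource, incl. `saturated`
        # Score by demand on the non-saturated resources only.
        score = sum(v for r, v in vm["demand"].items() if r != saturated)
        if best is None or score > best_score:
            best, best_score = vm, score
    return best
```

In the filing, candidates may come from new workload requests or from analysis of VMs running on other compute nodes; here they are just dictionaries.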
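The co-location rule in applications 20120102190 and 20120284398 — when two VMs exchange more than a threshold share of the network traffic, migrate one so both run on the same compute node — reduces to a short loop. This is a minimal sketch; the data structures and the 10% default threshold are invented for illustration.

```python
# Sketch (illustrative, not the filed method): co-locate VM pairs whose
# mutual traffic exceeds a threshold share of total network traffic.

def colocate_chatty_vms(placement, pair_traffic, total_traffic, threshold=0.10):
    """placement: dict vm -> node; pair_traffic: dict (vm_a, vm_b) -> bytes.
    Mutates and returns `placement`."""
    for (vm_a, vm_b), traffic in pair_traffic.items():
        share = traffic / total_traffic if total_traffic else 0.0
        if share > threshold and placement[vm_a] != placement[vm_b]:
            # Move vm_b to vm_a's node; their traffic then stays inside
            # the node instead of crossing the network switch.
            placement[vm_b] = placement[vm_a]
    return placement
```

In the filings, the per-pair bandwidth figures come from the management information database of the network switch each compute node's Ethernet link attaches to; here they are passed in as a plain dictionary.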
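The trigger in application 20120266163 compares the rate of change in a resource's utilization over a fixed window against a maximum allowable rate. A bare-bones sketch, with placeholder numbers and no actual corrective action:

```python
# Sketch (illustrative): flag when utilization changes faster than a
# predetermined maximum rate over a predetermined period.

def rate_of_change(samples, period_s):
    """samples: utilization (as a fraction) at window start and end."""
    return (samples[-1] - samples[0]) / period_s

def should_take_corrective_action(samples, period_s, max_rate_per_s):
    """True when the measured rate exceeds the allowed maximum; the filing's
    corrective action (e.g. migration by the management agent) goes here."""
    return rate_of_change(samples, period_s) > max_rate_per_s
```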
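The array rule in applications 20140365818/20140365820 is a two-case decision: a second-type replacement emulates the first type while any first-type device remains, and once the last first-type device is replaced, every device in the array is told to reconfigure as the second type. A minimal sketch, with a hypothetical `Device` class standing in for the controller's view of a storage device:

```python
# Sketch (names hypothetical): controller-side reconfiguration rule for a
# mixed array of first-type and configurable second-type storage devices.

class Device:
    def __init__(self, native_type):
        self.native_type = native_type      # hardware the device actually is
        self.configured_type = native_type  # type it currently presents

    def configure_as(self, t):
        self.configured_type = t

def replace_device(array, index, replacement):
    """Swap in `replacement` at `index` and apply the two-case rule."""
    array[index] = replacement
    if any(d.native_type == "first" for d in array):
        # First-type devices remain: replacement presents as first type.
        replacement.configure_as("first")
    else:
        # Last first-type device is gone: whole array switches over.
        for d in array:
            d.configure_as("second")
    return array
```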
Patent application number | Description | Published |
20120213066 | Optimizing A Physical Data Communications Topology Between A Plurality Of Computing Nodes - Methods, apparatus, and products are disclosed for optimizing a physical data communications topology between a plurality of computing nodes, the physical data communications topology including physical links configured to connect the plurality of nodes for data communications, that include carrying out repeatedly at a predetermined pace: detecting network packets transmitted through the links between each pair of nodes in the physical data communications topology, each network packet characterized by one or more packet attributes; assigning, to each network packet, a packet weight in dependence upon the packet attributes for that network packet; determining, for each pair of nodes in the physical data communications topology, a node pair traffic weight in dependence upon the packet weights assigned to the network packets transferred between that pair of nodes; and reconfiguring the physical links between each pair of nodes in dependence upon the node pair traffic weights. | 08-23-2012 |
20130153187 | Dual Heat Sinks For Distributing A Thermal Load - Dual heat sinks, apparatuses, and methods for installing a dual heat sink for distributing a thermal load are provided. Embodiments include a top base to couple with a first integrated circuit of a first board and to receive a first thermal load from the first integrated circuit; a bottom base to couple with a second integrated circuit of a second board and to receive a second thermal load from the second integrated circuit; and a thermal dissipating structure coupled between the top base and the bottom base, the thermal dissipating structure to receive and distribute the first thermal load and the second thermal load from the top base and the bottom base; wherein a height of the thermal dissipating structure is adjustable so as to change a distance separating the top base and the bottom base. | 06-20-2013 |
20130312257 | CONNECTING AN ELECTRONIC COMPONENT TO A PRINTED CIRCUIT BOARD - An electronic component is connected to a circuit board by forming a connector pin on the electronic component, the connector pin having a proximate end secured to the electronic component, a distal end with a fork lock, and a compliant portion between the proximate and distal ends. A multi-width through-hole is formed on a circuit board having a circuit board thickness greater than a length of the connector pin, with a first portion that is narrower than each of the compliant portion and the fork lock and extends partially through the circuit board and a second portion that extends beyond the first portion and is wider than the first portion. The connector pin is inserted into the first portion of the through-hole and the fork lock is moved beyond the first portion into the second portion of the through-hole. | 11-28-2013 |
20130316551 | UNIVERSAL PRESS-FIT CONNECTION FOR PRINTED CIRCUIT BOARDS - A universal press-fit connection allows a component having a connector pin to be connected to a compatible plated through hole of a circuit board regardless of circuit board thickness. The connector pin includes a proximate end adjacent the component, a distal end with a fork lock, and a compliant portion between the proximate and distal ends. A multi-width through hole includes a first portion partially extending through the circuit board and a second, wider portion extending beyond the first portion. The fork lock initially moves radially inward upon insertion into the first portion via flexing of the compliant portion, and re-expands when entering the second portion. The compliant portion engages the through hole and the fork lock secures the connector pin in the through hole. | 11-28-2013 |
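The weighting step in application 20120213066 above assigns each packet a weight from its attributes and sums those weights per node pair; the physical links are then reconfigured in dependence on the pair weights. A rough sketch — the attribute-to-weight policy shown (size, doubled for high priority) is an invented example, not the filing's:

```python
# Sketch (illustrative policy): per-packet weights aggregated into
# node-pair traffic weights for topology reconfiguration decisions.

def packet_weight(packet):
    """Weight a packet in dependence upon its attributes."""
    weight = packet["size"]
    if packet.get("priority") == "high":
        weight *= 2  # e.g. latency-sensitive traffic counts double
    return weight

def node_pair_weights(packets):
    """packets: iterable of dicts with 'src', 'dst', 'size', 'priority'.
    Returns dict (node_a, node_b) -> summed packet weight."""
    weights = {}
    for p in packets:
        pair = tuple(sorted((p["src"], p["dst"])))  # direction-agnostic
        weights[pair] = weights.get(pair, 0) + packet_weight(p)
    return weights  # reconfiguration would favor links for heavy pairs
```

Per the abstract, this whole cycle (detect, weigh, aggregate, reconfigure) repeats at a predetermined pace.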
Patent application number | Description | Published |
20090083472 | DESIGN STRUCTURE FOR A MEMORY SWITCHING DATA PROCESSING SYSTEM - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a memory switching data processing system is provided. The memory switching data processing system includes one or more central processing units (‘CPUs’); random access memory organized in at least two banks of memory modules; one or more memory buses providing communications paths for data among the CPUs and the memory modules; and a flexibly configurable memory bus switch comprising a first configuration adapting the first CPU to a first bank of memory modules and a second CPU to a second bank of memory modules and a second configuration adapting the first CPU to both the first bank of memory modules and the second bank of memory modules. | 03-26-2009 |
20090083529 | Memory Switching Data Processing System - A memory switching data processing system including one or more central processing units (‘CPUs’); random access memory organized in at least two banks of memory modules; one or more memory buses providing communications paths for data among the CPUs and the memory modules; and a flexibly configurable memory bus switch comprising a first configuration adapting the first CPU to a first bank of memory modules and a second CPU to a second bank of memory modules and a second configuration adapting the first CPU to both the first bank of memory modules and the second bank of memory modules. | 03-26-2009 |
20090133010 | VIRTUALIZED BLADE FLASH WITH MANAGEMENT MODULE - The invention is directed to providing a virtualized blade flash with a management module in a blade server. A method of configuring a blade server according to an embodiment of the invention includes: providing a plurality of blades, wherein each blade comprises: a service processor; a chip set; and at least one central processing unit (CPU); providing a management module in communication with each of the plurality of blades; and adding a virtual flash store at the management module. | 05-21-2009 |
20090219835 | Optimizing A Physical Data Communications Topology Between A Plurality Of Computing Nodes - Methods, apparatus, and products are disclosed for optimizing a physical data communications topology between a plurality of computing nodes, the physical data communications topology including physical links configured to connect the plurality of nodes for data communications, that include carrying out repeatedly at a predetermined pace: detecting network packets transmitted through the links between each pair of nodes in the physical data communications topology, each network packet characterized by one or more packet attributes; assigning, to each network packet, a packet weight in dependence upon the packet attributes for that network packet; determining, for each pair of nodes in the physical data communications topology, a node pair traffic weight in dependence upon the packet weights assigned to the network packets transferred between that pair of nodes; and reconfiguring the physical links between each pair of nodes in dependence upon the node pair traffic weights. | 09-03-2009 |
20090281761 | Detecting An Increase In Thermal Resistance Of A Heat Sink In A Computer System - Methods, apparatus, and products for detecting an increase in thermal resistance of a heat sink in a computer system, the heat sink dissipating heat for a component of the computer system, the computer system including a fan controlling airflow across the heat sink, the computer system also including a temperature monitoring device, including: measuring, by a monitoring module through use of the temperature monitoring device during operation of the computer system, thermal resistance of the heat sink; determining whether the measured thermal resistance of the heat sink is greater than a threshold thermal resistance, the threshold thermal resistance stored in a thermal profile in non-volatile memory, and if the measured thermal resistance of the heat sink is greater than the threshold thermal resistance, notifying a system administrator. | 11-12-2009 |
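The check in application 20090281761 above measures the heat sink's thermal resistance during operation and compares it to a threshold stored in a thermal profile. Thermal resistance is conventionally the temperature rise per watt dissipated, R = (T_component − T_ambient) / P; a minimal sketch using that standard definition (the function and message format are assumptions):

```python
# Sketch (illustrative): thermal-resistance measurement and threshold check
# for a heat sink, as an administrator-alert trigger.

def thermal_resistance(t_component_c, t_ambient_c, power_w):
    """Degrees Celsius of temperature rise per watt dissipated (C/W)."""
    return (t_component_c - t_ambient_c) / power_w

def check_heat_sink(t_component_c, t_ambient_c, power_w, threshold_c_per_w):
    """Return an alert string when measured resistance exceeds the
    threshold from the thermal profile; None otherwise."""
    measured = thermal_resistance(t_component_c, t_ambient_c, power_w)
    if measured > threshold_c_per_w:
        return f"ALERT: thermal resistance {measured:.2f} C/W exceeds threshold"
    return None
```

A rising measured resistance at constant power typically indicates degraded heat transfer, e.g. dust loading or a failing thermal interface, which is what makes it a useful proxy for notifying an administrator.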
Patent application number | Description | Published |
20090016019 | AIRFLOW CONTROL AND DUST REMOVAL FOR ELECTRONIC SYSTEMS - Airflow control and dust removal systems and methods are disclosed. In one embodiment, a plurality of blade servers is mounted in a chassis. A blower generates airflow through the chassis. Air enters the chassis uniformly across the blade servers and flows in parallel through the servers. An airflow directing mechanism is provided for allowing airflow through a selected one of the blade servers while reducing or closing airflow to the other blade servers, to individually clean and remove dust from the selected blade server. The airflow directing mechanism may include a movable vane actuated by a rotary or linear solenoid to selectively block airflow ports of the servers. The vane may be held in a closed position, assisted by an electromagnet. The airflow directing mechanism may alternatively comprise a rolled shade having a pattern of openings. The position of the rolled shade may be controlled to align openings in the shade with airflow ports in the servers, to control which servers airflow may pass through. | 01-15-2009 |
20090021270 | CAPACITIVE DETECTION OF DUST ACCUMULATION IN A HEAT SINK - A system and method for electronically detecting the accumulation of dust within a computer system using a capacitive dust sensor. The dust detection system may be implemented on a smaller computer, such as an individual PC, or in a more expansive system, such as a rack-based server system (“rack system”) having multiple servers and other hardware devices. In one embodiment, each server in a rack system includes a capacitive sensor responsive to the accumulation of dust. The capacitive sensor may include one or more capacitive plates integral with a heatsink. As dust collects on the capacitive plates, the capacitance increases. When a capacitance setpoint is reached, indicating the dust has reached a critical level, an alert is generated. The alerts may be received by a management console for the attention of a system administrator. Each alert may contain the identity of the server generating the alert, so that the system administrator knows which server(s) are to be removed for cleaning. | 01-22-2009 |
20090045967 | CAPACITIVE DETECTION OF DUST ACCUMULATION USING MICROCONTROLLER COMPONENT LEADS - A system and method are used for electronically detecting the accumulation of dust within a computer system using a capacitive dust sensor. The dust detection system may be implemented on a smaller computer, such as an individual PC, or in a more expansive system, such as a rack-based server system (“rack system”) having multiple servers and other hardware devices. In one embodiment, each server in a rack system includes a capacitive sensor responsive to the accumulation of dust. The capacitive sensor may include one or more capacitive plates integral with a heatsink. As dust collects on the capacitive plates, the capacitance increases. When a capacitance setpoint is reached, indicating the dust has reached a critical level, an alert is generated. The alerts may be received by a management console for the attention of a system administrator. Each alert may contain the identity of the server generating the alert, so that the system administrator knows which server(s) are to be removed for cleaning. | 02-19-2009 |
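The alerting path in applications 20090021270/20090045967 above follows from the physics stated in the abstracts: capacitance on plates integral to the heatsink rises as dust collects, and crossing a setpoint raises an alert carrying the server's identity so the administrator knows which server to pull for cleaning. A sketch of that management-console side, with simulated sensor readings and invented names and units:

```python
# Sketch (illustrative): setpoint check over capacitive dust-sensor
# readings, producing per-server alerts for a management console.

def check_dust(server_id, capacitance_pf, setpoint_pf):
    """Return an alert dict when measured capacitance reaches the setpoint,
    identifying which server needs cleaning; None otherwise."""
    if capacitance_pf >= setpoint_pf:
        return {"server": server_id, "capacitance_pf": capacitance_pf}
    return None

def poll_rack(readings, setpoint_pf):
    """readings: dict server_id -> capacitance (pF).
    Collect alerts across the rack for the management console."""
    alerts = []
    for server_id, capacitance in readings.items():
        alert = check_dust(server_id, capacitance, setpoint_pf)
        if alert is not None:
            alerts.append(alert)
    return alerts
```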