28th week of 2011 patent application highlights part 59 |
Patent application number | Title | Published |
20110173339 | NETWORK SERVICE ACCESS METHOD AND ACCESS GATEWAY EQUIPMENT - The present invention includes a network service access method. In one embodiment, such a method comprises: forwarding domain name resolution requests to the local domain name server of each Internet service provider providing services, through the access link corresponding to that Internet service provider; receiving, from the corresponding access link, the Internet Protocol addresses that the local domain name server of each Internet service provider returned for the domain name resolution requests; selecting an Internet Protocol address according to the line state of the access link of each Internet service provider providing services and returning the selected Internet Protocol address to the user equipment; and visiting network services via the access link of the Internet service provider that returned the selected Internet Protocol address. | 2011-07-14 |
20110173340 | COMPUTERIZED, COPY DETECTION AND DISCRIMINATION APPARATUS AND METHOD - An engine identifying segments or portions of one source material or source file common to or found in another source material or file. The engine may receive a first data stream in binary form as well as a second stream in binary form. The engine may include a data stream processor or pre-processor programmed to translate the first and second data streams to generate respective first and second processed data streams. The commonality between the first and second processed data streams may be greater than the commonality between the first and second data streams themselves. Also, a comparator may be programmed to compare the first and second processed data streams and identify binary segments found in both the first and second processed data streams. | 2011-07-14 |
20110173341 | REALTIME MEDIA DISTRIBUTION IN A P2P NETWORK - Nodes in a realtime p2p media distribution network can act in the role of ‘Amplifiers’ to increase the total available bandwidth in the network and thus to improve the quality of the realtime media consumed by the viewers. Examples of such media consumption are TV channels over the Internet, video on demand films, and media files downloaded to be consumed at a later time. Amplifiers are added to the p2p swarm by a mechanism that discovers the need for supplemental bandwidth in the swarm and orders nodes to join the swarm in the role of amplifiers. The amplifiers' main goal is to maximize the amount of bandwidth they supply (upload) to the swarm while minimizing the amount of bandwidth they consume (download). | 2011-07-14 |
20110173342 | METHOD AND APPARATUS FOR RATE LIMITING - A method and apparatus are provided for a network monitor internals mechanism, which serves to translate packet data into multiple concurrent streams of encoded network event data, to contribute to enterprise management, reporting, and global mechanisms for aggregating monitors at a centralized aggregation point, and to facilitate rate limiting techniques, because such monitors are not in control of the flow (i.e., cannot apply back pressure). | 2011-07-14 |
20110173343 | ZONE ROUTING IN A TORUS NETWORK - A system for routing data in a network comprising a network logic device at a sending node for determining a path between the sending node and a receiving node. The network logic device sets one or more selection bits and one or more hint bits within the data packet. A control register stores one or more masks; the network logic device uses the one or more selection bits to select a mask from the control register and applies the selected mask to the hint bits to restrict routing of the data packet to one or more routing directions within the network. The network logic device then selects one of the restricted routing directions and sends the data packet along a link in the selected routing direction toward the receiving node. | 2011-07-14 |
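The mask-and-hint-bit selection in the entry above can be sketched in a few lines. This is an illustrative model only: the direction count (six, for a 3-D torus), the mask values, and the arbitration rule are assumptions, not taken from the application.

```python
# Hypothetical sketch of mask-restricted hint-bit routing. All names,
# bit widths, and mask contents are invented for illustration.

ZONE_MASKS = {  # "control register": selection bits -> mask over hint bits
    0b00: 0b111111,  # no restriction: all six torus directions allowed
    0b01: 0b000011,  # restrict routing to the X dimension (+x, -x)
    0b10: 0b001111,  # restrict routing to the X and Y dimensions
    0b11: 0b110000,  # restrict routing to the Z dimension
}

def allowed_directions(selection_bits: int, hint_bits: int) -> list[int]:
    """Apply the mask chosen by the selection bits to the hint bits and
    return the indices of the directions the packet may still take."""
    mask = ZONE_MASKS[selection_bits]
    restricted = hint_bits & mask
    return [d for d in range(6) if restricted & (1 << d)]

def route(selection_bits: int, hint_bits: int) -> int:
    """Pick one of the restricted directions (lowest index first, as a
    stand-in for whatever arbitration the hardware actually uses)."""
    dirs = allowed_directions(selection_bits, hint_bits)
    if not dirs:
        raise ValueError("no routable direction after masking")
    return dirs[0]
```

Note how the mask only ever removes candidate directions: a hint bit cleared by the mask can never be selected, which is what confines the packet to a routing zone.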
20110173344 | SYSTEM AND METHOD OF REDUCING INTRANET TRAFFIC ON BOTTLENECK LINKS IN A TELECOMMUNICATIONS NETWORK - A system, method and node for masquerading remote hosts at the remote end of the bottleneck link without breaking layer 2 transparency, using a cache mechanism. A local edge node stores specified objects of the remote host. Upon request of an initiator host, the edge node sends the stored object to the initiator host without requiring the transfer of the object from the remote host. The present invention also provides for the election of a Local Master Browser (LMB): one local LMB is elected for spreading information for each LAN segment, rather than using one global LMB node for the entire LAN. | 2011-07-14 |
20110173345 | Method and system for HTTP-based stream delivery - A method of delivering a live stream is implemented within a content delivery network (CDN) and includes the high level functions of recording the stream using a recording tier, and playing the stream using a player tier. The step of recording the stream includes a set of sub-steps that begins when the stream is received at a CDN entry point in a source format. The stream is then converted into an intermediate format (IF), which is an internal format for delivering the stream within the CDN and comprises a stream manifest, a set of one or more fragment indexes (FI), and a set of IF fragments. The player process begins when a requesting client is associated with a CDN HTTP proxy. In response to receipt at the HTTP proxy of a request for the stream or a portion thereof, the HTTP proxy retrieves (either from the archive or the data store) the stream manifest and at least one fragment index. Using the fragment index, the IF fragments are retrieved to the HTTP proxy, converted to a target format, and then served in response to the client request. The source format may be the same or different from the target format. Preferably, all fragments are accessed, cached and served by the HTTP proxy via HTTP. In another embodiment, a method of delivering a stream on-demand (VOD) uses a translation tier (in lieu of the recording tier) to manage the creation and/or handling of the IF components. | 2011-07-14 |
20110173346 | ADAPTIVE METHOD AND DEVICE FOR CONVERTING MESSAGES BETWEEN DIFFERENT DATA FORMATS - A computer-implemented method for converting messages between different data formats in a network for electronic data interchange (EDI), comprises: receiving ( | 2011-07-14 |
20110173347 | METHOD FOR SYNCHRONIZING LOCAL CLOCKS IN A DISTRIBUTED COMPUTER NETWORK - The invention relates to a method for synchronizing local clocks in a distributed computer network, where said computer network consists of a number of end systems and at least two switches. Each end system is connected to at least two switches via bi-directional communication links. A configured subset of end systems and switches executes the method in form of a synchronization state machine. The state machine uses at least three different frame types. The states in the state machine are either said to belong to an unsynchronized set of states or belong to a synchronized set of states. All end systems that are configured as Synchronization Master periodically send coldstart frames in one of the unsynchronized states and react to the reception of a coldstart frame by sending a coldstart acknowledgment frame a configurable first timeout after the reception of the coldstart frame on all replicated communication channels, provided that the end system is in a state in which the synchronization state machine defines a transition for coldstart frames, and where said first timeout is reset when a consecutive coldstart frame is received before the coldstart acknowledge is sent. All end systems that are configured as Synchronization Master react to the reception of a coldstart acknowledgment frame by starting a configurable second timeout, provided that they are not already executing said first timeout, and entering a synchronized state when said second timeout expires. | 2011-07-14 |
20110173348 | DEVICE AND METHOD FOR RETRIEVING INFORMATION FROM A DEVICE - The present invention concerns a gateway, and a method at a gateway for retrieving information from a device without requiring any configuration or installation at the device. To this end the invention relates to a method in a gateway device comprising an interface to a first network, an interface to a second network and a local web server. The method comprises the steps of: intercepting a request from a first device detected on the first network to a web server located on the second network; sending a webpage located on the local web server to the device, the webpage comprising means for retrieving information from that device when the webpage is loaded by the device; and receiving the information retrieved from said device by the webpage. | 2011-07-14 |
20110173349 | I/O ROUTING IN A MULTIDIMENSIONAL TORUS NETWORK - A method, system and computer program product are disclosed for routing data packet in a computing system comprising a multidimensional torus compute node network including a multitude of compute nodes, and an I/O node network including a plurality of I/O nodes. In one embodiment, the method comprises assigning to each of the data packets a destination address identifying one of the compute nodes; providing each of the data packets with a toio value; routing the data packets through the compute node network to the destination addresses of the data packets; and when each of the data packets reaches the destination address assigned to said each data packet, routing said each data packet to one of the I/O nodes if the toio value of said each data packet is a specified value. In one embodiment, each of the data packets is also provided with an ioreturn value used to route the data packets through the compute node network. | 2011-07-14 |
20110173350 | USING A STORAGE CONTROLLER TO DETERMINE THE CAUSE OF DEGRADED I/O PERFORMANCE - A method for identifying the cause of degraded I/O performance between a host system and a storage controller includes initially monitoring I/O performance between the host system and the storage controller. The method further detects degraded I/O performance between the host system and the storage controller using any suitable technique. Once degraded I/O performance is detected, the method determines the cause of the degraded I/O performance by analyzing historical configuration records in the storage controller. These historical configuration records enable the storage controller to correlate the degraded I/O performance with configuration changes in the storage controller and/or the connected host systems. The method then notifies one or more host systems of the cause of the degraded I/O performance. A corresponding apparatus and computer program product are also disclosed herein. | 2011-07-14 |
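The correlation step described in the entry above — matching a detected performance drop against historical configuration records — can be sketched roughly as follows. The record format, the time window, and the function names are invented for the example; the abstract does not specify them.

```python
# Illustrative sketch: given the time a degradation was detected, return
# the configuration changes recorded shortly before it, newest first.

from dataclasses import dataclass

@dataclass
class ConfigRecord:
    timestamp: float      # when the change was applied (seconds)
    description: str      # what changed on the controller or host

def find_likely_cause(degraded_at, history, window=60.0):
    """Return config changes applied within `window` seconds before the
    degradation was detected, most recent change first."""
    candidates = [r for r in history
                  if degraded_at - window <= r.timestamp <= degraded_at]
    return sorted(candidates, key=lambda r: r.timestamp, reverse=True)
```

A notification step would then report the returned descriptions to the affected host systems.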
20110173351 | EXTENSIONS FOR USB DRIVER INTERFACE FUNCTIONS - Extensions for USB driver interface functions are described. In embodiments, input/output of computer instructions and data exchange is managed in a USB core driver stack. A set of USB driver interfaces are exposed by the USB core driver stack, and the USB driver interfaces include USB driver interface functions that interface with USB client function drivers that correspond to client USB devices. Extensions for the USB driver interface functions are also exposed by the USB core driver stack to interface with the USB client function drivers. | 2011-07-14 |
20110173352 | Power Reduction on Idle Communication Lanes - A method for communication includes establishing a full-duplex communication link between first and second nodes. The link includes multiple first lanes for conveying first communication traffic in a first link direction and multiple second lanes for conveying second communication traffic in a second link direction. Signals are exchanged between the first and second nodes to indicate a requested change in lane activity in the first link direction. Responsively to the signals, a number of the first lanes that are active is changed so that the first node conveys the first communication traffic to the second node over a first number of the first lanes, while the second node conveys the second communication traffic to the first node over a second number of the second lanes, which is different from the first number. | 2011-07-14 |
20110173353 | Virtualizing A Host USB Adapter - Virtualizing a host USB adapter in a virtualized environment maintained by a hypervisor, the hypervisor administering one or more logical partitions, where virtualizing includes receiving, by the hypervisor from a logical partition via a logical USB adapter, a USB Input/Output (‘I/O’) request, the logical USB adapter associated with a USB device coupled to the host USB adapter; placing, by the hypervisor, a work queue element (‘WQE’) in a queue of a queue pair associated with the logical USB adapter; and administering, by an interface device in dependence upon the WQE, USB data communications among the logical partition and the USB device including retrieving, with direct memory access (‘DMA’), USB data originating at the USB device from the host USB adapter into a dedicated memory region for the logical USB adapter. | 2011-07-14 |
20110173354 | Hardware Based Connection State Machine With Built In Timers - The present invention provides a hardware implemented connection monitoring system. A timer array establishes an input timer and an output timer for each connection between the processor and each I/O connection. A state machine periodically steps through the timer array to update the accumulated values of the timers and to monitor if any of the timers has reached a preset, timer done value. If a timer reaches the timer done value, the state machine loads the timer status into an event buffer and generates an interrupt for the processor. The processor reads the event buffer, identifies whether the expired timer was an input timer or an output timer, and takes action accordingly. | 2011-07-14 |
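A toy software model of the timer-array sweep in the entry above: one pass of the "state machine" updates every timer and reports the ones that reached their done value, mimicking the event buffer that would raise an interrupt in hardware. The (connection, direction) keying and the re-arm-after-report behavior are assumptions for illustration.

```python
# Software stand-in for the hardware connection-timer array.

class TimerArray:
    def __init__(self):
        self.timers = {}  # (conn_id, direction) -> [accumulated, done_value]

    def arm(self, conn_id, direction, done_value):
        self.timers[(conn_id, direction)] = [0, done_value]

    def reset(self, conn_id, direction):
        """Traffic on the connection would reset its timer."""
        if (conn_id, direction) in self.timers:
            self.timers[(conn_id, direction)][0] = 0

    def step(self):
        """One sweep of the state machine: update every timer's
        accumulated value and return event-buffer entries for timers
        that just reached their done value."""
        events = []
        for key, state in self.timers.items():
            state[0] += 1
            if state[0] >= state[1]:
                events.append(key)   # would raise an interrupt in hardware
                state[0] = 0         # re-arm after reporting (assumed)
        return events
```

The processor-side handler would read the returned keys to tell an expired input timer from an expired output timer and act accordingly.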
20110173355 | Method for setting and controlling hot key area of keyboard via KVM switch - The present invention is to provide a method for setting and controlling a hot key area of a keyboard via a keyboard-video-mouse (KVM) switch electrically connected to the keyboard, a mouse, a monitor and a plurality of servers and provided therein with a flag and a hot key lookup table. When the KVM switch receives an instruction command for activating a direct hot key (DHK) state from the keyboard, the KVM switch sets the flag to an activated state for entering into the DHK state, and then sets a numeric key area and/or a function key area of the keyboard as a hot key area. Thus, when the KVM switch receives a management command matching with the hot key lookup table, the KVM switch executes a server switching procedure corresponding to the management command, thereby switching to a specified server and displaying a corresponding server image on the monitor. | 2011-07-14 |
20110173356 | EXCLUSIVE ACCESS DURING A CRITICAL SUB-OPERATION TO ENABLE SIMULTANEOUS OPERATIONS - A method, apparatus, and system of exclusive access during a critical sub-operation to enable simultaneous operations are disclosed. In one embodiment, a method of a host device includes identifying a critical sub-operation of an operation associated with a storage system, applying a lock associated with the critical sub-operation based on a type of the sub-operation, providing exclusive access of the critical sub-operation to a first instance requiring the critical sub-operation, denying other instances access to the critical sub-operation during an interval comprising a period when the first instance executes the critical sub-operation, and releasing the lock when the critical sub-operation is no longer required by the first instance. The first instance and the other instances may originate on different host devices. | 2011-07-14 |
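The per-sub-operation locking in the entry above can be sketched as a small lock manager: the first instance to request a critical sub-operation is granted exclusive access, and other instances are denied until the lock is released. Since the abstract says instances may originate on different host devices, a real implementation would need a distributed lock service; this single-process model is purely illustrative.

```python
# Minimal sketch of exclusive access keyed by sub-operation type.

class SubOpLockManager:
    def __init__(self):
        self.holders = {}  # sub-op type -> instance id currently holding it

    def acquire(self, subop_type, instance_id):
        """Grant exclusive access if the critical sub-operation is free;
        deny other instances while it is held."""
        if subop_type in self.holders:
            return False
        self.holders[subop_type] = instance_id
        return True

    def release(self, subop_type, instance_id):
        """Release the lock when the sub-operation is no longer needed."""
        if self.holders.get(subop_type) == instance_id:
            del self.holders[subop_type]
```

Keying the lock on the sub-operation type (rather than the whole operation) is what lets non-conflicting operations proceed simultaneously.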
20110173357 | ARBITRATION IN CROSSBAR INTERCONNECT FOR LOW LATENCY - A system, method and computer program product for reducing the latency of signals communicated through a crossbar switch. The method includes using, at slave arbitration logic devices associated with Slave devices for which access is requested from one or more Master devices, two or more priority vector signals, cycling among their use every clock cycle to select one of the requesting Master devices and updating the respective priority vector signal used every clock cycle. Similarly, each Master for which access is requested from one or more Slave devices can have two or more priority vectors and can cycle among their use every clock cycle to further reduce latency and increase throughput performance via the crossbar. | 2011-07-14 |
20110173358 | EAGER PROTOCOL ON A CACHE PIPELINE DATAFLOW - A master device sends a request to communicate with a slave device to a switch. The master device waits for a period of cycles the switch takes to decide whether the master device can communicate with the slave device, and the master device sends data associated with the request to communicate at least after the period of cycles has passed since the master device sent the request to communicate to the switch without waiting to receive an acknowledgment from the switch that the master device can communicate with the slave device. | 2011-07-14 |
20110173359 | COMPUTER-IMPLEMENTED METHOD AND SYSTEM FOR SECURITY EVENT TRANSPORT USING A MESSAGE BUS - A computer-implemented device provides security events from publishers to subscribers. There is provided a message bus, configured to contain a plurality of security events. Also provided is a receiver unit, responsive to a plurality of publishers, to receive the plurality of security events from the publishers. There is also a queue unit, responsive to receipt of the security events, to queue the plurality of security events in the message bus. Also, there is a transport unit, responsive to the security events in the message bus, to transport the plurality of security events in the message bus to a plurality of subscribers. | 2011-07-14 |
20110173360 | SYSTEM AND METHOD OF MONITORING A CENTRAL PROCESSING UNIT IN REAL TIME - A method of monitoring one or more central processing units in real time is disclosed. The method may include monitoring state data associated with the one or more CPUs in real-time, filtering the state data, and at least partially based on filtered state data, selectively altering one or more system settings. | 2011-07-14 |
20110173361 | INFORMATION PROCESSING APPARATUS AND EXCEPTION CONTROL CIRCUIT - An information processing apparatus performs switching between an exception handler and normal processing. The information processing apparatus includes a processor; a data processing unit that performs particular processing upon receiving a processing request from the processor; an interrupt controller that issues an interrupt request to the processor; and an exception control unit that controls the interrupt controller, wherein the data processing unit is connected with the exception control unit via a dedicated line. The data processing unit includes a notification unit that notifies, via the dedicated line, the exception control unit of status information indicating current status of the data processing unit, and based on the notified status information and setup information set by the processor, the exception control unit judges whether to cause the interrupt controller to issue an interrupt request to execute an exception handler to the processor. | 2011-07-14 |
20110173362 | HARDWARE VIRTUALIZATION FOR MEDIA PROCESSING - Methods and systems for implementing virtual processors are disclosed. For example, in an embodiment a processing apparatus configured to act as a plurality of virtual processors includes a first virtual program space that includes a first program execution memory, the first program execution memory including code to run a non-real-time operating system capable of supporting a one or more non-real-time applications, a second virtual program space that includes a second program execution memory, the second program execution memory including code to run one or more real-time processes, and a central processing unit (CPU) configured to operate in a first operating mode and a second operating mode, the CPU being configured to perform operating system and application activities using the first virtual program space for the first operating mode without using the second virtual program space and without appreciably interfering with the one or more real-time processes that are running in the second operating mode. | 2011-07-14 |
20110173363 | PROCESSOR SYSTEM WITH AN APPLICATION AND A MAINTENANCE FUNCTION - A processor system with an application and a maintenance function that would interfere with the application if concurrently executed. The processor system comprises a set of processor cores operable in different security and context-related modes, said processors having at least one interrupt input and at least one wait for interrupt output. The processor system also comprises a wait for interrupt expansion circuit responsive to the at least one wait for interrupt output to provide an interrupt signal, at least one of said processor cores operable in response to the interrupt signal to schedule a maintenance function separated in time from execution of the application. | 2011-07-14 |
20110173364 | ELECTRONIC DEVICE - An electronic device includes a first opening disposed in a casing; a power source connector disposed opposing the first opening and to which a detachable power supply plug that supplies power from a power supply unit is attached; a second opening disposed in the casing; and a support member by which at least one interface connector among various types of interface connectors for communication with an external apparatus can be attached at the second opening, where the support member covers and hides the power source connector if among the various types of interface connectors, an interface connector having a power supply terminal is attached. | 2011-07-14 |
20110173365 | ROTARY DISPLAY STAGE - A rotary display stage includes a display ground, an integrated circuit device, and a data transmission port. The display ground includes a base, and a rotary table arranged on the base. The base has a sound-effect switch structure and a track switch structure. The integrated circuit device is disposed inside the base, and includes an amplifier structure and a storage media. The data transmission port is arranged on the display ground and communicated with an exterior device optionally. The sound-effect switch structure, the track switch structure and the data transmission port electrically connect the integrated circuit device. Therefore, various audio data, except the tracks saved in the storage media previously, could optionally be played on via the data transmission port. | 2011-07-14 |
20110173366 | DISTRIBUTED TRACE USING CENTRAL PERFORMANCE COUNTER MEMORY - A plurality of processing cores and a central storage unit having at least a memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit. | 2011-07-14 |
20110173367 | PCI EXPRESS ENHANCEMENTS AND EXTENSIONS - A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access/ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states and the setting thereof are included to allow for more efficient power management. And, caching of device local memory in a host address space, as well as caching of system memory in a device local memory address space, is provided for to improve bandwidth and latency for memory accesses. | 2011-07-14 |
20110173368 | BUS TRANSLATOR - Disclosed are methods and devices, among which is a device including a bus translator. In some embodiments, the device also includes a core module and a core bus coupled to the core module. The bus translator may be coupled to the core module via the core bus, and the bus translator may be configured to translate between signals from a selected one of a plurality of different types of buses and signals on the core bus. | 2011-07-14 |
20110173369 | MEMORY MANAGEMENT USING PACKET SEGMENTING AND FORWARDING - Systems, devices and methods according to these exemplary embodiments provide for memory management techniques and systems for storing data. Data is segmented for storage in memory. According to one exemplary embodiment, each fragment is routed via a different memory bank and forwarded until they reach a destination memory bank wherein the fragments are reassembled for storage. According to another exemplary embodiment, data is segmented and stored serially in memory banks. | 2011-07-14 |
20110173370 | Relocating Page Tables And Data Amongst Memory Modules In A Virtualized Environment - Relocating data in a virtualized environment maintained by a hypervisor administering access to memory with a Cache Page Table (‘CPT’) and a Physical Page Table (‘PPT’), the CPT and PPT including virtual to physical mappings. Relocating data includes converting the virtual to physical mappings of the CPT to virtual to logical mappings; establishing a Logical Memory Block (‘LMB’) relocation tracker that includes logical addresses of an LMB, source physical addresses of the LMB, target physical addresses of the LMB, a translation block indicator for each relocation granule, and a pin count associated with each relocation granule; establishing a PPT entry tracker including PPT entries corresponding to the LMB to be relocated; relocating the LMB in a number of relocation granules including blocking translations to the relocation granules during relocation; and removing the logical addresses from the LMB relocation tracker. | 2011-07-14 |
20110173371 | WRITING TO ASYMMETRIC MEMORY - A memory controller writes to a virtual address associated with data residing within an asymmetric memory component of main memory that is within a computer system and that has a symmetric memory component, while preserving proximate other data residing within the asymmetric memory component. The symmetric memory component within the main memory of the computer system is configured to enable random access write operations in which an address within a block of the symmetric memory component is written without affecting the availability of other addresses within the block of the symmetric memory component during the writing of that address. The asymmetric memory component is configured to enable block write operations in which writing to an address within a region of the asymmetric memory component affects the availability of other addresses within the region of the asymmetric memory component during the block write operations involving the address. | 2011-07-14 |
20110173372 | METHOD AND APPARATUS FOR INCREASING FILE COPY PERFORMANCE ON SOLID STATE MASS STORAGE DEVICES - A mass storage device and method that utilize storage memory and a shadow memory capable of increasing the speed associated with copying data from one location to another location within the storage memory without the need to access a host computer for the copy transaction. A controller of the mass storage device receives a file copy request for a file to be copied between first and second locations within the storage memory. Data from the first location within the storage memory is then loaded into a shadow memory means of the mass storage device, and then the data is written from the shadow memory means to the second location within the storage memory. | 2011-07-14 |
20110173373 | NON-VOLATILE MEMORY DEVICE AND METHOD THEREFOR - A method of storing information at a non-volatile memory includes storing a first status bit at a sector header of the memory prior to erasing a sector at the memory. A second status bit is stored after erasing of the sector. Because the erasure of the sector is interleaved with the storage of the status bits, a brownout or other corrupting event during erasure of the record will likely result in a failure to store the second status bit. Therefore, the first and second status bits can be compared to determine if the data was properly erased at the non-volatile memory. Further, multiple status bits can be employed to indicate the status of other memory sectors, so that a difference in the status bits for a particular sector can indicate a brownout or other corrupting event. | 2011-07-14 |
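The interleaved status-bit check in the entry above can be modeled simply: write the first status bit, erase, then write the second; on recovery, a sector whose first bit is set but whose second is not was likely interrupted mid-erase (e.g. by a brownout). The sector header is reduced to a dict and the bit names are invented for illustration.

```python
# Toy model of the two-status-bit erase-integrity scheme.

def begin_erase(header, sector_id):
    """First status bit: recorded in the sector header before erasing."""
    header[sector_id] = {"erase_started": True, "erase_done": False}

def finish_erase(header, sector_id):
    """Second status bit: recorded only after the erase completes."""
    header[sector_id]["erase_done"] = True

def erase_was_interrupted(header, sector_id):
    """Compare the two status bits: started-but-not-done indicates a
    brownout or other corrupting event during erasure."""
    bits = header.get(sector_id)
    return bool(bits and bits["erase_started"] and not bits["erase_done"])
```

The key property is ordering: because the second bit is written strictly after the erase, its absence is reliable evidence that the erase did not run to completion.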
20110173374 | SOLID-STATE MEMORY MANAGEMENT - An exemplary method includes performing flash memory operations; receiving a signal from a voltage monitor as being associated with the performed flash memory operations; and, based at least in part on the received signal, setting a limit for performing subsequent flash memory operations. In such a method, the limit can act to avoid resetting flash memory responsive to current demand associated with subsequent flash memory operations. Various other apparatuses, systems, methods, etc., are also disclosed. | 2011-07-14 |
20110173375 | METHOD FOR ENHANCING FILE SYSTEM PERFORMANCE, AND ASSOCIATED MEMORY DEVICE AND CONTROLLER THEREOF - A method for enhancing file system performance includes: in a situation where operations of visiting a file system of a memory device according to a plurality of file names are performed, regarding each of the file names, extracting a characteristic value and full file name location information from file information that is first read, and temporarily storing the characteristic value and the full file name location information; and when visiting the file system according to a target file name, checking whether any of temporarily stored characteristic values matches the target file name, and determining accordingly whether to perform a file system operation corresponding to the target file name. An associated memory device and the controller thereof are further provided. | 2011-07-14 |
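The characteristic-value shortcut in the entry above might look roughly like this. The abstract does not specify the characteristic function, so a simple byte checksum stands in for it here; the class and method names are likewise invented.

```python
# Illustrative model: cache a cheap per-filename characteristic value
# plus the location of the full file name, and only fetch the full name
# when the characteristic matches the target.

def characteristic(name: str) -> int:
    """Assumed stand-in for the real characteristic function."""
    return sum(name.encode()) & 0xFF

class NameCache:
    def __init__(self):
        self.entries = []  # (characteristic value, full-name location)

    def remember(self, name, location):
        """Extract and temporarily store the characteristic value and
        full file name location from file info that is first read."""
        self.entries.append((characteristic(name), location))

    def candidate_locations(self, target):
        """Locations worth reading the full file name from; an empty
        result means the expensive full-name comparison is skipped."""
        c = characteristic(target)
        return [loc for cv, loc in self.entries if cv == c]
```

Because the checksum can collide, a match only narrows the search; the full file name at each candidate location must still be compared before acting on it.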
20110173376 | CACHE APPARATUS FOR INCREASING DATA ACCESSING SPEED OF STORAGE DEVICE - A cache apparatus for increasing data accessing speed of a storage device includes: a non-volatile memory, for storing data; a memory controller, coupled to the non-volatile memory, for controlling data accessing operations of the non-volatile memory; a first transmission interface, coupled to the memory controller, for electrically connecting the memory controller to the storage device; and a second transmission interface, coupled to the memory controller, for electrically connecting the memory controller to a user-end personal computer. | 2011-07-14 |
20110173377 | Secure portable data storage device - A portable memory device for use with a host device includes an array of non-volatile memory and a memory controller for performing memory access operations. A processor issues an authorization challenge to a host device prior to enabling external access to the memory. Upon receipt of a valid authorization from the host device, access is enabled. In one embodiment, the processor preconditions at least one signal in the interface between the host device and the memory controller. The preconditioning results in a desynchronization of synchronized signals applied at the memory device interface, thereby interfering with proper operation of the memory device. Attempts to access the memory device prior to authorization lead to intentional corruption of data stored in the memory. In an alternative embodiment, a secure, machine-readable digital storage device is implemented as a boot device for a host machine. When booted off of the secure device's operating system, the host machine's access to secure files on the secure digital storage device is restricted. When the secure digital storage device is accessed by a host device not booted off of the secure device's operating system, the secured files on the secure digital storage device are inaccessible. | 2011-07-14 |
20110173378 | COMPUTER SYSTEM WITH BACKUP FUNCTION AND METHOD THEREFOR - A solid-state mass storage device and method of anticipating a failure of the mass storage device resulting from a memory device of the mass storage device reaching a write endurance limit. A procedure is then initiated to back up data to a second mass storage device prior to failure. The method includes assigning at least a first memory block of the memory device as a wear indicator, using other memory blocks of the memory device as data blocks for data storage, performing program/erase (P/E) cycles and wear leveling on the data blocks, subjecting the wear indicator to more P/E cycles than the data blocks, performing integrity checks and monitoring the bit error rate of the wear indicator, and taking corrective action if the bit error rate increases, including the initiation of the backup procedure and generating a request to replace the device. | 2011-07-14 |
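The wear-indicator scheme above can be sketched as a toy model: one block is deliberately cycled harder than the data blocks so its bit error rate rises first and predicts the failure of the rest. The class name, the error-rate model, and the threshold below are all invented for illustration; they are not from the patent:

```python
class WearIndicator:
    """Toy model: the indicator block receives extra P/E cycles so it ages
    ahead of the data blocks; a rising bit error rate triggers the backup."""

    def __init__(self, extra_factor=2, ber_threshold=1e-3):
        self.extra_factor = extra_factor      # how much harder the indicator is cycled
        self.ber_threshold = ber_threshold    # invented corrective-action threshold
        self.data_cycles = 0
        self.indicator_cycles = 0

    def program_erase(self, n=1):
        """Apply n P/E cycles to the data blocks; the indicator gets more."""
        self.data_cycles += n
        self.indicator_cycles += n * self.extra_factor

    def bit_error_rate(self):
        # Invented model: BER grows linearly once cycles pass a knee at 1000.
        return max(0.0, (self.indicator_cycles - 1000) * 1e-6)

    def backup_needed(self):
        """True once the monitored indicator BER crosses the threshold."""
        return self.bit_error_rate() >= self.ber_threshold
```

Because the indicator always leads the data blocks in wear, `backup_needed()` goes true while the data blocks still have endurance margin left for the backup to complete.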
20110173379 | SEMICONDUCTOR DEVICE WITH DOUBLE PROGRAM PROHIBITION CONTROL - The present invention provides a semiconductor device and a method for controlling the semiconductor device, the semiconductor device including memory regions; program prohibition information units storing program prohibition information to be used for determining whether to prohibit or allow programming in the memory regions corresponding to the program prohibition information units; a first prohibition information control circuit that prohibits a change of the program prohibition information from a program prohibiting state with respect to a memory region based on first prohibition information; and a second prohibition information control circuit that prohibits a change of the program prohibition information from a program allowing state to a program prohibiting state with respect to the corresponding memory region based on second prohibition information with respect to the corresponding memory region. | 2011-07-14 |
20110173380 | MEMORY SYSTEM AND METHOD OF CONTROLLING MEMORY SYSTEM - After system startup, a first log indicating that the system is running is recorded in a second storage unit before a first difference log is recorded there; at the time of a normal system halt, a second log indicating that the system halts is recorded in the second storage unit following the difference log. At system startup, it is judged, based on the recorded state of the first and second logs in the second storage unit, whether a normal system halt or an incorrect power-off sequence was performed last time, thereby detecting an incorrect power-off easily and reliably. | 2011-07-14 |
20110173381 | SYSTEM AND APPARATUS FOR FLASH MEMORY DATA MANAGEMENT - The system and apparatus for managing flash memory data includes a host transmitting data, wherein when the data transmitted from the host have a first time transmission trait and the address for the data indicates a temporary address, temporary data are retrieved from the temporary address to an external buffer. A writing command is then executed and the temporary data having a destination address are written to a flash memory buffer. When the flash memory buffer is not full, the buffer data are written into a temporary block of the flash memory. The writing of buffer data into the temporary block includes using an address changing command, or executing a writing command to rewrite the external buffer data to the flash memory buffer so that the data are written into the temporary block. | 2011-07-14 |
20110173382 | NAND INTERFACE - A NAND interface having a reduced pin count configuration, in which all command and address functions and operations of the NAND are provided serially on a single serial command and address pin. | 2011-07-14 |
20110173383 | METHODS OF OPERATING A MEMORY SYSTEM - Methods of operating a memory system are useful in facilitating access to data. Where repetitive data patterns are detected among portions of received data, and an indication is provided, a portion of the data may be stored and/or subsequently retrieved without having to store and/or retrieve, respectively, all portions of the data. | 2011-07-14 |
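The idea of skipping storage of repetitive portions can be sketched simply: when a chunk of incoming data is a repetition of a single byte, record a short marker instead of the chunk. The function names and the chunk size are illustrative assumptions, not details from the patent:

```python
def compress_repeats(data: bytes, chunk=4):
    """Scan fixed-size chunks; replace an all-same-byte chunk with a
    ('rep', byte, length) marker instead of storing the bytes themselves."""
    out = []
    for i in range(0, len(data), chunk):
        part = data[i:i + chunk]
        if len(set(part)) == 1:
            out.append(("rep", part[0], len(part)))   # repetitive pattern detected
        else:
            out.append(("raw", part))                  # store as-is
    return out

def expand(records):
    """Reconstruct the original data from the stored records."""
    buf = bytearray()
    for rec in records:
        if rec[0] == "rep":
            buf.extend(bytes([rec[1]]) * rec[2])
        else:
            buf.extend(rec[1])
    return bytes(buf)
```

On retrieval, only the marker needs to be read back for a repetitive chunk — the expansion regenerates the bytes rather than fetching them.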
20110173384 | Internet-Safe Computer - The present invention eliminates the possibility of problems with viruses, worms, identity theft, and other hazards that may result from the connection of a computer to the Internet. It does so by creating a new configuration of components within the computer. In addition to commonly used components, two new components are added. These are a secondary hard drive and a secondary random access memory. When the computer is connected to the Internet these secondary components are used in place of their primary counterparts. The primary hard drive is electronically isolated from the Internet, thus preventing Internet contamination of the primary hard drive. | 2011-07-14 |
20110173385 | Methods And Apparatus For Demand-Based Memory Mirroring - A method includes determining an amount of memory space in a memory device available for memory mirroring. The method further includes presenting the available memory space to an operating system. The method further includes selecting at least a portion of the amount of memory space to be used for memory mirroring with the operating system. The method further includes adding a non-selected portion of the available memory to memory space available to the operating system during operation. An associated system and machine readable medium are also disclosed. | 2011-07-14 |
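The partitioning arithmetic in this abstract is simple enough to sketch: each mirrored unit of memory consumes a second unit for its copy, and whatever is not selected for mirroring is returned to the operating system as ordinary memory. The function name and units below are invented for illustration:

```python
def plan_mirroring(available_mb: int, mirrored_mb: int):
    """Split the memory available for mirroring: the selected portion is
    mirrored (each mirrored MB consumes a second MB for its copy); the
    non-selected remainder is added back to the OS's usable memory."""
    if 2 * mirrored_mb > available_mb:
        raise ValueError("not enough space to mirror that much")
    return {
        "mirrored": mirrored_mb,
        "mirror_copy": mirrored_mb,                       # space held by the copy
        "extra_os_memory": available_mb - 2 * mirrored_mb  # returned to the OS
    }
```

For example, if 1024 MB is available and the OS selects 256 MB for mirroring, 512 MB (the unselected remainder) becomes ordinary memory again.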
20110173386 | TERNARY CONTENT ADDRESSABLE MEMORY EMBEDDED IN A CENTRAL PROCESSING UNIT - An arithmetic logic unit ( | 2011-07-14 |
20110173387 | STORAGE SYSTEM HAVING FUNCTION OF PERFORMING FORMATTING OR SHREDDING - It is desired to reduce the danger of leakage of data stored in a logical storage device. A storage system has a detection unit and a security processing unit. The detection unit detects a system change, during which it is not possible to perform I/O for a first logical storage device, among a plurality of logical storage devices in the storage system. And the security processing unit takes this type of system change as the opportunity for performing security processing, i.e. formatting or shredding, upon the first logical storage device. | 2011-07-14 |
20110173388 | FIBER CHANNEL CONNECTION STORAGE CONTROLLER - A storage system adapted to be coupled to a plurality of host devices via a fibre channel. The storage system including a plurality of storage devices, at least a portion of the plurality of storage devices corresponding to a logical unit of a plurality of logical units, the logical unit having a logical unit number (LUN). The storage system also including a storage control device having a cache memory and controlling to store data, addressed to the LUN, into the portion of the plurality of storage devices. The storage system also including an input device being adapted to be used to set information, which is used to prevent an unauthorized access to the logical unit and which corresponds to a relationship between a host device of the plurality of host devices and the logical unit. | 2011-07-14 |
20110173389 | METHODS AND DEVICES FOR TREATING AND/OR PROCESSING DATA - At the inputs and/or outputs, memories are assigned to a reconfigurable module to achieve decoupling of internal data processing and in particular decoupling of the reconfiguration cycles from the external data streams (to/from peripherals, memories, etc.). | 2011-07-14 |
20110173390 | STORAGE MANAGEMENT METHOD AND STORAGE MANAGEMENT SYSTEM - There is provided a storage management system capable of utilizing division management with enhanced flexibility and of enhancing security of the entire system, by providing functions by program products in each division unit of a storage subsystem. The storage management system has a program-product management table stored in a shared memory in the storage subsystem and showing presence or absence of the program products, which provide management functions of respective resources to respective SLPRs. At the time of executing the management functions by the program products in the SLPRs of users in accordance with instructions from the users, the program-product management table is referred to, and execution of management functions having no program product is restricted. | 2011-07-14 |
20110173391 | System and Method to Access a Portion of a Level Two Memory and a Level One Memory - A system and method to access data from a portion of a level two memory or from a level one memory is disclosed. In a particular embodiment, the system includes a level one cache and a level two memory. A first portion of the level two memory is coupled to an input port and is addressable in parallel with the level one cache. | 2011-07-14 |
20110173392 | EVICT ON WRITE, A MANAGEMENT STRATEGY FOR A PREFETCH UNIT AND/OR FIRST LEVEL CACHE IN A MULTIPROCESSOR SYSTEM WITH SPECULATIVE EXECUTION - In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation blind first level cache. | 2011-07-14 |
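The evict-on-write policy described above can be sketched with two dictionaries standing in for the L1 and L2 caches: a store is written through to the L2 and the L1 copy is dropped, so the next access to that address must come from the (version-tracking) L2. The class and its behavior are a simplified illustration, not the patented design:

```python
class EvictOnWriteCache:
    """Sketch: stores write through L1 to L2, then evict the L1 line."""

    def __init__(self):
        self.l1 = {}
        self.l2 = {}

    def read(self, addr):
        """Return (value, level served from); fill L1 on an L2 hit."""
        if addr in self.l1:
            return self.l1[addr], "L1"
        value = self.l2.get(addr)
        if value is not None:
            self.l1[addr] = value   # normal read miss refills L1
        return value, "L2"

    def write(self, addr, value):
        self.l2[addr] = value       # write through to the second level
        self.l1.pop(addr, None)     # evict on write: delete the L1 copy
```

The first read after a write is served from L2, which is exactly the point: during speculation the L1 stays "blind" to the versioned data held in L2.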
20110173393 | CACHE MEMORY, MEMORY SYSTEM, AND CONTROL METHOD THEREFOR - A cache memory according to the present invention includes: a first port for input of a command from the processor; a second port for input of a command from a master other than the processor; a hit determining unit which, when a command is input to said first port or said second port, determines whether or not data corresponding to an address specified by the command is stored in said cache memory; and a first control unit which performs a process for maintaining coherency of the data stored in the cache memory and corresponding to the address specified by the command and data stored in the main memory, and outputs the input command to the main memory as a command output from the master, when the command is input to the second port and said hit determining unit determines that the data is stored in said cache memory. | 2011-07-14 |
20110173394 | ORDERING OF GUARDED AND UNGUARDED STORES FOR NO-SYNC I/O - A parallel computing system processes at least one store instruction. A first processor core issues a store instruction. A first queue, associated with the first processor core, stores the store instruction. A second queue, associated with a first local cache memory device of the first processor core, stores the store instruction. The first processor core updates first data in the first local cache memory device according to the store instruction. A third queue, associated with at least one shared cache memory device, stores the store instruction. The first processor core invalidates second data, associated with the store instruction, in the at least one shared cache memory. The first processor core invalidates third data, associated with the store instruction, in other local cache memory devices of other processor cores. The first processor core flushes only the first queue. | 2011-07-14 |
20110173395 | TEMPERATURE-AWARE BUFFERED CACHING FOR SOLID STATE STORAGE - A system and method for managing a cache includes monitoring a temperature of regions on a secondary storage based on a cumulative cost to access pages from each region of the secondary storage. Similar temperature pages are grouped in logical blocks. Data is written to a cache in a logical block granularity by overwriting cooler blocks with hotter blocks. | 2011-07-14 |
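The region-temperature bookkeeping above can be sketched as a running cost tally per region, with the coolest cached block chosen as the one to overwrite when a hotter block arrives. The class name and cost units are invented for illustration:

```python
class TemperatureTracker:
    """Sketch: a region's 'temperature' is the cumulative cost of accessing
    its pages; the coolest cached block is overwritten by hotter data."""

    def __init__(self):
        self.temp = {}   # region id -> cumulative access cost

    def record_access(self, region, cost):
        """Accumulate the cost paid to access pages from this region."""
        self.temp[region] = self.temp.get(region, 0.0) + cost

    def victim(self, cached_regions):
        """Pick the coolest currently cached region as the overwrite target."""
        return min(cached_regions, key=lambda r: self.temp.get(r, 0.0))
```

Grouping pages of similar temperature into logical blocks (as the abstract describes) would sit on top of this: the tracker supplies the ordering, and writes happen at block granularity.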
20110173396 | Performing High Granularity Prefetch from Remote Memory into a Cache on a Device without Change in Address - Provided is a method, which may be performed on a computer, for prefetching data over an interface. The method may include receiving a first data prefetch request for first data of a first data size stored at a first physical address corresponding to a first virtual address. The first data prefetch request may include second data specifying the first virtual address and third data specifying the first data size. The first virtual address and the first data size may define a first virtual address range. The method may also include converting the first data prefetch request into a first data retrieval request. To convert the first data prefetch request into a first data retrieval request the first virtual address specified by the second data may be translated into the first physical address. The method may further include issuing the first data retrieval request at the interface, receiving the first data at the interface and storing at least a portion of the received first data in a cache. Storing may include setting each of one or more cache tags associated with the at least a portion of the received first data to correspond to the first physical address. | 2011-07-14 |
20110173397 | PROGRAMMABLE STREAM PREFETCH WITH RESOURCE OPTIMIZATION - A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. The engine increments a prefetching depth of a first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed. | 2011-07-14 |
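The stream-table-plus-depth idea can be sketched in a few lines: a sequential access that extends a tracked stream deepens it, and each load prefetches ahead within the current depth. The line size, depth cap, and class name are invented parameters, not values from the patent:

```python
class StreamPrefetcher:
    """Sketch of a stream prefetch table: sequential accesses establish a
    stream; each extension increments its prefetching depth (up to a cap)
    and triggers prefetches ahead of the demand load."""
    LINE = 64        # assumed cache-line size in bytes
    MAX_DEPTH = 4    # assumed cap on prefetching depth

    def __init__(self):
        self.streams = {}       # last cache line of a stream -> current depth
        self.prefetched = set() # addresses issued as prefetches

    def load(self, addr):
        line = addr // self.LINE
        prev = line - 1
        if prev in self.streams:                       # extends a known stream
            depth = min(self.streams.pop(prev) + 1, self.MAX_DEPTH)
        elif line in self.streams:                     # repeat access, keep depth
            depth = self.streams[line]
        else:                                          # new candidate stream
            depth = 1
        self.streams[line] = depth
        for i in range(1, depth + 1):                  # prefetch within the depth
            self.prefetched.add((line + i) * self.LINE)
```

Three sequential loads are enough to see the depth ramp: each extension both advances the stream head and widens the prefetch window.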
20110173398 | TWO DIFFERENT PREFETCHING COMPLEMENTARY ENGINES OPERATING SIMULTANEOUSLY - A prefetch system improves a performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine. The prefetch system operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data that will be needed in subsequent clock cycles in the processor in response to the passed command. | 2011-07-14 |
20110173399 | DISTRIBUTED PARALLEL MESSAGING FOR MULTIPROCESSOR SYSTEMS - A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network. | 2011-07-14 |
20110173400 | BUFFER MEMORY DEVICE, MEMORY SYSTEM, AND DATA TRANSFER METHOD - This invention may be applied to performing a burst write of write data, and increases the efficiency of data transfer to memory. A buffer memory device transfers data between processors and a main memory in response to a memory access request issued by each of the processors. The buffer memory device includes: buffer memories each of which holds write data corresponding to the write request issued by a corresponding processor; a memory access information obtaining unit which obtains memory access information indicating a type of the memory access request; a determining unit which determines whether or not the type indicated by the memory access information obtained by the memory access information obtaining unit meets a predetermined condition; and a control unit which drains, to the main memory, data held in one of the buffer memories which meets the predetermined condition, when determined that the predetermined condition is met. | 2011-07-14 |
20110173401 | PRESENTATION OF A READ-ONLY CLONE LUN TO A HOST DEVICE AS A SNAPSHOT OF A PARENT LUN - A method, apparatus, and system of presentation of a read-only clone Logical Unit Number (LUN) to a host device as a snapshot of a parent LUN are disclosed. In one embodiment, a method includes generating a read-write clone LUN of a parent LUN and coalescing an identical data instance of the read-write clone LUN and the parent LUN in a data block of a volume of a storage system. A block transfer protocol layer is modified to refer the read-write clone LUN as a read-only clone LUN, according to the embodiment. Furthermore, according to the embodiment, the read-only clone LUN is presented to a host device as a snapshot of the parent LUN. | 2011-07-14 |
20110173402 | HARDWARE SUPPORT FOR COLLECTING PERFORMANCE COUNTERS DIRECTLY TO MEMORY - Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element. | 2011-07-14 |
20110173403 | USING DMA FOR COPYING PERFORMANCE COUNTER DATA TO MEMORY - A device for copying performance counter data includes hardware path that connects a direct memory access (DMA) unit to a plurality of hardware performance counters and a memory device. Software prepares an injection packet for the DMA unit to perform copying, while the software can perform other tasks. In one aspect, the software that prepares the injection packet runs on a processing core other than the core that gathers the hardware performance counter data. | 2011-07-14 |
20110173404 | USING THE CHANGE-RECORDING FEATURE FOR POINT-IN-TIME-COPY TECHNOLOGY TO PERFORM MORE EFFECTIVE BACKUPS - A method for using a change-recording feature to perform more effective backups includes generating an initial point-in-time copy of source data residing in a storage device. The method may then perform an initial backup of the initial point-in-time copy. As changes are made to the source data, the method may record changes made to the source data after the initial point-in-time copy is generated. These changes may be stored as incremental change data. At some point, the initial point-in-time copy may be updated using the incremental change data. In order to perform an incremental backup of the updated point-in-time copy, the method may query the incremental change data to determine which changes were used to update the point-in-time copy. The method may then perform an incremental backup of the updated point-in-time copy by backing up the changes designated in the incremental change data. | 2011-07-14 |
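The change-recording flow above reduces an incremental backup to "copy only what the log says changed". A minimal sketch, with an invented class name and dictionaries standing in for blocks on the storage device:

```python
class ChangeRecordingBackup:
    """Sketch of change recording for a point-in-time copy: writes made
    after the snapshot are logged, and the incremental backup copies and
    applies only those blocks."""

    def __init__(self, source):
        self.source = source
        self.snapshot = dict(source)   # initial point-in-time copy
        self.changed = set()           # incremental change data

    def write(self, block, data):
        self.source[block] = data
        self.changed.add(block)        # record the change for later backup

    def incremental_backup(self):
        """Back up only the blocks recorded as changed, then refresh the
        point-in-time copy and clear the change record."""
        delta = {b: self.source[b] for b in self.changed}
        self.snapshot.update(delta)
        self.changed.clear()
        return delta                   # what actually gets sent to backup
```

The payoff is in the return value: the full source is never rescanned, because the change record already identifies the blocks the backup must carry.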
20110173405 | SYSTEM AND METHOD FOR REDUCING LATENCY TIME WITH CLOUD SERVICES - A system and method for reducing service latency includes dividing an information technology service for a customer into an infrastructure management service and a data management service. Data associated with the information technology service is stored in a backup memory. A set of infrastructure images related to the information technology service is stored at a cloud service provider. The infrastructure images are updated with software updates and hardware updates, as needed, and the data associated with the information technology service is updated through backup and restore mechanisms. The set of infrastructure images that have been updated with the latest data are started for recovery, continuity, testing, etc. | 2011-07-14 |
20110173406 | DATA PROCESSING SYSTEM HAVING A PLURALITY OF STORAGE SYSTEMS - It is an object of the present invention to conduct data transfer or data copying between a plurality of storage systems, without affecting the host computer of the storage systems. Two or more auxiliary storage systems | 2011-07-14 |
20110173407 | DATA STORAGE SYSTEM - A data storage system comprising a server computer and a data storage medium. The server computer includes an interface, such as an iSCSI interface, for communicating with a host computer. In response to receiving data from the host computer, the server computer determines whether or not the host computer has access to a virtual data storage device. If the host computer does not have access to a virtual data storage device, the server computer provides a virtual data storage device for access by the host computer, the virtual data storage device employing at least a portion of the data storage medium such that data stored to the virtual data storage device are stored to the portion of the data storage medium. | 2011-07-14 |
20110173408 | Securing non-volatile data in an embedded memory device - The various embodiments of the invention relate generally to semiconductors and memory technology. More specifically, the various embodiments and examples of the invention relate to memory devices, systems, and methods that protect data stored in one or more memory devices from unauthorized access. The memory device may include third dimension memory that is positioned on top of a logic layer that includes active circuitry in communication with the third dimension memory. The third dimension memory may include multiple layers of memory that are vertically stacked upon each other. Each layer of memory may include a plurality of two-terminal memory elements and the two-terminal memory elements can be arranged in a two-terminal cross-point array configuration. At least a portion of one or more of the multiple layers of memory may include an obfuscation layer configured to conceal data stored in one or more of the multiple layers of memory. | 2011-07-14 |
20110173409 | Secure Processing Unit Systems and Methods - A hardware Secure Processing Unit (SPU) is described that can perform both security functions and other information appliance functions using the same set of hardware resources. Because the additional hardware required to support security functions is a relatively small fraction of the overall device hardware, this type of SPU can be competitive with ordinary non-secure CPUs or microcontrollers that perform the same functions. A set of minimal initialization and management hardware and software is added to, e.g., a standard CPU/microcontroller. The additional hardware and/or software creates an SPU environment and performs the functions needed to virtualize the SPU's hardware resources so that they can be shared between security functions and other functions performed by the same CPU. | 2011-07-14 |
20110173410 | EXECUTION OF DATAFLOW JOBS - A method, system and computer program product for storing data in memory. An example system includes at least one multistage application configured to generate intermediate data in a generating stage of the application and consume the intermediate data in a subsequent consuming stage of the application. A runtime profiler is configured to monitor the application's execution and dynamically allocate memory to the application from an in-memory data grid. | 2011-07-14 |
20110173411 | TLB EXCLUSION RANGE - A system and method for accessing memory are provided. The system comprises a lookup buffer for storing one or more page table entries, wherein each of the one or more page table entries comprises at least a virtual page number and a physical page number; a logic circuit for receiving a virtual address from said processor, said logic circuit for matching the virtual address to the virtual page number in one of the page table entries to select the physical page number in the same page table entry, said page table entry having one or more bits set to exclude a memory range from a page. | 2011-07-14 |
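The exclusion-range idea above can be sketched as a TLB entry whose translation refuses to cover part of its own page: offsets inside the excluded window fall through (e.g. to a fault or a slower path). Page size, field names, and the half-open interval convention below are illustrative assumptions:

```python
class TLBEntry:
    """Sketch of a page table entry with an exclusion range: virtual
    addresses whose page offset lands in the excluded window are not
    translated by this entry."""
    PAGE = 4096   # assumed page size

    def __init__(self, vpn, ppn, excl_start=None, excl_end=None):
        self.vpn, self.ppn = vpn, ppn
        self.excl = (excl_start, excl_end) if excl_start is not None else None

    def translate(self, vaddr):
        """Return the physical address, or None if this entry does not
        cover the address (wrong page, or offset inside the exclusion)."""
        if vaddr // self.PAGE != self.vpn:
            return None
        off = vaddr % self.PAGE
        if self.excl and self.excl[0] <= off < self.excl[1]:
            return None   # carved out of this page's mapping
        return self.ppn * self.PAGE + off
```

A `None` result for an in-page offset is what distinguishes this from an ordinary TLB miss: the entry matched, but the excluded range deliberately opts out of the mapping.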
20110173412 | DATA PROCESSING DEVICE AND MEMORY PROTECTION METHOD OF SAME - A memory protection method includes setting a memory area in at least one address setting register; setting a trap type in a trap type setting register corresponding to the address setting register; generating a trap of the trap type set in the trap type setting register in accordance with an access request to the memory area set at the address setting register; setting a size of an inaccessible area in a memory; allocating, in accordance with a memory allocation request from an application, a memory area to the application as an accessible area and an inaccessible area having the inaccessible area size right after the accessible area; setting the inaccessible area in a first address setting register and a first trap type in a first trap type setting register; and generating a memory image of the application and closing the application when a trap of the first trap type occurs. | 2011-07-14 |
20110173413 | EMBEDDING GLOBAL BARRIER AND COLLECTIVE IN A TORUS NETWORK - Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure. | 2011-07-14 |
20110173414 | MAXIMIZED MEMORY THROUGHPUT ON PARALLEL PROCESSING DEVICES - In parallel processing devices, for streaming computations, processing of each data element of the stream may not be computationally intensive and thus processing may take relatively small amounts of time to compute as compared to memory accesses times required to read the stream and write the results. Therefore, memory throughput often limits the performance of the streaming computation. Generally stated, provided are methods for achieving improved, optimized, or ultimately, maximized memory throughput in such memory-throughput-limited streaming computations. Streaming computation performance is maximized by improving the aggregate memory throughput across the plurality of processing elements and threads. High aggregate memory throughput is achieved by balancing processing loads between threads and groups of threads and a hardware memory interface coupled to the parallel processing devices. | 2011-07-14 |
20110173415 | MULTI-CORE SYSTEM AND DATA TRANSFER METHOD - According to one embodiment, each of routers includes: a cache mechanism that stores data transferred to the other routers or processor elements; and a unit that reads out, when an access generated from each of the processor elements is transferred thereto, if target data of the access is stored in the cache mechanism, the data from the cache mechanism and transmits the data to the processor element as a request source. | 2011-07-14 |
20110173416 | DATA PROCESSING DEVICE AND PARALLEL PROCESSING UNIT - A data processing device in which parallel processing elements can efficiently perform processing is provided. A parallel processing module includes plural processing elements, banks A and B provided to correspond to the processing elements and used to store data to be used when the processing elements perform processing, and an I/O bank provided to correspond to the processing elements and used to transfer data to and from an external memory. A first selector circuit selectively couples bank B or the I/O bank to the processing elements. A second selector circuit selectively couples the external memory or the processing elements to the I/O bank. Thus, data can be transferred from the external memory to the I/O bank concurrently with the processing performed by the processing elements. The processing elements can therefore perform processing efficiently. | 2011-07-14 |
20110173417 | Programming Idiom Accelerators - A wake-and-go mechanism may be a programming idiom accelerator. As a processor fetches instructions, the programming idiom accelerator may look ahead to determine whether a programming idiom is coming up in the instruction stream. If the programming idiom accelerator recognizes a programming idiom, the programming idiom accelerator may perform an action to accelerate execution of the programming idiom. In the case of a wake-and-go programming idiom, the programming idiom accelerator may record an entry in a wake-and-go array, for example. | 2011-07-14 |
20110173418 | INSTRUCTION SET EXTENSION USING 3-BYTE ESCAPE OPCODE - A method, apparatus and system are disclosed for decoding an instruction in a variable-length instruction set. The instruction is one of a set of new types of instructions that uses a new escape code value, which is two bytes in length, to indicate that a third opcode byte includes the instruction-specific opcode for a new instruction. The new instructions are defined such that the length of each instruction in the opcode map for one of the new escape opcode values may be determined using the same set of inputs, where each of the inputs is relevant to determining the length of each instruction in the new opcode map. For at least one embodiment, the length of one of the new instructions is determined without evaluating the instruction-specific opcode. | 2011-07-14 |
20110173419 | Look-Ahead Wake-and-Go Engine With Speculative Execution - A wake-and-go mechanism is provided for a microprocessor. The wake-and-go mechanism looks ahead in the instruction stream of a thread for programming idioms that indicate that the thread is waiting for an event. If a look-ahead polling operation succeeds, the look-ahead wake-and-go engine may record an instruction address for the corresponding idiom so that the wake-and-go mechanism may have the thread perform speculative execution at a time when the thread is waiting for an event. During execution, when the wake-and-go mechanism recognizes a programming idiom, the wake-and-go mechanism may store the thread state in the thread state storage. Instead of putting the thread to sleep, the wake-and-go mechanism may perform speculative execution. | 2011-07-14 |
20110173420 | PROCESSOR RESUME UNIT - A system for enhancing performance of a computer includes a computer system having a data storage device. The computer system includes a program stored in the data storage device and steps of the program are executed by a processor. An external unit is external to the processor for monitoring specified computer resources. The external unit is configured to detect a specified condition using the processor. The processor includes one or more threads. The thread resumes an active state from a pause state using the external unit when the specified condition is detected by the external unit. | 2011-07-14 |
20110173421 | MULTI-INPUT AND BINARY REPRODUCIBLE, HIGH BANDWIDTH FLOATING POINT ADDER IN A COLLECTIVE NETWORK - To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers, generating a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating, and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device. | 2011-07-14 |
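The float-to-integer trick above is what makes the sum binary-reproducible: integer addition is exact and order-independent, while float addition is not. A minimal sketch, with the fixed-point scale factor as an assumption (the real hardware aligns operands to a shared exponent):

```python
# Sketch of the reproducible-sum idea: convert each float to a fixed-point
# integer, add the integers (exact, order-independent), then convert the
# total back to a float. The scale factor is an illustrative assumption.
SCALE = 1 << 32

def reproducible_sum(values):
    total = sum(int(round(v * SCALE)) for v in values)
    return total / SCALE

a = [0.1, 0.2, 0.3]
# Plain float addition can depend on operand order; the integer path cannot.
print(reproducible_sum(a) == reproducible_sum(reversed(a)))
```

The payoff in a collective network is that every node sees bit-identical results regardless of the order in which contributions arrive.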
20110173422 | PAUSE PROCESSOR HARDWARE THREAD UNTIL PIN - A system and method for enhancing performance of a computer which includes a computer system including a data storage device. The computer system includes a program stored in the data storage device, and steps of the program are executed by a processor. The processor processes instructions from the program. A wait state in the processor waits for receipt of specified data. A thread in the processor has a pause state wherein the processor waits for specified data. A pin in the processor initiates a return to an active state from the pause state for the thread. A logic circuit is external to the processor, and the logic circuit is configured to detect a specified condition. The pin initiates a return to the active state of the thread when the specified condition is detected using the logic circuit. | 2011-07-14 |
20110173423 | Look-Ahead Hardware Wake-and-Go Mechanism - A hardware wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism looks ahead in the instruction stream of a thread for programming idioms that indicate that the thread is waiting for an event. The wake-and-go mechanism updates a wake-and-go array with a target address associated with the event for each recognized programming idiom. When the thread reaches a programming idiom, the thread goes to sleep until the event occurs. The wake-and-go array may be a content addressable memory (CAM). When a transaction appears on the symmetric multiprocessing (SMP) fabric that modifies the value at a target address in the CAM, the CAM returns a list of storage addresses at which the target address is stored. The wake-and-go mechanism associates these storage addresses with the threads waiting for an event at the target addresses, and may wake the one or more threads waiting for the event. | 2011-07-14 |
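The CAM lookup above can be modeled in a few lines. This is a toy software model, not the patented hardware: the dict stands in for the content-addressable memory, and all names are illustrative.

```python
# Toy model of a wake-and-go array: a CAM-like dict maps a watched target
# address to the threads sleeping on it. A write transaction on the SMP
# "fabric" looks up the address and returns the threads to wake.
class WakeAndGoArray:
    def __init__(self):
        self.cam = {}  # target address -> list of waiting thread ids

    def sleep_on(self, thread_id, target_addr):
        """A thread hits a polling idiom and goes to sleep on an address."""
        self.cam.setdefault(target_addr, []).append(thread_id)

    def on_write(self, target_addr):
        """A fabric transaction modified target_addr; wake its waiters."""
        return self.cam.pop(target_addr, [])

arr = WakeAndGoArray()
arr.sleep_on("t1", 0x1000)
arr.sleep_on("t2", 0x1000)
print(arr.on_write(0x1000))  # both waiters woken
print(arr.on_write(0x2000))  # nobody watching this address
```

In the real mechanism the lookup happens in hardware by snooping the coherence fabric, so sleeping threads consume no polling cycles.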
20110173424 | INTEGRATED CIRCUIT DEVICE CONFIGURATION - Various embodiments include an integrated circuit (IC) device having a conductive contact, and a circuit to determine a resistance value of a circuit path between the conductive contact and a circuit node during an initialization mode of the device. The IC device includes a controller to select at least one value of at least one operating parameter of the device based at least in part on the resistance value. | 2011-07-14 |
20110173425 | COMPUTER AND METHOD FOR MANAGING COMPUTER - A computer includes a control module and a basic input and output system (BIOS) storage module. The BIOS storage module stores BIOS programs. The BIOS storage module includes a detection sub-module and a switch sub-module. The detection sub-module is capable of detecting a network connection state. The switch sub-module is capable of controlling an on-off state of the detection sub-module. The control module is capable of executing a control operation to restrict a system function when a connected network state is detected by the detection sub-module. | 2011-07-14 |
20110173426 | METHOD AND SYSTEM FOR PROVIDING INFORMATION TO A SUBSEQUENT OPERATING SYSTEM - A method for transferring execution to a subsequent operating system. The method includes rebooting a computer system. Rebooting the computer system includes initializing an in-kernel boot loader. The in-kernel boot loader executes in a kernel of an initial operating system. Rebooting the computer system further includes populating, by the in-kernel boot loader, an initialization data structure using system data gathered during the execution of the initial operating system, loading, by the in-kernel boot loader, the subsequent operating system, and transferring control of the computer system from the initial operating system to the subsequent operating system. The subsequent operating system accesses the initialization data structure to identify available hardware. The method further includes executing the subsequent operating system on the available hardware of the computer system. | 2011-07-14 |
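The handoff above resembles a kexec-style reboot: the running kernel packages what it already knows so the next OS need not re-probe. A toy model, with every field name made up for illustration:

```python
# Toy model of the in-kernel boot loader handoff: the initial OS fills an
# initialization structure from data gathered during its own execution, and
# the subsequent OS reads that structure to identify available hardware.
def populate_init_data(gathered):
    """In-kernel boot loader: package system data for the next OS."""
    return {"available_hardware": gathered["devices"],
            "memory_map": gathered["memory_map"]}

def boot_subsequent_os(init_data):
    """Subsequent OS: discover hardware from the handed-over structure."""
    return sorted(init_data["available_hardware"])

gathered = {"devices": ["nic0", "disk0"], "memory_map": [(0, 0x8000_0000)]}
print(boot_subsequent_os(populate_init_data(gathered)))
```

Skipping hardware re-discovery is where the reboot-time savings would come from.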
20110173427 | System and Method for Personalizing Devices - A system and method for personalizing a device is disclosed herein. A user configures a plurality of settings associated with a device. Each setting is identified as a user setting or a platform setting. The user settings are stored in a personalization virtual object with the user. Platform settings are stored separately from the personalization virtual object. Software for personalizing a device provided on a computer readable medium is disclosed herein. The software comprises code for execution on a central processing unit operable to configure a plurality of settings associated with a device by a user. The software identifies each setting as a user setting or a platform setting. The user settings are stored in a personalization virtual object associated with the user, and the platform settings are stored separately from the personalization virtual object. | 2011-07-14 |
20110173428 | COMPUTER SYSTEM, METHOD FOR BOOTING A COMPUTER SYSTEM, AND METHOD FOR REPLACING A COMPONENT - The invention relates to a computer system ( | 2011-07-14 |
20110173429 | METHOD AND APPARATUS TO MINIMIZE COMPUTER APPARATUS INITIAL PROGRAM LOAD AND EXIT/SHUT DOWN PROCESSING - A method to reduce, and thereby improve, the initial program load time of a computing apparatus operating system, thus providing for near-instantaneous user interaction. When practicing the instant invention, a computing apparatus operating system or application processing component is loaded neither sequentially nor completely, but rather on an as-required basis. The invention's "required only" loading of processing components persists through subsequent operation and shut down of the computing apparatus, with each loaded task creating a checkpoint record of processing modifications to non-volatile memory. Such checkpointing allows shut down processing of the apparatus to consist of merely flushing memory buffers in the checkpointed non-volatile memory of the apparatus to permanent storage and powering off the apparatus, with subsequent initial program load (IPL) sequencing referencing the checkpointed records to minimize future system initialization elapsed time. | 2011-07-14 |
20110173430 | IT Automation Appliance Imaging System and Method - A system, method, and computer program product for harvesting an image from a local disk of a managed endpoint to an image library is provided. In an embodiment of the method for harvesting an image, a managed endpoint is provided with a boot image that causes the endpoint to instantiate a RAM disk and execute the boot image from the RAM disk. The boot image is used to harvest an image by determining data on a local disk of the managed endpoint to be included in the image that are not already stored in the image library. In one embodiment, this is done by comparing hashes calculated on the data on the local disk to hashes of data in the image library. The data not already stored in the image library are then copied to the image library. | 2011-07-14 |
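The hash-comparison step above is standard block-level deduplication. A minimal sketch, assuming a fixed block size (the patent does not specify one) and a dict standing in for the image library:

```python
import hashlib

# Sketch of hash-based image harvesting: only blocks whose hashes are absent
# from the library get copied. The 4 KiB block size is an assumption.
BLOCK = 4096

def harvest(local_disk: bytes, library: dict) -> int:
    """Copy blocks missing from `library` (hash -> block); return count copied."""
    copied = 0
    for i in range(0, len(local_disk), BLOCK):
        block = local_disk[i:i + BLOCK]
        h = hashlib.sha256(block).hexdigest()
        if h not in library:
            library[h] = block
            copied += 1
    return copied

lib = {}
disk = b"A" * BLOCK + b"B" * BLOCK + b"A" * BLOCK  # third block repeats the first
print(harvest(disk, lib))  # only 2 unique blocks copied
```

The bandwidth saving is proportional to how much of the endpoint's disk already exists in the library.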
20110173431 | HARDWARE SUPPORT FOR SOFTWARE CONTROLLED FAST RECONFIGURATION OF PERFORMANCE COUNTERS - Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores data value representing a time interval, and a timer element reads the data value and detects expiration of the time interval based on the data value and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters. | 2011-07-14 |
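The timer-plus-state-machine arrangement above can be modeled in software. This is a toy model, not the patented hardware; the class, its method names, and the event names are all illustrative.

```python
# Toy model of timer-driven counter reconfiguration: a set of configuration
# "registers" is cycled by a state machine each time the interval timer
# expires, so software need not intervene on every switch.
class CounterReconfigurator:
    def __init__(self, configs, interval):
        self.configs = configs    # configuration registers (event selections)
        self.interval = interval  # cycles between reconfigurations
        self.current = 0          # state machine: index of active config
        self.elapsed = 0          # timer element

    def tick(self, cycles=1):
        """Advance the timer; reconfigure on expiry; return active config."""
        self.elapsed += cycles
        if self.elapsed >= self.interval:
            self.elapsed = 0
            self.current = (self.current + 1) % len(self.configs)
        return self.configs[self.current]

r = CounterReconfigurator(["cache_misses", "branch_mispredicts"], interval=3)
print([r.tick() for _ in range(6)])
```

Time-multiplexing the counters this way lets a small number of physical counters sample a larger set of events.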
20110173432 | RELIABILITY AND PERFORMANCE OF A SYSTEM-ON-A-CHIP BY PREDICTIVE WEAR-OUT BASED ACTIVATION OF FUNCTIONAL COMPONENTS - A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data, and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit. | 2011-07-14 |
20110173433 | METHOD AND APPARATUS FOR TUNING A PROCESSOR TO IMPROVE ITS PERFORMANCE - A data processing apparatus comprising a processor for executing a data processing process and a processor for executing a tuning process is disclosed. The data processing apparatus is arranged such that the tuning process, which is a different process from the data processing process, can access the parameters of speculative mechanisms of the data processing process and tune those parameters so that the mechanisms speculate differently, thereby improving the performance of the data processing process. | 2011-07-14 |
20110173434 | SYSTEM AND METHOD FOR REDUCING MESSAGE SIGNALING - A system for communicating a message using a second signaling protocol is disclosed. The second signaling protocol provides a session control channel between a user agent (UA) and a network node and may include, for example the I1 protocol. The system identifies a first string to be transmitted within a first message. The first message is encoded in accordance with a first signaling protocol. The system associates the first string with a first key, and stores the first string and the first key in a database. The database associates the first string and the first key. The system encodes the first key within a second message, and transmits the second message using the second signaling protocol. The first string may include a plurality of data values. The system sorts the plurality of data values into an ordering, and associates each of the plurality of data values with a key. | 2011-07-14 |
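The string-to-key substitution above is a simple dictionary-compression idea: store the long string once, send only a short key thereafter. A minimal sketch; the class and the sample header string are illustrative, not from the patent.

```python
# Sketch of signaling reduction: a long, repeated protocol string is stored
# once in a shared table and replaced on the wire by a short integer key.
class KeyTable:
    def __init__(self):
        self.by_string, self.by_key, self.next_key = {}, {}, 0

    def encode(self, s):
        """Return the key for s, assigning a new one on first sight."""
        if s not in self.by_string:
            self.by_string[s] = self.next_key
            self.by_key[self.next_key] = s
            self.next_key += 1
        return self.by_string[s]

    def decode(self, key):
        return self.by_key[key]

t = KeyTable()
header = "application/sdp;charset=utf-8"  # hypothetical repeated string
k = t.encode(header)
print(k, t.decode(k) == header)
```

The saving grows with how often the same string would otherwise be retransmitted over the session control channel.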
20110173435 | Secure Node Admission in a Communication Network - A system and method for node admission in a communication network having a network coordinator (NC) and a plurality of associated network nodes. According to various embodiments of the disclosed method and apparatus, key determination in a communication network includes a network node (NN) sending to the NC a request for a SALT; the NN receiving the SALT from the NC, combining the SALT with its network password to calculate a static key, and submitting an admission request to the network coordinator to request a dynamic key. The SALT can be a random number generated by the NC, and the admission request can be encrypted by the NN using the static key. | 2011-07-14 |
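The salt-plus-password derivation above can be sketched in a few lines. SHA-256 as the combining function is an assumption for illustration; the actual derivation is whatever the network standard specifies.

```python
import hashlib
import secrets

# Sketch of the admission flow: the coordinator issues a random SALT, and
# both sides derive the static key from SALT + network password. SHA-256
# here is an illustrative stand-in for the real key-derivation function.
def derive_static_key(salt: bytes, password: str) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()

salt = secrets.token_bytes(16)  # NC generates a random SALT per request
nn_key = derive_static_key(salt, "network-password")  # new node's result
nc_key = derive_static_key(salt, "network-password")  # coordinator's result
print(nn_key == nc_key)  # both sides hold the same static key
```

Because the SALT is fresh per admission, a captured static key from one admission attempt does not replay against a later one.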
20110173436 | Method and apparatus for providing secure streaming data transmission facilities using unreliable protocols - The invention provides a method and apparatus for transmitting data securely using an unreliable communication protocol, such as User Datagram Protocol. In one variation, the invention retains compatibility with conventional Secure Sockets Layer (SSL) and SOCKS protocols, such that secure UDP datagrams can be transmitted between a proxy server and a client computer in a manner analogous to conventional SOCKS processing. In contrast to conventional SSL processing, which relies on a guaranteed delivery service such as TCP and encrypts successive data records with reference to a previously-transmitted data record, encryption is performed using a nonce that is embedded in each transmitted data record. This nonce acts both as an initialization vector for encryption/decryption of the record, and as a unique identifier to authenticate the record. Because decryption of any particular record does not rely on receipt of a previously received data record, the scheme will operate over an unreliable communication protocol. The system and method allows secure packet transmission to be provided with a minimum amount of overhead. Further, the invention provides a network arrangement that employs a cache having copies distributed among a plurality of different locations. SSL/TLS session information for a session with each of the proxy servers is stored in the cache so that it is accessible to at least one other proxy server. Using this arrangement, when a client computer switches from a connection with a first proxy server to a connection with a second proxy server, the second proxy server can retrieve SSL/TLS session information from the cache corresponding to the SSL/TLS communication session between the client device and the first proxy server. The second proxy server can then use the retrieved SSL/TLS session information to accept a session with the client device. | 2011-07-14 |
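The per-record nonce idea above is what makes each datagram self-contained. A toy sketch: the "keystream" (SHA-256 of key plus nonce, so payloads are capped at 32 bytes here) is an illustrative stand-in for a real cipher, not the patented construction.

```python
import hashlib
import os

# Sketch of per-record nonce encryption: each record carries its own nonce,
# so any record decrypts without the previous one, unlike chained SSL/TLS
# records. The SHA-256 keystream is a toy stand-in for a real stream cipher
# and limits payloads to 32 bytes in this sketch.
def seal(key: bytes, payload: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce embedded in every record
    stream = hashlib.sha256(key + nonce).digest()[:len(payload)]
    return nonce + bytes(p ^ s for p, s in zip(payload, stream))

def open_record(key: bytes, record: bytes) -> bytes:
    nonce, body = record[:16], record[16:]
    stream = hashlib.sha256(key + nonce).digest()[:len(body)]
    return bytes(b ^ s for b, s in zip(body, stream))

key = b"0" * 32
r1, r2 = seal(key, b"datagram-1"), seal(key, b"datagram-2")
print(open_record(key, r2))  # r2 decrypts even if r1 was lost in transit
```

Loss tolerance falls out directly: no record's decryption state depends on another record having arrived, which is exactly what UDP requires.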
20110173437 | INTERFACE FOR PDA AND COMPUTING DEVICE - A method of reviewing an email attachment receives at an email server an email message including at least one attachment. A preview portion of the email message is transmitted to a mobile communication device. The preview portion does not include the at least one attachment, and the preview portion is viewable on a computing device in communication with the mobile communications device. An attachment download instruction based on the preview portion is received from the computing device via the mobile communication device. The at least one attachment is transmitted to the computing device based on the attachment download instruction. The attachment is not transmitted to the computing device until the attachment download instruction is received. | 2011-07-14 |
20110173438 | METHOD AND SYSTEM FOR SECURE USE OF SERVICES BY UNTRUSTED STORAGE PROVIDERS - A method for encrypting data. The method comprises receiving, from a user, via a client terminal, digital content including at least one textual string for filling in at least one field in a document managed by a network node via a computer network, encrypting the at least one textual string, and sending the at least one encrypted textual string to the network node via the computer network so as to allow filling in the at least one field with the at least one encrypted textual string. The network node is configured for storing and retrieving the at least one textual encrypted string without decrypting. | 2011-07-14 |
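The core of the scheme above is that encryption happens client-side, so the storage provider only ever handles ciphertext. A toy sketch: the XOR-with-hashed-key "cipher" and all names are illustrative stand-ins, not the patented method.

```python
import hashlib

# Sketch of client-side field encryption for an untrusted provider: the
# client encrypts the textual string before sending it, and the network
# node stores and retrieves ciphertext it cannot read. The XOR keystream
# is a toy stand-in for a real cipher.
def encrypt(key: bytes, text: str) -> bytes:
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(text.encode()))

def decrypt(key: bytes, blob: bytes) -> str:
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(blob)).decode()

server_store = {}  # the untrusted network node: holds ciphertext only
server_store["name_field"] = encrypt(b"user-key", "Alice")
print(decrypt(b"user-key", server_store["name_field"]))  # client decrypts
```

The node can still fill, store, and return the field as the abstract describes, without ever being able to decrypt it.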