47th week of 2012 patent application highlights part 60 |
Patent application number | Title | Published |
20120297085 | REDUCTION OF MESSAGE FLOW BETWEEN BUS-CONNECTED CONSUMERS AND PRODUCERS - A system, method, and computer readable medium for reducing message flow on a message bus are disclosed. The method includes determining if at least one logical operator in a plurality of logical operators requires processing on a given physical processing node in a group of physical processing nodes. The logical operator is pinned to the given physical processing node. The pinning prevents any subsequent reassignment of the logical operator to another physical processing node. Each logical operator in the plurality of logical operators is assigned to an initial physical processing node in the group of physical processing nodes on a message bus. A determination is made as to whether at least one logical operator in the plurality of logical operators needs to be reassigned to a different physical processing node. The at least one logical operator is reassigned to the different physical processing node. | 2012-11-22 |
20120297086 | METHOD FOR IMPLEMENTING COMMUNICATION BETWEEN DIFFERENT NETWORKS AND APPARATUS - Embodiments of the present invention disclose a method for implementing communication between different networks, where the method includes: receiving a multicast data obtaining request supporting a first network protocol, and determining multicast data identity information (MDID) of multicast data that needs to be obtained; obtaining, according to the MDID, in a multicast manner and from a network device supporting a second network protocol, the multicast data that needs to be obtained, and buffering the multicast data that needs to be obtained; establishing, for the multicast data that needs to be obtained, a multicast group supporting the first network protocol; and sending the multicast data that needs to be obtained by a user apparatus to the user apparatus which joins the multicast group supporting the first network protocol. | 2012-11-22 |
20120297087 | Method And Apparatus For Message Distribution In A Device Management System - A method and apparatus for managing CPE devices. In managing a CPE, an ACS must first establish a communication session with the CPE. In accordance with the present invention, the connection request formed by the ACS and containing proxy information is transmitted to a primary blast box. The primary blast box, which includes a blast box registry, forwards the connection request to a plurality of secondary blast boxes, each secondary blast box being associated with a respective CGN private network of the communications network. The secondary blast boxes in turn remove the proxy information and forward the connection request to one or more CPEs in the private network encompassed by the corresponding CGN. Authentication information sent with the proxy information uniquely permits authentication of the connection request in the target CPE. When authentication occurs, the CPE initiates a communication session with the ACS so that the desired management function may be executed. | 2012-11-22 |
20120297088 | Selective Content Routing and Storage Protocol for Information-Centric Network - A network component comprising a receiver configured to receive an advertisement for a content name for content associated with a list of secured router identifiers (SRIDs) that indicates a plurality of content routers authorized for routing and caching the content, a processor configured to determine whether to flood the advertisement to a plurality of neighboring nodes if a locally assigned SRID is included in the list of SRIDs received in the advertisement or to drop the advertisement otherwise, a transmitter configured to flood the advertisement on a plurality of ports coupled to the neighboring nodes, and a storage configured to cache received content if the received content is associated with the locally assigned SRID. | 2012-11-22 |
20120297089 | Systems and Methods of Mapped Network Address Translation - A private customer IP address is mapped to a public NAT address using a repeatable, reversible algorithm. A given private IP address must always map to the same public IP address and a fixed range of source ports. In the mapped address translation (MAT) implementation, private IP addresses are mapped to public IP/port ranges by borrowing bits from the 16-bit port number. | 2012-11-22 |
20120297090 | NETWORK ARCHITECTURE FOR SYNCHRONIZED DISPLAY - Systems and methods are provided that couple one or more devices to one or more presentation screens and to one or more servers via network connections. Various devices can be identified on a network and location data regarding each of the mobile devices can be delivered to the servers. Data can be displayed on a presentation screen based on mobile devices in its proximity, for example. | 2012-11-22 |
20120297091 | METHOD AND APPARATUS OF SERVER I/O MIGRATION MANAGEMENT - In an information system, for I/O migration, the migration management module detects a first I/O function associated with a first I/O device to which the OS is connected, selects a second I/O function associated with a second I/O device which is of the same type as the first I/O device, and instructs the OS to hot-add the second I/O function. The OS then sets up teaming between a first virtual MAC address of a first virtual NIC corresponding to the first I/O function and a second virtual MAC address of a second virtual NIC corresponding to the second I/O function, and disconnects the first virtual MAC address of the first virtual NIC corresponding to the first I/O function. | 2012-11-22 |
20120297092 | MANAGING DATA MOVEMENT IN A CELL BROADBAND ENGINE PROCESSOR - A cell broadband engine processor includes memory, a power processing element (PPE) coupled with the memory, and a plurality of synergistic processing elements. The PPE creates an SPE as a computing SPE for an application. The PPE determines idle ones of the plurality of SPEs, and creates a managing SPE from one of the idle SPEs. Each of the plurality of SPEs is associated with a local storage. The managing SPE informs the computing SPE of a starting effective address of the local storage of the managing SPE and an effective address for a command queue. The managing SPE manages movement of data associated with computing of the computing SPE based on one or more commands associated with the application. A computing SPE sends the one or more commands to the managing SPE for insertion into the command queue. | 2012-11-22 |
20120297093 | SMART CARD SET PROTOCOL OPTIMIZATION - A method of facilitating communications between a computer device and a smart card reader having an associated smart card, the computer device including a smart card resource manager and a smart card reader service, the smart card reader service acting as a relay for commands between the smart card resource manager and the smart card reader, the method comprising: receiving from the smart card resource manager a first command for setting a protocol for communications with the smart card; and responding, prior to receiving a reply from the smart card to the first command, to the smart card resource manager with a message indicating that the smart card has successfully received the first command. | 2012-11-22 |
20120297094 | DEVICE START UP SYSTEM AND METHOD - Software executes on a processor of a device, such as an automated teller machine, at start-up to perform validation of expected peripheral devices for a predetermined number of start-ups. Once the predetermined number of start-ups has been reached with the same peripheral devices present and operational, the validation operation is curtailed and start-up of the device is sped up. | 2012-11-22 |
20120297095 | DMA CONTROL DEVICE AND IMAGE FORMING APPARATUS - Provided is a method of efficiently performing DMA transfer of data without incurring heavy overhead. A data transfer detecting portion detects data transfer from an external device to a predetermined memory area in a memory; and a DMA execution instructing portion instructs, when the data transfer to the memory area is detected by the data transfer detecting portion, an image processing DMA controller to start execution of the direct memory access transfer of data from the memory area to an image processing dedicated memory. | 2012-11-22 |
20120297096 | Data Flow Control Within and Between DMA Channels - In one embodiment, a direct memory access (DMA) controller comprises a transmit circuit and a data flow control circuit coupled to the transmit circuit. The transmit circuit is configured to perform DMA transfers, each DMA transfer described by a DMA descriptor stored in a data structure in memory. There is a data structure for each DMA channel that is in use. The data flow control circuit is configured to control the transmit circuit's processing of DMA descriptors for each DMA channel responsive to data flow control data in the DMA descriptors in the corresponding data structure. | 2012-11-22 |
20120297097 | UNIFIED DMA - In one embodiment, an apparatus comprises a first interface circuit, a direct memory access (DMA) controller coupled to the first interface circuit, and a host coupled to the DMA controller. The first interface circuit is configured to communicate on an interface according to a protocol. The host comprises at least one address space mapped, at least in part, to a plurality of memory locations in a memory system of the host. The DMA controller is configured to perform DMA transfers between the first interface circuit and the address space, and the DMA controller is further configured to perform DMA transfers between a first plurality of the plurality of memory locations and a second plurality of the plurality of memory locations. | 2012-11-22 |
20120297098 | METHOD, SERVICE BOARD, AND SYSTEM FOR TRANSMITTING KVM DATA - A method for transmitting keyboard, video, mouse (KVM) data includes converting, by a service board, KVM data into a KVM packet; sending the KVM packet to a switch board through a BASE channel, so that the switch board forwards the KVM packet to a remote console. The embodiments of the present invention are mainly applied to a process for implementing KVM data transmission based on ATCA architecture. | 2012-11-22 |
20120297099 | CONTROL OVER LOADING OF DEVICE DRIVERS FOR AN INDIVIDUAL INSTANCE OF A PCI DEVICE - A method identifies a plurality of PCI devices in a computer system by an associated PCI device handle, wherein each of the PCI devices is also associated with a default EFI device driver. The method further identifies a target PCI device to be disabled from within the plurality of PCI devices, provides a dummy driver that enables fewer functions for the target PCI device than would the default EFI device driver, and binds the dummy driver to the target PCI device instead of binding the default EFI device driver associated with the target PCI device. The dummy driver may be used to effectively disable the target PCI device so that the POST does not hang, or completes faster, without loading the default EFI device driver. | 2012-11-22 |
20120297100 | STORAGE SYSTEM AND DATA TRANSMISSION METHOD - A storage system and a data transmission method are disclosed in embodiments of the present invention. According to embodiments of the present invention, a storage system contains a master node and auxiliary nodes at different physical positions, for example, a short-distance auxiliary node and a long-distance auxiliary node, and for auxiliary nodes at different physical positions, different protocols are adopted for data transmission. For example, data is transmitted between the master node and the short-distance auxiliary node by using a SAS protocol through a SAS cable, while data is transmitted between the master node and the long-distance auxiliary node by using a protocol that supports serial long-distance transmission through an optical fiber or a serial cable, thereby minimizing cost while still enabling long-distance data transmission. | 2012-11-22 |
20120297101 | SAFETY MODULE FOR AN AUTOMATION DEVICE - Exemplary embodiments are directed to a safety module for connection to an automation device or automation system which is provided for control of safety critical and non-safety critical processes and/or plant components. The module includes a communication board that includes a processing unit which is connected, via an input/output bus slave and an external input/output bus, to a central processing unit, and one or more secure processing units arranged on one or more circuit boards having safety oriented input/output circuits for safety oriented functions. A serial communication master is connected via communication links to at least one of the circuit boards so that the at least one circuit board receives messages sent by the communication board and transmits safety oriented messages from and/or to the processing unit of the communication board via one of the secure processing units. | 2012-11-22 |
20120297102 | SERVER AND MOTHERBOARD - A server includes a motherboard and a fan system. The motherboard includes a circuit board, on which are mounted first and second groups of expansion slots, a central processing unit (CPU), and a group of memory slots. The fan system is arranged near a side of the circuit board, and the disposition of the fan system and the components on the circuit board is such that the airflow from the fan system does not blow pre-heated air from one component over another component. | 2012-11-22 |
20120297103 | FABRIC INTERCONNECT FOR DISTRIBUTED FABRIC ARCHITECTURE - A system includes scaled-out fabric coupler (SFC) boxes and distributed line card (DLC) boxes. Each SFC box has fabric ports and a cell-based switch fabric for switching cells. Each DLC box is in communication with every SFC box. Each DLC box has network ports receiving packets and network processors. Each processor has a fabric interface that provides SerDes channels. The processors divide each packet received over the network ports into cells and distribute the cells of each packet across the SerDes channels. Each DLC box further comprises DLC fabric ports through which the DLC is in communication with the SFCs. Each DLC fabric port includes a pluggable interface with a given number of lanes over which to transmit and receive cells. Each lane is mapped to one of the SerDes channels such that an equal number of SerDes channels of each fabric interface is mapped to each DLC fabric port. | 2012-11-22 |
20120297104 | CONTROLLED INTERMEDIATE BUS ARCHITECTURE OPTIMIZATION - An intermediate bus architecture power system includes a bus converter that converts an input voltage into a bus voltage on an intermediate bus and a point-of-load converter that supplies an output voltage from the bus voltage on the intermediate bus. Additionally, the intermediate bus architecture power system includes a decision engine optimizing controller that controls a system variable to improve an overall system performance based on a monitored system variable or a system constraint. In another aspect, a method of operating an intermediate bus architecture power system includes converting an input voltage into a bus voltage on an intermediate bus and converting the bus voltage on the intermediate bus into an output voltage. The method also includes controlling a system variable to improve overall system performance based on a monitored system variable or a system constraint. | 2012-11-22 |
20120297105 | PATTERN DETECTION FOR PARTIAL NETWORKING - A pattern detector for a bus node of a system bus having a plurality of stations that are coupled together by means of an arrangement of bus lines, the bus node comprising: decoding circuitry configured for an analysis of sub-patterns in a stream of data on at least one bus line, and analysing circuitry configured to determine a series of digital relative length information of said sub-patterns, wherein said relative length information is generated by comparison of an actual sub-pattern with a preceding sub-pattern in the stream of data on said at least one bus line. A corresponding method of encoding digital bus message information on a bus system, in which the digital bus message comprises at least one part that is to be transmitted by means of sub-patterns in a stream of data on at least one bus line, comprises: encoding a series of digital relative information by means of the sub-patterns in the stream of data, wherein said relative information is generated by adapting each sub-pattern carrying one bit of the bus message information with respect to a preceding sub-pattern. Corresponding digital bus messages may be encoded in accordance with the method; such bus messages are of particular use in a bus system in which communication takes place in an arbitrary manner. | 2012-11-22 |
20120297106 | Method and System for Dynamically Managing a Bus of a Portable Computing Device - A method and system for dynamically managing a bus within a portable computing device (“PCD”) are described. The method and system include monitoring software requests with a bus manager. The bus manager determines if a software request needs to be converted into at least one of an instantaneous bandwidth value and an average bandwidth value. The bus manager then converts the software requests into these two types of values as needed. The bus manager calculates a sum of average bandwidth values across all software requests in the PCD. With these values, the bus manager may dynamically adjust settings of the bus based on instantaneous or near instantaneous demands from the master devices. This dynamic adjustment of the bus settings may afford more power savings for the PCD during low loads or during sleep states. | 2012-11-22 |
20120297107 | STORAGE CONTROLLER SYSTEM WITH DATA SYNCHRONIZATION AND METHOD OF OPERATION THEREOF - A method of operation of a storage controller system includes: accessing a first controller having a synchronization bus; accessing a second controller, by the first controller, through the synchronization bus; and receiving a first transaction layer packet by the first controller including performing a multi-cast transmission between the first controller and the second controller through the synchronization bus. | 2012-11-22 |
20120297108 | INTEGRATED ELECTRONIC SYSTEM MOUNTED ON AIRCRAFT - The present invention provides an electronic system mounted on an aircraft which can effectively reduce the number of electronic devices and wires by integration of control systems. Specifically, a fuselage ( | 2012-11-22 |
20120297109 | FACILITATING DATA COHERENCY USING IN-MEMORY TAG BITS AND FAULTING STORES - Fine-grained detection of data modification of original data is provided by associating separate guard bits with granules of memory storing the original data from which translated data has been obtained. The guard bits facilitate indicating whether the original data stored in the associated granule is indicated as protected. The guard bits are set and cleared by special-purpose instructions. Responsive to initiating a data store operation to modify the original data, the associated guard bit(s) are checked to determine whether the original data is indicated as protected. Responsive to the checking indicating that a guard bit is set for the associated original data, the data store operation to modify the original data is faulted and the translated data is discarded, thereby facilitating data coherency between the original data and the translated data. | 2012-11-22 |
20120297110 | METHOD AND APPARATUS FOR IMPROVING COMPUTER CACHE PERFORMANCE AND FOR PROTECTING MEMORY SYSTEMS AGAINST SOME SIDE CHANNEL ATTACKS - A physical cache memory that is divided into one or more virtual segments using multiple circuits to decode addresses is provided. An address mapping and an address decoder are selected for each virtual segment. The address mapping comprises two or more address bits as set indexes for the virtual segment, and the selected address bits are different for each virtual segment. A cache address decoder is provided for each virtual segment to enhance execution performance of programs or to protect against side channel attacks. Each physical cache address decoder comprises an address mask register to extract the selected address bits to locate objects in the virtual segment. The foregoing can be implemented as a method or apparatus for protecting against a side channel attack. | 2012-11-22 |
20120297111 | Non-Volatile Memory And Method With Improved Data Scrambling - A memory device cooperating with a memory controller scrambles each unit of data using a selected scrambling key before storing it in an array of nonvolatile memory cells. This helps to reduce program disturbs, user read disturbs, and floating gate to floating gate coupling that result from repeated and long term storage of specific data patterns. For a given page of data having a logical address and for storing at a physical address, the key is selected from a finite sequence thereof as a function of both the logical address and the physical address. In a block management scheme in which the memory array is organized into erase blocks, the physical address is the relative page number in each block. When logical addresses are grouped into logical groups and manipulated as a group, and each group is storable into a sub-block, the physical address is the relative page number in the sub-block. | 2012-11-22 |
20120297112 | DATA STORAGE METHODS AND APPARATUSES FOR REDUCING THE NUMBER OF WRITES TO FLASH-BASED STORAGE - Methods and apparatuses are provided for reducing the number of write operations to a flash-based storage system that stores and replaces data. The storage system includes a first storage implemented using non-flash storage and a second storage implemented using flash memory. Missed data is first stored in the first storage, which can be less sensitive than flash to write operations. The missed data is stored in the flash-based second storage only after the missed data satisfies a storage management algorithm. | 2012-11-22 |
20120297113 | OPTIMIZED FLASH BASED CACHE MEMORY - Embodiments of the invention relate to throttling accesses to a flash memory device. The flash memory device is part of a storage system that includes the flash memory device and a second memory device. The throttling is performed by logic that is external to the flash memory device and includes calculating a throttling factor responsive to an estimated remaining lifespan of the flash memory device. It is determined whether the throttling factor exceeds a threshold. Data is written to the flash memory device in response to determining that the throttling factor does not exceed the threshold. Data is written to the second memory device in response to determining that the throttling factor exceeds the threshold. | 2012-11-22 |
20120297114 | STORAGE CONTROL APPARATUS AND MANAGEMENT METHOD FOR SEMICONDUCTOR-TYPE STORAGE DEVICE - The present invention is provided for maintaining and replacing storage devices systematically in accordance with a schedule. A storage control apparatus | 2012-11-22 |
20120297115 | PROGRAM CODE LOADING AND ACCESSING METHOD, MEMORY CONTROLLER, AND MEMORY STORAGE APPARATUS - A method of loading a program code from a rewritable non-volatile memory module is provided, wherein the program code includes data segments and two program code copies corresponding to the program code are stored in the rewritable non-volatile memory module. The method includes loading a first data segment of a first program code copy and determining whether the first data segment contains any uncorrectable error bit. The method also includes, when the first data segment does not contain any uncorrectable error bit, loading a second data segment of the first program code copy. The method further includes, when the first data segment contains an uncorrectable error bit, loading a first data segment of a second program code copy, and then loading a second data segment of the first program code copy or the second program code copy. Thereby, the program code can be successfully loaded. | 2012-11-22 |
20120297116 | SPARSE PROGRAMMING OF ANALOG MEMORY CELLS - A method for data storage in a memory including an array of analog memory cells, includes selecting a group of the memory cells such that each memory cell in the group has one or more neighbor memory cells in the array that are excluded from the group. Data is stored in the group of the memory cells while excluding the neighbor memory cells from programming as long as the data is stored in the group of the memory cells. | 2012-11-22 |
20120297117 | DATA STORAGE DEVICE AND DATA MANAGEMENT METHOD THEREOF - Disclosed is a data managing method of a storage device including a nonvolatile memory device. The data managing method includes detecting an update count of update-requested page data and allocating the update-requested page data to a first memory block or a second memory block based upon the update count, an erase count of the second memory block being different from that of the first memory block. | 2012-11-22 |
20120297118 | FAST TRANSLATION INDICATOR TO REDUCE SECONDARY ADDRESS TABLE CHECKS IN A MEMORY DEVICE - A system and method for reducing the need to check both a secondary address table and a primary address table for logical to physical translation tasks is disclosed. The method may include generating a fast translation indicator, such as a logical group bitmap, indicating whether there is an entry in the secondary address table that contains desired information pertaining to a particular logical address. Upon a host request relating to the particular logical address, the storage device may check the bitmap to determine if retrieval and parsing of the secondary table is necessary. The system may include a storage device having RAM cache storage, flash storage and a controller configured to generate and maintain at least one fast translation indicator to reduce the need to check both secondary and primary address tables during logical to physical address translation operations of the storage device. | 2012-11-22 |
20120297119 | STORAGE SYSTEM AND STORAGE MANAGEMENT METHOD FOR CONTROLLING OFF-LINE MODE AND ON-LINE OF FLASH MEMORY - A method for drying workpieces includes immersing the workpieces in a liquid bath, raising the workpieces with a first holder until a central opening of the workpieces is visible above the liquid surface, then using a dry second holder rod inserted through said central opening to continue the raising process. Due to this, a drying portion of the workpieces is not held by a wet holding mechanism. | 2012-11-22 |
20120297120 | STACK PROCESSOR USING A FERROELECTRIC RANDOM ACCESS MEMORY (F-RAM) FOR CODE SPACE AND A PORTION OF THE STACK MEMORY SPACE HAVING AN INSTRUCTION SET OPTIMIZED TO MINIMIZE PROCESSOR STACK ACCESSES - A stack processor and method implemented using a ferroelectric random access memory (F-RAM) for code and a portion of the stack memory space having an instruction set optimized to minimize processor stack accesses and thus minimize program execution time. This is particularly advantageous in low power applications and those in which the power supply is only available for a finite period of time such as RFID implementations. Disclosed herein is a relatively small but complete set of instructions enabling a multitude of possible applications to be supported with a program execution time that is not too long. | 2012-11-22 |
20120297121 | Non-Volatile Memory and Method with Small Logical Groups Distributed Among Active SLC and MLC Memory Partitions - A non-volatile memory organized into flash erasable blocks receives data from host writes by first staging into logical groups before writing into the blocks. Each logical group contains data from a predefined set of order logical addresses and has a fixed size smaller than a block. The totality of logical groups are obtained by partitioning a logical address space of the host into non-overlapping sub-ranges of ordered logical addresses, each logical group having a predetermined size within a range delimited by a minimum size of at least one page and a maximum size of fitting at least two logical groups in a block and up to an order of magnitude higher than a typical size of a host write. In this way, excessive garbage collection due to operating a large logical group is avoided while the address space is reduced to minimize the size of a caching RAM. | 2012-11-22 |
20120297122 | Non-Volatile Memory and Method Having Block Management with Hot/Cold Data Sorting - A non-volatile memory organized into flash erasable blocks sorts units of data according to a temperature assigned to each unit of data, where a higher temperature indicates a higher probability that the unit of data will suffer subsequent rewrites due to garbage collection operations. The units of data either come from a host write or from a relocation operation. The data are sorted either for storing into different storage portions, such as SLC and MLC, or into different operating streams, depending on their temperatures. This allows data of similar temperature to be dealt with in a manner appropriate for its temperature in order to minimize rewrites. Examples of a unit of data include a logical group and a block. | 2012-11-22 |
20120297123 | WEAR LEVELING - A method for operating a computer memory. The memory is organized to store data in units of such memory. For each unit of a set of units, a wear level of the unit is determined. A maximum wear level among the wear levels is determined. A suggestion of a subset of one or more units to be selected for data erasure is received, and at least one unit (i) in the subset whose wear level (c(i)) is less than the maximum wear level (c_max) is identified for subsequent data erasure. | 2012-11-22 |
20120297124 | FLASH MEMORY DEVICE - In a flash memory device, after an updated value is copied from a first block to a second block, a block management value of the first block is set to an unused state, and maintenance is performed to erase data from the first block. When performing maintenance, the block management value of the first block B1 is rewritten from “$FFF0” to “$FFFF.” When a reset occurs and the power supply is deactivated during the maintenance, the digit of “$0” in the block management value may become “1” to “E” of the hexadecimal system. In this manner, when the block management value includes a single digit of “1” to “E” and three digits of “F,” the reading of an updated value from the block corresponding to the block management value is restricted. | 2012-11-22 |
20120297125 | SOLID-STATE DEVICE WITH LOAD ISOLATION - Systems and methods are provided for coupling multiple flash devices to a shared bus utilizing isolation switches within a SSD device. The SSD device is operable at a speed of about 400 MT/s or higher with high signal integrity. The SSD device includes a controller, a channel in electrical communication with the controller, a plurality of isolation devices in electrical communication with the channel, and a plurality of flash memory devices, wherein each flash memory device is in electrical communication with the channel and the controller through one of the isolation devices. | 2012-11-22 |
20120297126 | INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING CONTROL PROGRAM AND CONTROL METHOD - An information processing apparatus includes a calculator configured to perform a calculation, a plurality of system boards, each of the plurality of system boards including a first storage unit that stores a first program of a first type, the first program being to be used to operate the calculator, a preliminary board including a plurality of second storage units, at least one of the plurality of second storage units storing a second program of a second type, the second program corresponding to the first programs, and a controller configured to compare any one of the first types of the first programs with the second type of the second program and to write, when any one of the first types does not match the second type, the first program of that first type into the second storage unit. | 2012-11-22 |
20120297127 | OPTIMIZED FLASH BASED CACHE MEMORY - Embodiments of the invention relate to throttling accesses to a flash memory device. The flash memory device is part of a storage system that includes the flash memory device and a second memory device. The throttling is performed by logic that is external to the flash memory device and includes calculating a throttling factor responsive to an estimated remaining lifespan of the flash memory device. It is determined whether the throttling factor exceeds a threshold. Data is written to the flash memory device in response to determining that the throttling factor does not exceed the threshold. Data is written to the second memory device in response to determining that the throttling factor exceeds the threshold. | 2012-11-22 |
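The throttling decision in 20120297127 can be illustrated with a minimal Python sketch. The abstract does not define how the throttling factor is computed from the estimated remaining lifespan, so the wear-versus-age ratio below, and all function and parameter names, are invented for illustration:

```python
def throttling_factor(writes_done, rated_writes, elapsed_days, rated_days):
    """Hypothetical lifespan estimate: fraction of write endurance already
    consumed, divided by the fraction of rated service life elapsed.
    A factor above 1.0 means the flash device is wearing out too fast."""
    wear = writes_done / rated_writes
    age = elapsed_days / rated_days
    return wear / age if age > 0 else 0.0

def route_write(factor, threshold=1.0):
    """Write to flash while the factor is within the threshold; otherwise
    divert the write to the second memory device in the storage system."""
    return "flash" if factor <= threshold else "second_device"
```

For example, a device that has used 1% of its endurance at 1% of its rated life stays on schedule and keeps accepting writes, while one that has used 50% of its endurance in the same time is throttled.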
20120297128 | REDUCING ACCESS CONTENTION IN FLASH-BASED MEMORY SYSTEMS - Exemplary embodiments include a method for reducing access contention in a flash-based memory system, the method including selecting a chip stripe in a free state, from a memory device having a plurality of channels and a plurality of memory blocks, wherein the chip stripe includes a plurality of pages, setting the chip stripe to a write state, setting, for each of the plurality of channels in the chip stripe, a write queue head to a first free page in a chip belonging to the channel from the chip stripe, allocating write requests according to a write allocation scheduler among the channels, generating a page write and, in response to the page write, incrementing the write queue head, and setting the chip stripe into an on-line state when it is full. | 2012-11-22 |
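The per-channel write-queue heads of 20120297128 can be sketched in a few lines of Python. This is not the patented implementation — the data layout (a per-channel list of free page addresses) and all names are assumptions made for illustration:

```python
def open_chip_stripe(stripe_pages):
    """stripe_pages: channel id -> list of free page addresses in the chip
    belonging to that channel. Each channel's write queue head starts at
    the first free page of its chip."""
    return {channel: 0 for channel in stripe_pages}

def allocate_write(heads, stripe_pages, channel):
    """Allocate the page at the channel's write queue head; the head is
    incremented in response to the page write."""
    page = stripe_pages[channel][heads[channel]]
    heads[channel] += 1
    return page
```

A write allocation scheduler would call `allocate_write` with a channel chosen to spread load, and the stripe would be set on-line once every head reaches the end of its page list.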
20120297129 | MEMORY SYSTEM AND METHOD HAVING VOLATILE AND NON-VOLATILE MEMORY DEVICES AT SAME HIERARCHICAL LEVEL - A processor-based system includes a processor coupled to core logic through a processor bus. This includes a dynamic random access memory (“DRAM”) memory buffer controller. The DRAM memory buffer controller is coupled through a memory bus to a plurality of dynamic random access memory (“DRAM”) modules and a flash memory module, which are at the same hierarchical level from the processor. Each of the DRAM modules includes a memory buffer coupled to the memory bus and to a plurality of dynamic random access memory devices. The flash memory module includes a flash memory buffer coupled to the memory bus and to at least one flash memory device. The flash memory buffer includes a DRAM-to-flash memory converter operable to convert the DRAM memory requests to flash memory requests, which are then applied to the flash memory device. | 2012-11-22 |
20120297130 | STACK PROCESSOR USING A FERROELECTRIC RANDOM ACCESS MEMORY (F-RAM) FOR BOTH CODE AND DATA SPACE - A stack processor using a ferroelectric random access memory (F-RAM) for both code and data space which presents the advantages of easy stack pointer management inasmuch as the stack pointer is itself a memory address. Further, the time for saving all critical registers to memory is also minimized in that all registers are already maintained in non-volatile F-RAM per se. | 2012-11-22 |
20120297131 | Scheduling-Policy-Aware DRAM Page Management Mechanism - Memory controller page management devices, systems, and methods are disclosed in which a memory controller is configured to access memory in response to a memory access request by applying a scheduler-aware page management policy to at least one memory page in the memory, based on row buffer status information for the pending memory access requests scheduled in a current cycle. | 2012-11-22 |
20120297132 | MOTHERBOARD OF COMPUTING DEVICE - A motherboard of a computing device includes a dual inline memory module (DIMM), a processor socket, a platform controller hub (PCH), a switch, and a switch controller. The DIMM is connected to the processor socket or the PCH through the switch controller. The switch is connected to the switch controller, and generates a signal when the switch is operated. The switch controller controls the DIMM to connect either to the processor socket or to the PCH according to the signal, so that a solid state disk (SSD) or a memory that is connected to the DIMM can be supported appropriately by the motherboard. | 2012-11-22 |
20120297133 | METHODS AND SYSTEMS OF DISTRIBUTING RAID IO LOAD ACROSS MULTIPLE PROCESSORS - A method for distributing IO load in a RAID storage system is disclosed. The RAID storage system may include a plurality of RAID volumes and a plurality of processors. The IO load distribution method may include determining whether the RAID storage system is operating in a write-through mode or a write-back mode; distributing the IO load to a particular processor selected among the plurality of processors when the RAID storage system is operating in the write-through mode, the particular processor being selected based on a number of available resources associated with the particular processor; and distributing the IO load among the plurality of processors when the RAID storage system is operating in the write-back mode, the distribution being determined based on: an index of a data stripe, and a number of processors in the plurality of processors. | 2012-11-22 |
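The two distribution rules described in 20120297133 map naturally to a short Python sketch. The abstract states the write-back rule uses the stripe index and the processor count; the specific formula below (stripe index modulo processor count) and all names are assumptions:

```python
def select_processor(mode, stripe_index, processors, available_resources):
    """processors: list of processor ids; available_resources: id -> free
    resource count. In write-through mode, route the IO load to the single
    processor with the most available resources; in write-back mode,
    spread the load across processors by data stripe index."""
    if mode == "write-through":
        return max(processors, key=lambda p: available_resources[p])
    return processors[stripe_index % len(processors)]
```

In write-back mode consecutive stripes thus land on different processors, while write-through mode concentrates the load where capacity is greatest.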
20120297134 | System and Method to Isolate Passive Disk Transfers to Improve Storage Performance - A storage system includes a storage controller, a storage array coupled to the storage controller, and a temporary storage device coupled to the storage controller. The storage array is operated as a redundant array of independent drives (RAID) array and includes a high priority storage volume and a low priority storage volume. The storage controller stores high priority data transfers on the high priority volume, stores low priority data transfers on the temporary storage device, and moves the low priority data transfers to the low priority volume in response to a condition of the storage system. | 2012-11-22 |
20120297135 | REDUNDANT ARRAY OF INDEPENDENT DISKS SYSTEM WITH INTER-CONTROLLER COMMUNICATION AND METHOD OF OPERATION THEREOF - A method of operation of a redundant array of independent disks system includes: instantiating a first controller having a first local map and a first remote map; instantiating a second controller having a second local map and a second remote map mapped to the first local map; mapping a first memory device to the first local map by the first controller; coupling a storage device to the second controller and the first controller; and switching control of the storage device to the first controller, when a failure of the second controller is detected, by the first controller reading the first memory device. | 2012-11-22 |
20120297136 | METHOD AND SYSTEM FOR DISTRIBUTED RAID IMPLEMENTATION - Embodiments of the systems and methods disclosed provide a distributed RAID system comprising a set of data banks. More particularly, in certain embodiments of a distributed RAID system each data bank has a set of associated storage media and executes a similar distributed RAID application. The distributed RAID applications on each of the data banks coordinate among themselves to distribute and control data flow associated with implementing a level of RAID in conjunction with data stored on the associated storage media of the data banks. | 2012-11-22 |
20120297137 | METHOD AND SYSTEM FOR DATA MIGRATION IN A DISTRIBUTED RAID IMPLEMENTATION - Embodiments of the systems and methods disclosed provide a distributed RAID system comprising a set of data banks. More particularly, in certain embodiments of a distributed RAID system each data bank has a set of associated storage media and executes a similar distributed RAID application. The distributed RAID applications on each of the data banks coordinate among themselves to distribute and control data flow associated with implementing a level of RAID in conjunction with a volume stored on the associated storage media of the data banks. Migration of this volume, or a portion thereof, from one configuration to another configuration may be accomplished such that the volume, or the portion thereof, and corresponding redundancy data may be stored according to this second configuration. | 2012-11-22 |
20120297138 | HIERARCHICAL STORAGE MANAGEMENT FOR DATABASE SYSTEMS - Embodiments for managing data in a hierarchical storage server storing data blocks of a database system comprising primary storage devices being in an active mode and secondary storage devices being in one of an active and passive mode are provided. In response to read and write requests for data blocks at logical storage locations, a block mapping device determines physical storage locations on the storage devices. Read requests switch over secondary storage devices to the active mode when they are in the passive mode. Write requests write data blocks only to the primary storage devices. Secondary storage devices that have not been accessed for a minimum activation time may be switched over from the active to the passive mode to save power consumption and cooling. Data migration and data recall policies control moving of data blocks between the primary and secondary storage devices and are primarily based on threshold values. | 2012-11-22 |
20120297139 | MEMORY MANAGEMENT UNIT, APPARATUSES INCLUDING THE SAME, AND METHOD OF OPERATING THE SAME - A method of operating a memory management unit includes accessing a translation lookaside buffer (TLB), translating a page number of a virtual address into a frame number of a physical address when there is a match for the page number of the virtual address in the TLB, and executing a miss process when there is no match for the page number of the virtual address in the TLB. The miss process includes accessing a page table translation (PTT) cache, checking whether access information of a k-th level page table corresponding to a k-th page number that will be accessed in the virtual address is in the PTT cache, acquiring a base address of a physical page using the access information, and determining the frame number of the physical address corresponding to the page number of the virtual address using a page offset in the physical page. | 2012-11-22 |
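The TLB-hit, PTT-cache-hit, and full-walk paths of 20120297139 can be modeled with simple dictionaries. This is a behavioral sketch only: the real mechanism caches intermediate k-th-level page table entries to shorten the walk, which is flattened here into a single page-to-frame map, and all names are invented:

```python
def translate(vaddr, page_size, tlb, ptt_cache, page_table):
    """tlb and ptt_cache: page number -> frame number maps; page_table is
    the backing (full-walk) map. On a TLB miss the PTT cache is consulted
    first; only if it also misses is the full page table walked."""
    page, offset = divmod(vaddr, page_size)
    if page in tlb:                      # TLB hit: translate directly
        frame = tlb[page]
    elif page in ptt_cache:              # TLB miss, walk shortened by PTT cache
        frame = ptt_cache[page]
        tlb[page] = frame                # refill the TLB
    else:                                # full page-table walk
        frame = page_table[page]
        ptt_cache[page] = frame
        tlb[page] = frame
    return frame * page_size + offset   # frame base plus page offset
```

The physical address is the frame number scaled by the page size plus the unchanged page offset.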
20120297140 | EXPANDABLE DATA CACHE - A method and system for cache management in a storage device is disclosed. A portion of unused memory in the storage device is used for temporary data cache so that two levels of cache may be used (such as a permanent data cache and a temporary data cache). The storage device may manage the temporary data cache in order to maintain clean entries in the temporary data cache. In this way, the storage area associated with the temporary data cache may be immediately reclaimed and retasked for a different purpose without the need for extraneous copy operations. | 2012-11-22 |
20120297141 | IMPLEMENTING TRANSACTIONAL MECHANISMS ON DATA SEGMENTS USING DISTRIBUTED SHARED MEMORY - Systems, Methods, and Computer Program Products are provided for implementing transactional mechanisms by a plurality of procedures on data segments by using distributed shared memory (DSM) agents in a clustered file system (CFS). A new data segment is allocated and an associated cache data segment and metadata data segments, which are allocated for the new data segment and loaded into a cache and modified during the allocating of the new data segment, are added to a list of data segments modified within an associated transaction. The DSM agents assign an exclusive permission to the new data segment. | 2012-11-22 |
20120297142 | DYNAMIC HIERARCHICAL MEMORY CACHE AWARENESS WITHIN A STORAGE SYSTEM - Described is a system and computer program product for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment. | 2012-11-22 |
20120297143 | DATA SUPPLY DEVICE, CACHE DEVICE, DATA SUPPLY METHOD, AND CACHE METHOD - A data supply device includes an output unit, a fetch unit including a storage region for storing data and configured to supply data stored in the storage region to the output unit, and a prefetch unit configured to request, from an external device, data to be transmitted to the output unit. The fetch unit is configured to store data received from the external device in a reception region, which is a portion of the storage region, and, according to a request from the prefetch unit, to assign, as a transmission region, the reception region where data corresponding to the request is stored. The output unit is configured to output data stored in the region assigned as the transmission region by the fetch unit. | 2012-11-22 |
20120297144 | DYNAMIC HIERARCHICAL MEMORY CACHE AWARENESS WITHIN A STORAGE SYSTEM - A computing device-implemented method for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system is described. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment. | 2012-11-22 |
20120297145 | SYSTEM AND METHOD TO IMPROVE I/O PERFORMANCE OF DATA ANALYTIC WORKLOADS - A method and structure for processing an application program on a computer. In a memory of the computer executing the application, an in-memory cache structure is provided for normally temporarily storing data produced in the processing. An in-memory storage outside the in-memory cache structure is provided in the memory for bypassing the in-memory cache structure when temporarily storing data under a predetermined condition. A sensor detects an amount of usage of the in-memory cache structure used to store data during the processing. When it is detected that the amount of usage exceeds a predetermined threshold, the processing is controlled so that the data produced in the processing is stored in the in-memory storage rather than in the in-memory cache structure. | 2012-11-22 |
20120297146 | FACILITATING DATA COHERENCY USING IN-MEMORY TAG BITS AND TAG TEST INSTRUCTIONS - A method is provided for fine-grained detection of data modification of original data by associating separate guard bits with granules of memory storing original data from which translated data has been obtained. The guard bits indicate whether the original data stored in the associated granule is protected for data coherency. The guard bits are set and cleared by special-purpose instructions. Responsive to attempting access to translated data obtained from the original data, the guard bit(s) associated with the original data are checked to determine whether the guard bit(s) fail to indicate coherency of the original data, and if so, discarding of the translated data is initiated to facilitate maintaining data coherency between the original data and the translated data. | 2012-11-22 |
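The per-granule guard-bit scheme of 20120297146 can be sketched as follows. The granule size, class shape, and method names are illustrative assumptions; in the patent the set/clear operations are special-purpose processor instructions, modeled here as plain methods:

```python
class GuardedMemory:
    """One guard bit per granule of original data. Translated data derived
    from a region stays valid only while every guard bit covering that
    region is still set; a cleared bit signals possible modification."""
    def __init__(self, size, granule=64):
        self.granule = granule
        self.guards = [False] * ((size + granule - 1) // granule)

    def set_guard(self, addr):           # models the 'set guard bit' instruction
        self.guards[addr // self.granule] = True

    def clear_guard(self, addr):         # cleared when the granule is modified
        self.guards[addr // self.granule] = False

    def translated_still_valid(self, addr, length):
        """Tag test: check every guard bit covering [addr, addr+length)."""
        first = addr // self.granule
        last = (addr + length - 1) // self.granule
        return all(self.guards[first:last + 1])
```

On a failed test, the caller would discard the translated data and re-derive it from the (possibly modified) original.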
20120297147 | Caching Operations for a Non-Volatile Memory Array - A method includes receiving in conjunction with data to be written at a non-volatile memory device an indication from a host that is descriptive of a write-back requirement for the data; and storing the data in a cache memory of the non-volatile memory device and selectively, depending on the indication, controlling whether the data is or is not written back from the cache memory to a non-volatile memory array that comprises a part of the non-volatile memory device. | 2012-11-22 |
20120297148 | RESOURCE SHARING IN A TELECOMMUNICATIONS ENVIRONMENT - A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system. | 2012-11-22 |
20120297149 | METHOD AND DEVICE FOR MULTITHREAD TO ACCESS MULTIPLE COPIES - A method and a device for multithread to access multiple copies. The method includes: when multiple threads of a process are distributed to different nodes, creating a thread page directory table whose content is the same as that of a process page directory table of the process, where each thread page directory table includes a special entry which points to specific data and a common entry other than the special entry, each thread corresponds to a thread page directory table, and the specific data is data with multiple copies at different nodes; and when each thread is scheduled and the special entry in the thread page directory table of the each thread does not point to the specific data stored in a node where the thread is located, modifying, based on a physical address of the specific data, the special entry to point to the specific data. | 2012-11-22 |
20120297150 | DATA STORAGE APPARATUS, CODING UNIT, SYSTEMS INCLUDING THE SAME, METHOD OF CODING AND METHOD OF READING DATA - In one embodiment, the data storage apparatus includes a control unit configured to decode at least one input command and configured to generate at least one of a read signal and a start signal in response to the input command. The start signal indicates to start an internal mode determination process. The data storage apparatus also includes a memory unit configured to output data in response to the read signal, and a coding unit configured to start and perform the internal mode determination process in response to the start signal. The internal mode determination process includes autonomously determining a coding mode, and the coding unit is configured to code the output data based on the determined coding mode to produce coded data. | 2012-11-22 |
20120297151 | MEMORY MANAGEMENT APPARATUS, MEMORY MANAGEMENT METHOD AND CONTROL PROGRAM - If it is determined in step S | 2012-11-22 |
20120297152 | HARDWARE ACCELERATION OF A WRITE-BUFFERING SOFTWARE TRANSACTIONAL MEMORY - A method and apparatus for accelerating a software transactional memory (STM) system is described herein. Annotation fields are associated with lines of a transactional memory. An annotation field associated with a line of the transactional memory is initialized to a first value upon starting a transaction. In response to encountering a read operation in the transaction, the annotation field is checked. If the annotation field includes the first value, the read is serviced from the line of the transactional memory without having to search an additional write space. A second and third value in the annotation field potentially indicates whether a read operation missed the transactional memory or a tentative value is stored in a write space. Additionally, an additional bit in the annotation field may be utilized to indicate whether previous read operations have been logged, allowing subsequent redundant read logging to be reduced. | 2012-11-22 |
20120297153 | BIT INVERSION IN MEMORY DEVICES - Bit inversions occurring in memory systems and apparatus are provided. Data is acquired from a source destined for a target. As the data is acquired from the source, the set bits associated with data are tabulated. If the total number of set bits exceeds more than half of the total bits, then an inversion flag is set. When the data is transferred to the target, the bits are inverted during the transfer if the inversion flag is set. | 2012-11-22 |
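The majority-bit inversion of 20120297153 is compact enough to show directly. This is a behavioral sketch of the idea, not the patented circuit; the word width and names are assumptions:

```python
def maybe_invert(data, width=8):
    """Tabulate set bits as data is acquired from the source; if more than
    half of all bits are set, flag inversion and invert each word during
    the transfer to the target (the flag lets the target restore them)."""
    total_bits = len(data) * width
    set_bits = sum(bin(word).count("1") for word in data)
    invert = set_bits > total_bits // 2
    mask = (1 << width) - 1
    out = [word ^ mask for word in data] if invert else list(data)
    return out, invert
```

Inverting a mostly-ones transfer leaves mostly zeros on the bus, which is the usual motivation for such schemes (fewer driven bits, lower transfer energy).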
20120297154 | STORAGE SYSTEM - There are provided: a distribution storage processing means for distributing and storing a plurality of fragment data including division data obtained by dividing storage target data into a plurality of pieces and redundant data into a plurality of storing means; an operation status detecting means for detecting operation statuses of the respective storing means; and a data regenerating means for, in accordance with a result of the detection by the operation status detecting means, when any of the storing means goes down, regenerating the fragment data having been stored in the down storing means based on the other fragment data stored in the other storing means different from the down storing means. Moreover, the data regenerating means has a function of transferring and storing the fragment data stored in the storing means previously scheduled to go down into the other storing means before the storing means goes down. | 2012-11-22 |
20120297155 | STORAGE SYSTEM AND METHOD OF EXECUTING COMMANDS BY CONTROLLER - A storage subsystem capable of processing time-critical control commands while suppressing deterioration of the system performance to a minimum. When various commands are received in a multiplex manner via the same port from plural host devices, the channel adapter of the storage subsystem extracts commands of a first kind from the received commands. Then, the adapter executes the extracted commands of the first kind with high priority within a given unit time until a given number of guaranteed activations is reached. At the same time, commands of a second kind are enqueued in a queue of commands. After the commands of the first kind are executed as many as the number of guaranteed activations, the commands of the second kind are executed in the unit time. | 2012-11-22 |
20120297156 | STORAGE SYSTEM AND CONTROLLING METHOD OF THE SAME - A storage system comprising a first storage apparatus, a second storage apparatus, each storing data processed by an external apparatus, each of the first and second apparatuses including a pool of a plurality of unit physical storage areas for storing the data, the unit physical storage areas being classified into a plurality of storage tiers, a logical storage area in the first storage apparatus and the logical storage area in the second storage apparatus respectively including one or more of the storage tiers that are assigned to the respective logical storage areas, the storage system holding storage tier construction information of the first storage apparatus, and a data migration controller, when the data stored in the first storage apparatus are migrated to the second storage apparatus, transferring the storage tier construction information of the first storage apparatus to the second storage apparatus. | 2012-11-22 |
20120297157 | INFORMATION SYSTEM AND DATA TRANSFER METHOD OF INFORMATION SYSTEM - Availability of an information system including a storage apparatus and a host computer is improved. A host system includes a first storage apparatus provided with a first volume for storing data, and a second storage apparatus for storing the data sent from the first storage apparatus. In case of a failure occurring in the first storage apparatus, the host sends the data to be sent to the first storage apparatus to the second storage apparatus. | 2012-11-22 |
20120297158 | MASS STORAGE DEVICE CAPABLE OF ACCESSING A NETWORK STORAGE - A mass storage device capable of accessing a network storage in response to an access request of an electronic device electrically connected to the mass storage device, the mass storage device includes a first memory unit comprising a file management table for storing a first mapping relationship between a logical address and a network address of the network storage, and a controller for receiving an access request corresponding to the logical address from the electronic device and accessing a file in the network storage according to the network address through a network interface. | 2012-11-22 |
20120297159 | VIRTUALIZATION CONTROLLER AND DATA TRANSFER CONTROL METHOD - System for controlling data transfer between a host system and storage devices. A virtualization controller implements the data transfer and includes first ports for connection with the storage devices, a second port for connection with the host system, a processor, and a memory configured to store volume mapping information which correlates first identification information used by the host system to access a first storage area in one of the storage devices, with second identification information for identifying the first storage area, the correlation being used by the processor to access the first storage area. When data stored in the first storage area is transferred to a second storage area, the processor correlates the first identification information with a third identification information for identifying the second storage area and registers the first identification information and the third identification information in the volume mapping information. | 2012-11-22 |
20120297160 | Surface Caching - Techniques for surface caching are described in which a cache for surfaces is provided to enable existing surfaces to be reused. Surfaces in the cache can be assigned to one of multiple surface lists used to service requests for surfaces. The multiple lists can include at least a main list and an auxiliary list configured to group existing surfaces according to corresponding surface constraints. When a surface is requested, the multiple lists can be searched to find an existing surface based on constraints including, for example, the type of surface and size requirements for the requested surface. If an existing surface is discovered, the existing surface can be returned to service the request. If a suitable surface is not found in the multiple lists, a new surface is created for the request and the new surface can be added to a corresponding one of the multiple surface lists. | 2012-11-22 |
20120297161 | Providing Metadata In A Translation Lookaside Buffer (TLB) - In one embodiment, the present invention includes a translation lookaside buffer (TLB) to store entries each having a translation portion to store a virtual address (VA)-to-physical address (PA) translation and a second portion to store bits for a memory page associated with the VA-to-PA translation, where the bits indicate attributes of information in the memory page. Other embodiments are described and claimed. | 2012-11-22 |
20120297162 | Method for Detecting Address Match in a Deeply Pipelined Processor Design - A method, apparatus and algorithm for quickly detecting an address match in a deeply pipelined processor design in a manner that may be implemented using a minimum of physical space in the critical area of the processor. The address comparison is split into two parts. The first part is a fast, partial address match comparator system. The second part is a slower, full address match comparator system. If a partial match between a requested address and a registry address is detected, then execution of the program or set of instructions requesting the address is temporarily suspended while a full address match check is performed. If the full address match check results in a full match between the requested address and a registry address, then the program or set of instructions is interrupted and stopped. Otherwise, the program or set of instructions continues execution. | 2012-11-22 |
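The split comparison of 20120297162 can be illustrated with a small Python function. The partial-match width, the string return values, and all names are invented; the point is only the two-stage structure (cheap partial compare always, expensive full compare only on a partial hit):

```python
def address_match(requested, registry, partial_bits=8):
    """Stage 1: fast partial compare on the low-order partial_bits.
    Stage 2 (run only after a partial hit, while execution is suspended):
    full-width compare against the registry addresses."""
    mask = (1 << partial_bits) - 1
    partial_hits = [a for a in registry if (a & mask) == (requested & mask)]
    if not partial_hits:
        return "run"                # no partial match: continue at full speed
    if requested in partial_hits:
        return "stall-then-stop"    # full match confirmed: interrupt program
    return "stall-then-run"         # false partial hit: resume execution
```

The fast comparator can sit in the timing-critical area of the pipeline precisely because it only examines a few bits; the slow full comparator runs off the critical path.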
20120297163 | AUTOMATIC KERNEL MIGRATION FOR HETEROGENEOUS CORES - A system and method for automatically migrating the execution of work units between multiple heterogeneous cores. A computing system includes a first processor core with a single instruction multiple data micro-architecture and a second processor core with a general-purpose micro-architecture. A compiler predicts that, at a given location in a program, execution of a function call migrates to a different processor core. The compiler creates a data structure to support moving live values associated with the execution of the function call at the given location. An operating system (OS) scheduler schedules at least code before the given location in program order to the first processor core. In response to receiving an indication that a condition for migration is satisfied, the OS scheduler moves the live values to a location indicated by the data structure for access by the second processor core and schedules code after the given location to the second processor core. | 2012-11-22 |
20120297164 | VIRTUALIZATION IN A MULTI-CORE PROCESSOR (MCP) - This invention describes an apparatus, computer architecture, method, operating system, compiler, and application program products for MPEs as well as virtualization in a symmetric MCP. The disclosure is applied to a generic microprocessor architecture with a set (e.g., one or more) of controlling elements (e.g., MPEs) and a set of groups of sub-processing elements (e.g., SPEs). Under this arrangement, MPEs and SPEs are organized in a way that a smaller number of MPEs controls the behavior of a group of SPEs. The apparatus enables virtualized control threads within MPEs to be assigned to different groups of SPEs for controlling the same. The apparatus further includes a MCP coupled to a power supply coupled with cores to provide a supply voltage to each core (or core group) and controlling-digital elements and multiple instances of sub-processing elements. | 2012-11-22 |
20120297165 | Electronic Device and Method for Data Processing Using Virtual Register Mode - The invention relates to an electronic device for data processing, which includes an execution unit with a temporary register, a register file, a first feedback path from the data output of the execution unit to the register file, a second feedback path from the data output of the execution unit to the temporary register, a switch configured to connect the first feedback path and/or the second feedback path, and a logic stage coupled to control the switch. The logic stage is configured to control the switch to connect the second feedback path if the data output of an execution unit is used as an operand in the subsequent operation of an execution unit. | 2012-11-22 |
20120297166 | STACK PROCESSOR USING A FERROELECTRIC RANDOM ACCESS MEMORY (F-RAM) HAVING AN INSTRUCTION SET OPTIMIZED TO MINIMIZE MEMORY FETCH OPERATIONS - A stack processor using a non-volatile, ferroelectric random access memory (F-RAM) for both code and data space. The stack processor is operative in response to as many as 64 possible instructions based upon a 16 bit word. Each of the instructions in the 16 bit word comprises 3 five bit instructions and a 16 | 2012-11-22 |
20120297167 | EFFICIENT CALL RETURN STACK TECHNIQUE - A processor, method, and medium for implementing a call return stack within a pipelined processor. A stack head register is used to store a copy of the top entry of the call return stack, and the stack head register is accessed by the instruction fetch unit on each fetch cycle. If a fetched instruction is decoded as a return instruction, the speculatively read address from the stack head register is utilized as a target address to fetch subsequent instructions and the address at the second entry from the top of the call return stack is written to the stack head register. | 2012-11-22 |
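The stack-head-register scheme of 20120297167 can be sketched as a small class. This is a behavioral model only — the class shape, list-backed stack, and names are assumptions made for illustration:

```python
class CallReturnStack:
    """The head register mirrors the top stack entry so the fetch unit can
    read the predicted return target every cycle without indexing the
    stack storage itself."""
    def __init__(self):
        self.stack = []          # the call return stack proper
        self.head = None         # stack head register (copy of top entry)

    def call(self, return_addr):
        """On a call, push the return address and refresh the head register."""
        self.stack.append(return_addr)
        self.head = return_addr

    def ret(self):
        """On a return, the speculatively read head supplies the fetch
        target; the second-from-top entry then refills the head register."""
        target = self.head
        self.stack.pop()
        self.head = self.stack[-1] if self.stack else None
        return target
```

Keeping the head in a register means a return instruction never waits on a stack-memory read to redirect fetch.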
20120297168 | PROCESSING INSTRUCTION GROUPING INFORMATION - Processing instruction grouping information is provided that includes: reading addresses of machine instructions grouped by a processor at runtime from a buffer to form an address file; analyzing the address file to obtain grouping information of the machine instructions; converting the machine instructions in the address file into readable instructions; and obtaining grouping information of the readable instructions based on the grouping information of the machine instructions and the readable instructions resulting from the conversion. The status of grouping and processing performed on instructions by a processor at runtime can thus be acquired dynamically, so that the processing capability of the processor can be better utilized. | 2012-11-22 |
20120297169 | DATA PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A data processing apparatus which sequentially executes a verification process so as to recognize a target object, comprising: an obtaining unit configured to obtain dictionary data to be referred to in the verification process; a holding unit configured to hold a plurality of dictionary data; a verification unit configured to execute the verification process for the input data by referring to one dictionary data; a history holding unit configured to hold a verification result; and a prefetch determination unit configured to determine based on the verification result whether to execute prefetch processing in which the obtaining unit obtains in advance dictionary data to be referred to by the verification unit in a succeeding verification process, and holds the dictionary data in the holding unit before the succeeding verification process. | 2012-11-22 |
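The prefetch-determination idea in the abstract above can be sketched with a history-based predictor. This is a hypothetical illustration, not the patented unit: the class name, the frequency threshold, and the "most common successor" heuristic are all our own assumptions about one way such a decision could be made.

```python
from collections import defaultdict, Counter

class PrefetchDecider:
    """Toy history-holding unit: records which dictionary was needed after
    which, and decides whether to prefetch a likely next dictionary."""

    def __init__(self, threshold=0.5):
        # dict_id -> Counter of dictionaries that followed it in past runs
        self.followers = defaultdict(Counter)
        self.threshold = threshold   # minimum confidence to trigger a prefetch

    def record(self, current_dict, next_dict):
        """Store one verification-result transition in the history."""
        self.followers[current_dict][next_dict] += 1

    def prefetch_candidate(self, current_dict):
        """Return the dictionary to prefetch, or None if history is too weak."""
        counts = self.followers[current_dict]
        total = sum(counts.values())
        if total == 0:
            return None
        best, n = counts.most_common(1)[0]
        return best if n / total >= self.threshold else None
```

Prefetching only when confidence clears a threshold is one way to avoid wasting memory bandwidth on dictionaries that are unlikely to be used.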
20120297170 | DECENTRALIZED ALLOCATION OF RESOURCES AND INTERCONNECT STRUCTURES TO SUPPORT THE EXECUTION OF INSTRUCTION SEQUENCES BY A PLURALITY OF ENGINES - A method for decentralized resource allocation in an integrated circuit. The method includes receiving a plurality of requests from a plurality of resource consumers of a plurality of partitionable engines to access a plurality of resources, wherein the resources are spread across the plurality of engines and are accessed via a global interconnect structure. At each resource, the number of requests for access to that resource is tallied. At each resource, the number of requests is compared against a threshold limiter. At each resource, a subsequent request that exceeds the threshold limiter is canceled. Requests that are not canceled within the current clock cycle are then implemented. | 2012-11-22 |
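The per-resource threshold limiting described above can be sketched in a few lines. This is an illustrative model, not the patented circuit: the function name and the representation of requests as `(consumer, resource)` pairs in arrival order are our own assumptions.

```python
def arbitrate(requests, threshold):
    """Sketch of per-resource threshold limiting for one clock cycle:
    once a resource has accepted `threshold` requests, any later request
    for that resource in the same cycle is canceled."""
    accepted, canceled = [], []
    counts = {}   # resource -> number of requests seen so far this cycle
    for consumer, resource in requests:
        counts[resource] = counts.get(resource, 0) + 1
        if counts[resource] > threshold:
            canceled.append((consumer, resource))   # exceeds the limiter
        else:
            accepted.append((consumer, resource))   # implemented this cycle
    return accepted, canceled
```

Because each resource counts and compares locally, no central arbiter is needed, which is the "decentralized" aspect the abstract emphasizes.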
20120297171 | METHODS FOR GENERATING CODE FOR AN ARCHITECTURE ENCODING AN EXTENDED REGISTER SPECIFICATION - There are provided methods and computer program products for generating code for an architecture encoding an extended register specification. A method for generating code for a fixed-width instruction set includes identifying a non-contiguous register specifier. The method further includes generating a fixed-width instruction word that includes the non-contiguous register specifier. | 2012-11-22 |
20120297172 | Multi-Threaded Processes for Opening and Saving Documents - Tools and techniques are described for multi-threaded processing for opening and saving documents. These tools may provide load processes for reading documents from storage devices, and for loading the documents into applications. These tools may spawn a load process thread for executing a given load process on a first processing unit, and an application thread may execute a given application on a second processing unit. A first pipeline may be created for executing the load process thread, with the first pipeline performing tasks associated with loading the document into the application. A second pipeline may be created for executing the application process thread, with the second pipeline performing tasks associated with operating on the documents. The tasks in the first pipeline are configured to pass tokens as input to the tasks in the second pipeline. | 2012-11-22 |
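The two-pipeline, token-passing arrangement in the abstract above can be sketched with standard threads and a queue. This is our own minimal model, not Microsoft's implementation: the token format `(kind, payload)` and the sentinel convention are invented for illustration.

```python
import queue
import threading

def load_pipeline(doc_parts, tokens):
    """First pipeline (load process thread): reads document pieces from
    'storage' and passes tokens downstream to the application pipeline."""
    for part in doc_parts:
        tokens.put(("content", part))
    tokens.put(("eof", None))          # sentinel: document fully loaded

def app_pipeline(tokens, out):
    """Second pipeline (application thread): consumes tokens and operates
    on the document content as it arrives."""
    while True:
        kind, payload = tokens.get()
        if kind == "eof":
            break
        out.append(payload.upper())    # stand-in for real document processing

tokens = queue.Queue()
result = []
t1 = threading.Thread(target=load_pipeline, args=(["header", "body"], tokens))
t2 = threading.Thread(target=app_pipeline, args=(tokens, result))
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue lets the application begin operating on early parts of the document while later parts are still being loaded, which is the overlap the two-pipeline design is after.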
20120297173 | DEBUGGER SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DEBUGGING INSTRUCTIONS - Debugger system, method and computer program product for debugging instructions. The method for debugging instructions may include: receiving, by a debugger module, a group of instructions that is stored in a non-volatile memory module and is scheduled to be executed by a processor of a device; determining whether the group of instructions includes a conditional branch instruction; defining, by the debugger module, a hardware breakpoint address as the address of the conditional branch instruction if the group of instructions includes the conditional branch instruction; defining, by the debugger module, the hardware breakpoint address as the address of the last instruction of the group of instructions to be executed if the group of instructions does not comprise the conditional branch instruction; instructing a hardware breakpoint detector of the device to detect the hardware breakpoint address; instructing the processor to execute instructions of the group of instructions in a continuous mode until the hardware breakpoint detector detects the hardware breakpoint address; instructing the processor to execute at least one instruction of the group of instructions in a single step mode after the hardware breakpoint detector detects the hardware breakpoint address; and receiving, from the device, debug information that is indicative of an execution of instructions by the processor. | 2012-11-22 |
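The breakpoint-selection rule in this abstract reduces to a small decision. The sketch below is our illustration, not the patented debugger module: the mnemonic names and the `(address, mnemonic)` representation of an instruction group are invented.

```python
def choose_breakpoint(group):
    """Sketch of the rule from the abstract: break at the first conditional
    branch in the group if one exists, else at the group's last instruction.
    `group` is a list of (address, mnemonic) pairs; mnemonics are invented."""
    conditional_branches = {"beq", "bne", "blt", "bge"}
    for addr, mnemonic in group:
        if mnemonic in conditional_branches:
            return addr              # break at the conditional branch
    return group[-1][0]              # no branch: break at the last instruction
```

Running the group in continuous mode up to this address, then single-stepping, keeps the expensive single-step mode confined to the region where control flow can actually diverge.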
20120297174 | Modifying Operating Parameters Based on Device Use - Monitoring aging information for multiple devices. Aging information of the devices may be received. Statistics regarding the multiple devices may be determined based on the aging information. For at least some of the devices, update information may be determined based on the respective aging information. The update information may include modifications to operating parameters of the devices. For example, the devices may operate according to initial parameters that are above sustainable parameters and the update information may lower the operating parameters based on the aging information. | 2012-11-22 |
20120297175 | Secure Boot With Trusted Computing Group Platform Registers - Disclosed is a method that includes providing at least two platform configuration registers, where a first platform configuration register is a measurement platform configuration register and where a second platform configuration register is a resettable binding configuration platform configuration register; executing an authorization chain under direction of a trusted engine to perform an authorization, where a value of the measurement platform configuration register is included as a precondition; extending the binding platform configuration register with a value enforced by the authorization; and monitoring, such as with a trusted operating system, a validation result of the binding platform configuration register. Apparatus and computer program instructions embodied in a computer-readable medium that implement the method are also disclosed. | 2012-11-22 |
20120297176 | METHOD AND APPARATUS FOR PROCESS ENFORCED CONFIGURATION MANAGEMENT - A system for and method of automatically enforcing a configuration change process for change requests of one or more configurable elements (CEs) within one or more configurable computation systems. The system comprises means for managing a configuration change process for one or more configurable elements within a corresponding configurable computation system, means for generating a configuration change request, means for applying a set of authorization rules to the configuration change requests to generate selective authorization of the CEs, and means for selectively locking and unlocking changes to configurable elements within the configurable computational systems. | 2012-11-22 |
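The rule-application step in this abstract can be sketched as predicates over a change request. This is a hypothetical illustration, not the patented means: the request fields, the example rules, and the `authorize` function are all invented.

```python
def authorize(change_request, rules):
    """Apply a set of authorization rules to a configuration change request;
    each rule is a predicate that returns True to permit the change."""
    return all(rule(change_request) for rule in rules)

# Example rules (invented for illustration):
# 1) only a maintainer of the element may request a change;
# 2) a locked element may only be changed with an explicit override.
rules = [
    lambda req: req["requester"] in req["element"]["maintainers"],
    lambda req: not req["element"]["locked"] or req["override"],
]
```

Keeping rules as independent predicates mirrors the abstract's separation between generating a request, authorizing it, and locking or unlocking the element itself.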
20120297177 | Hardware Assisted Operating System Switch - An interoperable firmware memory containing a Basic Input Output System (BIOS) and a trusted platform module (TPSM). The BIOS includes CPU System Management Mode (SMM) firmware configured as read-only at boot. The SMM firmware is configured to control switching subsequent to boot between at least: a first memory and a second isolated memory; and a first and a second isolated non-volatile storage device. The first memory includes a first operating system and the second memory includes a second operating system. The first non-volatile storage device is configured to be used by the first operating system and the second non-volatile storage device is configured to be used by the second operating system. The trusted platform module (TPSM) is configured to check the integrity of the CPU System Management Mode (SMM) firmware during the boot process. | 2012-11-22 |
20120297178 | CONFIGURATION MODE SWITCHING SYSTEM AND METHOD - A computer and method automatically switch from a manufacture mode to a user mode in a basic input and output system (BIOS) chip of the motherboard. The computer invokes an interrupt program to switch from a manufacture mode number to a user mode number in a BIOS setting file. The computer initializes the diagnostic mode according to the parameters of the diagnostic mode and the stress mode according to the parameters of the stress mode. The computer configures the BIOS setting file to a user mode according to the user mode number, and saves the configuration into the BIOS chip when the BIOS chip starts. | 2012-11-22 |
20120297179 | METHODS, DEVICES, AND SYSTEMS FOR ESTABLISHING, SETTING-UP, AND MAINTAINING A VIRTUAL COMPUTER INFRASTRUCTURE - A system and method of operating an electronic device may include loading an operating system, from a boot key, on the electronic device during turn-on of the electronic device. The operating system may be operated on the electronic device. The boot key may cause the electronic device to automatically communicate with a web-service located on a communications network to enable executable instructions from the web-service to be communicated to the electronic device for execution thereon. | 2012-11-22 |
20120297180 | METHOD OF SWITCHING BETWEEN MULTIPLE OPERATING SYSTEMS OF COMPUTER SYSTEM - A method of switching between multiple operating systems of a computer system includes the following steps. Firstly, the computer system is in an environment of a first operating system. Then, a system management interrupt is triggered to allow the computer system to enter a system management mode, and a controlling authority of the computer system is transferred from the first operating system to a basic input output system. Then, a backup of a first environmental parameter of the first operating system is created. If a second environmental parameter of a second operating system is not included in the computer system, the second operating system is loaded in a normal mode. On the other hand, if the second environmental parameter is included in the computer system, the second operating system is booted according to the second environmental parameter. | 2012-11-22 |
20120297181 | Persisting a Provisioned Machine on a Client Across Client Machine Reboot - Systems and methods are provided for implementing a provisioned machine that persists across a client machine reboot. For example, a bootstrap function executing on a client machine may identify a delta disk stored on a physical disk of the client machine prior to booting up the operating system of the client machine. The bootstrap function may establish the path to the delta disk during the boot up of the operating system of the client machine. A provisioned machine may then be established based on the delta disk and the remote base disk to form a virtual disk of the operating system. Subsequently, the client machine may shut down, reboot and re-establish the provisioned machine based on the delta disk stored locally on the client machine. | 2012-11-22 |
20120297182 | CIPHER AND ANNOTATION TECHNOLOGIES FOR DIGITAL CONTENT DEVICES - Systems, methods, and/or devices are provided that include a variety of cipher tools and techniques that may be utilized with digital content on digital devices. Systems, methods, and/or devices are provided that include a variety of annotation tools and techniques that may be utilized with digital content on digital devices. | 2012-11-22 |
20120297183 | TECHNIQUES FOR NON REPUDIATION OF STORAGE IN CLOUD OR SHARED STORAGE ENVIRONMENTS - Techniques for non-repudiation of storage in cloud or shared storage environments are provided. A unique signature is generated within a cloud or shared storage environment for each file of the storage tenant that accesses the cloud or shared storage environment. Each signature is stored as part of the file system and every time a file is accessed that signature is verified. When a file is updated, the signature is updated as well to reflect the file update. | 2012-11-22 |
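The per-file signature lifecycle in the abstract above (sign at creation, verify on every access, refresh on update) can be sketched with a keyed hash. This is our own illustration, not the patented scheme: the use of HMAC-SHA256 and a per-tenant key are assumptions about one plausible realization.

```python
import hashlib
import hmac

def sign(tenant_key: bytes, contents: bytes) -> str:
    """Generate a unique signature for a tenant's file contents."""
    return hmac.new(tenant_key, contents, hashlib.sha256).hexdigest()

def verify(tenant_key: bytes, contents: bytes, stored_sig: str) -> bool:
    """Verify the stored signature every time the file is accessed."""
    return hmac.compare_digest(sign(tenant_key, contents), stored_sig)

def update(tenant_key: bytes, new_contents: bytes) -> str:
    """On a file update, recompute the signature to reflect the new contents."""
    return sign(tenant_key, new_contents)
```

Because the signature is keyed to the tenant, a matching signature ties the stored contents to that tenant, which is what supports the non-repudiation claim; `hmac.compare_digest` is used to avoid timing side channels during verification.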
20120297184 | CLOUD COMPUTING METHOD AND SYSTEM - Methods and systems integrating sensitive or private data with cloud computing resources while mitigating security, privacy and confidentiality risks associated with cloud computing. In one embodiment, a computer network system includes a firewall separating a public portion of the computer network from an on-premises portion of the computer network, a database storing private data behind the firewall, and a user device connected with the computer network. The user device accesses an application hosted in the public portion of the computer network. In response, the application generates return information. The user device receives the return information and generates a request for private data based on at least a portion of the returned information. The request is transmitted to the database which generates a response including the requested private data. The response is transmitted in an encrypted form from the database via the computer network to the user device. | 2012-11-22 |