4th week of 2013 patent application highlights part 53 |
Patent application number | Title | Published |
20130024578 | Method and system for distributed initiation of USB over network data plane connections - Connecting USB devices with USB hosts over a network supporting distributed initiations of USB connections over the network, including the following steps: Connecting non-collocated USB hosts with respective non-collocated USB host adaptors (USBHs), according to USB specification timings. Connecting non-collocated USB devices with respective non-collocated USB device adaptors (USBDs). Enabling the USBDs and the USBHs to communicate over the network and to discover the presence and capabilities of one another. Initiating, by the USBDs or the USBHs, via the network control plane, USB-over-network-data-plane connections between the USB devices and the USB hosts. And operating at least two of the USB-over-network-data-plane connections essentially simultaneously and without any common network node. | 2013-01-24 |
20130024579 | Controller Placement for Split Architecture Networks - A network topology design system to determine placement of a set of controllers within a network with a split architecture, the placement of the set of controllers selected to minimize disruption of the split architecture network caused by a link failure, a switch failure or a connectivity loss between the set of controllers and the data plane components. The system performs a method including graphing a topology of the split architecture network, determining a set of clusters of nodes within the graph by applying an agglomerative clustering process or a partitive clustering process, determining a centroid for each cluster in the set of clusters, assigning one of the set of controllers to each network element corresponding to a determined centroid in the graph, and assigning each controller to control a set of network elements corresponding to a cluster in the graph. | 2013-01-24 |
20130024580 | Transient Unpruning for Faster Layer-Two Convergence - In one embodiment, a method includes detecting a change in network topology and broadcasting a transient unconditional unpruning message to all nodes in the network. The message is configured to instruct each network element receiving the message to start a phase timer in response to the broadcast message; unprune its operational ports; and, upon expiration of the phase timer, prune its ports in accordance with the results of a pruning protocol. | 2013-01-24 |
20130024581 | BANDWIDTH MANAGEMENT IN A CLIENT/SERVER ENVIRONMENT - A method of managing bandwidth usage among a plurality of client devices is provided. A request is received at a first device from a second device. The request is to transfer a file between the first device and the second device and includes an identifier of the second device. A client group associated with the second device is determined based on the identifier and used to select a bandwidth usage policy. A data transfer rate for transferring the file between the first device and the second device is determined based on the selected bandwidth usage policy and a bandwidth usage at the first device associated with a plurality of devices. A number of bytes to transfer is determined based on the determined data transfer rate and a time period. A response, which includes the determined number of bytes and the time period, is sent to the second device. | 2013-01-24 |
20130024582 | SYSTEMS AND METHODS FOR DYNAMICALLY SWITCHING BETWEEN UNICAST AND MULTICAST DELIVERY OF MEDIA CONTENT IN A WIRELESS NETWORK - Systems and methods for dynamically switching between unicast and multicast delivery of media content are disclosed. An exemplary method includes a user device 1) accessing, over a wireless network, a unicast stream carrying data representative of a media content program, 2) detecting, during the accessing of the unicast stream, an instruction to switch to a multicast stream carrying data representative of the media content program, and 3) switching, in response to the instruction, from the accessing of the unicast stream to accessing the multicast stream by way of the wireless network. Corresponding systems and methods are also disclosed. | 2013-01-24 |
20130024583 | SYSTEM AND METHOD FOR MANAGING BUFFERING IN PEER-TO-PEER (P2P) BASED STREAMING SERVICE AND SYSTEM FOR DISTRIBUTING APPLICATION FOR PROCESSING BUFFERING IN CLIENT - A system to manage a buffering of a data stream for a peer client in a peer-to-peer based streaming service includes a buffering control unit including a processor configured to control pieces of the data stream to be buffered in a first buffer of the peer client, and to control one or more outputted pieces to be buffered in a second buffer of the peer client, the outputted pieces being outputted from the first buffer for playback of the data stream. A method for managing a buffering includes storing pieces of the data stream in a first buffer; storing one or more outputted pieces of the data stream in a second buffer; and transmitting one or more pieces stored in the first buffer or the second buffer. | 2013-01-24 |
20130024584 | EXTERNAL DESKTOP AGENT FOR SECURE NETWORKS - Methods and apparatus are provided for externally managing control target devices such as computer systems, cameras, recorders, etc., in an effective and secure manner. In particular examples, an external desktop agent is connected to a computer system. Remote desktop agent software need not be installed on the computer system. The external desktop agent receives commands such as keyboard and mouse commands from a control computer over a mechanism such as a bi-directional network. To provide security, the external desktop agent does not directly connect to the computer system over an interface such as universal serial bus (USB) but instead provides a PS/2 interface that connects to the computer system through a standard PS/2 to USB adapter. PS/2 does not allow bi-directional command signaling and does not provide file level access to potentially sensitive computer system data. | 2013-01-24 |
20130024585 | Circuits and Methods for Providing Communication Between a Memory Card and a Host Device - An interface circuit provides communication between a memory card and a host device. The interface circuit includes first and second sets of pins and a control unit. The control unit enables the first set of pins and disables the second set of pins when transferring a first set of signals in a first mode via the first set of pins, and disables the first set of pins and enables the second set of pins when transferring a second set of signals in a second mode via the second set of pins. The control unit transfers a clock signal of the second set of signals by differential signaling in the second mode via a clock pin of the second set of pins. A signal transfer in the second mode is at a greater speed than a signal transfer in the first mode. | 2013-01-24 |
20130024586 | VERIFICATION OF HARDWARE CONFIGURATION - A method for verifying an input/output (I/O) hardware configuration is provided. Data from an input/output data set (IOCDS) is extracted for building a verification command. The IOCDS contains hardware requirements that define at least software devices associated with a logical control unit (LCU). The verification command is processed. The verification command includes a software device address range associated with a logical control unit (LCU) of the I/O hardware. The LCU utilizes a first logical path. The software device address range utilizing the first logical path is compared with an existing software device address range utilizing at least one additional logical path. The verification command is accepted if the software device address range and the existing software device address range match. | 2013-01-24 |
20130024587 | EXPANDED PROTOCOL ADAPTER FOR IN-VEHICLE NETWORKS - A protocol adapter for in-vehicle networks that provides diagnostics, analysis and monitoring. The protocol adapter has a pass-through feature (voltage translator)/smart mode that allows the protocol adapter to emulate older boxes. Visual indicators (LEDs) indicate the pass through feature is in operation. LEDs also indicate activity on the RS232 bus between the adapter and a PC. Single color and multiple color emitting LEDs indicate a program is being executed and identify the program that is being executed. The protocol adapter supports RP1202 and RP1210, J1708 and J1939 and J1939 Transport Layer. The protocol adapter has a Real Time Clock, Standard COMM port connection, 7-32 Volt Supply and is CE compliant. The adapter can be used wirelessly. | 2013-01-24 |
20130024588 | MULTICORE PROCESSOR SYSTEM, COMPUTER PRODUCT, AND CONTROL METHOD - A multicore processor system includes a core configured to detect a change in a state of assignment of a multicore processor; obtain, upon detecting the change in the state of assignment, number of accesses of a common resource shared by the multicore processor by each of process that are assigned to cores of the multicore processor; calculate an access ratio based on the obtained number of accesses; and notify an arbitration circuit of the calculated access ratio, the arbitration circuit arbitrating accesses of the common resource by the multicore processor. | 2013-01-24 |
20130024589 | MULTI-CORE PROCESSOR SYSTEM, COMPUTER PRODUCT, AND CONTROL METHOD - A multi-core processor system includes a given core configured to queue an interrupt process of a software interrupt request to the given core, and execute queued processes in the order of queuing at the given core; execute preferentially an interrupt process of a hardware interrupt request to the given core over a process under execution at the given core; determine whether the software interrupt request is a specific software interrupt request; and perform control to preferentially execute the interrupt process without queuing, upon determining that the software interrupt request is the specific software interrupt request. | 2013-01-24 |
20130024590 | ELECTRONIC DEVICE AND INPUT METHOD - An electronic device and an input method are provided. The electronic device comprises a first system and a second system; the first system comprises a first hardware system on which a first Operating System (OS) runs, and the first hardware system comprises a first interface and a second interface; the second system comprises a second hardware system, an input device and a display device, a second OS runs on the second hardware system, and the second hardware system comprises a third interface and a fourth interface; the first interface and the third interface support a first data transmission protocol, and the second interface and the fourth interface support a second data transmission protocol; the electronic device has a first connection state and a second connection state. The first connection state is the state in which the first system is connected to the third interface of the second system through the first interface, the display device is used for displaying the running status of the first OS, the input device is used for generating a first operating instruction, and the first OS is used for responding to the first operating instruction. The second connection state is the state in which the first system is connected to the fourth interface of the second system through the second interface, the input device is used for generating a second operating instruction, which is processed by the second OS, transmitted to the first OS, and then responded to by the first OS. Because the two parts of the electronic device can be combined arbitrarily and the respective specialized functions of the two systems are fully used after being combined, users can make full use of all parts of a detachable computer by applying this technical solution. | 2013-01-24 |
20130024591 | LANE JUMPER - A lane jumper for transmitting at least one lane from a first interface to a second interface is disclosed. The at least one lane is connected with the first interface. The first interface defines a first pin group and a second pin group, and the second interface defines a third pin group connected with the second pin group. The lane jumper includes a fourth pin group and a fifth pin group, wherein the fourth pin group and the fifth pin group of the lane jumper are configured for being respectively connected with the first pin group and the second pin group. The at least one lane is transmitted from the first interface to the second interface sequentially through the first pin group, the fourth pin group, the fifth pin group, the second pin group, and the third pin group. | 2013-01-24 |
20130024592 | DOCKING STATION FOR COMMUNICATION TERMINAL - A docking station for a communication terminal having an antenna for a radio communication is provided. The docking station includes a docking unit, a fastening unit, and a pattern unit. The docking unit is configured to be joined to the communication terminal and to provide an interface with the communication terminal. The fastening unit is coupled with the docking unit, configured to fixedly hold the communication terminal, and electrically coupled to the antenna through an electromagnetic field created in the antenna when the antenna operates. The pattern unit is disposed on the docking unit so as to be extended from the fastening unit, and configured to perform the radio communication together with the antenna by being electrically coupled to the antenna through the fastening unit when the antenna operates. | 2013-01-24 |
20130024593 | SOURCE PACKET BRIDGE - A communication function between ports on a node that does not require a common time base to be distributed across the network is disclosed. A data stream received over a first port is placed on an interface between nodes using the time base of the first port; a second port samples the data stream on the interface and timestamps it using the time base of the second port. The data stream is timestamped by the second port and packetized before transmitted to the second node to another bridge or device. Alternatively, the first port extracts a time stamp from the data stream and calculates an offset using a cycle timer value from the bus connected to the first port. The offset is added to the cycle timer value on the bus connected to the second port and used to timestamp the data stream. | 2013-01-24 |
20130024594 | SEMICONDUCTOR STORAGE DEVICE-BASED DATA RESTORATION - Embodiments of the invention provide a device and method for warm booting whereby data restoration occurs at the powering-on of the host and can therefore be performed by the boot disk. Specifically, when the system is powered on, a backup controller sends a notification to a DMA controller indicating that data restoration is needed. The backup controller then automatically restores the contents of a backup storage device to main memory. During this process, when the host requests data, the DMA controller reads the data from the backup storage device and sends it to the host. Once data restoration is complete, normal operations can commence. | 2013-01-24 |
20130024595 | PCI EXPRESS SWITCH WITH LOGICAL DEVICE CAPABILITY - A PCIe switch implements a logical device for use by connected host systems. The logical device is created by logical device enabling software running on a host management system. The logical device is able to consolidate one or more physical devices or may be entirely software-based. Commands from the connected host are processed in the command and response queues in the host and are also reflected in shadow queues stored in the management system. A DMA engine associated with the connected host is set up to automatically trigger on queues in the connected (local) host. Commands are sent to the physical devices to complete the work and a completion signal is sent to the management software and a response to the work is sent directly to the connected host, which is not aware that the logical device is non-existent and is implemented by software in the management system. | 2013-01-24 |
20130024596 | Computer System Including CPU or Peripheral Bridge to Communicate Serial Bits of Peripheral Component Interconnect Bus Transaction and Low Voltage Differential Signal Channel to Convey the Serial Bits - A computer system for multi-processing purposes. The computer system has a console comprising a first coupling site and a second coupling site. Each coupling site comprises a connector. The console is an enclosure that is capable of housing each coupling site. The system also has a plurality of computer modules, where each of the computer modules is coupled to a connector. Each of the computer modules has a processing unit, a main memory coupled to the processing unit, a graphics controller coupled to the processing unit, and a mass storage device coupled to the processing unit. Each of the computer modules is substantially similar in design to each other to provide independent processing of each of the computer modules in the computer system. | 2013-01-24 |
20130024597 | TRACKING MEMORY ACCESS FREQUENCIES AND UTILIZATION - A method is provided including recording, in a counter of a set of counters, a number of cache accesses for a page corresponding to a translation lookaside buffer (TLB) page table entry, where the counters are physically grouped together and physically separate from the TLB. The method also includes recording the number of cache accesses from the corresponding counter to a field of the page table responsive to an event. An apparatus is provided that includes a memory unit and a set of counters coupled to the memory unit, the set of counters comprising one or more counters that are physically grouped together and are adapted to store a value indicative of a number of memory page accesses. The apparatus includes a cache coupled to the set of counters. Also provided is a computer readable storage device encoded with data for adapting a manufacturing facility to create the apparatus. | 2013-01-24 |
20130024598 | INCREASING GRANULARITY OF DIRTY BIT INFORMATION IN HARDWARE ASSISTED MEMORY MANAGEMENT SYSTEMS - In a computer system having virtual machines, one or more unused bits of a guest physical address range are allocated for aliasing so that multiple virtually addressed sub-pages can be mapped to a common memory page. When one bit is allocated for aliasing, dirty bit information can be provided at a granularity that is one-half of a memory page. When M bits are allocated for aliasing, dirty bit information can be provided at a granularity that is 1/(2^M) of a memory page. | 2013-01-24 |
20130024599 | Method and Apparatus for SSD Storage Access - A media management system including an application layer, a system layer, and a solid state drive (SSD) storage layer. The application layer includes a media data analytics application configured to assign a classification code to a data file. The system layer is in communication with the application layer. The system layer includes a file system configured to issue a write command to a SSD controller. The write command includes the classification code of the data file. The SSD storage layer includes the SSD controller and erasable blocks. The SSD controller is configured to write the data file to one of the erasable blocks based on the classification code of the data file in the write command. In an embodiment, the SSD controller is configured to write the data file to one of the erasable blocks storing other data files also having the classification code. | 2013-01-24 |
20130024600 | NON-VOLATILE TEMPORARY DATA HANDLING - Systems and methods are provided for handling temporary data that is stored in a non-volatile memory, such as NAND flash memory. The temporary data may include hibernation data or any other data needed for only one boot cycle of an electronic device. When storing the temporary data in one or more pages of the non-volatile memory, the electronic device can store a temporary marker as part of the metadata in at least one of the pages. This way, on the next bootup of the electronic device, the electronic device can use the temporary marker to determine that the associated page contains unneeded data. The electronic device can therefore invalidate the page and omit the page from its metadata tables. | 2013-01-24 |
20130024601 | User Selectable Balance Between Density and Reliability - A method for enabling users to select a configuration balance for a memory device is described. The method includes receiving an indication of a memory configuration for a mass memory including two or more of memory cells. One or more memory cells of the mass memory are selected based at least in part on 1) the indication, 2) a current configuration for each of the one or more memory cells and 3) a program-erase count for each of the one or more memory cells. The method also includes determining a new configuration for each of the selected one or more memory cells. For each of the selected one or more memory cells, the configuration of the memory cell is changed from the current configuration to the determined new configuration. Apparatus and computer readable media are also disclosed. | 2013-01-24 |
20130024602 | Universal Storage for Information Handling Systems - An information handling system (IHS) includes a processor and a single universal storage device with a system memory region and a mass storage region, wherein disk commands are executed by the processor as transfers between the system memory region and the mass storage region. | 2013-01-24 |
20130024603 | DEVICE PROGRAMMING SYSTEM WITH DATA BROADCAST AND METHOD OF OPERATION THEREOF - A method of operation of a device programming system includes: providing a target programmer, having a programming bus; coupling an electronic device, having a non-volatile memory, to the target programmer by the programming bus; and programming a data image into the non-volatile memory by the target programmer includes: subscribing to a broadcast message, receiving a logical block, of the data image, by the broadcast message for programming the non-volatile memory, and sending an unsubscribe message after receiving the logical blocks of the data image from the broadcast message. | 2013-01-24 |
20130024604 | DATA WRITING METHOD, MEMORY CONTROLLER, AND MEMORY STORAGE APPARATUS - A method for writing updated data into a flash memory module having a plurality of physical pages is provided, wherein each physical page is the smallest writing unit of the flash memory module. The method includes partitioning a physical page into storage segments and configuring a state mark for each storage segment, wherein the state marks indicate the validity of data stored in the storage segments. The method also includes writing the updated data into at least one of the storage segments and changing the state mark corresponding to the storage segment containing the updated data, wherein the state mark corresponding to the storage segment containing the updated data indicates a valid state, and the state marks corresponding to the other storage segments of the physical page not containing the updated data indicate an invalid state. Thereby, the time for writing data into a physical page is effectively shortened. | 2013-01-24 |
20130024605 | SYSTEMS AND METHODS OF STORING DATA - A method of writing data is performed in a data storage device with a controller and a memory. The memory includes latches and multiple storage elements and is operative to store a first number of bits in each storage element according to a first mapping of sequences of bits to states of the storage elements. The method includes loading data bits into the latches within the memory and generating manipulated data bits in the latches by manipulating designated data bits in the latches using one or more logical operations. The method also includes storing sets of the manipulated data bits to respective storage elements of the group of storage elements according to the first mapping. The designated data bits correspond to states of the respective storage elements according to a second mapping of sequences of bits to states. The second mapping is different than the first mapping. | 2013-01-24 |
20130024606 | NONVOLATILE SEMICONDUCTOR MEMORY DEVICE - According to one embodiment, a nonvolatile semiconductor memory device comprises a first memory block and a second memory block, and a control circuit. In read operation, when a read target block is the first memory block, the control circuit determines whether the first memory block is single-level or multi-level according to a first flag, and stores a first determination result thereof. While the read target block is the first memory block, the control circuit reads the first memory block as single-level or multi-level according to the first determination result. When the read target block is changed from the first memory block to the second memory block, the control circuit erases the first determination result. | 2013-01-24 |
20130024607 | MEMORY APPARATUS - A memory apparatus includes first memory chip and second memory chip; and a control unit configured to manage a global reserved area, a first virtual area for the first memory chip, and a second virtual area for the second memory chip, wherein the first virtual area includes a first user area and a first reserved area, the second virtual area includes a second user area and a second reserved area, the global reserved area includes a first plurality of reserved blocks corresponding to the first reserved area and a second plurality of reserved blocks corresponding to the second reserved area, and the control unit is configured to assign a second virtual block included in the global reserved area to the first user area if the control unit detects a first virtual block included in the first user area is a bad block. | 2013-01-24 |
20130024608 | FLASH MEMORY APPARATUS - A flash memory apparatus includes a flash memory and a control unit for controlling the flash memory. The flash memory includes multiple blocks, each block of the multiple blocks corresponding to multiple word lines, and each word line of the multiple word lines corresponding to a first bit page and at least one second bit page. The control unit is configured to map a logic address included in a write request received from a host to a first process page of multiple process pages in a first process block of the multiple blocks, and to program the first process page. The first process page is only the first bit page. | 2013-01-24 |
20130024609 | Tracking and Handling of Super-Hot Data in Non-Volatile Memory Systems - A non-volatile memory organized into flash erasable blocks sorts units of data according to a temperature assigned to each unit of data, where a higher temperature indicates a higher probability that the unit of data will suffer subsequent rewrites due to garbage collection operations. The units of data either come from a host write or from a relocation operation. Among the units more likely to suffer subsequent rewrites, a smaller subset of super-hot data is determined. These super-hot data are then maintained in a dedicated portion of the memory, such as a resident binary zone in a memory system with both binary and MLC portions. | 2013-01-24 |
20130024610 | Method for Operating Non-Volatile Memory and Data Storage System Using the Same - A method for operating a non-volatile memory is provided. The non-volatile memory includes a plurality of physical blocks having a plurality of data blocks and spare blocks. An index is obtained by comparing an average erase count of selected physical blocks with a first threshold. Each erase count for each physical block is the total number of the erase operations performed thereon. A performance capability status for the memory is determined according to the index. The performance capability status is set to a first status when the average erase count exceeds the first threshold. An indication is generated based on the performance capability status. A limp function is performed in response to the first status for configuring a minimum number of the at least some spare blocks reserved and used for data update operations. | 2013-01-24 |
20130024611 | Controller for One Type of NAND Flash Memory for Emulating Another Type of NAND Flash Memory - A method of executing a read instruction to read host data from a flash memory device is provided. The method initiates with receiving from a host device a read instruction to read host data from an array of NAND flash memory cells grouped into separately-readable device pages, the host data being a portion of device data that is stored in a device page. The host data is parsed from the device data, and the parsed host data is sent to the host device. | 2013-01-24 |
20130024612 | STORING ROW-MAJOR DATA WITH AN AFFINITY FOR COLUMNS - A method, device, and computer readable medium for striping rows of data across logical units of storage with an affinity for columns is provided. Alternately, a method, device, and computer readable medium for striping columns of data across logical units of storage with an affinity for rows is provided. When data of a logical slice is requested, a mapping may provide information for determining which logical unit is likely to store the logical slice. In one embodiment, data is retrieved from logical units that are predicted to store the logical slice. In another embodiment, data is retrieved from several logical units, and the data not mapped to the logical unit is removed from the retrieved data. | 2013-01-24 |
20130024613 | PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. | 2013-01-24 |
20130024614 | STORAGE MANAGER - A switch includes an expander to couple an array controller to storage drive bays which are capable of supporting physical drives. A zone manager is coupled to the expander to perform zoning configuration of physical drives for the array controller. A storage manager is used to generate storage configuration information used by the array controller to configure logical drives of the physical drives configured for the array controller. | 2013-01-24 |
20130024615 | METHOD AND APPARATUS FOR DIFFERENTIATED DATA PLACEMENT - Method and apparatus for locating data on disk storage, wherein multiple instances of data can be stored at different locations to satisfy different use requirements such as read access, write access, and data security. The method allows a data storage system, such as a file system, to provide both read optimized and write optimized performance on disk storage of different types (e.g., sizes and speed). | 2013-01-24 |
20130024616 | Storage System and Its Logical Unit Management Method - The size of management information pages for storing format management information is minimized and a management size of the management information pages is reduced. | 2013-01-24 |
20130024617 | METHOD FOR ADAPTING PERFORMANCE SENSITIVE OPERATIONS TO VARIOUS LEVELS OF MACHINE LOADS - A redundant array of independent disks (RAID) stack executes a first memory access routine and a second memory access routine having different access timing characteristics. The RAID stack determines a number of cache misses for the execution of each of the first and second memory access routines. The RAID stack selects one of the first and second memory access routines based on the number of cache misses for further memory accesses. | 2013-01-24 |
20130024618 | LOG STRUCTURE ARRAY - A storage system, comprising: (a) a primary storage entity utilized for persistently storing an entire data-set; (b) a secondary storage entity; and (c) a secondary storage controller (“SSC”) responsive to a destage stream pending to be written to the secondary storage entity for identifying a succession of physical locations on the secondary storage entity formed by non-protected locations in an extent that is sufficient to accommodate the destage stream and one or more intervening protected locations between two or more of the non-protected locations; wherein said SSC is adapted to retrieve from said primary storage entity protected data associated with the intervening protected location(s), pad the stream of data with the protected data and write the padded stream of data to said secondary storage entity as a single successive write sequence over said succession of physical locations. | 2013-01-24 |
20130024619 | MULTILEVEL CONVERSION TABLE CACHE FOR TRANSLATING GUEST INSTRUCTIONS TO NATIVE INSTRUCTIONS - A method for translating instructions for a processor. The method includes accessing a guest instruction and performing a first level translation of the guest instruction using a first level conversion table. The method further includes outputting a resulting native instruction when the first level translation proceeds to completion. A second level translation of the guest instruction is performed using a second level conversion table when the first level translation does not proceed to completion, wherein the second level translation further processes the guest instruction based upon a partial translation from the first level conversion table. The resulting native instruction is output when the second level translation proceeds to completion. | 2013-01-24 |
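The two-level fallback described in this abstract can be modeled with two lookup tables, where a first-level entry is either a complete native instruction or a partial translation that keys the second level. The table contents below are invented purely for illustration:

```python
# Hedged sketch of a two-level conversion-table lookup. A first-level
# entry either completes the translation or yields a partial result
# that the second-level table finishes.

FIRST_LEVEL = {
    "guest_add": ("complete", "native_add"),
    "guest_jmp": ("partial", "jmp_prefix"),
}
SECOND_LEVEL = {
    ("jmp_prefix", "guest_jmp"): "native_branch",
}

def translate(guest_instruction):
    kind, result = FIRST_LEVEL[guest_instruction]
    if kind == "complete":          # first level ran to completion
        return result
    # otherwise finish the translation using the partial result as a key
    return SECOND_LEVEL[(result, guest_instruction)]
```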
20130024620 | METHOD AND APPARATUS FOR ADAPTIVE CACHE FRAME LOCKING AND UNLOCKING - Most recently accessed frames are locked in a cache memory. The most recently accessed frames are likely to be accessed by a task again in the near future and may be locked at the beginning of a task switch or interrupt to improve cache performance. The list of most recently used frames is updated as a task executes and may be embodied as a list of frame addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors a number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold. | 2013-01-24 |
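The lock-then-adaptively-unlock behavior above can be sketched as a small state machine; the class shape and the threshold value are assumptions for illustration:

```python
class LockingCache:
    """Toy model: lock the most recently used frames at a task switch,
    and unlock any frame whose miss count exceeds a threshold."""

    def __init__(self, miss_threshold=3):
        self.locked = set()
        self.miss_counts = {}
        self.miss_threshold = miss_threshold

    def lock_recent(self, recent_frames):
        """At a task switch or interrupt, lock the MRU frame list."""
        self.locked = set(recent_frames)

    def record_miss(self, frame):
        """Count a frame miss; unlock the frame once misses exceed
        the predefined threshold (the adaptive unlocking mechanism)."""
        self.miss_counts[frame] = self.miss_counts.get(frame, 0) + 1
        if self.miss_counts[frame] > self.miss_threshold:
            self.locked.discard(frame)
```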
20130024621 | MEMORY-CENTERED COMMUNICATION APPARATUS IN A COARSE GRAINED RECONFIGURABLE ARRAY - The present invention relates to a coarse-grained reconfigurable array, comprising: at least one processor; a processing element array including a plurality of processing elements, and a configuration cache where commands being executed by the processing elements are saved; and a plurality of memory units forming a one-to-one mapping with the processor and the processing element array. The coarse-grained reconfigurable array further comprises a central memory performing data communications between the processor and the processing element array by switching the one-to-one mapping, such that when the processor transfers data from/to a main memory to/from a frame buffer, the significant bottleneck that may otherwise occur due to the limited bandwidth and latency of a system bus can be mitigated. | 2013-01-24 |
20130024622 | EVENT-DRIVEN REGENERATION OF PAGES FOR WEB-BASED APPLICATIONS - Systems and methods for invalidating and regenerating pages. In one embodiment, a method can include detecting content changes in a content database including various objects. The method can include causing an invalidation generator to generate an invalidation based on the modification and communicating the invalidation to a dependency manager. A cache manager can be notified that pages in a cache might be invalidated based on the modification via a page invalidation notice. In one embodiment, a method can include receiving a page invalidation notice and sending a page regeneration request to a page generator. The method can include regenerating the cached page. The method can include forwarding the regenerated page to the cache manager replacing the cached page with the regenerated page. In one embodiment, a method can include invalidating a cached page based on a content modification and regenerating pages which might depend on the modified content. | 2013-01-24 |
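The invalidate-then-regenerate flow can be sketched as follows; the class and the generator callable are stand-ins for the patent's cache manager, dependency manager, and page generator roles:

```python
# Sketch of event-driven page regeneration: a content modification
# invalidates a cached page, which is immediately regenerated and
# swapped in to replace the stale copy.

class PageCache:
    def __init__(self, generator):
        self.pages = {}
        self.generator = generator   # callable: object id -> rendered page

    def cache_page(self, obj_id, content):
        self.pages[obj_id] = content

    def on_content_modified(self, obj_id):
        """Page invalidation notice: regenerate the dependent page and
        replace the stale cached copy with the fresh one."""
        if obj_id in self.pages:
            self.pages[obj_id] = self.generator(obj_id)
```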
20130024623 | METHOD AND APPARATUS FOR HIGH SPEED CACHE FLUSHING IN A NON-VOLATILE MEMORY - An invention is provided for performing flush cache in a non-volatile memory. The invention includes maintaining a plurality of free memory blocks within a non-volatile memory. When a flush cache command is issued, a flush cache map is examined to obtain a memory address of a memory block in the plurality of free memory blocks within the non-volatile memory. The flush cache map includes a plurality of entries, each entry indicating a memory block of the plurality of free memory blocks. Then, a cache block is written to a memory block at the obtained memory address within the non-volatile memory. In this manner, when a flush cache command is received, the flush cache map allows cache blocks to be written to free memory blocks in the non-volatile memory without requiring a non-volatile memory search for free blocks or requiring erasing of memory blocks storing old data. | 2013-01-24 |
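The key idea above, pre-recorded free-block addresses so a flush never searches or erases, fits in a few lines; the data layout below is an illustrative assumption:

```python
# Minimal model of a flush-cache map: an ordered list of pre-erased
# free block addresses, consumed one per flushed cache block.

class FlushCacheMap:
    def __init__(self, free_blocks):
        self.free_blocks = list(free_blocks)  # addresses of erased blocks
        self.flash = {}                       # address -> stored block

    def flush(self, cache_block):
        """Write one cache block to the next free block from the map,
        with no search for free space and no erase on the write path."""
        address = self.free_blocks.pop(0)
        self.flash[address] = cache_block
        return address
```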
20130024624 | PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. | 2013-01-24 |
20130024625 | PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. | 2013-01-24 |
20130024626 | PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. | 2013-01-24 |
20130024627 | PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. | 2013-01-24 |
20130024628 | EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE - Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages. | 2013-01-24 |
20130024629 | Data processing apparatus and method for managing coherency of cached data - An interconnect having a plurality of interconnect nodes arranged to provide at least one ring, a plurality of caching nodes for caching data coupled into the interconnect via an associated one of said interconnect nodes, and at least one coherency management node for implementing a coherency protocol to manage coherency of the data cached by each of said caching nodes. Each coherency management node being coupled into the interconnect via an associated one of said interconnect nodes. When each caching node produces a snoop response for said snoop request, the associated interconnect node is configured to output that snoop response in one of said at least one identified slots. Further, each interconnect node associated with a caching node has merging circuitry configured, when outputting the snoop response in an identified slot, to merge that snoop response with any current snoop response information held in that slot. | 2013-01-24 |
20130024630 | Terminating barriers in streams of access requests to a data store while maintaining data consistency - A memory controller for a slave memory that controls an order of data access requests is disclosed. There is a read and write channel having streams of requests with corresponding barrier transactions within the request streams indicating where reordering should not occur. The controller has barrier response generating circuitry located on the read and said write channels and being responsive to receipt of one of said barrier transactions: to issue a response to the received barrier transaction such that subsequent requests in said stream of requests are not blocked by the barrier transaction and can be received and to terminate the received barrier transaction and not transmit the received barrier transaction further; and to mark requests subsequent to the received barrier transaction in the stream of requests with a barrier context value identifying the received barrier transaction. The memory controller comprises a point of data consistency on the write channel prior to the memory; and the memory controller comprises comparison circuitry configured to compare the barrier context value of each write request to be issued to the memory with the barrier context values of at least some pending read requests, the pending read requests being requests received at the memory controller but not yet issued to the memory and: in response to detecting at least one of the pending read requests with an earlier barrier context value identifying a barrier transaction that has a corresponding barrier transaction in the stream of requests on the write channel that is earlier in the stream of requests than the write request, stalling the write request until the at least one pending read request has been performed; and in response to detecting no pending read requests with the earlier barrier context value, issuing the write request to the memory. | 2013-01-24 |
20130024631 | METHOD AND APPARATUS FOR REALTIME DETECTION OF HEAP MEMORY CORRUPTION BY BUFFER OVERRUNS - One embodiment of the present invention relates to a heap overflow detection system that includes an arithmetic logic unit, a datapath, and address violation detection logic. The arithmetic logic unit is configured to receive an instruction having an opcode and an operand and to generate a final address and to generate a compare signal on the opcode indicating a heap memory access related instruction. The datapath is configured to provide the opcode and the operand to the arithmetic logic unit. The address violation detection logic determines whether a heap memory access is a violation according to the operand and the final address on receiving the compare signal from the arithmetic logic unit. | 2013-01-24 |
20130024632 | METHOD AND SYSTEM FOR TRANSFORMATION OF LOGICAL DATA OBJECTS FOR STORAGE - There are provided a method of transforming a non-transformed stored logical data object (LO) into a transformed LO and a system thereof. The method comprises: a) in response to a respective transformation request, logically dividing the non-transformed LO into a first segment and one or more non-transformed subsequent segments, the segments having predefined size; b) generating a header for the respective transformed LO; c) processing said first segment; d) overwriting said first segment by said generated header and said transformed first segment; e) indexing said first transformed segment and said one or more non-transformed subsequent segments as constituting a part of said transformed LO; f) generating at least one index section; and g) updating the indication in the header to point that the non-transformed LO has been transformed in the transformed LO comprising said generated header, said first transformed segment, said one or more subsequent segments comprising data in non-transformed form and said at least one index section. | 2013-01-24 |
20130024633 | METHOD FOR OUTPUTTING AUDIO-VISUAL MEDIA CONTENTS ON A MOBILE ELECTRONIC DEVICE, AND MOBILE ELECTRONIC DEVICE - A method for outputting an audio-visual media content on a mobile electronic device, the mobile electronic device storing the media content in at least a compressed format in a memory of the mobile electronic device, is provided. The method may include receiving a request for the output of the media content; checking of whether the requested media content is stored in an uncompressed format in the memory; outputting the requested media content in the stored uncompressed format if the requested media content is stored in the uncompressed format in the memory, and outputting the requested media content in the stored compressed format if the requested media content is not stored in the uncompressed format in the memory. | 2013-01-24 |
20130024634 | INFORMATION PROCESSING SYSTEM AND METHOD FOR CONTROLLING THE SAME - In the present invention, one of the plurality of first storage apparatuses, prior to a file migration to the second storage apparatus, notifies the second storage apparatus of file migration information, being information relating to the file migration; the second storage apparatus calculates an increment of a load on the second storage apparatus that would be generated by the file migration based on information written in the file migration information; the second storage apparatus determines whether the file migration is allowable based on a current load on the second storage apparatus itself and the increment; the second storage apparatus notifies the determination result to the one of the plurality of first storage apparatuses that has notified the file migration information; and the one of the plurality of first storage apparatuses determines whether to migrate the file to the second storage apparatus based on the determination result. | 2013-01-24 |
20130024635 | First Storage Apparatus and First Storage Apparatus Control Method - The object is to achieve a disaster recovery configuration in a short period of time. In a first storage apparatus | 2013-01-24 |
20130024636 | METHOD OF MANUFACTURING A LIMITED USE DATA STORING DEVICE - Embodiments of methods and systems for controlling access to information stored on memory or data storage devices are disclosed. In various embodiments, methods of retrieving information from a data storage device previously deactivated by modification or degradation of at least a portion of the data storage device are disclosed. | 2013-01-24 |
20130024637 | MEMORY ACCESS UNLOCK - In one implementation, a controller is provided such that when an operation is performed at a first memory location, the controller unlocks access to a second memory location. | 2013-01-24 |
20130024638 | STORAGE DEVICE IN A LOCKED STATE - A method for managing a storage device including identifying a lock timing for the storage device when coupling to a device, transitioning the storage device into a locked state in response to detecting the storage device decoupling from the device, and configuring the storage device to remain in the locked state if the storage device is re-coupled to the device after the lock timing has elapsed. | 2013-01-24 |
20130024639 | COMPUTER SYSTEM AND DATA MIGRATION METHOD THEREOF - Data migration can be performed between source and target storage subsystems without stopping exchanging data between a host computer and each of the storage subsystems. | 2013-01-24 |
20130024640 | Virtual Logical Volume for Overflow Storage of Special Data Sets - Method and system embodiments for facilitating overflow storage of special data sets that reside on a single logical volume are provided. A virtual logical volume is created from unallocated memory units across a plurality of logical volumes in a volume group. The virtual logical volume appears the same as any one of the logical volumes in the volume group to an external client. Upon receipt of a special data set that must reside in a single logical volume, an attempt is first made to allocate the special data set to one of the logical volumes in the volume group. If that allocation attempt fails, the special data set is allocated to the virtual logical volume. The virtual logical volume may be created only upon the failure to allocate the special data set to one of the logical volumes, and may be destroyed if sufficient space in one of the logical volumes is freed up to transfer the special data set. Creation of the virtual logical volume may be reserved for only critical special data sets whose failure would result in a storage system outage. | 2013-01-24 |
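The try-real-volumes-first, fall-back-to-virtual allocation order above can be sketched as follows; the dict-based volume structure is an assumption made for illustration:

```python
def allocate(dataset_size, volumes, virtual_volume):
    """Try each real logical volume in the group first; fall back to
    the virtual volume built from unallocated units across the group.
    Volumes are dicts with 'name' and a 'free' capacity counter."""
    for vol in volumes:
        if vol["free"] >= dataset_size:
            vol["free"] -= dataset_size
            return vol["name"]
    if virtual_volume["free"] >= dataset_size:
        virtual_volume["free"] -= dataset_size
        return virtual_volume["name"]
    raise MemoryError("no capacity for special data set")
```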
20130024641 | APPARATUS, SYSTEM, AND METHOD FOR MANAGING STORAGE CAPACITY RECOVERY - An apparatus, system, and method are disclosed for managing storage capacity recovery. A monitor module determines a workload write bandwidth for a sequential log-based data storage device. The workload write bandwidth includes a rate at which workload write operations generate reclaimable storage capacity on the data storage device. A target module determines a target reclamation write bandwidth for the data storage device. A capacity reclaim rate is associated with the target reclamation write bandwidth. The capacity reclaim rate satisfies the workload write bandwidth for the data storage device. A reclaim rate module determines a prospective reclamation write bandwidth for the data storage device, based on the workload write bandwidth, to correspond to the capacity reclaim rate associated with the target reclamation write bandwidth. | 2013-01-24 |
20130024642 | APPARATUS, SYSTEM, AND METHOD FOR IDENTIFYING DATA THAT IS NO LONGER IN USE - An apparatus, system, and method are disclosed for managing a non-volatile storage medium. A storage controller receives a message that identifies data that no longer needs to be retained on the non-volatile storage medium. The data may be identified using a logical identifier. The message may comprise a hint, directive, or other indication that the data has been erased and/or deleted. In response to the message, the storage controller records an indication that the contents of a physical storage location and/or physical address associated with the logical identifier do not need to be preserved on the non-volatile storage medium. | 2013-01-24 |
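The controller behavior above resembles a TRIM/discard hint: on receiving the message, only an indication is recorded that the backing physical location need not be preserved. A minimal sketch, with invented class and field names:

```python
# Sketch of handling a "data no longer in use" message: the controller
# records that the physical location backing the logical identifier
# need not be preserved (e.g. by a later garbage collection pass).

class StorageController:
    def __init__(self):
        self.mapping = {}         # logical id -> physical address
        self.discardable = set()  # physical addresses free to reclaim

    def write(self, logical_id, physical_addr):
        self.mapping[logical_id] = physical_addr

    def on_discard_message(self, logical_id):
        """Mark the backing physical location as not needing retention."""
        addr = self.mapping.pop(logical_id, None)
        if addr is not None:
            self.discardable.add(addr)
```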
20130024643 | STORAGE APPARATUS AND DATA MANAGEMENT METHOD - To efficiently manage data including control information. | 2013-01-24 |
20130024644 | METHODS FOR OPTIMIZING DATA MOVEMENT IN SOLID STATE DEVICES - Techniques for optimizing data movement in electronic storage devices are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for optimizing data movement in electronic storage devices comprising maintaining, on the electronic storage device, a data structure associating virtual memory addresses with physical memory addresses. Information can be provided regarding the data structure to a host which is in communication with the electronic storage device. Commands can be received from the host to modify the data structure on the electronic storage device, and the data structure can be modified in response to the received command. | 2013-01-24 |
20130024645 | STRUCTURED MEMORY COPROCESSOR - Intercepting a requested memory operation corresponding to a conventional memory is disclosed. The requested memory operation is translated to be applied to a structured memory. | 2013-01-24 |
20130024646 | Method and Simulator for Simulating Multiprocessor Architecture Remote Memory Access - A method for simulating remote memory access in a target machine on a host machine is disclosed. Multiple virtual memory spaces in the host machine are divided and a virtual address space of each target application process is set to one virtual memory space that corresponds to a target application process and is in the multiple virtual memory spaces. Access of the target application process is captured to a virtual memory space other than the virtual memory space corresponding to the target application process in the multiple virtual memory spaces. | 2013-01-24 |
20130024647 | CACHE BACKED VECTOR REGISTERS - A processor, method, and medium for utilizing a shared cache to store vector registers. Each thread of a multithreaded processor utilizes a plurality of virtual vector registers to perform vector operations. Virtual vector registers are allocated for each thread, and each virtual vector register is mapped into the shared cache on the processor. The cache is shared between multiple threads such that if one thread is not using vector registers, there is more space in the cache for other threads to use vector registers. | 2013-01-24 |
20130024648 | TLB EXCLUSION RANGE - A system and method for accessing memory are provided. The system comprises a lookup buffer for storing one or more page table entries, wherein each of the one or more page table entries comprises at least a virtual page number and a physical page number; a logic circuit for receiving a virtual address from said processor, said logic circuit for matching the virtual address to the virtual page number in one of the page table entries to select the physical page number in the same page table entry, said page table entry having one or more bits set to exclude a memory range from a page. | 2013-01-24 |
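A page-table entry that excludes a sub-range of its own page can be modeled as below; the entry fields, the byte-granular exclusion range, and the 4 KiB page size are illustrative assumptions:

```python
# Sketch of a TLB entry carrying an exclusion range: a lookup whose
# offset falls inside the excluded sub-range of the page deliberately
# misses instead of translating.

PAGE_SIZE = 4096

def tlb_lookup(entries, virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    for entry in entries:
        if entry["vpn"] != vpn:
            continue
        lo, hi = entry.get("exclude", (None, None))
        if lo is not None and lo <= offset < hi:
            return None            # excluded range: treat as a miss
        return entry["ppn"] * PAGE_SIZE + offset
    return None
```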
20130024649 | METHOD AND DEVICE FOR STORING ROUTING TABLE ENTRY - The present invention discloses a method and a device for storing a routing table entry. The method includes: splitting a routing table entry into two points according to a range matching policy; obtaining a storage location of the routing table entry in a hierarchical binary tree; and adding each segment related to the routing table entry to the binary tree of each segment according to the storage location. According to the present invention, the routing table entry is stored in the hierarchical binary tree in segments, which significantly reduces the total amount of memory required to be occupied by storage of the routing table entry. | 2013-01-24 |
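One way to read "stored in the hierarchical binary tree in segments" is that entries sharing leading segments share tree nodes, which is where the memory saving comes from. A loose sketch under that interpretation, using dicts in place of binary trees and an assumed 8-bit segment width:

```python
# Rough sketch: store a route prefix as fixed-width segments in a
# hierarchy of dict-based trees, so common leading segments are
# stored once and shared between entries.

SEGMENT_BITS = 8

def insert_route(root, prefix_bits):
    """Split the prefix bit-string into 8-bit segments and descend one
    tree level per segment, creating nodes only where needed."""
    node = root
    for i in range(0, len(prefix_bits), SEGMENT_BITS):
        segment = prefix_bits[i:i + SEGMENT_BITS]
        node = node.setdefault(segment, {})
    node["leaf"] = True
    return root
```

Inserting `192.168/16` and `192.255/16` style prefixes then shares the first-segment node, so the common segment is stored only once.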
20130024650 | DYNAMIC STORAGE TIERING - A method for dynamic storage tiering may include, but is not limited to: receiving an input/output (I/O) request from a host device; determining whether the I/O request results in a cache hit; and relocating data associated with the I/O request between a higher-performance storage device and lower-performance storage device according to the determination whether the data associated with the I/O request is stored in a cache. | 2013-01-24 |
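The hit-driven relocation above can be sketched with sets standing in for the cache and the two tiers; the promotion-on-hit policy shown is one plausible reading of the abstract, not the patent's exact rule:

```python
def handle_io(request_id, cache, fast_tier, slow_tier):
    """On a cache hit, promote the data to the higher-performance tier;
    on a miss, place it on the lower-performance tier and start
    tracking it for future hits."""
    if request_id in cache:                 # cache hit: data is hot
        slow_tier.discard(request_id)
        fast_tier.add(request_id)
        return "hit"
    cache.add(request_id)                   # track for future hits
    fast_tier.discard(request_id)
    slow_tier.add(request_id)
    return "miss"
```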
20130024651 | PROCESSING VECTORS USING A WRAPPING ROTATE PREVIOUS INSTRUCTION IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an operand vector, a selection vector, and a control vector are disclosed. The executed instructions may also cause the processor to perform a wrapping rotate previous operation dependent upon the input vectors. | 2013-01-24 |
20130024652 | Scalable Processing Unit - Various methods and systems are provided for processing units that may be scaled. In one embodiment, a processing unit includes a plurality of scalar processing units and a vector processing unit in communication with each of the plurality of scalar processing units. The vector processing unit is configured to coordinate execution of instructions received from the plurality of scalar processing units. In another embodiment, a scalar instruction packet including a pre-fix instruction and a vector instruction packet including a vector instruction is obtained. Execution of the vector instruction may be modified by the pre-fix instruction in a processing unit including a vector processing unit. In another embodiment, a scalar instruction packet including a plurality of partitions is obtained. The location of the partitions is determined based upon a partition indicator included in the scalar instruction packet and a scalar instruction included in a partition is executed by a processing unit. | 2013-01-24 |
20130024653 | ACCELERATION OF STRING COMPARISONS USING VECTOR INSTRUCTIONS - A processor, method, and medium for using vector instructions to perform string comparisons. A single instruction compares the elements of two vectors and simultaneously checks for the null character. If an inequality or the null character is found, then the string comparison loop terminates, and a further check is performed to determine if the strings are equal. If all elements are equal and the null character is not found, then another iteration of the string comparison loop is executed. The vectors are loaded with the next portions of the strings, and then the next comparison is performed. The loop continues until either an inequality or the null character is found. | 2013-01-24 |
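The loop structure above, compare a vector-width chunk and check for the null terminator in the same pass, can be emulated in scalar Python; the 4-character chunk stands in for a vector register:

```python
# Scalar emulation of the vectorized string compare: process both
# NUL-terminated strings in fixed-width chunks, checking for an
# inequality or the terminating null within each chunk.

CHUNK = 4  # emulated vector width

def vstrcmp(a, b):
    """Return True when the NUL-terminated strings are equal."""
    a += "\0"
    b += "\0"
    n = max(len(a), len(b))
    a = a.ljust(n, "\0")
    b = b.ljust(n, "\0")
    for i in range(0, n, CHUNK):
        ca, cb = a[i:i + CHUNK], b[i:i + CHUNK]
        if ca != cb:               # inequality found inside the chunk
            return False
        if "\0" in ca:             # null seen and chunks equal: done
            return True
    return True
```

On real hardware a single instruction such as an SSE4.2 packed compare performs the per-chunk equality and null check simultaneously; the loop shape is the same.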
20130024654 | VECTOR OPERATIONS FOR COMPRESSING SELECTED VECTOR ELEMENTS - A processor, method, and medium for using vector operations to compress selected elements of a vector. An input vector is compared to a criteria vector, and then a subset of the plurality of elements of the input vector are selected based on the comparison. A permutation vector is generated based on the locations of the selected elements and then the permutation vector is used to permute the selected elements of the input vector to an output vector. The selected elements of the input vector are stored in contiguous locations in the leftmost elements of the output vector. Then, the output vector is stored to memory and a pointer to the memory location is incremented by the number of selected elements. | 2013-01-24 |
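The compare, select, permute-left, and pointer-increment steps above can be sketched with plain lists standing in for vector registers; the predicate argument is an illustrative generalization of the criteria comparison:

```python
def compress_select(values, criteria, predicate):
    """Compare an input vector against a criteria vector, build a
    permutation of the selected lane indices, and pack the selected
    elements into the leftmost lanes of the output vector."""
    # permutation vector: indices of lanes where the comparison holds
    permutation = [i for i, (v, c) in enumerate(zip(values, criteria))
                   if predicate(v, c)]
    output = [values[i] for i in permutation]
    output += [0] * (len(values) - len(output))    # zero-fill the rest
    return output, len(permutation)   # count advances the memory pointer
```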
20130024655 | PROCESSING VECTORS USING WRAPPING INCREMENT AND DECREMENT INSTRUCTIONS IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an input vector and a control vector are disclosed. The executed instructions may also cause the processor to perform a fixed-value addition operation dependent upon the input vector and the control vector. | 2013-01-24 |
20130024656 | PROCESSING VECTORS USING WRAPPING BOOLEAN INSTRUCTIONS IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an input vector and a control vector are disclosed. The executed instructions may also cause the processor to perform a Boolean operation on another input vector dependent upon the input vector and the control vector. | 2013-01-24 |
20130024657 | RECONFIGURABLE SEQUENCER STRUCTURE - A cell element field for data processing, having function cell means for execution of algebraic and/or logic functions and memory cell means for receiving, storing and/or outputting information is described. Function cell-memory cell combinations are formed in which a control connection leads from the function cell means to the memory cell means. | 2013-01-24 |
20130024658 | MEMORY CONTROLLER AND SIMD PROCESSOR - Technology to suppress the drop in SIMD processor efficiency that occurs when exchanging two-dimensional data in a plurality of rectangular regions, between an external section and a plurality of processor elements in an SIMD processor, so that one rectangular region corresponds to one processor element. In the SIMD processor, an address storage unit in a memory controller is capable of setting N number of addresses Ai (i=1 through N) in an external memory by utilizing a control processor. A parameter storage unit is capable of setting a first parameter OSV, a second parameter W, and a third parameter L by utilizing a control processor. A data transfer unit executes the transfer of data between an external memory, and the buffers in N number of processor elements contained in the applicable SIMD processor, based on the contents of the address storage unit and the parameter storage unit. | 2013-01-24 |
20130024659 | Executing An Instruction for Performing a Configuration Virtual Topology Change - In a logically partitioned host computer system comprising host processors (host CPUs) partitioned into a plurality of guest processors (guest CPUs) of a guest configuration, a perform topology function instruction is executed by a guest processor specifying a topology change of the guest configuration. The topology change preferably changes the polarization of guest CPUs, the polarization relating to the amount of a host CPU resource that is provided to a guest CPU. | 2013-01-24 |
20130024660 | PORTABLE HANDHELD DEVICE WITH MULTI-CORE IMAGE PROCESSOR - A portable handheld device includes an image sensor for capturing an image; and a one-chip microcontroller having integrated therein a CPU for processing a script language and a multi-core processor for processing an image captured by the image sensor. The multi-core processor includes therein multiple processing units connected in parallel by a crossbar switch. Each processing unit includes an arithmetic and logic unit (ALU). Each ALU includes a first register set for accepting data from the first crossbar switch, and a second register set for loading data to the crossbar switch. | 2013-01-24 |
20130024661 | HARDWARE ACCELERATION COMPONENTS FOR TRANSLATING GUEST INSTRUCTIONS TO NATIVE INSTRUCTIONS - A hardware based translation accelerator. The hardware includes a guest fetch logic component for accessing guest instructions; a guest fetch buffer coupled to the guest fetch logic component and a branch prediction component for assembling guest instructions into a guest instruction block; and conversion tables coupled to the guest fetch buffer for translating the guest instruction block into a corresponding native conversion block. The hardware further includes a native cache coupled to the conversion tables for storing the corresponding native conversion block, and a conversion look aside buffer coupled to the native cache for storing a mapping of the guest instruction block to corresponding native conversion block, wherein upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates the guest instruction has a corresponding converted native instruction in the native cache. | 2013-01-24 |
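The conversion look-aside buffer described above can be sketched behaviorally as a cache keyed by guest block address: a hit returns the already-translated native block, a miss triggers translation and stores the mapping. This is a software analogy of the hardware structure; the class and method names are invented for illustration.

```python
# Sketch: cache guest-to-native block translations so a repeated request
# for the same guest block skips re-translation (a "hit").
class ConversionLookasideBuffer:
    def __init__(self, translate_block):
        self.translate_block = translate_block   # guest block -> native block
        self.cache = {}                          # guest address -> native block
        self.hits = self.misses = 0

    def fetch_native(self, guest_addr, guest_block):
        if guest_addr in self.cache:             # hit: native code already cached
            self.hits += 1
            return self.cache[guest_addr]
        self.misses += 1                         # miss: translate and remember
        native = self.translate_block(guest_block)
        self.cache[guest_addr] = native
        return native

clb = ConversionLookasideBuffer(lambda blk: [("native", op) for op in blk])
clb.fetch_native(0x400, ["add", "jmp"])
clb.fetch_native(0x400, ["add", "jmp"])  # second request hits the cache
# clb.hits == 1, clb.misses == 1
```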
20130024662 | RELAXATION OF SYNCHRONIZATION FOR ITERATIVE CONVERGENT COMPUTATIONS - Systems and methods are disclosed that allow atomic updates to global data to be at least partially eliminated to reduce synchronization overhead in parallel computing. A compiler analyzes the data to be processed to selectively permit unsynchronized data transfer for at least one type of data. A programmer may provide a hint to expressly identify the type of data that are candidates for unsynchronized data transfer. In one embodiment, the synchronization overhead is reducible by generating an application program that selectively substitutes codes for unsynchronized data transfer for a subset of codes for synchronized data transfer. In another embodiment, the synchronization overhead is reducible by employing a combination of software and hardware by using relaxation data registers and decoders that collectively convert a subset of commands for synchronized data transfer into commands for unsynchronized data transfer. | 2013-01-24 |
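A loose sketch of the software-side idea above: updates to data the programmer has hinted as convergence-tolerant skip synchronization, while all other shared data keeps locked updates. The hint mechanism here (a set of names) and all identifiers are invented stand-ins for the compiler analysis the abstract describes.

```python
import threading

RELAXED = {"rank_estimate"}          # programmer hint: safe without sync
_lock = threading.Lock()
shared = {"rank_estimate": 0.0, "edge_count": 0}

def update(key, delta):
    if key in RELAXED:               # unsynchronized update: cheaper, approximate
        shared[key] += delta
    else:                            # synchronized update for everything else
        with _lock:
            shared[key] += delta
```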
20130024663 | Table Call Instruction for Frequently Called Functions - An apparatus includes a memory that stores an instruction including an opcode and an operand. The operand specifies an immediate value or a register indicator of a register storing the immediate value. The immediate value is usable to identify a function call address. The function call address is selectable from a plurality of function call addresses. | 2013-01-24 |
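The table-call idea above can be sketched in a few lines: a small immediate value indexes a table of function addresses, so frequently called functions need only a short operand rather than a full address. The table contents and function names below are invented for illustration.

```python
# Sketch: resolve a small immediate into a call target via a call table.
def handle_open(): return "open"
def handle_read(): return "read"
def handle_close(): return "close"

CALL_TABLE = [handle_open, handle_read, handle_close]  # indexed by immediate

def table_call(immediate):
    return CALL_TABLE[immediate]()   # one small index selects the target

# table_call(1) -> "read"
```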
20130024664 | METHOD, APPARATUS AND INSTRUCTIONS FOR PARALLEL DATA CONVERSIONS - Method, apparatus, and program means for performing a conversion. In one embodiment, a disclosed apparatus includes a destination storage location corresponding to a first architectural register. A functional unit operates responsive to a control signal, to convert a first packed first format value selected from a set of packed first format values into a plurality of second format values. Each of the first format values has a plurality of sub-elements having a first number of bits. The second format values have a greater number of bits. The functional unit stores the plurality of second format values into an architectural register. | 2013-01-24 |
20130024665 | METHOD, APPARATUS AND INSTRUCTIONS FOR PARALLEL DATA CONVERSIONS - Method, apparatus, and program means for performing a conversion. In one embodiment, a disclosed apparatus includes a destination storage location corresponding to a first architectural register. A functional unit operates responsive to a control signal, to convert a first packed first format value selected from a set of packed first format values into a plurality of second format values. Each of the first format values has a plurality of sub-elements having a first number of bits. The second format values have a greater number of bits. The functional unit stores the plurality of second format values into an architectural register. | 2013-01-24 |
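The packed-conversion operation in the two abstracts above can be sketched under one concrete assumption: the "first format" is a 32-bit value packed with four 8-bit sub-elements, and the "second format" is four separate wider values. These bit widths are illustrative only; the claims are not tied to them.

```python
# Sketch: split a packed value into its sub-elements, each of which can then
# occupy a wider (greater-bit-count) destination element.
def unpack_widen(packed32, sub_bits=8, count=4):
    """Split a packed value into `count` sub-elements of `sub_bits` bits each."""
    mask = (1 << sub_bits) - 1
    return [(packed32 >> (i * sub_bits)) & mask for i in range(count)]

# 0x04030201 packs the bytes 1, 2, 3, 4 (little-endian sub-element order).
# unpack_widen(0x04030201) -> [1, 2, 3, 4]
```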
20130024666 | METHOD OF SCHEDULING A PLURALITY OF INSTRUCTIONS FOR A PROCESSOR - A method of scheduling a plurality of instructions for a processor comprises the steps of: establishing a functional unit resource table comprising a plurality of columns, each of which corresponds to one of a plurality of operation cycles of the processor and comprises a plurality of fields, each of which indicates a functional unit of the processor; establishing a ping-pong resource table comprising a plurality of columns, each of which corresponds to one of the plurality of operation cycles of the processor and comprises a plurality of fields, each of which indicates a read port or a write port of a register bank of the processor; and allotting the plurality of instructions to the plurality of operation cycles of the processor and registering the functional units and the ports of the register banks corresponding to the allotted instructions on the functional unit resource table and the ping-pong resource table. | 2013-01-24 |
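A minimal greedy sketch of the two-table scheduling idea above: each candidate cycle is checked against both a functional-unit table and a port ("ping-pong") table, and an instruction is allotted to the first cycle where every resource it needs is free. Resource and instruction names are invented for illustration.

```python
# Sketch: allot instructions to cycles, registering the functional unit and
# register-bank ports each one uses in per-cycle resource tables.
def schedule(instrs, n_cycles):
    fu_table = [set() for _ in range(n_cycles)]    # busy functional units per cycle
    port_table = [set() for _ in range(n_cycles)]  # busy register-bank ports per cycle
    placement = {}
    for name, fu, ports in instrs:
        for c in range(n_cycles):
            if fu not in fu_table[c] and not (set(ports) & port_table[c]):
                fu_table[c].add(fu)                # register the unit ...
                port_table[c].update(ports)        # ... and the ports used
                placement[name] = c
                break
    return placement

# i1 conflicts with i0 on the ALU; i2 conflicts with i0 on port rd0.
instrs = [("i0", "ALU", ["rd0"]),
          ("i1", "ALU", ["rd1"]),
          ("i2", "MUL", ["rd0"])]
# schedule(instrs, 4) -> {"i0": 0, "i1": 1, "i2": 1}
```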
20130024667 | ARITHMETIC AND CONTROL UNIT, ARITHMETIC AND CONTROL METHOD, PROGRAM AND PARALLEL PROCESSOR - An attribute group storage unit acquires and holds attribute groups set to respective data blocks. A scenario determination unit determines respective transfer systems of the respective blocks between a memory of the lowest hierarchy and a memory of another hierarchy based on those attribute groups and a configuration of an arithmetic unit which is the parallel processor, and controls the transfer of the respective data blocks according to the determined transfer systems, and the parallel arithmetic operation corresponding to the transfer. Each of the attribute groups is necessary to determine the transfer systems, and includes one or more attributes not depending on the configuration of the parallel processor. The attribute groups of the write blocks are set assuming that each of the write blocks has already been located in the memory of another hierarchy, and is transferred to the memory of the lowest hierarchy. | 2013-01-24 |
20130024668 | ARCHITECTURE AND IMPLEMENTATION METHOD OF PROGRAMMABLE ARITHMETIC CONTROLLER FOR CRYPTOGRAPHIC APPLICATIONS - An architecture includes a controller. The controller is configured to receive a microprogram. The microprogram is configured for performing hierarchical or sequential polynomial computations. The architecture also includes an arithmetic logic unit (ALU) communicably coupled to the controller. The ALU is controlled by the controller. Additionally, the microprogram is compiled prior to execution by the controller, the microprogram is compiled into a plurality of binary tables, and the microprogram is programmed in a command language in which each command includes a first portion for indicating at least one of a command or data transferred to the ALU, and a second portion for including a control command to the controller. The architecture and implementation of the programmable controller may be for cryptographic applications, including those related to public key cryptography. | 2013-01-24 |
20130024669 | PROCESSING VECTORS USING WRAPPING SHIFT INSTRUCTIONS IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an input vector and a control vector are disclosed. The executed instructions may also cause the processor to perform a shift operation on another input vector dependent upon the input vector and the control vector. | 2013-01-24 |
20130024670 | PROCESSING VECTORS USING WRAPPING MULTIPLY AND DIVIDE INSTRUCTIONS IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an input vector and a control vector are disclosed. The executed instructions may also cause the processor to perform a product or quotient operation on another input vector dependent upon the input vector and the control vector. | 2013-01-24 |
20130024671 | PROCESSING VECTORS USING WRAPPING NEGATION INSTRUCTIONS IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an input vector and a control vector are disclosed. The executed instructions may also cause the processor to perform a negation operation dependent upon the input vector and the control vector. | 2013-01-24 |
20130024672 | PROCESSING VECTORS USING WRAPPING PROPAGATE INSTRUCTIONS IN THE MACROSCALAR ARCHITECTURE - Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive a basis vector, an operand vector, a selection vector, and a control vector are disclosed. The executed instructions may also cause the processor to perform a wrapping propagate operation dependent upon the input vectors. | 2013-01-24 |
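The four Macroscalar abstracts above share one shape: an operation on an input vector that is gated, element by element, by a control vector. The sketch below shows only that shared shape (with negation as the sample operation); the actual "wrapping" semantics are defined by the patents and are not reproduced here.

```python
# Sketch: apply an operation where the control vector is true; pass
# elements through unchanged otherwise.
def gated_op(values, control, op):
    return [op(v) if c else v for v, c in zip(values, control)]

# gated_op([1, 2, 3, 4], [True, False, True, False], lambda x: -x)
# -> [-1, 2, -3, 4]
```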
20130024673 | PROCESSING UNIT AND MICRO CONTROLLER UNIT (MCU) - A technology capable of reducing load on both system processing and filter operation, reducing power consumption, and improving performance is provided. In a digital signal processor, a program memory, a program counter, and a control logic circuit are provided, and a bit field of each instruction includes instruction stop flag information and bit field information. Also, the control logic circuit carries out the control in such a manner that the instruction whose instruction stop flag information is cleared is executed as is to proceed to the next instruction processing, execution of the instruction whose instruction stop flag information is set is stopped if an execution resumption trigger condition corresponding to the bit field information is not satisfied, and the instruction whose instruction stop flag information is set is executed if the execution resumption trigger condition corresponding to bit field information is satisfied, to proceed to the next instruction processing. | 2013-01-24 |
20130024674 | RETURN ADDRESS OPTIMISATION FOR A DYNAMIC CODE TRANSLATOR - A dynamic code translator with isoblocking uses a return trampoline having branch instructions conditioned on different isostates to optimize return address translation, by allowing the hardware to predict that the address of a future return will be the address of the trampoline. An IP relative call is inserted into translated code to write the trampoline address to a target link register and a target return address stack used by the native machine to predict return addresses. If a computed subject return address matches a subject return address register value, the current isostate of the isoblock is written to an isostate register. The isostate value in the isostate register is then used to select the branch instruction in the trampoline for the true subject return address. Sufficient code area in the trampoline instruction set can be reserved for a number of compare/branch pairs which is equal to the number of available isostates. | 2013-01-24 |
20130024675 | RETURN ADDRESS OPTIMISATION FOR A DYNAMIC CODE TRANSLATOR - A dynamic code translator with isoblocking uses a return trampoline having branch instructions conditioned on different isostates to optimize return address translation, by allowing the hardware to predict that the address of a future return will be the address of the trampoline. An IP relative call is inserted into translated code to write the trampoline address to a target link register and a target return address stack used by the native machine to predict return addresses. If a computed subject return address matches a subject return address register value, the current isostate of the isoblock is written to an isostate register. The isostate value in the isostate register is then used to select the branch instruction in the trampoline for the true subject return address. Sufficient code area in the trampoline instruction set can be reserved for a number of compare/branch pairs which is equal to the number of available isostates. | 2013-01-24 |
20130024676 | Control flow integrity - In at least some embodiments, a processor in accordance with the present disclosure is operable to enforce control flow integrity. For example, a processor may comprise logic operable to execute a control flow integrity instruction specified to verify changes in control flow and respond to verification failure by at least one of a trap or an exception. | 2013-01-24 |
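The check a control-flow-integrity instruction performs can be sketched behaviorally: an indirect transfer is allowed only if its target is in the set of valid targets, and a failure raises an exception (standing in for the hardware trap). The addresses below are illustrative.

```python
VALID_TARGETS = {0x1000, 0x2040}     # illustrative valid landing-pad addresses

def checked_branch(target):
    """Allow the transfer only if the target passes verification."""
    if target not in VALID_TARGETS:
        raise RuntimeError("control-flow integrity violation")  # trap/exception
    return target
```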
20130024677 | SECURE BOOTING A COMPUTING DEVICE - A method and an apparatus for executing codes embedded inside a device to verify a code image loaded in a memory of the device are described. A code image may be executed after being verified as a trusted code image. The embedded codes may be stored in a secure ROM (read only memory) chip of the device. In one embodiment, the verification of the code image is based on a key stored within the secure ROM chip. The key may be unique to each device. Access to the key may be controlled by the associated secure ROM chip. The device may complete establishing an operating environment subsequent to executing the verified code image. | 2013-01-24 |
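The verify-before-execute flow above can be sketched with an HMAC keyed by a per-device secret as a stand-in for the ROM-resident verification; the real scheme, key handling, and image format are not specified in the abstract, and this is not the patented method.

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device-secret"   # stands in for the key held in secure ROM

def sign_image(image: bytes) -> bytes:
    """Compute the tag a trusted image is expected to carry."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def load_and_run(image: bytes, tag: bytes) -> str:
    """Execute the image only after it verifies as trusted."""
    if not hmac.compare_digest(sign_image(image), tag):
        raise RuntimeError("untrusted code image; refusing to execute")
    return "booted"                 # execution proceeds only past verification
```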