7th week of 2013 patent application highlights part 56 |
Patent application number | Title | Published |
20130042036 | ELECTRONIC DEVICE AND ELECTRONIC DEVICE SYSTEM - An electronic device includes a USB connector; a power supply unit for supplying electric power to an external device connected via the USB connector; a judgment unit for judging whether the external device is a device compliant with the USB 2.0 standard or a compatible device that is compatible with devices complying with the USB 2.0 standard; and an acquisition unit for acquiring a value of voltage requested by a connected compatible device by communicating with that device when the judgment unit judges that the external device is the compatible device; wherein the power supply unit supplies electric power corresponding to the value of voltage acquired by the acquisition unit to the compatible device when the judgment unit judges that the external device is the compatible device. | 2013-02-14 |
20130042037 | Method and Apparatus for Enabling Enhanced USB Interaction - Methods and apparatuses for configuring a universal serial bus (USB) connection. The method comprises receiving, at a USB port, first identification data that includes a generic device class code and a vendor identifier. Receiving the first identification causes one of enabling interaction with a peripheral device in accordance with functionality specified by the generic device class code if the host device does not support software associated with the vendor identifier, or sending, at the USB port, a query to the peripheral device if the host device does support software associated with the vendor identifier, the query sent to determine whether the peripheral device supports at least one function different from the functionality specified by the generic device class code. | 2013-02-14 |
20130042038 | NON-BLOCKING PROCESSOR BUS BRIDGE FOR NETWORK PROCESSORS OR THE LIKE - Described embodiments provide a system having a bridge for connecting two different processor buses. The bridge receives a request from a first bus, the request having an identification field with a value. The request is then entered into one of a plurality of buffers holding requests with the same identification field value. Which buffer receives the request may be based on a variety of techniques, such as random, least recently used, most full, prioritized, or sequential. Next, the buffered request is transmitted over a second bus. A response to the request is eventually received from the second bus, the response is transmitted over the first bus, and the request is then removed from the buffer. By entering the received request into the buffer holding requests with the same identification field value, the possibility of head-of-line request blocking is reduced compared to a single-buffer implementation. | 2013-02-14 |
20130042039 | DEADLOCK PREVENTION - Methods, systems, and computer-readable media with executable instructions stored thereon for preventing deadlocks are provided. An inter-device mutex (IDM) can be locked for a first client. An error message can be sent to a second client in response to a received first lock command from the second client while the IDM is locked for the first client. A number of second lock commands from the second client while the IDM is locked for the first client can be received. The IDM can be unlocked for the first client in response to an unlock command received from the first client. The IDM can be locked for the second client in response to a received third lock command from the second client, wherein the third lock command is received subsequent to unlocking the IDM for the first client. | 2013-02-14 |
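The error-on-lock protocol described in 20130042039 (DEADLOCK PREVENTION) could be sketched roughly as follows; the class name, method names, and return values are illustrative assumptions, not taken from the application:

```python
# Minimal sketch of an inter-device mutex (IDM) that returns an error
# instead of blocking, so a waiting client can never deadlock.
class InterDeviceMutex:
    def __init__(self):
        self.owner = None  # client currently holding the IDM, if any

    def lock(self, client):
        if self.owner is None:
            self.owner = client
            return "locked"
        return "error"  # error message instead of blocking the caller

    def unlock(self, client):
        if self.owner == client:
            self.owner = None
            return "unlocked"
        return "error"  # only the owner may unlock
```

A second client that receives "error" simply retries later; once the first client unlocks, a subsequent lock command from the second client succeeds, matching the sequence in the abstract.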
20130042040 | CONNECTOR ASSEMBLY - A connector assembly includes first to fifth connectors, two PCIe slots, and an adapter board. When the first connector is connected to the fifth connector, and the third connector is connected to the fourth connector, signals at the pins of the third connector are transmitted to the second group of pins of the first PCIe slot through the fourth connector, the fifth connector, and the first connector in series. When the second connector is connected to the fifth connector, and the third connector is connected to the fourth connector, signals at pins of the third connector are transmitted to the fourth group of pins of the second PCIe slot through the fourth connector, the fifth connector, and the second connector in series. | 2013-02-14 |
20130042041 | CONNECTOR ASSEMBLY - A connector assembly includes first and second connectors, a flexible printed circuit board, first and second peripheral component interconnection express (PCIe) slots, and a jumper card. When the jumper card is plugged into the first PCIe slot, pins of the jumper card are connected to pins of the first PCIe slot for transmitting signals to the pins of the first PCIe slot and to the pins of the first connector in that order. When the first connector is connected to the second connector, signals at the pins of the first PCIe slot are transmitted to pins of the second PCIe slot. | 2013-02-14 |
20130042042 | Synchronization Of Data Between An Electronic Computing Mobile Device And An Electronic Computing Dockstation - Methods, apparatuses, and computer program products are provided for synchronization of data between an electronic mobile device and an electronic computing dockstation. Embodiments include detecting, by the dockstation, completion of a docking procedure connecting the mobile device to the dockstation; identifying, by the dockstation, applications that are open on the mobile device; opening, by the dockstation, the identified applications on the dockstation; identifying, by the dockstation, files that are open on the mobile device; syncing, by the dockstation, the identified files with corresponding files within the dockstation, including updating an existing file within the dockstation; and opening on the dockstation, by the dockstation, the synced files with the open applications on the dockstation. | 2013-02-14 |
20130042043 | Method and Apparatus for Dynamic Channel Access and Loading in Multichannel DMA - An arbiter detects waiting states of N buffers holding direct memory access (DMA) requests, and detects an availability of R core channels of a core R-channel DMA memory. The arbiter, based on the detection, dynamically grants up to R of the N buffers access to the R core channels. An N-to-R controller communicates DMA requests from the N buffers to currently granted ones of the R core channels, and maintains a location record of different data from each of the N buffers being written into different ones of the R core channels. | 2013-02-14 |
20130042044 | BRIDGE, SYSTEM AND THE METHOD FOR PRE-FETCHING AND DISCARDING DATA THEREOF - A bridge system includes a request device connected to a first bus; a target device connected to a second bus; and a bridge communicating with the first bus and the second bus, the bridge having a buffer. When the request device asks the bridge to read data at a target address from the target device, a transaction is started and the bridge asks the target device to transfer the data at the target address and following addresses; the target device retrieves this data and transfers it to the bridge, where it is stored in the buffer and then transferred to the request device in turn. When the amount of data transferred to the request device reaches a threshold, the bridge continues to request data at following addresses from the target device before the transaction is finished. | 2013-02-14 |
20130042045 | METHOD AND APPARATUS TO FACILITATE SYSTEM TO SYSTEM PROTOCOL EXCHANGE IN BACK TO BACK NON-TRANSPARENT BRIDGES - A dual host system and method with back to back non-transparent bridges and a proxy packet generating mechanism. The proxy packet generating mechanism enables the hosts to send interrupt generating packets to each other. | 2013-02-14 |
20130042046 | COMPUTING MODULE WITH SERIAL DATA CONNECTIVITY - A computing module includes an interface to asynchronously, serially exchange parallel system bus data with one or more other modules of a computer system that includes the computing module. The computing module can asynchronously, serially transfer first parallel bus data to another module of the computer system, and can asynchronously, serially receive second parallel bus data from another module of the computer system. | 2013-02-14 |
20130042047 | MEMORY SYSTEM, MEMORY DEVICE AND MEMORY INTERFACE DEVICE - In a memory system in which the processing unit ( | 2013-02-14 |
20130042048 | Techniques to store configuration information in an option read-only memory - Method and apparatus to store configuration information in an option read-only memory are described. | 2013-02-14 |
20130042049 | ENHANCED COPY-ON-WRITE OPERATION FOR SOLID STATE DRIVES - A method for increasing the efficiency of a “copy-on-write” operation performed on an SSD to extend the life of the SSD is disclosed herein. In one embodiment, such a method includes receiving a first logical address specifying a logical location where new data should be written to an SSD. The first logical address maps to a first physical location, storing original data, on the SSD. The method further receives a second logical address specifying a logical location where the original data should be available on the SSD. The second logical address maps to a second physical location on the SSD. To efficiently perform the copy-on-write operation, the method writes the new data to a new physical location on the SSD, maps the first logical address to the new physical location, and maps the second logical address to the first physical location. A corresponding apparatus is also disclosed. | 2013-02-14 |
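The remap-based copy-on-write in 20130042049 can be sketched with a toy flash translation layer; the class and field names here are illustrative assumptions, not the patent's terminology:

```python
# Sketch: copy-on-write by remapping logical addresses instead of
# copying flash data, so only one physical write is performed.
class CowFtl:
    def __init__(self):
        self.l2p = {}        # logical address -> physical location
        self.flash = {}      # physical location -> stored data
        self.next_free = 0   # next unwritten physical location

    def write(self, logical, data):
        loc = self.next_free
        self.next_free += 1
        self.flash[loc] = data
        self.l2p[logical] = loc

    def read(self, logical):
        return self.flash[self.l2p[logical]]

    def copy_on_write(self, first_logical, second_logical, new_data):
        """Write new_data at first_logical while keeping the original
        data reachable at second_logical -- without copying it."""
        old_loc = self.l2p[first_logical]   # original data stays in place
        loc = self.next_free
        self.next_free += 1
        self.flash[loc] = new_data
        self.l2p[first_logical] = loc       # first address -> new data
        self.l2p[second_logical] = old_loc  # second address -> old data
```

Because the original data is never rewritten, the SSD absorbs one program operation instead of two, which is the wear saving the abstract claims.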
20130042050 | METHOD AND SYSTEM FOR EFFICIENTLY SWAPPING PIECES INTO AND OUT OF DRAM - A system and method for managing swaps of pieces of an address mapping table is disclosed. The method may include a controller of a storage device receiving a stream of requests for accesses to the mapping table, analyzing the stream of requests to determine at least one characteristic of the stream of requests, and determining whether to copy a piece of the mapping table stored in non-volatile memory into the volatile memory based on the determined at least one characteristic. The system may include a storage device with a controller configured to perform the method noted above. | 2013-02-14 |
20130042051 | PROGRAM METHOD FOR A NON-VOLATILE MEMORY - A program method for a non-volatile memory is disclosed. At least two blocks in the non-volatile memory are configured as 1-bit per cell (1-bpc) blocks. The data of the configured blocks are read and written to a target block in such a way that the data of each said configured block are moved to pages of a same significant bit. In another embodiment, the data of the configured blocks excluding one block are read and written to the excluded block. | 2013-02-14 |
20130042052 | LOGICAL SECTOR MAPPING IN A FLASH STORAGE ARRAY - A system and method for efficiently performing user storage virtualization for data stored in a storage system including a plurality of solid-state storage devices. A data storage subsystem supports multiple mapping tables. Records within a mapping table are arranged in multiple levels. Each level stores pairs of a key value and a pointer value. The levels are sorted by time. New records are inserted in a created newest (youngest) level. No edits are performed in-place. All levels other than the youngest may be read only. The system may further include an overlay table which identifies those keys within the mapping table that are invalid. | 2013-02-14 |
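The time-sorted, append-only levels with an overlay of invalid keys described in 20130042052 can be approximated in a few lines; the class design below is a hedged sketch, not the patent's actual data layout:

```python
# Sketch: multi-level mapping table. Inserts go only to the youngest
# level (no in-place edits); lookups scan youngest-first; an overlay
# set marks keys that have been invalidated.
class LeveledMap:
    def __init__(self):
        self.levels = []      # levels[0] oldest, levels[-1] youngest
        self.invalid = set()  # overlay table of invalid keys

    def new_level(self):
        self.levels.append({})  # older levels become read-only

    def insert(self, key, pointer):
        if not self.levels:
            self.new_level()
        self.levels[-1][key] = pointer  # edits only in youngest level

    def invalidate(self, key):
        self.invalid.add(key)

    def lookup(self, key):
        if key in self.invalid:
            return None
        for level in reversed(self.levels):  # youngest entry wins
            if key in level:
                return level[key]
        return None
```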
20130042053 | Method and Apparatus for Flexible RAID in SSD - A solid state drive (SSD) employing a redundant array of independent disks (RAID) scheme includes a flash memory chip, erasable blocks in the flash memory chip, and a flash controller. The erasable blocks are configured to store flash memory pages. The flash controller is operably coupled to the flash memory chip. The flash controller is also configured to organize certain of the flash memory pages into a RAID line group and to write RAID line group membership information to each of the flash memory pages in the RAID line group. | 2013-02-14 |
20130042054 | Methods of Managing Meta Data in a Memory System and Memory Systems Using the Same - A method of managing meta data can be provided by generating log entry information including log data in response to changes to meta data that includes a plurality of groups of the meta data. A group of the meta data can be selected from among the plurality of groups of the meta data to provide a selected group of meta data in response to detecting that a number of pieces of the log entry information is equal to or greater than a particular threshold value. The selected group of the meta data and associated log data can be stored in a non-volatile memory device. | 2013-02-14 |
20130042055 | MEMORY SYSTEM INCLUDING KEY-VALUE STORE - According to one embodiment, a memory system including a key-value store containing key-value data as a pair of a key and a value corresponding to the key, includes a first memory, a control circuit and a second memory. The first memory is configured to contain a data area for storing data, and a table area containing the key-value data. The control circuit is configured to perform write and read to the first memory by addressing, and execute a request based on the key-value store. The second memory is configured to store the key-value data in accordance with an instruction from the control circuit. The control circuit performs a set operation by using the key-value data stored in the first memory, and the key-value data stored in the second memory. | 2013-02-14 |
20130042056 | Cache Management Including Solid State Device Virtualization - A method of caching data is performed by a respective computer having one or more processors storing one or more storage management programs for execution by the one or more processors, non-volatile secondary storage and non-volatile cache memory. The method includes receiving from the non-volatile cache memory information identifying an amount of available storage in the non-volatile cache memory, and identifying a size of the management units in the non-volatile cache memory. The method further includes identifying write requests to write data to the non-volatile cache memory, sequentially writing to the non-volatile cache memory the write data for the identified write requests, to sequentially arranged locations in an address space of the non-volatile cache memory, and storing in memory metadata that maps the addresses or storage offsets of the write data to respective locations in the address space of the non-volatile cache memory. | 2013-02-14 |
20130042057 | Hybrid Non-Volatile Memory System - A hybrid non-volatile system uses non-volatile memories based on two or more different non-volatile memory technologies in order to exploit their relative advantages. In an exemplary embodiment, the memory system includes a controller and a flash memory, where the controller has a non-volatile RAM based on an alternate technology such as FeRAM. The flash memory is used for the storage of user data and the non-volatile RAM in the controller is used for system control data. The use of an alternate non-volatile memory technology in the controller allows for a non-volatile copy of the most recent control data to be accessed more quickly as it can be updated on a bit by bit basis. In another exemplary embodiment, the alternate non-volatile memory is used as a cache where data can safely be staged prior to its being written to the memory or read back to the host. | 2013-02-14 |
20130042058 | SOLID STATE MEMORY (SSM), COMPUTER SYSTEM INCLUDING AN SSM, AND METHOD OF OPERATING AN SSM - In one aspect, data is stored in a solid state memory which includes first and second memory layers. A first assessment is executed to determine whether received data is hot data or cold data. Received data which is assessed as hot data during the first assessment is stored in the first memory layer, and received data which is first assessed as cold data during the first assessment is stored in the second memory layer. Further, a second assessment is executed to determine whether the data stored in the first memory layer is hot data or cold data. Data which is then assessed as cold data during the second assessment is migrated from the first memory layer to the second memory layer. | 2013-02-14 |
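The two-stage hot/cold placement in 20130042058 maps naturally onto a pair of small functions; the function names and the `is_hot` predicate are illustrative assumptions:

```python
# Sketch: first assessment routes incoming data to the fast layer
# (hot) or slow layer (cold); second assessment later demotes data
# in the fast layer that has turned cold.
def place(data_stream, is_hot, layer1, layer2):
    """First assessment on arriving data."""
    for item in data_stream:
        (layer1 if is_hot(item) else layer2).append(item)

def migrate(layer1, layer2, is_hot):
    """Second assessment: move now-cold data out of layer1."""
    still_hot = [d for d in layer1 if is_hot(d)]
    layer2.extend(d for d in layer1 if not is_hot(d))
    layer1[:] = still_hot
```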
20130042059 | PAGE MERGING FOR BUFFER EFFICIENCY IN HYBRID MEMORY SYSTEMS - In a first embodiment of the present invention, a method for managing memory in a hybrid memory system is provided, wherein the hybrid memory system has a first memory and a second memory, wherein the first memory is smaller than the second memory and the first and second memories are of different types, the method comprising: identifying two or more pages in the first memory that are compatible with each other based at least in part on a prediction of when individual blocks within each of the two or more pages will be accessed; merging the two or more compatible pages, producing a merged page; and storing the merged page in the first memory. | 2013-02-14 |
20130042060 | MEMORY SYSTEM INCLUDING KEY-VALUE STORE - According to one embodiment, a memory system including a key-value store containing key-value data as a pair of a key and a value corresponding to the key, includes an interface, a memory block, an address acquisition circuit and a controller. The interface receives a data write/read request or a request based on the key-value store. The memory block has a data area for storing data and a metadata table containing the key-value data. The address acquisition circuit acquires an address in response to input of the key. The controller executes the data write/read request for the memory block, and outputs the address acquired to the memory block and executes the request based on the key-value store. The controller outputs the value corresponding to the key via the interface. | 2013-02-14 |
20130042061 | APPARATUSES AND METHODS PROVIDING REDUNDANT ARRAY OF INDEPENDENT DISKS ACCESS TO NON-VOLATILE MEMORY CHIPS - A controller may include a RAID controller and an access controller. The RAID controller exchanges data with a host and selects one of a plurality of RAID levels responsive to RAID level information. The access controller is connected to the RAID controller and to a plurality of channels that are each connected to a plurality of non-volatile memory chips. The access controller accesses data in at least one of the non-volatile memory chips connected to each of the channels according to the selected RAID level. The controller can include a storage device and a main processor. The main processor logically partitions a plurality of non-volatile memory chips connected to each of a plurality of channels into a normal partition region and a RAID level partition region, where data access is performed according to a selected RAID level, in response to partition information stored in the storage device. | 2013-02-14 |
20130042062 | FIRMWARE MANAGEMENT OF STORAGE CLASS MEMORY - A computer program product is provided and includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes detecting connections of two or more input/output (I/O) adapters, each of the two or more I/O adapters having one or more solid state devices (SSDs) connected thereto, and presenting a storage class memory address space for all of the connected SSDs that is independent of connections and disconnections between each of the one or more SSDs and each of the two or more I/O adapters and the processing unit. | 2013-02-14 |
20130042063 | SYSTEM AND METHOD FOR CONTROLLING DUAL MEMORY CARDS - A system and method controls dual memory cards of an electronic device. The electronic device includes a first memory card and a second memory card. The method sets a first trigger command for connecting the first memory card to a processor, and a second trigger command for connecting the second memory card to the processor. The first memory card is set as a default memory card to connect to the processor. If the electronic device has received the second trigger command, the method controls a processor to communicate with the second memory card through an analog switch and a second connector. If the electronic device has received the first trigger command, the method further controls the processor to communicate with the first memory card through the analog switch and a first connector. | 2013-02-14 |
20130042064 | SYSTEM FOR DYNAMICALLY ADAPTIVE CACHING - The present disclosure is directed to a system for dynamically adaptive caching. The system includes a storage device having a physical capacity for storing data received from a host. The system may also include a control module for receiving data from the host and compressing the data to a compressed data size. Alternatively, the data may also be compressed by the storage device. The control module may be configured for determining an amount of available space on the storage device and also determining a reclaimed space, the reclaimed space being according to a difference between the size of the data received from the host and the compressed data size. The system may also include an interface module for presenting a logical capacity to the host. The logical capacity has a variable size and may include at least a portion of the reclaimed space. | 2013-02-14 |
20130042065 | CUSTOM CACHING - Methods and systems are presented for custom caching. Application threads define caches. The caches may be accessed through multiple index keys, which are mapped to multiple application thread-defined keys. Methods provide for each index key and each application thread-defined key to be symmetrical. The index keys are used for loading data from one or more data sources into the cache stores on behalf of the application threads. Application threads access the data from the cache store by providing references to the caches and the application-supplied keys. Some data associated with some caches may be shared from the cache store by multiple application threads. Additionally, some caches are exclusively accessed by specific application threads. | 2013-02-14 |
20130042066 | STORAGE CACHING - The present disclosure provides a method for processing a storage operation in a system with an added level of storage caching. The method includes receiving, in a storage cache, a read request from a host processor that identifies requested data and determining whether the requested data is in a cache memory of the storage cache. If the requested data is in the cache memory of the storage cache, the requested data may be obtained from the storage cache and sent to the host processor. If the requested data is not in the cache memory of the storage cache, the read request may be sent to a host bus adapter operatively coupled to a storage system. The storage cache is transparent to the host processor and the host bus adapter. | 2013-02-14 |
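The read path in 20130042066 is the classic look-aside flow; a minimal sketch, with the function name and the `backend` callback standing in for the host bus adapter as assumptions:

```python
# Sketch: transparent storage-cache read path. On a hit, serve from
# the cache; on a miss, forward to the backend (host bus adapter),
# then populate the cache with the returned data.
def handle_read(addr, cache, backend):
    if addr in cache:         # hit: serve from the storage cache
        return cache[addr]
    data = backend(addr)      # miss: forward read to the storage system
    cache[addr] = data        # populate for subsequent reads
    return data
```

Because both hit and miss return the same data to the caller, neither the host processor nor the host bus adapter needs to know the cache exists, which is the transparency claim in the abstract.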
20130042067 | FLUSHED DATA ALIGNMENT WITH PHYSICAL STRUCTURES - A method and system are disclosed herein for performing operations on a parallel programming unit in a memory system. The parallel programming unit includes multiple physical structures (such as memory cells in a row) in the memory system that are configured to be operated on in parallel. The method and system perform a first operation on the parallel programming unit, the first operation operating on only part of the parallel programming unit and not operating on a remainder of the parallel programming unit, set a pointer to indicate at least one physical structure in the remainder of the parallel programming unit, and perform a second operation using the pointer to operate on no more than the remainder of the parallel programming unit. In this way, the method and system may realign programming to the parallel programming unit when partial writes to the parallel programming unit occur. | 2013-02-14 |
20130042068 | SHADOW REGISTERS FOR LEAST RECENTLY USED DATA IN CACHE - A cache for use in a central processing unit (CPU) of a computer includes a data array; a tag array configured to hold a list of addresses corresponding to each data entry held in the data array; a least recently used (LRU) array configured to hold data indicating least recently used data entries in the data array; a line fill buffer configured to receive data from an address in main memory that is located external to the cache in the event of a cache miss; and a shadow register associated with the line fill buffer, wherein the shadow register is configured to hold LRU data indicating a current state of the LRU array. | 2013-02-14 |
20130042069 | Apparatus And A Method For Obtaining A Blur Image - The present invention provides an apparatus and a method for obtaining a blur image in computer graphics that can reduce the required memory. | 2013-02-14 |
20130042070 | Shared cache memory control - A data processing system | 2013-02-14 |
20130042071 | Video Object Placement for Cooperative Caching - A method, an apparatus and an article of manufacture for placing at least one object at at least one cache of a set of cooperating caching nodes with limited inter-node communication bandwidth. The method includes transmitting information from the set of cooperating caching nodes regarding object accesses to a placement computation component, determining object popularity distribution based on the object access information, and instructing the set of cooperating caching nodes of at least one object to cache, the at least one node at which each object is to be cached, and a manner in which the at least one cached object is to be shared among the at least one caching node based on the object popularity distribution and cache and object sizes such that a cumulative hit rate at the at least one cache is increased while a constraint on inter-node communication bandwidth is not violated. | 2013-02-14 |
20130042072 | ELECTRONIC SYSTEM AND METHOD FOR SELECTIVELY ALLOWING ACCESS TO A SHARED MEMORY - An electronic system, an integrated circuit and a method for display are disclosed. The electronic system contains a first device, a memory and a video/audio compression/decompression device such as a decoder/encoder. The electronic system is configured to allow the first device and the video/audio compression/decompression device to share the memory. The electronic system may be included in a computer in which case the memory is a main memory. Memory access is accomplished by one or more memory interfaces, direct coupling of the memory to a bus, or direct coupling of the first device and decoder/encoder to a bus. An arbiter selectively provides access for the first device and/or the decoder/encoder to the memory based on priority. The arbiter may be monolithically integrated into a memory interface. The decoder may be a video decoder configured to comply with the MPEG-2 standard. The memory may store predicted images obtained from a preceding image. | 2013-02-14 |
20130042073 | Hybrid Automatic Repeat Request Combiner and Method for Storing Hybrid Automatic Repeat Request Data - The invention provides a method for storing hybrid automatic repeat request (HARQ) data, the method including: when receiving new data of a coded block, a HARQ processor writing the new data into a high rate buffer memory (Cache) and a channel decoder; the Cache writing the new data into a data memory of the Cache or an external memory; and when receiving retransmitted data of the coded block, the HARQ processor obtaining the previous data corresponding to the retransmitted data from the data memory of the Cache or the external memory through the Cache, combining the retransmitted data and the previous data, and writing the combined data to the Cache and the channel decoder; the Cache writing the combined data into the data memory of the Cache or the external memory. The invention also provides a HARQ combiner. | 2013-02-14 |
20130042074 | Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data in to the data cache. | 2013-02-14 |
20130042075 | SYSTEM AND METHOD FOR SLICE PROCESSING COMPUTER-RELATED TASKS - A computer-based systems and methods for task processing in a computing device are provided. A method includes the step of entering a slice mode for at least one task, the entering comprising reserving one or more portions of a cache memory to yield a slice cache memory for the task. The method also includes the step of storing a slice in the slice cache memory, wherein the slice comprises at least one program residing in at least one memory space outside of the slice cache memory and associated with the at least one task. The method further includes the step of processing the at least one task utilizing the at least one program by accessing the at least one slice cache memory until the slice mode is terminated. | 2013-02-14 |
20130042076 | CACHE MEMORY ACCESS METHOD AND CACHE MEMORY APPARATUS - A cache memory access method is to be implemented by a cache memory apparatus that includes a data storage unit which includes a plurality of storage sets each including a plurality of storage elements corresponding respectively to a plurality of access ways. The method includes: receiving from a processer a target address; determining whether the data storage unit stores target data corresponding to the target address; receiving the target data from a main memory if negative; selecting a chosen way from the plurality of access ways according to whether the storage elements of the storage set which corresponds to the target address store valid data and whether the target address corresponds to a predefined lock range in the main memory; and writing the target data in the data storage unit based on the chosen way. | 2013-02-14 |
20130042077 | Data hazard handling for copending data access requests - A data processing system that manages data hazards at a coherency controller and not at an initiator device is disclosed. The data processing system processes write requests in a two-part form, such that a first part is transmitted, and when the coherency controller has space to accept data it responds to the first part, and the data and the state of the data prior to the write are sent as a second part of the write request. When there are copending reads and writes to the same address, the writes are stalled by the coherency controller by not responding to the first part of the write, and the initiator device proceeds to process any snoop requests received for the address of the write regardless of the fact that the write is pending. When the pending read has completed, the coherency controller will respond to the first part of the write, and the initiator device will complete the write by sending the data and an indicator of the state of the data following the snoop. The coherency controller can then avoid any potential data hazard, using this information to update memory as required. | 2013-02-14 |
20130042078 | Snoop filter and non-inclusive shared cache memory - A data processing apparatus | 2013-02-14 |
20130042079 | METHOD FOR PROCESSING DATA OF A CONTROL UNIT IN A DATA COMMUNICATION DEVICE - A method for processing data of a control unit in a data communication device, which has a first memory area and a second memory area, and is connected to the control unit through an interface. Data from the control unit is transmitted to the data communication device through the interface. A value is stored identically in the first memory area and in the second memory area. The data communication device tests whether a first trigger is present, and if present, storage in the first memory area is discontinued, or the trigger class of the first trigger is tested and storage in the first memory area is discontinued only in the presence of a predefined trigger class. Subsequently, values of the data are read out from the first memory area, whereby values arriving chronologically after the first trigger are stored in the second memory area by the data communication device. | 2013-02-14 |
20130042080 | PREVENTION OF RACE CONDITIONS IN LIBRARY CODE THROUGH MEMORY PAGE-FAULT HANDLING MECHANISMS - Protection of shared data in a multi-core processing environment is disclosed. A page-fault handling mechanism is adapted to synchronize access to shared memory. An application of the present invention is for synchronizing access to potentially shared data, where the shared data is opaque in that it does not have a well-defined structure. | 2013-02-14 |
20130042081 | MAGNETIC TUNNELING JUNCTION DEVICES, MEMORIES, MEMORY SYSTEMS, AND ELECTRONIC DEVICES - Provided is a magnetic tunneling junction device including a first structure including a magnetic layer; a second structure including at least two extrinsic perpendicular magnetization structures, each including a magnetic layer and a perpendicular magnetization inducing layer on the magnetic layer; and a tunnel barrier between the first and second structures. | 2013-02-14 |
20130042082 | INFORMATION PROCESSING APPARATUS AND STORAGE CONTROL METHOD - An information processing apparatus includes a first storage unit and a processor. The first storage unit includes a first storage area. The processor receives a first request to write first data into the first storage area. The processor requests an external apparatus to write the first data into a second storage area in a second storage unit included in the external apparatus. The processor determines whether a first response has been received from the external apparatus. The first response indicates that the first data has been written into the second storage area. The processor writes the first data into the first storage area when the first response has been received. The processor requests, without writing the first data into the first storage area, the external apparatus to write second data stored in the first storage area into the second storage area when the first response has not been received. | 2013-02-14 |
20130042083 | Data Replication System - Systems and methods are provided for an asynchronous data replication system in which the remote replication reduces bandwidth requirements by copying deduplicated differences in business data from a local storage site to a remote, backup storage site, the system comprising: a local performance storage pool for storing data; a local deduplicating storage pool for storing deduplicated data, said local deduplicating storage pool further storing metadata about data objects in the system and which has metadata analysis logic for identifying and specifying differences in a data object over time; a remote performance storage pool for storing a copy of said data, available for immediate use as a backup copy of said data to provide business continuity to said data; a remote deduplicating storage pool for storing deduplicated data; and a controller for synchronizing the remote performance storage pool to have the second version of the data object using deduplicated data. | 2013-02-14 |
20130042084 | LOOSE SYNCHRONIZATION OF VIRTUAL DISKS - In order to synchronize copies of a virtual disk, a virtualization layer maintains a first record of file system blocks of a first copy of the virtual disk that are modified during an access session by a virtual machine using the first copy of the virtual disk. The file system blocks correspond to a file system of the virtual disk. During an attempt to synchronize the first copy with a second copy of the virtual disk, (i) a second record of file system blocks that are currently used by the file system is obtained from the guest operating system, and (ii) file system blocks in the first copy of the virtual disk that are present in both the first record and the second record are copied into the second copy of the virtual disk. | 2013-02-14 |
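The "loose synchronization" step above reduces copying to the intersection of two records: blocks modified during the session and blocks still in use by the guest file system. A minimal sketch under that reading (block numbers are illustrative):

```python
def blocks_to_copy(modified_blocks, in_use_blocks):
    """Return the file-system blocks that must be copied to the second disk:
    only blocks present in BOTH the modified record and the in-use record."""
    return sorted(set(modified_blocks) & set(in_use_blocks))

# A block that was modified and later freed by the guest (block 7) is
# skipped, which is where the synchronization bandwidth saving comes from.
assert blocks_to_copy({3, 7, 12}, {1, 3, 12, 20}) == [3, 12]
```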
20130042085 | Group-By Size Result Estimation - A method and system for accurately estimating a result size of a Group-By operation in a relational database. The estimate utilizes the probability of union of the columns involved in the operation, as well as the relative cardinality of each column with respect to the other columns in the operation. In addition, the estimate incorporates the use of table filters when indicated such that table filters are applied prior to determining the size of the tables in the operation, as well as including equivalent columns into the list of columns that are a part of the Group-By operation. Accordingly, the estimate of the result size of the operation includes influencing factors that provide an accurate estimation of system memory requirements. | 2013-02-14 |
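A hedged sketch in the spirit of the Group-By estimate above: filters are applied before sizing, and the number of groups is bounded by the filtered row count. The exact formula in the application (probability of union, relative cardinalities) is not reproduced here; this cap-at-row-count estimator is an assumption:

```python
def estimate_groupby_size(row_count, column_cardinalities, filter_selectivity=1.0):
    """Estimate the result size of a Group-By over the given columns."""
    filtered_rows = row_count * filter_selectivity   # filters applied first
    product = 1
    for card in column_cardinalities:
        product *= card                              # upper bound on combinations
    # The number of groups can never exceed the (filtered) number of rows.
    return min(product, filtered_rows)

assert estimate_groupby_size(10_000, [10, 5]) == 50            # product dominates
assert estimate_groupby_size(10_000, [100, 500], 0.1) == 1000  # row cap dominates
```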
20130042086 | Dynamic Network Adapter Memory Resizing and Bounding for Virtual Function Translation Entry Storage - An approach is provided in which a system selects a first virtual function from a plurality of virtual functions executing on a network adapter that includes a memory area. Next, the system allocates, in the memory area, a memory partition corresponding to the first virtual function. The system then stores one or more translation entries in the allocated memory partition, which are utilized to send data traversing through the first virtual function. As such, the system sends, utilizing one or more of the translation entries, data packets from the network adapter to one or more destinations. In turn, the system dynamically resizes the memory partition based upon the amount of the memory partition that is utilized to store the one or more translation entries. | 2013-02-14 |
20130042087 | Autonomic Self-Tuning of Database Management System in Dynamic Logical Partitioning Environment - Database partition monitoring and dynamic logical partition reconfiguration in support of an autonomic self-tunable database management system are provided by an automated monitor that monitors one or more resource parameters in a logical partition running a database application in a logically partitioned data processing host. The monitor initiates dynamic logical partition reconfiguration in the event that the parameters vary from predetermined parameter values. In particular, the monitor can initiate removal of resources if one of the resource parameters is being underutilized and initiate addition of resources if one of the resource parameters is being overutilized. The monitor can also calculate an amount of resources to be removed or added. The monitor can interact directly with a dynamic logical partition reconfiguration function of the data processing host or it can utilize an intelligent intermediary that listens for a partition reconfiguration suggestion from the monitor. In the latter configuration, the listener can determine where available resources are located and attempt to fully or partially satisfy the resource needs suggested by the monitor. | 2013-02-14 |
20130042088 | Collective Operation Protocol Selection In A Parallel Computer - Collective operation protocol selection in a parallel computer that includes compute nodes may be carried out by calling a collective operation with operating parameters; selecting a protocol for executing the operation and executing the operation with the selected protocol. Selecting a protocol includes: iteratively, until a prospective protocol meets predetermined performance criteria: providing, to a protocol performance function for the prospective protocol, the operating parameters; determining whether the prospective protocol meets predefined performance criteria by evaluating a predefined performance fit equation, calculating a measure of performance of the protocol for the operating parameters; determining that the prospective protocol meets predetermined performance criteria and selecting the protocol for executing the operation only if the calculated measure of performance is greater than a predefined minimum performance threshold. | 2013-02-14 |
20130042089 | WORD LINE LATE KILL IN SCHEDULER - A method for picking an instruction for execution by a processor includes providing a multiple-entry vector, each entry in the vector including an indication of whether a corresponding instruction is ready to be picked. The vector is partitioned into equal-sized groups, and each group is evaluated starting with a highest priority group. The evaluating includes logically canceling all other groups in the vector when a group is determined to include an indication that an instruction is ready to be picked, whereby the vector only includes a positive indication for the one instruction that is ready to be picked. | 2013-02-14 |
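A software model of the group-wise pick described above: the ready vector is scanned in equal-sized groups from highest priority down, and the first group containing a ready bit "kills" every lower group, so exactly one instruction is picked. In hardware this cancellation happens in parallel; the sequential scan here is only a behavioral sketch:

```python
def pick_instruction(ready, group_size):
    """Return the index of the picked ready entry, or None if none is ready."""
    for start in range(0, len(ready), group_size):    # highest priority first
        group = ready[start:start + group_size]
        if any(group):
            # This group wins; all other groups are logically cancelled.
            return start + group.index(True)
    return None

assert pick_instruction([False, False, False, True, True, False], 3) == 3
assert pick_instruction([False] * 6, 3) is None
```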
20130042090 | TEMPORAL SIMT EXECUTION OPTIMIZATION - One embodiment of the present invention sets forth a technique for optimizing parallel thread execution in a temporal single-instruction multiple thread (SIMT) architecture. When the threads in a parallel thread group execute temporally on a common processing pipeline rather than spatially on parallel processing pipelines, execution cycles may be reduced when some threads in the parallel thread group are inactive due to divergence. Similarly, an instruction can be dispatched for execution by only one thread in the parallel thread group when the threads in the parallel thread group are executing a scalar instruction. Reducing the number of threads that execute an instruction removes unnecessary or redundant operations for execution by the processing pipelines. Information about scalar operands and operations and divergence of the threads is used in the instruction dispatch logic to eliminate unnecessary or redundant activity in the processing pipelines. | 2013-02-14 |
20130042091 | BIT Splitting Instruction - An instruction specifies a source value and an offset value. Upon execution of the instruction, a first result of the instruction and a second result of the instruction are generated. The first result is a first portion of the source value and the second result is a second portion of the source value. | 2013-02-14 |
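An illustrative model of the split above: one source value and one offset yield two results. Treating the offset as the bit position that separates a low portion from a high portion is an assumption; the application does not fix the partitioning rule in this abstract:

```python
def bit_split(source, offset):
    """Model a bit-splitting instruction: return (low portion, high portion)."""
    low = source & ((1 << offset) - 1)   # first result: the low `offset` bits
    high = source >> offset              # second result: the remaining high bits
    return low, high

assert bit_split(0b1101_0110, 4) == (0b0110, 0b1101)
```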
20130042092 | MERGE OPERATIONS OF DATA ARRAYS BASED ON SIMD INSTRUCTIONS - A method and apparatus are provided to perform efficient merging operations of two or more streams of data by using SIMD instructions. Streams of data are merged together in parallel and with conditional branching mitigated or removed. The merge operations of the streams of data include Merge AND and Merge OR operations. | 2013-02-14 |
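Real SIMD merges use min/max networks with no data-dependent branches; the scalar sketch below only models the idea in plain Python, turning each comparison into a selector instead of an if/else control path. It is not the application's method, just an illustration of branch-mitigated merging:

```python
def merge_sorted(a, b):
    """Merge two sorted lists; the comparison result acts as a select mask."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        take_a = a[i] <= b[j]                  # comparison as a 0/1 mask...
        out.append(a[i] if take_a else b[j])   # ...selects the next element
        i += take_a                            # bool arithmetic advances one
        j += 1 - take_a                        # of the two cursors, no branch
    return out + a[i:] + b[j:]

assert merge_sorted([1, 4, 9], [2, 3, 10]) == [1, 2, 3, 4, 9, 10]
```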
20130042093 | CONTEXT STATE MANAGEMENT FOR PROCESSOR FEATURE SETS - Embodiments of an invention related to context state management based on processor features are disclosed. In one embodiment, a processor includes instruction logic and state management logic. The instruction logic is to receive a state management instruction having a parameter to identify a subset of the features supported by the processor. The state management logic is to perform a state management operation specified by the state management instruction. | 2013-02-14 |
20130042094 | COMPUTING SYSTEM WITH TRANSACTIONAL MEMORY USING MILLICODE ASSISTS - A computing system processes memory transactions for parallel processing of multiple threads of execution with millicode assists. The computing system transactional memory support provides a Transaction Table in memory and a method of fast detection of potential conflicts between multiple transactions. Special instructions may mark the boundaries of a transaction and identify memory locations applicable to a transaction. A ‘private to transaction’ (PTRAN) tag, directly addressable as part of the main data storage memory location, enables a quick detection of potential conflicts with other transactions that are concurrently executing on another thread of said computing system. The tag indicates whether (or not) a data entry in memory is part of a speculative memory state of an uncommitted transaction that is currently active in the system. Program millicode provides transactional memory functions including creating and updating transaction tables, committing transactions and controlling the rollback of transactions which fail. | 2013-02-14 |
20130042095 | Method of Initializing Operation of a Memory System - Provided is a method of initializing operation of a memory system. The method includes receiving an initialization signal, performing a first initializing operation that uses initialization data in response to the receiving of the initialization signal, setting a forced reset mode when an operation standby signal is not enabled by the first initializing operation, and performing a second initializing operation that does not use the initialization data in response to the setting of the forced reset mode. | 2013-02-14 |
20130042096 | DISTRIBUTED MULTI-CORE MEMORY INITIALIZATION - In a system having a plurality of processing nodes, a control node divides a task into a plurality of sub-tasks, and assigns the sub-tasks to one or more additional processing nodes which execute the assigned sub-tasks and return the results to the control node, thereby enabling a plurality of processing nodes to efficiently and quickly perform memory initialization and test of all assigned sub-tasks. | 2013-02-14 |
20130042097 | METHOD OF UPDATING BOOT IMAGE FOR FAST BOOTING AND IMAGE FORMING APPARATUS FOR PERFORMING THE METHOD - A method of updating a boot image for fast booting an image forming apparatus. In the method, when a request for changing software installed in the image forming apparatus is received, a boot image is deleted from the image forming apparatus and the boot image is re-generated by rebooting the image forming apparatus. | 2013-02-14 |
20130042098 | METHOD OF GENERATING BOOT IMAGE FOR FAST BOOTING AND IMAGE FORMING APPARATUS FOR PERFORMING THE METHOD, AND METHOD OF PERFORMING FAST BOOTING AND IMAGE FORMING APPARATUS FOR PERFORMING THE METHOD - A method of generating a boot image for fast booting an image forming apparatus. In the method, a boot image is generated to contain information regarding a system state after processes that are not used to execute an operating system and at least one application are terminated. Then, the image forming apparatus is fast booted using the boot image. | 2013-02-14 |
20130042099 | ELECTRONIC APPARATUS, SYSTEM AND MEDIUM FOR STORING PROGRAM - An electronic apparatus includes a processor and a memory coupled to the processor. The processor executes a process including calculating a first accumulated time during which a battery device feeds power to the electronic apparatus while being attached to the electronic apparatus in a first attachment state in which a first surface of the battery device faces a reference surface provided in the electronic apparatus, calculating a second accumulated time during which the battery device feeds power to the electronic apparatus while being attached to the electronic apparatus in a second attachment state in which a second surface of the battery device faces the reference surface, the second surface being different from the first surface, and providing an instruction to change an attachment state of the battery device when a difference between the first accumulated time and the second accumulated time exceeds a given time. | 2013-02-14 |
20130042100 | METHOD AND APPARATUS FOR FORCED PLAYBACK IN HTTP STREAMING - Systems and methods for enforcing playback of a specific portion of the content in an open non-certified media player/renderer are provided. In accordance with such systems and methods, a key is extracted from a content portion for which playback is to be forced. The extracted key allows a client the ability to gain access to additional/remaining content. Moreover, the existence of forced content, the mechanism(s) utilized for forcing playback, as well as a particular position in the timeline associated with the forced playback are signaled to the client on/through which the open non-certified media player/renderer is implemented. | 2013-02-14 |
20130042101 | SYSTEM AND METHOD FOR USING DIGITAL SIGNATURES TO ASSIGN PERMISSIONS - According to one embodiment of the invention, a method for setting permission levels is described. First, an application and a digital signature are received by logic performing the permission assessment. Then, a determination is made as to what permission level for accessing resources is available to the application based on the particulars of the digital signature. Herein, the digital signature being signed with a private key corresponding to a first public key identifies that the application is assigned a first level of permissions, while the digital signature being signed with a private key corresponding to a second public key identifies the application is assigned a second level of permissions having greater access to the resources of an electronic device than provided by the first level of permissions. | 2013-02-14 |
20130042102 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing system including a medium where a content to be played is stored; and a playing apparatus for playing contents stored in the medium; with the playing apparatus being configured to discriminate the content type of a content selected as an object to be played, to selectively obtain a device certificate correlated with the discriminated content type from a storage unit, and to transmit the selectively obtained device certificate to the medium; with the device certificate being a device certificate for content types in which content type information where the device certificate is available is recorded; and with the medium determining whether or not an encryption key with reading being requested from the playing apparatus is an encryption key for decrypting an encrypted content matching an available content type recorded in the device certificate, and permitting readout of the encryption key only in the case of matching. | 2013-02-14 |
20130042103 | Digital Data Content Authentication System, Data Authentication Device, User Terminal, Computer Program and Method - A file is created in which digital data and a certificate are integrated, and content authentication for the digital data and the certificate is performed simultaneously. A data authentication device | 2013-02-14 |
20130042104 | Certificate-based cookie security - A cookie attribute for use during secure HTTP transport sessions. This attribute points to a server-supplied certificate and, in particular, a digital certificate. The cookie attribute includes a value, and that value is designed to correspond to one or more content fields in the digital certificate. During a first https session, a first web application executing on a first server provides a web browser with the cookie having the server certificate identifier attribute set to a value corresponding to a content field in a server certificate. Later, when the browser is accessing a second server during a second https session, the browser verifies that the value in the cookie matches a corresponding value in the server certificate received from the second server before sending the cookie to the second server. This approach ensures that the cookie is presented only over specified https connections and to trusted organizations. | 2013-02-14 |
20130042105 | SYSTEMS AND METHODS FOR SECURING DATA IN MOTION - Two approaches are provided for distributing trust among a set of certificate authorities. Each approach may be used to secure data in motion. One approach provides methods and systems in which the secure data parser is used to distribute trust in a set of certificate authorities during initial negotiation (e.g., the key establishment phase) of a connection between two devices. Another approach provides methods and systems in which the secure data parser is used to disperse packets of data into shares. A set of tunnels is established within a communication channel using a set of certificate authorities, keys developed during the establishment of the tunnels are used to encrypt shares of data for each of the tunnels, and the shares of data are transmitted through each of the tunnels. Accordingly, trust is distributed among a set of certificate authorities in the structure of the communication channel itself. | 2013-02-14 |
20130042106 | Security Management In A Group Based Environment - Techniques are provided for securely storing data files in, or retrieving data files from, cloud storage. A data file transmitted to cloud storage from a client in an enterprise computing environment is intercepted by at least one network device. Using security information received from a management server, the data file is converted into an encrypted object configured to remain encrypted while at rest in the cloud storage. | 2013-02-14 |
20130042107 | System and Method for Enabling Device Dependent Rights Protection - A system and method for enhancing the protection of digital properties while also increasing the flexibility of distribution of the digital properties. In one embodiment, the digital property is protected through the binding of at least one unique client device identifier with the digital property prior to distribution. Decryption at a client device would therefore be dependent on a comparison of the unique client device identifier that is extracted from the encrypted digital property with a unique client device identifier of the device that is seeking to access the digital property. | 2013-02-14 |
20130042108 | PRIVATE ACCESS TO HASH TABLES - A server and a client mutually exclusively execute server-side and client-side commutative cryptographic processes and server-side and client-side commutative permutation processes. The server has access to a hash table, while the client does not. The server and client perform a method including: encrypting and reordering the hash table using the server; communicating the encrypted and reordered hash table to the client; further encrypting and further reordering the hash table using the client; communicating the further encrypted and further reordered hash table back to the server; and partially decrypting and partially undoing the reordering using the server to generate a double-blind hash table. To read an entry, the client hashes and permutes an index key and communicates the result to the server, which retrieves an item from the double-blind hash table using the hashed and permuted index key and sends it back to the client, which decrypts the retrieved item. | 2013-02-14 |
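The scheme above relies on a commutative cipher: either party can strip its own encryption layer regardless of the order in which layers were applied. A toy SRA-style pow-mod model shows the commutativity property (the parameters are deliberately small and NOT secure, and this cipher choice is an assumption, not the application's construction):

```python
from math import gcd

P = 2**61 - 1                      # prime modulus (Mersenne); toy parameter

def enc(m, k):
    """SRA-style commutative encryption: m^k mod P."""
    return pow(m, k, P)

# Keys must be invertible modulo P-1 for decryption to exist.
k_server, k_client = 65537, 257
assert gcd(k_server, P - 1) == 1 and gcd(k_client, P - 1) == 1

m = 123456789
double = enc(enc(m, k_server), k_client)     # server layer, then client layer

# The server can strip its own layer even though the client encrypted last:
inv_server = pow(k_server, -1, P - 1)
assert enc(double, inv_server) == enc(m, k_client)
```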
20130042109 | METHOD FOR PRODUCING ACKNOWLEDGED TRANSACTION DATA AND CORRESPONDING DEVICE - A method and a display preparation unit are proposed for the execution of a transaction during which transaction data are processed which have to be confirmed by a user. The display preparation unit has a converter unit which converts transaction data to be interpreted into pixel values and displays them on a monitor, an interface of its own for directly attaching an input unit via which a user confirms displayed transaction data, as well as a crypto unit for generating a signature for a record of confirmed transaction data. In a variant the confirmation can be effected by the crypto unit generating and displaying a random number which has to be inputted by the user via a conventionally attached input unit. | 2013-02-14 |
20130042110 | CENTRALIZED AUTHENTICATION SYSTEM WITH SAFE PRIVATE DATA STORAGE AND METHOD - A token-based centralized authentication method for providing access to a service provider to user information associated with a user's relationship with the service provider includes the steps of: authenticating a user presenting a user token at a user terminal, the user token having stored thereon a user ID; deriving a resource identifier using at least two data input elements, the at least two data input elements including the user ID of the user and a service provider ID of the service provider, wherein the user information is stored in a storage network and the resource identifier is associated with the user information; retrieving the user information from the storage network using the resource identifier; and providing the retrieved user information to the service provider. | 2013-02-14 |
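The resource identifier above is derived from at least two inputs, the user ID and the service-provider ID. A minimal sketch, assuming a hash of the concatenation (the application does not name a specific derivation function):

```python
import hashlib

def resource_identifier(user_id, provider_id):
    """Derive a deterministic identifier from the two required input elements."""
    return hashlib.sha256(f"{user_id}|{provider_id}".encode()).hexdigest()

# Deterministic: the same (user, provider) pair always locates the same
# stored record, while different providers map to different records.
rid = resource_identifier("user-42", "bank-A")
assert rid == resource_identifier("user-42", "bank-A")
assert rid != resource_identifier("user-42", "bank-B")
```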
20130042111 | SECURING TRANSACTIONS AGAINST CYBERATTACKS - Methods and systems are provided for performing a secure transaction. Users register biometric and/or other identifying information. A registration code and an encryption key are generated from the biometric information and/or information obtained from an unpredictable physical process and are stored in a secure area of a device and also transmitted to a service provider. A transaction passcode generator may be computed based on the stored registration code. In at least one embodiment, a unique transaction passcode depends upon the transaction information, so that on the next step of that transaction, only that unique transaction passcode will be valid. In an embodiment, the passcode includes the transaction information. In at least one embodiment, if the transaction information has been altered relative to the transaction information stored in the device's secure area, then the transaction passcode sent during this step will be invalid and the transaction may be aborted. | 2013-02-14 |
20130042112 | USE OF NON-INTERACTIVE IDENTITY BASED KEY AGREEMENT DERIVED SECRET KEYS WITH AUTHENTICATED ENCRYPTION - A sender private key is created from a master key. The sender private key and public information about a recipient is used to produce a secret key. Data is encrypted with the secret key. The encryption uses authentication data. The encrypted data is sent to the recipient. A recipient private key is created from the master key. The recipient private key is different from the sender private key. The recipient private key and public information about the sender is used to recreate the secret key. At the recipient, the secret key is used to decrypt the encrypted data and the authentication data is used to authenticate the data. | 2013-02-14 |
20130042113 | DATA SHARING SYSTEM, DATA DISTRIBUTION SYSTEM, AND DATA PROTECTION METHOD - Embodiments of the present invention provide a data protection method, used by a data owner to share data with a data sharer securely through a data distribution system. The data owner first establishes a proxy relationship with the data sharer, while the data distribution system is configured to maintain a proxy relationship between the data owner and the data sharer, and after receiving encrypted shared data sent by the data owner, the data distribution system changes the encrypted shared data according to the proxy relationship, so that the data sharer may decrypt the data. By using the data protection method in the embodiments of the present invention, both encryption and decryption of data are a result of coordination of three parties, thereby avoiding a problem of data leakage caused by a problem of a single party. | 2013-02-14 |
20130042114 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing system including a medium where a content to be played is stored; and a playing apparatus for playing a content stored in the medium; with the playing apparatus being configured to selectively activate a playing program according to a content type to be played, to obtain a device certificate correlated with the playing program from storage by executing the playing program, and to transmit the obtained device certificate to the medium; with the device certificate being a device certificate for content types in which content type information where the device certificate is available is recorded; and with the medium determining whether or not an encryption key with reading being requested from the playing apparatus is an encryption key for decrypting an encrypted content matching an available content type recorded in the device certificate, and permitting readout of the encryption key only in the case of matching. | 2013-02-14 |
20130042115 | SYSTEMS AND METHODS FOR IMPLEMENTING SECURITY IN A CLOUD COMPUTING ENVIRONMENT - Computer systems and methods are provided in which an agent executive, when initially executed in a virtual machine, obtains an agent API key from a user. This key is communicated to a grid computer system. An agent identity token, generated by a cryptographic token generation protocol when the key is valid, is received from the grid and stored in a secure data store associated with the agent executive. Information that evaluates the integrity of the agent executive is collected using agent self-verification factors. The information, encrypted and signed with a cryptographic signature, is communicated to the grid. Commands are sent from the grid to the agent executive to check the security, compliance, and integrity of the virtual machine processes and data structures. Based on these check results, additional commands are sent by the grid to the agent executive to correct security, compliance or integrity problems and/or to prevent security compromises. | 2013-02-14 |
20130042116 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus including a message generator generating a message based on a set F=(f | 2013-02-14 |
20130042117 | CRYPTOGRAPHIC DATA DISTRIBUTION AND REVOCATION FOR HANDHELD MEDICAL DEVICES - A method includes: receiving a revocation list from a remote data server at a configuration device. The revocation list includes N cryptographic certificates associated with N computer software entities, respectively, that are not to be executed by any of a group of medical devices including a handheld medical device. N is an integer greater than or equal to zero. The method further includes receiving data from the handheld medical device at the configuration device. The data includes a cryptographic certificate that is associated with a given computer software entity that is presently installed in memory of the handheld medical device for execution by the handheld medical device. The method further includes comparing the cryptographic certificate with the revocation list; and selectively executing a protective function by the configuration device when the cryptographic certificate is the same as one of the N cryptographic certificates of the revocation list. | 2013-02-14 |
20130042118 | SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY - One or more techniques and/or systems are provided for assigning power management classifications to a process, transitioning a computing environment into a connected standby state based upon power management classifications assigned to processes, and transitioning the computing environment from the connected standby state to an execution state. That is, power management classifications, such as exempt, throttle, and/or suspend, may be assigned to processes based upon various factors, such as whether a process provides desired functionality and/or whether the process provides functionality relied upon for basic operation of the computing environment. In this way, the computing environment may be transitioned into a low power connected standby state that may continue executing desired functionality, while reducing power consumption by suspending and/or throttling other functionality. Because some functionality may still execute, the computing environment may transition into the execution state in a responsive manner to quickly provide a user with up-to-date information. | 2013-02-14 |
20130042119 | INTERCONNECTION SYSTEM - An interconnection system, apparatus and method are described for arranging elements in a network, which may be a data memory system, computing system or communications system where the data paths are arranged and operated so as to control the power consumption and data skew properties of the system. A configurable switching element may be used to form the interconnections at nodes, where a control signal and other information is used to manage the power status of other aspects of the configurable switching element. Time delay skew of data being transmitted between nodes of the network may be altered by exchanging the logical and physical line assignments of the data at one or more nodes of the network. A method of laying out an interconnecting motherboard is disclosed which reduces the complexity of the trace routing. | 2013-02-14 |
20130042120 | CONTROL APPARATUS AND METHOD - A disclosed control apparatus includes: a first data storage unit storing data representing whether transition to an energy-saving state is prohibited, for each of memory blocks in a memory device, wherein power control is carried out for each of the memory blocks; a second data storage unit storing the number of times that access to a memory block that is in the energy-saving state is requested, for each of threads of a program; and a first controller that increments the number of times for the requesting source thread of a memory request upon detecting that a memory block including the access destination of the memory request received from a processing unit is in the energy-saving state, and sets the data representing that transition to the energy-saving state is prohibited for that memory block upon detecting that the incremented number of times exceeds a threshold. | 2013-02-14 |
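The counting scheme above can be modeled as follows. The class and field names are assumptions; the two sets and the per-thread dictionary stand in for the first and second data storage units of the abstract.

```python
class PowerController:
    """Illustrative model of the per-thread sleep-miss counter scheme (names assumed)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.miss_counts = {}         # second data storage: per-thread request counts
        self.no_sleep_blocks = set()  # first data storage: blocks with transition prohibited
        self.sleeping_blocks = set()  # blocks currently in the energy-saving state

    def on_request(self, thread_id, block):
        # Count only requests whose destination block is in the energy-saving state.
        if block in self.sleeping_blocks:
            n = self.miss_counts.get(thread_id, 0) + 1
            self.miss_counts[thread_id] = n
            if n > self.threshold:
                # Prohibit this block from re-entering the energy-saving state.
                self.no_sleep_blocks.add(block)
```

A block that is repeatedly hit while asleep thus gets pinned awake, trading a little power for fewer wake-up latencies.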
20130042121 | METHODS AND SYSTEMS FOR EFFICIENT BATTERY CHARGING AND USAGE - Battery charging methods and systems for devices that have rechargeable batteries provide an efficient way to know when to charge a device's battery, and when to switch between the device's battery and an external power source as the device's power source. The methods and systems access thresholds for a plurality of power rates, obtain information about when different power rates are in effect and, after determining a current power rate based on the information, compare the threshold of the current power rate to the device's battery's charge level. Based on such a comparison, the methods and systems can determine whether the battery should be charged, and the methods and system can determine whether the device's battery or an external power source should be used as the device's power source. | 2013-02-14 |
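A minimal sketch of the rate-aware charging decision described above: a per-rate threshold table and a schedule of when each rate is in effect. The rate names, hours, and threshold values are invented for illustration.

```python
# Charge the battery only when its level is below the current rate's threshold.
RATE_THRESHOLDS = {"off_peak": 0.9, "peak": 0.3}

# Hypothetical tariff schedule: which rate is in effect at each hour of the day.
RATE_SCHEDULE = {range(0, 7): "off_peak", range(7, 23): "peak", range(23, 24): "off_peak"}

def current_rate(hour):
    """Look up the power rate in effect at the given hour."""
    for hours, rate in RATE_SCHEDULE.items():
        if hour in hours:
            return rate
    raise ValueError(hour)

def should_charge(hour, battery_level):
    """Compare the battery level against the threshold for the current rate."""
    return battery_level < RATE_THRESHOLDS[current_rate(hour)]
```

The same comparison can drive the source-selection decision: below the threshold, run from external power and charge; above it, the battery itself can serve as the device's power source during expensive peak hours.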
20130042122 | PROVIDING A USER WITH FEEDBACK REGARDING POWER CONSUMPTION IN BATTERY-OPERATED ELECTRONIC DEVICES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing a user with feedback regarding power consumption in a battery-operated electronic device. In one aspect, a method performed by data processing apparatus includes identifying, using the data processing apparatus, usage of a hardware component of a battery-operated electronic device that includes the data processing apparatus, attributing the usage of the hardware component to the hardware component or to a software application that uses the hardware component, recording, using the data processing apparatus, a power consumption resulting from the usage, and presenting power consumption feedback to a user using the data processing apparatus. The power consumption feedback identifies the hardware component or the software application of the electronic device and the power consumption resulting from the usage. | 2013-02-14 |
20130042123 | Methods and Systems for Evaluating Historical Metrics in Selecting a Physical Host for Execution of a Virtual Machine - Methods and systems for improved management of power utilization and resource consumption among physical hosts in a cloud computing environment. The management server may provide functionality facilitating the identification and optimized placement of a virtual machine within a cloud computing environment by evaluating historical and heuristic metrics data associated with both the physical hosts and the virtual machines. The management server utilizes the metrics data to generate scores for a plurality of physical hosts based on physical resources available in a cloud of computing resources. The management server identifies a physical host on which to place a virtual machine using the metrics data, generated scores, and numerous configurable criteria. The management server responds to the identification of the physical host on which to place a virtual machine by adjusting processor performance and/or operating states for one or more of the physical hosts in the cloud computing environment. | 2013-02-14 |
20130042124 | ENERGY MANAGEMENT DEVICE AND POWER MANAGEMENT SYSTEM - An energy management system has an application storage, an application executing unit, a plurality of network interfaces, a policy setting unit configured to set whether each application should be permitted to access each of the network interfaces, a policy storage configured to store identification information for each application set by the policy setting unit and access permit/inhibit information showing whether the application is permitted to access each of the network interfaces, an I/F management unit configured to manage a correspondence relationship between a network address and each of the network interfaces and to specify a network interface used by the application executed by the application executing unit, and an access controller configured to judge whether the application executed by the application executing unit is permitted to access the network interface to be used thereby, based on the access permit/inhibit information stored in the policy storage. | 2013-02-14 |
20130042125 | SYSTEM AND METHOD FOR REDUCING POWER CONSUMPTION IN TELECOMMUNICATION SYSTEMS - Various exemplary embodiments relate to a method for controlling power consumption in a telecom system. The method includes selecting a power profile command based upon a desired power consumption and performance characteristic, translating the power profile command into at least one subcommand, and initiating at least one power reduction technique in a telecom component based upon the at least one subcommand. | 2013-02-14 |
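The translation step the abstract names can be sketched as a table lookup: one profile command expands into per-component subcommands, each of which initiates a power-reduction technique. The profile names and technique strings below are invented for illustration.

```python
# Hypothetical mapping from a power profile command to its subcommands.
PROFILE_MAP = {
    "eco": ["cpu:scale_freq_low", "radio:sleep_cycle_on", "fan:slow"],
    "performance": ["cpu:scale_freq_high", "radio:sleep_cycle_off", "fan:auto"],
}

def translate(profile):
    """Translate a power profile command into its component subcommands."""
    return PROFILE_MAP[profile]

def apply_profile(profile):
    """Initiate the power-reduction technique named by each subcommand."""
    applied = []
    for sub in translate(profile):
        component, technique = sub.split(":")
        applied.append("{} -> {}".format(component, technique))
    return applied
```

Selecting a profile thus trades performance for consumption in one operator-visible step, while the per-component details stay behind the translation table.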
20130042126 | MEMORY LINK POWER MANAGEMENT - Embodiments of the invention describe systems and processes directed towards improving link power-management during memory subsystem idle states. Embodiments of the invention control memory link operations when various components of a memory subsystem enter low power states under certain operating conditions. Embodiments of the invention similarly describe exiting low power states for memory links and various components of a memory subsystem upon detecting certain operating conditions. | 2013-02-14 |
20130042127 | IDLE POWER REDUCTION FOR MEMORY SUBSYSTEMS - Embodiments of the invention describe systems and processes directed towards reducing memory subsystem idle power consumption. Embodiments of the invention enable low power states for various components of a memory subsystem under certain operating conditions, and exiting said low power states under certain operating conditions. | 2013-02-14 |
20130042128 | SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY - One or more techniques and/or systems are provided for assigning power management classifications to a process, transitioning a computing environment into a connected standby state based upon power management classifications assigned to processes, and transitioning the computing environment from the connected standby state to an execution state. That is, power management classifications, such as exempt, throttle, and/or suspend, may be assigned to processes based upon various factors, such as whether a process provides desired functionality and/or whether the process provides functionality relied upon for basic operation of the computing environment. In this way, the computing environment may be transitioned into a low power connected standby state that may continue executing desired functionality, while reducing power consumption by suspending and/or throttling other functionality. Because some functionality may still execute, the computing environment may transition into the execution state in a responsive manner to quickly provide a user with up-to-date information. | 2013-02-14 |
20130042129 | IMAGE FORMING APPARATUS, MICROCONTROLLER, AND METHODS FOR CONTROLLING IMAGE FORMING APPARATUS AND MICROCONTROLLER - An image forming apparatus, a microcontroller, and methods for controlling the image forming apparatus and the microcontroller are provided. The microcontroller includes: a memory controller which is connected to an external memory operating in a self-refresh mode if a normal mode changes to a low power mode and outputs a preset signal which is to cancel the self-refresh mode if the low power mode changes to the normal mode; a memory interface unit which transmits the preset signal to a main memory; and a signal detector which detects whether the preset signal has been output. Here, the memory controller powers off the memory interface unit if the normal mode changes to the low power mode and powers on the memory interface unit if the low power mode changes to the normal mode, and the output of the preset signal is detected by the signal detector. | 2013-02-14 |
20130042130 | CIRCUITS AND METHODS FOR CONTROLLING BATTERY MANAGEMENT SYSTEMS - A controller for a battery management system includes a first terminal, a second terminal, and communication circuitry. The first terminal receives power from a battery in the battery management system. The second terminal receives a clock signal. The communication circuitry coupled to the first and second terminals detects the clock signal, and generates a first switching signal according to a result of detecting the clock signal to control the battery management system to switch from operating in a ship mode to operating in a non-ship mode according to the first switching signal. The detecting and generating are performed with the battery management system in the ship mode. The battery management system disables controlling of charging and discharging of the battery in the ship mode, and the battery management system enables controlling of charging and discharging of the battery in the non-ship mode. | 2013-02-14 |
20130042131 | SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY - One or more techniques and/or systems are provided for assigning power management classifications to a process, transitioning a computing environment into a connected standby state based upon power management classifications assigned to processes, and transitioning the computing environment from the connected standby state to an execution state. That is, power management classifications, such as exempt, throttle, and/or suspend, may be assigned to processes based upon various factors, such as whether a process provides desired functionality and/or whether the process provides functionality relied upon for basic operation of the computing environment. In this way, the computing environment may be transitioned into a low power connected standby state that may continue executing desired functionality, while reducing power consumption by suspending and/or throttling other functionality. Because some functionality may still execute, the computing environment may transition into the execution state in a responsive manner to quickly provide a user with up-to-date information. | 2013-02-14 |
20130042132 | IMAGE FORMING APPARATUS, MICROCONTROLLER, AND METHODS FOR CONTROLLING IMAGE FORMING APPARATUS AND MICROCONTROLLER - An image forming apparatus, a microcontroller, and methods for controlling the image forming apparatus and the microcontroller are provided. The microcontroller includes: a memory controller which is connected to an external memory operating in a self-refresh mode if a normal mode changes to a low power mode, performs a control operation by using the external memory in the normal mode, and outputs a preset signal which is to cancel the self-refresh mode if the low power mode changes to the normal mode; a memory interface unit which transmits the preset signal to a main memory; and a signal detector which detects whether the preset signal has been output. Here, the memory controller powers off the memory interface unit if the normal mode changes to the low power mode and powers on the memory interface unit if the low power mode changes to the normal mode, and the output of the preset signal is detected by the signal detector. | 2013-02-14 |
20130042133 | OVER-CURRENT PROTECTION SYSTEM AND METHOD THEREOF - An over-current protection system used in a computer system is disclosed. The computer system includes a current supply module, a processor, and a battery. The over-current protection system commands the processor to disable its boost state if a first current generated by the current supply module when the processor is in the boost state is greater than a second current affordable by the current supply module. | 2013-02-14 |
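The protection rule above reduces to a single comparison: if the current drawn while the processor is in its boost state exceeds what the supply module can deliver, boost is disabled. Function and parameter names here are assumptions for illustration.

```python
def regulate_boost(boost_enabled, drawn_current_a, supply_limit_a):
    """Return the new boost state after the over-current check.

    drawn_current_a:  the first current, drawn while the processor boosts
    supply_limit_a:   the second current, the most the supply can deliver
    """
    if boost_enabled and drawn_current_a > supply_limit_a:
        return False  # command the processor to leave its boost state
    return boost_enabled
```

Dropping out of boost lowers the draw below the supply limit, which protects the current supply module without shutting the system down.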
20130042134 | SYSTEM FOR MEASURING PHASE CURRENT - A system for measuring phase current includes a power control module, a processor, and a display device. The power control module is connected to a CPU power source to store values of the phase currents output by the CPU power source. The processor is connected to the power control module to obtain the values of the phase currents stored in the power control module, and calculates a difference among the values of the phase currents. The display device is connected to the processor to display the values of the phase currents and the calculated difference. | 2013-02-14 |
20130042135 | CONTROLLER CORE TIME BASE SYNCHRONIZATION - A system and method for efficiently synchronizing multiple processing cores on a system-on-a-chip (SOC). A SOC includes an interrupt controller and multiple processing cores. The interrupt controller includes a main time base counter. The SOC includes multiple local time base counters, each coupled to a respective one of the processing cores. Synchronization logic blocks are used to update the local counters. These blocks receive a subset of bits from the interrupt controller. The subset of bits represents a number of least significant bits of the main counter less than a total number of bits for the main counter. The logic blocks update an associated local counter according to changes to the received subset of bits. A difference may exist between values of the main counter in the interrupt controller and the local counter in the processing core. However, this difference may be a constant value. | 2013-02-14 |
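The subset-of-bits scheme above can be modeled as follows: only the low-order bits of the main counter are broadcast, and each synchronization logic block rebuilds a wide local value by detecting wraparound of the received subset and carrying into locally maintained high bits. The bit width and class names are assumptions, not the patent's implementation.

```python
SUB_BITS = 8                    # assumed width of the broadcast subset
SUB_MASK = (1 << SUB_BITS) - 1

class LocalTimeBase:
    """Illustrative local counter driven by the main counter's low-order bits."""

    def __init__(self):
        self.high = 0      # locally maintained upper bits
        self.last_low = 0  # last subset value received from the interrupt controller

    def update(self, low_bits):
        """Fold a newly received subset into the local counter and return its value."""
        low_bits &= SUB_MASK
        if low_bits < self.last_low:  # the subset wrapped: propagate the carry
            self.high += 1
        self.last_low = low_bits
        return (self.high << SUB_BITS) | low_bits
```

As long as updates arrive at least once per wrap of the subset, the local value tracks the main counter exactly up to a constant offset fixed at reset, which matches the abstract's observation that any difference between the counters stays constant.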